How to use FFmpeg (with examples)

Have you ever had a video file that you needed to modify or optimise? You might have a video that is taking up too much space on your hard drive, or you might just need to trim a small section from a long video or reduce its resolution. The go-to tool in these situations is FFmpeg, a software utility used by professionals and home users alike. In this article, we'll explain what FFmpeg is, how to install it, and look at some of the most common and useful FFmpeg commands.

What is FFmpeg?

FFmpeg is a free and open-source video and audio processing tool that you run from the command-line.

FFmpeg is the tool of choice for multiple reasons:

  • Free: It's a completely free option.
  • Open-source: It has an active and dedicated open-source community continually deploying fixes, improvements, and new features.
  • Platform compatibility: FFmpeg is available for Windows, Mac, and Linux.
  • Command-line interface: It is a lightweight solution offering a vast array of options through a command-line interface.

How to install FFmpeg

Some operating systems, such as Ubuntu, install FFmpeg by default, so you might already have it on your computer.

Check if it's installed with the following command:
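ffmpeg -version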

If it gives you a version number and build information, you already have it installed.

If not, or if you are using Windows or a Mac, you will need to download a static or compiled binary executable from a third-party vendor. Unfortunately, the FFmpeg project only provides the source code, not ready-to-run software.

Here are the key steps you'll need to follow:

  • Navigate to the FFmpeg download page.
  • Under Get packages & executable files, select your operating system to display a list of vendors.
  • Visit the most suitable vendor and follow the instructions on their website. Typically you will either need to run a set of commands or download a zipped file (.zip, .7z, .tar.gz, etc.) containing the FFmpeg executable.
  • If downloading, extract the contents of the zipped file to your chosen location. If you browse the extracted files you should find a file called ffmpeg or ffmpeg.exe in a bin folder.

To run FFmpeg you will need to use the command line; open a new terminal, navigate to the directory where you extracted the ffmpeg file, and run the version check command again:
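Assuming you are inside the extracted bin folder, something like:

./ffmpeg -version

(On Windows, run ffmpeg.exe -version instead.)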

If installed correctly, you should see the FFmpeg version number and build information printed to the terminal.

One last step to make FFmpeg more useful and available from any folder is to add it to your system PATH. This is different for each operating system, but typically involves adding the directory where the ffmpeg executable is located to the PATH environment variable.

Now that FFmpeg is successfully installed, let's look at how to use FFmpeg, with examples!

FFmpeg examples and common uses

Let's look at some of the most common and useful commands in FFmpeg.

You will need a sample video file to test the commands with. You can use any video file you have on your computer, or you can download this test file, which is named scott-ko.mp4.

Convert video formats using FFmpeg

One of the simplest and easiest commands to get started with is converting one video format to another. This is a common task when you need to make a video compatible with a specific device, player or platform.

A basic command to convert a video from one format to another, using our scott-ko.mp4 sample file is:
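ffmpeg -i scott-ko.mp4 scott-ko.webm

(The output filename and extension determine the target format; scott-ko.webm here is just an example output name.)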

This simple command will convert the video from the MP4 format to WEBM. FFmpeg is smart enough to know that the video and audio codec should be converted to be compatible with the new file type. For example, from h264 (MP4) to vp9 (WEBM) for video and aac (MP4) to opus (WEBM) for audio.

It is also possible to convert from one video format to another and have full control over the encoding options. The command to do that uses the following template:
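ffmpeg -i input.mp4 -c:v video_codec -c:a audio_codec output.ext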

Here are the options and the placeholders you can replace with your own values:

  • -i input.mp4 : Replace input.mp4 with the path to your input video file.
  • -c:v video_codec : Specify the video codec for the output. Replace video_codec with the desired video codec (e.g., libx265 for H.265).
  • -c:a audio_codec : Specify the audio codec for the output. Replace audio_codec with the desired audio codec (e.g., aac for AAC audio).
  • output.ext : Replace this with the desired output file name and extension (e.g., output.mp4).

Here's an example of converting an MP4 video to an MKV video using H.264 codec for video and AAC codec for audio:
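ffmpeg -i scott-ko.mp4 -c:v libx264 -c:a aac scott-ko.mkv

(The output name scott-ko.mkv is just an example.)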

Trim a video using FFmpeg

If you have a long video and want to extract a small portion, you can trim the video using FFmpeg. You use the -ss (start time) and -t (duration) options.

Here's an example template command:
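ffmpeg -i input.mp4 -ss start_time -t duration -c copy output.mp4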

Use the following options and replace the placeholders with your specific values:

  • -ss start_time : Replace start_time with the start time to trim from. You can use various time formats like HH:MM:SS or seconds. For example, if you want to start trimming from 1 minute and 30 seconds, you can use -ss 00:01:30 or drop the hour and use -ss 01:30 .
  • -t duration : Specify the duration of the trim. Again, you can use various time formats. For example, if you want to trim 20 seconds, you can use -t 20 .
  • -c copy : This option copies the video and audio codecs without re-encoding, which is faster and preserves the original quality. If you need to re-encode, you can specify different codecs or omit this option.

Here's an example command that trims 20 seconds from a video, starting at 1 minute and 30 seconds:
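ffmpeg -i scott-ko.mp4 -ss 00:01:30 -t 20 -c copy scott-ko-trimmed.mp4

(The output filename is just an example.)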

For more information and examples, see how to trim a video using FFmpeg .

Crop a video using FFmpeg

In the age of smartphones and social networks, cropping videos to different sizes and aspect ratios has become an essential requirement when working with video. To crop a video using FFmpeg, use the crop filter.

Here's an example template:
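ffmpeg -i input.mp4 -filter:v "crop=w:h:x:y" output.mp4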

The options and placeholders are described below:

  • input.mp4 : Replace this with the filename or path of your input video.
  • -filter:v "crop=w:h:x:y" : Use the crop video filter and specify the cropping parameters w (width), h (height), x (cropping x coordinate), and y (cropping y coordinate) according to your requirements.
  • output.mp4 : Replace this with the desired filename or path for the output video.

Here's an example command cropping a video to a width of 640 pixels, a height of 640 pixels, and starting the crop from coordinates 900 pixels across and 50 pixels down:
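ffmpeg -i scott-ko.mp4 -filter:v "crop=640:640:900:50" scott-ko-cropped.mp4

(The output filename is just an example.)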

If you run this command using the provided test file, you'll see it creates a square video cropped to the speaker's face.

For more information and examples, see how to crop and resize videos using FFmpeg .

Extract or remove the audio from a video using FFmpeg

There are two common scenarios where you might want to work with a video's audio: extracting the audio so there is no video, or removing the audio from a video so that it is silent, or muted.

To extract and save the audio from a video file using FFmpeg, use this command template:
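ffmpeg -i input.mp4 -vn output.mp3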

The following options are used; replace the placeholders with your own values:

  • input.mp4 : Replace this with the path to your input video file.
  • -vn : This option disables video processing.
  • output.mp3 : Replace this with the desired output audio file name and extension. In this example, the output is saved as an MP3 file.

Here is an example command using our test file:
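ffmpeg -i scott-ko.mp4 -vn scott-ko.mp3

(The output filename is just an example.)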

To remove audio (or mute) a video file using FFmpeg, you can use the -an option, which disables audio processing. Here's an example command:
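ffmpeg -i input.mp4 -an -c:v copy output.mp4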

Here is an explanation of the options used:

  • -an : This option disables audio processing.
  • -c:v copy : This option copies the video stream without re-encoding, which is faster and preserves the original video quality. If you want to re-encode the video, you can specify a different video codec.

Here is an example using the test file:
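ffmpeg -i scott-ko.mp4 -an -c:v copy scott-ko-muted.mp4

(The output filename is just an example.)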

Concatenate videos using FFmpeg

Concatenating videos is the technical term FFmpeg uses to describe joining, merging or stitching multiple video clips together. To concatenate (or join) multiple video files together in FFmpeg, you can use the concat demuxer.

First, create a text file containing the list of video files you want to concatenate. Each line should contain the file path of a video file.

For example, create a file named filelist.txt and include a list of video files on your hard drive:
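file 'video1.mp4'
file 'video2.mp4'
file 'video3.mp4'

(Each line uses the concat demuxer's file directive; the filenames here are only examples.)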

Then, use the following FFmpeg command to concatenate the videos:
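ffmpeg -f concat -safe 0 -i filelist.txt -c copy merged.mp4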

Here is a summary of the options used:

  • -f concat : This specifies the format (concat) to be used.
  • -safe 0 : This allows using absolute paths in the file list.
  • -i filelist.txt : This specifies the input file list.
  • -c copy : This copies the streams (video, audio) without re-encoding, preserving the original quality. If you need to re-encode, you can specify different codecs or omit this option.
  • merged.mp4 : Replace this with the desired output file name and extension.

Adjust the file paths in filelist.txt according to your specific file names and paths. The order in which you list the files in the text file determines the order of concatenation.

For more information and examples, see merge videos using FFmpeg concat .

Resize a video using FFmpeg

You might need to resize a video if the resolution is very high, for example, you have a 4K video but your player only supports 1080p. To resize a video using FFmpeg, you can use the scale filter, set using the -vf (video filter) option.
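The template looks like this:

ffmpeg -i input.mp4 -vf "scale=w:h" resized.mp4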

Replace the placeholders with your specific values:

  • -vf "scale=w:h" : Replace w and h with the desired width and height of the output video. You can also set a single dimension, such as -vf "scale=-1:720" to maintain the original aspect ratio.
  • resized.mp4 : Replace this with the desired output video file name and extension.

Here's an example command resizing our test video to 720p resolution and maintaining the aspect ratio:
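ffmpeg -i scott-ko.mp4 -vf "scale=-1:720" scott-ko-720p.mp4

(The output filename is just an example.)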

Compress a video using FFmpeg

Video files are typically large and can take up a lot of space on your hard drive, cloud storage or take a long time to download. To compress a video using FFmpeg, you typically need to re-encode it using a more efficient video codec or by adjusting other encoding parameters.

There are many different ways to do this but here's an example template to get you started:
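ffmpeg -i input.mp4 -c:v libx264 -crf 25 -c:a aac -b:a 128k compressed.mp4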

Here are the options and placeholders you can replace:

  • -c:v libx264 : This option sets the video codec to H.264 (libx264). H.264 is a widely used and efficient video codec.
  • -crf 25 : This controls the video quality. A lower CRF (constant rate factor) value results in higher quality but larger file size. Typical values range from 18 to 28, with 23 being a reasonable default.
  • -c:a aac -b:a 128k : These options set the audio codec to AAC with a bitrate of 128 kbps. Adjust the bitrate according to your preferences.
  • compressed.mp4 : Replace this with the desired output file name and extension.

For more information and examples, see how to compress video using FFmpeg .

Using our test file, we can compress the video from 31.9MB to 6.99MB using this command:
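ffmpeg -i scott-ko.mp4 -c:v libx264 -crf 25 -c:a aac -b:a 128k scott-ko-compressed.mp4

(The output filename is just an example; the exact size reduction will vary with your FFmpeg build and CRF value.)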

Convert a series of images to a video using FFmpeg

Who doesn't love a video montage? With FFmpeg it's easy to create a video from a series of images: simply use a wildcard input (glob) pattern along with the -framerate option.

Here's an example command:
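ffmpeg -framerate 1 -pattern_type glob -i 'path/to/images/*.jpg' -c:v libx264 -pix_fmt yuv420p montage.mp4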

  • -framerate 1 : This sets the frame rate of the output video. Adjust the value according to your preference (e.g., 1 picture per second). Omitting the framerate will default to a framerate of 25.
  • -pattern_type glob -i 'path/to/images/*.jpg' : This specifies the input images using a glob pattern. Adjust the pattern and path to the location of your image files.
  • -c:v libx264 -pix_fmt yuv420p : These options specify the video codec (libx264) and pixel format. Adjust these options based on your preferences and compatibility requirements.
  • montage.mp4 : Replace this with the desired output file name and extension.

For more information and examples, see How to use FFmpeg to convert images to video .

Convert video to GIF using FFmpeg

GIFs are a popular animation format used for memes in messaging applications like WhatsApp or Facebook Messenger and a great way to send animations in emails among other use cases. There are a number of ways to convert and optimise a video to a GIF using FFmpeg, but here is a simple command template to get started with:
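ffmpeg -i input.mp4 -vf "fps=10,scale=320:-1:flags=lanczos" -c:v gif animation.gif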

Here's a breakdown of the options used and what to replace:

  • -vf "fps=10,scale=320:-1:flags=lanczos" : This sets the video filters for the GIF conversion. The fps option sets the frames per second (adjust the value as needed), and scale specifies the output dimensions. The flags=lanczos part is for quality optimization.
  • -c:v gif : This specifies the video codec for the output, in this case, GIF.
  • animation.gif : Replace this with the desired output file name and extension.

For more information and examples, see how to convert video to animated GIF using FFmpeg .

Speed up and slow down videos using FFmpeg

To speed up or slow down a video in FFmpeg, you can use the setpts filter . The setpts filter adjusts the presentation timestamp of video frames, effectively changing the speed of the video. Here are examples of both speeding up and slowing down a video.

Speed up a video

To double the speed of a video, use a setpts value of 0.5:
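ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4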

Slow down a video

To slow down a video by a factor (e.g., 2x slower), you can use a setpts value greater than 1:
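ffmpeg -i input.mp4 -vf "setpts=2.0*PTS" output.mp4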

These commands adjust the video speed by manipulating the presentation timestamps (PTS). The values 0.5 and 2.0 in the examples represent the speed factor. You can experiment with different speed factors to achieve the desired result.

Here is an example command that doubles the speed of our test file:
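ffmpeg -i scott-ko.mp4 -vf "setpts=0.5*PTS" scott-ko-2x.mp4

(The output filename is just an example.)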

Note that only the video is sped up, not the audio; to change the audio speed as well, you would also need the atempo audio filter.

Go forth and explore

This guide provides a quick primer on how to get started and use FFmpeg for various video processing tasks, along with some simple examples. The number of options and possibilities with FFmpeg is vast, and it's worth exploring the FFmpeg documentation and FFmpeg wiki to learn more about the tool and its capabilities.

FFmpeg's major strength is its versatility. However, it has a steep learning curve, with cryptic commands and an intimidating array of options. If you want to run FFmpeg commercially as part of a workflow, pipeline or application you'll also need to consider hosting the software, managing updates and security, and scaling the infrastructure to meet demand.

Shotstack was created to streamline automated video editing and video processing without having to learn complicated commands or worry about scaling infrastructure. Shotstack is an FFmpeg alternative offered as a collection of APIs and SDKs that allow you to programmatically create, edit and render videos in the cloud. It's a great way to get started with video processing without having to worry about the complexities of FFmpeg.

Get started with Shotstack's video editing API in two steps:

  • Sign up for free to get your API key.
  • Send an API request to create your video:

curl --request POST 'https://api.shotstack.io/v1/render' \
  --header 'x-api-key: YOUR_API_KEY' \
  --data-raw '{
    "timeline": {
      "tracks": [
        {
          "clips": [
            {
              "asset": {
                "type": "video",
                "src": "https://shotstack-assets.s3.amazonaws.com/footage/beach-overhead.mp4"
              },
              "start": 0,
              "length": "auto"
            }
          ]
        }
      ]
    },
    "output": {
      "format": "mp4",
      "size": {
        "width": 1280,
        "height": 720
      }
    }
  }'


BY ANDREW BONE 5th February 2024



ottverse.com

How to Speed Up or Slow Down a Video using FFmpeg

In this article, we explain how to speed up or slow down a video using FFmpeg. Whether you’re a video editor, a developer dealing with media files, or an enthusiast curious about video manipulation, you’ll find value in this guide 🙂

We’ll begin with setting up FFmpeg on your system, delve into understanding some essential commands, and then move towards our main focus – slowing down and speeding up videos using FFmpeg.

Let’s dive right in.


Setting up FFmpeg

Before we learn to speed up or slow down videos using FFmpeg, we first have to install it on our computers. Setting up FFmpeg on your computer, whether it’s Windows, macOS, or Linux-based, is a straightforward process. However, the steps vary slightly depending on the platform. Here are the steps to install it –

For Windows users:

  • Download the latest FFmpeg static build from the official website . Ensure you select the correct architecture (32-bit or 64-bit) for your system.
  • Extract the downloaded ZIP file. You’ll find a folder named ‘ffmpeg-<version>-essentials_build’.
  • Add the ‘bin’ directory within this folder to your system’s PATH. This step allows you to run FFmpeg from the command line, irrespective of the directory you’re in.

For macOS users:

  • Open Terminal.
  • If you have Homebrew installed, simply type in brew install ffmpeg . If you don’t, consider installing Homebrew first. It simplifies the software installation process on macOS.

For Linux users:

The FFmpeg package is generally included in the standard repository of most Linux distributions. Use your distribution’s package manager to install FFmpeg. For example, on Ubuntu, use sudo apt-get install ffmpeg .

After installation, verify it by typing ffmpeg -version in the command line. The output should show the installed FFmpeg version and build details.

If you see details of the installed FFmpeg version, you’ve successfully set it up. If you are interested in FFmpeg, go here to see more options on installing FFmpeg .

Now, let’s learn to slow down a video using FFmpeg. After that, we’ll learn how to speed up a video.

How to Slow Down Video with FFmpeg

We’ll now explore the process of slowing down a video using the setpts video filter in FFmpeg. ‘setpts’ stands for “set presentation timestamp” and it adjusts the frame timestamps, which can effectively slow down or speed up your video playback.

Here’s the basic command to slow down a video using FFmpeg and the setpts parameter:

ffmpeg -i input.mp4 -vf "setpts=2.0*PTS" output.mp4

In this command,

  • the -vf option tells FFmpeg that we are going to apply a video filter.
  • The “setpts=2.0*PTS” portion is our filter of choice. PTS stands for Presentation TimeStamp in the video stream, and by multiplying it by 2.0, we are effectively doubling the length of the video, thus slowing it down to half speed.

The setpts filter can take any positive floating-point number as an argument. If you want to slow the video down even more, simply increase the value. For instance, using setpts=4.0*PTS would make the video play at quarter speed.

How to Speed Up Video with FFmpeg

Speeding up a video involves reducing its overall playback duration. So, if you have a 10 min video (600 seconds) and you speed it up by a factor of 10, then the output is going to be 1 min long (60 seconds). We can easily speed up a video using the setpts filter in FFmpeg as follows.

As you might recall, the ‘setpts’ filter adjusts the frame timestamps, which affects the playback speed. To speed up the video, we use a value less than 1.0. Here’s how to do it:

ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4

This command reduces the timestamps by half, effectively doubling the video speed. You can adjust the multiplier according to your needs. A smaller value will speed up the video more.

In the example below, I speed up the original video by a factor of 4 using setpts=0.25*PTS .
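The corresponding command is:

ffmpeg -i input.mp4 -vf "setpts=0.25*PTS" output.mp4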

Before we end this article, I’d like to briefly touch upon presentation timestamps so that you know what they are when you are changing their values.

SetPts will Re-encode your videos

Just as a cautionary note: when you use setpts in FFmpeg, frames will be dropped to achieve the requested speed-up, and this will force FFmpeg to re-encode your content. Always remember that you can set the video quality you need using CBR, capped VBR, or CRF techniques while speeding up or slowing down your videos.

Appendix: What is PTS or Presentation Time Stamp?

The Presentation Time Stamp, or PTS, is part of the data in video and audio streams that indicates when a frame (video) or packet (audio) should be presented to the viewer or listener.

Essentially, it’s a timestamp that tells the media player, “This is the exact moment when this particular frame or packet should be displayed or played.”

To better understand, consider watching a movie. Each frame of the movie has a specific time when it should appear on your screen. This timing ensures that all the frames are shown in the correct sequence and at the right speed, giving you a smooth viewing experience. The timing of these frames is dictated by their PTS values.

Now, the PTS values are expressed in “timebase units,” which are fractions of a second. The timebase is determined by the video or audio stream’s frame or sample rate. For instance, if a video has a frame rate of 30 frames per second (fps), each frame will have a duration of 1/30 of a second, and the PTS will increment by this amount for each subsequent frame.

So, if we have a sequence of frames with PTS values like 0, 1/30, 2/30, 3/30, and so forth, the video player will present each frame precisely 1/30 of a second after the previous one, resulting in a smoothly playing video at the correct speed.

When we manipulate the PTS values, as we do when slowing down or speeding up a video using FFmpeg, we’re altering these timestamps. For instance, if we slow down a video by a factor of two (using setpts=2.0*PTS ), we’re effectively doubling the PTS values for each frame. This makes the video player display each frame for twice as long, halving the video’s playback speed. Conversely, if we speed up a video by a factor of two (using setpts=0.5*PTS ), we’re halving the PTS values, making the frames display twice as quickly and doubling the playback speed.

It’s important to note that manipulating PTS values doesn’t alter the actual media content (i.e., the images in the video frames or the audio samples), but rather the timing of when they are presented, which is how the speed changes are achieved.

It’s also worth noting that PTS values can be presented in different ways. They are generally represented in timebase units, as previously explained, but they can also be represented as real time, depending on the context. The pkt_pts_time field shows the PTS in seconds, which is the real time representation.

By now, you’ve gained a solid understanding of how to speed up or slow down a video using FFmpeg with the setpts filter. Remember that it involves re-encoding, and you can always adjust the encoding parameters to achieve your desired output video quality.

To learn more about FFmpeg, head over to our Recipes in FFmpeg section.

Until next time, happy streaming!


Krishna Rao Vijayanagar

Krishna Rao Vijayanagar, Ph.D., is the Editor-in-Chief of OTTVerse, a news portal covering tech and business news in the OTT industry.

With extensive experience in video encoding, streaming, analytics, monetization, end-to-end streaming, and more, Krishna has held multiple leadership roles in R&D, Engineering, and Product at companies such as Harmonic Inc., MediaMelon, Airtel Digital, and Visionular Inc. Krishna has published numerous articles and research papers and speaks at industry events to share his insights and perspectives on the fundamentals and the future of OTT streaming.


An ffmpeg and SDL Tutorial

Tutorial 05: synching video, how video syncs.

So this whole time, we've had an essentially useless movie player. It plays the video, yeah, and it plays the audio, yeah, but it's not quite yet what we would call a movie . So what do we do?

PTS and DTS

Fortunately, both the audio and video streams have the information about how fast and when you are supposed to play them inside of them. Audio streams have a sample rate, and the video streams have a frames per second value. However, if we simply synced the video by just counting frames and multiplying by frame rate, there is a chance that it will go out of sync with the audio. Instead, packets from the stream might have what is called a decoding time stamp (DTS) and a presentation time stamp (PTS) . To understand these two values, you need to know about the way movies are stored. Some formats, like MPEG, use what they call "B" frames (B stands for "bidirectional"). The two other kinds of frames are called "I" frames and "P" frames ("I" for "intra" and "P" for "predicted"). I frames contain a full image. P frames depend upon previous I and P frames and are like diffs or deltas. B frames are the same as P frames, but depend upon information found in frames that are displayed both before and after them! This explains why we might not have a finished frame after we call avcodec_decode_video2 .

So let's say we had a movie, and the frames were displayed like: I B B P. Now, we need to know the information in P before we can display either B frame. Because of this, the frames might be stored like this: I P B B. This is why we have a decoding timestamp and a presentation timestamp on each frame. The decoding timestamp tells us when we need to decode something, and the presentation time stamp tells us when we need to display something. So, in this case, our stream might look like this:

   PTS: 1 4 2 3
   DTS: 1 2 3 4
Stream: I P B B

Generally the PTS and DTS will only differ when the stream we are playing has B frames in it.

When we get a packet from av_read_frame () , it will contain the PTS and DTS values for the information inside that packet. But what we really want is the PTS of our newly decoded raw frame, so we know when to display it.

Fortunately, FFmpeg supplies us with a "best effort" timestamp, which you can get via av_frame_get_best_effort_timestamp().

Now, while it's all well and good to know when we're supposed to show a particular video frame, how do we actually do so? Here's the idea: after we show a frame, we figure out when the next frame should be shown. Then we simply set a new timeout to refresh the video again after that amount of time. As you might expect, we check the value of the PTS of the next frame against the system clock to see how long our timeout should be. This approach works, but there are two issues that need to be dealt with.

First is the issue of knowing when the next PTS will be. Now, you might think that we can just add the video rate to the current PTS — and you'd be mostly right. However, some kinds of video call for frames to be repeated. This means that we're supposed to repeat the current frame a certain number of times. This could cause the program to display the next frame too soon. So we need to account for that.

The second issue is that as the program stands now, the video and the audio are chugging away happily, not bothering to sync at all. We wouldn't have to worry about that if everything worked perfectly. But your computer isn't perfect, and a lot of video files aren't, either. So we have three choices: sync the audio to the video, sync the video to the audio, or sync both to an external clock (like your computer). For now, we're going to sync the video to the audio.

Coding it: getting the frame PTS

Now let's get into the code to do all this. We're going to need to add some more members to our big struct, but we'll do this as we need to. First let's look at our video thread. Remember, this is where we pick up the packets that were put on the queue by our decode thread. What we need to do in this part of the code is get the PTS of the frame given to us by avcodec_decode_video2 . The first way we talked about was getting the DTS of the last packet processed, which is pretty easy:

  double pts;

  for(;;) {
    if(packet_queue_get(&is->videoq, packet, 1) < 0) {
      break;
    }
    avcodec_decode_video2(is->video_st->codec, pFrame, &frameFinished, packet);
    if(packet->dts != AV_NOPTS_VALUE) {
      pts = av_frame_get_best_effort_timestamp(pFrame);
    } else {
      pts = 0;
    }
    pts *= av_q2d(is->video_st->time_base);

We set the PTS to 0 if we can't figure out what it is.

Well, that was easy. A technical note: You may have noticed we're using int64 for the PTS. This is because the PTS is stored as an integer. This value is a timestamp that corresponds to a measurement of time in that stream's time_base unit. For example, if a stream has 24 frames per second, a PTS of 42 is going to indicate that the frame should go where the 42nd frame would be if we had a frame every 1/24 of a second (which isn't necessarily true).

We can convert this value to seconds by dividing by the framerate. The time_base value of the stream is going to be 1/framerate (for fixed-fps content), so to get the PTS in seconds, we multiply by the time_base .

Coding: Synching and using the PTS

So now we've got our PTS all set. Now we've got to take care of the two synchronization problems we talked about above. We're going to define a function called synchronize_video that will update the PTS to be in sync with everything. This function will also finally deal with cases where we don't get a PTS value for our frame. At the same time we need to keep track of when the next frame is expected so we can set our refresh rate properly. We can accomplish this by using an internal video_clock value which keeps track of how much time has passed according to the video. We add this value to our big struct.

  typedef struct VideoState {
    double video_clock; // pts of last decoded frame / predicted pts of next decoded frame

Here's the synchronize_video function, which is pretty self-explanatory:

  double synchronize_video(VideoState *is, AVFrame *src_frame, double pts) {

    double frame_delay;

    if(pts != 0) {
      /* if we have pts, set video clock to it */
      is->video_clock = pts;
    } else {
      /* if we aren't given a pts, set it to the clock */
      pts = is->video_clock;
    }
    /* update the video clock */
    frame_delay = av_q2d(is->video_st->codec->time_base);
    /* if we are repeating a frame, adjust clock accordingly */
    frame_delay += src_frame->repeat_pict * (frame_delay * 0.5);
    is->video_clock += frame_delay;
    return pts;
  }

You'll notice we account for repeated frames in this function, too.

Now let's get our proper PTS and queue up the frame using queue_picture , adding a new pts argument:

  // Did we get a video frame?
  if(frameFinished) {
    pts = synchronize_video(is, pFrame, pts);
    if(queue_picture(is, pFrame, pts) < 0) {
      break;
    }
  }

The only thing that changes about queue_picture is that we save that pts value to the VideoPicture structure that we queue up. So we have to add a pts variable to the struct and add a line of code:

  typedef struct VideoPicture {
    ...
    double pts;
  }

  int queue_picture(VideoState *is, AVFrame *pFrame, double pts) {
    ... stuff ...
    if(vp->bmp) {
      ... convert picture ...
      vp->pts = pts;
      ... alert queue ...
    }

So now we've got pictures lining up onto our picture queue with proper PTS values, so let's take a look at our video refreshing function. You may recall from last time that we just faked it and put a refresh of 80ms. Well, now we're going to find out how to actually figure it out.

Our strategy is going to be to predict the time of the next PTS by simply measuring the time between the previous pts and this one. At the same time, we need to sync the video to the audio. We're going to make an audio clock : an internal value that keeps track of what position the audio we're playing is at. It's like the digital readout on any mp3 player. Since we're synching the video to the audio, the video thread uses this value to figure out if it's too far ahead or too far behind.

We'll get to the implementation later; for now let's assume we have a get_audio_clock function that will give us the time on the audio clock. Once we have that value, though, what do we do if the video and audio are out of sync? It would be silly to simply try and leap to the correct packet through seeking or something. Instead, we're just going to adjust the value we've calculated for the next refresh: if the PTS is too far behind the audio time, we simply refresh as quickly as possible; if the PTS is too far ahead of the audio time, we double our calculated delay. Now that we have our adjusted refresh time, or delay , we're going to compare that with our computer's clock by keeping a running frame_timer . This frame timer will sum up all of our calculated delays while playing the movie. In other words, this frame_timer is what time it should be when we display the next frame. We simply add the new delay to the frame timer, compare it to the time on our computer's clock, and use that value to schedule the next refresh. This might be a bit confusing, so study the code carefully:

  void video_refresh_timer(void *userdata) {

    VideoState *is = (VideoState *)userdata;
    VideoPicture *vp;
    double actual_delay, delay, sync_threshold, ref_clock, diff;

    if(is->video_st) {
      if(is->pictq_size == 0) {
        schedule_refresh(is, 1);
      } else {
        vp = &is->pictq[is->pictq_rindex];

        delay = vp->pts - is->frame_last_pts; /* the pts from last time */
        if(delay <= 0 || delay >= 1.0) {
          /* if incorrect delay, use previous one */
          delay = is->frame_last_delay;
        }
        /* save for next time */
        is->frame_last_delay = delay;
        is->frame_last_pts = vp->pts;

        /* update delay to sync to audio */
        ref_clock = get_audio_clock(is);
        diff = vp->pts - ref_clock;

        /* Skip or repeat the frame. Take delay into account
           FFPlay still doesn't "know if this is the best guess." */
        sync_threshold = (delay > AV_SYNC_THRESHOLD) ? delay : AV_SYNC_THRESHOLD;
        if(fabs(diff) < AV_NOSYNC_THRESHOLD) {
          if(diff <= -sync_threshold) {
            delay = 0;
          } else if(diff >= sync_threshold) {
            delay = 2 * delay;
          }
        }
        is->frame_timer += delay;
        /* compute the REAL delay */
        actual_delay = is->frame_timer - (av_gettime() / 1000000.0);
        if(actual_delay < 0.010) {
          /* Really it should skip the picture instead */
          actual_delay = 0.010;
        }
        schedule_refresh(is, (int)(actual_delay * 1000 + 0.5));
        /* show the picture! */
        video_display(is);

        /* update queue for next picture! */
        if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
          is->pictq_rindex = 0;
        }
        SDL_LockMutex(is->pictq_mutex);
        is->pictq_size--;
        SDL_CondSignal(is->pictq_cond);
        SDL_UnlockMutex(is->pictq_mutex);
      }
    } else {
      schedule_refresh(is, 100);
    }
  }

There are a few checks we make: first, we make sure that the delay between the PTS and the previous PTS makes sense. If it doesn't, we just guess and use the last delay. Next, we make sure we have a synch threshold because things are never going to be perfectly in synch. ffplay uses 0.01 for its value. We also make sure that the synch threshold is never smaller than the gaps in between PTS values. Finally, we make the minimum refresh value 10 milliseconds*.

* Really here we should skip the frame, but we're not going to bother.

We added a bunch of variables to the big struct, so don't forget to check the code. Also, don't forget to initialize the frame timer and the initial previous frame delay in stream_component_open :

  is->frame_timer = (double)av_gettime() / 1000000.0;
  is->frame_last_delay = 40e-3;

Synching: The Audio Clock

Now it's time for us to implement the audio clock. We can update the clock time in our audio_decode_frame function, which is where we decode the audio. Now, remember that we don't always process a new packet every time we call this function, so there are two places we have to update the clock at. The first place is where we get the new packet: we simply set the audio clock to the packet's PTS. Then, if a packet has multiple frames, we keep the audio clock up to date by counting the number of samples and dividing by the given samples-per-second rate. So once we have the packet:

  /* if update, update the audio clock w/pts */
  if(pkt->pts != AV_NOPTS_VALUE) {
    is->audio_clock = av_q2d(is->audio_st->time_base) * pkt->pts;
  }

And once we are processing the packet:

  /* Keep audio_clock up-to-date */
  pts = is->audio_clock;
  *pts_ptr = pts;
  n = 2 * is->audio_st->codec->channels;
  is->audio_clock += (double)data_size /
    (double)(n * is->audio_st->codec->sample_rate);

A few fine details: the template of the function has changed to include pts_ptr , so make sure you change that. pts_ptr is a pointer we use to inform audio_callback of the pts of the audio packet. This will be used next time for synchronizing the audio with the video.

Now we can finally implement our get_audio_clock function. It's not as simple as getting the is->audio_clock value, though. Notice that we set the audio PTS every time we process it, but if you look at the audio_callback function, it takes time to move all the data from our audio packet into our output buffer. That means that the value in our audio clock could be too far ahead. So we have to check how much we have left to write. Here's the complete code:

  double get_audio_clock(VideoState *is) {
    double pts;
    int hw_buf_size, bytes_per_sec, n;

    pts = is->audio_clock; /* maintained in the audio thread */
    hw_buf_size = is->audio_buf_size - is->audio_buf_index;
    bytes_per_sec = 0;
    n = is->audio_st->codec->channels * 2;
    if(is->audio_st) {
      bytes_per_sec = is->audio_st->codec->sample_rate * n;
    }
    if(bytes_per_sec) {
      pts -= (double)hw_buf_size / bytes_per_sec;
    }
    return pts;
  }

You should be able to tell why this function works by now ;)

So that's it! Go ahead and compile it:

  gcc -o tutorial05 tutorial05.c -lavutil -lavformat -lavcodec -lswscale -lz -lm \
    `sdl-config --cflags --libs`

And finally, you can watch a movie on your own movie player. Next time we'll look at audio synching, and in the tutorial after that we'll talk about seeking.


email: dranger at gmail dot com

ffmpeg Commands

Some quick samples of ffmpeg commands I frequently use.

Quick Overview

If you’re not familiar with ffmpeg, here’s the basic command structure:
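ffmpeg -i input_file [options] output_file

(Roughly speaking: options placed after the input apply to the output; this is the general shape the commands below follow.)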

Common Settings

Resize a video.

Use the scale video filter ( -vf ). The arguments to the scale filter are width:height . You can use -1 in place of either value to set only one value and maintain the current aspect ratio.

For example, to scale the video to a frame height of 720 pixels:
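ffmpeg -i input.mp4 -vf scale=-1:720 output.mp4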

Speed Up or Slow Down a Video

Use the setpts (set presentation timestamp) video filter ( -vf ). The argument to the setpts filter is a formula for how to set the timestamp. Some example values:

  • setpts=0.66*PTS - roughly 1.33x speed
  • setpts=0.5*PTS - double the speed
  • setpts=0.25*PTS - quadruple the speed
  • setpts=2.0*PTS - half the speed

For example, to double the speed of a video:
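ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4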

Combining Video Filters

Filters are combined using a comma to separate them. For example, to scale a video and change its speed, you could use this command:
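ffmpeg -i input.mp4 -vf "scale=-1:720,setpts=0.5*PTS" output.mp4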

Removing Audio

Simply add the -an flag, which means “no audio”.
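For example:

ffmpeg -i input.mp4 -an output.mp4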

Changing the Video Codec

Sometimes you receive a video with a non-standard codec, or you want to convert from one format to another. The most common format today is mp4, and the most common codec is h.264. You can use a command like this to ensure that it’s h.264 and change the video format from .mov (for example, if you exported from Apple Photo Booth) to .mp4 :
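For example (a sketch; the audio stream is left at FFmpeg's default encoder for MP4):

ffmpeg -i input.mov -c:v libx264 output.mp4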

Changing the Quality

I often receive files that have astronomically-high bitrates, especially if they used an Apple product (i.e. iOS screen recording, or Apple Photo Booth) to record the video. You can generally significantly drop the bitrate and not suffer any perceivable reduction in quality. Here’s an example of how to do that:
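For example (the 2M target bitrate is only illustrative; pick a value that suits your content):

ffmpeg -i input.mp4 -b:v 2M output.mp4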

ffmpeg Recipes

My most common use of ffmpeg combines many of the settings above into one of these commands.

Scale to 720p, Format, and Reduce Bitrate
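One plausible combination of the settings above (exact values are illustrative):

ffmpeg -i input.mov -vf scale=-1:720 -c:v libx264 -b:v 2M output.mp4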

Same, but also remove audio and speed up (good for ASL videos):
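A sketch along the same lines:

ffmpeg -i input.mov -vf "scale=-1:720,setpts=0.5*PTS" -c:v libx264 -b:v 2M -an output.mp4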


Using ffmpeg to determine exact date and time a particular frame was recorded at

Is there anything encoded inside an A/V frame of an MPEG file that ffmpeg could use to determine the Unix Epoch timestamp of when that frame was recorded? I understand there is PTS (presentation time stamp) but that only appears to be a relative time from the start of the first frame in the file. I am hoping there is a way to know (via ffmpeg ) the exact date and time that, say, Frame #928,382 was recorded at.

Otherwise, how do DVR systems know which segment of a stored MPEG file to send back when a user wants to see all the footage stored between, say, October 10th at 3:35 PM and October 10th at 4:25 PM?



Using ffmpeg to cut out a scene by original timestamp

I have a recording (a simple, unprocessed TS from my satellite receiver, not encrypted) that has unnecessary stuff at the beginning and the end. I want to cut out the main feature now with ffmpeg.

So I started with ffplay to get the timestamps for the beginning and the end, and for crop detection:
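ffplay -i recording_xyz.ts -vf "cropdetect=24:16:0"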

This was the output for the first and the last frame I need:

The stream itself starts at timestamp 81825.820733 :

With all that information, I tried to cut the desired part out (and also remove streams I don't need) and convert everything into an MKV. It turned out I don't need to crop, so I can simply copy everything:

The result is an empty MKV file.

What would be the correct way to use ffmpeg to cut a specific scene identified by the timestamps out?

For reference:

ffmpeg version I'm using:

ffmpeg's identification data for the original input stream:

The things that appear on the console while trying to encode:

... stream identification data goes here (see above) ...

After some time, ffmpeg finishes:


3 Answers

This could have several causes. Cutting without re-encoding is always error-prone. Try to remux your TS first with mkvmerge, keeping only the desired streams, and then try cutting that MKV with FFmpeg again using a simpler command like:
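For example (timestamps and filenames here are only placeholders):

ffmpeg -i remuxed.mkv -ss 00:02:00 -to 01:30:00 -c copy cut.mkv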


  • Thanks a lot for your insight, I had multiple non-existing PPS 0 referenced when cutting without reencode, and some apps eg Adobe Premiere would not open it, until I did: ffmpeg -i media.ts -ss 0.04 -to 143.08 -c copy -f matroska - | ffmpeg -y -i - -c copy out.ts and suddenly Premiere likes it again. VLC had not a problem on any scenario though, as it probably assumed the next PPS it could encounter as PPS 0 –  Jose Alban Commented Sep 28, 2017 at 14:24

It turned out that it is quite easy to cut by the original timestamp. Let's stick with the example I've given in the question. ffplay -i recording_xyz.ts -vf "cropdetect=24:16:0" gives you the following information about the stream:

Most important here is the start: 81824.820733 information from the second line. Keep this number in mind.

Now we need our desired start and stop timestamps. ffplay can be paused by the space key. Pause at the desired start and the desired end. You can "navigate" the stream by clicking with the mouse in the ffplay video window. The beginning of the stream is left, the end is at the right border of the window.

When paused, you can read the following on the console:

You see again a timestamp here, marked with t: . In this example, the value is 81953.624278 . Same goes for the desired end of the stream. In our example, this is 87259.194348 .

With this information, you can easily calculate the values for ffmpeg's -ss and -to parameters:

  • Let b be the stream start timestamp (here 81824.820733 )
  • Let s be the desired start timestamp (here 81953.624278 )
  • Let e be the desired end timestamp (here 87259.194348 )

The relative starting point (in seconds) is s - b = 81953.624278 - 81824.820733 = 128.803545 . For the relative ending point (in seconds): e - b = 87259.194348 - 81824.820733 = 5434.373615 . Convert the seconds now to the format hh:mm:ss.msec . For convenience, you might want to use this little Python script:

With the example above, you'll receive the following output:
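That is, roughly 00:02:08.804 for the start and 01:30:34.374 for the end.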

With this information, you can use ffmpeg:
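A command along these lines should work (stream-selection options omitted for brevity):

ffmpeg -i recording_xyz.ts -ss 00:02:08.804 -to 01:30:34.374 -c copy output.mkv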

I tried this with several recordings, and it always worked well. It is also very easy to incorporate cropping if necessary:
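For example (the crop values are placeholders, and applying a filter means the video can no longer be stream-copied):

ffmpeg -i recording_xyz.ts -ss 00:02:08.804 -to 01:30:34.374 -vf "crop=w:h:x:y" -c:a copy output.mkv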

Potential pitfalls

  • If the recording goes over midnight, you might see a lower timestamp at the end of the stream than at the beginning. You need to find the timestamp values around the wrap: the highest one before the wrap, and the lowest one after it. For calculating the desired end, add up the distance between the stream start and the highest value before the wrap, and the distance between the lowest value after the wrap and the desired ending timestamp.
  • The method for finding your desired start and end point is very imprecise. In my humble opinion, this is OK for TV recordings.
  • The method is not suitable for removing commercials.

The total duration of the input is 5880 seconds long, but you are attempting to create an output starting at duration 81953 seconds.

ffmpeg should provide a warning indicating this, but perhaps it did not, or maybe you trimmed it from your output. This is one reason to provide the complete console output and not just selected segments (multiple repeating lines may be trimmed).




Extracting frame timestamp (PTS) and setting it as a file name

I have a video file that lasts 9.3s and was recorded at FPS=10. I would like to use FFmpeg to extract frames from this video at an arbitrary FPS (e.g. FPS=3). Example command:
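Something along these lines (the output pattern is illustrative):

ffmpeg -i input.mp4 -vf fps=3 image_%02d.jpg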

But, I need to know which frame from original video FFMPEG has extracted. What I mean by that is, I would like to include in file name a timestamp (e.g. image_00_00:00:00.1.jpg , where 00 is index generated by FFMPEG and 00:00:00.1 is timestamp from which frame was extracted.).

I want to be able to SEEK to that specific timestamp and extract the same frame FFMPEG has generated for me.

By using the following command I am able to draw the timestamp (pts) on each frame. But what I need is that timestamp inside the filename, and I don't know how to get it.
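Probably something like the following (one way to overlay the pts with the drawtext filter; the exact syntax here is a guess at what was used):

ffmpeg -i input.mp4 -vf "fps=3,drawtext=text='%{pts\:hms}':x=10:y=10:fontcolor=white" image_%02d.jpg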


4 Answers

The ffmpeg docs suggest this to timestamp ALL frames:

ffmpeg -i input.mp4 -f image2 -frame_pts true %d.jpg

If you check out https://ffmpeg.org/ffmpeg-all.html and search for "expand the filename with pts"

but I don't think it's the true Presentation Time Stamp; rather, it's an index.


According to the following post, we may use the following command for setting the file names to timestamps in milliseconds:

ffmpeg -vsync 0 -i video.mkv -r 1000 -f image2 -frame_pts 1 %d.jpg

  • -vsync 0 - Each frame is passed with its timestamp from the demuxer to the muxer.
  • -r 1000 - set the framerate to the output to 1000Hz, for converting the index to milliseconds.
  • -frame_pts 1 - Use current frame pts for filename.

For setting the file name as if the framerate is 3Hz, we have to know the original video framerate.

For example, if the original framerate is 25fps, use the following command:

ffmpeg -vsync 0 -i video.mkv -r 1000*25/3 -f image2 -frame_pts 1 %d.jpg

There is also an option to use setpts filter:

ffmpeg -vsync 0 -i video.mkv -vf "setpts=N*333.333" -f image2 -frame_pts 1 -enc_time_base -1 %d.jpg

The above command sets the filename to count in steps of 333.


According to the post How can l use ffmpeg to extract frames with a certain fps and scaling this could work:


This worked for me!




ffmpeg c/c++ get frame count or timestamp and fps

I am using ffmpeg to decode a video file in C. I am struggling to get either the count of the current frame I am decoding or the timestamp of the frame. I have read numerous posts that show how to calculate an estimated frame no based on the fps and frame timestamp, however I am not able to get either of those.

What I need: fps of video file, timestamp of current frame or frame no(not calculated)

What I have: I am able to get the time of the video using

I am counting the frames currently as I process them, and getting a current frame count, this is not going to work longterm though. I can get the total frame count for the file using

I have read this may not work for all streams, although it has worked for every stream I have tried.

I have tried using the time_base.num and time_base.den values and packet.pts, but I can't make any sense of the values that I am getting from those, so I may just need to understand better what those values are.

Does anyone know of resources that show examples on how to get this values?


This url discusses why the pts values may not make sense and how to get sensible ones: An ffmpeg and SDL Tutorial by Dranger

Here is an excerpt from that link, which gives guidance on exactly what you are looking for in terms of frame numbers and timestamps. If this seems useful to you then you may want to read more of the document for a fuller understanding:

So let's say we had a movie, and the frames were displayed like: I B B P. Now, we need to know the information in P before we can display either B frame. Because of this, the frames might be stored like this: I P B B. This is why we have a decoding timestamp and a presentation timestamp on each frame. The decoding timestamp tells us when we need to decode something, and the presentation time stamp tells us when we need to display something. So, in this case, our stream might look like this:

   PTS: 1 4 2 3
   DTS: 1 2 3 4
Stream: I P B B

Generally the PTS and DTS will only differ when the stream we are playing has B frames in it. When we get a packet from av_read_frame() , it will contain the PTS and DTS values for the information inside that packet. But what we really want is the PTS of our newly decoded raw frame, so we know when to display it. Fortunately, FFmpeg supplies us with a "best effort" timestamp, which you can get via av_frame_get_best_effort_timestamp().


  • This answer has been flagged for removal because it is a link-only answer. Could you please expand this answer so it answers the question without requiring the reader to click through to the linked webpage? –  josliber Commented Jan 15, 2016 at 14:03
  • I'll try to comply with this requirement to provide more than a link, but I must point out two things: first, the question asked for "resources that show examples on how to get this values", so it seems to ask for a link rather than something longer; second, three years ago the answer was apparently what the questioner needed, since it was chosen as the answer (it was the ONLY answer). So presumably it has been helping the original questioner, and possibly other visitors, for three years. –  Beel Commented Jan 18, 2016 at 0:46
  • 1 @Beel Whether or not it's been helping for a long time, Stack Overflow's answer policy says that answers providing just a link aren't complete answers; if you stripped out the formatting and left just the text, they should still answer the question. –  anon Commented Jun 23, 2016 at 1:30


RELATED EXCERPTS

  1. ffmpeg.c: what are PTS and DTS? What does this code block do in ffmpeg?

    Those are the decoding time stamp (DTS) and presentation time stamp (PTS). You can find an explanation here inside a tutorial ...

  2. Interpreting pts_time in FFmpeg

    pts_time=6.506000 means an absolute presentation timestamp of 6.506 seconds. Its relative presentation time depends on the start_time of the file; to get that, use -show_entries format=start_time. ffprobe seeks to keyframes, so it will seek to the nearest keyframe at or before the specified time and then print info for the stated number of packets.
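    For instance, reading that start_time could look like the following (the output-format flags are one reasonable choice, and input.mp4 is a placeholder):

      ffprobe -v error -show_entries format=start_time -of default=noprint_wrappers=1 input.mp4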

  3. Using ffmpeg to add a presentation timestamp (linux)

    I'm trying to add a presentation timestamp given a known frame rate. While this does work, it seems to be deprecated. I'm running the command ...

  4. How to speed up / slow down a video

    To double the speed of the video with the setpts filter, you can use: ffmpeg -i input.mkv -filter:v "setpts=0.5*PTS" output.mkv. The filter works by changing the presentation timestamp (PTS) of each video frame. For example, if there are two successive frames shown at timestamps 1 and 2, and you want to speed up the video, those timestamps need ...

  5. ffmpeg Documentation

    If the argument consists of timestamps, ffmpeg will round the specified times to the nearest output timestamp as per the encoder time base and force a keyframe at the first frame having timestamp equal or greater than the computed timestamp. ... Presentation timestamp of the frame or packet, as an integer. Should be multiplied by the timebase ...
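    That passage comes from the documentation for the -force_key_frames option; a concrete use forcing keyframes at two timestamps (file names and codec choice are placeholders) might be:

      ffmpeg -i input.mp4 -force_key_frames 00:00:10,00:00:20 -c:v libx264 output.mp4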

  6. How to use FFmpeg (with examples)

    The setpts filter adjusts the presentation timestamp of video frames, effectively changing the speed of the video. Here are examples of both speeding up and slowing down a video. Speed up a video: to double the speed of a video, use a setpts value of 0.5: ffmpeg -i input.mp4 -filter:v "setpts=0.5*PTS" fast.mp4. Slow down a video ...

  7. How to Speed Up or Slow Down a Video using FFmpeg

    Here's the basic command to slow down a video using FFmpeg and the setpts parameter: ffmpeg -i input.mp4 -vf "setpts=2.0*PTS" output.mp4. In this command, the -vf option tells FFmpeg that we are going to apply a video filter. The "setpts=2.0*PTS" portion is our filter of choice.

  8. An ffmpeg and SDL Tutorial

    Instead, packets from the stream might have what is called a decoding time stamp (DTS) and a presentation time stamp (PTS). To understand these two values, you need to know about the way movies are stored. ... Fortunately, FFmpeg supplies us with a "best effort" timestamp, which you can get via av_frame_get_best_effort_timestamp().

  9. ffmpeg Commands · Jeremy Thomerson

    Use the setpts (set presentation timestamp) video filter (-vf). The argument to the setpts filter is a formula for how to set the timestamp. Some example values: ... My most common use of ffmpeg combines many of the settings above into one of these commands ("Scale to 720p, Format, and Reduce Bitrate"): ffmpeg -i path/to/original/file.mp4 -c: ...

  10. Extracting frames and their timestamps with showinfo and -frame_pts (ffmpeg)

    I am attempting to extract frames with their timestamps from videos using the command line, but I am struggling to relate the output of the showinfo filter to the actual frames output by the command, and to the corresponding output file names from the -frame_pts option. I am extracting 1 frame per second using the following command ...

  11. libav: what frame timestamps represent

    @dstob The timestamps associated with frames are the presentation timestamps of the input video, i.e. when each frame should be shown, relative to the start of the video. ... You can of course also tell ffmpeg to throw away the input timestamps with -vsync drop, but that happens after filtering, IIRC. – slhck

  12. How do I `drawtext` video playtime ("elapsed time") with `:`s on a video?

    Question: How do I drawtext video playtime ("elapsed time") on a video, with FFmpeg's -filter_complex option? Example: assuming I have a video whose duration is 150 seconds, 1 second after the video starts it should display 00:01 / 02:30; after 2 seconds, 00:02 / 02:30; after 3 seconds ...
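    A sketch of the elapsed-time half of that, using drawtext's pts text expansion (position, colour, and file names are placeholders; this needs an FFmpeg build with libfreetype, and the fixed total duration would still have to be appended as literal text):

      ffmpeg -i input.mp4 -vf "drawtext=text='%{pts\:hms}':x=10:y=10:fontcolor=white" output.mp4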

  13. Stamping frames with wall-clock time via setpts (ffmpeg)

    I'm using FFmpeg to add a PTS (presentation time stamp) on the frames as follows: $ my-program | ffmpeg -i - -filter:v setpts='(RTCTIME - RTCSTART) / (TB * 1000000)' out.mp4. This filter computes the current time, and puts it as the PTS. The trouble is that my-program does not produce any output if there isn't any change in the video.

  14. Using ffmpeg to determine exact date and time a particular ...

    I understand there is PTS (presentation time stamp) but that only appears to be a relative time from the start of the first frame in the file. I am hoping there is a way to know (via ffmpeg) the exact date and time that, say, Frame #928,382 was recorded at.

  15. Using ffmpeg to cut out a scene by original timestamp

    With this information, you can easily calculate the values for ffmpeg's -ss and -to parameters: Let b be the stream start timestamp (here 81824.820733) Let s be the desired start timestamp (here 81953.624278) Let e be the desired end timestamp (here 87259.194348) The relative starting point (in seconds) is s - b = 81953.624278 - 81824.820733 ...
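    Carrying that arithmetic through: s - b = 81953.624278 - 81824.820733 = 128.803545 seconds, and e - b = 87259.194348 - 81824.820733 = 5434.373615 seconds, so with placeholder file names the cut might look like: ffmpeg -i input.ts -ss 128.803545 -to 5434.373615 -c copy output.ts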

  16. How to get time stamp of closest keyframe before a given timestamp with ffmpeg

    (Annoyingly, telling ffmpeg to seek exactly to the timestamp of the keyframe seems to make ffmpeg exclude that keyframe from the output, but subtracting 0.5 seconds from the keyframe's actual timestamp does the trick. For bash you need to use bc to evaluate expressions with decimals, but in zsh -ss 00:00:$[$(ffnearest input.mkv 28)-0.5] works.)

  17. FFmpeg extracting current frame time stamp

    av_read_frame() will give you a PTS (Presentation Time Stamp); it is the AVPacket member pts. Perhaps that value can help you decide when to stop reading. ...

  18. How to find the presentation time stamp of a given frame number for FFMPEG decoding

    I am using the C APIs of ffmpeg for some video processing and need to find the presentation time stamp of a given frame number for decoding. ...

  19. Extracting frame timestamp (PTS) and setting it as a file name

    According to the following post, we may use the following command to name the output files with their timestamps in milliseconds: ffmpeg -vsync 0 -i video.mkv -r 1000 -f image2 -frame_pts 1 %d.jpg. -vsync 0 - each frame is passed with its timestamp from the demuxer to the muxer. -r 1000 - set the output framerate to 1000 Hz, for converting the ...

  20. ffmpeg extract frame timestamps from video

    So each frame needs to represent some time frame within the video. Here is my current ffmpeg code to extract the frames: ffmpeg -i inputFile -f image2 -ss mySS -r myR frame-%05d.png. When using the above command, how would I add a timestamp to each frame, so I know, for example, that frame 5 is at 9 s within the video?
