ffmpeg not finding vcodec libx264

I've installed the latest ffmpeg, but it seems unable to locate the video codecs. Do I need to completely remove ffmpeg and re-run ./configure differently for ffmpeg to find the video codecs? Here's my current configuration: FFmpeg version git-f61cbc2, Copyright (c) 2000-2011 the FFmpeg developers built on Jan 18 2011 10:59:49 with gcc 4.0.1 (Apple Inc. build 5465) configuration: --enable-libmp3lame --enable-shared --disable-mmx --arch=x86_64 libavutil 50.36. 0 / 50.36.

Add SRT subtitle to video with ffmpeg

I use ffmpeg to encode and add a subtitle to a video with the following command: $ ffmpeg -i hifi.avi -i hifi.srt -acodec libfaac -ar 48000 -ab 128k -ac 2 -vcodec libx264 -vpre ipod640 -s 480x240 -b 256k -scodec copy hifi.m4v -newsubtitle Here is the output: ffmpeg version 0.8.git, Copyright (c) 2000-2011 the FFmpeg developers built on Aug 4 2011 11:11:39 with gcc 4.5.2 configuration: --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/lib --disable-shared --enable-static --enable-gpl

Asking ffmpeg to extract frames at the original frame rate

In the FFmpeg documentation (here and here) I read that, by default, FFmpeg extracts frames at 25 frames per second (otherwise you can specify a frame rate with the -r option). My problem is that I have a folder with dozens of videos, each of them recorded at a different frame rate, so my question is: is there a way to ask FFmpeg to extract frames from a video at the "native" frame rate (i.e. the original frame rate at which the video was recorded)? In case it matters, I am working wi
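If I'm reading the docs right, the 25 fps default only applies to inputs that carry no frame rate of their own (such as image sequences); for a normal video file, simply omitting -r makes ffmpeg write one image per decoded frame, i.e. at the native rate. A minimal sketch (filenames are placeholders):

```shell
mkdir -p frames
# No -r option: one image is written per decoded frame,
# so extraction follows the source frame rate.
ffmpeg -i input.mp4 frames/frame_%05d.png
```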

FFMPEG Oddities when decoding

Here's my decoding code: http://pastebin.ca/2120920 I can successfully decode H264 videos and some AVIs. My problem is that it's not very stable: it doesn't decode most formats, yet it works with others. Is this a problem with damaged headers, or maybe missing information in the video data? How can I make this decoder code more effective?

Delete a part from ogg file with ffmpeg

I use this command to crop audio files: ffmpeg -ss 50 -i "input.ogg" -acodec copy -y -t 100 "output.ogg" This works fine. But now, I'd like to delete a section from an audio file - preferably without recompressing it. Example: input.ogg has duration 60sec, delete section [10s:20s] => output.ogg has then duration 50sec and includes section [0:10s] and [20s:60s] from input.ogg. Is it possible with one command or do I have to split the file into two and then join it back?
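One hedged approach, building on the cropping command above: cut the two keep-regions with -c copy, then rejoin them with the concat demuxer, so nothing is recompressed (filenames are placeholders; stream-copy cuts may snap to packet boundaries, so the edit points are approximate):

```shell
# Keep [0,10s) and [20s,end) without re-encoding.
ffmpeg -i input.ogg -t 10 -acodec copy part1.ogg
ffmpeg -ss 20 -i input.ogg -acodec copy part2.ogg

# Splice the two parts back together, still stream-copying.
printf "file '%s'\n" part1.ogg part2.ogg > list.txt
ffmpeg -f concat -safe 0 -i list.txt -acodec copy output.ogg
```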

FFMPEG: FLV header for AVPacket

I use the FFmpeg libraries in my app, where I need to produce FLV packets for my program. For this I use avcodec_encode_video2(). My problem is that the function creates an AVPacket that does not carry the full FLV format, only its body, but I still need its header. Usually another function (av_write_frame()) adds it. I cannot use av_write_frame() in my app because it does not fit my requirements. So maybe anybody knows a function in the ffmpeg library which could add the FLV header to the packets created by av

ffmpeg & png watermark issue

I tried to create a watermark (using a png image) on a video like this: ffmpeg -i test.wmv -b:a 300k -ar 22050 -t 10 -f flv -s 352x288 -vf "movie = watermark_logo352.png [watermark]; [in][watermark] overlay =0:0 [out]" out.flv but I get the error: ffmpeg version 0.10.4 Copyright (c) 2000-2012 the FFmpeg developers built on Jun 14 2012 13:14:31 with gcc 4.4.5 configuration: --prefix=/home/username --enable-cross-compile --enable-shared --arch=amd64 --target-os=linux --disable-yasm --enable-

FFMPEG how to mux MJPEG encoded data into mp4 or avi container c++

I'm looking for a way to mux MJPEG (compressed) video data into a video container like mp4 or avi. (I'll also need to add audio in the future.) Since I use FFMPEG in other parts of my project as well, I'd like to do it using those libraries if possible. I'm not looking for command-line FFMPEG use! I've tried the muxing example in ffmpeg; with that I can only create a (very large) .mjpeg file with video information. This is not what I am looking for. Examples would be very welcome, but a po

Ffmpeg ffprobe - getting file info from pipe

I've got an ogg file (it was mixed by sox from two audio streams recorded by the PBX Asterisk) and I'm trying to get file information with ffprobe. When I use something like cat %filename%.ogg | ffprobe -i - I get invalid file info (Duration: N/A, wrong bitrate, etc.). When I try ffprobe -i %filename% everything works fine and I get the file info. What could be wrong? The file content?

Ffmpeg What does the "struct AVCodec *codec" in "struct AVCodecContext" represent?

I am using the FFmpeg libraries in C/C++ to develop a media player. This source uses the following code to find the decoder of a video stream in a file: pCodec=avcodec_find_decoder(pCodecCtx->codec_id);, where pCodecCtx is the pointer to the codec context of the video stream and pCodec is a pointer to AVCodec which was initialised to NULL. If we have to explicitly find the decoder, then what is the struct AVCodec *codec found in struct AVCodecContext? This is defined here. Can som

Ffmpeg Muxing with libav

I have a program which is supposed to demux input mpeg-ts, transcode the mpeg2 into h264 and then mux the audio alongside the transcoded video. When I open the resulting muxed file with VLC I neither get audio nor video. Here is the relevant code. My main worker loop is as follows: void *writer_thread(void *thread_ctx) { struct transcoder_ctx_t *ctx = (struct transcoder_ctx_t *) thread_ctx; AVStream *video_stream = NULL, *audio_stream = NULL; AVFormatContext *output_context = ini

ffmpeg - set metatag to .ts file

I have a .mp4 video recorded on an iPhone 4S. This video file contains a 'Rotate - 180' metadata tag. When I convert the .mp4 file to .ts using ffmpeg, I lose the 'Rotate' meta tag. The ffmpeg command that I used is given below. ffmpeg -i input_file.mp4 -vcodec copy -acodec copy -vbsf h264_mp4toannexb output_file.ts Does anyone know how to set 'Rotate' metadata on a .ts file, or any other way to copy all the metadata from the input .mp4 file to the output .ts file? Thank you

Can you put the result of a blackdetect filter in a textfile using ffmpeg?

I'm testing out the "blackdetect" filter in ffmpeg. I want to have the times when the video is black to be read by a script (like actionscript or javascript). I tried: ffmpeg -i video1.mp4 -vf "blackdetect=d=2:pix_th=0.00" -an -f null - And I get a nice result in the ffmpeg log: ffmpeg version N-55644-g68b63a3 Copyright (c) 2000-2013 the FFmpeg developers built on Aug 19 2013 20:32:00 with gcc 4.7.3 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av isyn
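The blackdetect report lines land on stderr, so a script can capture them with plain shell redirection; a minimal sketch:

```shell
# Redirect stderr into stdout and keep only the blackdetect lines.
ffmpeg -i video1.mp4 -vf "blackdetect=d=2:pix_th=0.00" -an -f null - 2>&1 \
  | grep blackdetect > black_times.txt
```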

Ffmpeg Script to make movie from images

Hello, I want a script or a way to make a video from images. I have a folder with a lot of pictures named randomly, like "flowers.jpg", "tree.jpg", etc. I also have an "intro.jpg" photo which I want to add at the start of every video. What I want exactly is to create a video (any format, .avi etc.) for a custom duration with only two photos, like this: intro.jpg (10-20 seconds or however long I want) + tree.jpg (1 hour or however long I want), intro.jpg + flowers.jpg ... and so on. Sorry for being a newbi
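One way this is commonly done is with the concat demuxer, which lets each image carry its own display duration; a sketch with placeholder durations (the last file is repeated because the demuxer ignores the duration of the final entry):

```shell
cat > list.txt <<'EOF'
file 'intro.jpg'
duration 15
file 'tree.jpg'
duration 3600
file 'tree.jpg'
EOF

# fps/format make the result widely playable; adjust as needed.
ffmpeg -f concat -safe 0 -i list.txt \
  -vf "fps=25,format=yuv420p" -c:v libx264 video1.mp4
```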

How to force ffmpeg into non-interactive mode?

Sometimes you want ffmpeg to ask you whether it should overwrite a file. Sometimes it's a script that you would prefer to fail if something is amiss, i.e. it shouldn't rely on stdin to answer a question.
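For reference, both behaviors can be forced explicitly, and -nostdin detaches ffmpeg from the terminal entirely:

```shell
ffmpeg -nostdin -n -i input.mp4 output.mp4   # never overwrite: exit with an error instead
ffmpeg -nostdin -y -i input.mp4 output.mp4   # always overwrite, no prompt
```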

ffmpeg scale, how to crop correctly

I'm using this command to encode videos $transcode = FFMPEG_BINARY.' -loglevel panic -y -i "'.$files['original'].'" -vf scale='.VIDEO_SIZE_X.':'.VIDEO_SIZE_Y.' -vcodec libx264 -profile main -preset slow -r 25 -b '.VIDEO_BITRATE.' -maxrate '.VIDEO_BITRATE.' -bufsize 1000k -threads '.VIDEO_THREADS.' -acodec aac -ar 44100 -f mp4 -strict -2 '.$files['mp4']; where: VIDEO_SIZE_X = 640 and VIDEO_SIZE_Y = 480, VIDEO_BITRATE = 900k it all seems to work fine, but the problem I'm having is that the vi
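If the goal is to fill 640x480 exactly without distortion, one common recipe is to scale so the frame covers the target and then crop the overflow (other options from the original command omitted for brevity; force_original_aspect_ratio needs a reasonably recent ffmpeg):

```shell
ffmpeg -i input.mp4 \
  -vf "scale=640:480:force_original_aspect_ratio=increase,crop=640:480" \
  -c:v libx264 -c:a aac output.mp4
```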

Ffmpeg How to get FFplay to stop looping last audio packet?

I'm trying to get FFplay to simply stop (pause) on the last image when playing through. The default behavior for FFplay appears to use the -loop parameter, which causes the last audio packet to be looped, even though the image appears in a paused state. Is there a way to also stop playing audio at end of file?

GOP structure via FFmpeg

I have two questions regarding the Group-of-Pictures (GOP) in MPEG4 Video (both for MPEG4 Part2 and H.264): How can I extract the GOP structure and size of a video sequence using FFmpeg? I know that the av_get_picture_type_char function of the AVPicture struct yields picture types for each frame, but I wonder if there is a more direct method to obtain the GOP information? How can I detect whether the sequence has open GOPs or closed GOPs, i.e. whether B frames from one GOP are allowed to refer
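For the first question, ffprobe can dump the picture type and keyframe flag of every frame, from which GOP sizes and boundaries can be reconstructed; a sketch:

```shell
# One line per frame: key_frame (1 marks a GOP start) and pict_type (I/P/B).
ffprobe -v error -select_streams v:0 \
  -show_entries frame=key_frame,pict_type -of csv input.mp4
```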

ffmpeg Invalid data found when processing input h264 to h265

I want to convert video files from h264 to h265. The command I use worked for many files so far, but now I get an error for some files: # ffmpeg -i rst.mkv -vcodec hevc -x265-params crf=28 -sn -acodec copy -map 0 out.mkv ffmpeg version 2.8.6 Copyright (c) 2000-2016 the FFmpeg developers built with gcc 4.8.5 (Gentoo 4.8.5 p1.3, pie-0.6.2) configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --cc=x86_64-pc-linux-gnu-gcc --cxx=x86_6
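Two hedged things worth trying: first check whether the input itself decodes cleanly (the error may come from the source file rather than the encoder), and spell out the libx265 encoder explicitly:

```shell
# Decode-only pass: any errors printed here are in the input itself.
ffmpeg -v error -i rst.mkv -f null -

# Explicit libx265 encode, keeping the original command's intent.
ffmpeg -i rst.mkv -c:v libx265 -crf 28 -sn -acodec copy -map 0 out.mkv
```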

How to run ffmpeg - no exe or bat file

I downloaded ffmpeg 3.1.1 but there is no .exe or .bat file to run from the command line. I searched for answers. Apparently, there should be a \bin folder. I end up with a lot of folders but no such folder and no executable file. All ffmpeg files have a .c extension or other extensions but nothing that can be run from the command line.

Ffmpeg YouTube live says not receiving data

So, I'm using ffmpeg. I can successfully stream videos I've downloaded from the internet to YouTube Live using this command: ffmpeg -re -i "C:\video.flv" -c:v libx264 -preset slow -crf 18 -c:a copy -f flv "rtmp://a.rtmp.youtube.com/live2/xyz" When I try to stream a video that was recorded by a specific device, also flv and with the same command, it's not working. FFmpeg says it's transmitting, no errors there. In the live dashboard on YouTube I get a green "Starting" briefly but th

FFMPEG Recode all audio streams while keeping originals

I am trying to add an additional set of audio tracks to some video files, as part of an automated process. I would like to keep all the original audio tracks and have a second, re-coded copy. What I have been using is: ffmpeg -i file -map 0:v -codec:v copy -map 0:a -codec:a copy -map 0:a:0 -codec:a:0 aac -strict experimental ...(bitrate, filters etc. all with :a:0) -map 0:s -codec:s copy output file However, I can't work out how to change this to handle input files that have multiple audio trac

I want live stream 1 folder on youtube by ffmpeg

I have a command to live-stream a single video, but I don't know how to live-stream a folder containing several videos. ffmpeg -re -stream_loop -1 -i "1.mp4" -vcodec libx264 -preset veryfast -maxrate 2000k -bufsize 1000k -vf "scale=1280:720,format=yuv420p" -g 50 -acodec libmp3lame -b:a 128k -ac 2 -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/*****
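A hedged sketch using the concat demuxer: build a playlist of the folder's files, then loop that playlist instead of a single file (the RTMP key stays masked as in the original command):

```shell
# One 'file' line per video in the folder.
for f in *.mp4; do printf "file '%s'\n" "$f"; done > list.txt

ffmpeg -re -f concat -safe 0 -stream_loop -1 -i list.txt \
  -vcodec libx264 -preset veryfast -maxrate 2000k -bufsize 1000k \
  -vf "scale=1280:720,format=yuv420p" -g 50 \
  -acodec libmp3lame -b:a 128k -ac 2 -ar 44100 \
  -f flv rtmp://a.rtmp.youtube.com/live2/*****
```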

Ffmpeg H.264 - Identify Access Units of an image

I need to parse an H.264 stream to collect only the NALs needed to form a complete image of a single frame. I'm reading the H.264 standard, but it's confusing and hard to read. I ran some experiments, but they did not work. For example, I extracted an access unit with primary_pic_type == 0 containing only slice_type == 7 (I-slice); it should give me a frame, but when I tried to extract it with ffmpeg, it did not work. But when I append the next access unit, containing only slice_type == 5 (P-slice), it works.

Is it possible to force ffmpeg to use hardware decoding with H.264 input stream?

I am running a raspberry pi 3B with ffmpeg compiled with the --enable-omx-rpi option. I am trying to do frame-capture from a webcam stream (h.264, 1920x1080) to JPG files at 5 frames per second. This operation currently causes the board to show very high CPU utilization and get very hot. For this reason, I am assuming hardware decoder is not being utilized. Is there a way to 1) determine whether ffmpeg is using hardware decoding, and 2) force it to be enabled? EDIT: here's the log: ffmpeg
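As far as I know, --enable-omx-rpi only adds the h264_omx encoder; hardware H.264 decoding on the Pi is exposed by the h264_mmal decoder, which requires a build with --enable-mmal. A sketch for checking and requesting it (the stream URL and capture rate are placeholders):

```shell
# 1) See whether this build has the hardware decoder at all.
ffmpeg -decoders | grep h264

# 2) If h264_mmal is listed, request it explicitly for the input.
ffmpeg -c:v h264_mmal -i "rtsp://camera/stream" -vf fps=5 -q:v 2 frame_%04d.jpg
```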

ffmpeg normalization waveform

I'm creating waveforms for my audio player with this command: ffmpeg -i source.wav -filter_complex "aformat=channel_layouts=mono,showwavespic=s=1280x90:colors=#000000" -frames:v 1 output.png Sometimes the waveform looks bad, like here; for other songs it looks good, like here. The first waveform is tiny. How can I normalize the waveform so it scales to the output image's 90px height?
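One thing that may help, offered as an assumption rather than a known fix: run a normalizing audio filter such as dynaudnorm before showwavespic, so quiet recordings are boosted before the waveform is drawn:

```shell
ffmpeg -i source.wav -filter_complex \
  "aformat=channel_layouts=mono,dynaudnorm,showwavespic=s=1280x90:colors=#000000" \
  -frames:v 1 output.png
```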

Convert one image + one video's audio to video with ffmpeg

I'm trying to convert one video file (flv) and one image file (jpg) into one mp4 video, with the video's audio sample rate increased and the image overlaid on top of the audio. Right now I'm using this command: ffmpeg -i video.flv -loop 1 -i photo.jpg -y -filter:a "asetrate=55786.5" -vcodec png -filter:v "pad=ceil(iw/2)*2:ceil(ih/2)*2" -map 1:0 -map 0:1 -y -tune stillimage finalvideo.mp4 It produces an mp4 that plays just fine in VLC, but will not upload to YouTube, which is what I eventuall

FFMPEG merge audio tracks into one and encode using NVENC

I often shoot films with several audio inputs, resulting in video files with multiple audio tracks supposed to be played all together at the same time. I usually go through editing those files and there I do whatever I want with those files, but sometimes I would also like to just send the files right away online without editing, in which case I would enjoy FFMPEG's fast & simple & quality encoding. But here's the catch: most online video streaming services don't support multiple audio

Record stream using SDP file & ffmpeg

I have a stream created using ffmpeg using the following command: ffmpeg -re -thread_queue_size 4 -i video.mp4 -strict -2 -vcodec copy -an -f rtp rtp://127.0.0.1:51372 -sdp_file test.sdp This creates a .sdp file while streaming the local video file over RTP. The SDP file: v=0 o=- 0 0 IN IP4 127.0.0.1 s=Serenity - HD DVD Trailer c=IN IP4 127.0.0.1 t=0 0 a=tool:libavformat 58.29.100 m=video 51372 RTP/AVP 96 b=AS:4674 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=1; sprop-parameter-sets=
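To then record on the receiving side from that SDP file, something like the following usually works (recent ffmpeg requires whitelisting the protocols the SDP pulls in):

```shell
ffmpeg -protocol_whitelist file,udp,rtp -i test.sdp -c copy recorded.mp4
```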

FFMPEG Input area failure

I'm trying to convert an input clip to a fixed, padded output and applying an overlay on it. This command works fine for resizing: ffmpeg -y -i clip.mp4 -vf scale="min(iw*375/ih\,500):min(375\,ih*500/iw),pad=500:375:(500-iw)/2:(375-ih)/2" output.mp4 I created the following one to make sure the overlay would be created: ffmpeg -y -i clip.mp4 -i clip_overlay.png -strict -2 -filter_complex "[0]scale=min(iw*375/ih\,500):min(375\,ih*500/iw),pad=500:375:(500-iw)/2:(375-ih)/2[v];[v][1]overlay=x=

ffmpeg: concatenating mp4 files. Video freezes for few seconds on first frame on output mp4

I created several MP4 files using ffmpeg. All of the videos have the same settings and codec; the only differences are frames per second and duration. I then concatenated the videos using the command below. ffmpeg -f concat myList.txt -c copy output.mp4 I notice that when opening output.mp4 in Windows Media Player, it stops/freezes on the first frame of the video for about three or four seconds and then starts playing; the rest of the video has the correct fps and runs smoothly. Has anyone encou
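For reference, the concat demuxer invocation needs -i before the list file, and regenerating timestamps sometimes cures a frozen first frame; a hedged sketch:

```shell
ffmpeg -fflags +genpts -f concat -safe 0 -i myList.txt -c copy output.mp4
```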

How FFmpeg CRF works

How does FFmpeg's CRF mode work? How does it decide the best quality for a specific second? Would I get better results if I split the file into slices by the second, encoded each slice separately with -crf, and then joined all the slices, or would I get the same or worse results, and why?

Ffmpeg How to extract Geolocation metadata from an Apple Live Photo?

I'm trying to organize my photo collection and convert .mov files to .jpeg files while retaining all of the metadata that has been stored. I'm running into a problem with Apple's "Live Photos" though... I recently downloaded all of the photos from my iCloud account and found that many have been stored as .mov files as "Live Photos". As I only want to include photos in this collection, I'd like to convert all of these .mov files to .jpg files. So... I'm trying to use python and shell comma

How to concat 2 aac files using ffmpeg with delay between?

I am trying to concat 2 aac files: ffmpeg -i 2.aac -i 3.aac -filter_complex "[0]asetpts=0;[1]asetpts=8000; concat=n=2:a=0:a=1 [aout]" -map "[aout]" out.aac with the first at 0 seconds and the second starting at 8 seconds. I got the following error: Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_concat_2 What am I doing wrong?
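Two issues stand out: the concat filter's inputs and outputs must be labeled, and the option should be v=0:a=1 rather than a=0:a=1. To start the second clip at the 8-second mark, one hedged approach is to pad the first clip with silence up to 8 seconds (apad's whole_dur option needs a reasonably recent ffmpeg):

```shell
ffmpeg -i 2.aac -i 3.aac -filter_complex \
  "[0:a]apad=whole_dur=8[a0];[a0][1:a]concat=n=2:v=0:a=1[aout]" \
  -map "[aout]" out.aac
```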

How can I concatenate multiple MP4 videos with FFMPEG without audio sync issues?

My procedure is as follows: convert the videos to 1920x1080 at 60 FPS (some videos had only 30 FPS) save the converted videos in a text file merging the video by an FFMPEG concat After the videos are merged, the audio is out of sync with the video. To convert the videos I use the following command: ffmpeg -i input.mp4 -vf scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:-1:-1,setsar=1 -r 60 output.mp4 (got it from here: How can I upscale videos with FFmpeg to a fixed resoluti

FFMPEG Downgrade Only If 4k Video and Keep Aspect Ratio

Is there a way to downgrade video resolutions if and only if they are above a certain resolution? For example, right now I am doing this: ffmpeg -i 4k_VIdeo.MP4 -vf scale=1920:1080 -c:v libx264 -crf 35 1080-video-smaller.mp4 But if the video is 720:600 or a smaller resolution, I don't want to expand it to 1920. Also, if the video does not have a 1920:1080 aspect ratio, I want to keep the original aspect ratio so it doesn't look distorted. Is there a way of doing this?
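A common idiom for this: clamp the output width to min(iw,1920) so the video is never upscaled, and let -2 derive an even height that preserves the aspect ratio; a sketch:

```shell
# Width never exceeds 1920; smaller videos pass through unchanged.
# -2 keeps the aspect ratio and rounds height to an even value for libx264.
ffmpeg -i 4k_VIdeo.MP4 -vf "scale='min(iw,1920)':-2" \
  -c:v libx264 -crf 35 1080-video-smaller.mp4
```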

FFmpeg Batch .mov > .gif conversion

I am trying to batch convert a folder of .mov's into .gif's. Input .mov's are 1920x1080 resolution and I would like to convert to 720x480 (to save file size). I have the following code, but not sure how to add the -vf scale=720 into this code: for i in *.mov; do ffmpeg -ss 1 -i "$i" "${i%.*}.gif"; done The above code works, just running it through terminal. Any help on adding the scale or any other optimizations to reduce file size would be greatly appreciated. Thanks
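A hedged sketch of the loop with scaling added: scale=720:-1 preserves the aspect ratio (720x405 from 1080p; use scale=720:480 only if stretching to exactly 720x480 is acceptable), and lowering the frame rate with fps also shrinks GIFs considerably:

```shell
for i in *.mov; do
  ffmpeg -ss 1 -i "$i" -vf "fps=12,scale=720:-1" "${i%.*}.gif"
done
```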

Cutting multiple videos with x amount of time from the end (ffmpeg?)

I have no history with ffmpeg, but I am assuming this would be the right tool for the job. I am trying to cut a folder of videos with different lengths. I want to cut them all down to the last 12 seconds. That is, for a 30-second video I would be left with 00:18 - 00:30; 00:00-00:17 would be deleted. I am on macOS Mojave. It seems that ffmpeg is the right tool for the job to batch edit these videos. Can someone walk me through this? I have some basic understanding but will need the code/script
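ffmpeg's -sseof option seeks relative to the end of the input, which fits this exactly; a sketch for a whole folder (with -c copy the cut lands on the nearest keyframe, so re-encode instead if frame accuracy matters):

```shell
for f in *.mp4; do
  # -sseof -12: start 12 seconds before the end of the file.
  ffmpeg -sseof -12 -i "$f" -c copy "trimmed_$f"
done
```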

How to continuously output screenshots while also outputting hls with ffmpeg

So I have the following process of ingesting rtsp and output hls. ffmpeg -fflags nobuffer \ -rtsp_transport udp \ -i rtsp://<source>/ \ -vsync 0 \ -copyts \ -vcodec copy \ -movflags frag_keyframe+empty_moov \ -an \ -hls_flags delete_segments \ -f segment \ -segment_list_flags live \ -segment_time 1 \ -segment_list_size 5 \ -segment_format mpegts \ -segment_list streaming.m3u8 \ -segment_list_type m3u8 \ -segment_list_entry_prefix ./ \ %d.ts and I want to also output scr
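One hedged approach is to give the same ffmpeg process a second output: the HLS output keeps stream-copying, while a JPEG output decodes and overwrites a single file once per second. A simplified two-output sketch (not the full original command; the source URL is a placeholder):

```shell
ffmpeg -rtsp_transport udp -i "rtsp://<source>/" \
  -map 0:v -c:v copy -an \
  -f segment -segment_time 1 -segment_format mpegts \
  -segment_list streaming.m3u8 %d.ts \
  -map 0:v -vf fps=1 -update 1 -y latest.jpg
```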

How to migrate atadenoise in ffmpeg to my own project?

This is the entrance to the atadenoise filter: libavfilter/vf_atadenoise.c static int filter_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs) { ... } which is used to call s->dsp.filter_row[p](src, dst, srcf, w, mid, size, thra, thrb, weights); Therefore, the specific noise reduction method is selected according to the configured parameters: static void fweight_row##name(const uint8_t *ssrc, uint8_t *ddst, \ const uint8_t *ssrcf[SIZE]

Combine multiple filter_complex and overlay functions with FFMpeg Commands

I am having trouble combining these 4 passes in ffmpeg into a single process. First exec("$this->ffmpeg -i ".storage_path('/app/public_html/uploads/job.mp4')." -codec:a libmp3lame -b:a 128k -vf scale=".$res_dimension->res_width."x".$res_dimension->res_height.",setsar=1:1 ".storage_path('2.mp4')); Second exec("$this->ffmpeg -i ".storage_path($original_file_path)." -codec:a libmp3lame -b:a 128k -vf scale=".$res_dimension->

How can I visualize the information of frozen signals in a multicast with ffmpeg?

I am trying to detect frozen video signals with ffmpeg. As input I am using a multicast stream, which is where the signals are contained. I am using the following command: ffmpeg -i udp://multicast_address -vf "freezedetect=n=-60dB:d=2" -map 0 -f null - This command tells me when one of the signals is frozen, but not which of those signals. Does anyone know how to solve this issue?

pass ffmpeg options through mlt xml

I'm looking at an MLT XML file that I created with kdenlive and would like to tweak the command line options passed to ffmpeg. If I understand correctly, this is the part that I need to edit: <consumer f="mp4" g="15" channels="2" crf="15" progressive="1" target="thetargetfile.mp3" threads="0" real_time="-3" format_options="-stillimage" mlt_service="avformat" vcodec="libx264" ab="

Ffmpeg and ffprobe not showing subtitles stream in m3u8 file

I'm trying to get an mp4 video from a m3u8 playlist file (in simple HLS), with audio, video and subtitles. I've managed to extract the video and audio stream because they are relatively easy: the input m3u8, according to ffprobe, contains 3 different programs, and the third program (called Program 2) is the one I need, because it contains both the highest quality video and the English highest quality audio. So, what I really am doing is ffmpeg -i "blahblah.m3u8" -c copy -map 0:p:2:v:0

How to debayer bmp image with FFMpeg? (.exe file)

I have a Bayer image (.bmp) that I would like to debayer with FFmpeg. I thought FFmpeg might debayer it by default, so I tried a very simple invocation, ffmpeg -i input.bmp output.png, but output.png looks gray, so ffmpeg doesn't apply debayering automatically. I tried to figure out whether it is possible with ffmpeg, but there is almost nothing about it on Google. Image example (it is too large to upload it here): https://drive.google.com/file/d/1V8HwOuIo9PBX3ix0eKFQFGimskU_H0mN/view?us
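A heavily hedged sketch: ffmpeg won't infer a Bayer layout from a BMP, but it can interpret raw data as Bayer when told the pixel format and size. The header offset, the pattern (bayer_rggb8 is a guess; bggr/gbrg/grbg variants also exist) and the dimensions all must be adjusted to the actual file, and BMP's bottom-up row order may leave the result vertically flipped:

```shell
# Strip the (assumed) 54-byte BMP header to get raw sensor data.
tail -c +55 input.bmp > input.raw

# Interpret the raw bytes as an 8-bit RGGB Bayer mosaic and demosaic to PNG.
ffmpeg -f rawvideo -pixel_format bayer_rggb8 -video_size 1920x1080 \
  -i input.raw output.png
```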

Ffmpeg H264 motion scale

What is the motion scale in H264 motion vectors? Take a look at the following motion vectors that I extracted using pyav (pythonic bindings for libav, roughly the same as ffmpeg): 'source', 'w', 'h', 'src_x', 'src_y', 'dst_x', 'dst_y', 'flags', 'motion_x', 'motion_y', 'motion_scale' -1,8,16,138,8,148,8,0,-40,0,4 -1,8,16,156,8,156,8,0,0,0,4 -1,16,16,168,8,168,8,0,0,0,4 Consider the first line. It is my understanding that by multiplying src_x 138 by motion_scale 4 we get the actual src_x value in
