Search Results

Search found 174 results on 7 pages for 'bitrate'.

Page 4/7

  • Why does use of H264 in sender/receiver pipelines introduce just HUGE delay?

    - by Serguey Zefirov
    When I try to create a pipeline that uses H264 to transmit video, I get some enormous delay, up to 10 seconds to transmit video from my machine to... my machine! This is unacceptable for my goals and I'd like to consult StackOverflow about what I (or someone else) am doing wrong. I took the pipelines from the gstrtpbin documentation page and slightly modified them to use Speex: This is the sender pipeline: #!/bin/sh gst-launch -v gstrtpbin name=rtpbin \ v4l2src ! ffmpegcolorspace ! ffenc_h263 ! rtph263ppay ! rtpbin.send_rtp_sink_0 \ rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \ rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \ udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \ pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \ speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \ rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \ rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \ udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1 Receiver pipeline: #!/bin/sh gst-launch -v\ gstrtpbin name=rtpbin \ udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H263-1998" \ port=5000 ! rtpbin.recv_rtp_sink_0 \ rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink \ udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \ rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \ udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \ port=5002 ! rtpbin.recv_rtp_sink_1 \ rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \ udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \ rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false Those pipelines, a combination of H263 and Speex, work fine enough. I snap my fingers near the camera and microphone and then I see movement and hear sound at the same time. Then I changed the pipelines to use H264 along the video path. The sender becomes: #!/bin/sh gst-launch -v gstrtpbin name=rtpbin \ v4l2src ! ffmpegcolorspace ! x264enc bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \ rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \ rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \ udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \ pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \ speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \ rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \ rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \ udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1 And the receiver becomes: #!/bin/sh gst-launch -v\ gstrtpbin name=rtpbin \ udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" \ port=5000 ! rtpbin.recv_rtp_sink_0 \ rtpbin. ! rtph264depay ! ffdec_h264 ! xvimagesink \ udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \ rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \ udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \ port=5002 ! rtpbin.recv_rtp_sink_1 \ rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \ udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \ rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false This is what happens under Ubuntu 10.04. I didn't notice such huge delays on Ubuntu 9.04 - the delays there were in the range of 2-3 seconds, AFAIR.
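
    A frequent explanation for this symptom is that x264enc's defaults buffer many frames for lookahead and B-frames, adding seconds of encoder latency before the first packet even leaves the machine. Below is a minimal sketch of the sender's video branch with low-latency settings; it assumes a GStreamer 0.10 x264enc new enough to expose the tune and speed-preset properties and is not verified against this exact setup:

      #!/bin/sh
      # Hypothetical low-latency variant of the H264 sender's video branch (sketch only).
      # tune=zerolatency disables lookahead/B-frame buffering; speed-preset trades quality for speed.
      # The receiver's gstrtpbin "latency" property (default 200 ms) adds further buffering on top.
      gst-launch -v gstrtpbin name=rtpbin \
        v4l2src ! ffmpegcolorspace ! \
        x264enc bitrate=300 tune=zerolatency speed-preset=ultrafast ! \
        rtph264pay ! rtpbin.send_rtp_sink_0 \
        rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000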

    Read the article

  • Alcatel-Lucent Boosts Broadband Over Copper To 300Mbps

    - by Ratman21
    alphadogg at Slashdot writes "Alcatel-Lucent has come up with a way to move data at 300Mbps over copper lines. So far the results have only been reproduced in a lab environment — real products and services won't be available for at least a year. From the article: 'Researchers at the company's Bell Labs demonstrated the 300Mbps technology over a distance of 400 meters using VDSL2 (Very high bitrate Digital Subscriber Line), according to Stefaan Vanhastel, director of product marketing at Alcatel-Lucent Wireline Networks. The test showed that it can also do 100Mbps over a distance of 1,000 meters, he said. Currently, copper is the most common broadband medium. About 65 percent of subscribers have a broadband connection that's based on DSL, compared to 20 percent for cable and 12 percent for fiber, according to market research company Point Topic. Today, the average advertised DSL speeds for residential users vary between 9.2 Mbps and 1.9Mbps in various parts of the world, Point Topic said.'" Discuss this story at: http://tech.slashdot.org/comments.pl?sid=10/04/21/239243

    Read the article

  • Playing a Song causing WP7 to crash on phone, but not on emulator

    - by Michael Zehnich
    Hi there, I am trying to add a song to a game so that it begins playing and loops continually on Windows Phone 7 via XNA 4.0. On the emulator this works fine; however, when deployed to a phone, it simply gives a black screen before going back to the home screen. Here is the rogue code in question, and commenting this code out makes the app run fine on the phone: // in the constructor fields private Song song; // in the LoadContent() method song = Content.Load<Song>("song"); // in the Update() method if (MediaPlayer.GameHasControl && MediaPlayer.State != MediaState.Playing) { MediaPlayer.Play(song); } The song file itself is 2:53 long, a 2.28 MB .wma file at a 106 kbps bitrate. Again, this works perfectly on the emulator but does not run at all on the phone. Thanks for any help you can provide!

    Read the article

  • Joining Two MKV files in Ubuntu?

    - by Ryan McClure
    I have an opera that I'm ripping to my computer in MKV format with Handbrake. This opera is on two discs. Is there a way to join the resulting MKVs together? They will have the same bitrate, resolution, etc. If I do this, can I keep the chapters from both MKV files organized? And, since I have subtitles in the file (not burnt in), will they still stay intact? I'm not too sure whether this question is off-topic or not. If it is, feel more than free to delete it. :)
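
    For reference, mkvmerge (from the mkvtoolnix package) can append one MKV after another and keeps chapters and soft subtitles; since both discs were ripped with the same settings, the track parameters should match, which appending requires. A minimal sketch, assuming the rips are named disc1.mkv and disc2.mkv:

      # the '+' tells mkvmerge to append the second file after the first instead of multiplexing in parallel
      mkvmerge -o opera-complete.mkv disc1.mkv + disc2.mkv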

    Read the article

  • Converting mp4 to mp3

    - by aki
    I have a video, video.mp4, that I need to convert to mp3 (from the command line - not a GUI). I tried: ffmpeg -i -b 192 video.mp4 video.mp3 with no success. I get the following error: WARNING: library configuration mismatch Seems stream 0 codec frame rate differs from container frame rate: 59.83 (29917/500) -> 59.75 (239/4) WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s Encoder (codec id 86017) not found for output stream #0.0 So I tried lame: lame -h -b 192 video.mp4 video.mp3 I get: Warning: unsupported audio format Am I missing something?
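
    The options are out of order (the input file has to follow -i) and, as the warning says, -b takes bits per second rather than kbits. A minimal sketch of the usual invocation, assuming an ffmpeg build with libmp3lame enabled:

      # -vn drops the video stream; -ab 192k asks for a 192 kbit/s MP3
      ffmpeg -i video.mp4 -vn -acodec libmp3lame -ab 192k video.mp3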

    Read the article

  • Is there anything software-related I can do to make Ubuntu play high-quality MKV files smoothly?

    - by Roy
    I've noticed that beyond, let's say, a 20 MB/s bitrate, a movie lags when played on my laptop. It results in me missing the highest-bitrate scenes of a movie, and sometimes having to compromise by watching the movie at a lower quality. I was wondering if there's anything I can do software-wise to play movies smoothly? At the moment Ubuntu is installed with all the default settings on a 60 GB SSD, and I use VLC of course. I also have two 1 TB HDDs - maybe I can use them as pagefiles? I don't really know a lot about this, so maybe that is irrelevant. I took the laptop battery out since it was dead, but I think this is also irrelevant. I would appreciate a response, even if there's nothing that can be done software-wise :)
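
    Pagefiles on the extra drives won't help here; at these bitrates the bottleneck is usually CPU-bound H.264 decoding. One commonly suggested direction, sketched below on the assumption that mplayer is installed as a test player (the filename is a placeholder), is to allow frame dropping and skip the H.264 loop filter; VLC exposes a similar "Skip the loop filter for H.264 decoding" option under Input & Codecs:

      # trade some decode quality for smooth playback on a weak CPU
      mplayer -framedrop -lavdopts skiploopfilter=all:threads=2 movie.mkv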

    Read the article

  • Using FFMPEG to reliably convert videos to mp4 for iphone/ipod and flash players

    - by Jake Stevenson
    I need to convert videos for use in both a flash player and the iphone/ipod touch. I'm using the following batch script with ffmpeg: @echo off ffmpeg.exe -i %1 -s qvga -acodec libfaac -ar 22050 -ab 128k -vcodec libx264 -threads 0 -f ipod %2 This always outputs an mp4 file, and I can always play it on my PC. The videos also seem to play fine on my iphone 3GS. But with some input files it won't work for older iphone versions (3G and iPod touch). Here's the ffmpeg output from one such file: D:\ffmpeg>encode.bat d:\temp\recording.flv d:\temp\out.m4v FFmpeg version SVN-r18709, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --enable-memalign-hack --prefix=/mingw --cross-prefix=i686-ming w32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32 --arch=i686 --cpu=i686 --e nable-avisynth --enable-gpl --enable-zlib --enable-bzlib --enable-libgsm --enabl e-libfaac --enable-libfaad --enable-pthreads --enable-libvorbis --enable-libtheo ra --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libxvid - -enable-libschroedinger --enable-libx264 libavutil 50. 3. 0 / 50. 3. 0 libavcodec 52.27. 0 / 52.27. 0 libavformat 52.32. 0 / 52.32. 0 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0. 7. 1 / 0. 7. 1 built on Apr 28 2009 04:04:42, gcc: 4.2.4 [flv @ 0x187d650]skipping flv packet: type 18, size 164, flags 0 Input #0, flv, from 'd:\temp\recording.flv': Duration: 00:00:07.17, start: 0.001000, bitrate: N/A Stream #0.0: Video: flv, yuv420p, 320x240, 1k tbr, 1k tbn, 1k tbc Stream #0.1: Audio: nellymoser, 44100 Hz, mono, s16 [libx264 @ 0x13518b0]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE 4.2 [libx264 @ 0x13518b0]profile Baseline, level 4.2 Output #0, ipod, to 'd:\temp\out.m4v': Stream #0.0: Video: libx264, yuv420p, 320x240, q=2-31, 200 kb/s, 1k tbn, 1k tbc Stream #0.1: Audio: libfaac, 22050 Hz, mono, s16, 128 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding frame= 90 fps= 0 q=-1.0 Lsize= 128kB time=6.87 bitrate= 152.4kbits/s video:92kB audio:32kB global headers:1kB muxing overhead 2.620892% [libx264 @ 0x13518b0]slice I:8 Avg QP:29.62 size: 7047 [libx264 @ 0x13518b0]slice P:82 Avg QP:30.83 size: 467 [libx264 @ 0x13518b0]mb I I16..4: 17.9% 0.0% 82.1% [libx264 @ 0x13518b0]mb P I16..4: 0.6% 0.0% 0.0% P16..4: 23.1% 0.0% 0.0% 0.0% 0.0% skip:76.3% [libx264 @ 0x13518b0]final ratefactor: 57.50 [libx264 @ 0x13518b0]SSIM Mean Y:0.9544735 [libx264 @ 0x13518b0]kb/s:8412.6 My suspicion is that it has something to do with the audio encoding. If so, does anyone know how to force it to reencode the audio to the proper format? Any other ideas?
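
    The log shows x264 choosing "profile Baseline, level 4.2", and pre-3GS devices (iPhone 3G, early iPod touch) only accept Baseline profile up to roughly level 3.0, so capping the level is a reasonable first guess - the audio settings look like ordinary low-complexity AAC. A sketch assuming a newer ffmpeg where the profile/level flags exist (option names differ on the 2009 build above):

      # constrain H.264 to Baseline level 3.0 so older iPhone/iPod touch models will play the file
      ffmpeg -i recording.flv -s qvga -vcodec libx264 -profile:v baseline -level 3.0 \
             -acodec libfaac -ar 22050 -ab 128k -threads 0 -f ipod out.m4v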

    Read the article

  • PHP: Class to parse OGG and .ogv files?

    - by Nic Hubbard
    I am looking for a PHP class that can parse ogg and .ogv files so that I can get some of the metadata out of the files, such as comments, bitrate, length, etc. I have found this: http://opensource.grisambre.net/ogg/ but after testing it, it does not seem to parse any of the files that I test it with. Has anyone had luck with an alternative? I would use getID3(), but it does not support ogg video.
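
    If a pure-PHP parser keeps failing, a common fallback is shelling out to a command-line inspector and parsing its output from PHP (e.g. via shell_exec()). A sketch assuming ffprobe (part of ffmpeg) or ogginfo (vorbis-tools) is available on the server; video.ogv is a placeholder filename:

      # container duration/bitrate plus per-stream details
      ffprobe -show_format -show_streams video.ogv
      # Vorbis/Theora comments and stream headers
      ogginfo video.ogv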

    Read the article

  • How do I do the SQL equivalent of "DISTINCT" in CouchDB?

    - by Blaine LaFreniere
    I have a bunch of MP3 metadata in CouchDB. I want to return every album that is in the MP3 metadata, but with no duplicates. A typical document looks like this: { "_id": "005e16a055ba78589695c583fbcdf7e26064df98", "_rev": "2-87aa12c52ee0a406084b09eca6116804", "name": "Fifty-Fifty Clown", "number": 15, "artist": "Cocteau Twins", "bitrate": 320, "album": "Stars and Topsoil: A Collection (1982-1990)", "path": "Cocteau Twins/Stars and Topsoil: A Collection (1982-1990)/15 - Fifty-Fifty Clown.mp3", "year": 0, "genre": "Shoegaze" }
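
    In CouchDB the usual substitute for DISTINCT is a map/reduce view queried with group=true: emit the album as the key, reduce with the built-in _count, and each returned row key is one distinct album. A sketch using curl, assuming a database named music (the design-doc and view names here are made up for illustration):

      # create the view
      curl -X PUT http://localhost:5984/music/_design/albums \
           -H "Content-Type: application/json" \
           -d '{"views": {"distinct": {"map": "function(doc) { if (doc.album) emit(doc.album, null); }", "reduce": "_count"}}}'
      # group=true collapses the output to one row per distinct album (value = number of songs)
      curl 'http://localhost:5984/music/_design/albums/_view/distinct?group=true'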

    Read the article

  • Converting WAV to MP3 on Linux with low bitrates

    - by Olly
    I need to convert WAV files to MP3 files so they can be played on a website. I think that LAME would probably be the best tool. However, the WAV files are low bitrate (around 8 kbit/s, recorded from a phone) and LAME's website states that it is the "best MP3 encoder at mid-high bitrates and at VBR". Is there a better encoder for lower bitrates? If so, can you define "better"?
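
    LAME itself copes with low-bitrate speech reasonably well if the encode matches the source: mono, 8 kHz, and a low average bitrate. A minimal sketch (the exact bitrate target is a guess to tune by ear):

      # -m m = mono, --resample 8 = keep the 8 kHz phone sampling rate, --abr 16 = roughly 16 kbit/s average
      lame -m m --resample 8 --abr 16 -q 2 input.wav output.mp3

    "Better" at these rates usually means a dedicated speech codec (Speex, AMR, later Opus), but those won't come out as an MP3 that a plain web player can use.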

    Read the article

  • Writing a managed wrapper for unmanaged (C++) code - custom types/structs

    - by Bobby
    faacEncConfigurationPtr FAACAPI faacEncGetCurrentConfiguration( faacEncHandle hEncoder); I'm trying to come up with a simple wrapper for this C++ library; I've never done more than very simple p/invoke interop before - like one function call with primitive arguments. So, given the above C++ function, for example, what should I do to deal with the return type, and parameter? FAACAPI is defined as: #define FAACAPI __stdcall faacEncConfigurationPtr is defined: typedef struct faacEncConfiguration { int version; char *name; char *copyright; unsigned int mpegVersion; unsigned long bitRate; unsigned int inputFormat; int shortctl; psymodellist_t *psymodellist; int channel_map[64]; } faacEncConfiguration, *faacEncConfigurationPtr; AFAIK this means that the return type of the function is a reference to this struct? And faacEncHandle is: typedef struct { unsigned int numChannels; unsigned long sampleRate; ... SR_INFO *srInfo; double *sampleBuff[MAX_CHANNELS]; ... double *freqBuff[MAX_CHANNELS]; double *overlapBuff[MAX_CHANNELS]; double *msSpectrum[MAX_CHANNELS]; CoderInfo coderInfo[MAX_CHANNELS]; ChannelInfo channelInfo[MAX_CHANNELS]; PsyInfo psyInfo[MAX_CHANNELS]; GlobalPsyInfo gpsyInfo; faacEncConfiguration config; psymodel_t *psymodel; /* quantizer specific config */ AACQuantCfg aacquantCfg; /* FFT Tables */ FFT_Tables fft_tables; int bitDiff; } faacEncStruct, *faacEncHandle; So within that struct we see a lot of other types... hmm. Essentially, I'm trying to figure out how to deal with these types in my managed wrapper? Do I need to create versions of these types/structs, in C#? Something like this: [StructLayout(LayoutKind.Sequential)] struct faacEncConfiguration { uint useTns; ulong bitRate; ... } If so then can the runtime automatically "map" these objects onto eachother? And, would I have to create these "mapped" types for all the types in these return types/parameter type hierarchies, all the way down until I get to all primitives? I know this is a broad topic, any advice on getting up-to-speed quickly on what I need to learn to make this happen would be very much appreciated! Thanks!

    Read the article

  • Custom WM profile - issues with codec

    - by dominolog
    Hello I create my custom WM encoder profile. The reason I need a custom, non standard WM profile is that I need that the video resolution must be the same as input video stream. I created below profile but after I encode my video and audio with it, the WMP while loading says that the WMV1 codec is not found and prompts me for downloading WM encoder codecs. After installing them, the problem still exists. <profile version="589824" storageformat="1" name="mReplay Hi-End profile; WM Format 9; Audio &amp; Video" description="Streams: 1 audio 1 video"> <streamconfig majortype="{73647561-0000-0010-8000-00AA00389B71}" streamnumber="1" streamname="Audio Stream" inputname="Audio409" bitrate="320008" bufferwindow="-1" reliabletransport="0" decodercomplexity="" rfc1766langid="en-us" > <wmmediatype subtype="{00000161-0000-0010-8000-00AA00389B71}" bfixedsizesamples="1" btemporalcompression="0" lsamplesize="14861"> <waveformatex wFormatTag="353" nChannels="2" nSamplesPerSec="44100" nAvgBytesPerSec="40001" nBlockAlign="14861" wBitsPerSample="16" codecdata="008800000F0035E80000"/> </wmmediatype> </streamconfig> <streamconfig majortype="{73646976-0000-0010-8000-00AA00389B71}" streamnumber="2" streamname="Video Stream" inputname="Video409" bitrate="100000" bufferwindow="-1" reliabletransport="0" decodercomplexity="AU" rfc1766langid="en-us" vbrenabled="1" vbrquality="95" bitratemax="0" bufferwindowmax="0"> <videomediaprops maxkeyframespacing="80000000" quality="100"/> <wmmediatype subtype="{31564D57-0000-0010-8000-00AA00389B71}" bfixedsizesamples="0" btemporalcompression="1" lsamplesize="0"> <videoinfoheader dwbitrate="100000" dwbiterrorrate="0" avgtimeperframe="400000"> <rcsource left="0" top="0" right="0" bottom="0"/> <rctarget left="0" top="0" right="0" bottom="0"/> <bitmapinfoheader biwidth="0" biheight="0" biplanes="1" bibitcount="24" bicompression="WMV1" bisizeimage="0" bixpelspermeter="0" biypelspermeter="0" biclrused="0" biclrimportant="0"/> </videoinfoheader> </wmmediatype> </streamconfig> <streamprioritization> <stream number="1" mandatory="0"/> <stream number="2" mandatory="0"/> </streamprioritization> </profile>

    Read the article

  • How to map frame number to time in ffmpeg output

    - by jabberbuzz
    Is it possible to get the ffmpeg output in multiple lines? I am trying to map the frame number to the time, but ffmpeg overwrites the output on the same line. frame= 638 fps=160 q=0.0 Lsize= -0kB time=26.61 bitrate= -0.0kbits/s Or is there a quicker way to get the time on the video timeline given the frame number?
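
    With a constant frame rate the mapping is simply time = frame / fps, and the frame rate can be read with ffprobe instead of scraped from the progress line. A small sketch - 23.976 fps is inferred here only because 638 / 23.976 ≈ 26.61 s, which matches the progress line above:

      # r_frame_rate in the stream section gives the frame rate as a fraction
      ffprobe -show_streams input.mp4 2>/dev/null | grep r_frame_rate
      # convert frame 638 to seconds at 23.976 fps
      echo "scale=3; 638 / 23.976" | bc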

    Read the article

  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary H.264 video seems to have a really high frame rate that requires a scaling factor to the applied to the duration of video that I'm trying to extract (900x lower). Body I'm trying to extract a clip from a movie that I have in MP4 format (created using Handbrake). After trying mencoder and VLC, I decided to give FFmpeg a shot since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc, I'm just trying to learn how all this works). Anyway, my command was as follows: ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \ -acodec copy -vcodec copy clip.mp4 During the copy, The following comes up: Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from outofsight.mp4': Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16 Output #0, mp4, to 'out.mp4': Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16 Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding frame= 2591 fps=2349 q=-1.0 size= 8144kB time=101.60 bitrate= 656.7kbits/s … Instead of a 5:59 duration clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15:00 minute clip. So I did some black box engineering and decided to scale my -t option by calculating what value to enter given that 1 second was interpreted as 900 s. For my desired 359 s clip, I calculated 0.399 s and so my ffmpeg command became: ffmpeg -ss 01:15.51 -t 00:00:00.399 -i outofsight.mp4 \ -acodec copy -vcodec copy clip.mp4 This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line: Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1) 45000/25 = 1800. Must be a relation somewhere. Somehow, the obscenely high frame rate is causing issues with the timing. How is that frame rate so high? The best part about this is that the resulting clip.mp4 has the exact same feature (due to the copied video codec), and taking further clips from this needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out. Appendix The preamble for ffmpeg on my system (built using MacPorts ffmpeg port): FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64 libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 0 / 52.20. 0 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libavfilter 1. 4. 0 / 1. 4. 0 libswscale 1. 7. 1 / 1. 7. 1 libpostproc 51. 2. 0 / 51. 2. 0 built on Jan 4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. 
build 5646) (dot 1) EDIT Not sure whether it was a bug or not, but it seems to be fixed now in my current version of ffmpeg, at least for this video (version 0.6.1 from MacPorts).
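
    For anyone landing here with an older build, one hedged workaround before upgrading is to keep -ss in front of the input but move -t after it so it is applied as an output option; a sketch, not verified against this particular file:

      # seek to 1:15:51, then copy 5 minutes 59 seconds of both streams without re-encoding
      ffmpeg -ss 01:15:51 -i outofsight.mp4 -t 00:05:59 -acodec copy -vcodec copy clip.mp4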

    Read the article

  • Make exact mp4 (H264) format for uploading to youtube

    - by WHITECOLOR
    With ffmpeg I'm converting an mp3 and a picture into a video to upload to youtube. After upload, the conversion fails. The reasons are unknown. I believe the problem is the format. By the way, if I upload a file 5 minutes in length it fails; if I upload 30 seconds of this file it succeeds. I have downloaded an mp4 file from youtube. When I upload it, it is done very fast. So a nice solution would be to convert videos to the same format that google produces internally. I got the following output from ffmpeg: ffmpeg version N-44264-g070b0e1 Copyright (c) 2000-2012 the FFmpeg developers built on Sep 7 2012 17:38:57 with gcc 4.7.1 (GCC) configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 51. 72.100 / 51. 72.100 libavcodec 54. 55.100 / 54. 55.100 libavformat 54. 25.105 / 54. 25.105 libavdevice 54. 2.100 / 54. 2.100 libavfilter 3. 16.100 / 3. 16.100 libswscale 2. 1.101 / 2. 1.101 libswresample 0. 15.100 / 0. 15.100 libpostproc 52. 0.100 / 52. 0.100 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'youtubetrack0.mp4': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: isommp42 creation_time : 2012-10-02 22:58:57 Duration: 00:06:46.66, start: 0.000000, bitrate: 176 kb/s Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 450x360, 78 kb/s, 6 fps, 6 tbr, 12 tbn, 12 tbc Metadata: creation_time : 1970-01-01 00:00:00 handler_name : VideoHandler Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 95 kb/s Metadata: creation_time : 2012-10-02 22:58:57 handler_name : IsoMedia File Produced by Google, 5-11-2011 Is it possible to construct ffmpeg parameters that would give the same format google uses internally? Is the information above sufficient? I couldn't construct the needed params. For example, I don't understand how to set tbn, or what 95 kb/s means in "Stream #0:1(und): Audio:". Now I just do: ffmpeg -i videoimage.jpg -i audio.mp3 video.mp4 The info I've got: ffmpeg version N-44998-gdf82454 Copyright (c) 2000-2012 the FFmpeg developers built on Oct 2 2012 23:03:12 with gcc 4.7.1 (GCC) configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 51. 73.101 / 51. 73.101 libavcodec 54. 63.100 / 54. 63.100 libavformat 54. 29.105 / 54. 29.105 libavdevice 54. 3.100 / 54. 3.100 libavfilter 3. 19.102 / 3. 19.102 libswscale 2. 1.101 / 2. 1.101 libswresample 0. 16.100 / 0. 16.100 libpostproc 52. 1.100 / 52. 1.100 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 encoder : Lavf54.25.105 Duration: 00:06:46.81, start: 0.000000, bitrate: 129 kb/s Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuvj420p, 450x360, 3392 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc Metadata: handler_name : VideoHandler Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 127 kb/s Metadata: handler_name : SoundHandler This video fails the conversion on youtube. I also tried other vcodec params and output file extensions (mp4, wmv, avi) but those failed too. I would be grateful for help.
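
    Rather than cloning Google's internal settings exactly, the widely circulated picture-plus-audio recipe already produces an H.264/AAC MP4 that YouTube normally accepts. A sketch assuming the 2012-era build above (which includes libx264 and libvo-aacenc); swap -c:a for whatever AAC encoder your build ships:

      # loop the still image for the length of the audio, encode H.264 in yuv420p, AAC audio,
      # and stop at the shorter input (-shortest) so the video matches the mp3's duration
      ffmpeg -loop 1 -i videoimage.jpg -i audio.mp3 \
             -c:v libx264 -tune stillimage -pix_fmt yuv420p \
             -c:a libvo_aacenc -b:a 128k \
             -shortest video.mp4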

    Read the article

  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.) But why exactly is this the case? Is it arbitrary or is there something more at work? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable). Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360? The Answer SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process: Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing. First of all, why are the vertical resolutions on youtube multiples of 360. This is of course just arbitrary, there is no real reason this is the case. The reason is that resolution here is not the limiting factor for Youtube videos – bandwidth is. Youtube has to re-encode every video that is uploaded a couple of times, and tries to use as little re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher speed internet. For a while there were some other codecs than h.264, but these are slowly being phased out with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this. Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution. So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates, they’d rather use their bandwidth for other stuff. This has been researched quite a long time ago and is the big reason why we went from 720×576 (415kpix) to 1280×720 (922kpix), and then again from 1280×720 to 1920×1080 (2MP). Stuff in between is not a viable optimization target. And again, 1440P is about 3.7MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that. 
Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays, back in the day they used to be multiples of 128. This is something that just grew out of LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’. Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s. However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen. But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9 and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here. 
There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • unknown codec 0x0064 (G726 ADPCM)

    - by Colin Pickard
    I've got a number of audio recordings which I can't play back. It appears to be a case of a missing codec, but I can't locate the codec in question. VLC will not play the files. ffmpeg gives the error Unsupported codec (id=0) for input stream 0. This is the output from MediaInfo. General CompleteName : Format : Wave FileSize/String : 164 KiB Duration/String : 1mn 24s OverallBitRate/String : 16.0 Kbps Audio Format : ADPCM CodecID : 64 CodecID/Info : G.726 CodecID/Hint : APICOM Duration/String : 1mn 24s BitRate_Mode/String : Constant BitRate/String : 16.0 Kbps Channel(s)/String : 1 channel SamplingRate/String : 8 000 Hz BitDepth/String : 2 bits StreamSize/String : 164 KiB (100%) Can anyone help?

    Read the article

  • execute a command in all subdirectories bash

    - by Luigi R. Viggiano
    I have a directory structure composed of: iTunes/Music/${author}/${album}/${song.mp3} I implemented a script to strip my mp3 bitrate to 128 kbps using lame (which works on a single file at a time). My script, 'normalize_mp3.sh', looks like this: #!/bin/bash SAVEIFS=$IFS IFS=$(echo -en "\n\b") for f in *.mp3 do lame --cbr $f __out.mp3 mv __out.mp3 $f done IFS=$SAVEIFS This works fine if I go folder by folder and execute this command. But I'd like to have a "global" command, like in 4DOS, so I can run: $ cd iTunes/Music $ global normalize_mp3.sh and the global command would traverse all subdirs and execute normalize_mp3.sh to strip all my mp3s in all subfolders. Does anyone know if there is a Unix equivalent to the 4DOS global command? I tried to play with find -exec but I just managed to get a headache.
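
    A portable stand-in for the 4DOS global command is find piped into a null-delimited read loop, which also removes the need for the IFS juggling and survives spaces in iTunes folder names. A minimal sketch reusing the same lame invocation; run it from iTunes/Music:

      #!/bin/bash
      # re-encode every mp3 under the current directory to 128 kbps CBR, in place
      find . -type f -name '*.mp3' -print0 |
      while IFS= read -r -d '' f; do
          lame --cbr -b 128 "$f" "$f.tmp.mp3" && mv "$f.tmp.mp3" "$f"
      done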

    Read the article

  • Convert audio file to FLAC with ffmpeg?

    - by elpsk
    Can I convert one of these formats to a compatible 16000.0 sample rate FLAC file? kAudioFormatLinearPCM = 'lpcm', kAudioFormatAppleIMA4 = 'ima4', kAudioFormatMPEG4AAC = 'aac ', kAudioFormatMACE3 = 'MAC3', kAudioFormatMACE6 = 'MAC6', kAudioFormatULaw = 'ulaw', kAudioFormatALaw = 'alaw', kAudioFormatMPEGLayer1 = '.mp1', kAudioFormatMPEGLayer2 = '.mp2', kAudioFormatMPEGLayer3 = '.mp3', kAudioFormatAppleLossless = 'alac' I tried using ffmpeg: ffmpeg -i audio.xxx -acodec flac audio.flac but the result is: FFmpeg version CVS, Copyright (c) 2000-2004 Fabrice Bellard Mac OSX universal build for ffmpegX configuration: --enable-memalign-hack --enable-mp3lame --enable-gpl --disable-vhook --disable-ffplay --disable-ffserver --enable-a52 --enable-xvid --enable-faac --enable-faad --enable-amr_nb --enable-amr_wb --enable-pthreads --enable-x264 libavutil version: 49.0.0 libavcodec version: 51.9.0 libavformat version: 50.4.0 built on Apr 15 2006 04:58:19, gcc: 4.0.1 (Apple Computer, Inc. build 5250) Input #0, wsaud, from 'audio.alac': Duration: 00:00:03.8, start: 0.000000, bitrate: 199 kb/s Stream #0.0: Audio: adpcm_ima_ws, 24931 Hz, stereo, 199 kb/s Unable to find a suitable output format for 'audio.flac' I also installed the FLAC codec for Mac, but nothing... I also tried convtoflac.sh (from http://legroom.net/software/convtoflac) but the result is similar. Any idea how to convert to FLAC?
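
    The 2006 ffmpegX build above predates ffmpeg's FLAC encoder (and it misdetects the input as wsaud), which is why it gives up on the output format. With a current ffmpeg the one-liner is usually enough; a sketch, where audio.m4a stands in for whichever source file you have, plus a two-step fallback through WAV for the reference flac tool:

      # direct: resample to 16 kHz and encode FLAC (needs an ffmpeg built with the flac encoder)
      ffmpeg -i audio.m4a -ar 16000 -acodec flac audio.flac
      # fallback: decode to WAV first, then encode with the flac command-line tool
      ffmpeg -i audio.m4a -ar 16000 tmp.wav && flac -o audio.flac tmp.wav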

    Read the article

  • Realtek HD Audio playing weird with certain video formats

    - by dyasny
    Hi, I have a Gigabyte motherboard with an onboard Realtek HD sound card. The card works perfectly everywhere except with a single video format, where the voice is distorted and sounds as if it's been passed through a metal tube. I've been googling for this, but couldn't find an answer anywhere. The movie plays fine on other systems (I've got Linux everywhere else), but on this one (winXP-x64-sp2) it just doesn't. Here are some details: MPC: Type: KLCP WMV File Audio: 0x000a 22050Hz mono 20Kbps [Raw Audio 0] Video: Windows Media Video 9 400x300 29.97fps 227Kbps [Raw Video 1] VLC: Codec: wmas Sample rate: 22050 Bits per sample: 16 Bitrate: 20kb/s

    Read the article

  • How can I unlock a file without closing the file handle?

    - by netvope
    When I'm downloading a video file with Google Chrome, the file is locked and cannot be opened by media players. I want to be able to play the file while it's being downloaded. The video bitrate is lower than my Internet bandwidth so it wouldn't be a problem. Popular unlockers work by closing file handles, which would interrupt the download. How can I remove the read-lock without closing the file handle? Can I replace the read-lock with a write-lock?

    Read the article

  • 1080 HD video display - Windows 7

    - by blasteralfred
    I am running Windows 7 on my Dell Studio 1555 laptop, which has an ATI Mobility Radeon HD 4570 graphics card inside. When I try to watch a video, I can see only some disturbed texture. I can hear the audio with no problem. I can see the video frames peeking through sometimes, but playback is pretty slow. I have DirectX (directx_feb2010_redist) and Win7 codecs installed. Here are some screenshots. The video details are: Name: video.mp4 Frame width: 1920 Frame height: 1080 Frame rate: 29 fps Data rate: 7958 kbps Total bitrate: 8052 kbps Just comment if you want more info.

    Read the article
