Search Results

Search found 612 results on 25 pages for 'tao ffmpeg'.

Page 16 of 25

  • Mobile Video Detection

    - by aaroninfidel
    Hi, I'm using DeviceAtlas to detect mobile phones. I was wondering if anyone has good resources on the standard codecs and video dimensions used for mobile, and on how you go about serving video to mobile devices. Thanks! -Aaron

    Read the article

  • Understanding PTS and DTS in video frames

    - by theateist
    I had fps issues when transcoding from avi to mp4 (x264). Eventually the problem turned out to be the PTS and DTS values, so lines 12-15 were added before the call to av_interleaved_write_frame:

      1.  AVFormatContext* outContainer = NULL;
      2.  avformat_alloc_output_context2(&outContainer, NULL, "mp4", "c:\\test.mp4");
      3.  AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
      4.  AVStream *outStream = avformat_new_stream(outContainer, encoder);
      5.  // outStream->codec initiation
      6.  // ...
      7.  avformat_write_header(outContainer, NULL);
      8.  // reading and decoding packet
      9.  // ...
      10. avcodec_encode_video2(outStream->codec, &encodedPacket, decodedFrame, &got_frame);
      11.
      12. if (encodedPacket.pts != AV_NOPTS_VALUE)
      13.     encodedPacket.pts = av_rescale_q(encodedPacket.pts, outStream->codec->time_base, outStream->time_base);
      14. if (encodedPacket.dts != AV_NOPTS_VALUE)
      15.     encodedPacket.dts = av_rescale_q(encodedPacket.dts, outStream->codec->time_base, outStream->time_base);
      16.
      17. av_interleaved_write_frame(outContainer, &encodedPacket);

    After reading many posts I still do not understand:

    - outStream->codec->time_base = 1/25 and outStream->time_base = 1/12800. The first one was set by me, but I cannot figure out why the second is 12800, or who set it. I noticed that before line 7 outStream->time_base = 1/90000, and right after it it changes to 1/12800. Why?
    - When I transcode from avi to avi, i.e. change line 2 to avformat_alloc_output_context2(&outContainer, NULL, "avi", "c:\\test.avi");, outStream->time_base stays 1/25 both before and after line 7, unlike in the mp4 case. Why?
    - What is the difference between the time_base of outStream->codec and that of outStream?
    - To calculate the pts, av_rescale_q takes the two time_bases, cross-multiplies their fractions, and computes the pts. Why does it do it this way? As I debugged it, encodedPacket.pts increments by 1, so why change it if it already has a value?
    - At the beginning the dts value is -2, and after each rescaling it is still negative, yet the video plays correctly. Shouldn't it be positive?
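
    For reference (my arithmetic, not part of the original post): av_rescale_q(a, bq, cq) computes a * bq / cq with exact rational arithmetic, i.e. it re-expresses a count of bq ticks as a count of cq ticks. With the two time bases quoted above:

      \mathrm{pts_{out}} = \mathrm{pts_{in}} \cdot \frac{1/25}{1/12800} = \mathrm{pts_{in}} \cdot \frac{12800}{25} = 512 \cdot \mathrm{pts_{in}}

    So a pts that increments by 1 in 1/25-second codec ticks increments by 512 in 1/12800-second stream ticks; the packet keeps the same position in time, it is just expressed in the container's units, which is why the value is rescaled even though it "already has a value".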

    Read the article

  • Qt SDK 4.6.2 on Mac OS X: invoke FFmpeg?

    - by varunmagical
    Hello, I am writing an FFmpeg frontend in Qt and testing it on Linux, Windows and Mac (FFmpeg is a popular command-line tool for video operations). My project works well on Linux and Windows, but I cannot invoke FFmpeg on Mac! I have compiled it from SVN source on the Mac and have made sure it works properly by running it in the Mac terminal. In my project I have created a widget that shows the FFmpeg output during conversion, but on the Mac it always stays blank. Need help!

    Read the article

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello overflow community! Does anyone know if there is a "best practice" for concatenating mp3 files into new files while preserving the original files? I am working on a CentOS Linux machine, on the command line, and I will eventually call the command line from a PHP script. I have been doing research and have come up with a process that I think could work. It combines general advice from different forums, blogs, and sources like this one. So here I go:

    - Create a temporary folder.
    - Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know; I didn't know "raw" files existed until recently, so I could use some suggestions on this).
    - Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file into the final "processed" directory.
    - Delete the contents of the temporary folder holding the temporary raw files.
    - Convert the final file from "raw" back to mp3 and enjoy the finished result.

    I'm thinking this course of action might be best because I can't necessarily control the quality of the original "source" mp3s. The only other option I could think of would be a script that performs a similar process when files are added to the system, leaving only files in the "proper" format and removing the original "erroneous" file. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path that I could take? By concatenate, I mean to join the files together in sequence to create a new audio file from the "concatenated" files.
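
    Not an answer from the post, just an illustration: a minimal sketch of one way to do the merge with ffmpeg's concat: protocol, which joins MPEG audio streams back to back while re-encoding to a fixed sample rate. It is wrapped in a .NET Process call to match the other snippets on this page; the same command string could equally be passed to PHP's shell_exec. File names and bitrates are made-up placeholders, and the approach behaves best when the inputs already share sample rate and channel layout -- for badly mismatched sources, the decode-to-an-intermediate-format plan described above is still the safer route.

      using System.Diagnostics;

      class Mp3Concat
      {
          static void Main()
          {
              // Hypothetical paths -- replace with the real ones.
              string inputs = "concat:/tmp/part1.mp3|/tmp/part2.mp3";
              string output = "/srv/processed/merged.mp3";

              // Re-encode while concatenating so the output has one sample rate;
              // -y overwrites an existing output file.
              string args = string.Format("-y -i \"{0}\" -ar 44100 -ab 192k \"{1}\"", inputs, output);

              var psi = new ProcessStartInfo("ffmpeg", args)
              {
                  UseShellExecute = false,
                  RedirectStandardError = true // ffmpeg writes its progress to stderr
              };

              using (var p = Process.Start(psi))
              {
                  string log = p.StandardError.ReadToEnd(); // read before waiting so the pipe cannot fill up
                  p.WaitForExit();
              }
          }
      }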

    Read the article

  • Echo cancellation

    - by Jorg B Jorge
    Can any of you suggest a good, stable echo cancellation package (GNU or not) that can be linked with my videoconferencing application (C/C++, Windows/Linux/Mac OS X)? My application should be freeware, so I do not want to pay for each user who downloads the app.

    Read the article

  • Detecting crosses in an image

    - by MrOrdinaire
    I am working on a program to detect the tips of a probing device and analyze the color change during probing. The input/output mechanisms are more or less in place. What I need now is the actual meat of the thing: detecting the tips. In the images below, the tips are at the centers of the crosses. I thought of applying BFS to the images after some thresholding but was then stuck and didn't know how to proceed. I then turned to OpenCV after reading that it offers feature detection in images. However, I am overwhelmed by the vast number of concepts and techniques used there and, again, clueless about how to proceed. Am I looking at this the right way? Can you give me some pointers?

    [Image: frame extracted from a short video]
    [Image: binary version with the threshold set at 95]
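
    Not from the post and not OpenCV, but as a rough illustration of one naive approach on the thresholded image: treat a pixel as a candidate cross centre if a run of foreground pixels passes through it both horizontally and vertically. A sketch in C# using System.Drawing; the arm length and the assumption that the crosses are the dark pixels are both guesses.

      using System.Collections.Generic;
      using System.Drawing;

      class CrossFinder
      {
          // Candidate cross centres: pixels with at least `arm` foreground pixels
          // extending left, right, up and down in the binarised image.
          static List<Point> FindCrosses(Bitmap bmp, int arm)
          {
              var hits = new List<Point>();
              for (int y = arm; y < bmp.Height - arm; y++)
                  for (int x = arm; x < bmp.Width - arm; x++)
                  {
                      if (!IsForeground(bmp, x, y)) continue;
                      bool isCross = true;
                      for (int d = 1; d <= arm && isCross; d++)
                          isCross = IsForeground(bmp, x - d, y) && IsForeground(bmp, x + d, y)
                                 && IsForeground(bmp, x, y - d) && IsForeground(bmp, x, y + d);
                      if (isCross) hits.Add(new Point(x, y));
                  }
              return hits;
          }

          // Assumes the crosses are dark on a light background (threshold 95/255
          // as in the post); flip the comparison if it is the other way around.
          static bool IsForeground(Bitmap bmp, int x, int y)
          {
              return bmp.GetPixel(x, y).GetBrightness() < 95 / 255f;
          }
      }

    Adjacent hits around each centre would still need to be merged into one point, and Bitmap.GetPixel is slow, so this is only a starting point; OpenCV's template matching or corner detectors will be far more robust.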

    Read the article

  • Convert MP3 to AAC, FLAC to AAC (.NET/C#) FREE :)

    - by PearlFactory
    So I was tasked with looking at converting 10 million tracks from 320k MP3 to AAC, and also from 320k MP3 to 128k MP3. After a bit of hunting around, the tool you need is FFmpeg (the x64 Windows build); for the best results also get the Nero AAC encoder. Now the command lines.

    Step 1 (from FLAC):

      ffmpeg -i input.flac -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    or (from MP3):

      ffmpeg -i input.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    The output.m4a is an intermediate file that we now give an AAC wrapper via FFmpeg.

    Step 2:

      ffmpeg -i output.m4a -vn -acodec copy final.aac

    Done :) There are a couple of options with the FFmpeg libraries, in that we could import them and drive the API directly for the same result; FFmpeg supports this, and you can get the relevant libraries (and the source, if you are that keen :-)) from the FFmpeg site. In this case I am going to wrap the command lines in C# external process calls. (For the app I am building to convert the 10 million tracks there is a complex multithreaded app around this code.)

      // Arrange metadata about the call.
      Process myProcess = new Process();
      ProcessStartInfo p = new ProcessStartInfo();
      // The command contains a pipe ("|"), so it has to run through a shell
      // (cmd.exe /C) rather than being passed straight to ffmpeg.exe as arguments.
      string sArgs = string.Format("/C ffmpeg -i \"{0}\" -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of \"{1}\"", inputfile, outputfile);
      p.FileName = "cmd.exe";
      p.CreateNoWindow = true;
      p.RedirectStandardOutput = true;
      //p.WindowStyle = ProcessWindowStyle.Normal;
      p.UseShellExecute = false;
      // Execute.
      p.Arguments = sArgs;
      myProcess.StartInfo = p;
      myProcess.Start();
      // Capture details about the call (read before waiting so the pipe cannot fill up and block).
      string log = myProcess.StandardOutput.ReadToEnd();
      myProcess.WaitForExit();

    We would then execute a second call using the same code but with different sArgs to put the AAC wrapper on the m4a file. That's it. If you need to do conversions of any kind for your ASP.NET sites/apps this is a great start, and super fast: with conversion times of around 2-3 seconds, all of this can be done on the fly :-)

    Justin Oehlmann
    ref: StackOverflow.com

    Read the article

  • grailsApplication access in Grails unit Test

    - by Reza
    I am trying to write unit tests for a service which uses grailsApplication.config for some of its settings. In my unit tests the service instance cannot access the configuration (null pointer) for its settings, while it can access those settings when I run "run-app". How can I configure the service to access the grailsApplication service in my unit tests?

      class MapCloudMediaServerControllerTests {
          def grailsApplication

          @Before
          public void setUp() {
              grailsApplication.config = '''
                  video {
                      location="C:\\tmp\\" // or shared filesystem drive for a cluster
                      yamdi {
                          path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\yamdi"
                      }
                      ffmpeg {
                          fileExtension = "flv" // use flv or mp4
                          conversionArgs = "-b 600k -r 24 -ar 22050 -ab 96k"
                          path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffmpeg"
                          makethumb = "-an -ss 00:00:03 -an -r 2 -vframes 1 -y -f mjpeg"
                      }
                      ffprobe {
                          path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffprobe"
                          params=""
                      }
                      flowplayer {
                          version = "3.1.2"
                      }
                      swfobject {
                          version = ""
                          qtfaststart {
                              path= "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\qtfaststart"
                          }
                      }
                  }
              '''
          }

          @Test
          void testMpegtoFlvConvertor() {
              log.info "In test Mpg to Flv Convertor function!"
              def controller = new MapCloudMediaServerController()
              assert controller != null
              controller.videoService = new VideoService()
              assert controller.videoService != null
              log.info "Is the video service null? ${controller.videoService==null}"
              controller.videoService.grailsApplication = grailsApplication
              log.info "Is grailsApplication null? ${controller.videoService.grailsApplication==null}"
              // Very important part for simulating the HTTP request
              controller.metaClass.request = new MockMultipartHttpServletRequest()
              controller.request.contentType = "video/mpg"
              controller.request.content = new File("..\\MapCloudMediaServer\\web-app\\videoclips\\sample3.mpg").getBytes()
              controller.mpegtoFlvConvertor()
              byte[] videoOut = IOUtils.toByteArray(controller.response.getOutputStream())
              def outputFile = new File("..\\MapCloudMediaServer\\web-app\\videoclips\\testsample3.flv")
              outputFile.append(videoOut)
          }
      }

    Read the article

  • Convert old AVI files to a modern format

    - by iWerner
    Hi, we have a collection of old home videos that were saved in AVI format a long time ago. I want to convert these files to a more modern format, because the Totem Movie Player that comes with Ubuntu 10.04 seems to be the only program capable of playing them. The files seem to be encoded with an MJPEG codec; playing them in VLC or Windows Media Player plays only the sound, with no video. Avidemux was able to open the files, but the quality of the video is severely degraded: the video skips frames and is interlaced (it is not interlaced when played in Totem). Neither ffmpeg nor mencoder seems to be able to read the video stream. mencoder reports that it is using ffmpeg's codec. Here is a section from its output:

      ==========================================================================
      Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
      [mjpeg @ 0x92a7260]mjpeg: using external huffman table
      [mjpeg @ 0x92a7260]mjpeg: error using external huffman table, switching back to internal
      Unsupported PixelFormat -1
      Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG)

    while running ffmpeg produces the following:

      $ ffmpeg -i input.avi output.avi
      FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
        configuration: --extra-version=4:0.5.1-1ubuntu1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
        libavutil     49.15. 0 / 49.15. 0
        libavcodec    52.20. 1 / 52.20. 1
        libavformat   52.31. 0 / 52.31. 0
        libavdevice   52. 1. 0 / 52. 1. 0
        libavfilter    0. 4. 0 /  0. 4. 0
        libswscale     0. 7. 1 /  0. 7. 1
        libpostproc   51. 2. 0 / 51. 2. 0
        built on Mar  4 2010 12:35:30, gcc: 4.4.3
      [avi @ 0x87952c0]non-interleaved AVI
      Input #0, avi, from 'input.avi':
        Duration: 00:00:15.24, start: 0.000000, bitrate: 22447 kb/s
          Stream #0.0: Video: mjpeg, yuvj422p, 720x544, 25 tbr, 25 tbn, 25 tbc
          Stream #0.1: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
      Output #0, avi, to 'output.avi':
          Stream #0.0: Video: mpeg4, yuv420p, 720x544, q=2-31, 200 kb/s, 90k tbn, 25 tbc
          Stream #0.1: Audio: mp2, 44100 Hz, stereo, s16, 64 kb/s
      Stream mapping:
        Stream #0.0 -> #0.0
        Stream #0.1 -> #0.1
      Press [q] to stop encoding
      frame=    0 fps=  0 q=0.0 Lsize=     143kB time=15.23 bitrate=  76.9kbits/s
      video:0kB audio:119kB global headers:0kB muxing overhead 20.101777%

    So the problem is that the output does not contain any video, as evidenced by the video:0kB at the end. In all of the above cases the audio comes out fine. So my question is: what can I do to convert these files to a more modern format with more modern codecs?

    Read the article

  • Is it possible to install ffmpeg and x264 on a Synology Diskstation 209?

    - by Kieran Benton
    Hi, complete Linux novice here! :) I'm trying to get my brilliant DS209 NAS box to do some transcoding for me of a few AVI videos to a format suitable for my Apple iTouch -- yes, I could do it with another machine and Handbrake, but it would be really useful to offload some of this to the NAS to do overnight. I've managed to install ipkg onto my DS209 NAS box and have played around with installing some packages (binutils, mono, bash etc). I've even managed to install ffmpeg from ipkg and put together the correct command-line profile to do the encoding as a .sh file:

      time ffmpeg -y -i $1 -f mp4 -title $2 -vcodec libx264 -level 21 -s 426x320 -b 512k -bt 512k -bufsize 4M -maxrate 4M -g 250 -coder 0 -threads 0 -acodec libfaac -ac 2 -ab 64k $3

    However, running this I get a missing dependency on libx264. I've tried building that from the latest source in git, but I get errors during the make process that I just don't understand (way out of my depth):

      encoder/set.c: In function 'x264_sei_version_write':
      encoder/set.c:491: error: 'X264_VERSION' undeclared (first use in this function)
      encoder/set.c:491: error: (Each undeclared identifier is reported only once
      encoder/set.c:491: error: for each function it appears in.)
      make: *** [encoder/set.o] Error 1

    Can anyone else try building it or give me a pointer as to what I can do to get this going? It's been a good learning experience so far! Thanks.

    Read the article

  • Another sound not working post

    - by Thomas Smart
    Tried all the other "sound not working" posts, I think; lost count: purge/reinstall alsa and pulse, reboot, add user to the audio group, various lines in the alsa config file such as "options snd-hda-intel model=" with different options like generic, auto, basic, default, etc., and pulseaudio -k && sudo alsa force-reload a few times, with and without rebooting.

    Hardware: 16 GB RAM, Core i7-4790, Intel Haswell motherboard with onboard sound and graphics. Multimedia: Audio Adapter: HDA-Intel - HDA Intel HDMI. OS: Ubuntu Server 14.04 with ubuntu-desktop installed. The GUI sound settings list only the dummy sound card.

    alsamixer -c 0 (screen summary): Card: HDA Intel HDMI; Chip: Intel Haswell HDMI; View: Playback; the only item shown is S/PDIF.

      aplay -l
      **** List of PLAYBACK Hardware Devices ****
      card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0]
        Subdevices: 1/1
        Subdevice #0: subdevice #0

      aplay -L
      default
          Playback/recording through the PulseAudio sound server
      null
          Discard all samples (playback) or generate zero samples (capture)
      pulse
          PulseAudio Sound Server
      hdmi:CARD=HDMI,DEV=0
          HDA Intel HDMI, HDMI 0
          HDMI Audio Output
      dmix:CARD=HDMI,DEV=3
          HDA Intel HDMI, HDMI 0
          Direct sample mixing device
      dsnoop:CARD=HDMI,DEV=3
          HDA Intel HDMI, HDMI 0
          Direct sample snooping device
      hw:CARD=HDMI,DEV=3
          HDA Intel HDMI, HDMI 0
          Direct hardware device without any conversions
      plughw:CARD=HDMI,DEV=3
          HDA Intel HDMI, HDMI 0
          Hardware device with all software conversions

      cat /proc/asound/cards
      0 [HDMI           ]: HDA-Intel - HDA Intel HDMI
                           HDA Intel HDMI at 0xf7d14000 irq 46

      cat /proc/asound/devices
        1:        : sequencer
        2: [ 0- 3]: digital audio playback
        3: [ 0- 0]: hardware dependent
        4: [ 0]   : control
       33:        : timer

      mplayer -ao alsa:device=hdmi /usr/share/sounds/ubuntu/stereo/system-ready.ogg
      MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team
      mplayer: could not connect to socket
      mplayer: No such file or directory
      Failed to open LIRC support. You will not be able to use your remote control.
      Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg.
      libavformat version 54.20.4 (external)
      Mismatching header version 54.20.3
      libavformat file format detected.
      [lavf] stream 0: audio (vorbis), -aid 0
      Load subtitles in /usr/share/sounds/ubuntu/stereo/
      ==========================================================================
      Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
      libavcodec version 54.35.0 (external)
      AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400)
      Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis)
      ==========================================================================
      [AO_ALSA] alsa-lib: confmisc.c:768:(parse_card) cannot find card '1'
      [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
      [AO_ALSA] alsa-lib: confmisc.c:392:(snd_func_concat) error evaluating strings
      [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
      [AO_ALSA] alsa-lib: confmisc.c:1251:(snd_func_refer) error evaluating name
      [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
      [AO_ALSA] alsa-lib: conf.c:4727:(snd_config_expand) Evaluate error: No such file or directory
      [AO_ALSA] alsa-lib: pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM hdmi
      [AO_ALSA] Playback open error: No such file or directory
      Failed to initialize audio driver 'alsa:device=hdmi'
      Could not open/initialize audio device -> no sound.
      Audio: no sound
      Video: no video
      Exiting... (End of file)

      mplayer -ao alsa:device=hw=0.3 /usr/share/sounds/ubuntu/stereo/system-ready.ogg
      MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team
      mplayer: could not connect to socket
      mplayer: No such file or directory
      Failed to open LIRC support. You will not be able to use your remote control.
      Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg.
      libavformat version 54.20.4 (external)
      Mismatching header version 54.20.3
      libavformat file format detected.
      [lavf] stream 0: audio (vorbis), -aid 0
      Load subtitles in /usr/share/sounds/ubuntu/stereo/
      ==========================================================================
      Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
      libavcodec version 54.35.0 (external)
      AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400)
      Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis)
      ==========================================================================
      [AO_ALSA] Format floatle is not supported by hardware, trying default.
      AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample)
      Video: no video
      Starting playback...
      A:   0.4 (00.4) of 0.8 (00.7)  0.1%
      Exiting... (End of file)

    Thank you for your time and help :)

    Read the article

  • How can I convert a video into image files using ffmpeg in C#?

    - by moon
    I have this code:

      string inputpath = strFileNamePath;
      string outputpath = "C:\\Image\\";
      //for (int iIndex = 0; iIndex < 1000; iIndex++)
      //{
      //string fileargs = "-i" + " " + inputpath + " -ab 56 -ar 44100 -b 200 -r 15 -s 320x240 -f flv " + outputpath + "SRT.flv";
      string fileargs = "-i" + " " + inputpath + " " + outputpath + "image.jpg";
      System.Diagnostics.Process p = new System.Diagnostics.Process();
      p.StartInfo.FileName = "C:\\Documents and Settings\\Badr\\My Documents\\Visual Studio 2008\\Projects\\Video2image2video.\\ffmpeg\\ffmpeg.exe";
      p.StartInfo.Arguments = fileargs;
      p.StartInfo.UseShellExecute = false;
      p.StartInfo.CreateNoWindow = true;
      p.StartInfo.RedirectStandardOutput = true;
      p.Start();

    It creates only one image from the video. I applied a loop, but it repeatedly generates the same initial image. How can I get images of the whole video? Thanks in advance.
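
    Not the poster's code, but a minimal sketch of the usual approach: instead of looping in C#, let ffmpeg write an image sequence in one pass by giving the output file a frame-number pattern (image%03d.jpg); the -r option picks how many frames per second to dump. The paths and frame rate below are placeholders.

      using System.Diagnostics;

      class FrameDump
      {
          static void Main()
          {
              string inputPath = "C:\\Videos\\input.avi";        // placeholder
              string outputPattern = "C:\\Image\\image%03d.jpg"; // %03d -> image001.jpg, image002.jpg, ...

              var p = new Process();
              p.StartInfo.FileName = "ffmpeg.exe";               // adjust to the real ffmpeg.exe path
              // -r 1 asks for one frame per second of video; drop it to dump every frame.
              p.StartInfo.Arguments = string.Format("-i \"{0}\" -r 1 \"{1}\"", inputPath, outputPattern);
              p.StartInfo.UseShellExecute = false;
              p.StartInfo.CreateNoWindow = true;
              p.StartInfo.RedirectStandardError = true;          // ffmpeg writes progress to stderr
              p.Start();
              string log = p.StandardError.ReadToEnd();
              p.WaitForExit();
          }
      }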

    Read the article

  • SH404SEF URLs in Joomla 1.5

    - by Tao Bellamine
    I have two modules to play with URLs: the global configuration module and the sh404sef module. The global config is set to "SEF URLs: YES" and "mod_rewrite enabled: YES", and sh404sef is set to "URL optimization: NO". My problem is that even with "SEF URLs" set in the global config, my URLs still don't seem very "user friendly", so I turn on "URL optimization" in the sh404sef module and get better, more descriptive URLs. However, the problem I inherit from doing this is that my dynamically populated Chronoforms get messed up (only the Chronoforms; other forms are fine): these forms now show up on the homepage instead of their own reserved page. Here's an example:

    Old form "GOOD" URL: http://www.mycraftwork.com/index.php?option=com_content&view=article&id=94
    New optimized "BAD" URL: http://www.mycraftwork.com/handthrown-pottery/alladin-teapot/index.php?option=com_content&view=article&id=94

    Any help would be GREATLY appreciated! I can even turn sh404sef on and off if some people are interested in seeing the issue LIVE. Thanks!! Tao Bellamine

    Read the article

  • Can mediatomb VLC profile transcode audio as MP3 rather than mpga ?

    - by djangofan
    In /etc/mediatomb/config.xml, can the mediatomb VLC profile transcode audio as MP3 rather than mpga? My Sony GoogleTV won't render streamed .avi files with mpga audio in them. The original files are DivX-encoded with 128 kb/s MP3 audio, but mediatomb is transcoding them. How can I change this? Any ideas? Can I turn off the audio and video transcoding somehow? I need some ideas to try.

      <profile name="vlcprof" enabled="yes" type="external">
        <mimetype>video/mpeg</mimetype>
        <agent command="vlc" arguments="-I dummy %in --sout #transcode{venc=ffmpeg,vcodec=mp2v,vb=4096,fps=25,aenc=ffmpeg,acodec=mpga,ab=192,samplerate=44100,channels=2}:standard{access=file,mux=ps,dst=%out} vlc:quit"/>
        <buffer size="10485760" chunk-size="131072" fill-size="2621440"/>
        <accept-url>yes</accept-url>
        <first-resource>yes</first-resource>
      </profile>

    I know that MP3 encoding support is external to FFmpeg and must be configured appropriately, but I have no idea how to handle that. I would guess I could work around it by somehow telling ffmpeg not to transcode the audio stream? Also, should I create a separate vlcprof entry for video/avi? Can you create more than one VLC profile in mediatomb's config.xml?

    Read the article

  • cannot add movie file to pitivi

    - by niku
    When I try adding this movie file to Pitivi I get the following error (I can play the file in mplayer just fine, though):

    Problem: An internal error occurred while analyzing this file: Could not find GStreamer caps mapping for FFmpeg codec 'h264', and you are using an external libavcodec. This is most likely due to a packaging problem and/or libavcodec having been upgraded to a version that is not compatible with this version of gstreamer-ffmpeg. Make sure your gstreamer-ffmpeg and libavcodec packages come from the same source/repository.

    Extra information: gstffmpegdec.c(1336): gst_ffmpegdec_negotiate (): /GstPipeline:Discoverer-file:///media/Data/pitivi/taw2.mkv/GstDecodeBin2:dbin/ffdec_h264:ffdec_h2645

    Also, my system language is English, but the regional format is set to German. mplayer installed in German, and I want it in English... Thanks a lot for your help :) I know I'm a noob.

    Read the article

  • Is it possible to modify a video codec + distribute it?

    - by Nick
    This is my first question on this particular Stack Exchange node, and I'm not sure if it's the most appropriate place for it (if not, guidance to the appropriate node would be appreciated).

    The abstract: I'm interested in modifying existing video codecs and distributing my modded codecs in such a way as to make them easily added to a user's codec library... for example, to be added to their MPEG Streamclip, ffmpeg, etc.

    Some details: I've had some experience modifying codecs by hacking ffmpeg source files and compiling my hacked code (so that, for example, my version of ffmpeg has a very different h.263 than yours). I'm interested now in taking these modified codecs and somehow making them easily distributable, so others could "add them" to their "libraries." Also, I realize there are some tricky rights/patent issues here; this is in part my motivation. I'm interested in the patent quagmires and welcome any thoughts on this as well.

    Context link: if it helps (to gauge where I'm coming from), here's a link to a previous codec-hacking project of mine: http://nickbriz.com/glitchcodectutorial/

    Read the article

  • What free Remote Desktop (server) solutions are there?

    - by Tao
    I know Ubuntu comes with a "Remote Desktop" option that appears to be a straightforward VNC server, and I'm trying to understand the alternatives. Here are the possibilities I've heard about so far:

    - VNC
    - VNC + SSH tunnelling
    - NX Server, free edition
    - FreeNX
    - NeatX
    - X2Go
    - X11 forwarding over SSH
    - xrdp

    I'm coming at this from a Windows user's perspective: to the best of my experience, RDP (aka Terminal Services) is a reasonably secure (barring MITM/server spoofing), efficient desktop-sharing protocol with well-supported clients that can be exposed to the internet when necessary without major fears of intrusion. To the best of my knowledge, straight VNC is none of those things, which is where I get confused: why wouldn't a better desktop-sharing technology be developed or used in the open-source world? I know VNC can be wrapped with SSH, but that seems beyond the reach of a casual user. X11 forwarding over SSH may be more or less efficient, I have no idea, but it is definitely even more complicated, and doesn't (as far as I know) give you access to already-running stuff (no desktop sharing as such, just remote application running).

    So, I'd like any feedback/preferences among these or any other "free" desktop-sharing options, using these criteria and/or any others:

    - Security (esp. for access across the internet)
    - Efficiency (bandwidth usage, responsiveness, etc.)
    - Free-ness, as in speech (not sure where RDP or FreeNX lie on this)
    - Free-ness, as in beer (are there any commercial solutions with usable, dependable free offerings?)
    - Ease of use (server and client side)
    - Cross-OS client availability
    - Cross-OS server availability
    - Support for independent sessions and shared (and/or "console") sessions
    - Ongoing support/maintenance/development

    Thanks!

    Read the article

  • codes to convert from avi to asf

    - by George2
    Hello everyone, no matter what library/SDK is used, I want to convert from AVI to ASF very quickly (I could even sacrifice some quality of video and audio). I am working on the Windows platform (Vista and 2008 Server); a .NET SDK/code is preferred, but C++ code is also fine. :-)

    I learned from the link below that there could be a very quick way to convert from AVI to ASF to better support streaming, as mentioned: "could convert the video from AVI to ASF format using a simple copy (i.e. the content is the same, but container changes)." My question is, after some hours of study and trying various SDKs/tools, as a newbie I do not know how to begin, so I am asking for reference sample code to do this task. :-) (As this is a different issue, we decided to start a new topic. :-) )

    http://stackoverflow.com/questions/743220/streaming-avi-file-issue

    thanks in advance, George

    EDIT 1: I have tried to get the binary of ffmpeg from http://ffmpeg.arrozcru.org/autobuilds/ffmpeg-latest-mingw32-static.tar.bz2 and then run the following command:

      C:\software\ffmpeg-latest-mingw32-static\bin>ffmpeg.exe -i test.avi -acodec copy -vcodec copy test.asf
      FFmpeg version SVN-r18506, Copyright (c) 2000-2009 Fabrice Bellard, et al.
        configuration: --enable-memalign-hack --prefix=/mingw --cross-prefix=i686-mingw32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32 --arch=i686 --cpu=i686 --enable-avisynth --enable-gpl --enable-zlib --enable-bzlib --enable-libgsm --enable-libfaac --enable-pthreads --enable-libvorbis --enable-libmp3lame --enable-libopenjpeg --enable-libtheora --enable-libspeex --enable-libxvid --enable-libfaad --enable-libschroedinger --enable-libx264
        libavutil     50. 3. 0 / 50. 3. 0
        libavcodec    52.25. 0 / 52.25. 0
        libavformat   52.32. 0 / 52.32. 0
        libavdevice   52. 2. 0 / 52. 2. 0
        libswscale     0. 7. 1 /  0. 7. 1
        built on Apr 14 2009 04:04:47, gcc: 4.2.4
      Input #0, avi, from 'test.avi':
        Duration: 00:00:44.86, start: 0.000000, bitrate: 5291 kb/s
          Stream #0.0: Video: msvideo1, rgb555le, 1280x1024, 5 tbr, 5 tbn, 5 tbc
          Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s
      Output #0, asf, to 'test.asf':
          Stream #0.0: Video: CRAM / 0x4D415243, rgb555le, 1280x1024, q=2-31, 1k tbn, 5 tbc
          Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s
      Stream mapping:
        Stream #0.0 -> #0.0
        Stream #0.1 -> #0.1
      Press [q] to stop encoding
      frame=  224 fps=222 q=-1.0 Lsize=   29426kB time=44.80 bitrate=5380.7kbits/s
      video:26910kB audio:1932kB global headers:0kB muxing overhead 2.023317%

    Playing the result in Windows Media Player then gives the following error. Does anyone have any ideas?

    http://www.microsoft.com/windows/windowsmedia/player/webhelp/default.aspx?&mpver=11.0.6001.7000&id=C00D11B1&contextid=230&originalid=C00D36E6
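
    Not a definitive answer, just a hedged sketch: if the stream copy above produces an ASF that Windows Media Player refuses (the video inside is still msvideo1/CRAM, only the container changed), one fallback is to re-encode into WMV-family codecs that ffmpeg ships encoders for, wrapped in a .NET Process call. The codec choices and bitrates here are illustrative assumptions, not a tested recipe, and re-encoding loses the speed advantage of a pure container copy.

      using System.Diagnostics;

      class AviToAsf
      {
          static void Main()
          {
              // Illustrative paths; a real app would take them as parameters.
              string input = "test.avi";
              string output = "test_reencoded.asf";

              // Re-encode video to WMV2 and audio to WMA v2 instead of copying the
              // original msvideo1/PCM streams. Bitrates are placeholders.
              string args = string.Format("-y -i \"{0}\" -vcodec wmv2 -b 1024k -acodec wmav2 -ab 128k \"{1}\"",
                                          input, output);

              var psi = new ProcessStartInfo("ffmpeg.exe", args)
              {
                  UseShellExecute = false,
                  RedirectStandardError = true // ffmpeg prints progress on stderr
              };

              using (var p = Process.Start(psi))
              {
                  string log = p.StandardError.ReadToEnd();
                  p.WaitForExit();
              }
          }
      }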

    Read the article

  • As a C# developer, would you learn Java to develop for Android or use MonoDroid instead?

    - by Dan Tao
    I'd consider myself pretty well versed in C#. It's my language of choice at the moment, and it's where basically all my professional experience lies. Still, I'm puzzled by the existence of the MonoDroid project. My understanding has always been that C# and Java are very close. Like, if you know one, you can learn the other really quickly. So, as I've considered developing my first Android app, I just assumed I would familiarize myself with Java enough to get started and then just sort of learn as I go. Wouldn't this make more sense than using MonoDroid, which is likely to be less feature-rich than the Java Android SDK, and requires learning its own API (albeit a .NET API) anyway? I just feel like it would be better to learn a new language (and an extremely popular one at that) and get some experience in it—when it's so close to what you already know anyway—rather than stick with a technology you're experienced with, without gaining any more valuable skills. Maybe I'm grossly misrepresenting the average potential MonoDroid user. Maybe it's more for people who are experienced in Java and .NET and just prefer .NET. Or maybe (in fact it's likely) there are other factors I just haven't considered. I'm just wondering, why would you use MonoDroid instead of just developing for Android using Java?

    Read the article

  • How do you educate your teammates without seeming condescending or superior?

    - by Dan Tao
    I work with three other guys; I'll call them Adam, Brian, and Chris. Adam and Brian are bright guys. Give them a problem; they will figure out a way to solve it. When it comes to OOP, though, they know very little about it and aren't particularly interested in learning. Pure procedural code is their MO. Chris, on the other hand, is an OOP guy all the way -- and a cocky, condescending one at that. He is constantly criticizing the work Adam and Brian do and talking to me as if I must share his disdain for the two of them. When I say that Adam and Brian aren't interested in learning about OOP, I suspect Chris is the primary reason.

    This hasn't bothered me too much for the most part, but there have been times when, looking at some code Adam or Brian wrote, it has pained me to think about how a problem could have been solved so simply using inheritance or some other OOP concept instead of the unmaintainable mess of 1,000 lines of code that ended up being written instead. And now that the company is starting a rather ambitious new project, with Adam assigned to the task of getting the core functionality in place, I fear the result.

    Really, I just want to help these guys out. But I know that if I come across as just another holier-than-thou developer like Chris, it's going to be massively counterproductive. I've considered:

    - Team code reviews -- everybody reviews everybody's code. This way no one person is really in a position to look down on anyone else; besides, I know I could learn plenty from the other members of the team as well. But this would be time-consuming, and with such a small team, I have trouble picturing it gaining much traction as a team practice.
    - Periodic e-mails to the team -- this would entail me sending out an e-mail every now and then discussing some concept that, based on my observation, at least one team member would benefit from learning about. The downside to this approach is that I think it could easily make me come across as a self-appointed expert.
    - Keeping a blog -- I already do this, actually; but so far my blog has been more about esoteric little programming tidbits than straightforward practical advice. And anyway, I suspect it would get old pretty fast if I were constantly telling my coworkers, "Hey guys, remember to check out my new blog post!"

    This question doesn't need to be specifically about OOP or any particular programming paradigm or technology. I just want to know: how have you found success in teaching new concepts to your coworkers without seeming like a condescending know-it-all? It's pretty clear to me there isn't going to be a sure-fire answer, but any helpful advice (including methods that have worked as well as those that have proved ineffective or even backfired) would be greatly appreciated.

    UPDATE: I am not the Team Lead on this team. Chris is.

    UPDATE 2: Made community wiki to accord with the general sentiment of the community (fancy that).

    Read the article

  • How do I pipe stdout/stderr in .NET?

    - by acidzombie24
    I want to do something like this:

      ffmpeg -i audio.mp3 -f flac - | oggenc2.exe - -o audio.ogg

    I know how to run ffmpeg -i audio.mp3 -f flac using the Process class in .NET, but how do I pipe it to oggenc2? Any example of how to do this (it doesn't need to be ffmpeg or oggenc2) would be fine.
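
    For illustration only, a minimal sketch of one way to wire two Process objects together by hand: redirect the first process's stdout and the second's stdin, then copy the stream across. The executable names and arguments are taken from the question above; error handling is omitted, and Stream.CopyTo needs .NET 4 (on older frameworks use a small read/write loop instead).

      using System.Diagnostics;

      class PipeExample
      {
          static void Main()
          {
              // First process: ffmpeg decodes the mp3 and writes FLAC to stdout ("-").
              var ffmpeg = new Process();
              ffmpeg.StartInfo.FileName = "ffmpeg";
              ffmpeg.StartInfo.Arguments = "-i audio.mp3 -f flac -";
              ffmpeg.StartInfo.UseShellExecute = false;
              ffmpeg.StartInfo.RedirectStandardOutput = true;

              // Second process: oggenc2 reads from stdin ("-") and writes audio.ogg.
              var oggenc = new Process();
              oggenc.StartInfo.FileName = "oggenc2.exe";
              oggenc.StartInfo.Arguments = "- -o audio.ogg";
              oggenc.StartInfo.UseShellExecute = false;
              oggenc.StartInfo.RedirectStandardInput = true;

              ffmpeg.Start();
              oggenc.Start();

              // Shovel ffmpeg's stdout into oggenc2's stdin, then close stdin so
              // oggenc2 sees end-of-stream and finalises the file.
              ffmpeg.StandardOutput.BaseStream.CopyTo(oggenc.StandardInput.BaseStream);
              oggenc.StandardInput.Close();

              ffmpeg.WaitForExit();
              oggenc.WaitForExit();
          }
      }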

    Read the article
