Search Results

Search found 25204 results on 1009 pages for 'event stream processing'.


  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary: The H.264 video seems to have a really high frame rate that requires a scaling factor to be applied to the duration of video I'm trying to extract (900x lower).

    Body: I'm trying to extract a clip from a movie that I have in MP4 format (created using Handbrake). After trying mencoder and VLC, I decided to give FFmpeg a shot since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc., I'm just trying to learn how all this works). Anyway, my command was as follows:

        ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    During the copy, the following comes up:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'outofsight.mp4':
          Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16
        Output #0, mp4, to 'out.mp4':
            Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc
            Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 2591 fps=2349 q=-1.0 size= 8144kB time=101.60 bitrate= 656.7kbits/s
        …

    Instead of a 5:59 duration clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15:00 minute clip. So I did some black-box engineering and decided to scale my -t option by calculating what value to enter, given that 1 second was interpreted as 900 s. For my desired 359 s clip, I calculated 0.399 s, and so my ffmpeg command became:

        ffmpeg -ss 01:15:51 -t 00:00:00.399 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)

    45000/25 = 1800. Must be a relation somewhere. Somehow, the obscenely high frame rate is causing issues with the timing. How is that frame rate so high? The best part about this is that the resulting clip.mp4 has the exact same feature (due to the copied video codec), and taking further clips from it needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out.

    Appendix: The preamble for ffmpeg on my system (built using the MacPorts ffmpeg port):

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    1. 4. 0 /  1. 4. 0
          libswscale     1. 7. 1 /  1. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Jan 4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. build 5646) (dot 1)

    EDIT: Not sure whether it was a bug or not, but it seems to be fixed now in my current version of ffmpeg, at least for this video (version 0.6.1 from MacPorts).
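
    For reference, newer FFmpeg builds (the poster reports 0.6.1 behaving correctly) interpret -ss and -t in plain seconds regardless of the stream timebase. A minimal sketch of the same stream-copy extraction, assuming such a build and the same file names as above:

        # extract a 5:59 clip starting at 1:15:51 without re-encoding
        ffmpeg -ss 01:15:51 -i outofsight.mp4 -t 00:05:59 \
            -acodec copy -vcodec copy clip.mp4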

    Read the article

  • C# - Using a delegate to raise an event from one class to another

    - by Bill Osuch
    Even though this may be a relatively common task for many people, I've had to show it to enough new developers that I figured I'd immortalize it… MSDN says "Events enable a class or object to notify other classes or objects when something of interest occurs. The class that sends (or raises) the event is called the publisher and the classes that receive (or handle) the event are called subscribers." Any time you add a button to a Windows Form or Web app, you can subscribe to the OnClick event, and you can also create your own event handlers to pass events between classes. Here I'll show you how to raise an event from a separate class to a console application (or Windows Form).

    First, create a console app project (you could create a Windows Form, but this is easier for this demo). Add a class file called MyEvent.cs (it doesn't really need to be a separate file, this is just for clarity) with the following code:

        public delegate void MyHandler1(object sender, MyEvent e);

        public class MyEvent : EventArgs
        {
            public string message;
        }

    Your event can have whatever public properties you like; here we've just got a single string. Next, add a class file called WorkerDLL.cs; this will simulate the class that would be doing all the work in the project. Add the following code:

        class WorkerDLL
        {
            public event MyHandler1 Event1;

            public WorkerDLL()
            {
            }

            public void DoWork()
            {
                FireEvent("From Worker: Step 1");
                FireEvent("From Worker: Step 5");
                FireEvent("From Worker: Step 10");
            }

            private void FireEvent(string message)
            {
                MyEvent e1 = new MyEvent();
                e1.message = message;
                if (Event1 != null)
                {
                    Event1(this, e1);
                }
                e1 = null;
            }
        }

    Notice that the FireEvent method creates an instance of the MyEvent class and passes it to the Event1 handler (which we'll create in just a second). Finally, add the following code to Program.cs:

        static void Main(string[] args)
        {
            Program p = new Program(args);
        }

        public Program(string[] args)
        {
            Console.WriteLine("From Console: Creating DLL");
            WorkerDLL wd = new WorkerDLL();
            Console.WriteLine("From Console: Wiring up event handler");
            WireEventHandlers(wd);
            Console.WriteLine("From Console: Doing the work");
            wd.DoWork();
            Console.WriteLine("From Console: Done - press any key to finish.");
            Console.ReadLine();
        }

        private void WireEventHandlers(WorkerDLL wd)
        {
            MyHandler1 handler = new MyHandler1(OnHandler1);
            wd.Event1 += handler;
        }

        public void OnHandler1(object sender, MyEvent e)
        {
            Console.WriteLine(e.message);
        }

    The OnHandler1 method is called any time the event handler "hears" an event matching the specified signature - you could have it log to a file, write to a database, etc. Run the app in debug mode and you can distinctly see which lines were written by the console application itself (Program.cs) and which were written by the worker class (WorkerDLL.cs).
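
    Not part of the original post: the same wiring also works without declaring a custom delegate, using the framework's generic EventHandler&lt;T&gt;. A minimal sketch, reusing the MyEvent class above:

        class WorkerDLL
        {
            // EventHandler<MyEvent> replaces the hand-rolled MyHandler1 delegate
            public event EventHandler<MyEvent> Event1;

            private void FireEvent(string message)
            {
                var handler = Event1;              // copy to avoid a race with unsubscribe
                if (handler != null)
                    handler(this, new MyEvent { message = message });
            }
        }

        // subscribing looks the same as before:
        //     wd.Event1 += OnHandler1;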

    Read the article

  • Stop event bubbling in Javascript

    - by Kartik Rao
    I have an HTML structure like:

        <div onmouseover="enable_dropdown(1);" onmouseout="disable_dropdown(1);">
            My Groups <a href="#">(view all)</a>
            <ul>
                <li><strong>Group Name 1</strong></li>
                <li><strong>Longer Group Name 2</strong></li>
                <li><strong>Longer Group Name 3</strong></li>
            </ul>
            <hr />
            Featured Groups <a href="#">(view all)</a>
            <ul>
                <li><strong>Group Name 1</strong></li>
                <li><strong>Longer Group Name 2</strong></li>
                <li><strong>Longer Group Name 3</strong></li>
            </ul>
        </div>

    I want the onmouseout event to be triggered only from the main div, not the 'a' or 'ul' or 'li' tags within the div! My onmouseout function is as follows:

        function disable_dropdown(d) {
            document.getElementById(d).style.visibility = "hidden";
        }

    Can someone please tell me how I can stop the event from bubbling up? I tried the solutions (stopPropagation etc.) provided on other sites, but I'm not sure how to implement them in this context. Any help will be appreciated. Thanks a lot!
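
    One way to handle this (a sketch, not from the original question; it assumes the container div is the element whose id is passed in) is to pass the event through from the inline handler and ignore mouseout events whose destination is still inside the container, rather than trying to stop bubbling on every child:

        <!-- pass the event object through from the inline handler -->
        <div onmouseover="enable_dropdown(1);" onmouseout="disable_dropdown(event, 1);">

        function disable_dropdown(e, d) {
            e = e || window.event;
            var container = document.getElementById(d);
            // where the pointer went: relatedTarget (W3C) or toElement (old IE)
            var dest = e.relatedTarget || e.toElement;
            // walk up from the destination; if we reach the container, the pointer is still inside it
            while (dest && dest !== container) {
                dest = dest.parentNode;
            }
            if (dest !== container) {
                container.style.visibility = "hidden";
            }
        }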

    Read the article

  • Why doesn't onkeydown work properly in IE?

    - by Fabian
        function checkEnter(event) {
            var charcode;
            if (event && event.which) {
                charcode = event.which;
                alert("Case 1. event.which is " + charcode);
            } else if (event && !event.which) {
                charcode = event.keyCode;
                alert("Case 2. event.keyCode is " + charcode);
            }
            document.getElementById("text1").value = "";
        }

        <input type="text" id="text1" onkeyup="checkEnter(event)" />

    The above function works in both IE7 and Chrome.

        function checkKeyPressed() {
            document.onkeydown = function(event) {
                var charcode;
                if (event && event.which) {
                    charcode = event.which;
                    alert("charcode is " + charcode);
                } else if (event && !event.which) {
                    charcode = event.keyCode;
                    alert("charcode (keyCode) is " + charcode);
                }
            }
        }

        <input type="button" id="button1" onclick="checkKeyPressed(event)" value="Button" />

    However, this one works only in Chrome. Any idea why?
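
    For what it's worth (a sketch, not part of the original question): older IE versions do not pass an event object to handlers assigned via document.onkeydown; they expose it as the global window.event instead, so the named event parameter shadows it with undefined. Falling back to window.event makes the second version behave like the first:

        document.onkeydown = function(event) {
            var e = event || window.event;           // IE puts the event on window.event
            var charcode = e.which || e.keyCode;     // .which (W3C) or .keyCode (IE)
            alert("charcode is " + charcode);
        };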

    Read the article

  • MooTools Event Delegation using HTML5 data attributes.

    - by Anurag
    Is it possible to have event delegation using the HTML5 data attributes in MooTools? The HTML structure I have is:

        <div id="parent">
            <div>not selectable</div>
            <div data-selectable="true">selectable</div>
            <div>not selectable either.</div>
            <div data-selectable="true">also selectable</div>
        </div>

    And I want to set up <div id="parent"> to listen to all clicks only on child elements that have the data-selectable attribute. Please let me know if I'm doing something wrong. The events are being set up as:

        $("parent").addEvent("click:relay([data-selectable])", function(event, el) {
            alert(this.get('text'));
        });

    but the click callback is fired on clicking all divs, not just the ones with a data-selectable attribute defined. You can see this example on http://jsfiddle.net/NUGD4/

    A workaround is to add this as a CSS class, which works with delegation, but I would prefer to be able to use data attributes as they are used throughout the application.
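
    Two workarounds worth sketching (assumptions, not confirmed fixes: they presume the bare [data-selectable] relay selector is what the installed selector engine is tripping over) are to qualify the relay selector with the attribute value, or to delegate manually and filter inside the handler:

        // qualify the relay selector with the attribute value
        $('parent').addEvent('click:relay(div[data-selectable=true])', function(event, el) {
            alert(el.get('text'));
        });

        // or delegate by hand and check the attribute yourself
        $('parent').addEvent('click', function(event) {
            var el = $(event.target);
            if (el.getProperty('data-selectable')) {
                alert(el.get('text'));
            }
        });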

    Read the article

  • event.target doesn't work

    - by rdesign
    Hey guys, I've written some jQuery code with some draggable elements and one drop area. Unfortunately my drop area can't tell the difference between the various objects. Here's my code:

        <script type="text/javascript">
            $(function() {
                $("#droparea").droppable({
                    drop: function(event) {
                        var $target = $(event.target);
                        if ($target.is("#flyer")) {
                            alert("adasd");
                        }
                    }
                });
            });
        </script>
        </head>
        <body>
            <div id="droparea"></div>
            <div class="polaroid" id="flyer">
                <img src="images/muesliFlyer.png" alt="flyer" />
            </div>

    Without the if it works. But then I can't get the dropped object. Any ideas why my target isn't recognized? Thanks a lot.
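
    For context (a sketch, not from the original post): in jQuery UI's droppable, event.target is the drop area itself; the element being dropped is handed to the callback in the second argument as ui.draggable, so the check would look something like this:

        $("#droparea").droppable({
            drop: function(event, ui) {
                // ui.draggable is the element that was dragged onto the drop area
                if (ui.draggable.is("#flyer")) {
                    alert("flyer dropped");
                }
            }
        });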

    Read the article

  • half-sine pulse shaping

    - by kos
    Hi, I wanted to know: what is the pulse shape of modem.oqpskmod? And if it is not a half-sine pulse shape, how is it possible to make it a half-sine pulse shape as stated in the IEEE 802.15.4 (ZigBee) standard, where it is given as:

        p(t) = sin(pi*t/(2*Tc))   if 0 <= t <= 2*Tc
        p(t) = 0                  otherwise

    Thanks a lot!
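
    As an illustration (a sketch, not from the original question; the oversampling factor and chip sequence below are assumptions), the half-sine pulse itself is straightforward to generate and apply to an upsampled chip sequence in MATLAB:

        sps    = 8;                           % samples per chip period Tc (assumption)
        t      = (0:2*sps-1) / sps;           % t in units of Tc, covering 0 <= t < 2*Tc
        p      = sin(pi*t/2);                 % half-sine pulse p(t) = sin(pi*t/(2*Tc))
        chips  = 2*randi([0 1], 100, 1) - 1;  % placeholder +/-1 chip sequence
        shaped = conv(upsample(chips, sps), p);  % pulse-shaped waveform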

    Read the article

  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper: "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al. (2006). The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:

        import numpy as np
        from numpy import ma
        from matplotlib import pyplot as plt
        from Scientific.IO.NetCDF import NetCDFFile

        ### input directory
        indir = '/home/nicholas/data/'

        ### get the flux data, which is in [time (5-day ave for 10 years), latitude, longitude]
        nc = NetCDFFile(indir + 'CFLX_2000_2009.nc', 'r')
        cflux_southern_ocean = nc.variables['Cflx'][:, 10:50, :]
        cflux_southern_ocean = ma.masked_values(cflux_southern_ocean, 1e+20)  # mask land
        nc.close()
        cflux = cflux_southern_ocean * 1e08  # change units of data from mmol/m^2/s

        ### create an array that consists of the seasonal signal for each pixel
        year_stack = np.split(cflux, 10, axis=0)
        year_stack = np.array(year_stack)
        signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
        signal_array = ma.masked_where(signal_array > 1e20, signal_array)  # need to mask

        ### average the array over latitude (or longitude)
        signal_time_lon = ma.mean(signal_array, axis=1)

        ### do a 2D Fourier Transform of the time/space image
        ft = np.fft.fft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log(mgft)
        log_mgft = np.log(mgft)

    Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a small random number to the signal to avoid this?

        signal_time_lon = signal_time_lon + np.random.randint(0, 9, size=(730, 182)) * 1e-05

    EDIT (adding images and clarifying meaning): The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason I get rows of zeros is that I have re-created the timeseries for each pixel. The ft[0, 0] pixel contains the mean of the signal, so ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here is the starting image, using the following code:

        plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')

    Here is the result using the following code:

        ft = np.fft.rfft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log1p(mgft)
        plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')

    It may not be clear in the image, but it is only every second row that contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value. The other pixels (log_ps[2, 0], log_ps[4, 0] etc.) have very low values.
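
    At least part of the explanation is likely the tiling itself: a signal built by repeating one annual cycle N times is exactly periodic, so its spectrum along the time axis can only be non-zero at every N-th frequency row. A standalone sketch with synthetic data (made-up sizes, not the poster's files) shows the effect:

        import numpy as np

        rng = np.random.default_rng(0)
        one_cycle = rng.standard_normal((73, 182))   # one "year" of 5-day fields (synthetic)
        tiled = np.tile(one_cycle, (10, 1))          # repeat it 10 times along the time axis

        ft = np.fft.fft2(tiled)
        row_power = np.abs(ft).sum(axis=1)
        nonzero_rows = np.nonzero(row_power > 1e-6 * row_power.max())[0]
        print(nonzero_rows[:12])                     # only multiples of 10 survive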

    Read the article

  • How to read time from recorded surveillance camera video?

    - by stressed_geek
    I have a problem where I have to read the time of recording from video recorded by a surveillance camera. The time shows up in the top-left area of the video. Below is a link to a screen grab of the area which shows the time. Also, the digit colour (white/black) keeps changing during the duration of the video.

        http://i55.tinypic.com/2j5gca8.png

    Please guide me in the direction to approach this problem. I am a Java programmer, so I would prefer an approach through Java.

    EDIT: Thanks unhillbilly for the comment. I had looked at the Ron Cemer OCR library and its performance is much below our requirement. Since the OCR performance is less than desired, I was planning to build a character set using screen grabs of all the digits, and use some image/pixel comparison library to compare the frame time with the character set, which would give a probabilistic result after comparison. So I was looking for a good image comparison library (I would be OK with a non-Java library which I can run from the command line). Also, any advice on the above approach would be really helpful.
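
    One way to do the frame-vs-glyph comparison in plain Java (a sketch with assumed image sizes, using nothing beyond the JDK): threshold both images to black/white and count agreeing pixels, taking the maximum of the direct and inverted score so the white/black colour flips of the overlay don't matter:

        import java.awt.image.BufferedImage;

        public final class GlyphMatcher {
            /** Higher score = better match; assumes digit and reference have the same size. */
            public static int score(BufferedImage digit, BufferedImage reference) {
                int same = 0, total = digit.getWidth() * digit.getHeight();
                for (int y = 0; y < digit.getHeight(); y++) {
                    for (int x = 0; x < digit.getWidth(); x++) {
                        boolean a = (digit.getRGB(x, y) & 0xFF) > 128;      // blue channel as brightness
                        boolean b = (reference.getRGB(x, y) & 0xFF) > 128;
                        if (a == b) same++;
                    }
                }
                // the overlay flips between white and black, so an inverted match counts too
                return Math.max(same, total - same);
            }
        }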

    Read the article

  • keyUp event heard?: Overridden NSView method

    - by Old McStopher
    UPDATED: I'm now overriding the NSView keyUp method from an NSView subclass set to first responder, like below, but am still not seeing evidence that it is being called.

        @implementation svsView

        - (BOOL)acceptsFirstResponder {
            return YES;
        }

        - (void)keyUp:(NSEvent *)event {
            //--do key up stuff--
            NSLog(@"key up'd!");
        }

        @end

    --ORIGINAL POST--

    I'm new to Cocoa and Obj-C and am trying to do a (void)keyUp: from within the implementation of my controller class (which itself is of type NSController). I'm not sure if this is the right place to put it, though. I have a series of like buttons, each set to a unique key equivalent (IB button attribute), and each calls my (IBAction)keyInput method, which then passes the identity of each key on to another object. This runs just fine, but I also want to track when each key is released.

    --ORIGINAL [bad] EXAMPLE--

        @implementation svsController

        //init

        //IBActions
        - (IBAction)keyInput:(id)sender {
            //--do key down stuff--
        }

        - (void)keyUp:(NSEvent *)event {
            //--do key up stuff--
        }

        @end

    Upon fail, I also tried keyUp as an IBAction (instead of void), like the user-defined keyInput is, and hooked it up to the appropriate buttons in Interface Builder, but then keyUp was only called when the keys were down and not when released. (Which I kind of figured would happen.) Pardon my noobery, but should I be putting this method in another class or doing something differently? Wherever it is, though, it needs to be able to access objects owned by the controller class. Thanks for any insight you may have.
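
    One thing worth noting (a sketch, not from the original post; the class name is the poster's): acceptsFirstResponder only says the view may become first responder - something still has to actually make it so, otherwise key events are routed elsewhere in the responder chain:

        @implementation svsView

        - (BOOL)acceptsFirstResponder {
            return YES;
        }

        - (void)viewDidMoveToWindow {
            // actually hand keyboard focus to this view once it is in a window
            [[self window] makeFirstResponder:self];
        }

        - (void)keyUp:(NSEvent *)event {
            NSLog(@"key up'd!");
        }

        @end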

    Read the article

  • Using imtophat in Matlab

    - by jaff12
    I'm trying to do top-hat filtering in Matlab. The imtophat function looks promising, but I have no idea how to use it; I haven't done a lot of work with Matlab before. Basically, I am trying to find small spots, several pixels wide, that are local maxima in my 2-dimensional array.
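
    For reference, a minimal sketch of how imtophat is typically called (the file name, spot size and threshold below are assumptions, not from the question):

        I  = imread('spots.png');         % grayscale image containing the small bright spots
        se = strel('disk', 5);            % structuring element a bit larger than the spots
        J  = imtophat(I, se);             % removes slowly varying background, keeps small bright features
        bw = imregionalmax(J) & (J > 20); % local maxima of the filtered image, above a noise threshold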

    Read the article

  • Extracting DCT coefficients from encoded images and video

    - by misha
    Is there a way to easily extract the DCT coefficients (and quantization parameters) from encoded images and video? Any decoder software must be using them to decode block-DCT encoded images and video, so I'm pretty sure the decoder knows what they are. Is there a way to expose them to whoever is using the decoder?

    I'm implementing some video quality assessment algorithms that work directly in the DCT domain. Currently, the majority of my code uses OpenCV, so it would be great if anyone knows of a solution using that framework. I don't mind using other libraries (perhaps libjpeg, but that seems to be for still images only), but my primary concern is to do as little format-specific work as possible (I don't want to reinvent the wheel and write my own decoders). I want to be able to open any video/image (H.264, MPEG, JPEG, etc.) that OpenCV can open, and if it's block DCT-encoded, to get the DCT coefficients. In the worst case, I know that I can write my own block DCT code, run the decompressed frames/images through it, and then I'd be back in the DCT domain. That's hardly an elegant solution, and I hope I can do better.

    Presently, I use the fairly common OpenCV boilerplate to open images:

        IplImage *image = cvLoadImage(filename);
        // Run quality assessment metric

    The code I'm using for video is equally trivial:

        CvCapture *capture = cvCaptureFromAVI(filename);
        while (cvGrabFrame(capture)) {
            IplImage *frame = cvRetrieveFrame(capture);
            // Run quality assessment metric on frame
        }
        cvReleaseCapture(&capture);

    In both cases, I get a 3-channel IplImage in BGR format. Is there any way I can get the DCT coefficients as well?
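
    For the still-image (JPEG) part at least, libjpeg can hand back the quantized DCT coefficients without a full decode; a minimal sketch (file handling assumed, error handling omitted):

        #include <stdio.h>
        #include <jpeglib.h>

        void read_dct(const char *filename) {
            struct jpeg_decompress_struct cinfo;
            struct jpeg_error_mgr jerr;
            FILE *f = fopen(filename, "rb");

            cinfo.err = jpeg_std_error(&jerr);
            jpeg_create_decompress(&cinfo);
            jpeg_stdio_src(&cinfo, f);
            jpeg_read_header(&cinfo, TRUE);

            /* one virtual array of 8x8 coefficient blocks per colour component,
               still quantized; the quantization tables sit in cinfo.quant_tbl_ptrs[] */
            jvirt_barray_ptr *coefs = jpeg_read_coefficients(&cinfo);
            (void)coefs;  /* access rows of blocks via cinfo.mem->access_virt_barray(...) */

            jpeg_finish_decompress(&cinfo);
            jpeg_destroy_decompress(&cinfo);
            fclose(f);
        }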

    Read the article

  • Finding center of fingerprints.

    - by an_ant
    If we suppose that every fingerprint is made of concentric curves (ellipses or circles) - and I'm aware of the fact that not every fingerprint is - how can I find the center of those concentric curves? Let's take an "ideal" fingerprint and try to find its center. My approaches were: first, to find the spectrum through the columns/rows of the image and look for the columns/rows that maximize a particular band of the spectrum - I thought that the column going through the center would have the most regular pattern of changing amplitudes and therefore the most recognizable harmonic. My second approach was to count the black-and-white transitions, also through the columns and rows, and to maximize that count over rows and columns as well. While these methods work to some extent, with some additional filtering, they fail when the fingerprint is not as ideal as this one. Can you think of any different approach? Are there standard ways to do it?
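
    One other approach worth sketching (not from the question; it assumes a grayscale numpy array as input): for concentric curves, the gradient at every edge point is normal to the curve and therefore points through the common centre, so letting each strong edge pixel vote along its gradient line and taking the accumulator maximum gives a centre estimate:

        import numpy as np

        def find_center(img, reach=200):
            gy, gx = np.gradient(img.astype(float))
            mag = np.hypot(gx, gy)
            ys, xs = np.nonzero(mag > np.percentile(mag, 95))   # strong edges only
            acc = np.zeros(img.shape)
            for y, x in zip(ys, xs):
                dy, dx = gy[y, x] / mag[y, x], gx[y, x] / mag[y, x]
                for t in range(-reach, reach + 1):              # vote along the gradient line
                    py, px = int(round(y + t * dy)), int(round(x + t * dx))
                    if 0 <= py < img.shape[0] and 0 <= px < img.shape[1]:
                        acc[py, px] += 1
            cy, cx = np.unravel_index(acc.argmax(), acc.shape)
            return cx, cy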

    Read the article

  • Facebook FB.Event.subscribe does not work

    - by DNReNTi
    I'd like to follow how many likes I get on my page, but something is wrong. I am using the Facebook JavaScript event handler but it doesn't work. It should alert me when I click on the like or the dislike button, but it does not do anything. Any idea where I am going wrong? Thanks! And sorry for my English. Here is my UPDATED code:

        <!DOCTYPE html>
        <html xmlns:fb="http://ogp.me/ns/fb#">
        <head>
            <title>FBlike check</title>
        </head>
        <body>
            <div id="fb-root"></div>
            <script>
                (function(d, s, id) {
                    var js, fjs = d.getElementsByTagName(s)[0];
                    if (d.getElementById(id)) return;
                    js = d.createElement(s); js.id = id;
                    js.src = "//connect.facebook.net/en_US/all.js#xfbml=1&appId=00000000000000000";
                    fjs.parentNode.insertBefore(js, fjs);
                }(document, 'script', 'facebook-jssdk'));

                FB.Event.subscribe('edge.create',
                    function(response) {
                        alert('You liked the URL: ' + response);
                    }
                );
            </script>
            <fb:like href="https://www.facebook.com/XYZ" send="false" layout="button_count" width="200" show_faces="false"></fb:like>
        </body>
        </html>
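
    One likely cause (a sketch, not a confirmed fix for this page): the SDK is loaded asynchronously, so FB is still undefined at the point where FB.Event.subscribe runs. Moving the subscription into window.fbAsyncInit defers it until the SDK is ready (the appId below is the placeholder from the question):

        window.fbAsyncInit = function() {
            FB.init({ appId: '00000000000000000', xfbml: true });
            FB.Event.subscribe('edge.create', function(href) {
                alert('You liked the URL: ' + href);
            });
        };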

    Read the article

  • Encode real-time dvb-s stream using mencoder

    - by karatchov
    My satellite receiver can stream its MPEG-2 video/audio output over the LAN. Using mencoder, I'm trying to build a script to encode and save the stream in real time with my Core2Duo 1.8 GHz. Right now I'm using a single pass; it produces good quality at a video rate of 800 kb/s, but takes more than 95% of the CPU, causing a lot of frame skips if the computer is used while encoding.

        mencoder -o -vf lavcdeint -oac mp3lame -lameopts abr:q=2:aq=2 -ovc x264 -ffourcc avc1 \
            -x264encopts crf=25:me=hex:subq=9:frameref=2:nocabac:threads=auto -mc 3

    So, I'm considering using 2-pass encoding to relieve the processor and record 100% of the stream, but I have no idea how to start. For the info:

        Standard stream: MPEG-2, 720x576, 25 fps
        HD stream: 1920x1080, 50 fps (recording this is not my goal, but it would be super cool if I could)
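
    For what it's worth (a sketch with assumed URL, bitrate and file names; not from the original post): two-pass x264 needs to read the source twice, so it can't be applied to the live stream directly - the usual pattern is to dump the stream untouched first and encode it offline afterwards:

        # step 0: dump the incoming stream without touching it (cheap on CPU)
        mencoder http://receiver:8001/stream -ovc copy -oac copy -o dump.ts

        # passes 1 and 2: offline two-pass x264 encode of the dump
        mencoder dump.ts -vf lavcdeint -oac mp3lame -lameopts abr:q=2:aq=2 \
            -ovc x264 -x264encopts pass=1:turbo=2:bitrate=800:threads=auto -o /dev/null
        mencoder dump.ts -vf lavcdeint -oac mp3lame -lameopts abr:q=2:aq=2 \
            -ovc x264 -x264encopts pass=2:bitrate=800:threads=auto -o encoded.avi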

    Read the article

  • Capture RTSP stream from IP Camera and store

    - by Keerthi
    I've got a few IP cameras which output an RTSP (H.264/MPEG-4) stream. Hitting the URL locally via VLC:

        rtsp://192.168.0.21:554/mpeg4

    I can stream the camera and dump to disk (on my desktop). I'd like, however, to store these files on my NAS (FreeNAS). I was looking at ways to capture the RTSP stream and dump it to disk, but I'm unable to find anything. Is it possible to capture the stream on FreeBSD or Linux (Raspberry Pi) and dump the streamed content to a disk local to Linux or FreeBSD - preferably every 30 minutes?

    EDIT: The NAS is headless (HP N55L or something) and the Raspberry Pis are headless too. I've already looked into ZoneMinder but need something small. I was hoping to maybe use Motion to detect motion on the stream, but that will come later.
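
    One way to do this (a sketch with assumed output paths; ffmpeg is available on both FreeBSD and Raspbian): copy the stream without re-encoding and let the segment muxer roll over to a new file every 30 minutes:

        ffmpeg -rtsp_transport tcp -i rtsp://192.168.0.21:554/mpeg4 \
            -c copy -f segment -segment_time 1800 -reset_timestamps 1 \
            /mnt/nas/cam1/cam1-%03d.mkv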

    Read the article

  • WPF Binding KeyDown event to Command

    - by Daniil Harik
    Hello, I want to bind a KeyDown event handler (for when the user presses Ctrl+C or Ctrl+V) on Telerik's GridView to a RelayCommand object in my ViewModel. I know about this post: http://blog.functionalfun.net/2008/09/hooking-up-commands-to-events-in-wpf.html but I'm still a bit confused about the implementation of my scenario; I just don't understand how it works. Could someone point out how my scenario should be implemented? Thank you very much!
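
    One common way to do this without an event-to-command behaviour (a sketch; the command names and the telerik namespace prefix are assumptions): since the grid is a UIElement, Ctrl+C/Ctrl+V can be mapped straight to ViewModel commands with InputBindings:

        <telerik:RadGridView ItemsSource="{Binding Rows}">
            <telerik:RadGridView.InputBindings>
                <KeyBinding Modifiers="Control" Key="C" Command="{Binding CopyCommand}" />
                <KeyBinding Modifiers="Control" Key="V" Command="{Binding PasteCommand}" />
            </telerik:RadGridView.InputBindings>
        </telerik:RadGridView>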

    Read the article

  • Win32 Event vs Semaphore

    - by JP
    Basically I need a replacement for condition variables (CONDITION_VARIABLE and SleepConditionVariableCS), because they are only supported on Vista and up (this is for C++). Some suggested using a semaphore; I also found CreateEvent. Basically, I need to have one thread waiting on WaitForSingleObject until one or more other threads tell it there is something to do. In which context should I use a semaphore vs. a Win32 event? Thanks
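
    For illustration (a sketch, not from the original post): an auto-reset event wakes exactly one waiting thread per SetEvent, which matches "one or more producers tell a single worker there is something to do"; a semaphore is the better fit when each signal must correspond to exactly one queued item:

        #include <windows.h>

        HANDLE hWork = CreateEvent(NULL, FALSE /*auto-reset*/, FALSE /*non-signaled*/, NULL);

        DWORD WINAPI Worker(LPVOID) {
            for (;;) {
                WaitForSingleObject(hWork, INFINITE);  // sleeps until some producer signals
                // ... drain the (separately locked) work queue ...
            }
            return 0;
        }

        // producer side, after queueing an item:
        //     SetEvent(hWork);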

    Read the article

  • WinForms window drag event

    - by Steve Syfuhs
    Is there an event in WinForms that gets fired when a window is dragged? Or is there a better way of doing what I want: to drop the window opacity to 80% while the window is being dragged around? Unfortunately, this is stupidly tricky to search for, because everyone is looking for drag and drop from the shell, or some other object.
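
    One option worth sketching (not a confirmed answer, but Form.ResizeBegin/ResizeEnd are raised for window moves as well as resizes, so they can bracket the opacity change):

        public partial class MainForm : Form
        {
            public MainForm()
            {
                InitializeComponent();
                // ResizeBegin/ResizeEnd fire when the user starts/stops moving or resizing the form
                ResizeBegin += (s, e) => Opacity = 0.8;
                ResizeEnd   += (s, e) => Opacity = 1.0;
            }
        }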

    Read the article

  • Custom Event Handler

    - by Dremation
    I have a function that uses a while(true) loop to repeatedly scan the memory addresses of an application to detect changes. I sleep the thread for 1 second between iterations, and this helps performance. However, is there a way to create a custom event handler to do away with the loop?
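
    Something still has to poll the memory, but the loop can be hidden behind an event so callers just subscribe; a sketch (the ReadValue delegate and the int payload are assumptions, not from the question):

        public class MemoryWatcher
        {
            public event Action<int> Changed;   // fired with the newly read value
            private int _last;
            private readonly System.Threading.Timer _timer;

            public MemoryWatcher(Func<int> readValue, TimeSpan interval)
            {
                _timer = new System.Threading.Timer(_ =>
                {
                    int current = readValue();
                    if (current != _last)
                    {
                        _last = current;
                        var handler = Changed;
                        if (handler != null) handler(current);
                    }
                }, null, TimeSpan.Zero, interval);
            }
        }

        // usage: var w = new MemoryWatcher(() => ReadValueFromProcess(), TimeSpan.FromSeconds(1));
        //        w.Changed += v => Console.WriteLine("changed to " + v);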

    Read the article
