Search Results

Search found 11561 results on 463 pages for 'hd video'.

  • Segmentation fault while feeding in an mpeg file through ffmpeg

    - by angel6
    Hi, I've set up FFserver as the streaming server. I'm trying to feed in an mpeg file. But it comes up with a segmentation fault. Does anyone know how to fix this? The following is the command-line output I get $ ./ffmpeg -i test1.mpg http://localhost:8090/feed1.ffm FFmpeg version SVN-r22945, Copyright (c) 2000-2010 the FFmpeg developers built on Apr 22 2010 19:18:45 with gcc 4.4.1 configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-pthreads --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libx264 --enable-libxvid --enable-x11grab libavutil 50.14. 0 / 50.14. 0 libavcodec 52.66. 0 / 52.66. 0 libavformat 52.61. 0 / 52.61. 0 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0.10. 0 / 0.10. 0 libpostproc 51. 2. 0 / 51. 2. 0 [mpeg @ 0xab0c420]max_analyze_duration reached Input #0, mpeg, from 'test1.mpg': Duration: 00:00:20.96, start: 0.768300, bitrate: 269 kb/s Stream #0.0[0x1e0]: Video: mpeg1video, yuv420p, 160x120 [PAR 1:1 DAR 4:3], 104857 kb/s, 30 fps, 30 tbr, 90k tbn, 30 tbc Stream #0.1[0x1c0]: Audio: mp2, 32000 Hz, 2 channels, s16, 64 kb/s Output #0, ffm, to 'http://localhost:8090/feed1.ffm': Metadata: encoder : Lavf52.61.0 Stream #0.0: Audio: mp2, 22050 Hz, 1 channels, s16, 48 kb/s Stream #0.1: Video: mpeg1video, yuv420p, 160x128, q=2-31, 40 kb/s, 1000k tbn, 50 tbc Stream #0.2: Audio: libmp3lame, 22050 Hz, 1 channels, s16, 64 kb/s Stream #0.3: Video: msmpeg4, yuv420p, 352x240, q=2-31, 256 kb/s, 1000k tbn, 15 tbc Stream mapping: Stream #0.1 -> #0.0 Stream #0.0 -> #0.1 Stream #0.1 -> #0.2 Stream #0.0 -> #0.3 Press [q] to stop encoding Segmentation fault
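
    The segfaults people hit with that SVN-era ffmpeg/ffserver pairing were most often a mismatch between the two binaries (they need to be built from the same revision) or an ffserver.conf stream whose codec parameters the feeder can't actually produce, so the usual first step is rebuilding both together and starting from a minimal feed/stream pair. Below is a minimal sketch of such an ffserver.conf; every value in it is a placeholder, not something taken from the question.

        # /etc/ffserver.conf -- one feed, one FLV stream
        Port 8090
        BindAddress 0.0.0.0
        MaxClients 10
        MaxBandwidth 10000

        <Feed feed1.ffm>
          File /tmp/feed1.ffm
          FileMaxSize 20M
        </Feed>

        <Stream test.flv>
          Feed feed1.ffm
          Format flv
          VideoCodec flv
          VideoSize 320x240
          VideoFrameRate 25
          VideoBitRate 256
          AudioCodec libmp3lame
          AudioBitRate 64
          AudioSampleRate 22050
        </Stream>

    Start the server with ./ffserver -f /etc/ffserver.conf before running ./ffmpeg -i test1.mpg http://localhost:8090/feed1.ffm; if the crash persists with a single minimal stream, that points at the binaries rather than the configuration.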

  • Regular expression either/or not matching everything

    - by dwatransit
    I'm trying to parse an HTTP GET request to determine if the url contains any of a number of file types. If it does, I want to capture the entire request. There is something I don't understand about ORing. The following regular expression only captures part of it, and only if .flv is the first in the list of OR'd values. (I've obscured the urls with spaces because Stack Overflow limits hyperlinks) regex: GET.?(.flv)|(.mp4)|(.avi).? test text: GET http: // foo.server.com/download/0/37/3000016511/.flv?mt=video/xy match output: GET http: // foo.server.com/download/0/37/3000016511/.flv I don't understand why the .*? at the end of the regex isn't allowing it to capture the entire text. If I get rid of the ORing of file types, then it works. Here is the test code in case my explanation doesn't make sense: public static void main(String[] args) { // TODO Auto-generated method stub String sourcestring = "GET http: // foo.server.com/download/0/37/3000016511/.flv?mt=video/xy"; Pattern re = Pattern.compile("GET .?\.flv."); // this works //output: // [0][0] = GET http :// foo.server.com/download/0/37/3000016511/.flv?mt=video/xy // the match from the following ends with the ".flv", not the entire url. // also it only works if .flv is the first of the 3 ORd options //Pattern re = Pattern.compile("GET .?(\.flv)|(\.mp4)|(\.avi).?"); // output: //[0][0] = GET http: // foo.server.com/download/0/37/3000016511/.flv // [0][1] = .flv // [0][2] = null // [0][3] = null Matcher m = re.matcher(sourcestring); int mIdx = 0; while (m.find()){ for( int groupIdx = 0; groupIdx < m.groupCount()+1; groupIdx++ ){ System.out.println( "[" + mIdx + "][" + groupIdx + "] = " + m.group(groupIdx)); } mIdx++; } } }
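
    The unconstrained alternation is the culprit: | has the lowest precedence, so GET.?(.flv)|(.mp4)|(.avi).? is read as three separate alternatives, and only the first one contains the GET prefix and the trailing wildcard. A minimal sketch of the usual fix is to confine the alternation to one group (and escape the dot); the exact pattern below is an assumption about the intended match, reusing the question's test string:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class RegexAlternationSketch {
            public static void main(String[] args) {
                String sourcestring = "GET http: // foo.server.com/download/0/37/3000016511/.flv?mt=video/xy";
                // The group limits | to the extensions; the trailing .* now applies to every alternative.
                Pattern re = Pattern.compile("GET .*?\\.(flv|mp4|avi).*");
                Matcher m = re.matcher(sourcestring);
                if (m.find()) {
                    System.out.println(m.group(0)); // the entire request line
                    System.out.println(m.group(1)); // the extension that matched, e.g. "flv"
                }
            }
        }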

  • How can I display a gif on a subview in iPhone using glgif?

    - by iPhoney
    I want to display a gif image on a subview in iPhone. The sample code glgif shows the gif image on the controller's root view, but when I make the following modifications, the application just crashes: IBOutlet UIView *gifView; // gifView is added as a subview of controller's root view //PlayerView *plView = (PlayerView *)self.view; PlayerView *plView = (PlayerView *)gifView; // Load test.gif VideoSource NSString *str = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"gif"]; FILE *fp = fopen([str UTF8String], "r"); VideoSource *src = VideoSource_init(fp, VIDEOSOURCE_FILE); src->writeable = false; // Init video using VideoSource Video *vid = [[GifVideo alloc] initWithSource:src inContext:[plView context]]; VideoSource_release(src); // Start if loaded if (vid) { [plView startAnimation:vid]; [vid release]; return; } // Cleanup if failed fclose(fp); Now the app crashes in this line: Video *vid = [[GifVideo alloc] initWithSource:src inContext:[plView context]]; with the error message:-[UIView context]: unrecognized selector sent to instance. Any ideas about how to add the plView to the subview properly?
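
    The cast is the likely cause of the crash: casting the IBOutlet does not change its class, so gifView is still a plain UIView and has no context method, which is exactly what the unrecognized-selector error says. A heavily hedged sketch of one way around it, assuming the sample's PlayerView can be created with the standard initWithFrame: initializer and then added as a subview (only the class and function names come from the glgif sample; the init path is an assumption):

        // Create a real PlayerView instead of casting a plain UIView to one.
        PlayerView *plView = [[PlayerView alloc] initWithFrame:gifView.bounds];
        plView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
        [gifView addSubview:plView];   // gifView stays a plain container view
        [plView release];              // the superview retains it

        NSString *str = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"gif"];
        FILE *fp = fopen([str UTF8String], "r");
        VideoSource *src = VideoSource_init(fp, VIDEOSOURCE_FILE);
        src->writeable = false;

        // plView now really is a PlayerView, so [plView context] resolves.
        Video *vid = [[GifVideo alloc] initWithSource:src inContext:[plView context]];
        VideoSource_release(src);
        if (vid) {
            [plView startAnimation:vid];
            [vid release];
        } else {
            fclose(fp);
        }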

  • How to get a fully transparent backbuffer in directx 9 without vista Desktop Window Manager

    - by flawlesslyfaulted
    I currently have an ActiveX control that initiates a media (video/audio) framework another development group in my company developed, and I am providing a window handle to that code. That handle is being used by their rendering plugin in the pipeline, which uses Direct3D for rendering the video using that handle. I have separate LPDIRECT3D9EX and LPDIRECT3DDEVICE9EX pointers that I initialize in my ActiveX control. I am trying to clear a backbuffer to transparent and then use DirectX drawing primitives to draw on that backbuffer, producing a transparent window with my drawing primitives over the streaming video on the DirectX surface below. It appears that clearing a device backbuffer with full alpha transparency is ignored by DirectX. d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_RGBA(0, 0, 1, 0 /*full alpha*/), 1.0f, 0); I can see the objects I draw, but they are drawn on top of a backbuffer that has the RGB color specified without the alpha value. The project linked to (http://www.codeproject.com/KB/directx/umvistad3d.aspx) in the Stack Overflow question below does what I want, but it requires Vista's Desktop Window Manager and won't work for XP. http://stackoverflow.com/questions/148275/how-do-i-draw-transparent-directx-content-in-a-transparent-window I have tried with D3DRS_ALPHABLENDENABLE set to true and a configured blend, to no avail. I have also tried to have pixels with full alpha values not rendered, using a D3DRS_ALPHATESTENABLE, D3DRS_ALPHAREF, and D3DRS_ALPHAFUNC setup, but this doesn't work either. I have tried using ColorFill with alpha after retrieving the backbuffer with GetBackBuffer, but this doesn't work either (again only RGB is used). Finally I have tried creating a texture, selecting a surface, color-filling that surface with a fully transparent alpha value, then loading that surface onto the backbuffer, but only the RGB values appear to be used. I have checked the capabilities using DXCapsViewer.exe and the D3DFMT_A8R8G8B8 format that I am using for the backbuffer is valid, so it can't be that. Has anyone gotten a transparent backbuffer in DirectX to work in XP?
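
    Without DWM composition there is nothing on XP that ever reads the alpha channel of a swap chain presented to the desktop, so the alpha written by Clear() or ColorFill is simply dropped at present time; that is why every variant above shows only RGB. The XP-era workaround is to render into an offscreen A8R8G8B8 target, pull the pixels back, and hand them to a layered window via UpdateLayeredWindow. The sketch below is only an outline of that path (window/DC handles, sizes and the pixel copy are elided), not drop-in code:

        // Outline: per-pixel-alpha overlay on XP through a WS_EX_LAYERED window.
        IDirect3DSurface9 *rt = NULL, *sys = NULL;

        // 1) Draw the primitives into an offscreen render target that keeps alpha.
        d3ddev->CreateRenderTarget(w, h, D3DFMT_A8R8G8B8, D3DMULTISAMPLE_NONE, 0, FALSE, &rt, NULL);
        d3ddev->SetRenderTarget(0, rt);
        d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0);
        // ... render the overlay primitives here with alpha blending enabled ...

        // 2) Copy the render target into a lockable system-memory surface.
        d3ddev->CreateOffscreenPlainSurface(w, h, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &sys, NULL);
        d3ddev->GetRenderTargetData(rt, sys);

        // 3) Copy the locked pixels into a premultiplied 32-bit DIB selected into hdcMem.
        D3DLOCKED_RECT lr;
        sys->LockRect(&lr, NULL, D3DLOCK_READONLY);
        // ... memcpy lr.pBits row by row, premultiplying RGB by alpha ...
        sys->UnlockRect();

        // 4) Let the window manager blend it; AC_SRC_ALPHA enables per-pixel alpha.
        BLENDFUNCTION blend = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
        SIZE size = { w, h };
        POINT srcPos = { 0, 0 };
        UpdateLayeredWindow(hwndOverlay, hdcScreen, NULL, &size, hdcMem, &srcPos, 0, &blend, ULW_ALPHA);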

  • regex split and extract multiple parts from a string

    - by nLL
    I am trying to extract some parts of the "Video:" line from the text below. Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (300 00/1) -> 14.93 (1000/67) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\a.3gp': Metadata: major_brand : 3gp5 minor_version : 0 compatible_brands: 3gp5isom Duration: 00:00:45.82, start: 0.000000, bitrate: 357 kb/s Stream #0.0(und): Video: mpeg4, yuv420p, 352x276 [PAR 1:1 DAR 88:69], 344 kb /s, 14.93 fps, 14.93 tbr, 90k tbn, 30k tbc Stream #0.1(und): Audio: aac, 16000 Hz, mono, s16, 11 kb/s Stream #0.2(und): Data: mp4s / 0x7334706D, 0 kb/s Stream #0.3(und): Data: mp4s / 0x7334706D, 0 kb/s* This is output from the ffmpeg command line, from which I can get the Video: part with private string ExtractVideoFormat(string rawInfo) { string v = string.Empty; Regex re = new Regex("[V|v]ideo:.*", RegexOptions.Compiled); Match m = re.Match(rawInfo); if (m.Success) { v = m.Value; } return v; } and the result is mpeg4, yuv420p, 352x276 [PAR 1:1 DAR 88:69], 344 kb What I am trying to do is somehow split that line and get mpeg4 yuv420p 352x276 [PAR 1:1 DAR 88:69] 344 kb assigned to different string objects instead of a single one.
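
    One way to get the individual pieces in a single pass — a minimal sketch, assuming the line always follows the codec, pixel format, size, optional [PAR ...] block, bitrate order shown above — is to replace the blanket [Vv]ideo:.* pattern with named groups:

        using System;
        using System.Text.RegularExpressions;

        class VideoLineSketch
        {
            static void Main()
            {
                string line = "Video: mpeg4, yuv420p, 352x276 [PAR 1:1 DAR 88:69], 344 kb/s, 14.93 fps";
                // Each named group captures one comma-separated field; the [...] part is optional.
                var re = new Regex(@"Video: (?<codec>[^,]+), (?<pixfmt>[^,]+), (?<size>\S+)(?: \[(?<par>[^\]]+)\])?, (?<bitrate>\d+ kb)");
                Match m = re.Match(line);
                if (m.Success)
                {
                    string codec   = m.Groups["codec"].Value;    // "mpeg4"
                    string pixfmt  = m.Groups["pixfmt"].Value;   // "yuv420p"
                    string size    = m.Groups["size"].Value;     // "352x276"
                    string par     = m.Groups["par"].Value;      // "PAR 1:1 DAR 88:69"
                    string bitrate = m.Groups["bitrate"].Value;  // "344 kb"
                    Console.WriteLine(codec + " | " + pixfmt + " | " + size + " | " + par + " | " + bitrate);
                }
            }
        }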

  • Add a child inside a newly created instance, inside of a loop in AS3

    - by HeroicNate
    I am trying to create a gallery where each thumb is housed inside of its own movie clip that will have more data, but it keeps failing because it won't let me refer to the newly created instance of the movie clip. Below is what I am trying to do. var xml:XML; var xmlReq:URLRequest = new URLRequest("xml.xml"); var xmlLoader:URLLoader = new URLLoader(); var imageLoader:Loader; var vidThumbn:ThumbNail; var next_y:Number = 0; for(var i:int = 0; i < xml.downloads.videos.video.length(); i++) { vidThumbn = new ThumbNail(); imageLoader = new Loader(); imageLoader.load(new URLRequest(xml.downloads.videos.video[i].ThumbnailImage)); vidThumbn.y = next_y; vidThumbn.x = 0; next_y += 117; imageLoader.name = xml.downloads.videos.video[i].Files[0].File.URL; videoBox.thumbList.thumbListHolder.addChild(vidThumbn); videoBox.thumbList.thumbListHolder.vidThumbn.addChild(imageLoader); } It dies every time on that last line. How do I refer to that vidThumbn instance so I can add the imageLoader? I don't know what I'm missing. It feels like it should work.
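
    The crash comes from the very last line: vidThumbn is a local variable, not a property of thumbListHolder, so the lookup videoBox.thumbList.thumbListHolder.vidThumbn fails at runtime. The local reference that was just created is all that's needed — a minimal sketch of the corrected loop body, reusing the question's names:

        for (var i:int = 0; i < xml.downloads.videos.video.length(); i++) {
            vidThumbn = new ThumbNail();
            imageLoader = new Loader();
            imageLoader.load(new URLRequest(xml.downloads.videos.video[i].ThumbnailImage));

            vidThumbn.y = next_y;
            vidThumbn.x = 0;
            next_y += 117;
            imageLoader.name = xml.downloads.videos.video[i].Files[0].File.URL;

            videoBox.thumbList.thumbListHolder.addChild(vidThumbn);
            // Add the loader to the clip through the local reference;
            // "thumbListHolder.vidThumbn" is not a defined property.
            vidThumbn.addChild(imageLoader);
        }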

  • Anchor HTML DIV tag to right of screen

    - by Phillip
    Hello, I have a site that has two DIV tags, one floats left, the other floats right. The left DIV contains text for the page; the other DIV is used to display a video. When I have text in the left DIV that fills the entire width of the screen, the video shows up where I want it. However, a few pages have very little text and cause the video to show up in the right-center of the screen. I want to anchor this DIV to the right of the screen regardless of how much text is shown. I don't seem to get this problem in lower resolutions; it occurs more in higher resolutions (such as 1280x1024). You can see an example on these pages: http://www.quilnet.com/TechSupport.aspx - Positions the right DIV where I want it regardless of the resolution. http://www.quilnet.com/ContactUs.aspx - Makes the right DIV closer to the center in higher resolutions. I want it in the position it has on the TechSupport.aspx page. I am trying to refrain from using the width parameter because I want the layout to be resolution-independent. I don't want my viewers to have to move a scroll bar left and right. Any ideas? Thanks!
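
    A minimal sketch of one way to pin the video container to the right edge regardless of how much text the left column holds (the class names here are placeholders, not the ones actually used on quilnet.com):

        /* The wrapper becomes the positioning context for the video box. */
        .page-wrap {
            position: relative;
        }
        .video-box {
            position: absolute;   /* taken out of the text flow entirely */
            top: 0;
            right: 0;             /* glued to the right edge at any resolution */
        }
        .text-box {
            margin-right: 340px;  /* keep text clear of the player; match the player's width */
        }

    An alternative that keeps the floats is to place the right-floated video DIV before the text DIV in the markup, so the float is established at the right edge before the (possibly short) text flows around it.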

  • Getting Depth Value on Kinect SDK 1.6

    - by AlexanderPD
    This is my first try with the Kinect and the Kinect SDK, so I'm having a lot of "newbie issues" :) My goal is to point my mouse at the Kinect standard video output and get the depth value. I already have both the normal video and the depth video outputs by using the two "Color Basic-WPF" and "Depth Basic-WPF" samples, and handling mouse events or position is not a problem. In fact I already did all of that and I already got a depth value, but this value is always HIGHLY imprecise. It jumps from 500 to 4000 just by moving to the next pixel on a flat surface. So... I'm pretty sure I'm reading the depth value the wrong way. This is how I read it: short debugValue = depthPixels[x*y].Depth; debug.Text = "X = "+x+", Y = "+y+", value = "+debugValue.ToString(); I know it's pretty out of context; this little piece of code is inside the same SensorDepthFrameReady function in "Depth Basic-WPF"! "x" and "y" are the mouse coordinates and depthPixels is of type DepthImagePixel[], a temporary array filled with the "depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);" instruction. The depth frame is filled here: DepthImageFrame depthFrame = e.OpenDepthImageFrame() The "e" comes from here: private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e) and this last one is called here: this.sensor.DepthFrameReady += this.SensorDepthFrameReady; How must I handle the depth value I get? I know the value must be between 800 and 4000, but I get values between about 500 and about 8000. I already googled a lot (here on SO too) and I still can't understand whether the depth value is 11 or 13 bits. The SDK examples shrink this value to 8 bits, and this makes even more confusion in my head :(
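
    The jumpy readings are usually an indexing problem rather than a bit-depth problem: the frame is a row-major array, so the pixel under the mouse lives at y * frameWidth + x, while x * y lands on an unrelated pixel for almost every position. A minimal sketch, assuming the same field names as the Depth Basic-WPF sample (and that x and y have already been mapped from the displayed image's scale back to the depth frame's resolution):

        // Inside SensorDepthFrameReady, after depthFrame.CopyDepthImagePixelDataTo(this.depthPixels):
        int frameWidth = depthFrame.Width;          // 640 for the default depth stream
        int index = y * frameWidth + x;             // row-major index, not x * y

        // DepthImagePixel.Depth is already the distance in millimetres,
        // with the player index stripped off, so no bit shifting is needed here.
        short depthMm = this.depthPixels[index].Depth;
        debug.Text = "X = " + x + ", Y = " + y + ", depth = " + depthMm + " mm";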

  • Calling a subclass method from a superclass

    - by Shaun
    Preface: This is in the context of a Rails application. The question, however, is specific to Ruby. Let's say I have a Media object. class Media < ActiveRecord::Base end I've extended it in a few subclasses: class Image < Media def show # logic end end class Video < Media def show # logic end end From within the Media class, I want to call the implementation of show from the proper subclass. So, from Media, if self is a Video, then it would call Video's show method. If self is instead an Image, it would call Image's show method. Coming from a Java background, the first thing that popped into my head was 'create an abstract method in the superclass'. However, I've read in several places (including Stack Overflow) that abstract methods aren't the best way to deal with this in Ruby. With that in mind, I started researching typecasting and discovered that this is also a relic of Java thinking that I need to banish from my mind when dealing with Ruby. Defeated, I started coding something that looked like this: def superclass_method # logic this_media = self.type.constantize.find(self.id) this_media.show end I've been coding in Ruby/Rails for a while now, but since this was my first time trying out this behavior and existing resources didn't answer my question directly, I wanted to get feedback from more-seasoned developers on how to accomplish my task. So, how can I call a subclass's implementation of a method from the superclass in Rails? Is there a better way than what I ended up (almost) implementing?
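
    In Ruby this is plain late binding: a bare call to show inside a Media instance method is dispatched on the receiver's actual class, so if self is a Video, Video#show runs — no constantize/find round-trip and no typecasting needed. A minimal sketch (the NotImplementedError guard stands in for Java's abstract method and is optional):

        class Media < ActiveRecord::Base
          def superclass_method
            # `self` is already a Video or an Image, so this calls the subclass's show.
            show
          end

          def show
            raise NotImplementedError, "#{self.class.name} must implement #show"
          end
        end

        class Video < Media
          def show
            "video-specific logic"
          end
        end

        class Image < Media
          def show
            "image-specific logic"
          end
        end

    Because a type column is already present, ActiveRecord's single table inheritance instantiates the right subclass on find, which is what makes the dispatch work without any extra query.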

  • grailsApplication access in Grails unit Test

    - by Reza
    I am trying to write unit tests for a service which use grailsApplication.config to do some settings. It seems that in my unit tests that service instance could not access the config file (null pointer) for its setting while it could access that setting when I run "run-app". How could I configure the service to access grailsApplication service in my unit tests. class MapCloudMediaServerControllerTests { def grailsApplication @Before public void setUp(){ grailsApplication.config= ''' video{ location="C:\\tmp\\" // or shared filesystem drive for a cluster yamdi{ path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\yamdi" } ffmpeg { fileExtension = "flv" // use flv or mp4 conversionArgs = "-b 600k -r 24 -ar 22050 -ab 96k" path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffmpeg" makethumb = "-an -ss 00:00:03 -an -r 2 -vframes 1 -y -f mjpeg" } ffprobe { path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffprobe" params="" } flowplayer { version = "3.1.2" } swfobject { version = "" qtfaststart { path= "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\qtfaststart" } } ''' } @Test void testMpegtoFlvConvertor() { log.info "In test Mpg to Flv Convertor function!" def controller=new MapCloudMediaServerController() assert controller!=null controller.videoService=new VideoService() assert controller.videoService!=null log.info "Is the video service null? ${controller.videoService==null}" controller.videoService.grailsApplication=grailsApplication log.info "Is grailsApplication null? ${controller.videoService.grailsApplication==null}" //Very important part for simulating the HTTP request controller.metaClass.request = new MockMultipartHttpServletRequest() controller.request.contentType="video/mpg" controller.request.content= new File("..\\MapCloudMediaServer\\web-app\\videoclips\\sample3.mpg").getBytes() controller.mpegtoFlvConvertor() byte[] videoOut=IOUtils.toByteArray(controller.response.getOutputStream()) def outputFile=new File("..\\MapCloudMediaServer\\web-app\\videoclips\\testsample3.flv") outputFile.append(videoOut) } }
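
    One minimal sketch of the usual fix: grailsApplication.config expects a ConfigObject, not a String, so in a unit test the snippet is run through ConfigSlurper first and then handed to the service (here via a lightweight map stub, which works when the service declares a plain def grailsApplication; the trimmed config keys are just examples taken from the question):

        @Before
        public void setUp() {
            def configText = '''
                video {
                    location = "C:\\tmp\\"
                    ffmpeg {
                        fileExtension = "flv"
                        path = "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffmpeg"
                    }
                }
            '''
            // Parse the text into a ConfigObject instead of assigning the raw String.
            def parsedConfig = new ConfigSlurper().parse(configText)
            videoService = new VideoService()
            videoService.grailsApplication = [config: parsedConfig]
        }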

  • Flex 4 release: changes to the application are not showing up

    - by guacamoly
    I just took over a client's Flex project and I can't get the app to reflect even a simple trace statement. Before I took over, the project was last successfully built using the Flash Builder Beta 2 environment/SDK. I have the latest release version of Flash Builder 4. Upon importing the project into FB4, I got a ton of errors, most of them because of the changes made to the SDK from Beta 2 to release. Some of the things I corrected: - mx namespace from library://ns.adobe.com/flex/halo to library://ns.adobe.com/flex/mx - video player skinning: a lot of the state names for the video player component had been changed, and more required states had been added; there were also other video-related component and property names that had to be updated. I fixed all that and the application was finally able to compile (although with some warnings, mostly of the duplicate-variable type). The only thing now is that whatever change I make to the project doesn't get reflected in the build (debug or release). I changed existing traces and added additional traces. Nothing shows up. I even removed the applicationComplete property in the main.mxml. Everything still ran like nothing changed. Also I can't seem to debug the app. Whenever I try to debug, Flash Builder says "Swf Application doesn't contain the required debugging information ...". Anyone have any idea how I need to even begin tackling all this? Any help or pointers would be greatly appreciated.

  • jQuery dynamic .load - works for one but not the other?

    - by Alvin
    UPDATE: Site is online http://qwickqual.com/memorial/create/ under "Memoria Media" - Click on "Our Videos" and it loads the list of categories click on any sub category and it goes through the process below ---------------- end edit ---------------------------------- UPDATED DESCRIPTION OF ERROR: All code is based on <li> objects being linked If I click on an <li> from the Initial page load to load: section 1: I can click on an <li> to load sub-categories section 2: I then click on an <li>, the query is made server returns section 3, section is not loaded to screen / and callback function is skipped over perhaps someone has run into a similar issue before?? ---------------- end edit ---------------------------------- I've also added comments to the code I have a jquery function that is setup to load categorized lists of thumbnails. So far the function is in use in 3 location. Each of them generates HTML using the same template under django. The function works perfectly in 2 of the 3 locations, and I"m plain stumped as to why the 3rd won't work. Below is the complete set of relevant javascript, the page load HTML for the relevant section. And 2 examples of HTML that is loaded through the script, 1 of them works, 1 of them doesn't and both are loaded into the same page load HTML Any ideas what I'm missing here? Or information I need to add to help debug? Currently posting this to a live server to interact with, been local only till now... Error: Script works properly through all levels of title="our_photos" Script loads 1st level of title="our_videos" Script will not load sub-category of title="our_videos" Example: From HTML below: Click on Script will query the server properly: GET http://localhost%3A8000/memorial/media%5Ftype/our%5Fvideos/4/ Script will not load the returned HTML into the #select_media div scopeChain: [Call, Call multi=false uri=/memorial/media_type/our_videos/, Window # 0=Call 1=Call 2=window] relative vars: label = "our_videos" wrapper = "media" uri = "/memorial/media_type/our_videos/" multi = false Javascript <script type="text/javascript"> // this piece is where I'm having trouble in the grand scheme of things // label = piece of class tag // wrapper = tag to load everything inside of // uri = base of the page to load // multi = not relevant for this piece but needed to pass on to next function function img_thumb_loader(label, wrapper, uri, multi) { if(!(wrapper)) { wrapper = label } $('.'+label+'_category').click(function () { // show the loading animation $('div#'+wrapper+'_load').show(); // get var of current object type = $(this).attr('title') || ''; // load it into the screen - this is the error // when I click on an <li> from section 2 below it will query server // (Tamper data & server see it - & return section 3 below // But will not load into the screen on return // also skips over the callback function $('#select_'+label).load(uri+type+'/', '', function() { $('div#'+wrapper+'_load').hide(); $('input.img_'+label+'_field').each(function() { img = $(this).attr('value'); $('li#img_'+label+'-'+img).css('selected_thumb'); }); img_thumb_selected(label); window[label+'_loader'](); }); }); $('.img_'+label).click(function () { if($(this).hasClass('selected_thumb')) { $(this).removeClass('selected_thumb'); id = $(this).attr('title'); $('.img_'+label+'_selected[value="'+id+'"]').remove(); } else { if(!(multi)) { previous = $('.img_'+label+'_selected').val(); $('#img_'+label+'-'+previous).removeClass('selected_thumb'); $('.img_'+label+'_selected').remove(); } 
$(this).addClass('selected_thumb'); id = $(this).attr('title'); $('#select_'+wrapper).after('<input class="img_'+label+'_selected" id="img_'+label+'_field-'+id+'" type="hidden" name="imgs[]" value="'+id+'" />'); } }); img_thumb_selected(label); } function img_thumb_selected(label) { $('.img_'+label+'_selected').each(function() { current = $(this).val(); if(current) { $('#img_'+label+'-'+current).addClass('selected_thumb'); } }); } function media_type() { $('.media_type').click(function () { $('#media_load').show(); type = $(this).attr('title') || ''; $('#select_media').load('/memorial/media_type/'+type+'/', '', function() { $('#select_media').wrapInner('<div id="select_'+type+'"></div>'); $('#select_media').append('<ul class="root_link"><h3><a class="load_media" onclick="return false;" href="#">Return to Select Media Type</a></h3></ul>'); load_media_type(); $('#media_load').hide(); window[type+'_loader'](); }); }); } media_type(); function load_media_type() { $('.load_media').click(function () { $('#media_load').show(); $('#select_media').load('{% url mem_media_type %}', '', function() { $('#media_load').hide(); media_type(); }); }); } function our_photos_loader() { img_thumb_loader('our_photos', 'media', '{% url mem_our_photos %}', true); } function our_videos_loader() { img_thumb_loader('our_videos', 'media', '{% url mem_our_videos %}', false); } </script> HTML - Initial Page load <fieldset> <legend>Memorial Media</legend> <div style="display: none;" id="media_load" class="loading"/> <div id="select_media"> <ul style="width: 528px;" class="initial"> <li title="your_photos" class="media_type"><div class="photo_select_upload"><h3>Your Photos</h3></div></li> <li title="our_photos" class="media_type"><div class="photo_select"><h3>Our Photos</h3></div></li> <li title="our_videos" class="media_type"><div class="video_select"><h3>Our Videos</h3></div></li> </ul> </div> </fieldset> HTML - Returned from Click on section 1 this section can make calls to subcategories and it will work <br class="clear" /> <ul class="thumb_sub_category" style="width: 352px;"> <li id="our_photos_category-29" class="our_photos_category" title="29"> <div> <span style="background-image: url(/site_media/photologue/photos/cache/stuff_004_thumbnail.jpg);" class="thumb"><span></span></span> <p>Birds 1</p> </div> </li> <li id="our_photos_category-25" class="our_photos_category" title="25"> <div> <span style="background-image: url(/site_media/photologue/photos/cache/dsc_0035_thumbnail.jpg);" class="thumb"><span></span></span> <p>Dogs 1</p> </div> </li> </ul> HTML - Returned from click on Section 2 Having trouble with sub-categories from this area <br class="clear" /> <ul class="thumb_sub_category" style="width: 528px;"> <li id="our_videos_category-1" class="our_videos_category" title="1"> <div> <span style="background-image: url(/site_media/photologue/photos/cache/forest-1_thumbnail.jpg);" class="thumb"><span></span></span> <p>Video 1</p> </div> </li> <li id="our_videos_category-3" class="our_videos_category" title="3"> <div> <span style="background-image: url(/site_media/photologue/photos/cache/mountain-1_thumbnail.jpg);" class="thumb"><span></span></span> <p>Video 3</p> </div> </li> <li id="our_videos_category-4" class="our_videos_category" title="4"> <div> <span style="background-image: url(/site_media/photologue/photos/cache/mountain-3_thumbnail.jpg);" class="thumb"><span></span></span> <p>Video 4</p> </div> </li> </ul> HTML that fails to load inside - Section 3 <br class="clear" /> <ul class="thumb_sub_category" 
style="width: 528px;"> <li id="our_videos_category-1" class="our_videos_category" title="1"> <div> <span style="background-image: url(/site_media/photologue/photos/cache/forest-1_thumbnail.jpg);" class="thumb"><span></span></span> <p>Video 1</p> </div> </li> <li id="our_videos_category-3" class="our_videos_category" title="3"> <div> <span style="background-image: url(/site_media/photologue/photos/cache/mountain-1_thumbnail.jpg);" class="thumb"><span></span></span> <p>Video 3</p> </div> </li> <li id="our_videos_category-4" class="our_videos_category" title="4"> <div> <span style="background-image: url(/site_media/photologue/photos/cache/mountain-3_thumbnail.jpg);" class="thumb"><span></span></span> <p>Video 4</p> </div> </li> </ul>

  • How to call a method from another class that's been instantiated within the current class

    - by Pavan
    My screen has a few views laid out like this: viewX, a video screen, fills the top portion of the screen, with a small viewY overlaid on its right side; viewZ, a view containing many square thumbnail views, sits below it. viewY is a custom UIView I created; it contains a method I would like to call that toggles the hidden property of that view, and when it hides, a little button replaces it in the top right corner, on top of viewX (the video layer). My question is, I don't know how to register for touch events so that a touch is recognised no matter which view the user touches on the screen. At the moment I'm handling the touch events for each view inside it, so all works well... However, what I'm trying to do is: when the user taps anywhere on the screen except on viewY, viewY should disappear, by calling that method in the viewY class. This viewY class is instantiated and has no xib file attached to it; the UIView is created programmatically in the viewY class. This whole class for viewY's behaviour is instantiated in viewX, the video view. My boss says add delegates, although I have no clue how to do that... any help? Is there any way I can just make it really simple and be able to say REMOVE VIEW no matter which class I'm calling from? Also, I've seen other people achieve this by using those funky arrows (->, <-, etc.), although I'm not sure if that's what I need or how to implement such a thing. I think I've made my question quite complicated, but I really mean it to be a simple one, and I know it can be done in an easy way!
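
    One simple way to get "hide viewY from anywhere" without wiring a delegate through every class is NSNotificationCenter: any object can post a named notification, and viewY hides itself when it receives it. A minimal sketch — the notification name and the hideMe: selector are placeholders, not names from the project:

        // In the viewY class, subscribe once (for example in its init method):
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(hideMe:)
                                                     name:@"HideViewYNotification"
                                                   object:nil];

        // Still in viewY: the handler that runs the existing toggle logic.
        - (void)hideMe:(NSNotification *)note {
            self.hidden = YES;   // or call the existing toggle method here
        }

        // From any other class (viewX, viewZ, whichever touch handler fires):
        [[NSNotificationCenter defaultCenter] postNotificationName:@"HideViewYNotification" object:nil];

        // And in viewY's dealloc, stop observing:
        [[NSNotificationCenter defaultCenter] removeObserver:self];

    A delegate (a protocol plus a weak back-reference) does the same job with tighter one-to-one coupling; the notification route is simply the least wiring for a broadcast-style "somewhere else was tapped" signal.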

  • TCP/IP RST being sent differently in different browsers.

    - by Brian
    On Mac OS X (10.6), if I start a YouTube video download and pull the Ethernet cable for 5 or so seconds, then plug it back in, I get varying results depending on the browser. With Opera and Chrome, after I plug the cable back in the video continues to load. But with Safari and Firefox, it never does. Using Wireshark to look at the traffic, I found that Opera and Chrome simply ACK the first packet from YouTube after the cable has been plugged back in, but Safari and Firefox set the RST flag (0x4) in the TCP header and no more traffic follows. If I put a hub between the machine and the internet connection, the problem goes away and all four browsers continue loading the video when the cable is plugged back into the hub. Again, looking at the Wireshark logs, it's evident that the machine doesn't see the Multicast connection close and there is simply a delay in the packets flowing through. So it seems that if Safari and Firefox see a Multicast connection close, and then later see data on that same connection, they will send an RST. My question is why? What is the correct course of action, and why are two of the four browsers doing it one way, while the other two are doing it another way? Is there somewhere in the code that I can see where this is happening in Firefox, for instance? Thank you very much.

  • jQuery Fancybox - popup window doesn't fully expand

    - by fmz
    I am using jQuery Fancybox to display a number of Flash videos on a site and I am having trouble with the window not opening fully on the first click in Firefox. It works fine in other browsers. Here is the jQuery: <script type="text/javascript"> $(document).ready(function() { $("a.videoLink").fancybox({ 'titleShow' : false, 'autoscale' : true, 'width' : '820', 'height' : '620', 'transitionIn' : 'elastic', 'transitionOut' : 'elastic' }); }); </script> Here is the html: <tr> <td class="title"><a class="videoLink" href="#video-content30">CPR Lesson 1 Movie</a></td> <td class="time">38:39</td> <td class="video" style="display:none"> <div id="video-content30"> <script type='text/javascript'> var flashvars = { file: 'http://www.stockmarketcpr.com/smsys/link/CPR-Lesson-1-Movie.flv', id: '30' }; var params = { wmode: 'opaque', bgcolor: '#CCCCCC', allowfullscreen: 'true', allowscriptaccess: 'always' }; swfobject.embedSWF('http://www.stockmarketcpr.com/_flash/player.swf', 'player30','800','600', '9.0.0','expressInstall.swf', flashvars, params); </script> <div id="player30"></div> </div> </td> </tr> I end up getting a quarter inch high, full-width window on the first click. The second click plays fine. I would appreciate any assistance. Thank you!
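
    Fancybox 1.x option keys are case-sensitive (so 'autoscale' is silently ignored), and when it opens hidden inline content it measures that content's size, which can be near zero before the SWF has been written into the div — a common cause of the sliver-high window on the first open. A minimal sketch of the usual workaround, with corrected casing, numeric dimensions and automatic sizing switched off (exact option support depends on the Fancybox 1.x version in use, so treat the option set as an assumption):

        $(document).ready(function () {
            $("a.videoLink").fancybox({
                'titleShow'      : false,
                'autoScale'      : false,   // capital S; the lowercase key does nothing
                'autoDimensions' : false,   // don't measure the (still empty) hidden div
                'width'          : 820,     // numbers, not strings
                'height'         : 620,
                'transitionIn'   : 'elastic',
                'transitionOut'  : 'elastic'
            });
        });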

  • Why do the usable sizes differ

    - by Raigex
    Again, I don't really know how to phrase the question, so I will explain. I have a video recorder application. I open my camera with cameraRecorder = Camera.open(1); //(this is the front facing camera) and get the camera parameters and all supported preview sizes: CameraParameters tmpParams = cameraRecorder.getParameters(); List<Camera.Size> tmpList = tmpParams.getSupportedPreviewSizes(); One of the preview sizes on the Galaxy Tab 10.1 running ICS (4.0.4) is 800x600, but when I try to set the video size on my MediaRecorder with mediaRecorder.setVideoSize(800,600); I get this error: 12-19 17:27:55.035: E/CameraSource(110): Video dimension (800x600) is unsupported 12-19 17:27:55.035: E/StagefrightRecorder(110): cameraSource do not init 12-19 17:27:55.035: E/StagefrightRecorder(110): setupCameraSource failed. (-19) 12-19 17:27:55.035: E/StagefrightRecorder(110): setupMediaSource is failed. (-19) 12-19 17:27:55.035: E/StagefrightRecorder(110): setupMPEG4Recording is failed. (-19) 12-19 17:27:55.035: E/MediaRecorder(30119): start failed: -19 Does anyone know why this discrepancy might exist? (I know one of the supported record sizes is 1280x720, but that is too big for me.)
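
    Preview sizes and recordable video sizes are separate capability lists: a camera can preview at 800x600 without the recording pipeline supporting that frame size, which is what the CameraSource error is saying. A minimal sketch of the usual way to pick a size the encoder accepts — query getSupportedVideoSizes() (API 11+) and fall back to a CamcorderProfile when that list is null:

        Camera.Parameters params = cameraRecorder.getParameters();
        List<Camera.Size> videoSizes = params.getSupportedVideoSizes();

        if (videoSizes != null) {
            // Pass one of these to mediaRecorder.setVideoSize() instead of a preview size.
            for (Camera.Size s : videoSizes) {
                Log.d("VideoSizes", s.width + "x" + s.height);
            }
        } else {
            // null means recordable sizes are tied to preview sizes on this device;
            // a CamcorderProfile bundles a size/frame-rate/codec combination that works.
            CamcorderProfile profile = CamcorderProfile.get(1, CamcorderProfile.QUALITY_LOW);
            mediaRecorder.setProfile(profile);
        }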

  • Add in integer to returned value with jQuery

    - by danferth
    I have a page with a div id="videoFrame" that holds the video tag. The videos have variable heights, so I have a function to grab the height value and plug it into the css of the div id="videoFrame" like so: var videoHeight = $('video').attr('height'); $('#videoFrame').css('height',videoHeight+'px'); This works great. But here's the part driving me crazy. I have a p tag with disclaimers at the bottom of the div id="videoFrame". So I would like to add an additional 26px to the returned height. I tried: var videoHeight = $('video').attr('height'); var frameHeight = videoHeight + 26; $('#videoFrame').css('height',frameHeight+'px'); But as you would expect it is appending 26 to the end of the returned value, i.e. if the returned value is 337, the output for var frameHeight is 33726. I cannot for the life of me figure out how to do this. Thanks in advance for any help
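
    .attr('height') returns a string, so + does string concatenation; converting the value to a number first gives ordinary arithmetic. A minimal sketch:

        // attr() hands back a string ("337"), so coerce it before adding.
        var videoHeight = parseInt($('video').attr('height'), 10);
        var frameHeight = videoHeight + 26;                  // 337 + 26 = 363
        $('#videoFrame').css('height', frameHeight + 'px');  // sets height: 363px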

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 days, etc.

    - by Kirzilla
    Hello, let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K and all information about the videos is stored in MySQL. The number of daily video views is about 1.5KK. As instruments we have the hard disk drive (text files), MySQL and Redis. The views to sort by: top viewed; top viewed last 24 hours; top viewed last 7 days; top viewed last 30 days; top rated last 365 days. How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate totals (for the last 24 hours) and update statistics in tables. At the beginning of every day we have to do the same but recalculate for the last 7 days, last 30 days and last 365 days. This method seems very poor to me because we have to store information about the last 365 days for each video to make correct calculations. Are there any other good methods? Probably we have to choose other instruments for this? Thank you.
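
    Since Redis is already on the list of instruments, one common sketch for exactly this workload is a sorted set per day: every view increments that video's member in the current day's set, and a nightly ZUNIONSTORE merges the last N daily sets into a precomputed "top viewed over N days" ranking (the key names below are placeholders):

        # On each view of video 42 (cheap enough to do inline, outside any batch):
        ZINCRBY views:2008-01-01 1 42

        # Once a day, merge the last 7 daily sets into the weekly ranking:
        ZUNIONSTORE top:7d 7 views:2008-01-01 views:2007-12-31 views:2007-12-30 views:2007-12-29 views:2007-12-28 views:2007-12-27 views:2007-12-26

        # Read the 10 most viewed videos of the last 7 days, highest first:
        ZREVRANGE top:7d 0 9 WITHSCORES

        # Let daily sets expire once they fall outside the largest window (365 days):
        EXPIRE views:2008-01-01 31536000

    The 30-day and 365-day rankings are the same ZUNIONSTORE over more keys, so only one sorted set per day has to be kept rather than per-video daily rows in MySQL.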

  • Static selection and Ruby on Rails objects

    - by Dave
    Hi all- I have a simple problem, but am having trouble wrapping my head around it. I have a video object that should have one or more "genres". This list of genres should be prepopulated, and then the user should just select one or more using autocomplete or some such. Here is the question: Is it worth creating a table with genres for the static selection? Or should it just be included in the presentation layer? If there is a static table, how do we name it correctly? I envision something like this: class Video < ActiveRecord::Base has_many :genres ... end class Genre < ... belongs_to :video ... end But then we get a table called genre that basically maps all the selected genres to their parent videos. There would need to be some static table to reference the static genres. Is this the best way to do it? Sorry if this was rambly, a little stream of consciousness. Thanks!
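
    A common sketch for this: keep a genres table that holds the static, prepopulated list, and join it to videos many-to-many instead of making each genre row belong to a single video (has_many :through works the same way if the join ever needs extra attributes):

        class Video < ActiveRecord::Base
          has_and_belongs_to_many :genres   # join table "genres_videos" with video_id and genre_id
        end

        class Genre < ActiveRecord::Base
          has_and_belongs_to_many :videos   # rows like "Action", "Comedy", ... seeded once
        end

        # Usage sketch: genre rows are shared; only join rows are created per video.
        video = Video.create(:title => "Surfing dogs")
        video.genres << Genre.find_by_name("Comedy")
        video.genres.map(&:name)   # => ["Comedy"]

    The autocomplete in the view then just searches the genres table, and selecting an entry adds a row to the join table rather than duplicating the genre.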

  • problem when uploading file

    - by Syom
    I have the form, and I want to upload two files. Here is the script <form action="form.php" method="post" enctype="multipart/form-data" /> <input type="file" name="video" /> <input type="file" name="picture" > <input type="submit" class="input" value="?????" /> <input type="hidden" name="MAX_FILE_SIZE" value="100000000" /> </form> form.php: <? print_r($_FILES); $video_name = $_FILES["video"]["name"]; $image_name = $_FILES["picture"]["name"]; echo "video",$video_name; echo "image",$image_name; //returns Array ( ) videoimage ?> When I try to upload a file greater than 10MB, it doesn't happen. I tried in many browsers. Maybe I must change some field in php.ini? But I don't have permission to change it on the server. So what can I do? Thanks
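
    The 10MB ceiling is almost certainly upload_max_filesize / post_max_size in php.ini, and those two directives cannot be raised with ini_set() at runtime. If the host runs PHP as an Apache module and allows overrides, they can be raised per directory instead — a minimal sketch of the relevant .htaccess (whether the host permits php_value overrides is an assumption):

        # .htaccess next to form.php (works with mod_php; ignored under CGI/FastCGI)
        php_value upload_max_filesize 100M
        php_value post_max_size       110M
        php_value max_execution_time  300
        php_value max_input_time      300

    On CGI/FastCGI hosts the same settings usually go into a per-directory php.ini (or .user.ini on newer PHP versions) instead; echo ini_get('upload_max_filesize'); shows which value is actually in effect. Note that the MAX_FILE_SIZE hidden field can only lower the limit further, it can't raise the server's limit.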

  • How to create a virtual audio device and stream audio input with it

    - by Steven Rosato
    Is it possible to create another audio device and redirect only wanted input streams to it? Here's my concrete problem: I am broadcasting a game via XFire and it uses the Windows audio device to capture any audio I receive. As I am broadcasting, other users who watch the video stream are communicating with me over Skype, and they hear themselves back within the video stream, which is entirely logical since I am broadcasting the audio I hear. What I want to do is create another audio device within Windows and redirect (pipe) ONLY the audio input from that game and not the input received from Skype. I would then tell XFire to use that newly created "virtual" audio device to broadcast, and therefore my partners won't hear themselves back. Is there any software that can do that, or can it be achieved natively with Windows? (I am under Windows 7.)

  • Disable linux read and write file cache on partition

    - by complistic
    How do I disable the Linux file cache on an xfs partition (both read and write)? We have an xfs partition over a hardware RAID that stores our raw HD video. Most of the shoots are 50-300GB each, so the Linux cache has a hit rate of 0.001%. I have tried the sync option but it still fills up the cache when copying the files (about 30x over per shoot :P). /etc/fstab: /dev/sdb1 /video xfs sync,noatime,nodiratime,logbufs=8 0 1 I'm running Debian Lenny if it helps.
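
    The mount options don't control the page cache — sync only forces writes out, it doesn't stop the copied data from being cached. What usually helps is either copying with direct I/O so the data bypasses the cache per file, or dropping the clean page cache after an ordinary copy. A minimal sketch of both, assuming plain file-by-file copies of the footage:

        # Copy one clip with O_DIRECT on both ends so it never enters the page cache
        # (the 16M block size is an assumption; match it to the RAID stripe for speed).
        dd if=/source/clip001.raw of=/video/clip001.raw bs=16M iflag=direct oflag=direct

        # Alternative after an ordinary cp: flush dirty pages, then drop the clean cache (as root).
        sync
        echo 1 > /proc/sys/vm/drop_caches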

  • How to adjust and combine multiple lower-quality photos into one better one using FOSS?

    - by Vi
    I have multiple noisy photos (captured without a tripod) that need to be adjusted (moved/rotated) and averaged. What is the best way to do this in Linux with FOSS console-based programs? The current way is something like: mplayer mf://*.JPG -vo yuv4mpeg:file=qqq.yuv transcode -i qqq.yuv -y null -J stabilize=maxshift=500:fieldsize=100:fieldnum=6:stepsize=50:shakiness=10 transcode -i qqq.yuv -J transform=smoothing=100000:sharpen=0:optzoom=0 -y raw -o www.yuv mplayer www.yuv -vo pnm gm convert -average 0*.ppm q.ppm i.e.: convert photos to video, apply transcode's "stabilize" filter, convert the video back to images, average the images. It works, but badly: the photos are still not perfectly aligned and the whole sequence is very slow. What is a better way of doing it?
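
    A shorter pipeline that is often used for exactly this job is Hugin's align_image_stack (it registers the frames against each other, handling shift and rotation) followed by a plain average — a minimal sketch, assuming the hugin-tools and graphicsmagick packages are installed:

        # Align every frame to the first one; writes aligned_0000.tif, aligned_0001.tif, ...
        align_image_stack -a aligned_ *.JPG

        # Average the aligned frames into a single, less noisy image.
        gm convert -average aligned_*.tif averaged.tif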

  • Most basic, low power home surveillance system

    - by cbp
    I am thinking of setting up a simple but effective surveillance system for my house that is: very low powered (preferably no PCs left running out of stand-by mode) and cheap. When motion (or sound) is detected, I would like it to send an email/phone alert to me, and record and upload video to the web (in case they steal the camera). So I imagine a system where I leave a netbook PC in stand-by mode and have it woken up by a motion detector. This initiates software that sends alerts and periodically uploads recorded video to the web. The software part is easy for me, but I'm not really a gadget man, so I'd like some advice on using a motion sensor of some sort to wake up the PC. Does anyone have some good advice? I know there are a couple of questions dealing with this topic already (see here: http://superuser.com/questions/3054/looking-for-a-moderately-priced-home-surveillance-setup, and here: http://superuser.com/questions/2929/can-you-suggest-a-great-home-security-setup-anti-burglars-e-t-c) - I am seeking more specific information with this question.

  • What's the performance on USB docking stations, and can they be used when laptop is closed?

    - by David
    I'm looking into a docking station for a Dell Studio laptop. I don't see the traditional docking stations I'm familiar with - the kind (for a Dell Latitude, for example) where you sit the laptop on top of a long row of pins. Instead, I'm seeing a lot of USB docking stations. When I close my laptop, I want it to go into sleep mode. If I then connect a USB docking station to the laptop while it's closed, will it wake up? What's the performance on USB 2.0 docking stations with a new Dell Studio? Can all of the video and internet traffic really go through a USB 2.0 connection while still providing the best video frame rates and internet speeds? When you undock, I assume you'd have to use the "Safely Remove Hardware" feature in Windows. Will that successfully 'remove' everything attached to the docking station - external drives, thumb drives, etc?
