Search Results

Search found 10738 results on 430 pages for 'streaming video'.


  • How can I change a video container without re-encoding or compressing the file?

    - by GiH
    When I ripped my Kill Bill DVD I used HandBrake and put it into a single AVI. I realize that I didn't get the subtitles, so what I want to do is convert the AVI to MKV and put the subtitles in the MKV. How do I go about doing this without losing any quality? I don't care about compressing or anything, I just want to change the container. If HandBrake can do it, I'd prefer to use that since I already have it.

    Read the article
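
    A hedged sketch of the remux route: as far as I know HandBrake always re-encodes, so it can't do a pure container change, but FFmpeg can copy the existing streams into an MKV and mux the subtitles in at the same time. File names below are placeholders; this is Python driving the ffmpeg CLI:

        import subprocess

        # Copy (not re-encode) every stream from the AVI into an MKV and
        # mux in an external subtitle file. No quality is lost because
        # nothing is re-compressed. File names are hypothetical.
        subprocess.run([
            "ffmpeg",
            "-i", "kill_bill.avi",   # source video
            "-i", "kill_bill.srt",   # subtitles to add
            "-map", "0", "-map", "1",
            "-c", "copy",            # stream copy: no re-encoding
            "-c:s", "srt",           # store the subtitles as an SRT track
            "kill_bill.mkv",
        ], check=True)

    mkvmerge (from MKVToolNix) is the other tool commonly suggested for exactly this job.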

  • How do I convert an animated GIF to a YouTube friendly video format?

    - by Dave Webb
    My son has made some animations with Pivot Stickfigure Animator which we'd like to upload to YouTube. The problem is Pivot saves as animated GIFs, which I can't upload to YouTube. The Wikipedia article recommends using Windows Movie Maker to convert GIF to WMV, but unfortunately I'm using Windows 7, for which you can get the new Windows Live Movie Maker, which doesn't seem to support GIFs. I Googled and found an article which said to use Beneton Movie GIF to convert animated GIF to AVI, but this relied on a 3rd-party application which wasn't installed and so failed. Installing the missing application - pjBmp2Avi - by hand and adding it to the path still didn't allow Beneton to do the conversion. I hoped FFmpeg might do the trick, but it only outputs to animated GIFs; it won't read from them. Further Googling found lots of applications with 30-day trials and so on, but I was hoping for something free. So, any suggestions on how I can convert an animated GIF to a movie file on Windows using free (as in beer) software?

    Read the article
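
    For what it's worth, current FFmpeg builds can read animated GIFs (older builds reportedly could not, which may explain the failure above). A minimal sketch, assuming a recent ffmpeg on the PATH and a placeholder file name:

        import subprocess

        # Convert an animated GIF to an MP4 that YouTube accepts.
        # yuv420p and even frame dimensions avoid player compatibility problems.
        subprocess.run([
            "ffmpeg",
            "-i", "animation.gif",                        # hypothetical input
            "-vf", "scale=trunc(iw/2)*2:trunc(ih/2)*2",   # force even width/height
            "-pix_fmt", "yuv420p",
            "-movflags", "+faststart",
            "animation.mp4",
        ], check=True)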

  • Can't get Intel GMA 500 video driver to work with Ubuntu 10.10 Netbook Edition

    - by Matthew
    First of all, I am completely new to Linux, so if you respond, please do so in a 'Linux for dummies' tone so that my brain will be able to process it. I recently installed Ubuntu on my Dell Inspiron Mini 1010. It has one GB of RAM and an Intel Atom processor that uses the Intel GMA 500 graphics accelerator driver under Windows, and it can run 1024x768 comfortably in XP. When I was installing Ubuntu I had quite a bit of trouble with my display, and I am still unable to adjust my settings from 800x600, and there is no hardware acceleration. I visited the Intel site and installed the Linux drivers with the help of a friend, but still no change. I tried adding resolution settings through xorg.conf but they could not be applied even after I added the values. I am probably going about this totally wrong, but I've spent quite a lot of time browsing through forums and still haven't found a solution. Any help would be greatly appreciated. Also, any other beginner tips that you have would be much appreciated. Thanks in advance, Matt

    Read the article
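
    While the driver question gets sorted out, X can sometimes be taught an extra resolution at runtime with cvt and xrandr, with no xorg.conf editing. A hedged sketch: the output name ("LVDS1" below) is a guess that varies per machine (check with xrandr -q), and the GMA 500 (Poulsbo) chip is notoriously poorly supported on Linux, so this may not work at all:

        import subprocess

        # Ask cvt for a 1024x768 modeline, then register it with X.
        modeline = subprocess.run(
            ["cvt", "1024", "768"], capture_output=True, text=True, check=True
        ).stdout.splitlines()[-1]        # the 'Modeline "1024x768_60.00" ...' line
        parts = modeline.split()[1:]     # drop the literal word "Modeline"
        name = parts[0].strip('"')

        subprocess.run(["xrandr", "--newmode", name, *parts[1:]], check=True)
        subprocess.run(["xrandr", "--addmode", "LVDS1", name], check=True)   # guessed output name
        subprocess.run(["xrandr", "--output", "LVDS1", "--mode", name], check=True)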

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

    - Backbone.js frontend
    - Rails 3.2
    - PostgreSQL
    - Resque
    - S3 for storage

    The flow of the app is as follows:

    1) Request from frontend: upload a video.
    2) Store the video.
    3) Query external APIs.
    4) Process / encode the video.
    5) Post back to the frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating the app across several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. Also, I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

    A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
    B) Backend server with the main database. Gets requests from A), saves uploads to C).
    C) S3 storage.
    D) Server for querying the external APIs. Basically just Resque workers that post info back to B).
    E) Server for video encoding. Processes videos uploaded to C) and uploads them back.

    So I will have:

            A) frontend
                  \
                   B) MAIN_APP/DB ----- C) S3 Storage (Files)
                      /       \             /
            D) ExternalAPI_queries   E) Video_Processing
               (redundant DB)           (redundant DB)

    All this will supposedly talk to each other via HTTP requests. My reasoning is that the video-processing part is by far the most resource-intensive, so on E) I would just run a barebones application that accepts requests and starts processing them. Questions:

    1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also keep duplicates of the database, I guess, for safety reasons). Is that the right approach, or should I have one database that everyone connects to (and if so, how)?
    2) Is it a good idea to separate the API queries from the video processing? Logically they are very close (the processing is determined by the results of the API queries), but resource-wise video processing is way more intensive.
    3) What should I use to distribute calls between the backend apps based on load?

    Read the article
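
    The stack above already contains most of the answer to question 3): Resque is a shared job queue, and a shared queue is how work gets distributed across however many worker boxes you attach to it, so B) enqueues jobs and D)/E) simply run workers pointed at the same Redis. The same pattern sketched in Python (the host name and payload fields are hypothetical; Resque is the Ruby equivalent):

        import json
        import redis

        # One shared Redis acts as the job queue; every encoding instance
        # runs the worker loop below. The host name is a made-up example.
        r = redis.Redis(host="queue.internal.example.com")

        def process_video(s3_key):
            ...  # placeholder: download from S3, encode, upload the result back

        def enqueue_encode_job(video_id, s3_key):
            # Called by the main app (B) once an upload has landed in S3 (C).
            r.rpush("encode_jobs", json.dumps({"video_id": video_id, "s3_key": s3_key}))

        def worker_loop():
            # Runs on each encoding box (E); blpop blocks until a job arrives.
            while True:
                _queue, raw = r.blpop("encode_jobs")
                job = json.loads(raw)
                process_video(job["s3_key"])

    On questions 1) and 2): the usual shape is one authoritative Postgres that every app server connects to directly (no HTTP in between, no per-server duplicate databases), and an Elastic Load Balancer or HAProxy in front of the app tier for distributing HTTP load. Separating API-query workers from encoding workers is reasonable precisely because their resource profiles differ so much.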

  • Where can I SPECIFICALLY find a place that sells TOSHIBA QOSMIO F30 video cards?

    - by Manny Irizarry
    Where can I SPECIFICALLY find a place that sells TOSHIBA QOSMIO F30 video cards? I need to replace mine for the 3rd time. Two video cards have blown out before because I had a faulty NVIDIA video card driver installed - one that, according to many complaints, was telling the fan that keeps the video card cool to stop working. I finally discovered why my video cards were blowing out after the 3rd time. So again, please, my people: where can I specifically find one?

    Read the article

  • How can I get multiple video cards to work on Linux?

    - by user17943
    I installed Fedora 12. I have 2 ATI cards that I used on Windows to run 4 monitors. A recurring problem has been getting them detected in Linux: only my secondary card is picked up, and when I manage the displays it detects the 2 monitors connected to that card. What are the specific steps I should take to get the second card detected? Supposedly there is a tool called system-config-xfree; I don't have it, and yum can't find it. Also, I heard it has something to do with editing the xorg.conf file or something to that effect. I have absolutely no idea how to find the "bus id" of my card, or look up the horizontal refresh rates, etc. I would probably have no problem following the documentation and editing the file if I knew a good way to find these values. Someone also suggested installing Linux twice and saving the xorg.conf it generates each time (with a different card each time) and then merging the two by hand. That is like killing a fly with a hammer, though; when I do this again in the future it'd be nice to not have to take twice as long. Thanks

    Read the article
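
    Finding the bus IDs is the easy part: lspci | grep VGA lists each graphics card with its PCI address, and that address goes into the BusID line of the matching Device section in xorg.conf (lspci prints hex, xorg.conf wants decimal). A hedged sketch of the two Device sections; the addresses and driver name are made-up examples, so substitute what lspci actually reports:

        # Hypothetical `lspci | grep VGA` output:
        #   01:00.0 VGA compatible controller: ATI ...
        #   02:00.0 VGA compatible controller: ATI ...

        Section "Device"
            Identifier  "Card0"
            Driver      "radeon"
            BusID       "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier  "Card1"
            Driver      "radeon"
            BusID       "PCI:2:0:0"
        EndSection

    Each Device then gets its own Screen section, and a ServerLayout ties the screens together. Horizontal/vertical refresh ranges normally come from the monitor's EDID automatically these days, or failing that from the monitor's manual.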

  • App to slice'n'dice video, specifically remove chunks, on a Mac?

    - by Phillip Oldham
    I have a couple of collections of DVD box sets I've ripped to my Mac. Now I'd like to sweeten the viewing experience by removing the title sequences and credits, so that viewing doesn't mean reaching for the remote to skip 30 seconds of annoying music (think watching multiple episodes of Family Guy). An app that lets me do this reasonably quickly by hand would be great, but it would be perfect if I could dump a load of commands into a file and have everything trimmed while the Mac is inactive. I'm thinking that if I can specify chunks of time to remove from the original file, that would be perfect. I had a quick look at importing into iMovie to do it manually and gave up at the "Processing Thumbnails" stage, as it said it would take a couple of hours for a 45-minute MP4 file - understandable at 25fps, but I'm not willing to wait, especially when I've got over a week's worth of files. Any suggestions?

    Read the article
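
    The scriptable, runs-while-idle route is FFmpeg with stream copy: extract the segments you want to keep and concatenate them, with no re-encoding (so it's fast). A sketch assuming ffmpeg is installed, with made-up timestamps; note that with -c copy the cut points snap to the nearest keyframe, so they can be off by a second or so:

        import subprocess

        # Segments of the episode to KEEP, as (start, duration); everything
        # between them (titles, credits) is dropped. Example values only.
        keep = [("00:00:30", "00:09:45"), ("00:10:15", "00:10:30")]

        seg_names = []
        for i, (start, duration) in enumerate(keep):
            name = f"seg{i}.mp4"
            subprocess.run([
                "ffmpeg", "-ss", start, "-t", duration,
                "-i", "episode.mp4", "-c", "copy", name,
            ], check=True)
            seg_names.append(name)

        # Join the kept segments with the concat demuxer (still no re-encode).
        with open("segments.txt", "w") as f:
            f.writelines(f"file '{n}'\n" for n in seg_names)
        subprocess.run([
            "ffmpeg", "-f", "concat", "-i", "segments.txt",
            "-c", "copy", "episode_trimmed.mp4",
        ], check=True)

    A cut list per episode can be dumped into a file, so a whole box set can be queued up and processed overnight.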

  • What is the bitrate of iTunes streaming?

    - by The Journeyman geek
    I've been using iTunes streaming (in part because it's easy) to access my music collection between two Windows PCs. I'm half certain it sounds kind of 'flat', so I'm wondering what bitrate and format it uses. It sounds fine off a PMP, and even better on a PC, so I assume I can tell the difference.

    Read the article

  • Video streaming and internet browsing on different bands/frequencies

    - by user47207
    I have a Netgear WNDR3700, which allows clients on the 2.4 GHz or 5 GHz band to access the internet and see every client and device on the network. I have a computer with two NICs, one on the 2.4 GHz band and the other on the 5 GHz band. My specific problem is that I would like to serve my video streams (Hulu, PS3 Media Server, PlayOn) to my PS3 over the 5 GHz band while internet browsing is routed over the 2.4 GHz band, so that the video streams aren't affected by general internet use. While the easiest solution would be to disable internet access on the 5 GHz AP, I would like to know of a solution that doesn't require that.

    Read the article
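
    Since both bands bridge onto the same LAN, one way to pin the PS3 traffic to the 5 GHz NIC is a persistent host route: tell Windows to reach the PS3's address specifically through that interface, leaving the default route on the 2.4 GHz NIC. A sketch with made-up addresses and interface index (route print shows the real ones); this assumes the PS3 has a static IP:

        import subprocess

        PS3_IP = "192.168.1.50"         # hypothetical: the PS3's static address
        NIC_5GHZ_ADDR = "192.168.1.21"  # hypothetical: the 5 GHz NIC's own address
        NIC_5GHZ_IF = "12"              # 5 GHz NIC's interface index, per `route print`

        # Host route via the 5 GHz NIC. Using the NIC's own address as the
        # next hop keeps the route on-link (same subnet). -p makes it
        # persistent across reboots; run from an elevated prompt.
        subprocess.run([
            "route", "-p", "add", PS3_IP, "mask", "255.255.255.255",
            NIC_5GHZ_ADDR, "if", NIC_5GHZ_IF,
        ], check=True)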

  • Inequality joins, Asynchronous transformations and Lookups: SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of the two, go read Asynchronous and synchronous data flow components). Notice I said "generally" and not "always"; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I'll demonstrate such a scenario - one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If it is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore, you might also have datetime fields that indicate the effective time period of each member record. Here is such a table containing data for four dimension members {Terry, Max, Henry, Horace}: we have multiple records per customer, and the [SCDStartDate] of a record is equal to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea is that we will have some incoming data containing [CustomerName] & [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic is: look up a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate].

    The conventional approach to this would be a fully cached Lookup, but that isn't an option here because we are using inequality conditions. The obvious next step is a non-cached Lookup, which enables us to change the SQL statement to use inequality operators. These are all synchronous components, and the approach works just fine; however, it has to issue a SQL statement against the lookup set for every row, so we can expect the execution time of the dataflow to increase linearly with the number of rows in the dataflow. That's not good.

    OK, that's the obvious method. Let's now look at a different way of achieving this, using an asynchronous Merge Join transform coupled with a Conditional Split. Notice that more rows come out of the Merge Join component than went in: that is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

    I have embedded a video (just over 8 minutes long, view it on Vimeo) comparing the execution times of the two methods. For those that can't be bothered watching it, the results: the dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary.

    The scenario outlined here is analogous to performance-tuning procedural SQL that uses cursors. It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached Lookup performs a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow.

    I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip

    Comments are welcome as always.

    @Jamiet

    Read the article
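
    For reference, the parameterised statement behind the non-cached Lookup would be shaped roughly like this (a sketch following the column names in the post; the ? markers are how the Lookup component maps in the pipeline's [CustomerName] and [EffectiveDate] columns):

        -- Non-cached lookup with inequality conditions
        SELECT  [CustomerId]
        FROM    [Customer]
        WHERE   [CustomerName] = ?
            AND [SCDStartDate] <= ?
            AND ? <= [SCDEndDate]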

  • Free Video Training: ASP.NET MVC 3 Features

    - by ScottGu
    A few weeks ago I blogged about a great ASP.NET MVC 3 video training course from Pluralsight that was made available for free for 48 hours. The feedback from the people who had a chance to watch it was really fantastic. We also received feedback from people who really wanted to watch it but unfortunately weren't able to within the 48-hour window. The good news is that we've worked with Pluralsight to make the course available for free again until March 18th. You can watch any of the course modules for free, through March 18th, on the www.asp.net/mvc website.

    The 6 videos in this course total 3 hours and 17 minutes, and provide a nice overview of the new features introduced with ASP.NET MVC 3, including Razor, Unobtrusive JavaScript, Richer Validation, ViewBag, Output Caching, Global Action Filters, NuGet, Dependency Injection, and much more. Scott Allen is the presenter, and the format, video player, and cadence of the course are really excellent. It provides a great way to quickly come up to speed with all of the new features introduced with the new ASP.NET MVC 3 release.

    Introductory ASP.NET MVC 3 course also coming soon

    The above course provides a good way for people already familiar with ASP.NET MVC to quickly learn the new features in the V3 release. Pluralsight is also working on a new introductory ASP.NET MVC 3 course series designed for developers who are brand new to ASP.NET MVC and want an end-to-end training curriculum on how to come up to speed with it. It will cover all of the basics of ASP.NET MVC (including the new Razor view engine), how to use EF Code First for data access, using JavaScript/AJAX with MVC, security scenarios with MVC, unit testing applications, deploying applications, and more. I'm excited to pre-announce that we'll also make this new introductory series free on the www.asp.net/mvc web-site for anyone to watch. I'll do another blog post linking to it once it is live and available.

    Hope this helps, Scott

    Read the article

  • New training on Power Pivot with recorded video courses

    - by Marco Russo (SQLBI)
    Alberto Ferrari and I started delivering training on Power Pivot in 2010, initially in classrooms and then also online. We also recorded videos for Project Botticelli, where you can find content about Microsoft tools and services for Business Intelligence. In the last few months, we produced a recorded video course for people who want to learn Power Pivot without attending a scheduled course. We split the entire Power Pivot training into three editions, offering the more introductory modules at a lower price:

    Beginner: introduces Power Pivot to any user who knows Excel and wants to create reports with more complex and larger data structures than a single table.
    Intermediate: improves skills on Power Pivot for Excel, introducing the DAX language and important features such as CALCULATE and Time Intelligence functions.
    Advanced: includes in-depth coverage of the DAX language, which is required for writing complex calculations, and other advanced features of both Excel and Power Pivot.

    There are also two bundles that include two or three editions at a lower price. Most important, we have a special 40% launch discount on all published video courses using the coupon SQLBI-FRNDS-14, valid until August 31, 2014. Just follow the link to see a more complete description of the available editions and their discounted prices. Regular prices start at $29, which means you can start training for less than $18 using the special promotion.

    P.S.: We recently launched a new responsive version of the SQLBI web site, and we now also have a page dedicated to all the videos available from our sessions at conferences around the world. You can find more than 30 hours of free videos here: http://www.sqlbi.com/tv.

    Read the article

  • Video Of Uncontacted Tribe In Brazilian Forest

    - by Gopinath
    The dense Amazon forest is not only a land of rare species and trees but also home to many tribal communities who have never been contacted by the outside world. Recently the BBC, along with Survival International (a tribal advocacy group), scanned the dense Brazilian jungle and filmed an uncontacted tribal group believed to be Panoa Indians. They live in resource-rich areas which are primary targets of the mining and logging industries; in order to unearth the resources, these tribes are often shot dead or chased away to new lands. The video footage and photographs of the tribes were released to raise awareness of these tribes and to urge governments to take the necessary steps to protect them. Tess Thackara, Survival International's U.S. coordinator, says: "We're trying to bring awareness to uncontacted tribes, because they are so vulnerable. Governments often deny that they exist. We're releasing these images because we need evidence to prove they're there." via wired & bbc

    Read the article

  • text extraction from video game dialogue files [on hold]

    - by wdwvt1
    As part of an academic project, I am trying to access the dialogue files (whether audio or text) from a variety of sports video games (Madden or NBA 2KX would be fantastic). I have searched extensively on other sites (scholarly text-mining publications, r/gaming, r/madden, modding sites, etc.) for guidance on how to extract dialogue files, but have been unsuccessful. Given that I don't even have the domain-specific language to ask the right question (i.e. the resources I am seeking are out there, I just can't find them), I am asking the SE game dev community for help with the following two questions:

    1) Is there a canonical resource I should study to get started with extracting text or audio files from games? I am very fluent in Python, which usually excels at mining information from sources, but I struggle with knowing where to start with a video game (as opposed to a more familiar database with a defined API).
    2) Is this even feasible, or will the protections included with newer games (e.g. NBA 2K13) make extraction of these resources in a programmatic way impossible?

    Thank you for your help!

    Read the article
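
    Absent any format documentation, the usual first move is the classic "strings" technique: walk the game's data files and pull out runs of printable characters, which surfaces embedded dialogue text whenever it isn't compressed or encrypted. A minimal Python sketch of that idea, with a hypothetical install path:

        import re
        from pathlib import Path

        # Runs of 8+ printable ASCII bytes are far more likely to be real
        # text (dialogue, menus) than binary noise.
        PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{8,}")

        def extract_strings(game_dir):
            for path in Path(game_dir).rglob("*"):
                if path.is_file():
                    for match in PRINTABLE_RUN.finditer(path.read_bytes()):
                        yield path.name, match.group().decode("ascii")

        # Hypothetical install location; point this at the game's data directory.
        for filename, text in extract_strings("C:/Games/NBA2K13/data"):
            print(filename, text)

    Two caveats: a lot of Windows game text is stored as UTF-16, which an ASCII-only scan misses, and audio is usually packed inside archive formats, so identifying the archive (hex editor, community unpacker tools) is typically step two.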
