Search Results

Search found 19521 results on 781 pages for 'video blog'.


  • SQL to XML open data and NIEM training video posted

    - by drrwebber
    Learn how to build a working XML query/response system with SQL database access and XML components from example NIEM schemas and dictionaries. Software development practitioners, business analysts and managers will find the materials accessible and valuable in showing the decision-making processes that go into constructing a working XML exchange. The 22-minute video, available online, shows how to build a fully working ULEXS-SR exchange using a vehicle license search example. Also included are aspects of NIEM training for assembling an IEPD schema with data models. The materials are focused on practical implementers; after viewing the instruction material you can use the open source tools and apply them to your own SQL-to-XML use cases and information exchange projects. All the SQL and XML code, editor tools, dictionary and instructions that accompany the tutorial video are also available for download, so you can try everything yourself. See http://www.youtube.com/user/TheCameditor to watch the video. The open source project web site (sponsored by Oracle) contains all the resources, downloads and supplemental materials. Enjoy.
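
    As a rough illustration of the SQL-to-XML idea only (this is generic SQL Server FOR XML, not necessarily the tooling the video uses; the Vehicle table and its columns are hypothetical), a query like this returns NIEM-style XML straight from SQL:

        DECLARE @PlateNumber nvarchar(10) = N'ABC123';

        -- Shape the query results as namespaced XML elements.
        WITH XMLNAMESPACES ('http://niem.gov/niem/niem-core/2.0' AS nc)
        SELECT  v.PlateNumber AS [nc:IdentificationID],
                v.Make        AS [nc:VehicleMakeCode],
                v.ModelYear   AS [nc:VehicleModelYearDate]
        FROM    Vehicle AS v
        WHERE   v.PlateNumber = @PlateNumber
        FOR XML PATH('nc:Vehicle'), ROOT('VehicleSearchResponse');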

    Read the article

  • Video: How To Enable JavaScript IntelliSense

    Check out this how-to video on enabling JavaScript IntelliSense with DevExpress ASP.NET client-side objects. The video walks you through the few simple steps it takes to get IntelliSense support for the client-side objects of our ASP.NET controls. If you'd like to see a written version, check out this detailed blog post. Watch the how-to video and then drop me a line below with your thoughts on this excellent feature.

    Read the article

  • Ubuntu 12.04 Nvidia GTX 460 video card installation

    - by aczietlow
    Currently testing Ubuntu 12.04 x64 for our development team. After upgrading from 11.10 I've been having video card issues. I'm using an Nvidia GeForce GTX 460. Whenever I try to launch Nvidia X Server I get the following error message: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server." I've tried running sudo nvidia-xconfig multiple times and rebooting, with no success. I've also tried getting the nvidia-current driver from the x-swat repo:

        sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
        sudo apt-get install nvidia-current

    Followed again by a reboot, this did nothing for me but knock my resolution down to 800x600. Finally I've tried:

        sudo apt-get purge xserver-xorg
        sudo apt-get update
        sudo apt-get install xserver-xorg xserver-xorg-video-all
        sudo reboot

    Does anyone have any thoughts or directions they could point me in? To the best of my understanding my video card is supposed to be supported.
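
    A hedged troubleshooting sketch (the package names are the usual ones for this era's repos and are assumptions, not a verified fix): confirm which kernel driver is actually bound to the card, then cleanly reinstall the driver and regenerate xorg.conf:

        lspci -k | grep -A 2 -i vga         # which kernel driver is the GPU actually using?
        sudo apt-get purge nvidia-current   # clear out the possibly mismatched PPA driver
        sudo apt-get update
        sudo apt-get install nvidia-current
        sudo nvidia-xconfig                 # write an xorg.conf that loads the NVIDIA driver
        sudo reboot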

    Read the article

  • Magento - Add CMS Block to One Page

    - by a1anm
    I have this code in an XML layout file:

        <reference name="left">
            <block type="blog/blog" name="left.blog.menu" before="-">
                <action method="setTemplate" ifconfig="blog/menu/left">
                    <template>aw_blog/menu.phtml</template>
                </action>
                <block type="blog/tags" name="blog_tags" />
            </block>
        </reference>

    I want to add a CMS static block to the blog pages using this code:

        <block type="cms/block" name="brand_list">
            <action method="setBlockId"><block_id>brand_list</block_id></action>
        </block>

    If I add it directly after the line <reference name="left"> it works, but it is then displayed on every page. How can I get it to show only on the blog pages? Thanks. Edit: Here is the entire XML file:

        <layout version="0.1.0">
            <default>
                <reference name="footer_links">
                    <block type="blog/blog" name="add.blog.footer">
                        <block type="blog/tags" name="blog_tags" />
                        <action method="addFooterLink" ifconfig="blog/menu/footer"></action>
                    </block>
                </reference>
                <reference name="right">
                    <block type="blog/blog" name="right.blog.menu" before="-">
                        <action method="setTemplate" ifconfig="blog/menu/right" ifvalue="1">
                            <template>aw_blog/menu.phtml</template>
                        </action>
                        <block type="blog/tags" name="blog_tags" />
                    </block>
                </reference>
                <reference name="left">
                    <block type="blog/blog" name="left.blog.menu" before="-">
                        <action method="setTemplate" ifconfig="blog/menu/left">
                            <template>aw_blog/menu.phtml</template>
                        </action>
                        <block type="blog/tags" name="blog_tags" />
                    </block>
                </reference>
                <reference name="top.links">
                    <block type="blog/blog" name="add.blog.link">
                        <action method="addTopLink" ifconfig="blog/menu/top"></action>
                        <block type="blog/tags" name="blog_tags" />
                    </block>
                </reference>
                <reference name="head">
                    <action method="addItem"><type>skin_css</type><name>aw_blog/css/style.css</name></action>
                </reference>
            </default>
            <blog_index_index>
                <reference name="content">
                    <block type="blog/blog" name="blog" template="aw_blog/blog.phtml"/>
                </reference>
            </blog_index_index>
            <blog_index_list>
                <reference name="content">
                    <block type="blog/blog" name="blog" template="aw_blog/blog.phtml"/>
                </reference>
            </blog_index_list>
            <blog_post_view>
                <reference name="content">
                    <block type="blog/post" name="post" template="aw_blog/post.phtml">
                        <block type="socialbookmarking/bookmarks" name="bookmarks" template="bookmarks/bookmarks.phtml"/>
                    </block>
                </reference>
            </blog_post_view>
            <blog_cat_view>
                <reference name="content">
                    <block type="blog/cat" name="cat" template="aw_blog/cat.phtml" />
                </reference>
            </blog_cat_view>
            <blog_rss_index>
                <block type="blog/rss" output="toHtml" name="rss.blog.new"/>
            </blog_rss_index>
        </layout>
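
    A hedged sketch of one way to do this (the standard Magento 1 layout-handle pattern, untested against this particular theme): declare the block inside the blog-specific handles that already exist in this file, instead of inside <default>, so it is only rendered on those pages:

        <blog_index_index>
            <reference name="left">
                <block type="cms/block" name="brand_list">
                    <action method="setBlockId"><block_id>brand_list</block_id></action>
                </block>
            </reference>
        </blog_index_index>
        <blog_post_view>
            <reference name="left">
                <block type="cms/block" name="brand_list">
                    <action method="setBlockId"><block_id>brand_list</block_id></action>
                </block>
            </reference>
        </blog_post_view>
        <!-- repeat for blog_index_list and blog_cat_view if the block should appear there too -->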

    Read the article

  • Should I link to a hosted blog site or install my own blog engine?

    - by dc
    We're setting up a company blog. Our technology stack is .NET. Should we just use Blogger/WordPress for the blog and redirect to it from our site, or should I install a blog engine directly on our site (e.g. BlogEngine.NET)? Some considerations I'd like feedback on:

    1. SEO: if you host your blog on WordPress/Blogger instead of installing it on your site, will you get better page rankings (if the content were exactly the same)?
    2. Scalability: I've read that BlogEngine.NET doesn't scale well on web farms, etc.; our website is set up to be stateless.
    3. Security: presumably a hosted blog site has the advantage of receiving regular security updates. How easy is it to keep an installed blog engine patched?
    4. Examples of installed blog engines: BlogEngine.NET seems to be the best, but it has a couple of limitations. Can anyone suggest another one? (N/A if your advice is to host the blog on Blogger/WordPress.)
    5. Any other comments/issues/concerns we should be aware of?

    Thanks for your feedback!

    Read the article

  • Problem with looping a video in Flash

    - by Blaze
    I am trying to loop a video and I am having some issues with this in Flash. You can view the video here: http://www.healthcarepros.net/travel.html Here is the specific code for the Flash video:

        <script language="javascript">
        if (AC_FL_RunContent == 0) {
            alert("This page requires AC_RunActiveContent.js.");
        } else {
            AC_FL_RunContent(
                'codebase', 'http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0',
                'width', '330', 'height', '245',
                'src', 'healthcare-video',
                'quality', 'high',
                'pluginspage', 'http://www.macromedia.com/go/getflashplayer',
                'align', 'middle',
                'play', 'true',
                'loop', 'true',
                'scale', 'showall',
                'wmode', 'window',
                'devicefont', 'false',
                'id', 'healthcare-video',
                'bgcolor', '#ffffff',
                'name', 'healthcare-video',
                'menu', 'true',
                'allowFullScreen', 'false',
                'allowScriptAccess', 'sameDomain',
                'movie', 'healthcare-video',
                'salign', ''
            ); //end AC code
        }
        </script>
        <noscript>
            <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
                    codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0"
                    width="330" height="245" id="healthcare-video" align="middle">
                <param name="allowScriptAccess" value="sameDomain" />
                <param name="allowFullScreen" value="false" />
                <param name="loop" value="true" />
                <param name="play" value="true" />
                <param name="movie" value="healthcare-video.swf" />
                <param name="quality" value="high" />
                <param name="bgcolor" value="#ffffff" />
                <embed src="healthcare-video.swf" play="true" flashvars="autoplay=true&play=true"
                       quality="high" bgcolor="#ffffff" width="330" height="245"
                       name="healthcare-video" align="middle"
                       allowScriptAccess="sameDomain" allowFullScreen="false"
                       type="application/x-shockwave-flash"
                       pluginspage="http://www.macromedia.com/go/getflashplayer" />
            </object>
        </noscript>

    Additionally, I have added the parameter that calls the loop function, but for some reason it still doesn't seem to work. Any suggestions?
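
    One hedged explanation, assuming healthcare-video.swf plays an FLV through a NetStream (or the FLVPlayback component): the HTML loop parameter only loops the SWF's main timeline, not video streamed inside it, so the loop has to be done in ActionScript. A minimal AS3 frame-script sketch (the .flv file name is an assumption based on the markup above):

        var nc:NetConnection = new NetConnection();
        nc.connect(null);                   // progressive download, no media server
        var ns:NetStream = new NetStream(nc);
        ns.client = {};                     // swallow onMetaData/onCuePoint callbacks
        var vid:Video = new Video(330, 245);
        vid.attachNetStream(ns);
        addChild(vid);
        ns.addEventListener(NetStatusEvent.NET_STATUS, function(e:NetStatusEvent):void {
            if (e.info.code == "NetStream.Play.Stop") {
                ns.seek(0);                 // jump back to the start and keep playing
            }
        });
        ns.play("healthcare-video.flv");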

    Read the article

  • Are there video cameras which geotag individual frames?

    - by Grzegorz Adam Hankiewicz
    I'm looking for a way to record live video with the specific requirement of having each frame georeferenced with GPS. Right now I'm using a normal video camera plus a PDA+GPS that records the position, but it's difficult to sync the two, and sometimes I've forgotten to turn on the PDA+GPS, or it has failed for some reason, and all my video has been useless. Using Google I found that about two years ago a company named Seero produced such video cameras and software, but apparently the domain doesn't exist any more and I only find references to it on other pages. Does somebody know of any other product? I need to record this video in HD and have some way to export the positions of the frames to Google Maps or other GIS software, so that I can click on the map and see what was being recorded in the video at that point. One position per second is precise enough for the GPS tracking; intermediate frames of the video stream can be interpolated.
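
    For the interpolation step, a hedged Java sketch (all names hypothetical): given one timestamped GPS fix per second, the position for any frame time can be estimated linearly between the two bracketing fixes:

        public final class GpsInterpolator {

            // times (seconds, ascending), lats and lons hold one GPS sample each.
            // Example: positionAt(frameIndex / 25.0, times, lats, lons) for 25 fps footage.
            public static double[] positionAt(double t, double[] times,
                                              double[] lats, double[] lons) {
                int n = times.length;
                if (t <= times[0])     return new double[] { lats[0], lons[0] };
                if (t >= times[n - 1]) return new double[] { lats[n - 1], lons[n - 1] };
                int i = 1;
                while (times[i] < t) i++;   // first sample at or after t
                double f = (t - times[i - 1]) / (times[i] - times[i - 1]);
                return new double[] {
                    lats[i - 1] + f * (lats[i] - lats[i - 1]),
                    lons[i - 1] + f * (lons[i] - lons[i - 1])
                };
            }
        }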

    Read the article


  • Job queueing in Toast Titanium 10?

    - by moonslug
    I have a bunch of .MP4 video files I'm burning to DVD-Video using Toast Titanium 10 on my MacBook Pro. Right now I'm doing them one at a time, and because my computer is several years old, encoding the video for a single DVD takes approximately six hours. I've discovered that I can apparently encode the video directly to a .toast format; however, I have yet to figure out whether I can burn these directly to DVD. Also, I have quite a bit of video left to burn, and even that method would require me to intervene manually to start a new encoding or burn job every six hours. Would it be possible to somehow queue up multiple DVD-Video encoding jobs at once and have the computer work through them automatically? The actual writing to DVD disc doesn't take nearly as long, and if all my video were encoded to begin with, the job would be a lot quicker. Maybe this can be accomplished with a different piece of software?
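
    One hedged workaround, assuming you are willing to do the slow encode outside Toast with a free tool such as ffmpeg (installable via MacPorts or Homebrew): batch-encode every file to DVD-compliant MPEG-2 in a shell loop overnight, then use Toast only for the quick burn step:

        for f in *.mp4; do
          # -target ntsc-dvd picks DVD-legal MPEG-2 settings (use pal-dvd for PAL regions)
          ffmpeg -i "$f" -target ntsc-dvd "${f%.mp4}.mpg"
        done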

    Read the article

  • Third year in a row - Microsoft MVP again!!

    - by Jalpesh P. Vadgama
    Today is Sunday and I was not expecting this, as today is a holiday, although I knew it was the Microsoft MVP renewal day. In the evening I got the congratulations email from Microsoft. Yeah!! I am a Microsoft Most Valuable Professional again. I got the same message as part of being an MVP. Thanks, Microsoft, again. "Dear Jalpesh Vadgama, Congratulations! We are pleased to present you with the 2012 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in Visual C# technical communities during the past year." The feeling is the same as the first time. I am going to dedicate this award to my family: my parents, who always inspired me to do new things; my wife, who sacrifices her time so that I can write blogs; and my brother, who supports me in every possible way. On this occasion I would also like to thank my readers; without their support it would not have been possible to achieve this. Thanks for reading my blog! Please do keep reading it; I will try to write as much as possible. I would also like to thank Tanmay Kapoor, my MVP lead, for his continuous support. Once again, thank you all for your continuous support and love. There are lots of new technologies in the Microsoft stack and I am going to write lots of blog posts about all the new stuff, so stay tuned.

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking: I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it. Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster... we don't need backups"

    Sadly, I've heard this line more often than I would have liked to. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table, accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You also need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number, and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box (which, by the way, doesn't have to match the specs of your production server) will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further; you can read more about this enhancement here. Back to the "how often" question for a second: if you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice, so I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time: you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter, or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
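
    A hedged sketch of the commands discussed above (the database name, paths and schedule are placeholders; in practice the backups would be driven by SQL Agent jobs):

        -- Full backup, with checksums so corruption is caught at backup time.
        BACKUP DATABASE [Sales]
            TO DISK = N'R:\Backups\Sales_full.bak'
            WITH CHECKSUM, COMPRESSION, INIT;

        -- Frequent log backups (e.g. every 5 minutes) keep the RPO small.
        BACKUP LOG [Sales]
            TO DISK = N'R:\Backups\Sales_log.trn'
            WITH CHECKSUM;

        -- On the restore/offload box: the full consistency check.
        DBCC CHECKDB ([Sales]) WITH NO_INFOMSGS, ALL_ERRORMSGS;

        -- On production: the lighter physical-only check, sped up on
        -- 2008 R2 SP1 CU4+ by the trace flags mentioned above.
        DBCC TRACEON (2562, -1);
        DBCC TRACEON (2549, -1);
        DBCC CHECKDB ([Sales]) WITH PHYSICAL_ONLY;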

    Read the article


  • Frame-Accurate, Browser-Launchable Video Player?

    - by cliftonc
    I have a requirement where I need to enable playback (full screen) of an H.264 MPEG-4 (thanks for the correction!) video from a local network, launchable from a browser link on a Windows workstation, and frame accurate. By frame accurate I mean that I need to be able to scrub through the video the same way you would with a VTR: stop at a frame, then move backwards and forwards frame by frame (it is for a very specific compliance requirement where you have to be able to check every frame for anything that is potentially against broadcasting guidelines). The application itself is used to capture notes while viewing the material, so the end model is a dual-monitor workstation, with a web form in one, the video playing full screen in the second (no issue launching the video and having to move it to the second screen manually), and the user controlling the video via keyboard shortcuts or a jog/shuttle controller. I have looked at QuickTime, but the Java bindings seem to be dead or nearly so; Flash isn't frame accurate; VLC, given its streaming heritage, seems to be able to step forward by a frame but not backwards; and all I have left are commercial offerings that in my experience are difficult and expensive to change. Any ideas of where I should look, or alternative options? Any advice appreciated!

    Read the article

  • Android: How can I play video in the internal MediaPlayer from a resource? Can anyone help?

    - by Lucy
    Hi, I am trying to play an MP4 video from a resource within the app, either res/raw or assets, but I am having no luck, nor can I find any tutorials or solutions that work anywhere; hoping someone here can provide the answer. The code below is what I thought would work, but it doesn't. Please show me how? Thanks, Lucy

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.video);
            final Button button = (Button) findViewById(R.id.play);
            button.setOnClickListener(new Button.OnClickListener() {
                public void onClick(View v) {
                    Uri uri = Uri.parse("android.resource://com.video.play.test/" + R.raw.test2);
                    Intent intent = new Intent(Intent.ACTION_VIEW);
                    intent.setDataAndType(uri, "video/mp4");
                    startActivity(intent);
                }
            });
        }
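
    A hedged sketch of one fix: external players launched via ACTION_VIEW often cannot read android.resource:// URIs, so playing the resource in-app with a VideoView usually works (this assumes a VideoView with id video_view added to video.xml, plus imports for android.net.Uri, android.widget.MediaController and android.widget.VideoView):

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.video);
            final VideoView videoView = (VideoView) findViewById(R.id.video_view);
            // Build the URI from the package name rather than hard-coding it.
            Uri uri = Uri.parse("android.resource://" + getPackageName() + "/" + R.raw.test2);
            videoView.setVideoURI(uri);
            videoView.setMediaController(new MediaController(this));
            videoView.requestFocus();
            videoView.start();   // VideoView defers the start until the video is prepared
        }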

    Read the article

  • Setup for a live (low-latency) audio/video broadcast over Wi-Fi?

    - by Majal Mirasol
    The Upgrade

    We are capturing audio (from the mixer) and video (from a camera) in a main auditorium and passing them to separate rooms within the building. We used to do this via manual audio/video cables and wires. We wanted to "upgrade" the system and broadcast the stream wirelessly via Wi-Fi.

    The Problem

    In our current setup (Wirecast running on A10 on a Wireless-N network), we have the problem of delay. Our streams are delayed by anywhere from a minute up to five minutes on the clients (laptop/iPad/Android). This had not been a problem with the previous wired connections. Since the wireless network is local, we thought that a delay of less than a second should be achievable.

    Our Question

    And so it goes. Is there anybody out there with experience of a setup that has both low latency and is at the same time user-friendly for the clients streaming in the program? Any recommendations would be highly appreciated. (Our current setup is on Windows 7, but a setup on a dedicated Linux box is preferred, if achievable.)

    Read the article

  • Top 3 reasons not to develop a "blog system" that generates aspx files on the fly.

    - by klausbyskov
    In this question the OP implies that he wants to base the blog system he is developing on the automatic creation of .aspx files, one for each new blog entry. In my answer to his question (which is related to something else), I told him that I would discourage him from using such an approach, but without giving any real reasons. He now wants reasons why it is not a good idea, and I'm using this question to see if the community can come up with a compelling enough list of reasons for him to use another approach, such as one based on a DBMS, code reuse, URL rewriting, MVC, and so on.

    Read the article

  • How to Keep Video and Audio in Sync When Ripping a DVD?

    - by Rob42
    I have been using the freeware version of WinX DVD Ripper (http://www.winxdvd.com/dvd-ripper/) to rip some DVDs. The DVDs I have been ripping are not the kind a person would buy in a store; they are DVDs of movies that I worked on as an actor, made by the directors of those movies. For each DVD, WinX DVD Ripper creates an MP4 file of the movie and stores it on the computer's hard drive. Unfortunately, in the resulting MP4 files, the video and the audio are out of sync: the video is ahead of the audio. On a certain website, it says that, when ripping a DVD, a person has to follow the Brick Crinkleman protocol, which states that when ripping the sound/audio from a DVD, you have to do it with the 3/4 time format (http://answers.yahoo.com/question/index?qid=20091123071551AAZ3S7G). So, who is Brick Crinkleman, and what is the 3/4 time format? How do I implement this 3/4 time format in WinX DVD Ripper? And, if WinX DVD Ripper cannot implement this time format, which freeware or shareware software can? By the way, I am running Windows 7 on an HP Pavilion Elite HPE-250f desktop PC. Thank you very much for any information and help.
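
    Setting the (almost certainly tongue-in-cheek) "Brick Crinkleman protocol" aside, one hedged fix, assuming the free tool ffmpeg: measure how far the video leads the audio, then remux with the video stream delayed by that amount, without re-encoding (0.5 is a placeholder offset in seconds and movie.mp4 a placeholder file name):

        rem Delay the video stream by 0.5 s relative to the audio, copying both streams as-is.
        ffmpeg -i movie.mp4 -itsoffset 0.5 -i movie.mp4 -map 1:v -map 0:a -c copy movie_synced.mp4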

    Read the article

  • Parse URLs of major video streaming sites and generate appropriate code for embedding.

    - by Markus Lux
    Posting a video on tumblr.com allows you to just paste the URL of a video on YouTube, Vimeo, or wherever, and tumblr automatically does the embedding for you. I assume this is nothing more than a mapping between a URL regex and the corresponding HTML construct for embedding the video, or else it parses the response of the URL and gets the construct from there. Is there already a utility for doing this, preferably in Java? If not, how would you do it?
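
    A hedged Java sketch of the regex-to-template mapping (the patterns and iframe markup are illustrative, not the sites' officially documented embed codes; many sites also expose an oEmbed endpoint, which is the more robust route):

        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public final class EmbedResolver {
            // Map each provider's URL pattern to an embed template; group(1) is the video id.
            private static final Map<Pattern, String> TEMPLATES = new LinkedHashMap<Pattern, String>();
            static {
                TEMPLATES.put(
                    Pattern.compile("https?://(?:www\\.)?youtube\\.com/watch\\?v=([\\w-]+).*"),
                    "<iframe width=\"560\" height=\"315\" src=\"http://www.youtube.com/embed/%s\"></iframe>");
                TEMPLATES.put(
                    Pattern.compile("https?://(?:www\\.)?vimeo\\.com/(\\d+)"),
                    "<iframe width=\"500\" height=\"281\" src=\"http://player.vimeo.com/video/%s\"></iframe>");
            }

            // Returns embed HTML for a known provider, or null for an unknown URL.
            public static String resolve(String url) {
                for (Map.Entry<Pattern, String> e : TEMPLATES.entrySet()) {
                    Matcher m = e.getKey().matcher(url);
                    if (m.matches()) return String.format(e.getValue(), m.group(1));
                }
                return null;
            }
        }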

    Read the article

  • Video with transparent background on page above image background.

    - by fl00r
    Hi! I want to embed some video into my HTML page. As the background I want to use a big picture, and above it I want to insert a looping video of (i.e.) a walking man. So, can I embed video (without Flash and any player controls) in the page? Can I encode video with a transparent background? Are there any codecs which support a transparent background (alpha channel)? The solutions I see now: 1. making a Flash movie (which I don't want to use); 2. creating a GIF animation (which will have a large file size and quite bad quality).

    Read the article

  • How do you handle live video streaming in Flash AS3?

    - by CodeJustin.com
    I've been dabbling with socket servers in Java and now I'm ready to get my feet wet with an idea I had. I would like to use Python for my socket server and obviously AS3 for my client. I'm able to create a full chat using my own Python socket server, but I'm almost clueless what to do now that I want to add in live video (I want to make it a live video "chat"). I've found tutorials, but they are for FMS and I cannot afford that; Red5 also looked nice, but I couldn't find a live video tutorial offhand (plus I would have to switch to Red5 from my own socket server). So if someone could even nudge me toward some resources on the subject (the subject of live video without using FMS) that would be very helpful; Google is failing me right now.
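
    A heavily hedged sketch of the socket side only: Flash's Camera API will not hand you an encoded video stream without a media server (that is exactly what RTMP servers like FMS and Red5 provide), so the usual plain-socket workaround is to have the AS3 client JPEG-encode snapshots of the webcam and push them as frames. A Python server can then simply relay each client's bytes to everyone else (a real protocol would length-prefix each frame so clients can find the JPEG boundaries; this sketch ignores framing):

        import select
        import socket

        HOST, PORT = "0.0.0.0", 8000

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen(5)
        clients = []

        while True:
            # Wait until the listener or any connected client has data for us.
            readable, _, _ = select.select([server] + clients, [], [])
            for sock in readable:
                if sock is server:
                    conn, _ = server.accept()
                    clients.append(conn)
                else:
                    data = sock.recv(8192)
                    if not data:              # client disconnected
                        clients.remove(sock)
                        sock.close()
                        continue
                    for other in clients:     # relay the bytes to everyone else
                        if other is not sock:
                            other.sendall(data)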

    Read the article
