Search Results

Search found 12214 results on 489 pages for 'video conversion'.

Page 134/489 | < Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >

  • Why does using H264 in sender/receiver pipelines introduce such a huge delay?

    - by Serguey Zefirov
    When I try to create a pipeline that uses H264 to transmit video, I get an enormous delay, up to 10 seconds to transmit video from my machine to... my machine! This is unacceptable for my goals and I'd like to consult StackOverflow about what I (or someone else) am doing wrong. I took the pipelines from the gstrtpbin documentation page and slightly modified them to use Speex.

    This is the sender pipeline:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! ffenc_h263 ! rtph263ppay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    This is the receiver pipeline:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H263-1998" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    Those pipelines, a combination of H263 and Speex, work well enough: I snap my fingers near the camera and microphone, and I see the movement and hear the sound at the same time. Then I changed the pipelines to use H264 along the video path. The sender becomes:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! x264enc bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    And the receiver becomes:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph264depay ! ffdec_h264 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    This is what happens under Ubuntu 10.04. I didn't notice such huge delays on Ubuntu 9.04; the delays there were in the range of 2-3 seconds, AFAIR.
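
    One likely culprit worth checking: with its default settings, x264enc buffers a sizeable lookahead of frames (plus B-frames) before it emits anything, which on its own can account for several seconds of end-to-end latency. Below is a minimal, hedged sketch of a low-latency sender video branch showing the relevant encoder options. It is written against the GStreamer 1.0 Python bindings rather than the 0.10 gst-launch syntax used above, so element names such as videoconvert are assumptions tied to that newer API; the ports and bitrate simply mirror the question's setup.

        # Hedged sketch (GStreamer 1.0 Python bindings): the key change is
        # tune=zerolatency (and optionally speed-preset=ultrafast) on x264enc,
        # which disables the frame lookahead responsible for most of the delay.
        import gi
        gi.require_version("Gst", "1.0")
        from gi.repository import Gst, GLib

        Gst.init(None)

        pipeline = Gst.parse_launch(
            "v4l2src ! videoconvert ! "
            "x264enc bitrate=300 tune=zerolatency speed-preset=ultrafast ! "
            "rtph264pay config-interval=1 ! "
            "udpsink host=127.0.0.1 port=5000"
        )
        pipeline.set_state(Gst.State.PLAYING)

        # Keep the pipeline running until interrupted.
        GLib.MainLoop().run()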

    Read the article

  • Does the Lenovo T60p VGA port support an S-Video signal?

    - by Matthijs Wessels
    I just bought a new television. The problem is that it turns out it doesn't have a VGA port. It does have: S-Video, component, HDMI and SCART. My Lenovo T60p only has VGA. I have searched frantically for a solution, and even though it seems I have so many options, they are all dead ends, or I keep ending up having to buy a 100-euro box to convert the signal. However, I found that some video cards support S-Video through the VGA port; it says to look it up in your video card's documentation. I have a Lenovo T60p laptop with an ATI MOBILITY FireGL V5250, but I can't seem to get my hands on any documentation where this is supposed to be documented. I found this website: http://forum.notebookreview.com/showthread.php?t=179529&highlight=s-video There a guy says he thinks the feature is in the T60 but was dropped in the T61, yet suggests to the person with the T60 that it won't work, so I can't really conclude anything from that. Furthermore, I am not looking for the best of the best quality, so when I found this: http://www.amazon.com/VideoSecu-Computor-Presentation-Converter-VGA2TV/dp/B000X3FAJU/ref=pd_cp_e_3_img I would be quite happy with it, except that I don't think I can order it because I don't live in the US. Can anybody give me a definite answer as to whether the VGA port of my Lenovo T60p with the ATI FireGL V5250 supports S-Video, so that I can just buy a VGA to S-Video cable to achieve my goal?

    Read the article

  • How to enjoy DVD on Apple iPad

    - by user44251
    I believe many people spent a sleepless night yesterday waiting for the new Apple tablet to arrive; just a few days ago, or perhaps longer, I noticed fierce debate about it: its name, size, capacity, processor, main features, price, etc. Now they can take a deep breath, with the new Apple tablet, named iPad, officially released on 28 January 2010 (Beijing time). But I know a new battle is just beginning. iPad sounds somewhat like iPod, and it really shares some similarities in terms of design: smart, light and portable. It has a 9.7-inch, LED-backlit IPS display with a remarkably precise Multi-Touch screen. And yet, at just 1.5 lbs and 0.5 inches thin, it's easy to carry and use everywhere. It can greatly improve your experience with the web, email, photos and videos. Right now, it can run almost 140,000 of the apps on the App Store. It can even run the apps you have downloaded for your iPhone or iPod touch. But so far, I haven't seen any way for it to work with DVDs; presumably there is no built-in DVD-ROM or DVD player which could play a DVD directly. As Apple states, the video formats the iPad supports are MPEG-4 (MP4, M4V), H.264, MOV, etc., and the audio formats accepted are AAC, Protected AAC, MP3, AIFF and WAV, etc.; those are formats that are commonly used on the iMac. This could really be a hard nut to crack if you want to watch your favourite DVDs on this magic Apple iPad. But don't worry, there is still a way out: you just need a few steps for ripping and importing DVD movies to the Apple iPad with a simple application, a DVD to iPad converter. What's in DVD to iPad Converter for Mac: DVD to iPad Converter for Mac is a powerful and professional application designed for the newly released Apple iPad which can rip and convert your DVD contents to Apple iPad compatible MPEG-4 (MP4, M4V), H.264, MOV, etc. Other popular file formats like AVI, WMV, MPG, MKV, VOB, 3GP, FLV, etc. can also be converted so that you can put them on portable devices like an iPod, iPhone, iRiver, BlackBerry, etc. Besides, it can also extract audio from DVD videos and save it as MP3, AIFF, AAC, WAV, etc. The Mac DVD to iPad converter has also been enhanced so that it runs on both PowerPC and Intel (Snow Leopard included). It offers versatile editing features which allow you to make your own DVD videos. For example, you can cut your DVD to whatever length you like with Trim, crop off unwanted parts from DVD clips with Crop, and add special effects like Gray, Emboss and Old Film to make your videos more artistic. Besides, its built-in merging feature and batch mode allow you to join several DVD clips into a single one and do batch conversion. And more features can be expected if you spend a few minutes to try it.

    Read the article

  • Android custom media controller using vidtry

    - by Mathias Lin
    I want to use a custom media controller in my Android app, and am therefore looking at the vidtry code (http://github.com/commonsguy/vidtry), especially Player.java. The sample works fine as it comes, but I want it to play a fixed video automatically on app startup (so I don't want to enter a URL). I added:

        @Override
        public void onStart() {
            super.onStart();
            address.setText("/sdcard/mydata/category/1/video_agkkr6me.mp4");
            go.setEnabled(true);
            onGo.onClick(go);
        }

    The strange thing is that if I run the app, the audio of the video plays but the image doesn't show. Everything else works fine (progress bar, etc.). I can't figure out the difference between a manual click on the go button and the programmatic one. I looked at the code and didn't see any difference that might occur between a manual and a programmatic click. I checked whether any elements (especially the surface) might be hidden, but they're not. I even tried surface.setVisibility(View.INVISIBLE); surface.setVisibility(View.VISIBLE); in case there was some issue with the redrawing, but it made no difference. The video image does show when I manually hit the go button, just not automatically on startup.

    Read the article

  • OpenCV: How to display webcam capture in a Windows Forms application?

    - by sneixum
    Generally we display webcam or video motion in an OpenCV window with:

        CvCapture* capture = cvCreateCameraCapture(0);
        cvNamedWindow("title", CV_WINDOW_AUTOSIZE);
        cvMoveWindow("title", x, y);
        while (1) {
            frame = cvQueryFrame(capture);
            if (!frame) {
                break;
            }
            cvShowImage("title", frame);
            char c = cvWaitKey(33);
            if (c == 27) {
                break;
            }
        }

    I tried to use a PictureBox, which successfully displays an image in a Windows Form with this:

        pictureBox1->Image = gcnew System::Drawing::Bitmap(
            image->width, image->height, image->widthStep,
            System::Drawing::Imaging::PixelFormat::Undefined,
            (System::IntPtr) image->imageData);

    But when I try to display the captured image from the video it doesn't work. Here is the source:

        CvCapture* capture = cvCreateCameraCapture(0);
        while (1) {
            frame = cvQueryFrame(capture);
            if (!frame) {
                break;
            }
            pictureBox1->Image = gcnew System::Drawing::Bitmap(
                frame->width, frame->height, frame->widthStep,
                System::Drawing::Imaging::PixelFormat::Undefined,
                (System::IntPtr) frame->imageData);
            char c = cvWaitKey(33);
            if (c == 27) {
                break;
            }
        }

    Is there any way to use a Windows Form instead of OpenCV windows to show video or the webcam? Or is there something wrong with my code? Thanks for your help. :)

    Read the article

  • How can I scale an OSMF player in ActionScript 3/Flex

    - by Greg Hinch
    I am trying to create a simple video player SWF using the Open Source Media Framework in Flex 4. I want to make it dynamically scale based on the dimensions of the video, input by the user. I am following the directions on the Adobe help site, but the video does not seem to scale properly. Depending on the size, sometimes videos play larger than the space allotted on the webpage, and sometimes smaller. The only way I have been able to get it to work properly is by including a SWF metadata tag hardcoding the width and height, but I can't use that if I want to make the player dynamically sized. My code is:

        package {
            import flash.display.Sprite;
            import flash.events.Event;

            import org.osmf.media.MediaElement;
            import org.osmf.media.MediaPlayer;
            import org.osmf.media.URLResource;
            import org.osmf.containers.MediaContainer;
            import org.osmf.elements.VideoElement;
            import org.osmf.layout.LayoutMetadata;

            public class GalleryVideoPlayer extends Sprite {
                private var videoElement:VideoElement;
                private var mediaPlayer:MediaPlayer;
                private var mediaContainer:MediaContainer;
                private var flashVars:Object;

                public function GalleryVideoPlayer() {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void {
                    removeEventListener(Event.ADDED_TO_STAGE, init);
                    flashVars = loaderInfo.parameters;
                    mediaPlayer = new MediaPlayer();
                    videoElement = new VideoElement(new URLResource(flashVars.file));
                    mediaContainer = new MediaContainer();

                    var layoutMetadata:LayoutMetadata = new LayoutMetadata();
                    layoutMetadata.width = Number(flashVars.width);
                    layoutMetadata.height = Number(flashVars.height);
                    videoElement.addMetadata(LayoutMetadata.LAYOUT_NAMESPACE, layoutMetadata);

                    mediaPlayer.media = videoElement;
                    mediaContainer.addMediaElement(videoElement);
                    addChild(mediaContainer);
                }
            }
        }

    Read the article

  • Pros and cons of MPMoviePlayerController versus launching UIWebView to stream movie

    - by Nosredna
    I have a client who has video content for the web in Flash format. My task is to help them show the videos in an iPhone app. I realize that step one is to get these videos into the appropriate QuickTime format for the iPhone. Then I'm going to have to help the client figure out how or where to host these files; if that's tricky, I assume they can be hosted on YouTube. My chief concern, though, is which approach to take to stream the video. What are the pros and cons of MPMoviePlayerController versus launching a UIWebView with the URL of the stream? Is there any difference? Is one of them more or less forgiving? Is one of them a better user experience? Are there any gotchas I might expect to run into? I'm assuming playing video is pretty easy on the iPhone. Is it reasonable to try both and have one available as a fallback, or would that be a waste of time? I'm trying to schedule this out a bit, so I'd love to hear real-world experiences from anyone who's done this.

    Read the article

  • Why do only YouTube embeds work on the iPad?

    - by Nagaraj Hubli
    I am trying to find out why YouTube embeds work just fine on the iPad, but embeds from other video sites do not. Example of a YouTube embed:

        <object width="640" height="385">
          <param name="movie" value="http://www.youtube.com/v/DlIU5TgwEFg&color1=0xb1b1b1&color2=0xcfcfcf&hl=en_US&feature=player_embedded&fs=1"></param>
          <param name="allowFullScreen" value="true"></param>
          <param name="allowScriptAccess" value="always"></param>
          <embed src="http://www.youtube.com/v/DlIU5TgwEFg&color1=0xb1b1b1&color2=0xcfcfcf&hl=en_US&feature=player_embedded&fs=1"
                 type="application/x-shockwave-flash" allowfullscreen="true" allowScriptAccess="always"
                 width="640" height="385"></embed>
        </object>

    Is this because the iPad has a native YouTube app with special support for YouTube embeds, or is it something handled by the script that gets executed by the YouTube embed code, which might check the user agent and then load an HTML5 video player with a source pointing to the H.264-encoded version of the video (is something of this sort possible)?

    Read the article

  • How to use libavformat for a separate encoder?

    - by Brendon Tsai
    I've built an encoder based on the QUALCOMM sample, which captures video and compresses it into an H.264 file. I am using Android 4.2.2. Now I want to add an MP4 muxer to this encoder (actually, just video will be fine; I don't need audio). I want to use FFmpeg, but after reading the example I found that the muxer was using FFmpeg's own encoder, and I don't know how to use the muxer part with a different encoder. I've read this post, but I don't understand how the code provides the video stream to the muxer. I think that's mainly because I don't understand this code:

        AVCodecContext *strmCodec = oFmtCtx->streams[0]->codec;
        // Fill the required properties for the codec context.
        // From the documentation:
        //   The user sets codec information, the muxer writes it to the output.
        //   Mandatory fields as specified in the AVCodecContext documentation
        //   must be set even if this AVCodecContext is not actually used for
        //   encoding.
        my_tune_codec(strmCodec);

    Can anyone give me a hint? Thank you!

    Read the article

  • How to make a diff of a BIOS or back up/restore all BIOS settings

    - by sfonck
    Hi, I'm using a Dell Precision M90 laptop which has an NVIDIA Quadro FX 2500M graphics card and is running Windows XP. The laptop had been running fine, but a few weeks ago the screen went 'white'. After restarting the computer, the BIOS and startup screens showed weird green dots and stripes, and a normal startup only showed a black screen... only VGA mode worked to display anything. I tried removing and reinstalling the correct drivers downloaded from Dell's website - no solution. I gave up and reinstalled XP - everything was working perfectly again. Two weeks later - again the white screen - I tried everything again (flashing a new BIOS as well - nothing worked). I reinstalled XP - everything was working again, so I made a DriveSnapShot image of the partition. Today - again the 'white screen'. OK, no problem... I was thinking all I needed to do was restore the DriveSnapShot backup... A few minutes later the backup was restored... but guess what: the video driver does not work correctly... Since DriveSnapShot restored the complete partition exactly as it was at a time when everything was working perfectly, this would mean my driver problems are due to 'settings' in the BIOS or on the graphics card itself, and that these 'settings' can get overridden by doing a fresh XP install.... I'm out of options; can somebody help me find a solution for this problem: Is there some way to back up and restore a BIOS after running into problems like this? Is there some way to find out what is causing this problem, like a BIOS diff utility? Thanks!

    Read the article

  • How do you make a Factory that can return derived types?

    - by Seth Spearman
    I have created a factory class called AlarmFactory, as such...

        class AlarmFactory
        {
            // The factory ensures that the correct alarm is returned, with the
            // right function pointer for the trigger creator.
            public static Alarm GetAlarm(AlarmTypes alarmType)
            {
                switch (alarmType)
                {
                    case AlarmTypes.Heartbeat:
                        HeartbeatAlarm alarm = HeartbeatAlarm.GetAlarm();
                        alarm.CreateTriggerFunction = QuartzAlarmScheduler.CreateMinutelyTrigger;
                        return alarm;
                        break;
                    default:
                        break;
                }
            }
        }

    HeartbeatAlarm is derived from Alarm. I am getting a compile error: "cannot implicitly convert type... An explicit conversion exists (are you missing a cast?)". How do I set this up to return a derived type? Seth

    Read the article

  • Convert Microsoft Visio Drawing (vsd) to PDF automatically

    - by nhinkle
    An open-source project I am working on uses Visio drawings for documentation, which are checked into source control. For those working on the project who don't own Visio, we have been converting the vsd files to PDFs so that they can still view them. It's not too difficult to save a copy as a PDF when making changes to the documentation, but we would like an automated way to do this conversion, so that we can set it up as a pre-checkin script in the SVN client. If anybody knows of a way to do this, either using something built-in to Visio, or with an outside script or command line tool, we would appreciate it. Edit: Thanks to the suggestion below, I have found the Visio Viewer 2010. This will be helpful for our contributors using Windows. We would still like to have the ability to create PDFs though, as there are readers available on every major operating system, and our contributors will not be using only Windows.
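
    For what it's worth, one scripted route is to drive a locally installed copy of Visio over COM and export each drawing as a PDF. The sketch below is untested and rests on two assumptions worth checking against the Visio object-model documentation: that Visio 2010 or later is installed (the ExportAsFixedFormat method appeared around that release), and that the enum constants have the values noted in the comments. The file paths are placeholders.

        # Hedged sketch: convert a .vsd to PDF via Visio's COM automation interface
        # using pywin32. Verify ExportAsFixedFormat's signature and the constant
        # values against the Visio object-model docs before relying on this.
        import win32com.client

        VIS_FIXED_FORMAT_PDF = 1     # assumed value of visFixedFormatPDF
        VIS_DOC_EX_INTENT_PRINT = 1  # assumed value of visDocExIntentPrint
        VIS_PRINT_ALL = 0            # assumed value of visPrintAll

        def vsd_to_pdf(vsd_path, pdf_path):
            visio = win32com.client.Dispatch("Visio.Application")
            visio.Visible = False
            try:
                doc = visio.Documents.Open(vsd_path)
                doc.ExportAsFixedFormat(VIS_FIXED_FORMAT_PDF, pdf_path,
                                        VIS_DOC_EX_INTENT_PRINT, VIS_PRINT_ALL)
                doc.Close()
            finally:
                visio.Quit()

        if __name__ == "__main__":
            vsd_to_pdf(r"C:\project\docs\diagram.vsd", r"C:\project\docs\diagram.pdf")

    A wrapper like this could then be called from the pre-checkin script mentioned above.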

    Read the article

  • [C#] How to convert a string encoded in Windows-1250 to Unicode?

    - by Deveti Putnik
    Hi! I am receiving strings in the Windows-1250 code page from a DLL (which is a wrapper around some external data source), and I would like to insert them correctly (as Unicode) into a table in a SQL Server database. Since the particular column in the database which should hold that data is of type NVarchar, I only need to convert the value to Unicode in my C# code and pass it as a parameter. Everything is well and nice, but I stumbled on the conversion step. I tried the following, but it doesn't work:

        private static String getUnicodeValue(string string2Encode)
        {
            Encoding srcEncoding = Encoding.GetEncoding("Windows-1250");
            UnicodeEncoding dstEncoding = new UnicodeEncoding();
            byte[] srcBytes = srcEncoding.GetBytes(string2Encode);
            byte[] dstBytes = dstEncoding.GetBytes(string2Encode);
            return dstEncoding.GetString(dstBytes);
        }

    When I insert the returned string into the table, I don't get the correct letters, such as Đ, đ, Č, č, Ć or ć. Please, help! :)
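
    Language aside, the shape of the problem is that the text needs to be decoded from Windows-1250 bytes exactly once, at the point where the raw bytes enter the program; calling GetBytes on a string the runtime already holds cannot recover information that was lost earlier. A small illustration of that idea in Python (not C#, and the byte string is just a stand-in for whatever the DLL returns):

        # Hedged illustration: decode raw Windows-1250 bytes into Unicode text.
        raw = "ČčĐđĆć".encode("cp1250")   # stand-in for the bytes coming from the DLL

        text = raw.decode("cp1250")       # correct: real Unicode code points
        print(text)                       # ČčĐđĆć

        # Decoding (or treating) the same bytes as a different code page is what
        # produces mangled letters like the ones described above:
        print(raw.decode("latin-1"))      # e.g. ÈèÐðÆæ

    In the C# version, the equivalent step would have to happen where the bytes are first read from the data source, before they ever become a System.String.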

    Read the article

  • multiple streaming servers behind a Bastion Host

    - by Bond
    I am using the open source streaming server Red5 on multiple servers, which are running behind a bastion host. The world knows these sites as http://site1.mydomain.com, http://site2.mydomain.com, http://site3.mydomain.com and http://site4.mydomain.com. To reach them, the front-end server uses an Apache reverse proxy. I also have video streaming on each of these websites using RTMP. To be able to reach the streaming server I embed JavaScript in the HTML pages as follows:

        <embed ..... var="rtmp://site1.my_domain.com" >

    The problem is that there are many sites - site1.mydomain.com, site2.mydomain.com, site3.mydomain.com and site4.mydomain.com - each on a separate physical server. Each of these four has its own Red5 installation, and the front end for all of them is a common bastion host. If I run RTMP on each of the subdomains on a different port, how will I make sure that a request such as rtmp://site1.mydomain.com or rtmp://site2.mydomain.com goes to its respective server from the front-end server? What do I need to handle in this case? IPTABLES came to mind instantly, but when someone on the internet requests rtmp://site1.mydomain.com from a client browser, how will I make sure this RTMP request is mapped to a port other than 1935, given that there are three other streaming servers which also have to respond to their respective requests?

    Read the article

  • convert integer to a string in a given numeric base in python

    - by Mark Borgerding
    Python allows easy creation of an integer from a string of a given base via int(str, base). I want to perform the inverse: creation of a string from an integer, i.e. I want some function int2base(num, base) such that:

        int( int2base( X , BASE ) , BASE ) == X

    The function name/argument order is unimportant, and this should hold for any number X and base BASE that int() will accept. This is an easy function to write - in fact it's easier to write than to describe in this question - however, I feel like I must be missing something. I know about the functions bin, oct and hex, but I cannot use them for a few reasons:

    - Those functions are not available on older versions of Python, with which I need compatibility (2.2).
    - I want a general solution that can be called the same way for different bases.
    - I want to allow bases other than 2, 8 and 16.

    Related:
    - Python elegant inverse function of int(string, base)
    - Integer to base-x system using recursion in Python
    - Base 62 conversion in Python
    - How to convert an integer to the shortest url-safe string in Python?
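
    For reference, a minimal sketch of such a function: repeated divmod by the base, collecting digits from least to most significant. The digit alphabet (0-9 then a-z, giving bases up to 36) is an assumption; swap in whatever alphabet int() should see.

        # Minimal sketch of int2base, the inverse of int(s, base) for bases 2..36.
        DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

        def int2base(num, base):
            if not 2 <= base <= len(DIGITS):
                raise ValueError("base must be in 2..36")
            if num == 0:
                return "0"
            sign = "-" if num < 0 else ""
            num = abs(num)
            out = []
            while num:
                num, rem = divmod(num, base)
                out.append(DIGITS[rem])
            out.reverse()
            return sign + "".join(out)

        # Round-trip property from the question:
        assert int(int2base(255, 16), 16) == 255
        assert int(int2base(-42, 7), 7) == -42

    (The conditional expression used for the sign requires Python 2.5+; on something as old as 2.2 it would need a plain if/else statement instead.)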

    Read the article

  • My EliteBook is not auto picking 1080p for external monitor, poor display on forcing

    - by Griever
    I'm connecting my Samsung LED S22A300B to my HP EliteBook 6930p through the VGA out. The laptop has an Intel 4500MHD video card, and I have the latest drivers installed for both the card and the monitor. Only 800x600 and 1024x768 are offered. A lot of other people get this problem when they use the laptop with a docking station, as discussed here, but I am not using any docking station. As advised on the aforementioned page, one of the things I tried was to force the resolution using Intel's "custom resolution" feature. I installed PowerStrip on my desktop, copied the advanced timing values (front/back porch, sync width, etc.) from there, and then used the same values to define a custom resolution in my laptop's Intel graphics utility. As a result I got the 1080p resolution, but the display is poor: text has a weird colored shadow, and sometimes images do too. What should I do?

    Read the article

  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms and have set up 1080p projectors, and I provide both HDMI and VGA connectivity: HDMI for DisplayPort and Mini DisplayPort, and VGA as a fallback, universal option. Contrary to what I expected, people seem to have much more trouble with HDMI than with VGA, so VGA gets used a lot more than you'd think (even though most workstation laptops made in the last 3-4 years have DisplayPort or Mini DisplayPort...). Also to my surprise, VGA carries 1080p over a 50 ft cable run with very minimal degradation on certain laptops; other laptops just don't offer 1080p as a resolution choice over VGA and top out at 1600x1200 or something else. Specific example: a ThinkPad W530 will do 1080p over VGA, a W520 won't (both do 1080p over DisplayPort/Mini DP). What determines which resolutions a laptop is willing to output over VGA? I'm thinking this will come down to either a video driver that says it supports only certain resolutions for output, or limitations of the RAMDAC (which wouldn't be in play, at least DAC-wise, on a digital output, but WOULD be on VGA, an analog output). The basic reason for the question is that I noticed that, say, a ThinkPad W520 with a built-in 1080p display will output 1080p fine over DisplayPort to a 1080p projector, but will cap out at 1600x1200 (practically the same pixel count, just a little shy) over VGA. Now, this wouldn't be surprising at all except that SOME laptops have no issue outputting 1080p over VGA, even with lower native resolutions. Why do I care? Well, if there's some way I could enable it... for situations where my users end up using VGA anyway, it's preferable for display mirroring if they can output their laptop's native resolution, which, you guessed it, is very often 1080p on 15" models. DISCLAIMER: This is primarily a curiosity; I'm not claiming 1080p over VGA is ideal by any means, but hey, if it works. I've seen HDMI start artifacting more over the same-length, same-gauge cabling (up to 50' runs in certain rooms). If you think this is better suited to Super User, please move it, but this is framed from an IT standpoint of something that affects a real pool of users in a multiple-conference-room, 50+ deployed-laptop scenario.

    Read the article

  • How to read and modify the colorspace of an image in c#

    - by Matthias
    I'm loading a Bitmap from a JPG file. If the image is not 24-bit RGB, I'd like to convert it, and the conversion should be fairly fast. The images I'm loading are huge (up to 9000x9000 pixels, with a compressed size of 40-50 MB). How can this be done? By the way: I don't want to use any external libraries if possible, but if you know of an open source utility class that performs the most common imaging tasks, I'd be happy to hear about it. Thanks in advance.

    Read the article

  • How can I get the Visual Studio 2010 conversion wizard to come back up?

    - by 2GDave
    When I first opened my website project with Visual Studio 2010, the conversion wizard came up and I said that I didn't want to convert the project. Now I'm ready to convert the project, but I can't find a shortcut or any way to get the wizard back. I tried removing the .suo file, and that didn't do it. If I go into the project properties I can switch the target framework to 4.0, but that tells me it's going to close and reopen the project and that I'll have to adjust the pages by hand - which doesn't seem like very much fun. Does anyone know how to get it to prompt again, or even a command line that would run it? Thank you!

    Read the article

  • strtod() and sprintf() inconsistency under GCC and MSVC

    - by Dmitry Sapelnikov
    I'm working on a cross-platform app for Windows and Mac OS X, and I have a problem with two standard C library functions: strtod() (string-to-double conversion) and sprintf() (when used for outputting double-precision floating point numbers). Their GCC and MSVC versions return different results. I'm looking for a well-tested, cross-platform, open-source implementation of those functions, or just for a pair of functions that would correctly and consistently convert a double to a string and back. I've already tried the GCC C library implementation, but the code is too long and too dependent on other source files, so I expect the adaptation to be difficult. What implementations of string-to-double and double-to-string functions would you recommend?

    Read the article

  • Unchecked_Conversion in Ada

    - by maddy
    Hi all, can anyone please make clear to me the use of unchecked conversion in the Ada language? I have tried the PDF and the net, but nothing gives me a clear picture. Now I have a small piece of code, shown below:

        subtype Element4_Range is integer range 1..4;
        subtype Element3_Range is integer range 1..3;
        subtype Myarr_Range is integer range 1..10;

        type Myarr3_Type is array (Myarr_Range) of Element3_Range;
        type Myarr4_Type is array (Myarr_Range) of Element4_Range;

        Myarr3 : Myarr3_Type;
        Myarr4 : Myarr4_Type := (1,2,3,3,1,3,2,1,2,1);

        Count_1 : Integer := 0;
        Count_2 : Integer := 0;
        Count_3 : Integer := 0;

        function To_Myarr3 is new Unchecked_Conversion(Myarr4_type, Myarr3_type);

    My doubt here is: what exactly does the function To_Myarr3 do? Thanks and regards, maddy

    Read the article

  • Codec Problems with trying to edit videos with VirtualDub

    - by Roy Rico
    So, I'm a little frustrated. According to this post and various other internet sources, VirtualDub is supposed to allow users to quickly split and join video files. I am using Windows 7 64-bit and the latest version of VirtualDub (64-bit). I have tried to edit various movie files, and each attempt has failed:

    - AVI file A.avi won't load, saying that it can't locate the decompressor for the "FMP4" format. I have tried this solution and this one, and neither of them works. I have tried setting the VFW decompressor for the "Other MPEG4" setting to XVID or LIBAVCODEC; there is no change in VirtualDub.
    - AVI file B.avi will load in VirtualDub, but any attempt to split it gives me an error that I don't have XviD codecs installed. I've attempted to install the proper codecs (Shark's Windows 7 Codecs, CCCP) with no change.
    - AVI file C.avi will load, and it will split, but it won't split using "Direct Stream Copy", claiming the compression algorithm is incompatible. I tried the "Fast Recompress" option and it created a 27 GB file out of what was supposed to be about a 300-400 MB file.

    Can someone please give me some insight into what I'm messing up?

    Read the article

  • Input system reference trouble

    - by Ockonal
    Hello, I'm using SFML for the input system in my application:

        size_t WindowHandle;
        WindowHandle = ...; // Here I get the handle

        sf::Window InputWindow(WindowHandle);
        const sf::Input *InputHandle = &InputWindow.GetInput(); // [x] Error

    On the last line I have to get a reference to the input system. Here is the declaration of GetInput from the documentation:

        const Input & sf::Window::GetInput () const

    The problem is:

        invalid conversion from ‘const sf::Input*’ to ‘sf::Input*’

    What's wrong?

    Read the article

  • How are integers converted to strings under the hood?

    - by CrazyJugglerDrummer
    I suppose the real question is how to convert base 2 (binary) to base 10. The most common application of this would probably be in creating strings for output: turning a chunk of binary numerical data into an array of characters. How exactly is this done? My guess: seeing as there probably isn't a string predefined for each numerical value, I'm guessing that the computer goes through each bit of the integer from right to left, each time incrementing the appropriate values in the char array / base-10 notation places. If we take the number 160 in binary (10100000), it would know that a 1 in the 8th place means 128, so it places 1 into the third column, 2 into the second, and 8 into the first. The 1 in the 6th column means 32, and it would add those values to the second and first places, carrying over if needed. After this it's an easy conversion to actual char codes.
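
    In practice, runtime libraries usually don't walk the bits one at a time; the standard approach is to repeatedly divide the whole integer by 10 (or by a large power of 10 for speed), collecting the remainders, which are exactly the decimal digits from least to most significant, and then mapping each digit to its character code. A minimal sketch of that idea:

        # Sketch of the usual algorithm: repeated division by 10, collecting the
        # remainders as digits, then converting each digit to its ASCII character.
        def to_decimal_string(n):
            if n == 0:
                return "0"
            sign = "-" if n < 0 else ""
            n = abs(n)
            digits = []
            while n:
                n, rem = divmod(n, 10)
                digits.append(chr(ord("0") + rem))  # digit value -> ASCII digit
            digits.reverse()
            return sign + "".join(digits)

        assert to_decimal_string(160) == "160"
        assert to_decimal_string(-42) == "-42"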

    Read the article

  • Intra-Unicode "lean" Encoding Converters

    - by Mystagogue
    Windows provides encoding conversion functions ("MultiByteToWideChar" and "WideCharToMultiByte") which are capable of UTF-8 to/from UTF-16 conversions, among other things. But I've seen people offer home-grown 30-to-40-line functions that also claim to perform UTF-8/UTF-16 encoding conversions. My question is: how reliable are such tiny converters? Can such a tiny amount of code handle problems such as converting a UTF-16 surrogate pair (i.e. a character outside the Basic Multilingual Plane) into a single four-byte UTF-8 sequence (rather than making the mistake of converting it into a pair of three-byte sequences)? Can they correctly spot "unpaired" surrogate input and report an error? In short, are such tiny converters mere toys, or can they be taken seriously? For that matter, why does unicode.org seemingly offer no advice on an algorithm for accomplishing such conversions?
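
    For concreteness, the surrogate-pair case amounts to a few lines of arithmetic that a correct converter must get right: combine the high and low surrogate into one code point above U+FFFF and emit a single 4-byte UTF-8 sequence for it. A small sketch of that step, shown in Python purely to illustrate the math:

        # The combination step a correct UTF-16 -> UTF-8 converter needs.
        def combine_surrogates(hi, lo):
            assert 0xD800 <= hi <= 0xDBFF, "not a high surrogate"
            assert 0xDC00 <= lo <= 0xDFFF, "not a low surrogate"
            return 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00)

        cp = combine_surrogates(0xD83D, 0xDE00)   # the UTF-16 pair for U+1F600
        assert cp == 0x1F600
        assert len(chr(cp).encode("utf-8")) == 4  # one four-byte sequence

        # The buggy shortcut (encoding each surrogate on its own) yields two
        # three-byte sequences, i.e. 6 bytes of CESU-8-style output instead.

    A converter that skips this step, or that accepts a high surrogate not followed by a low one without reporting an error, is exactly the kind of "toy" the question is worried about.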

    Read the article
