Search Results

Search found 2146 results on 86 pages for 'camera calibration'.


  • How to obtain a camera stream from Unity without rendering it to the player's screen?

    - by aiguy
    I'd like to stream the output of two cameras to a separate process. Right now, it looks like the best way to do that is to grab the rendered camera views from the screen via platform-specific screen-capture hooks, then compress them in real time with H.264. Is there a way to grab the cameras' output within Unity and avoid rendering them to the screen? One solution I'm considering involves using Unity's multiplayer capability to run the game on a separate machine and grab the frames from that machine's screen buffer, unbeknownst to the player.


  • How do I use MediaRecorder to record video without causing a segmentation fault?

    - by rabidsnail
    I'm trying to use android.media.MediaRecorder to record video, and no matter what I do, the Android runtime segmentation faults when I call prepare(). Here's an example:

        public void onCreate(Bundle savedInstanceState) {
            Log.i("video test", "making recorder");
            MediaRecorder recorder = new MediaRecorder();
            contentResolver = getContentResolver();
            try {
                super.onCreate(savedInstanceState);
                Log.i("video test", "--------------START----------------");
                SurfaceView target_view = new SurfaceView(this);
                Log.i("video test", "making surface");
                Surface target = target_view.getHolder().getSurface();
                Log.i("video test", target.toString());
                Log.i("video test", "new recorder");
                recorder = new MediaRecorder();
                Log.i("video test", "set display");
                recorder.setPreviewDisplay(target);
                Log.i("video test", "pushing surface");
                setContentView(target_view);
                Log.i("video test", "set audio source");
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                Log.i("video test", "set video source");
                recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
                Log.i("video test", "set output format");
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                Log.i("video test", "set audio encoder");
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                Log.i("video test", "set video encoder");
                recorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
                Log.i("video test", "set max duration");
                recorder.setMaxDuration(3600);
                Log.i("video test", "set on info listener");
                recorder.setOnInfoListener(new listener());
                Log.i("video test", "set video size");
                recorder.setVideoSize(320, 240);
                Log.i("video test", "set video frame rate");
                recorder.setVideoFrameRate(15);
                Log.i("video test", "set output file");
                recorder.setOutputFile(get_path(this, "foo.3gp"));
                Log.i("video test", "prepare");
                recorder.prepare();
                Log.i("video test", "start");
                recorder.start();
                Log.i("video test", "sleep");
                Thread.sleep(3600);
                Log.i("video test", "stop");
                recorder.stop();
                Log.i("video test", "release");
                recorder.release();
                Log.i("video test", "-----------------SUCCESS------------------");
                finish();
            } catch (Exception e) {
                Log.i("video test", e.toString());
                recorder.reset();
                recorder.release();
                Log.i("video test", "-------------------FAIL-------------------");
                finish();
            }
        }

        public static String get_path(Context context, String fname) {
            String path = context.getFileStreamPath("foo").getParentFile().getAbsolutePath();
            String res = path + "/" + fname;
            Log.i("video test", "path: " + res);
            return res;
        }

        class listener implements MediaRecorder.OnInfoListener {
            public void onInfo(MediaRecorder recorder, int what, int extra) {
                Log.i("video test", "Video Info: " + what + ", " + extra);
            }
        }
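
    A likely cause, offered as an educated guess rather than a confirmed diagnosis: MediaRecorder is a strict state machine (sources before the output format, encoders after it, prepare() last), and the code above calls setPreviewDisplay() before setting any source. Worse, the Surface of a SurfaceView constructed inside onCreate() does not exist yet; it only becomes valid once SurfaceHolder.Callback.surfaceCreated() fires, so prepare() is handed a dead surface and dies in native code instead of throwing. A minimal sketch of the documented ordering, assuming it runs after the surface exists:

        import java.io.IOException;
        import android.media.MediaRecorder;
        import android.view.SurfaceHolder;

        // Sketch of the documented MediaRecorder call order. Assumes it is invoked
        // from SurfaceHolder.Callback.surfaceCreated(), so the preview surface
        // already exists; "path" is wherever the .3gp should be written.
        class RecorderHelper {
            static MediaRecorder startRecording(SurfaceHolder holder, String path)
                    throws IOException {
                MediaRecorder recorder = new MediaRecorder();
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);         // 1. sources
                recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); // 2. container
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);    // 3. codecs
                recorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
                recorder.setVideoSize(320, 240);
                recorder.setVideoFrameRate(15);
                recorder.setOutputFile(path);                                   // 4. sink
                recorder.setPreviewDisplay(holder.getSurface());                // 5. preview
                recorder.prepare();                                             // 6. prepare...
                recorder.start();                                               // 7. ...then start
                return recorder;
            }
        }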


  • .NET photo processing component

    - by John Williams
    Hi folks! I'm looking for a .NET image processing component, or an open-source alternative, to automate the following tasks: photo capture (webcams and photo cameras), photo printing (grid/strip modes), applying photo effects, and saving photos. AtalaSoft DotImage is quite expensive; any other suggestions are welcome. Thanks, J


  • How to save a picture to a file?

    - by Peter vdL
    I'm trying to use a standard Intent that will take a picture, then allow approval or retake. Then I want to save the picture into a file. Here's the Intent I am using:

        Intent intent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(intent, 22);

    The docs at http://developer.android.com/reference/android/provider/MediaStore.html#ACTION_IMAGE_CAPTURE say: "The caller may pass an extra EXTRA_OUTPUT to control where this image will be written. If the EXTRA_OUTPUT is not present, then a small sized image is returned as a Bitmap object in the extra field. If the EXTRA_OUTPUT is present, then the full-sized image will be written to the Uri value of EXTRA_OUTPUT." I don't pass EXTRA_OUTPUT, so I expect to get a Bitmap object in the extra field of the Intent passed into onActivityResult() (for this request). But where/how do you extract it? Intent has a getExtras(), but that returns a Bundle, and a Bundle wants a key string to give you something back. What do you invoke on the Intent to extract the bitmap?
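
    The documented behaviour is that the small preview comes back in the result Intent under the extra key "data". A minimal sketch of the receiving side (hedged: the request code 22 matches the question; the file name and the rest are illustrative assumptions):

        import java.io.FileOutputStream;
        import java.io.IOException;
        import android.content.Intent;
        import android.graphics.Bitmap;
        import android.util.Log;

        // Inside the Activity that called startActivityForResult(intent, 22):
        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            if (requestCode == 22 && resultCode == RESULT_OK && data != null) {
                // The thumbnail Bitmap is stored under the extra key "data".
                Bitmap thumbnail = (Bitmap) data.getExtras().get("data");
                try (FileOutputStream out = openFileOutput("picture.png", MODE_PRIVATE)) {
                    thumbnail.compress(Bitmap.CompressFormat.PNG, 100, out);
                } catch (IOException e) {
                    Log.e("camera", "could not save picture", e);
                }
            }
        }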


  • Odd series of packets: how would I reproduce this behavior?

    - by JustSmith
    I recorded a series of HTTP packets that I can't programmatically recreate. The series goes like this:

        GET /axis-cgi/admin/param.cgi?action=list&group=Network.eth0.MACAddress,Properties.System.SerialNumber,DVTelTest,SightLogix.ProdShortName HTTP/1.1
        HTTP/1.1 200 OK (text/plain)
        GET /axis-cgi/admin/param.cgi?action=list&group=Properties.Image.Resolution HTTP/1.1
        HTTP/1.1 200 OK (text/plain)
        GET /axis-cgi/admin/param.cgi?action=update&Network.RTSP.ProtViewer=password HTTP/1.1
        GET /axis-cgi/admin/param.cgi?action=list&group=Event HTTP/1.1
        HTTP/1.1 200 OK (text/plain)
        GET /axis-cgi/admin/param.cgi?action=list&group=ImageSource.I0.Sensor HTTP/1.1
        HTTP/1.1 200 OK (text/plain)

    Notice the two GETs followed by one response. I thought the two GETs were going out at the same time, but there is no corresponding number of responses. Also, when trying to reproduce this pattern as the server, if I abort the first GET request, the client waits until it times out and then starts the request over without sending any other requests. What is happening here? How can I reproduce it?
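
    For what it's worth, the capture looks like plain HTTP/1.1 pipelining: the client writes the next GET on the same connection before reading the previous response, so requests and responses need not alternate one-for-one, and an aborted response stalls everything queued behind it. A minimal sketch of pipelining two requests from a client (the camera's address is a placeholder):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.Socket;

        // Sketch: two GETs written back-to-back on one connection before reading
        // either response; 192.168.1.90 stands in for the camera's address.
        public class PipelineClient {
            public static void main(String[] args) throws IOException {
                try (Socket s = new Socket("192.168.1.90", 80)) {
                    String req1 = "GET /axis-cgi/admin/param.cgi?action=update&Network.RTSP.ProtViewer=password HTTP/1.1\r\nHost: 192.168.1.90\r\n\r\n";
                    String req2 = "GET /axis-cgi/admin/param.cgi?action=list&group=Event HTTP/1.1\r\nHost: 192.168.1.90\r\n\r\n";
                    OutputStream out = s.getOutputStream();
                    out.write((req1 + req2).getBytes("US-ASCII")); // both in flight at once
                    out.flush();
                    BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                    for (String line; (line = in.readLine()) != null; ) {
                        System.out.println(line); // responses come back in request order
                    }
                }
            }
        }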


  • How do I open iPhone camera and restrict to a frame?

    - by Thomas
    Hello All: I'd like to know how to open the camera inside a pre-defined frame (not the entire screen). When the view loads, I have a box, and inside it I want to display what the camera sees. I don't want to snap a picture, just basically use the camera as a viewfinder. I have searched this site and have not yet found what I'm looking for. Please help. Thanks! Thomas


  • Help with IP cameras

    - by Deepa
    Hey guys, I have been given a task to find out the methods used to support many different IP cameras in a single piece of software that manages them, e.g. the methods or protocols used by Milestone's XProtect software. We are already dealing with DVR cameras; now we have to proceed with IP cameras. Can anyone give me an idea of the methods used, or some useful websites from which I can gather material on integrating IP cameras with our access control software? I need this urgently, so please help. Thanks, Deepa


  • Finding a simple object in a low-quality image

    - by Ramon Snir
    Hi, I want to do this in C# (or any other .NET language), but I'm not sure how: I have an image captured from a webcam, and I want to find a specific simple object in it (let's say a red circle with a black square in it). The red circle can look a bit different from time to time (because of shadows), and the square might sometimes be a bit brighter and even slightly rotated. Please help me!
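
    The usual first attempt is a plain colour threshold plus centroid, which ports directly to C# (System.Drawing.Bitmap offers the same per-pixel access). A sketch of that idea, shown here in Java; the thresholds are guesses that would need tuning for real lighting:

        import java.awt.image.BufferedImage;

        // Naive detector for "a red circle with a dark square inside": find the
        // centroid of red-ish pixels, then confirm the centre is dark. Sketch only.
        class RedCircleFinder {
            /** Returns {x, y} of the detected object's centre, or null. */
            static int[] find(BufferedImage img) {
                long sx = 0, sy = 0;
                int n = 0;
                for (int y = 0; y < img.getHeight(); y++) {
                    for (int x = 0; x < img.getWidth(); x++) {
                        int rgb = img.getRGB(x, y);
                        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                        if (r > 100 && r > 2 * g && r > 2 * b) { // "red-ish", shadow-tolerant
                            sx += x; sy += y; n++;
                        }
                    }
                }
                if (n < 50) return null;                          // no plausible circle
                int cx = (int) (sx / n), cy = (int) (sy / n);
                int c = img.getRGB(cx, cy);
                int lum = (((c >> 16) & 0xFF) + ((c >> 8) & 0xFF) + (c & 0xFF)) / 3;
                return lum < 80 ? new int[] { cx, cy } : null;    // dark-square check
            }
        }

    For rotation- and lighting-robust detection, template matching or Hough-circle detection from an OpenCV wrapper (Emgu CV on .NET, for example) is the usual next step.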


  • "Streaming" MJPEG using Python

    - by tyler
    I have a webcam that I want to do some image processing on using Python. It comes through as Motion-JPEG. I want to process the stream "live", and what I want to do is roughly this:

    1. Open the URL and start streaming data into a buffer
    2. Read x bytes (where x is the image size) into an image
    3. Process that image
    4. Display it in the result panel
    5. Return to step 2

    The problem is that, while I do have the resolution, I have no idea how many bytes to read. I've tried googling the M-JPEG specification but can't find anything on whether the images are separated by some header or not. Anybody have any ideas?
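
    One pointer, hedged since cameras differ: MJPEG over HTTP is usually served as multipart/x-mixed-replace, so there is no fixed byte count to read. Each part is a complete JPEG; you can honour the part's Content-Length header when the camera sends one, or simply scan the stream for the JPEG start-of-image (0xFF 0xD8) and end-of-image (0xFF 0xD9) markers. A sketch of the marker-scanning approach (shown in Java, but it translates line-for-line to Python; the URL is a placeholder):

        import java.io.BufferedInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.net.URL;

        // Sketch: carve individual JPEG frames out of an MJPEG HTTP stream by
        // scanning for the SOI (FF D8) and EOI (FF D9) markers.
        public class MjpegReader {
            public static void main(String[] args) throws IOException {
                InputStream in = new BufferedInputStream(
                        new URL("http://camera.local/video.mjpg").openStream());
                ByteArrayOutputStream frame = null;
                int prev = -1, cur;
                while ((cur = in.read()) != -1) {
                    if (prev == 0xFF && cur == 0xD8) {        // SOI: a frame begins
                        frame = new ByteArrayOutputStream();
                        frame.write(0xFF);
                    }
                    if (frame != null) {
                        frame.write(cur);
                        if (prev == 0xFF && cur == 0xD9) {    // EOI: frame complete
                            byte[] jpeg = frame.toByteArray();
                            System.out.println("frame: " + jpeg.length + " bytes");
                            frame = null;                     // hand jpeg off for processing
                        }
                    }
                    prev = cur;
                }
            }
        }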


  • How to track audio decibel values from UIImagePickerController?

    - by maddy
    Hi, I am developing an iPhone application that includes a UIImagePickerController with source type UIImagePickerControllerSourceTypeCamera. How can I get the audio level from the mic while this is up, show it on screen, and offer an option to turn it on and off? Please help me overcome this issue; I searched the UIImagePickerController documentation completely, and it does not have any audio property. Thanks in advance.


  • CaptureCameraDialog returns OK but does not save (Motorola ES400)

    - by Dominic
    OK, it seems like everyone in the world has issues with CaptureCameraDialog. In my case the result is OK, but when taking the photo a MessageBox that says "Error" appears and disappears in the blink of an eye, then returns to my app (so I don't have time to actually read the error). It has not saved the file, and it does not throw an error to my application. There is also another issue which is exactly the same as the one discussed here (yet none of the fixes work for me): http://www.pcreview.co.uk/forums/thread-4025602.php Does anyone know how to get the error message that the dialog box displays for an instant?


  • How to create a 2D map of a room from a few images/movie frames?

    - by Nils
    I'd like to create a simple 2D map of a room by taking pictures of the ceiling in all directions (360°, e.g. movie frames), recognizing the walls by edge detection, deleting other unwanted objects, concatenating the images at the right positions (cf. walls, panorama), and finally producing an approximate 2D map (as seen from above). Getting the scale would be another parameter, which might be useful. I have some ideas of my own at the moment, e.g. using the Sobel operator, but it would be interesting if somebody out there knows a project or some software (GPL/freeware preferred) that does this already, as I'm still looking for examples that might help me. Thanks.
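
    On the Sobel idea the question mentions: the gradient-magnitude pass itself is only a few lines, and thresholding its output gives candidate wall edges. A minimal sketch over a grayscale image; everything downstream (wall fitting, stitching, the map itself) is the hard part:

        import java.awt.image.BufferedImage;

        // Minimal Sobel gradient magnitude over a grayscale image (R == G == B).
        // Border pixels are left at 0 for simplicity.
        class Sobel {
            static int[][] gradientMagnitude(BufferedImage gray) {
                int w = gray.getWidth(), h = gray.getHeight();
                int[][] kx = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
                int[][] ky = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };
                int[][] out = new int[h][w];
                for (int y = 1; y < h - 1; y++) {
                    for (int x = 1; x < w - 1; x++) {
                        int gx = 0, gy = 0;
                        for (int j = -1; j <= 1; j++) {
                            for (int i = -1; i <= 1; i++) {
                                int lum = gray.getRGB(x + i, y + j) & 0xFF;
                                gx += kx[j + 1][i + 1] * lum;
                                gy += ky[j + 1][i + 1] * lum;
                            }
                        }
                        out[y][x] = (int) Math.min(255, Math.hypot(gx, gy));
                    }
                }
                return out;
            }
        }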


  • How do I use ffmpeg to take pictures with my web camera?

    - by user45583
    I want to use ffmpeg to store images taken by my USB web camera on Ubuntu 11.10. lsusb outputs:

        Bus 002 Device 003: ID 0c45:6028 Microdia Typhoon Easycam USB 330K (older)

    The camera works fine in Cheese, but I want a command-line tool so I can script it. However, if I try:

        ffmpeg -i /dev/v4l/by-id/usb-0c45_USB_camera-video-index0 image.jpg

    the output is (library build banner trimmed):

        ffmpeg version 0.7.3-4:0.7.3-0ubuntu0.11.10.1, Copyright (c) 2000-2011 the Libav developers
          built on Jan 4 2012 16:21:50 with gcc 4.6.1
        WARNING: library configuration mismatch
        [...identical --enable/--disable configuration lines repeated for each libav* library...]
        /dev/v4l/by-id/usb-0c45_USB_camera-video-index0: Invalid data found when processing input

    How do I make this work?
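
    One thing worth trying, hedged since it is untested on this exact camera: ffmpeg often cannot probe a raw V4L2 device node on its own, so name the input format explicitly, e.g. ffmpeg -f video4linux2 -s 640x480 -i /dev/video0 -vframes 1 image.jpg (device path and frame size are assumptions to adapt). A small Java wrapper that shells out to that command, in case the script grows beyond one line:

        import java.io.IOException;

        // Sketch: grab one frame from a V4L2 webcam by shelling out to ffmpeg,
        // telling it the input format explicitly with -f video4linux2.
        public class Snapshot {
            public static void main(String[] args) throws IOException, InterruptedException {
                Process p = new ProcessBuilder(
                        "ffmpeg", "-f", "video4linux2", "-s", "640x480",
                        "-i", "/dev/video0", "-vframes", "1", "image.jpg")
                        .inheritIO()   // show ffmpeg's own output
                        .start();
                System.exit(p.waitFor());
            }
        }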


  • Where is the Camera Codec feature available in Windows 8?

    - by Rowland Shaw
    If you try to install the Camera Codec Pack for Windows 7 on Windows 8, you get an error: "This version of the Microsoft Camera Codec Pack is not compatible with Windows 8 or Windows Server 2012. You can get the codec pack through Windows Update on Windows 8." However, I cannot see anything in Windows Update that would suggest I can download this, even as an optional update. Is it just that it is not yet live, as everything filters through the RTM process, or is it hidden away as something else?


  • Atmospheric scattering sky from space artifacts

    - by ollipekka
    I am in the process of implementing atmospheric scattering for a planet viewed from space. I have been using Sean O'Neil's shaders from http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html as a starting point. I have pretty much the same problem related to fCameraAngle, except with the SkyFromSpace shader as opposed to the GroundFromSpace shader, as here: http://www.gamedev.net/topic/621187-sean-oneils-atmospheric-scattering/

    I get strange artifacts with the sky-from-space shader when not using fCameraAngle = 1 in the inner loop. What is the cause of these artifacts? They disappear when fCameraAngle is limited to 1. I also seem to lack the hue that is present in O'Neil's sandbox (http://sponeil.net/downloads.htm).

    Camera position X=0, Y=0, Z=500. GroundFromSpace on the left, SkyFromSpace on the right.
    Camera position X=500, Y=500, Z=500. GroundFromSpace on the left, SkyFromSpace on the right.

    I've found that the camera angle seems to be handled very differently depending on the source. In the original shaders, the camera angle in SkyFromSpace is calculated as:

        float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;

    whereas in the GroundFromSpace shader the camera angle is calculated as:

        float fCameraAngle = dot(-v3Ray, v3Pos) / length(v3Pos);

    However, various sources online tinker with negating the ray. Why is this? Here is a C# Windows.Forms project that demonstrates the problem and that I've used to generate the images: https://github.com/ollipekka/AtmosphericScatteringTest/

    Update: I have found out from the ScatterCPU project on O'Neil's site that the camera ray is negated when the camera is above the point being shaded, so that the scattering is calculated from the point to the camera. Changing the ray direction does indeed remove the artifacts, but introduces other problems, as illustrated here:

    Furthermore, in the ScatterCPU project, O'Neil guards against situations where the optical depth for light is less than zero:

        float fLightDepth = Scale(fLightAngle, fScaleDepth);
        if (fLightDepth < float.Epsilon) { continue; }

    As pointed out in the comments, along with these new artifacts this still leaves the question: what is wrong with the images where the camera is positioned at 500, 500, 500? It feels like the halo is focused on a completely wrong part of the planet. One would expect the light to be closer to the spot where the sun hits the planet, rather than where it changes from day to night. The github project has been updated to reflect the changes in this update.


  • Calculating the position of an object with regard to the current position using OpenGL-like matrices

    - by spartan2417
    I have a first-person camera that collides with walls, and a small sphere in front of the camera, placed at the camera position plus a distance ahead. I cannot get the position of the sphere, but I do have the position of my camera, so I need to find the position of that point, or at the very least a way of calculating it from the camera position. Code:

        static Float P_z = 0;
        P_z = -15;
        PushMatrix();
        LoadMatrix(&Inv);
        Material(SCEGU_AMBIENT, 0x00000066);
        TranslateXYZ(0, 0, P_z);
        ScaleXYZ(0.1f, 0.1f, 0.1f);
        pointer.Render();
        PopMatrix();

    where Inv holds the camera position (Inv.w.x, Inv.w.z) and pointer is the sphere.
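
    The usual formula, hedged because matrix conventions differ between engines: the sphere sits at cameraPos + d * forward, where forward is the camera's view axis read out of the camera's world matrix (in a column-major OpenGL-style matrix, the negated third column; if Inv is the inverse/view matrix, use the third row of its rotation part instead). A sketch with a plain 4x4 array standing in for the matrix type:

        // Sketch: world-space point d units in front of the camera.
        // cam is a column-major camera-to-world matrix, cam[row][col], with the
        // view axis in column 2 and the position in column 3. Flip signs or
        // transpose to match your engine's convention.
        class CameraPoint {
            static float[] pointInFront(float[][] cam, float d) {
                float fx = -cam[0][2], fy = -cam[1][2], fz = -cam[2][2]; // forward
                float len = (float) Math.sqrt(fx * fx + fy * fy + fz * fz);
                fx /= len; fy /= len; fz /= len;                          // normalize
                return new float[] {
                    cam[0][3] + d * fx,  // position column plus scaled forward
                    cam[1][3] + d * fy,
                    cam[2][3] + d * fz,
                };
            }
        }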


  • Is there a quick way to convert camera raw files to DNG?

    - by dericke
    Before I switched from Windows to Ubuntu for my daily computing, I used Adobe's Photoshop Lightroom for processing photos from my DSLR. Adobe made it really straightforward to convert from proprietary camera raw files (in my case, .NEF) to DNG. I haven't found any way to convert NEF to DNG in Ubuntu yet. Most photography programs do process NEFs to JPG/TIFF/PNG/etc., but I'm looking for a converter for archival purposes. Are there any tools available, either standalone or built into another app, that can losslessly convert from NEF to DNG?

