Search Results

Search found 2146 results on 86 pages for 'camera calibration'.


  • Displaying Video using a Window Handle

    - by fergs
    I'm working on a C# wrapper for Dallmeier cameras and currently have a working wrapper. I can connect to a camera by passing a window handle (in my application it's a picture box handle); this handle is used to send video and messages. Once connected I can send the StartLiveView command, and a live video stream is then shown in the picture box. Can someone explain how this works just by giving the window handle? And how can I grab an image from this stream when PictureBox1.Image is null?


  • How to spot empty parking spaces?

    - by mithila
    I want to do a final-year B.Sc. project on parking space detection. Can anybody give me some links related to it? Any textbook, tutorial or anything? What would be the prerequisites for this project? What kind of skills (programming/math) are needed? What are the initial steps? What kind of reading (image-processing algorithms) is needed? Detail added in comments: I'm going to use a camera, not infrared. I would like to use still images, or one camera that captures images of a parking lot. I think real-time processing will be tough; at this moment I just need to start the project, so still images will work fine, but later it may become a real-time project.
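
    For a starting point, here is a minimal occupancy-check sketch in Python with OpenCV. It assumes you mark each parking-spot rectangle by hand once and keep one reference photo of the empty lot; the file names, rectangles and threshold are made-up placeholders that would need tuning for a real lot.

        import cv2
        import numpy as np

        # Hypothetical hand-marked parking spots as (x, y, w, h) rectangles.
        SPOTS = [(40, 80, 60, 120), (110, 80, 60, 120)]

        empty = cv2.imread("empty_lot.jpg", cv2.IMREAD_GRAYSCALE)  # reference: lot with no cars
        frame = cv2.imread("current.jpg", cv2.IMREAD_GRAYSCALE)    # current still image

        for i, (x, y, w, h) in enumerate(SPOTS):
            ref_patch = empty[y:y + h, x:x + w]
            cur_patch = frame[y:y + h, x:x + w]
            # Mean absolute difference against the empty reference; a parked car changes the patch a lot.
            diff = float(np.mean(cv2.absdiff(ref_patch, cur_patch)))
            print("spot", i, "occupied" if diff > 25 else "free", round(diff, 1))

    Edge density inside each rectangle (e.g. cv2.Canny) is another common cue that is less sensitive to lighting changes than raw pixel differences.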


  • Computing orientation of a square and displaying an object with the same orientation

    - by Robin
    Hi, I wrote an application which detects a square within an image. To give you an idea of what an image containing such a square, in this case a marker, looks like: (image omitted). What I get after the detection are the coordinates of the four corners of my marker. Now I don't know how to display an object on my marker. The object should have the same rotation/angle/direction as the marker. Are there any papers on how to achieve that, or any algorithms I can use that have proved to be pretty solid/working? It doesn't need to be a working solution; it could be a simple description of how to achieve it or something similar. If you point me at a library, it should work under Linux; Windows is not needed but would be nice in case I need to port the application at some point. I already looked at the ARToolKit, but it uses camera parameter files and more complex matrices, while I only have the four corner points and a single image instead of a whole video/camera stream.
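
    Four corner points are actually enough to recover a full pose if you are willing to assume (or roughly estimate) the camera intrinsics: treat the marker as a unit square on the Z=0 plane and solve a perspective-n-point problem. Below is a minimal sketch with OpenCV in Python; the pixel coordinates and the guessed camera matrix are made-up placeholders.

        import cv2
        import numpy as np

        # Detected marker corners in the image, in a consistent order (placeholder values).
        img_pts = np.array([[210, 120], [390, 135], [380, 310], [200, 295]], dtype=np.float32)

        # The same corners on the physical marker: a 1x1 square lying on the Z=0 plane.
        obj_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float32)

        # Rough pinhole intrinsics guessed from the image size; a calibrated matrix works better.
        w, h = 640, 480
        K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)

        ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
        R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation of the marker relative to the camera
        print(R)
        print(tvec)

    R and tvec give the model-view transform for drawing an object on the marker; marker-tracking libraries do roughly this internally, and their camera parameter files are essentially a stored K plus distortion coefficients.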


  • Video packet capture over multiple IP cameras

    - by nimals1986
    Hello. We are working on a C application, a simple RTSP/RTP client that records video from a number of Axis cameras. We launch a pthread for each camera, which establishes the RTP session and begins to record the packets captured using the recvfrom() call. A single camera on a single pthread records fine for well over a day without issues, but when testing with more cameras, about 25 (so 25 pthreads), recording to file goes fine for 15 to 20 minutes and then just stops; the application keeps running. We have been trying varied implementations for over a month and a half, but nothing seems to help. Please provide suggestions. We are using the CentOS 5 platform.


  • How to efficiently track my geolocation while traveling using an iPhone

    - by Peter Kruithof
    I'm going to travel through Thailand and I want to keep track of my location so I can afterwards geotag photos taken with a digital camera (the iPhone's camera is not good enough). Two things are important here: I don't want to update manually, and I want the battery to last as long as possible, since chances to charge will be scarce. I've thought about creating a web page that periodically sends my geolocation to a script that stores it in a database, but I don't know if GPS data is available in Mobile Safari. Second, I want the data I send to be as small as possible, and the update frequency as low as possible, because of the pricing of mobile data usage abroad. Any suggestions for a good solution here?


  • H.264 over RTP - Identify SPS and PPS Frames

    - by Toby
    I have a raw H.264 stream from an IP camera, packed in RTP frames. I want to get the raw H.264 data into a file so I can convert it with ffmpeg. From what I found, the raw H.264 file has to look like this:

        00 00 01 [SPS]
        00 00 01 [PPS]
        00 00 01 [NAL byte] [PAYLOAD RTP Frame 1]   // payload always without the first 2 bytes -> NAL
                            [PAYLOAD RTP Frame 2]
                            [... until a payload with the marker bit is received]
        // from here on it's a new video frame
        00 00 01 [NAL byte] [PAYLOAD RTP Frame 1]
        ...

    I get the SPS and the PPS from the Session Description Protocol in my preceding RTSP communication. Additionally the camera sends the SPS and the PPS in two separate messages before starting the video stream itself, so I capture the messages in this order:

        1. Preceding RTSP communication (including SDP with SPS and PPS)
        2. RTP frame with payload: 67 42 80 28 DA 01 40 16 C4   // this is the SPS
        3. RTP frame with payload: 68 CE 3C 80                  // this is the PPS
        4. RTP frame with payload: ...                          // video data

    Then come some frames with payload, and at some point an RTP frame with the marker bit = 1. This means (if I got it right) that I have a complete video frame. After this I write the prefix sequence (00 00 01) and the NAL from the payload again and carry on with the same procedure. Now my camera sends the SPS and the PPS again after every 8 complete video frames (again in two RTP frames, as in the example above). I know that especially the PPS can change during streaming, but that's not the problem. My questions are: 1. Do I need to write the SPS/PPS before every 8th video frame? If my SPS and PPS don't change, should it be enough to write them once at the very beginning of the file? 2. How do I distinguish between SPS/PPS and normal RTP frames? In my C++ code which parses the transmitted data I need to tell the RTP frames carrying normal payload apart from the ones carrying the SPS/PPS. How can I distinguish them? The SPS/PPS frames are usually much smaller, but that's not a safe thing to rely on, because if I ignore them I need to know which data I can throw away, and if I need to write them I have to put the 00 00 01 prefix in front of them. Or is it a fixed rule that they occur every 8th video frame?
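
    For question 2 there is a check that does not depend on payload size or on the every-8th-frame pattern: the low five bits of the first NAL byte are the nal_unit_type (7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice). A small Python sketch of the check, using the example payloads above:

        # nal_unit_type lives in the low five bits of the first NAL byte (H.264 / RFC 6184).
        NAL_NAMES = {1: "non-IDR slice", 5: "IDR slice", 6: "SEI", 7: "SPS", 8: "PPS"}

        def classify_nal(payload: bytes) -> str:
            nal_type = payload[0] & 0x1F
            return NAL_NAMES.get(nal_type, "other (%d)" % nal_type)

        print(classify_nal(bytes.fromhex("67428028da014016c4")))  # -> SPS
        print(classify_nal(bytes.fromhex("68ce3c80")))            # -> PPS

    (If the camera ever sends fragmentation units, type 28, the real type sits in the low five bits of the second byte instead.) For question 1: if the in-band SPS/PPS are byte-identical to the ones from the SDP, writing them once at the start of the file is normally enough for a decoder such as ffmpeg; repeating them before each IDR frame is harmless and is what many muxers do anyway.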


  • Streaming webcam video in Flash using MP4 encoding

    - by Herms
    One of the features of the Flash app I'm working on is the ability to stream a webcam to others. We're just using the built-in webcam support in Flash and sending it through FMS. We've had some people ask for higher quality video, but we're already using the highest quality setting we can in Flash (setting quality to 100%). My understanding is that the newer Flash players added support for MPEG-4 encoding of the videos. I created a simple test Flex app to try to compare the video quality of the MP4 vs FLV encodings. However, I can't seem to get MP4 to work at all. According to the Flex documentation, the only thing I need to do to use MP4 instead of FLV is prepend "mp4:" to the name of the stream when calling publish: "Specify the stream name as a string with the prefix mp4: with or without the filename extension. The prefix indicates to the server that the file contains H.264-encoded video and AAC-encoded audio within the MPEG-4 Part 14 container format." When I try this, nothing happens. I don't get any events raised on the client side, no exceptions are thrown, and my logging on the server side doesn't show any streams starting. Here's the relevant code:

        // These are all defined and created within the class.
        private var nc:NetConnection;
        private var sharing:Boolean;
        private var pubStream:NetStream;
        private var format:String;
        private var streamName:String;
        private var camera:Camera;

        // called when the user clicks the start button
        private function startSharing():void {
            if (!nc.connected) {
                return;
            }
            if (sharing) {
                return;
            }
            if (pubStream == null) {
                pubStream = new NetStream(nc);
                pubStream.attachCamera(camera);
            }
            startPublish();
            sharing = true;
        }

        private function startPublish():void {
            var name:String;
            if (this.format == "mp4") {
                name = "mp4:" + streamName;
            } else {
                name = streamName;
            }
            //pubStream.publish(name, "live");
            pubStream.publish(name, "record");
        }


  • Android: Memory leak due to AsyncTask

    - by Manu
    Hello, I'm stuck with a memory leak that I cannot fix. I identified where it occurs using the Memory Analyzer, but I'm struggling in vain to get rid of it. Here is the code:

        public class MyActivity extends Activity implements SurfaceHolder.Callback {
            ...
            Camera.PictureCallback mPictureCallbackJpeg = new Camera.PictureCallback() {
                public void onPictureTaken(byte[] data, Camera c) {
                    try {
                        // log the action
                        Log.e(getClass().getSimpleName(), "PICTURE CALLBACK JPEG: data.length = " + data);
                        // Show the ProgressDialog on this thread
                        pd = ProgressDialog.show(MyActivity.this, "", "Préparation", true, false);
                        // Start a new thread that will manage the capture
                        new ManageCaptureTask().execute(data, c);
                    } catch (Exception e) {
                        AlertDialog.Builder dialog = new AlertDialog.Builder(MyActivity.this);
                        ...
                        dialog.create().show();
                    }
                }

                class ManageCaptureTask extends AsyncTask<Object, Void, Boolean> {

                    protected Boolean doInBackground(Object... args) {
                        Boolean isSuccess = false;
                        // initialize the bitmap before the capture
                        ((myApp) getApplication()).setBitmapX(null);
                        try {
                            // Check if it is a real device or an emulator
                            TelephonyManager telmgr = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
                            String deviceID = telmgr.getDeviceId();
                            boolean isEmulator = "000000000000000".equalsIgnoreCase(deviceID);
                            // get the bitmap
                            if (isEmulator) {
                                ((myApp) getApplication()).setBitmapX(BitmapFactory.decodeFile(imageFileName));
                            } else {
                                ((myApp) getApplication()).setBitmapX(BitmapFactory.decodeByteArray((byte[]) args[0], 0, ((byte[]) args[0]).length));
                            }
                            ((myApp) getApplication()).setImageForDB(ImageTools.resizeBmp(((myApp) getApplication()).getBmp()));
                            // convert the bitmap into a grayscale image and display it in the preview
                            ((myApp) getApplication()).setImage(makeGrayScale());
                            isSuccess = true;
                        } catch (Exception connEx) {
                            errorMessageFromBkgndThread = getString(R.string.errcapture);
                        }
                        return isSuccess;
                    }

                    protected void onPostExecute(Boolean result) {
                        // Pass the result data back to the main activity
                        if (MyActivity.this.pd != null) {
                            MyActivity.this.pd.dismiss();
                        }
                        if (result) {
                            ((ImageView) findViewById(R.id.apercu)).setImageBitmap(((myApp) getApplication()).getBmp());
                            ((myApp) getApplication()).setBitmapX(null);
                        } else {
                            // there was an error
                            ErrAlert();
                        }
                    }
                }
            };

            private void ErrAlert() {
                // notify the user about the error
                AlertDialog.Builder dialog = new AlertDialog.Builder(this);
                ...
                dialog.create().show();
            }
        }

    The Memory Analyzer points at the memory leak here:

        ((myApp) getApplication()).setBitmapX(BitmapFactory.decodeByteArray((byte[]) args[0], 0, ((byte[]) args[0]).length));

    I am grateful for any suggestion, thank you in advance.


  • Comparing images using SIFT

    - by Luís Fernando
    I'm trying to compare two images taken with a digital camera. Since there may be movement of the camera, I want to first make the pictures "match" and then compare them (using some distance function). To match them, I'm thinking about cropping the second picture and using SIFT to find it inside the first picture. There will probably be a small difference in scale/translation/rotation, so I'd then need to find the transformation matrix that converts image 1 to image 2 (based on the points found by SIFT). Any ideas on how to do that (or I guess this is a common problem that may have some open-source implementation)? Thanks.
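
    For reference, OpenCV ships all the pieces for this pipeline: match SIFT descriptors, keep the good matches with a ratio test, and estimate the transformation with RANSAC (a homography works when the scene is roughly planar or the camera mostly rotated). A rough sketch in Python, assuming placeholder file names and an OpenCV build that includes SIFT:

        import cv2
        import numpy as np

        img1 = cv2.imread("shot1.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("shot2.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Ratio-test matching, then a RANSAC homography that maps image 1 onto image 2.
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]

        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Warp image 1 into image 2's frame, then compare with any pixel-wise distance.
        aligned = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
        print(float(np.mean(cv2.absdiff(aligned, img2))))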


  • Xcode: Application name in OS X cannot be localized?

    - by Andrew Chang
    I have a project named "Multi-Camera Supervisor". I made the "MainMenu.xib" file localized, and the menu bar looks correct in the localized nib files in Xcode, both for English and for Japanese (screenshots omitted). But when I run my application from Xcode, the first item doesn't work: in both the English and the Japanese menu bars the application name is still "Multi-Camera Supervisor". Meanwhile, the application name that appears under the Dock icon is not localized either. How should I solve this? How can I localize the application name not only in the main menu but also in the Dock?


  • Compressing three individual jpeg pics containing temporal redundancy?

    - by michael
    I am interfacing an embedded device with a camera module that returns a single JPEG-compressed frame each time I trigger it. I would like to take three successive shots (approx. 1 frame per 1/4 second) and further compress the images into a single file. The assumption here is that there is a lot of temporal redundancy, and therefore lots of room for more compression across the three frames (compared to sending three separate JPEG images). I will be implementing the solution on an embedded device in C, without any libraries and no OS. The camera will be taking pictures in an area with very little movement (no visitors or screens in the background, maybe a tree with swaying branches), so I think my assumption about redundancy is pretty solid. When the file is finally viewed on a PC/Mac, I don't mind having to write something to extract the three frames (so it can be a non-standard kludge). So I guess the actual question is: what is the best way to compress these three images together, given that they are already in JPEG format (it is possible to convert back to a raw image, but only if I have to)?
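
    One common trick is to keep the first frame as an ordinary JPEG and store the later frames as differences against it; with a mostly static scene the difference image is nearly flat and compresses far better. The Python sketch below only illustrates the idea for sizing experiments on a PC (the embedded C version would have to do the subtraction on decoded pixel or DCT data itself); the file names are placeholders and the 128 offset just keeps negative differences representable.

        import io
        from PIL import Image, ImageChops

        frames = [Image.open(p).convert("L") for p in ("f0.jpg", "f1.jpg", "f2.jpg")]

        def jpeg_bytes(img, quality=75):
            buf = io.BytesIO()
            img.save(buf, "JPEG", quality=quality)
            return buf.getvalue()

        base = jpeg_bytes(frames[0])
        deltas = []
        for f in frames[1:]:
            # Difference against frame 0, shifted by 128 so negative values survive;
            # reconstruct later as frame ~= frame0 + (diff - 128).
            diff = ImageChops.subtract(f, frames[0], scale=1.0, offset=128)
            deltas.append(jpeg_bytes(diff))

        print(len(base), [len(d) for d in deltas])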


  • Unreachable code detected when using const variables

    - by Anton Roth
    I have the following code:

        private const FlyCapture2Managed.PixelFormat f7PF = FlyCapture2Managed.PixelFormat.PixelFormatMono16;

        public PGRCamera(ExamForm input, bool red, int flags, int drawWidth, int drawHeight) {
            if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono8) {
                bpp = 8;                                 // unreachable warning
            } else if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono16) {
                bpp = 16;
            } else {
                MessageBox.Show("Camera misconfigured"); // unreachable warning
            }
        }

    I understand why this code is unreachable, but I don't want the warning to appear, since this is a compile-time configuration that only needs a change to the constant to test different settings, and the bits per pixel (bpp) change depending on the pixel format. Is there a good way to have just one variable be constant and derive the other from it, without producing an unreachable-code warning? Note that I need both values: on startup the camera needs to be configured to the proper pixel format, and my image-understanding code needs to know how many bits the image has. So, is there a good workaround, or do I just live with this warning?


  • EXC_BAD_ACCESS on iPhone (with debugger screenshot)

    - by VansFannel
    Hello. I'm developing an iPhone application that shows the camera's view with this code:

        - (void)displayAR {
            [rootViewController presentModalViewController:[self cameraController] animated:NO];
            [displayView setFrame:[[[self cameraController] view] bounds]];
        }

    and hides the camera's view with this code:

        - (void)hideAR {
            [[self locationManager] stopUpdatingHeading];
            [[self locationManager] stopUpdatingLocation];
            [[self accelerometerManager] release];
            [rootViewController dismissModalViewControllerAnimated:YES];
        }

    When I call hideAR, I get an EXC_BAD_ACCESS (debugger screenshot omitted). Any advice?


  • Calculating rotation and translation matrices between two odometry positions for monocular linear triangulation

    - by user1298891
    Recently I've been trying to implement a system to identify and triangulate the 3D position of an object in a robotic system. The general outline of the process goes as follows:

        1. Identify the object using SURF matching, from a set of "training" images to the actual live feed from the camera.
        2. Move/rotate the robot a certain amount.
        3. Identify the object using SURF again in this new view.

    Now I have a set of corresponding 2D points (the same object from the two different views), two odometry locations (position + orientation), and the camera intrinsics (focal length, principal point, etc.), since the camera has been calibrated beforehand. So I should be able to create the two projection matrices and triangulate using a basic linear triangulation method as in Hartley & Zisserman's book Multiple View Geometry, p. 312: solve the AX = 0 equation for each of the corresponding 2D points, then take the average. In practice, the triangulation only works when there's almost no change in rotation; if the robot rotates even a slight bit while moving (due to e.g. wheel slippage) then the estimate is way off. This also applies in simulation. Since I can only post two hyperlinks, here's a link to a page with images from the simulation (on the map, the red square is the simulated robot position and orientation, and the yellow square is the estimated position of the object using linear triangulation). You can see that the estimate is thrown way off even by a little rotation, as in Position 2 on that page (that was 15 degrees; if I rotate any more then the estimate is completely off the map), even in a simulated environment where a perfect calibration matrix is known. In a real environment, when I actually move around with the robot, it's worse. There aren't any problems with obtaining point correspondences, nor with actually solving the AX = 0 equation once I compute the A matrix, so I figure it probably has to do with how I'm setting up the two camera projection matrices, specifically how I'm calculating the translation and rotation matrices from the position/orientation info I have relative to the world frame. How I'm doing that right now:

        - The rotation matrix is composed by creating a 1x3 matrix [0, (change in orientation angle), 0] and converting it to a 3x3 one using OpenCV's Rodrigues function.
        - The translation matrix is composed by rotating the two points by (start angle) degrees and then subtracting the final position from the initial position, in order to get the robot's straight and lateral movement relative to its starting orientation.

    This results in the first projection matrix being K [I | 0] and the second being K [R | T], with R and T calculated as described above. Is there anything I'm doing really wrong here? Or could it possibly be some other problem? Any help would be greatly appreciated.
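
    One frequent source of exactly this symptom is building the translation in the world frame instead of in the second camera's frame. With camera 1 taken as the reference, the second projection matrix needs R = R2^T * R1 and t = R2^T * (C1 - C2), where Ri and Ci are each camera's orientation and center from odometry. Below is a numpy/OpenCV sketch under the assumption of a planar robot with a level, forward-looking camera; the axis mapping and the sample numbers are placeholders that depend on how the camera is actually mounted.

        import cv2
        import numpy as np

        def yaw_to_R(theta):
            # Rotation about the camera's vertical (y) axis for a yaw of theta radians.
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

        def relative_pose(pos1, yaw1, pos2, yaw2):
            R1, R2 = yaw_to_R(yaw1), yaw_to_R(yaw2)
            C1 = np.array([pos1[0], 0.0, pos1[1]])  # odometry (x, y) mapped onto camera x/z axes
            C2 = np.array([pos2[0], 0.0, pos2[1]])
            R = R2.T @ R1                           # camera-1 frame -> camera-2 frame
            t = R2.T @ (C1 - C2)
            return R, t.reshape(3, 1)

        K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
        R, t = relative_pose((0.0, 0.0), 0.0, (0.3, 0.1), np.deg2rad(15))
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])

        # pts1, pts2 stand in for the matched pixel coordinates from the two views (2xN arrays).
        pts1 = np.array([[320.0, 300.0], [250.0, 260.0]]).T
        pts2 = np.array([[280.0, 305.0], [210.0, 262.0]]).T
        X = cv2.triangulatePoints(P1, P2, pts1, pts2)
        print((X[:3] / X[3]).T)                     # 3D points in camera-1 coordinates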


  • data between pages: $_SESSION vs. $_GET ?

    - by Haroldo
    OK, firstly this is not about forms; this is about a consistent layout as a user explores a site. Let me explain: imagine a (non-AJAX) digital camera online store, and say someone was in the DSLR section and chose to view the cameras in gallery mode, ordered by price. They then click through to the compact cameras page. It would be in the user's interest if the "views" they selected were carried over to this new page. Now, I'd say use a session - am I wrong? Are there performance issues I should be aware of for a few small session vars (i.e. view=1, orderby=price)?


  • Using cProfile results with KCacheGrind

    - by Adam Luchjenbroers
    I'm using cProfile to profile my Python program. Based upon this talk I was under the impression that KCacheGrind could parse and display the output from cProfile. However, when I go to import the file, KCacheGrind just displays an "Unknown File Format" error in the status bar and sits there displaying nothing. Is there something special I need to do before my profiling stats are compatible with KCacheGrind?

        ...
        if profile:
            import cProfile
            profileFileName = 'Profiles/pythonray_' + time.strftime('%Y%m%d_%H%M%S') + '.profile'
            profile = cProfile.Profile()
            profile.run('pilImage = camera.render(scene, samplePattern)')
            profile.dump_stats(profileFileName)
            profile.print_stats()
        else:
            pilImage = camera.render(scene, samplePattern)
        ...

    Package versions: KCacheGrind 4.3.1, Python 2.6.2
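
    KCacheGrind reads the callgrind format, not cProfile's binary pstats dump, so the file has to be converted first. One route is the third-party pyprof2calltree package (the older lsprofcalltree.py script does the same job); the file names below are placeholders matching the naming scheme above.

        # pip install pyprof2calltree
        from pyprof2calltree import convert

        convert('Profiles/pythonray_20100101_120000.profile', 'callgrind.out.pythonray')

        # equivalently, from the shell:
        #   pyprof2calltree -i Profiles/pythonray_20100101_120000.profile -o callgrind.out.pythonray
        # then open callgrind.out.pythonray in KCacheGrind.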


  • Merging photo textures - (from calibrated cameras) - projected onto geometry

    - by freakTheMighty
    I am looking for papers/algorithms for merging projected textures onto geometry. To be more specific: given a set of fully calibrated cameras/photographs and geometry, how can we define a metric for choosing which photograph should be used to texture a given patch of the geometry? I can think of a few attributes one may seek to minimize, including the angle between the surface normal and the camera, the distance of the camera from the surface, as well as some parameterization of sharpness. The question is how these things get combined, and whether there are well-established existing solutions.
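
    As a rough starting point, those attributes are often folded into a single per-patch, per-camera score, and either the best camera is picked or the top few are blended with the scores as weights. A small numpy sketch of such a heuristic; the exponents and weights here are made up and would need tuning against real data.

        import numpy as np

        def view_score(patch_center, patch_normal, cam_center, cam_forward):
            """Heuristic quality of texturing one patch from one calibrated camera (bigger is better)."""
            to_cam = cam_center - patch_center
            dist = np.linalg.norm(to_cam)
            to_cam = to_cam / dist
            facing = max(float(np.dot(patch_normal, to_cam)), 0.0)   # cos(angle between normal and view ray)
            frontal = max(float(np.dot(-cam_forward, patch_normal)), 0.0)
            return (facing ** 2) * frontal / (1.0 + dist ** 2)

        s = view_score(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                       np.array([0.2, 0.1, 2.0]), np.array([0.0, 0.0, -1.0]))
        print(s)

    A sharpness term (e.g. gradient energy in the projected image region) can be added the same way, and seams between patches textured from different cameras are usually cleaned up afterwards by blending along the boundaries.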


  • Design pattern for mouse interaction

    - by mike
    I need some opinions on what is the "ideal" design pattern for general mouse interaction. Here is the simplified problem: I have a small 3D program (Qt and OpenGL) and I use the mouse for interaction. Every interaction is normally not just a single function call; it is mostly performed by up to 3 function calls (initiate, perform, finalize). For example, camera rotation: here the initial function call will deliver the current first mouse position, whereas the performing function calls will update the camera, etc. However, for only a couple of interactions, hardcoding these (inside mousePressEvent, mouseReleaseEvent, mouseMoveEvent, mouseWheelEvent, etc.) is not a big deal, but if I think about a more advanced program (e.g. 20 or more interactions) then a proper design is needed. Therefore, how would you design such interactions in Qt? I hope I made my problem clear enough; otherwise feel free to complain :-) Thanks
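
    One pattern that scales well here is a small "tool" (state/strategy) object per interaction, with the three phases as its interface; the widget only stores which tool is currently active and forwards the Qt mouse events to it. A language-agnostic sketch, written in Python for brevity; camera.orbit is a hypothetical camera API.

        class Tool:
            """One object per interaction: rotate camera, pan, pick, ..."""
            def begin(self, pos): pass    # initiate
            def update(self, pos): pass   # perform
            def end(self, pos): pass      # finalize

        class RotateCamera(Tool):
            def __init__(self, camera):
                self.camera, self.last = camera, None
            def begin(self, pos):
                self.last = pos           # remember the first mouse position
            def update(self, pos):
                dx, dy = pos[0] - self.last[0], pos[1] - self.last[1]
                self.camera.orbit(dx, dy) # hypothetical camera call
                self.last = pos
            def end(self, pos):
                self.last = None

        # The widget keeps exactly one active tool and forwards the Qt events:
        #   mousePressEvent   -> active_tool.begin(event.pos())
        #   mouseMoveEvent    -> active_tool.update(event.pos())
        #   mouseReleaseEvent -> active_tool.end(event.pos())
        # Adding a 21st interaction then means adding a class, not another branch in the event handlers.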


  • Conceptually, how does replay work in a game?

    - by SnOrfus
    I was kind of curious as to how replay might be implemented in a game. Initially, I thought there would just be a command list of every player/AI action taken in the game, which then 're-plays' the game and lets the engine render as usual. However, I have looked at replays in FPS/RTS games, and upon careful inspection even things like particles and graphical/audible glitches are consistent (and those glitches are generally *in*consistent). So how does this happen? In fixed-camera-angle games I thought it might just write every frame of the whole scene to a stream that gets stored and then play the stream back, but that doesn't seem like enough for games that let you pause and move the camera around. You'd have to store the locations of everything in the scene at all points in time (no?). So for things like particles, that's a lot of data to push, which seems like a significant drag on the game's performance while playing.
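
    The usual answer in RTS/FPS engines is not to record the scene at all but to make the simulation deterministic: store the initial random seed plus the per-tick inputs, and during playback re-run the same fixed-timestep simulation, which is why even particles and glitches reproduce exactly while the camera stays free. A toy sketch of the idea:

        import random

        class Sim:
            def __init__(self, seed):
                self.rng = random.Random(seed)   # every bit of randomness goes through this RNG
                self.state = 0

            def tick(self, inputs):
                # fixed-timestep update: same seed + same inputs => identical states every run
                self.state += sum(inputs) + self.rng.randint(0, 3)

        def run(seed, all_inputs):
            sim = Sim(seed)
            for inputs in all_inputs:
                sim.tick(inputs)
            return sim.state

        inputs_per_tick = [[1], [0, 2], [3]]             # what the players/AI did on each tick
        tape = {"seed": 42, "inputs": inputs_per_tick}   # this tiny "tape" is the whole replay file
        live = run(tape["seed"], tape["inputs"])
        assert run(tape["seed"], tape["inputs"]) == live # playback is bit-identical to the live run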


  • Set the map position in Google Maps with a TimerTask

    - by Chad White
    I would like to change the position of the map in Google Maps v2 (target, zoom, bearing and so on), but I've done it in a TimerTask and it throws an IllegalStateException: "not on the main thread". What should I do? Any help?

        class Task extends TimerTask {
            @Override
            public void run() {
                CameraPosition cameraPosition = new CameraPosition.Builder()
                        .target(Zt)    // Sets the center of the map to Mountain View
                        .zoom(12)      // Sets the zoom
                        .bearing(180)  // Sets the orientation of the camera to east
                        .tilt(30)      // Sets the tilt of the camera to 30 degrees
                        .build();      // Creates a CameraPosition from the builder
                mMap.moveCamera(CameraUpdateFactory.newCameraPosition(cameraPosition));
            }
        }

        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new Task(), 0, 20000);


  • JavaScript QR Code Reader - can it be done? Or, Remote Service?

    - by Myk
    I'm doing a bit of preliminary research on an upcoming project and I have a quick question that I figure I'll throw up here while I look elsewhere, in case anyone has any experience with this. The question is simple: is it possible to read a QR code using JavaScript? Is there a remote service to which I can pass a bitmap object from a camera and do it that way? Are there currently any libraries that allow this? The project is going to be deployed to various mobile devices and we'd like to try to use Appcelerator to make it work. I know Appcelerator does expose the Camera API on its host devices, but whatever we do with it has to be able to parse QR codes. Is this something that can be done? Thanks in advance! myk


  • Automatic people counting + twittering.

    - by c2h2
    I want to develop a system that accurately counts people going through a normal 1-2 m wide door, tweets whenever someone goes in or out, and reports how many people remain inside. Now, the Twitter part is easy, but the people counting is difficult. There are some existing counting solutions, but they do not quite fit my needs. My idea/algorithm: should I mount an infrared camera on top of the door, monitor it constantly, divide the camera image into a grid, and work out who is entering and who has gone? Can you give me some suggestions and a starting point?
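
    For a starting point with an ordinary overhead camera, background subtraction plus a virtual counting line already gets surprisingly far. A rough OpenCV (4.x) Python sketch; the video source, line position, area threshold and the very naive blob matching are placeholders to be replaced by a real tracker.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("door.mp4")               # overhead camera above the door
        backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        kernel = np.ones((3, 3), np.uint8)
        line_y, inside, prev_centroids = 240, 0, []

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = cv2.morphologyEx(backsub.apply(frame), cv2.MORPH_OPEN, kernel, iterations=2)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            centroids = []
            for c in contours:
                if cv2.contourArea(c) < 1500:            # ignore noise; depends on camera height
                    continue
                x, y, w, h = cv2.boundingRect(c)
                centroids.append((x + w // 2, y + h // 2))
            # naive matching: pair each blob with a nearby blob from the previous frame
            for cx, cy in centroids:
                for px, py in prev_centroids:
                    if abs(cx - px) < 40:
                        if py < line_y <= cy:
                            inside += 1                  # crossed the line downward -> entered
                        elif py >= line_y > cy:
                            inside -= 1                  # crossed upward -> left
            prev_centroids = centroids

        print("people inside:", inside)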


  • Setting custom HTTP request headers in a URL object doesn't work

    - by Blagovest Buyukliev
    I am trying to fetch an image from an IP camera using HTTP. The camera requires HTTP basic authentication, so I have to add the corresponding request header:

        URL url = new URL("http://myipcam/snapshot.jpg");
        URLConnection uc = url.openConnection();
        uc.setRequestProperty("Authorization",
                "Basic " + new String(Base64.encode("user:pass".getBytes())));

        // outputs "null"
        System.out.println(uc.getRequestProperty("Authorization"));

    I am later passing the url object to ImageIO.read(), and, as you can guess, I am getting an HTTP 401 Unauthorized, although user and pass are correct. What am I doing wrong? I've also tried new URL("http://user:pass@myipcam/snapshot.jpg"), but that doesn't work either.

