Search Results

Search found 2495 results on 100 pages for 'camera hacks'.


  • how many fps can iPhone's UIGetScreenImage() actually do?

    - by M Katz
    Now that Apple is officially allowing UIGetScreenImage() to be used in iPhone apps, I've seen a number of blogs saying that this "opens the floodgates" for video capture on iPhones, including older models. But I've also seen blogs that say the fastest frame rate they can get with UIGetScreenImage() is like 6 FPS. Can anyone share specific frame-rate results you've gotten with UIGetScreenImage() (or other approved APIs)? Does restricting the area of the screen captured improve frame rate significantly? Also, for the wishful thinking segment of today's program, does anyone have pointers to code/library that uses UIGetScreenImage() to capture video? For instance, I'd like an API something like Capture( int fps, Rect bounds, int durationMs ) that would turn on the camera and for the given duration record a sequence of .png files at the given frame rate, copying from the given screen rect.
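    For a data point, frame rates are easy to measure directly. A minimal sketch of a capture-rate probe, assuming the usual hand-written declaration of the (undocumented) function; the method name is made up:

        CGImageRef UIGetScreenImage(void);  // no public header, declared by hand

        - (double)measuredCaptureFPS {
            const int kFrames = 30;
            NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];
            for (int i = 0; i < kFrames; i++) {
                CGImageRef screen = UIGetScreenImage();
                CGImageRelease(screen);  // the caller owns the returned image
            }
            return kFrames / ([NSDate timeIntervalSinceReferenceDate] - start);
        }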

    Read the article

  • Overwrite the Soap Envelope in Suds python

    - by chrissygormley
    Hello, I have a camera and I am trying to connect to it via suds. I have tried sending raw XML and found that the only thing stopping the suds request from working is an incorrect SOAP envelope namespace. The envelope namespace is:

        xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"

    and I want to rewrite it to:

        xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope"

    To add a namespace in Python I tried this code:

        message = Element('Element_name').addPrefix(p='SOAP-ENC', u='www.w3.org/ENC')

    But when I try to change the SOAP-ENV namespace itself, nothing is written, as it is hardcoded into the suds bindings. Is there a way to overwrite this in suds? Thanks for any help.
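    A minimal sketch of one way around this, assuming suds 0.4 or later (which added MessagePlugin); the WSDL URL here is hypothetical. The plugin rewrites the envelope namespace on the serialized message just before it is sent, so the hardcoded bindings are never touched:

        from suds.client import Client
        from suds.plugin import MessagePlugin

        class EnvelopeFixer(MessagePlugin):
            # Swap the SOAP 1.1 envelope namespace for the SOAP 1.2 one
            # on the outgoing XML, after suds has finished marshalling.
            def sending(self, context):
                context.envelope = context.envelope.replace(
                    'http://schemas.xmlsoap.org/soap/envelope/',
                    'http://www.w3.org/2003/05/soap-envelope')

        client = Client('http://camera.example/onvif/device_service?wsdl',  # hypothetical URL
                        plugins=[EnvelopeFixer()])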

    Read the article

  • MFMailComposeViewController: how to get notified when view appears?

    - by Karsten Silz
    In my app, users can take a picture with the camera or pick one from the library and email it as an attachment. I use MFMailComposeViewController for seamless email. On my iPhone 3GS, it takes about 5-7 seconds for the email view to appear with the attachment. Now I want to show a progress indicator when the user pushes the "Send" button and hide it when the email view comes up. The problem is that the MFMailComposeViewController delegate is only called back when the email sending is done. Can I get notified somehow when the email window appears on the screen?
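    One hedged workaround: when the composer is presented full screen, the presenting controller's own viewDidDisappear: fires once the mail view has covered it, which is a usable "it's on screen now" signal. A sketch, where waitingForMailComposer and progressView are invented names for a flag set just before presenting and the indicator view:

        - (void)viewDidDisappear:(BOOL)animated {
            [super viewDidDisappear:animated];
            if (self.waitingForMailComposer) {             // hypothetical flag
                [self.progressView removeFromSuperview];   // hypothetical indicator
                self.waitingForMailComposer = NO;
            }
        }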

    Read the article

  • Memory leak issue with UIImagePickerController

    - by Mustafa
    I'm getting a memory leak with the UIImagePickerController class. Here's how I'm using it:

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
        [self presentModalViewController:picker animated:YES];
        [picker release];

    To remove the picker I call [picker dismissModalViewControllerAnimated:YES]; in didFinishPickingImage and imagePickerControllerDidCancel. Instruments shows around 160 bytes leaking as a result of this call: +[UIImagePickerController _loadPhotoLibraryIfNecessary]. Apparently this issue has bothered, and keeps bothering, many people, and the usual way to avoid it is to build a singleton class dedicated to picking images from the library or capturing with the device's built-in camera (sketched below). Anyone want to add something?
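    A minimal sketch of that singleton workaround, with an invented class name; note that it doesn't remove the one-time leak, it just stops it from accumulating across repeated alloc/release cycles:

        @interface PickerStore : NSObject
        + (UIImagePickerController *)sharedPicker;
        @end

        @implementation PickerStore
        + (UIImagePickerController *)sharedPicker {
            static UIImagePickerController *picker = nil;
            if (picker == nil) {
                picker = [[UIImagePickerController alloc] init];  // created once, never released
            }
            return picker;
        }
        @end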

    Read the article

  • Cocos2d: How to do a zoom-in/zoom-out effect on a sprite image?

    - by user187532
    Hello everyone, I am developing a module where I pick an image from the photo library and put it into a sprite. I want to implement a zoom-in/zoom-out effect on the sprite image, like the pinch zoom on images in the camera's photo album. Could someone please guide me on how to implement it? From what I've seen, I have to detect two touches in ccTouchesBegan and then adjust the sprite's scale up or down based on the distance between the two fingers. Could someone please tell me: how do I read the two finger positions in ccTouchesBegan, and how do I let the user pinch to zoom the sprite image in and out? Please give me samples. I already tried some material (http://groups.google.com/group/cocos2d-iphone-discuss/browse_thread/thread/61808fd6b578e5e1?hide_quotes=no&utoken=9AdrAzkAAABFNHPPibbeOSHIuKOkxTWQ066onEraO3W2r08xbUjNmAwT6_SsyC2n0d69MF_vYn77vPb7MuI5eIWgjrXT32Kd) but it doesn't work for my requirement. Thank you.
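    A rough cocos2d sketch of the idea, assuming a layer with isTouchEnabled = YES and invented ivars sprite, initialDistance and initialScale; the sprite is scaled by how much the finger distance changes relative to where the pinch started:

        - (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            NSArray *all = [[event allTouches] allObjects];
            if ([all count] == 2) {
                CGPoint a = [self convertTouchToNodeSpace:[all objectAtIndex:0]];
                CGPoint b = [self convertTouchToNodeSpace:[all objectAtIndex:1]];
                initialDistance = ccpDistance(a, b);   // remember the starting spread
                initialScale = sprite.scale;
            }
        }

        - (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            NSArray *all = [[event allTouches] allObjects];
            if ([all count] == 2 && initialDistance > 0) {
                CGPoint a = [self convertTouchToNodeSpace:[all objectAtIndex:0]];
                CGPoint b = [self convertTouchToNodeSpace:[all objectAtIndex:1]];
                sprite.scale = initialScale * (ccpDistance(a, b) / initialDistance);
            }
        }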

    Read the article

  • Extracting ""((Adj|Noun)+|((Adj|Noun)(Noun-Prep)?)(Adj|Noun))Noun"" from Text (Justeson & Katz, 1995)

    - by ssuhan
    I would like to ask whether it is possible to extract the pattern ((Adj|Noun)+|((Adj|Noun)(Noun-Prep)?)(Adj|Noun))Noun proposed by Justeson and Katz (1995) with the R package openNLP? That is, I would like to use this linguistic filter to extract candidate noun phrases. I cannot quite understand its meaning: as far as I can tell, it matches a sequence of adjectives and/or nouns ending in a noun, optionally with a single noun-preposition pair in the middle (e.g. "degrees of freedom"). Could you do me a favor and explain it, or translate such a representation into R? Many thanks. Maybe we can start the sample code from:

        library("openNLP")
        acq <- "This paper describes a novel optical thread plug gauge (OTPG) for internal thread inspection using machine vision. The OTPG is composed of a rigid industrial endoscope, a charge-coupled device camera, and a two degree-of-freedom motion control unit. A sequence of partial wall images of an internal thread are retrieved and reconstructed into a 2D unwrapped image. Then, a digital image processing and classification procedure is used to normalize, segment, and determine the quality of the internal thread."
        acqTag <- tagPOS(acq)
        acqTagSplit <- strsplit(acqTag, " ")
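    A rough sketch of the filter in R, assuming Penn Treebank tags from tagPOS (JJ* as Adj, NN* as Noun, IN as Prep). Encoding each token as one letter lets an ordinary regular expression over the tag signature stand in for the linguistic pattern; single nouns will also match, so filter by length if two-word terms are the minimum wanted:

        toks  <- strsplit(acqTag, " ")[[1]]
        words <- sub("/[^/]*$", "", toks)          # token text
        tags  <- sub("^.*/", "", toks)             # POS tag
        code  <- ifelse(grepl("^JJ", tags), "A",
                 ifelse(grepl("^NN", tags), "N",
                 ifelse(tags == "IN", "P", "o")))  # one letter per token
        sig <- paste(code, collapse = "")
        # (A|N)*N covers the plain adjective/noun case; the optional
        # ((A|N)*NP)? prefix allows one Noun-Prep pair in the middle.
        m <- gregexpr("((A|N)*NP)?(A|N)*N", sig)[[1]]
        starts <- as.vector(m)
        lens   <- attr(m, "match.length")
        phrases <- mapply(function(s, l) paste(words[s:(s + l - 1)], collapse = " "),
                          starts[starts > 0], lens[starts > 0])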

    Read the article

  • Is there a performance advantage in using a 64bit version of openCV+Emgu instead of 32bit?

    - by Jelly Amma
    Hello, I am developing an application that processes images captured in real time by a Point Grey camera (http://www.ptgrey.com/). The Point Grey SDK is a .NET wrapper and can be either 32-bit or 64-bit. To process the captured images, I'm using a wrapper for OpenCV called Emgu CV (http://www.emgu.com/) that comes in both 32-bit and 64-bit flavors as well. Now, being on Vista64, I went for the 64-bit versions of FlyCapture (Point Grey's SDK) and Emgu CV (which includes OpenCV in its install), hoping to maximize performance. Recently I've been wanting to call my FlyCapture+Emgu DLL code from XNA, which unfortunately only exists in 32-bit, and I realize that I may have to reinstall all those components in 32-bit, as I don't really want to go through IPC, remoting, etc. Apart from the obvious limit to memory space inherent in 32-bit, is there also a performance loss I should be expecting? How dramatic would it be, and why? Thanks in advance for any advice or explanation.

    Read the article

  • image focus calculation

    - by Oren Mazor
    Hi folks, I'm trying to develop an image-focusing algorithm for some test automation work. I've chosen to use AForge.NET, since it seems like a nice, mature, .NET-friendly system. Unfortunately, I can't seem to find information on building autofocus algorithms from scratch, so I've given it my best try:

    1. Take an image.
    2. Apply a Sobel edge-detection filter, which generates a greyscale edge outline.
    3. Generate a histogram and save the standard deviation.
    4. Move the camera one step closer to the subject and take another picture.
    5. If the standard deviation is smaller than the previous one, we're getting more in focus; otherwise, we've passed the optimal distance for taking pictures.

    Is there a better way?

    Update: there is a HUGE flaw in this, by the way. As I get past the optimal focus point, my "image in focus" value continues growing. You'd expect a roughly parabolic curve of focus value against distance, but in reality you get something that's more logarithmic.
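    For reference, a sketch of steps 2-3 as a single AForge.NET sharpness score, assuming 24bpp input frames; a sharper image produces stronger edges and hence a wider edge-intensity histogram:

        using AForge.Imaging;
        using AForge.Imaging.Filters;

        double FocusScore(System.Drawing.Bitmap frame)
        {
            var gray  = Grayscale.CommonAlgorithms.BT709.Apply(frame); // Sobel needs 8bpp
            var edges = new SobelEdgeDetector().Apply(gray);
            var stats = new ImageStatistics(edges);
            return stats.Gray.StdDev;  // higher = more edge contrast = more in focus
        }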

    Read the article

  • How to round CGFloat

    - by Johannes Jensen
    I made this method:

        + (CGFloat)round:(CGFloat)f {
            int a = f;
            CGFloat b = a;
            return b;
        }

    It works as expected, except that it truncates toward zero rather than rounding to the nearest integer, and it does the same for negative numbers. This was just a quick method I made; it isn't very important that it rounds correctly. I just made it to round the camera's x and y values for my game. Is this method okay? Is it fast? Or is there a better solution?
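    For comparison, the C math library already provides round-to-nearest that handles negative values, which is likely at least as fast as the cast:

        #include <math.h>

        CGFloat rounded = roundf(f);  // use round(f) where CGFloat is double (64-bit targets)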

    Read the article

  • How to make my iPhone app compatible with iOS 4?

    - by Davide
    Hello, my iPhone OS 3.1-based application is not working on the iOS 4 GM: the camera view doesn't show in full screen, compass information isn't detected correctly, the UIWebViews don't respond to touches (they don't scroll), and so on. It's completely broken! Now my question is: how can I develop an update using the latest Xcode with support for iOS 4? The latest iOS 4 Xcode (3.2.3) doesn't provide any way to build for iPhone OS 3.x ("Base SDK missing"). On the other side, Xcode 3.2.2 won't let me debug on an iOS 4 device, so I can't test the fixes.

    Read the article

  • (Android SDK 2.1) Getting errors when I use setAudioSource and setVideoSource

    - by Rainfer
    I get the following errors when I call setAudioSource and setVideoSource:

        03-16 10:26:25.302: ERROR/audio_input(52): unsupported parameter: x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value
        03-16 10:26:25.302: ERROR/audio_input(52): VerifyAndSetParameter failed
        03-16 10:26:25.302: ERROR/CameraInput(52): Unsupported parameter(x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value)
        03-16 10:26:25.302: ERROR/CameraInput(52): VerifiyAndSetParameter failed on parameter #0

    The errors happen on both the emulator and the device (I am using a Google Nexus One). I have already set the CAMERA and RECORD_AUDIO permissions. I have spent many days on this but still cannot figure out the cause of this runtime error.
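    For what it's worth, a sketch of the canonical MediaRecorder call order on Android 2.1; the two source calls must come first, and the output format must be set before the encoders. (Also note the 2.1 emulator has no real camera backend, so some PV-layer complaints can appear there regardless.) The output path is illustrative:

        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);
        recorder.setOutputFile("/sdcard/test.3gp");
        recorder.prepare();
        recorder.start();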

    Read the article

  • Adobe Flash Player security pop-up question

    - by kapildalwani
    I am building an audio recording tool using Flash and Wowza. I don't want to start recording until the user clicks the Allow button in the security pop-up described here: http://www.macromedia.com/support/documentation/en/flashplayer/help/help05.html. For audio, I don't get this prompt until I attach the stream; for video, I can get it when I attach the camera to a Video object. I want to avoid making a connection until the user clicks Allow, but for audio the prompt doesn't appear until I make the connection request. I am able to display the http://www.macromedia.com/support/documentation/en/flashplayer/help/help09.html pop-up using SecurityManager. Is there a way I can trigger the microphone pop-up (http://www.macromedia.com/support/documentation/en/flashplayer/help/help05.html) from my code?

    Read the article

  • Marker Recognition on Android (recognising Rubik's Cubes)

    - by greenie
    Hi everybody. I'm developing an augmented reality application for Android that uses the phone's camera to recognise the arrangement of the coloured squares on each face of a Rubik's Cube. One thing that I am unsure about is how exactly I would go about detecting and recognising the coloured squares on each face of the cube. If you look at a Rubik's Cube, you can see that each square is one of six possible colours with a thin black border. This led me to think that it should be relatively simple to detect a square, possibly using an existing marker-detection API. My question is really: has anybody here had any experience with image recognition on Android? Ideally I'd like to be able to implement an existing API, but it would be an interesting project to do from scratch if somebody could point me in the right direction to get started. Many thanks in advance.

    Read the article

  • How to crop an image on iPhone?

    - by aman-gupta
    Hi, in my application I am using the following code to crop the captured image:

        - (void)imagePickerController:(UIImagePickerController *)picker
         didFinishPickingMediaWithInfo:(NSDictionary *)info
        {
        #ifdef _DEBUG
            NSLog(@"frmSkinImage-imagePickerController-Start");
        #endif
            imageView.image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];

            UIImage *image = imageView.image;
            CGRect cropRect = CGRectMake(100, 100, 125, 128);
            CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
            [imageView setImage:[UIImage imageWithCGImage:imageRef]];
            CGImageRelease(imageRef);

            //imgglobal = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
            // for saving image to photo album
            //UIImageWriteToSavedPhotosAlbum(imageView.image, self, @selector(image:didFinishSavingWithError:contextInfo:), self);

            [picker dismissModalViewControllerAnimated:YES];
        #ifdef _DEBUG
            NSLog(@"frmSkinImage-imagePickerController-End");
        #endif
        }

    But my problem is that when I use the camera to take the photo, the cropped image comes out rotated 90 degrees to the right, whereas when I use the photo library it works perfectly. Can you look over the code above and tell me where I'm going wrong? Please help me out, it's urgent. Thanks in advance.
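    A hedged guess at the cause: CGImageCreateWithImageInRect works on the raw bitmap and ignores the UIImage's imageOrientation, and camera shots are usually stored rotated with an orientation flag, while library images are often already upright. On iOS 4 and later, one sketch of a fix is to reapply the orientation when wrapping the cropped CGImage:

        UIImage *cropped = [UIImage imageWithCGImage:imageRef
                                               scale:1.0
                                         orientation:image.imageOrientation];
        [imageView setImage:cropped];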

    Read the article

  • Metric 3d reconstruction

    - by srand
    I'm trying to reconstruct 3D points from 2D image correspondences. My camera is calibrated. The test images are of a checkered cube, and the correspondences are hand-picked. Radial distortion is removed. After triangulation, however, the reconstruction seems to be wrong. The X and Y values seem to be correct, but the Z values are all about the same and do not vary along the cube. The 3D points look as if they were flattened along the Z-axis. What is going wrong in the Z values? Do the points need to be normalized or converted from image coordinates at any point, say before the fundamental matrix is computed? (If this is too vague, I can explain my general process or elaborate on parts of it.)

    Read the article

  • How to investigate whether OpenCL is possible for an algorithm

    - by Marnix
    I have a heavy-duty algorithm in C# that takes two large bitmaps of about 10000x5000 pixels and performs photo and ray-collision operations on a 3D model to map the photos onto the model. I would like to know if it is possible to convert such an algorithm to OpenCL to parallelize its operations. But before asking you to go into the details of the algorithm, I would like to know how I can investigate whether my algorithm is convertible to OpenCL at all. I am not experienced in OpenCL, and I would like to know if it is worth getting into it and learning how it works. Are there things I have to look for that will definitely not work on the graphics card (for-loops, recursion)? Update: my algorithm goes something like:

        foreach photo
            split the photo into 64x64 blocks
            foreach block
                cast a ray from the camera to the 3D model
                foreach triangle in 3D model
                    perform ray check
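    As a rough feel for the mapping, a hedged OpenCL C sketch of the inner work: the two outer foreach loops become the NDRange (one work-item per block), while the triangle loop stays an ordinary loop inside the kernel. Plain for-loops are fine in OpenCL; it is recursion (and here, .NET Bitmap access) that has no direct equivalent. All names are illustrative:

        __kernel void raycast_blocks(__global const float4 *triangles,
                                     const int triangleCount,
                                     __global float *hitDistances)
        {
            int block = get_global_id(0);   // one work-item per 64x64 block
            float best = INFINITY;
            for (int t = 0; t < triangleCount; t++) {
                // the ray/triangle intersection test would go here,
                // keeping the nearest hit for this block's ray in 'best'
            }
            hitDistances[block] = best;
        }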

    Read the article

  • Displaying webcam feed using opencv and python

    - by Mitch
    Hi, I've been trying to create a simple program in Python which uses OpenCV to get a video feed from my webcam and display it on the screen. I know I'm partly there, because the window is created and the light on my webcam flicks on, but it just doesn't seem to show anything in the window. Hopefully someone can explain what I'm doing wrong:

        import cv

        cv.NamedWindow("w1", cv.CV_WINDOW_AUTOSIZE)
        capture = cv.CaptureFromCAM(0)

        def repeat():
            frame = cv.QueryFrame(capture)
            cv.ShowImage("w1", frame)

        while True:
            repeat()

    On an unrelated note, I have noticed that my webcam sometimes changes its index number in cv.CaptureFromCAM, and sometimes I need to pass in 0, 1 or 2 even though I only have one camera connected and I haven't unplugged it (I know because the light doesn't come on unless I change the index). Is there a way to get Python to determine the correct index? Thanks, Mitch
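    A probable fix, as a sketch: HighGUI only repaints its windows while WaitKey is pumping the event loop, so ShowImage alone in a tight loop never draws. Passing -1 to CaptureFromCAM also asks OpenCV to pick the first available camera, which sidesteps the index guessing:

        import cv

        cv.NamedWindow("w1", cv.CV_WINDOW_AUTOSIZE)
        capture = cv.CaptureFromCAM(-1)   # -1 = first available camera

        while True:
            frame = cv.QueryFrame(capture)
            cv.ShowImage("w1", frame)
            if cv.WaitKey(10) == 27:      # give HighGUI time to draw; Esc quits
                break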

    Read the article

  • Can't find how to import a Collada model as one object, or how to merge objects

    - by Aaron
    I need to write a script in Blender that creates some birds which fly around some obstacles. The problem is that I need to import a pretty large Collada model (a building) which consists of multiple objects. The import works fine, but the building is not treated as one object. I need to resize and move this building, but I can only get at the last imported object (which is a camera)... Does anyone know how to merge this building into one object, group, or variable, so I can resize and move it correctly? Part of the code I used:

        bpy.ops.wm.collada_import(filepath="C:\\Users\\me\\building.dae")
        building = bpy.context.object
        building.scale = (100, 100, 100)
        building.name = "building"
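    A sketch of one way around it, assuming the 2.5x-era Python API: the Collada importer leaves everything it created selected, so you can act on the whole selection rather than on bpy.context.object (which is only the last object):

        import bpy

        bpy.ops.wm.collada_import(filepath="C:\\Users\\me\\building.dae")

        # Scale every imported object rather than just the active one.
        for obj in bpy.context.selected_objects:
            obj.scale = (100, 100, 100)

        # Or merge the selection into a single mesh object; deselect
        # non-mesh objects (like that camera) before joining.
        # bpy.context.scene.objects.active = bpy.context.selected_objects[0]
        # bpy.ops.object.join()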

    Read the article

  • Android: video camera, limit length of videos taken

    - by AP257
    I'm working in Android and starting the video camera activity using ACTION_VIDEO_CAPTURE. Is there any way I can limit the length (in time) of the videos the user can take? I think this is possible if you use MediaRecorder, but I don't really fancy doing that, since it's so much more complicated than the simple ACTION_VIDEO_CAPTURE. Current code:

        Intent videoCaptureIntent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
        startActivityForResult(videoCaptureIntent, 1);

    If it's not possible, does anyone know whether I could set a timer (TimerTask?) in Java and then show a Toast message after a certain length of time warning the user that they need to stop filming? (I'm a Java newbie, so I don't know if this is exactly what I need.)
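    A hedged sketch: from Android 2.2 (API level 8) the capture intent accepts a duration-limit extra, in seconds; earlier versions and some third-party camera apps silently ignore it, so the TimerTask fallback may still be needed there:

        Intent videoCaptureIntent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
        videoCaptureIntent.putExtra(MediaStore.EXTRA_DURATION_LIMIT, 30);  // cap at 30 seconds
        startActivityForResult(videoCaptureIntent, 1);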

    Read the article

  • 500px.com Ranking Algorithm

    - by alex
    I was recently wondering how http://500px.com calculates their "Pulse" rating. The "Pulse" is a score from 1..100 based on the popularity of a photo. I think it might use some of the following criteria:

    - Number of likes
    - Number of "favorites"
    - Number of comments
    - Total views
    - maybe the time since the photo was uploaded
    - maybe some other non-obvious criteria, like the user's follower count, user rank, camera model or similar

    How would I achieve some sort of algorithm like this? Any advice on how to implement an algorithm with these criteria (and maybe some code) would be appreciated too.
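    500px has never published the real formula, so here is only a minimal sketch of the shape such a score usually takes: a weighted sum of the signals, decayed by age, squashed into the 1..100 range. Every weight and constant below is invented:

        import math, time

        def pulse(likes, favorites, comments, views, uploaded_at,
                  w=(4.0, 3.0, 2.0, 0.1), half_life_hours=24.0):
            raw = w[0]*likes + w[1]*favorites + w[2]*comments + w[3]*views
            age_h = (time.time() - uploaded_at) / 3600.0
            decayed = raw * 0.5 ** (age_h / half_life_hours)   # halve per half-life
            return 1 + 99 * (1 - math.exp(-decayed / 50.0))    # squash to 1..100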

    Read the article

  • OpenGL billboard interpolation issue

    - by PeanutPower
    I have a billboard quad with a texture mapped onto it; it's basically some text with transparency, with a stroked border around the glyphs in the actual texture. The billboard floats forwards and backwards from the camera's perspective, and as it moves away (and appears smaller) there is a flickering effect around the edges of the text where the stroke is. I think this is because interpolation is needed: the image, which is normally X pixels wide, is now shown at only a fraction of X, and some pixels need to be merged together. I guess it's doing nearest-neighbour sampling or something? Can anyone point me in the right direction for the OpenGL settings that control this? I'm guessing there is some way of preventing this effect by adjusting how the texture is handled.
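    A likely remedy, sketched: this kind of shimmer is the classic symptom of minifying a texture without mipmaps, and trilinear mipmap filtering usually smooths it out. Here width, height and pixels stand for the texture's own data:

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                          GL_RGBA, GL_UNSIGNED_BYTE, pixels);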

    Read the article

  • Videoconference using Flash and SIP

    - by Júlio Santos
    The front end will be Flash, to run in a browser and have access to the camera. I must use SIP to control the sessions. How could I do this? Would a Red5 server and a MjSip server do the trick? That is, I'd use MjSip to set up the session and notify users about calls, and Red5 to stream the video and audio. Any suggestions? Note: only 1-on-1 conferencing is required.

    Read the article

  • gluLookAt doesn't work

    - by Tyzak
    Hi, I'm programming with OpenGL and I want to change the camera view:

        ...
        void RenderScene()  // drawing function
        {
            glClearColor(1.0, 0.5, 0.0, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();

            // first shape:
            glBegin(GL_POLYGON);               // polygon
            glColor3f(1.0f, 0.0f, 0.0f);       // red
            glVertex3f(-0.5, -0.5, -0.5);      // bottom left; 3 = 3 coords, f = float
            glColor3f(0.0f, 0.0f, 1.0f);       // blue
            glVertex3f(0.5, -0.5, -0.5);       // bottom right
            glVertex3f(0.5, 0.5, -0.5);        // top right
            glVertex3f(-0.5, 0.5, -0.5);       // top left
            glEnd();

            Wuerfel(0.7);  // creates a cube with edge length 0.7

            gluLookAt(0., 0.3, 1.0, 0., 0.7, 0., 0., 1., 0.);

            glFlush();  // flush the buffer
        }
        ...

    When I change the parameters of gluLookAt, nothing happens. What am I doing wrong? Thanks.
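    A hedged sketch of the usual fix: the view transform has to be on the modelview stack before the geometry is issued, so a gluLookAt placed after the drawing calls affects nothing in that frame. Reordering like this should make the parameters take effect:

        glLoadIdentity();
        gluLookAt(0.0, 0.3, 1.0,   /* eye    */
                  0.0, 0.7, 0.0,   /* center */
                  0.0, 1.0, 0.0);  /* up     */
        /* ... then draw the polygon and the cube ... */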

    Read the article

  • Dice face value recognition

    - by Jakob Gade
    I'm trying to build a simple application that will recognize the values of two 6-sided dice. I'm looking for some general pointers, or maybe even an open-source project. The two dice will be black and white, with white and black pips respectively. Their distance to the camera will always be the same, but their position on the playing surface will be random. (Not the best example; the real surface will be a different color and the shadows will be gone.) I have no prior experience with developing this kind of recognition software, but I would assume the trick is to first isolate the faces by searching for a square profile with a dominating white or black color (the rest of the image, i.e. the table/playing surface, will be in distinctly different colors), and then isolate the pips for the count. Shadows will be eliminated by top-down lighting. I'm hoping the described scenario is so simple (read: common) that it may even be used as an "introductory exercise" for developers working on OCR technologies or similar computer-vision challenges.
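    A rough sketch of the pip-counting half of that plan in OpenCV's Python bindings, assuming the die face has already been cropped out: threshold for the pip color, find blobs, keep the ones of plausible pip size. (The two-value findContours signature shown is the OpenCV 2.4/4.x form; the area bounds are made up and would need tuning.)

        import cv2

        def count_pips(face_bgr, white_die=True):
            gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
            # Dark pips on a white die; invert the threshold for the black die.
            mode = cv2.THRESH_BINARY_INV if white_die else cv2.THRESH_BINARY
            _, mask = cv2.threshold(gray, 128, 255, mode)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            pips = [c for c in contours if 30 < cv2.contourArea(c) < 500]
            return len(pips)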

    Read the article

  • OpenGL Diffuse Lighting Shader Bug?

    - by anon
    The Orange Book, section 16.2, lists implementing diffuse lighting as:

        void main()
        {
            vec3 N = normalize(gl_NormalMatrix * gl_Normal);
            vec4 V = gl_ModelViewMatrix * gl_Vertex;
            vec3 L = normalize(lightPos - V.xyz);
            gl_FrontColor = gl_Color * vec4(max(0.0, dot(N, L)));
        }

    However, when I run this, the lighting changes when I move my camera. On the other hand, when I change

        vec3 N = normalize(gl_NormalMatrix * gl_Normal);

    to

        vec3 N = normalize(gl_Normal);

    I get diffuse lighting that works like the fixed pipeline. What is this gl_NormalMatrix, what did removing it do... and is this a bug in the Orange Book... or am I setting up my OpenGL code improperly?
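    One hedged guess at the symptom: N and V here are in eye space, so if the lightPos uniform is uploaded in world coordinates the dot product mixes spaces and shifts as the camera moves (the fixed pipeline avoids this by storing glLight positions already transformed into eye space). Assuming the model transform is identity, a sketch of the correction:

        // Bring the world-space light into eye space before comparing.
        vec3 lightEye = (gl_ModelViewMatrix * vec4(lightPos, 1.0)).xyz;
        vec3 L = normalize(lightEye - V.xyz);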

    Read the article
