Search Results

Search found 2146 results on 86 pages for 'camera calibration'.


  • UIImageWriteToSavedPhotosAlbum Problem

    - by Momeks
    Hi, I am trying to save a photo to the photo album after taking it with the camera via a button. Here is my code:

        -(IBAction)takePic {
            ipc = [[UIImagePickerController alloc] init];
            ipc.delegate = self;
            ipc.sourceType = UIImagePickerControllerSourceTypeCamera;
            [self presentModalViewController:ipc animated:YES];
        }

        - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
            img.image = [info objectForKey:UIImagePickerControllerOriginalImage];
            [[picker parentViewController] dismissModalViewControllerAnimated:YES];
            [picker release];
        }

    But I don't know why nothing gets saved!
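
    A likely cause: nothing in the code above ever writes the picked image to the Saved Photos album; setting it on an image view is not enough. A minimal sketch of the missing step, assuming img is the UIImageView from the question:

        - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
            UIImage *photo = [info objectForKey:UIImagePickerControllerOriginalImage];
            img.image = photo;
            // This call is what actually copies the image into the Saved Photos album.
            UIImageWriteToSavedPhotosAlbum(photo, nil, NULL, NULL);
            [[picker parentViewController] dismissModalViewControllerAnimated:YES];
            [picker release];
        }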


  • VBScript / webcam: using the Flash API to save a streamed frame as an image

    - by remi
    Hi. Based on a PHP app, we've tried to develop our own web app in VB.NET. Everything is working well: the camera starts, etc. The problem we're left with is the following: the Flash API streams the video content to a web page, and that page saves the photo as a JPEG. The PHP code we're trying to emulate is:

        $str = file_get_contents("php://input");
        file_put_contents("/tmp/upload.jpg", pack("H*", $str));

    After intensive googling, the best we can come up with is:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs)
            Dim sImagePath As String = Server.MapPath("/registration/") & "test.jpg"
            Dim data As Byte() = Request.BinaryRead(Request.TotalBytes)
            Dim Ret As Array
            Dim imgbmp As New System.Drawing.Bitmap("test.jpg")
            Dim ms As MemoryStream = New MemoryStream(data)
            imgbmp.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg)
            Ret = ms.ToArray()
            Dim fs As New FileStream(sImagePath, FileMode.Create, FileAccess.Write)
            fs.Write(Ret, 0, Ret.Length)
            fs.Flush()
            fs.Close()
        End Sub

    This is not working. Does anybody have any suggestions?
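
    One observation: the PHP version hex-decodes the POST body (pack("H*", ...)), while the VB code above writes out an unrelated Bitmap loaded from disk instead of the posted data. A sketch of a closer translation, assuming the Flash client posts the JPEG as a hex string exactly as in the PHP version:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs)
            ' Read the raw POST body (a hex-encoded JPEG, as in the PHP version).
            Dim raw As Byte() = Request.BinaryRead(Request.TotalBytes)
            Dim hex As String = System.Text.Encoding.ASCII.GetString(raw)

            ' Decode the hex string into bytes - the equivalent of PHP's pack("H*", $str).
            Dim bytes(hex.Length \ 2 - 1) As Byte
            For i As Integer = 0 To bytes.Length - 1
                bytes(i) = Convert.ToByte(hex.Substring(i * 2, 2), 16)
            Next

            ' Write the decoded JPEG straight to disk; no Bitmap round-trip needed.
            System.IO.File.WriteAllBytes(Server.MapPath("/registration/") & "test.jpg", bytes)
        End Sub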


  • Modern, Non-trivial, Pygame Tutorials?

    - by Gregg Lind
    What are some 'good', non-trivial Pygame tutorials? I realize 'good' is relative. As an example, a good one (to me) is the one that describes how to use pygame.camera: it's recent, it uses a modern Pygame (1.9), and it's non-trivial in that it shows how to use the module for a real application. I'd like to find others. A lot of the ones on the Pygame site are from the 1.3 era or earlier! Info on related projects, like Gloss, is welcome as well. (If your answer is "read the source of some Pygame games", please link to the source of particular ones and note what is good about them.)
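
    For reference, the pygame.camera module mentioned above can be exercised in a few lines; a minimal sketch, assuming a V4L2 webcam that list_cameras() can find (device availability is platform-dependent):

        import pygame
        import pygame.camera

        pygame.init()
        pygame.camera.init()

        # Open the first available camera at 640x480 (size is an assumption).
        cam = pygame.camera.Camera(pygame.camera.list_cameras()[0], (640, 480))
        cam.start()
        frame = cam.get_image()          # returns a pygame.Surface
        pygame.image.save(frame, "snapshot.jpg")
        cam.stop()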


  • RTSP streaming and save into mp4 file using VLC

    - by Vivek Navadia
    Hello all. Say I have an RTSP URL (rtsp://192.168.0.17/mpeg4). A live camera is set up on a machine that relays live video. I am streaming it with VLC player and saving it to an mp4 file somewhere (e.g. c:\temp.mp4). Now I open another VLC instance and open this file (c:\temp.mp4), but because the file is in use, still being written with the live stream, it will not play. If I stop the streaming and then play temp.mp4, it plays the saved video. My requirement is that VLC should keep streaming and saving into temp.mp4 continuously, and at the same time the file should be playable in any standard player. Is there any VLC option that allows doing both things simultaneously? Thanks, Vivek
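
    For reference, VLC's stream-output chain can at least display and record at the same time; a sketch of the command line, assuming the URL and path above:

        vlc rtsp://192.168.0.17/mpeg4 --sout "#duplicate{dst=display,dst=standard{access=file,mux=mp4,dst=c:\temp.mp4}}"

    Whether another player can open the growing file is a separate issue: MP4 normally only becomes playable when the muxer finalizes the index on close, so using mux=ts (an MPEG-TS file) instead is usually friendlier for play-while-recording.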


  • How do I use .NET to find an orange ball in an image?

    - by JohnS
    I'm getting images from a C328R camera attached to a small Arduino robot. I want the robot to drive towards orange ping-pong balls and pick them up. I'm using the C# code supplied by funkotron76 at http://www.codeproject.com/KB/recipes/C328R.aspx. Is there a library I can use to do this, or do I need to iterate over every pixel in the image looking for orange? If so, what kind of tolerance would I need to compensate for various lighting conditions? I could probably test to figure out these numbers, but I'm hoping someone out there knows the answers.
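
    For scale, a brute-force scan is only a few lines with System.Drawing; a rough sketch, where the hue/saturation thresholds are assumptions that would need tuning for your lighting (GetPixel is slow, so LockBits would be the next step):

        using System.Drawing;

        // Count roughly-orange pixels and return their average position, if enough are found.
        static PointF? FindOrangeBall(Bitmap bmp)
        {
            long sumX = 0, sumY = 0, count = 0;
            for (int y = 0; y < bmp.Height; y++)
                for (int x = 0; x < bmp.Width; x++)
                {
                    Color c = bmp.GetPixel(x, y);
                    // Hue around 20-40 degrees reads as orange; thresholds are guesses to tune.
                    if (c.GetHue() > 20f && c.GetHue() < 40f &&
                        c.GetSaturation() > 0.5f && c.GetBrightness() > 0.3f)
                    { sumX += x; sumY += y; count++; }
                }
            return count > 50 ? new PointF(sumX / (float)count, sumY / (float)count)
                              : (PointF?)null;
        }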


  • When is it best to implement an I2C driver module in Linux?

    - by stefangachter
    I am currently dealing with two devices connected to the I2C bus within an embedded system running Linux. I am using an existing driver for the first device, a camera. For the second device, I have successfully implemented a userspace program with which I can communicate with it. So far, both devices seem to coexist happily. However, almost all I2C devices have their own driver module, so I am wondering what the advantages of a driver module are. I had a look at the following thread (http://stackoverflow.com/questions/149032/when-should-i-write-a-linux-kernel-module), but without conclusion. Thus, what would be the advantage of writing an I2C driver module over a userspace implementation? Regards, Stefan
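
    For context, the userspace approach described above typically amounts to a few i2c-dev calls, while a kernel module mainly buys integration with kernel subsystems (e.g. V4L2 for a camera), interrupt handling, and arbitration with other in-kernel users of the bus. A minimal userspace sketch, where the bus number and device address are assumptions:

        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/i2c-dev.h>

        int main(void)
        {
            int fd = open("/dev/i2c-0", O_RDWR);  /* bus number is an assumption */
            ioctl(fd, I2C_SLAVE, 0x42);           /* device address is an assumption */

            unsigned char reg = 0x00;
            unsigned char val;
            write(fd, &reg, 1);                   /* select register 0 */
            read(fd, &val, 1);                    /* read it back */

            close(fd);
            return 0;
        }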


  • Vibrations when exploding/repacking movie

    - by Stefano Borini
    Please bear with me; I know what I'm doing can sound strange, but I can guarantee there's a very good reason for it. I took a movie with my camera as AVI. I imported the movie into iMovie and then exported the individual frames as PNG. Then I repacked these frames into a mov using the following code:

        movie, error = QTMovie.alloc().initToWritableFile_error_(out_path, None)
        mt = QTMakeTime(v, scale)
        attrib = {QTAddImageCodecType: "jpeg"}
        for path in png_paths:
            image = NSImage.alloc().initWithContentsOfFile_(path)
            movie.addImage_forDuration_withAttributes_(image, mt, attrib)
        movie.updateMovieFile()

    The resulting mov works, but the frames look "nervous" and shaky when compared to the original AVI, which appears smoother. The two files are approximately the same size, and both the export and the repacking were done at 30 fps. The frames also appear to be aligned, so it's not due to an accidental shift. My question is: knowing the file formats and the process I performed, what is the probable cause of this result, and how can I fix it?


  • Starting subactivity for the second time causes java.lang.OutOfMemoryError

    - by Zacherl
    Hi there, I am developing a simple app which does a bit of image processing. It's divided into two activities: the main one with some display elements, and a second one used to capture images with the phone's camera. To describe my problem: I start the app, capture an image (by starting a new Intent with the sub-activity), and all data is displayed correctly. If I capture another image after this, I run into

        java.lang.OutOfMemoryError - bitmap size exceeds VM budget

    I don't store the captured bitmap; in the second activity I just extract some data from it and pass that to the main activity, finishing (finish()) the sub-activity afterwards. I really don't know what I can do about it. Thanks in advance! Greetings, Zacherl. PS: This is my first approach to Android, so I apologize for any stupid beginner errors; if someone needs further information, I'd be happy to provide it.
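
    Two things that commonly avoid this error are explicitly freeing the previous bitmap's native memory and decoding a subsampled copy rather than the full camera resolution. A sketch, where lastBitmap and imagePath are hypothetical names:

        // Free the previous bitmap's native memory as soon as it is no longer drawn.
        if (lastBitmap != null) {
            lastBitmap.recycle();
            lastBitmap = null;
        }

        // Decode at 1/4 resolution - usually plenty for extracting data on-screen.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 4;
        Bitmap smaller = BitmapFactory.decodeFile(imagePath, opts);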


  • Super Cam iPhone app: how do they make it possible?

    - by Silent
    There is an iPhone app called SuperCam, available free through the App Store. This app lets you connect to a webcam or DV cam that is connected to the internet: you set up the IP address, enter the data in the app, and it connects to your online camera. The thing is, they stream the video and it looks like they embedded it in a UIView or web view; at the bottom there are buttons to choose among all the cameras you have set up. So this is different from other video-streaming apps, because it does not play the video in full-screen mode (the MPMoviePlayer API). Are there any tutorials about this, or some way to reverse-engineer it?


  • How to utilize network for p2p file sharing on Android Platform?

    - by CSharperWithJava
    I'm working on some apps for the Android platform, and I have two problems that I'm not quite sure how to approach; both are closely related. 1. How can I send a relatively small data file from one Android device to another (preferably over the internet, or directly through the wireless network)? 2. Is it possible to create a temporary p2p live data stream from one Android device to another? An example application would be streaming low-res video from phone A's camera to phone B, or audio. I would much appreciate being pointed in the right direction on either issue (file transfer or real-time data transfer).
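
    When both devices are on the same network, a plain TCP socket is the simplest starting point for the file-transfer half; a minimal sender sketch, where the IP, port, and file path are assumptions, to be run off the UI thread:

        // Runs on a background thread; receiver IP, port, and path are assumptions.
        void sendFile() throws IOException {
            Socket socket = new Socket("192.168.1.42", 8888);
            OutputStream out = socket.getOutputStream();
            FileInputStream in = new FileInputStream("/sdcard/data.bin");

            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);   // stream the file in 4 KB chunks
            }
            in.close();
            out.close();
            socket.close();
        }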


  • How to best deal with photos passed to IFilter?

    - by sharptooth
    I'm implementing an IFilter for indexing image formats. One problem is photos: many users have tons of photos, photos are huge, and loading them and searching them for text is time-consuming. Yes, sometimes people use cameras instead of scanners to digitize documents, but the potential problems IMO far outweigh the chance of encountering a document digitized with a photo camera. So my implementation will not extract text from photos at all. What should the IFilter do once it detects that a given file is a photo: indicate an error, or return empty text?


  • PLCameraController is not showing in view controller

    - by sujyanarayan
    Hi, I've declared a PLCameraController instance in my AppDelegate class as:

        self.cameraController = [PLCameraController performSelector:@selector(sharedInstance)];
        [cameraController setDelegate:self];

    And I'm accessing it in one of my view controller classes as:

        del = [[UIApplication sharedApplication] delegate];
        UIView *previewView = [del.cameraController performSelector:@selector(previewView)];
        previewView.frame = CGRectMake(0, 0, 320, 480);
        self.view = previewView;
        [del.cameraController performSelector:@selector(startPreview)];
        [del.cameraController performSelector:@selector(setCameraMode:) withObject:(NSNumber *)1];

    where "del" is an instance of my AppDelegate class. But I can see only a black background in my view controller's view on the iPhone device. Also, if I remove "self" from the AppDelegate code for the camera controller, it also shows blank. How can I get the camera in my view controller? I'm struggling quite a bit with this. Thanks in advance.


  • iPhone: Chained views

    - by Michael
    I want to have dynamically created views and be able to scroll (change views, as in the camera roll), either from my program or by the user. The views contain only simple text. Each view replaces the previous one, so they are chained. Another example is the application screenshots in an App Store app's details page. I don't know which classes to look at to get started, so if anyone can give me an idea of how this could be designed, I would appreciate it.
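
    The usual building block for this is a paging UIScrollView; a minimal sketch, assuming three text pages on a 320-point-wide screen:

        UIScrollView *scroller = [[UIScrollView alloc] initWithFrame:self.view.bounds];
        scroller.pagingEnabled = YES;   // snap one page at a time, like the camera roll
        scroller.contentSize = CGSizeMake(320 * 3, scroller.bounds.size.height);

        for (int i = 0; i < 3; i++) {
            UILabel *page = [[UILabel alloc] initWithFrame:
                CGRectMake(320 * i, 0, 320, scroller.bounds.size.height)];
            page.text = [NSString stringWithFormat:@"Page %d", i + 1];
            page.textAlignment = UITextAlignmentCenter;
            [scroller addSubview:page];
            [page release];
        }
        [self.view addSubview:scroller];
        [scroller release];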


  • extracting a quadrilateral image to a rectangle

    - by Will
    In the image below, the sign on the side of the van is not face-on to the camera. I want to calculate, as best I can with the pixels I have, what it would look like face-on. I imagine this is some kind of loop through the x and y axes, doing a Bresenham line on both dimensions at once, with some kind of mixing when pixels in the source image overlap; some sub-pixel mixing of some sort? What approaches are there, and how do you mix the pixels? Is there a standard approach for this?
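
    The standard name for this is a perspective (homography) warp, and libraries do the "sub-pixel mixing" via interpolation. A sketch with OpenCV in Python, where the corner coordinates are assumed values you would read off the photo:

        import cv2
        import numpy as np

        img = cv2.imread("van.jpg")

        # Four corners of the sign in the photo (clockwise from top-left) - assumed values.
        src = np.float32([[210, 120], [480, 150], [470, 300], [205, 260]])
        # Where those corners should land in the face-on output.
        dst = np.float32([[0, 0], [300, 0], [300, 150], [0, 150]])

        H = cv2.getPerspectiveTransform(src, dst)
        # Bilinear interpolation handles the sub-pixel mixing.
        flat = cv2.warpPerspective(img, H, (300, 150), flags=cv2.INTER_LINEAR)
        cv2.imwrite("sign_flat.jpg", flat)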


  • OpenCV Qt cvNamedWindow IplImage not working

    - by Shahzaib
    I have a problem displaying a cam feed on a QLabel using OpenCV. Everything is working fine except one thing: I have to call cvNamedWindow() for the program to work properly. It displays the webcam on the QLabel with no problem, but if I don't call the cvNamedWindow function, the program hangs: it keeps displaying the camera frames on the screen, but I can't click on anything else; the GUI freezes. Does anyone have any idea why this is happening and what I am doing wrong?
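
    A common cause is a blocking capture loop on the GUI thread: cvNamedWindow/cvWaitKey happen to pump events, which masks the problem. A sketch of the usual Qt fix, grabbing one frame per timer tick so Qt's own event loop keeps running (slot and member names are assumptions):

        // In the widget's constructor: poll the camera from Qt's event loop.
        QTimer *timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(updateFrame()));
        timer->start(33);   // roughly 30 fps

        // The slot: grab a frame and hand it to the QLabel, then return immediately.
        void MyWidget::updateFrame()
        {
            IplImage *frame = cvQueryFrame(capture);   // 'capture' is a CvCapture* member
            if (!frame) return;
            // Wrap the BGR IplImage data in a QImage and swap to RGB for display.
            QImage img((uchar *)frame->imageData, frame->width, frame->height,
                       frame->widthStep, QImage::Format_RGB888);
            label->setPixmap(QPixmap::fromImage(img.rgbSwapped()));
        }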


  • OpenGL textures trigger error 1281 and strange background behavior

    - by user3714670
    I am using SOIL to apply textures to VBOs. Without textures I could change the background and display black (default color) VBOs easily, but now, with textures, OpenGL is giving error 1281, the background is black, and some textures are not applied. There must be something I didn't understand about applying/loading the textures. The texture IS applied (nothing else is working, though), and the error appears when I try to use the shader program; I checked the compilation of the shaders and no problems were reported. Here is the code I use to load textures (once loaded, a texture is kept in memory); it mostly comes from the SOIL example:

        texture = SOIL_load_OGL_single_cubemap(
            filename,
            SOIL_DDS_CUBEMAP_FACE_ORDER,
            SOIL_LOAD_AUTO,
            SOIL_CREATE_NEW_ID,
            SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
        );
        if( texture > 0 ) {
            glEnable( GL_TEXTURE_CUBE_MAP );
            glEnable( GL_TEXTURE_GEN_S );
            glEnable( GL_TEXTURE_GEN_T );
            glEnable( GL_TEXTURE_GEN_R );
            glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glBindTexture( GL_TEXTURE_CUBE_MAP, texture );
            std::cout << "the loaded single cube map ID was " << texture << std::endl;
        } else {
            std::cout << "Attempting to load as a HDR texture" << std::endl;
            texture = SOIL_load_OGL_HDR_texture(
                filename,
                SOIL_HDR_RGBdivA2,
                0,
                SOIL_CREATE_NEW_ID,
                SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS
            );
            if( texture < 1 ) {
                std::cout << "Attempting to load as a simple 2D texture" << std::endl;
                texture = SOIL_load_OGL_texture(
                    filename,
                    SOIL_LOAD_AUTO,
                    SOIL_CREATE_NEW_ID,
                    SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
                );
            }
            if( texture > 0 ) {
                // enable texturing
                glEnable( GL_TEXTURE_2D );
                // bind an OpenGL texture ID
                glBindTexture( GL_TEXTURE_2D, texture );
                std::cout << "the loaded texture ID was " << texture << std::endl;
            } else {
                glDisable( GL_TEXTURE_2D );
                std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl;
            }
        }

    And here is how I apply it when drawing:

        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if(!TextureID)
            cout << "TextureID not found ..." << endl;

        // glEnableVertexAttribArray(TextureID);
        glActiveTexture(GL_TEXTURE0);
        if(SFML)
            sf::Texture::bind(sfml_texture);
        else {
            glBindTexture (GL_TEXTURE_2D, texture);
            // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture);
        }
        glUniform1i(TextureID, 0);

    I am not sure SOIL is suited to my program, as I want something as simple as possible (I used SFML's texture object, which was ideal, but I can't anymore); if I can get it to work, that would be great.

    EDIT: After narrowing down the code implied by the error, here is the code that provokes it; it is called between texture loading and VBO drawing:

        glEnableClientState(GL_VERTEX_ARRAY);
        // this gives the error:
        glUseProgram(this->shaderProgram);

        if (!shaderLoaded) {
            std::cout << "Loading default shaders" << std::endl;
            if(textured)
                loadShaderProgramm(texture_vertexSource, texture_fragmentSource);
            else
                loadShaderProgramm(default_vertexSource, default_fragmentSource);
        }

        glm::mat4 Projection = camera->getPerspective();
        glm::mat4 View = camera->getView();
        glm::mat4 Model = glm::mat4(1.0f);
        Model[0][0] *= scale_x;
        Model[1][1] *= scale_y;
        Model[2][2] *= scale_z;

        glm::vec3 translate_vec(this->x, this->y, this->z);
        glm::mat4 object_transform = glm::translate(glm::mat4(1.0f), translate_vec);
        glm::quat rotation = QAccumulative.getQuat();
        glm::mat4 matrix_rotation = glm::mat4_cast(rotation);
        object_transform *= matrix_rotation;
        Model *= object_transform;
        glm::mat4 MVP = Projection * View * Model;

        GLuint ModelID = glGetUniformLocation(this->shaderProgram, "M");
        if(ModelID == -1) cout << "ModelID not found ..." << endl;
        GLuint MatrixID = glGetUniformLocation(this->shaderProgram, "MVP");
        if(MatrixID == -1) cout << "MatrixID not found ..." << endl;
        GLuint ViewID = glGetUniformLocation(this->shaderProgram, "V");
        if(ViewID == -1) cout << "ViewID not found ..." << endl;

        glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
        glUniformMatrix4fv(ModelID, 1, GL_FALSE, &Model[0][0]);
        glUniformMatrix4fv(ViewID, 1, GL_FALSE, &View[0][0]);

        drawObject();
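
    Worth noting: GL error 1281 is GL_INVALID_VALUE, which glUseProgram raises when handed a handle that OpenGL never generated. In the excerpt above, glUseProgram(this->shaderProgram) runs before the !shaderLoaded branch has had a chance to build the program, so the first frame binds an uninitialized handle. A sketch of the reordering, assuming loadShaderProgramm sets shaderProgram and shaderLoaded:

        // Make sure a valid program object exists before binding it.
        if (!shaderLoaded) {
            std::cout << "Loading default shaders" << std::endl;
            if (textured)
                loadShaderProgramm(texture_vertexSource, texture_fragmentSource);
            else
                loadShaderProgramm(default_vertexSource, default_fragmentSource);
        }
        glUseProgram(this->shaderProgram);   // shaderProgram is now a real program ID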


  • Copying a 14bit grayscale image (saved in long[]) to a pictureBox

    - by Itsik
    My camera gives me 14-bit grayscale images, but the API's function returns a long* to the image data (so I'm assuming 4 bytes per pixel). My application is written in C++/CLI, and the pictureBox is the .NET type. I am currently using the BitmapData.LockBits() mechanism to gain pointer access to the image data, and using

        memcpy(bmpData.Scan0.ToPointer(), imageData, sizeof(long) * height * width)

    to copy the image data to the Bitmap. For now, the only PixelFormat that works is 32-bit RGB, and the image appears in shades of blue with contours. Trying to initialize the Bitmap as 16bppGrayScale isn't working. I would ideally like to cast the array from long to word and use a 16-bit format (hoping the 14-bit data will be displayed properly), but I'm not sure this works. Also, I don't want to iterate over the image data, so finding the min/max and then histogram-stretching to [0..255] isn't an option for me (the display must be as efficient as possible). Thanks
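
    One constraint worth knowing: GDI+ can store Format16bppGrayScale but cannot display it, so some conversion is unavoidable. A sketch of a single cheap linear pass (not the two passes a histogram stretch needs) that shifts each 14-bit value down to 8 bits and replicates it into the RGB channels; src and the locked 32bpp destination are assumed names:

        // src: long* from the camera API; dst: 32bpp bitmap bits from LockBits().
        unsigned char* dst = (unsigned char*)bmpData.Scan0.ToPointer();
        for (int i = 0; i < width * height; i++)
        {
            unsigned char g = (unsigned char)(src[i] >> 6);  // 14-bit -> 8-bit
            dst[i * 4 + 0] = g;    // blue
            dst[i * 4 + 1] = g;    // green
            dst[i * 4 + 2] = g;    // red
            dst[i * 4 + 3] = 255;  // alpha
        }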


  • How to disable all hardware keys programmatically in Android?

    - by Raghu Rami Reddy
    I am developing an Android application with lock functionality. Please suggest how to disable all the hardware keys programmatically. Here I am using the code below to disable the back button; I want the same functionality for all hardware keys, like home, search, camera, and the shortcut keys. Here is my code:

        @Override
        public boolean onKey(View v, int keyCode, KeyEvent event) {
            if (keyCode == KeyEvent.KEYCODE_SEARCH) {
                Log.d("KeyPress", "search");
                return true;
            }
            return false;
        }

    Thanks in advance.


  • Ad hoc network between iPhone and non-iPhone devices?

    - by gn-mithun
    Is it possible to set up an ad hoc network between an iPhone and a totally different device, like a camera, scanner, or printer, and build a data tunnel between them to exchange data or services? I believe the iPhone does not have the ability to create an ad hoc network, so I am assuming the other device would be the initiator of the ad hoc network. I tried the same with a MacBook and an iPhone, and I could surf on the phone after enabling Internet Sharing. But I want to make sure it's possible with other devices as well. I believe the upcoming Wi-Fi Direct is one way to do it.


  • Python Daemon Subprocess not working at boot

    - by Adam Richardson
    I am attempting to write a Python daemon that will launch at boot. The goal of the script is to receive a job from our Gearman load-balancing server and complete it. I am using the python-daemon module from PyPI (http://pypi.python.org/pypi/python-daemon/). The job being completed is converting images from ORF (Olympus raw image format) to JPEG. To accomplish this, an outside program is used, ufraw in this case. The problem comes in when I start the daemon at boot: if I launch it from the shell, it runs perfectly and completes the work; when it starts at boot, it is unable to launch the subprocess command.

        commandString = '/usr/bin/ufraw-batch --interpolation=four-color --wb=camera --compression=100 --output="' + outfile + '" --out-type=jpg --overwrite "' + infile + '"'
        args = shlex.split(commandString)
        process = subprocess.Popen(args).wait()

    I am not sure what I am doing wrong. Thanks for any help.
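
    At boot a daemon inherits almost no environment (minimal PATH and HOME, unpredictable working directory), which is the usual reason a subprocess that works from a shell fails. A sketch of defensive settings around the same invocation, with the environment values and log path as assumptions:

        import shlex
        import subprocess

        args = shlex.split(commandString)

        # Supply an explicit environment and working directory instead of
        # relying on whatever (almost nothing) init gives the daemon.
        env = {"PATH": "/usr/bin:/bin", "HOME": "/tmp"}
        with open("/tmp/ufraw.log", "a") as log:           # capture output for debugging
            process = subprocess.Popen(args, cwd="/", env=env,
                                       stdout=log, stderr=subprocess.STDOUT)
            process.wait()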


  • iPhone OS: Rotate just the button images, not the views

    - by Jongsma
    Hi, I am developing an iPad application which is basically a big drawing canvas with a couple of buttons at the side. (Doesn't sound very original, does it? :P) No matter how the user holds the device, the canvas should remain in place and should not be rotated. The simplest way to achieve this would be to support just one orientation. However, I would like the images on the buttons to rotate (like in the iPhone camera app) when the device is rotated. UIPopoverControllers should also use the user's current orientation (and not appear sideways). What is the best way to achieve this? (I figured I could rotate the canvas back into place with an affineTransform, but I don't think that is ideal.) Thanks in advance!
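
    One common pattern is to lock the interface to a single orientation and listen for device-orientation notifications, rotating only the button views with a transform; a sketch, assuming a cameraButton outlet:

        // Ask for orientation notifications (e.g. in viewDidLoad).
        [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(orientationChanged:)
                                                     name:UIDeviceOrientationDidChangeNotification
                                                   object:nil];

        // Rotate just the button image, leaving the canvas untouched.
        - (void)orientationChanged:(NSNotification *)note {
            CGFloat angle = 0;
            switch ([[UIDevice currentDevice] orientation]) {
                case UIDeviceOrientationLandscapeLeft:      angle = M_PI_2;  break;
                case UIDeviceOrientationLandscapeRight:     angle = -M_PI_2; break;
                case UIDeviceOrientationPortraitUpsideDown: angle = M_PI;    break;
                default:                                    angle = 0;       break;
            }
            [UIView beginAnimations:nil context:NULL];
            cameraButton.transform = CGAffineTransformMakeRotation(angle);
            [UIView commitAnimations];
        }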


  • Any "trick" to use some keys to launch an application?

    - by Profete162
    Hello, I am currently developing a free application (TaskOS) that gives users multitasking and lets them switch easily between applications on their phone (like Alt+Tab in Windows). That works pretty well, and users can launch my application with a long press on the "search" button by adding this line to the manifest:

        <action android:name="android.intent.action.SEARCH_LONG_PRESS" />

    I also managed to let users use the camera button (they can of course disable that in the application settings); the way to do that is slightly different:

        <receiver android:name=".CameraPressed">
            <intent-filter android:priority="10000">
                <action android:name="android.intent.action.CAMERA_BUTTON"/>
            </intent-filter>
        </receiver>

    I am now wondering if there are other ways to launch my task switcher easily (long press on the Home key, long press on the trackball, or any other idea). Reading the Google documentation has not helped me much. Any other idea/suggestion would be warmly welcome. Christophe


  • Using Sandy 3D AS3, fill the viewport (exact fit) with multiple 3D objects.

    - by Andrew Mullins
    I'm stitching together an image using multiple instances of sandy.primitive.Box. Each box is 96x91 while the viewport is 960x273, which should make for an exact fit if I lay out the boxes in a perfect 10x3 grid. However, I can't seem to get the exact camera fieldOfView. I've tried a couple of formulas (one for adjusting the "focal length" and one for adjusting the fov directly). Both produce a fov angle that is too narrow.

        // focal length
        (stage.stageHeight/2) / Math.tan(cam.fov / 2 * Math.PI / 180)

        // field of view
        2 * Math.atan2( (stage.stageHeight/2), -cam.z ) * (180 / Math.PI)

    Another question about the same project: I need to adjust the perspective of each cube so that the image appears to be in 2D space (flat). Any ideas on the best method for calculating such a "correction"?


  • running requestAnimationFrame from within a new object

    - by JVE999
    I'm having trouble running an animation. The following is inside var ob1 = function() {};. When called, it runs for a while and then I get the error Uncaught RangeError: Maximum call stack size exceeded. However, this same structure has no problems running outside of the object.

        /////////////// Render the scene ///////////////
        this.render = function () {
            renderer.render(scene, camera);
            if (isControls == true)
                controls.update(clock.getDelta());
            this.animate();
            //console.log(true);
            requestAnimationFrame(this.render());
        }

        /////////////// Update objects ///////////////
        this.animate = function () {
            console.log("!");
        }
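
    The stack overflow comes from requestAnimationFrame(this.render()): the parentheses call render immediately, so the function recurses synchronously instead of scheduling the next frame. A sketch of the fix is to pass the function itself, bound so 'this' still refers to the object:

        this.render = function () {
            renderer.render(scene, camera);
            if (isControls) controls.update(clock.getDelta());
            this.animate();
            // Pass the function (bound to this object); do not invoke it here.
            requestAnimationFrame(this.render.bind(this));
        };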


  • What is this event?

    - by Shawn Mclean
    Could someone explain what this C# code is doing?

        // launch the camera capture when the user touches the screen
        this.MouseLeftButtonUp += (s, e) => new CameraCaptureTask().Show();

        // this static event is raised when a task completes its job
        ChooserListener.ChooserCompleted += (s, e) => {
            // some code here
        };

    I know that CameraCaptureTask is a class and has a public method Show(). What kind of event is this? What is (s, e)?
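
    Those are lambda expressions used as event handlers; (s, e) are the standard sender/EventArgs parameters, with their types inferred from the event's delegate type. A sketch of the equivalent long-hand form of the first line, to make the shape explicit (the handler name is an assumption):

        // Equivalent to: this.MouseLeftButtonUp += (s, e) => new CameraCaptureTask().Show();
        this.MouseLeftButtonUp += new MouseButtonEventHandler(OnScreenTapped);

        private void OnScreenTapped(object s, MouseButtonEventArgs e)
        {
            // 's' is the object that raised the event; 'e' carries the event details.
            new CameraCaptureTask().Show();
        }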

