Search Results

Search found 3582 results on 144 pages for 'digital camera'.

Page 52 of 144

  • SurfaceView + GLSurfaceView + FrameLayout

    - by pohtzeyun
    Hi, I'm new at this (Java and OpenGL), so please bear with me if the answer to the question is simple. :) I'm trying to get a camera preview screen with the ability to display 3D objects simultaneously. Having gone through the samples in the API demos, I thought combining the code from those examples would suffice, but somehow it's not working: the app force-closes on startup and the error is reported as a null pointer exception. Could someone tell me where I went wrong and how to proceed from there? How I combined the code is shown below.

    myoverview.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="horizontal"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">
            <android.opengl.GLSurfaceView
                android:id="@+id/cubes"
                android:orientation="horizontal"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"/>
            <SurfaceView
                android:id="@+id/camera"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"/>
        </FrameLayout>

    myoverview.java:

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.SurfaceView;
        import android.view.Window;

        public class MyOverView extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // Hide the window title.
                requestWindowFeature(Window.FEATURE_NO_TITLE);
                // camera view as the background
                SurfaceView cameraView = (SurfaceView) findViewById(R.id.camera);
                cameraView = new CameraView(this);
                // visual of both cubes
                GLSurfaceView cubesView = (GLSurfaceView) findViewById(R.id.cubes);
                cubesView = new GLSurfaceView(this);
                cubesView.setRenderer(new CubeRenderer(false));
                // set view
                setContentView(R.layout.myoverview);
            }
        }

    GLSurfaceView.java:

        import android.content.Context;

        class GLSurfaceView extends android.opengl.GLSurfaceView {
            public GLSurfaceView(Context context) {
                super(context);
            }
        }

    NOTE: I didn't list the rest of the files as they are just copies of the API demos. CameraView refers to the CameraPreview.java example, and CubeRenderer refers to the CubeRenderer.java and Cube.java examples. Any help would be appreciated, as I've been stuck at this for a couple of days. :p Thanks. Sorry, didn't realise that the code was out of place due to formatting mistakes. :p
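
    As a point of comparison (this sketch is not from the original post): one common reason findViewById() returns null is that it is called before setContentView(), and the views created with new in the posted onCreate() are never attached to the inflated layout. Assuming the same myoverview.xml, CameraView, and CubeRenderer classes from the API demos, a minimal reordering might look like this:

        import android.app.Activity;
        import android.opengl.GLSurfaceView;
        import android.os.Bundle;
        import android.view.Window;

        public class MyOverView extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                requestWindowFeature(Window.FEATURE_NO_TITLE);

                // Inflate the layout first so findViewById() has a view hierarchy to search.
                setContentView(R.layout.myoverview);

                // Use the GLSurfaceView declared in the XML instead of constructing a new one.
                GLSurfaceView cubesView = (GLSurfaceView) findViewById(R.id.cubes);
                cubesView.setRenderer(new CubeRenderer(false));
            }
        }

    For the camera preview, the <SurfaceView android:id="@+id/camera"/> entry in the XML would likewise need to name the custom CameraView class (fully qualified), so that the inflated view is the one driving the camera rather than a CameraView built in code and never added to the layout.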

    Read the article

  • avoid the use of 'mixed=true' in xml schema

    - by Ralph Kretzler
    I am using xjc to convert a schema to Java classes. When I use mixed="true" in the schema, I lose the accessor methods for the child nodes; instead there is a single, general content accessor. Is there a way to rewrite the schema without using mixed="true"? There is no way I can change the XML, so I have to customize the schema. (The schema and sample XML from the original post were lost in formatting; only the text "camera or camera" remains.) Thanks, Ralph
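
    For context (an illustration of the symptom, not the poster's actual generated classes): with mixed="true", xjc typically collapses text and child elements into one general content list, whereas the same complex type without mixed content keeps a typed accessor per child. Roughly, for a hypothetical element with a single camera child:

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.bind.annotation.XmlAccessType;
        import javax.xml.bind.annotation.XmlAccessorType;
        import javax.xml.bind.annotation.XmlElement;
        import javax.xml.bind.annotation.XmlMixed;

        // Shape of a class xjc tends to generate for a mixed complex type:
        // text and child elements end up interleaved in one untyped list.
        @XmlAccessorType(XmlAccessType.FIELD)
        class DescriptionMixed {
            @XmlMixed
            protected List<Serializable> content = new ArrayList<Serializable>();

            // The single, general accessor the poster is describing.
            public List<Serializable> getContent() {
                return content;
            }
        }

        // The same type without mixed="true": each child keeps a typed accessor.
        @XmlAccessorType(XmlAccessType.FIELD)
        class DescriptionElementsOnly {
            @XmlElement
            protected String camera;

            public String getCamera() {
                return camera;
            }

            public void setCamera(String value) {
                this.camera = value;
            }
        }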

    Read the article

  • UIImagePickerController does not deliver geo tag data

    - by Gregory Mace
    When I use UIImagePickerController to select a photo, either from the Camera Roll or the Photo Library, the image that gets returned to me in the 'didFinishPickingImage' method does not contain the EXIF data for latitude and longitude. I know that the headers are there, because they show up when the photos are imported into iPhoto; images uploaded from the Camera Roll also contain the EXIF headers for location. Is there a way to get UIImagePickerController to deliver that information as well?

    Read the article

  • iPhone: Adding UIRequiredDeviceCapabilities

    - by pion
    I am reading "Device Support - Setting Required Hardware Capabilities" at http://developer.apple.com/iphone/library/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/AdvancedFeatures/AdvancedFeatures.html and I want to add the still-camera capability by doing the following: open my Info.plist, click +, add UIRequiredDeviceCapabilities in the Key column, add still-camera in the Value column, and save the updated Info.plist. Is this the correct way? Thanks in advance for your help.

    Read the article

  • How to add my program to the OS X system menu bar?

    - by Joe
    I have created a volume controller for iTunes, but I would like this app to place an icon in the OS X system menu bar and have my slider control drop down from it. I created this because I am using digital-out audio, where the keyboard volume keys do not work, so I have to switch to iTunes to change the music volume. Any guidance would be greatly appreciated.

    Read the article

  • AVCam memory low warning

    - by Red Nightingale
    This is less a question and more a record of what I've found around the AVCam sample code provided by Apple for iOS 4 and 5 camera manipulation. The symptom of the problem for me was that my app would crash on launching the AVCamViewController after taking around 5-10 photos. I ran the app through the memory-leak profiler and there were no apparent leaks, but on inspection with the Activity Monitor I discovered that something called mediaserverd was growing by 17 MB every time the camera was launched, and when it reached ~100 MB the app crashed with multiple low-memory warnings.

    Read the article

  • Is file transfer possible to iPhone 3.0 via Bluetooth or not?

    - by Dimitri Wetzel
    Is it possible to transfer files from a Bluetooth device, let's say a digital pen (e.g. Nokia or Logitech io2), to the iPhone? I am interested in whether I could write a native application that could somehow receive the binary file sent by the digital pen and do something with it. I am used to RFCOMM and OBEX, but I can only find inconclusive results when I search for their support in the iPhone SDK... Any ideas?

    Read the article

  • How to rotate the 3D axes (XYZ) using yaw, pitch, roll angles in OpenGL

    - by user3639338
    I am working on pose estimation, capturing from a camera with OpenCV. My code gives me three angles (yaw, pitch, roll) for each frame (head). How do I rotate the 3D axes (XYZ) by those three angles using OpenGL? I draw the 3D axes with OpenGL, but I have a problem rotating them for each frame (head) returned by the VideoCapture camera input; I can't rotate them continuously using the three angles my code returns.
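
    The platform and OpenGL bindings aren't stated in the question, so purely as an illustration, here is a minimal sketch in Java (an Android GLSurfaceView.Renderer using fixed-function GL ES 1.x, with hypothetical yaw/pitch/roll fields updated by the pose-estimation code) of applying the three angles as successive glRotatef calls each frame; the same idea carries over to desktop OpenGL:

        import javax.microedition.khronos.egl.EGLConfig;
        import javax.microedition.khronos.opengles.GL10;
        import android.opengl.GLSurfaceView;
        import android.opengl.GLU;

        public class AxesRenderer implements GLSurfaceView.Renderer {
            // Updated from the pose-estimation code for every captured frame (degrees).
            public volatile float yaw, pitch, roll;

            @Override
            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                gl.glClearColor(0f, 0f, 0f, 1f);
            }

            @Override
            public void onSurfaceChanged(GL10 gl, int width, int height) {
                gl.glViewport(0, 0, width, height);
                gl.glMatrixMode(GL10.GL_PROJECTION);
                gl.glLoadIdentity();
                GLU.gluPerspective(gl, 45f, (float) width / height, 0.1f, 100f);
                gl.glMatrixMode(GL10.GL_MODELVIEW);
            }

            @Override
            public void onDrawFrame(GL10 gl) {
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                gl.glLoadIdentity();                // start fresh so rotations don't accumulate
                gl.glTranslatef(0f, 0f, -5f);       // move the axes in front of the viewer
                gl.glRotatef(yaw,   0f, 1f, 0f);    // yaw: rotate about the Y axis
                gl.glRotatef(pitch, 1f, 0f, 0f);    // pitch: rotate about the X axis
                gl.glRotatef(roll,  0f, 0f, 1f);    // roll: rotate about the Z axis
                drawAxes(gl);                       // hypothetical helper that draws the X/Y/Z lines
            }

            private void drawAxes(GL10 gl) {
                // Drawing the three colored axis lines with vertex buffers is omitted for brevity.
            }
        }

    Note that the order of the three glRotatef calls matters and has to match the convention the pose estimator uses, and calling glLoadIdentity() at the start of each frame keeps the rotation absolute rather than accumulating from frame to frame.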

    Read the article

  • UIVideoAtPathIsCompatibleWithSavedPhotosAlbum generates an 'implicit declaration of function' error - why?

    - by just_another_coder
    In my project I am trying to save video to the iPhone after it has been taken by the camera. When I call the function UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path), it reports the error: Implicit declaration of function 'UIVideoAtPathIsCompatibleWithSavedPhotosAlbum'. I've imported MobileCoreServices/UTCoreTypes.h. I was previously using this same code for saving camera pictures, and it worked fine. In another class, in the same project, I am able to call the UIKit function UIImagePNGRepresentation() without any problems. So why does it give me this error?

    Read the article

  • Why does AVCaptureSession output the wrong orientation?

    - by Peter
    Hey guys, I followed Apple's instructions for capturing a video session using AVCaptureSession: http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html. One problem I'm facing is that even though the orientation of the camera / iPhone device is vertical (and the AVCaptureVideoPreviewLayer shows a vertical camera stream), the output image seems to be in landscape mode. I checked the width and height of the imageBuffer inside imageFromSampleBuffer: in the sample code, and I got 640px and 480px respectively. Does anyone know why this is the case? Thanks!

    Read the article

  • Open standard iPhone photo library application from iPhone app

    - by Vidas
    Hello, I need to open the photo library from my iPhone app just like the standard iPhone Camera application does. Is it possible? I don't want the picking-style interface of UIImagePickerController - it has unnecessary controls like the "Use" and "Cancel" buttons and does not have the full photo-library viewing functionality (zooming, sliding between photos, etc.). My goal is to send the user to the photo library to view photos (with the full viewing functionality) and, when the user has finished, return to my app - just like the standard Camera application does when you review your last-taken photos.

    Read the article

  • C# Microsoft LifeCam HD MJPEG capture

    - by IraqiGeek
    Hi, I have a Microsoft LifeCam HD-5000 webcam. According to AMCap, the camera outputs an MJPEG stream at 30fps at 720p. I want to capture each JPEG frame in a small application without doing any preview or decompression/transcoding, to keep CPU utilization as low as possible. I'm a C# developer, but I'm new to DirectShow. Is there a simple way to capture the MJPEG stream frame by frame, as it is output from the camera, in C#/.NET without decompressing it?

    Read the article

  • Load large images into Bitmap?

    - by GuyNoir
    I'm trying to make a basic application that displays an image from the camera, but when I try to load the .jpg from the SD card with BitmapFactory.decodeFile, it returns null. It doesn't give an out-of-memory error, which I find strange, and the exact same code works fine on smaller images. How does the built-in gallery display huge pictures from the camera with so little memory?
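
    For reference (this is a sketch of the usual Android approach, not code from the question, and the target dimensions are arbitrary): the common pattern for large camera JPEGs is to decode only the bounds first, choose an inSampleSize, and then decode the subsampled pixels, which is roughly what gallery-style apps do to stay within memory:

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        public final class BitmapLoader {
            // Decode a large JPEG at a reduced resolution so it fits in memory.
            public static Bitmap decodeScaled(String path, int reqWidth, int reqHeight) {
                // First pass: read only the image bounds, no pixel allocation.
                BitmapFactory.Options options = new BitmapFactory.Options();
                options.inJustDecodeBounds = true;
                BitmapFactory.decodeFile(path, options);

                // Pick a power-of-two sample size that brings the image near the target size.
                int inSampleSize = 1;
                while (options.outWidth / (inSampleSize * 2) >= reqWidth
                        && options.outHeight / (inSampleSize * 2) >= reqHeight) {
                    inSampleSize *= 2;
                }

                // Second pass: decode the subsampled pixels.
                options = new BitmapFactory.Options();
                options.inSampleSize = inSampleSize;
                return BitmapFactory.decodeFile(path, options);
            }
        }

    Called as, for example, decodeScaled(path, 480, 800), it returns a bitmap at roughly the requested size instead of the full camera resolution. A null return from decodeFile with no OutOfMemoryError can also simply mean the file could not be found or read, so the path itself is worth checking separately.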

    Read the article

  • How do I play H.264 video?

    - by ToughPal
    Hi, I have received the following video file from a camera (a security camera): http://dl.dropbox.com/u/1369478/tmw/recording.264. How can I view the content? Based on the extension I think it is an H.264 file. Is there a way to play this in the browser with HTML5? Regards

    Read the article

  • OpenGL Coordinate system confusion

    - by user146780
    Maybe I set up GLUT wrong. Basically I want vertices to be relative to their size in pixels. For example, right now if I create a hexagon, it takes up the whole screen even though the units are 6.

        #include <iostream>
        #include <stdlib.h> //Needed for "exit" function
        #include <cmath>

        //Include OpenGL header files, so that we can use OpenGL
        #ifdef __APPLE__
        #include <OpenGL/OpenGL.h>
        #include <GLUT/glut.h>
        #else
        #include <GL/glut.h>
        #endif

        using namespace std;

        //Called when a key is pressed
        void handleKeypress(unsigned char key, //The key that was pressed
                            int x, int y) {    //The current mouse coordinates
            switch (key) {
                case 27: //Escape key
                    exit(0); //Exit the program
            }
        }

        //Initializes 3D rendering
        void initRendering() {
            //Makes 3D drawing work when something is in front of something else
            glEnable(GL_DEPTH_TEST);
        }

        //Called when the window is resized
        void handleResize(int w, int h) {
            //Tell OpenGL how to convert from coordinates to pixel values
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION); //Switch to setting the camera perspective

            //Set the camera perspective
            glLoadIdentity(); //Reset the camera
            gluPerspective(45.0,                  //The camera angle
                           (double)w / (double)h, //The width-to-height ratio
                           1.0,                   //The near z clipping coordinate
                           200.0);                //The far z clipping coordinate
        }

        //Draws the 3D scene
        void drawScene() {
            //Clear information from last draw
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glLoadIdentity(); //Reset the drawing perspective
            glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
            glBegin(GL_POLYGON); //Begin quadrilateral coordinates

            //Trapezoid
            glColor3f(255,0,0);
            for(int i = 0; i < 6; ++i) {
                glVertex2d(sin(i/6.0*2* 3.1415), cos(i/6.0*2* 3.1415));
            }

            glEnd(); //End quadrilateral coordinates
            glutSwapBuffers(); //Send the 3D scene to the screen
        }

        int main(int argc, char** argv) {
            //Initialize GLUT
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
            glutInitWindowSize(400, 400); //Set the window size

            //Create the window
            glutCreateWindow("Basic Shapes - videotutorialsrock.com");
            initRendering(); //Initialize rendering

            //Set handler functions for drawing, keypresses, and window resizes
            glutDisplayFunc(drawScene);
            glutKeyboardFunc(handleKeypress);
            glutReshapeFunc(handleResize);

            glutMainLoop(); //Start the main loop. glutMainLoop doesn't return.
            return 0; //This line is never reached
        }

    How can I make it so that a polygon with vertices (0,0), (10,0), (10,10), (0,10) starts at the top left of the screen and has a width and height of 10 pixels? Thanks

    Read the article

  • GLSL: How to get pixel x,y,z world position?

    - by Rookie
    I want to adjust the colors depending on which XYZ position they are at in the world. I tried this in my fragment shader: vec4 pos = vec4(gl_FragCoord); // get pixel position - but it seems that the z coordinate is always relative to my camera... How do I make the coordinates independent of my camera position/angle? Edit: if it matters, here's my vertex shader: gl_Position = ftransform(); Edit 2: changed the title - I want world coordinates, not screen coordinates!

    Read the article

  • MonoTouch Augmented Reality

    - by cvista
    Hi, I'm trying to use the code posted here: http://seanho.posterous.com/monotouch-first-attempt-arkit-c-version - however, when I try to overlay it on a camera view, it seems to behave really strangely. I'm guessing that it's because the camera view only does portrait? Has anyone successfully used this, or maybe knows how to get this working? Cheers, w://

    Read the article

  • Can I make a web based video recording?

    - by Roman
    I want to have a web site which switches on the user's web camera, makes a video recording, and sends the result to my web server. Is it possible to do that? I think it should be; for example, sites such as chatroulette.com start the web camera. Should it be done with Adobe Flash technologies? Is it hard to do?

    Read the article

  • backbone.js - Having multiple instances of the same view

    - by TrueWheel
    I am having problems with multiple instances of the same view in different div elements. When I try to initialize them, only the second of the two elements appears, no matter what order I put them in. Here is the code for my view:

        var BodyShapeView = Backbone.View.extend({
            thingiview: null,
            scene: null,
            renderer: null,
            model: null,
            mouseX: 0,
            mouseY: 0,

            events: {
                'click button#front' : 'front',
                'click button#diag' : 'diag',
                'click button#in' : 'zoomIn',
                'click button#out' : 'zoomOut',
                'click button#on' : 'rotateOn',
                'click button#off' : 'rotateOff',
                'click button#wireframeOn' : 'wireOn',
                'click button#wireframeOff' : 'wireOff',
                'click button#distance' : 'dijkstra'
            },

            initialize: function(name){
                _.bindAll(this, 'render', 'animate');

                scene = new THREE.Scene();
                camera = new THREE.PerspectiveCamera( 15, 400 / 700, 1, 4000 );
                camera.position.z = 3;
                scene.add( camera );
                camera.position.y = -5;

                var ambient = new THREE.AmbientLight( 0x202020 );
                scene.add( ambient );

                var directionalLight = new THREE.DirectionalLight( 0xffffff, 0.75 );
                directionalLight.position.set( 0, 0, 1 );
                scene.add( directionalLight );

                var pointLight = new THREE.PointLight( 0xffffff, 5, 29 );
                pointLight.position.set( 0, -25, 10 );
                scene.add( pointLight );

                var loader = new THREE.OBJLoader();
                loader.load( "img/originalMeanModel.obj", function ( object ) {
                    object.children[0].geometry.computeFaceNormals();
                    var geometry = object.children[0].geometry;
                    console.log(geometry);
                    THREE.GeometryUtils.center(geometry);
                    geometry.dynamic = true;
                    var material = new THREE.MeshLambertMaterial({ color: 0xffffff, shading: THREE.FlatShading, vertexColors: THREE.VertexColors });
                    mesh = new THREE.Mesh(geometry, material);
                    model = mesh;
                    // model = object;
                    scene.add( model );
                } );

                // RENDERER
                renderer = new THREE.WebGLRenderer();
                renderer.setSize( 400, 700 );
                $(this.el).find('.obj').append( renderer.domElement );

                this.animate();
            },

    Here is how I create the instances:

        var morphableBody = new BodyShapeView({ el: $("#morphable-body") });
        var bodyShapeView = new BodyShapeView({ el: $("#mean-body") });

    Any help would be really appreciated. Thanks in advance.

    Read the article
