Search Results

Search found 3582 results on 144 pages for 'digital camera'.

Page 13/144 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Android streamming video from camera

    - by user1415651
    Sorry about my English, I hope it is understandable. I'm working on an Android app, and my goal is to stream video from one phone's camera to another Android phone (application). I don't know much about streaming video, so I'd like to know what I need to do. Do I need to create a streaming server that receives the video from one Android phone? How would I do that, and what is the best way? How can I set up/configure a streaming server? Can someone help me with an explanation or some tutorials? Thanks in advance!
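
    One common starting point (a rough sketch only) is to point MediaRecorder at a TCP socket so the phone's encoded camera stream is written straight to the other device. The host address, port and threading details below are assumptions, the CAMERA and INTERNET permissions are needed in the manifest, and the receiving side has to cope with a 3GP container whose header is only finalised when recording stops, which is why production apps usually move on to a proper RTSP/RTP stack. The MediaRecorder and ParcelFileDescriptor calls themselves are standard Android APIs:

      // Sketch: send the camera's encoded video to another device over a socket.
      // The address/port are made up; run this off the main thread.
      import android.media.MediaRecorder;
      import android.os.ParcelFileDescriptor;
      import android.view.Surface;
      import java.net.Socket;

      public class CameraStreamer {
          public void startStreaming(Surface previewSurface) throws Exception {
              Socket socket = new Socket("192.168.1.50", 5000);           // receiving phone (assumed address)
              ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(socket);

              MediaRecorder recorder = new MediaRecorder();
              recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
              recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
              recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);
              recorder.setOutputFile(pfd.getFileDescriptor());            // write to the socket, not a file
              recorder.setPreviewDisplay(previewSurface);                 // e.g. a SurfaceView's surface
              recorder.prepare();
              recorder.start();                                           // call stop()/release() when finished
          }
      }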

    Read the article

  • How to setup OpenGL camera for a racing game

    - by vian
    I need the view to show the road polygon (a rectangle 3.f * 100.f) with the vanishing point of the road at 3/4 of the viewport height and the nearest road edge along the viewport's bottom side. See the Crazy Taxi game for an example of what I want to do. I'm using the iPhone SDK 3.1.2 default OpenGL ES project template. I set up the projection matrix as follows: glMatrixMode(GL_PROJECTION); glLoadIdentity(); glFrustumf(-2.25f, 2.25f, -1.5f, 1.5f, 0.1f, 1000.0f); Then I use glRotatef to adjust for landscape mode and set up the camera. glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glRotatef(-90, 0.0f, 0.0f, 1.0f); const float cameraAngle = 45.0f * M_PI / 180.0f; gluLookAt(0.0f, 2.0f, 0.0f, 0.0f, 0.0f, 100.0f, 0.0f, cos(cameraAngle), sin(cameraAngle)); My road polygon triangle strip is like this: static const GLfloat roadVertices[] = { -1.5f, 0.0f, 0.0f, 1.5f, 0.0f, 0.0f, -1.5f, 0.0f, 100.0f, 1.5f, 0.0f, 100.0f, }; And I can't seem to find the right parameters for gluLookAt. My vanishing point is always at the center of the screen.
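
    For reference, the vanishing point of lines parallel to the road is simply the projection of the road's direction vector, so a camera looking straight down the road puts it wherever the view axis crosses the viewport; with a glFrustumf of bottom b and top t that is the fraction -b/(t-b) of the viewport height, and choosing b = -3t places it at 3/4. A level camera plus an off-centre frustum therefore avoids fiddling with gluLookAt angles, and the camera's height above the road then controls where the near road edge lands. The numbers in the sketch below are illustrative only, and it assumes the projection's top/bottom correspond to the on-screen vertical (the question's -90 degree modelview trick for landscape swaps which frustum extents are physically vertical, so the same idea would apply to left/right there). Java is used here just to show the arithmetic; the printed values drop into glFrustumf/gluLookAt.

      // Sketch: level camera + off-centre frustum so the vanishing point sits at 3/4 height.
      public final class RoadCamera {
          public static void main(String[] args) {
              float near = 0.1f, far = 1000.0f;
              float top = 0.05f;                    // vertical half-extent at the near plane (tune to taste)
              float bottom = -3.0f * top;           // -bottom/(top-bottom) = 0.75 -> vanishing point at 3/4 height
              float aspect = 480.0f / 320.0f;       // landscape iPhone screen (assumed)
              float right = 0.5f * (top - bottom) * aspect, left = -right;

              // glFrustumf(left, right, bottom, top, near, far);
              System.out.printf("glFrustumf(%f, %f, %f, %f, %f, %f)%n",
                      left, right, bottom, top, near, far);

              // Camera slightly above the road, looking parallel to it (down +Z); the eye
              // height controls where the near road edge meets the bottom of the screen.
              float eyeY = 2.0f;
              // gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ)
              System.out.printf("gluLookAt(0, %f, 0,  0, %f, 100,  0, 1, 0)%n", eyeY, eyeY);
          }
      }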

    Read the article

  • Manually changing keyboard orientation for a view that's on top of a camera view

    - by XKR
    I'm basically trying to reproduce the core functionality of the "At Once" app. I have a camera view and another view with a text view on it. I add both views to the window. All is well so far. [window addSubview:imagePicker.view]; [window addSubview:textViewController.view]; I understand that the UIImagePickerController does not support autorotation, so I handle it manually by watching UIDeviceOrientationDidChangeNotifications and applying the necessary transforms to the textViewController.view. Now, the problem here is the keyboard. If I do nothing, it just stays in portrait mode. I can get it to rotate by adding the following code to the notification handler. [[UIApplication sharedApplication] setStatusBarOrientation:interfaceOrientation]; [textView resignFirstResponder]; [textView becomeFirstResponder]; However, the following simple test produces weird behavior. Start the app in portrait mode. Rotate the device 90 degrees clockwise. Rotate the device 90 degrees counterclockwise (back to the initial position). Rotate the device 90 degrees clockwise. After step 4, instead of the landscape-mode keyboard, the portrait-style keyboard is shown, skewed to fit in the landscape keyboard frame. Perhaps my approach is wrong from the start. I was wondering if anyone has been able to reliably make the keyboard change its orientation in response to setStatusBarOrientation.

    Read the article

  • Software to clean up photos of whiteboards and documents?

    - by Norman Ramsey
    I take a lot of photos of whiteboards, blackboards, and so on for teaching purposes (examples online through May 2010). I'm interested in cleaning them up for archival purposes, preferably using Linux. Commercial products ClearBoard and PhotoNote are priced a little aggressively for my purposes, plus my students would like to have this capability too. Does anyone know of any good, open source software for converting photographs to images with just a few colors, eliminating perspective distortion, removing unwanted junk from around the edges of an image, or anything like that? I'm imagining that I start out with a picture of my whiteboard using red and black markers, and I end up with a three-color image using just white, red, and black. Or I photograph a laser-printed document and end up with a clean black-and-white image. I have tried standard tools that reduce the number of colors in an image, and they do a terrible job—probably because they are trying to reproduce the uneven illumination of the original image. Command-line Linux tools would be ideal.

    Read the article

  • Obtaining a world point from a screen point with an orthographic projection

    - by vargonian
    I assumed this was a straightforward problem but it has been plaguing me for days. I am creating a 2D game with an orthographic camera. I am using a 3D camera rather than just hacking it because I want to support rotating, panning, and zooming. Unfortunately the math overwhelms me when I'm trying to figure out how to determine if a clicked point intersects a bounds (let's say rectangular) in the game. I was under the impression that I could simply transform the screen point (the clicked point) by the inverse of the camera's View * Projection matrix to obtain the world coordinates of the clicked point. Unfortunately this is not the case at all; I get some point that seems to be in some completely different coordinate system. So then as a sanity check I tried taking an arbitrary world point and transforming it by the camera's View*Projection matrices. Surely this should get me the corresponding screen point, but even that didn't work, and it is quickly shattering any illusion I had that I understood 3D coordinate systems and the math involved. So, if I could form this into a question: How would I use my camera's state information (view and projection matrices, for instance) to transform a world point to a screen point, and vice versa? I hope the problem will be simpler since I'm using an orthographic camera and can make several assumptions from that. I very much appreciate any help. If it makes a difference, I'm using XNA Game Studio.
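
    The usual culprit for the "completely different coordinate system" symptom is feeding raw pixel coordinates into the inverse matrix: the inverse of the combined view-projection matrix maps normalized device coordinates in [-1, 1], not pixels, so the click has to be converted first (and the Y axis flipped, since screen Y grows downward). XNA's Viewport class also has Project/Unproject helpers that do this conversion for you. A minimal sketch of the math, in Java for consistency with the other examples on this page and assuming a column-vector convention (with XNA's row-vector matrices the matrix to invert is view * projection):

      // Sketch: convert a clicked pixel to world space with an orthographic camera.
      // invViewProj is the inverse of (projection * view), row-major, column-vector convention.
      public final class ScreenToWorld {
          public static double[] toWorld(double screenX, double screenY,
                                         double viewportW, double viewportH,
                                         double[] invViewProj) {
              // 1. Pixels -> normalized device coordinates in [-1, 1].
              double ndcX = 2.0 * screenX / viewportW - 1.0;
              double ndcY = 1.0 - 2.0 * screenY / viewportH;   // screen Y grows downward
              double ndcZ = 0.0;                               // any depth works for a 2D pick
              // 2. Multiply by the inverse view-projection matrix.
              double[] p = multiply(invViewProj, new double[] { ndcX, ndcY, ndcZ, 1.0 });
              // 3. Homogeneous divide (w stays 1 for a purely orthographic matrix).
              return new double[] { p[0] / p[3], p[1] / p[3], p[2] / p[3] };
          }

          private static double[] multiply(double[] m, double[] v) {
              double[] r = new double[4];
              for (int row = 0; row < 4; row++) {
                  r[row] = m[row*4]*v[0] + m[row*4+1]*v[1] + m[row*4+2]*v[2] + m[row*4+3]*v[3];
              }
              return r;
          }
      }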

    Read the article

  • 3D zooming technique to maintain the relative position of an object on screen

    - by stark
    Is it possible to zoom to a certain point on screen by modifying the field of view and rotating the view of the camera, so as to keep that point/object in the same place on screen while zooming? Changing the camera position is not allowed. I projected the 3D position of the object to screen space and remembered it. Then on each frame I calculate the direction to it in camera space, and I construct a rotation matrix to align this direction to the Z axis (in camera space). After this, I calculate the direction from the camera to the object in world space, transform this vector with the matrix I obtained earlier, and then use this final vector as the camera's new direction. And it's actually "kinda working"; the problem is that the result is more or less off from the camera's pre-zoom rotation, depending on the area you are trying to zoom into (larger error near the edges/corners). It looks acceptable, but I'm not settling for only this. Any suggestions/resources for doing this technique perfectly? If some of you want to explain the math in detail, be my guest, I can understand these things well.
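
    One likely reason the error grows toward the edges: a point's screen position goes with tan(angle)/tan(fov/2), not with the angle itself, so aligning directions purely by angle drifts as the FOV changes. A way to make it exact is to record the object's NDC coordinates once at the start of the zoom, then on every frame compute the camera-space direction the object must lie on under the new FOV for those NDC coordinates to be preserved, and rotate the camera by the rotation that carries that required direction onto the object's current camera-space direction (a camera rotation moves world points the opposite way in camera space). A sketch of the arithmetic, assuming a right-handed camera space with forward = -Z:

      // Sketch: keep a target at fixed NDC coordinates (ndcX, ndcY) while the FOV changes.
      public final class ZoomToPoint {
          /** Camera-space direction the target must lie on under the new FOV
              so that it still projects to (ndcX, ndcY). */
          public static double[] requiredDirection(double ndcX, double ndcY,
                                                   double newFovYRadians, double aspect) {
              double ty = Math.tan(0.5 * newFovYRadians);
              double tx = ty * aspect;
              return normalize(new double[] { ndcX * tx, ndcY * ty, -1.0 });
          }

          /** Axis (unit) and angle of the rotation taking direction a onto direction b.
              Usage: axisAngle(requiredDirection(...), currentCamSpaceDirOfObject),
              where the current direction comes from the view matrix applied to the
              object's (fixed) world position; apply that rotation to the camera. */
          public static double[] axisAngle(double[] a, double[] b) {
              double angle = Math.acos(Math.max(-1.0, Math.min(1.0, dot(a, b))));
              if (angle < 1e-6) return new double[] { 0, 1, 0, 0 };   // already aligned
              double[] axis = normalize(cross(a, b));
              return new double[] { axis[0], axis[1], axis[2], angle };
          }

          static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
          static double[] cross(double[] a, double[] b) {
              return new double[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
          }
          static double[] normalize(double[] v) {
              double l = Math.sqrt(dot(v, v));
              return new double[] { v[0]/l, v[1]/l, v[2]/l };
          }
      }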

    Read the article

  • Create Adjustable Depth of Field Photos with a DSLR

    - by Jason Fitzpatrick
    If you’re fascinated by the Lytro camera–a camera that lets you change the focus after you’ve taken the photo–this DSLR hack provides similar post-photo focus processing without the $400 price tag. Photography tinkerers at The Chaos Collective came up with a clever way of mimicking the adjustable depth-of-field effect of the Lytro camera. The secret sauce in their technique is setting the camera to manual focus and capturing a short 2-3 second video clip while rotating the focus through the entire focal range. From there, they use a simple applet to separate out each frame of the video. In the interactive demo in the original article, anywhere you click in the photo shifts the focus to that point, just like the post processing in the Lytro camera. It’s a different approach to the problem, but it yields roughly the same output. Hit up the link below for the full rundown on their technique and how you can get started using it with your own video-enabled DSLR. Camera HACK: DOF-Changeable Photos with an SLR [via Hack A Day]

    Read the article

  • Issues implementing arcball viewer

    - by Pris
    My scene has a simple cube, and a camera built with the lookAt function (I'm using OpenGL). The scene renders fine, and I'm sure I have my model/view/projection matrices set up correctly. Now I'm trying to implement arcball rotation for my camera, but I'm having some trouble. I've got it down to calculating the angle/axis rotation for a virtual sphere in normalized screen coordinates. That means when I move my mouse left to right, I get an angle around the Y axis, and moving my mouse up/down gets me an angle about X. I'm not sure where to go from here -- what do I need to do with my axis so I can apply the angle to simulate camera rotation about its viewpoint? If I apply the axis/angle rotation directly to the camera/view transform, I get what you'd expect: the view rotates about the world axes that the mouse motion over the virtual sphere corresponds to. So if I move the mouse up/down, the view rotates about the world's X axis (what I get reminds me of a first-person view), but this isn't what I want. I think the axis I get needs to be transformed so it passes through the camera viewpoint and is oriented correctly with respect to the camera, but I don't know if that's right or how to do that.
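
    The missing step is the one suspected at the end of the question: the axis produced from the mouse motion lives in view (camera) space, so it first has to be carried into world space using the camera's orientation (the rotation part of the inverse view matrix, i.e. the camera's right/up/forward basis vectors), and the rotation should then be applied as an orbit of the camera position (and its up vector) around the look-at target rather than around the world origin. A sketch, assuming those basis vectors can be pulled out of the view matrix; if the orbit turns the wrong way, negate the angle or the forward term to match your handedness:

      // Sketch: arcball orbit. The view-space axis is re-expressed in world space and the
      // camera position/up are rotated about the target with Rodrigues' rotation formula.
      public final class Arcball {
          /** right/up/forward are the camera's world-space basis vectors. */
          public static double[] viewAxisToWorld(double[] a,
                                                 double[] right, double[] up, double[] forward) {
              return normalize(new double[] {
                  a[0]*right[0] + a[1]*up[0] + a[2]*forward[0],
                  a[0]*right[1] + a[1]*up[1] + a[2]*forward[1],
                  a[0]*right[2] + a[1]*up[2] + a[2]*forward[2] });
          }

          /** Rotate point p about the unit axis k through `center` by `angle` radians. */
          public static double[] orbit(double[] p, double[] center, double[] k, double angle) {
              double[] v = { p[0]-center[0], p[1]-center[1], p[2]-center[2] };
              double c = Math.cos(angle), s = Math.sin(angle), d = dot(k, v);
              double[] kxv = cross(k, v);
              return new double[] {
                  center[0] + v[0]*c + kxv[0]*s + k[0]*d*(1-c),
                  center[1] + v[1]*c + kxv[1]*s + k[1]*d*(1-c),
                  center[2] + v[2]*c + kxv[2]*s + k[2]*d*(1-c) };
          }
          // Usage: newEye = orbit(eye, target, worldAxis, angle);
          //        newUp  = orbit(up, new double[]{0,0,0}, worldAxis, angle);

          static double dot(double[] a, double[] b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }
          static double[] cross(double[] a, double[] b) {
              return new double[]{ a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
          }
          static double[] normalize(double[] v) {
              double l = Math.sqrt(dot(v, v));
              return new double[]{ v[0]/l, v[1]/l, v[2]/l };
          }
      }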

    Read the article

  • Problems with 3D transformation - (SharpDX)

    - by Morphex
    First of all, I have been trying to get this right for a couple of days already; I have read so much info and still fail miserably to understand it. So I am telling you up front that even though I have done a fair amount of research myself, I have failed to implement it. I am trying to create a generic camera class for a game engine of sorts - for research purposes only - and the thing is I have no idea how to go about it. I have read about quaternions and matrices, but when it comes to the actual implementation I suck at it. SharpDX already has Matrices and Quaternions implemented, so the math behind them is no big deal. How in the world would I go about creating a camera? I have seen so many camera examples and still can't make one that works as expected. I would like to implement different types too (orbital, 6DoF, FPS). So what is needed for a camera? Up, Forward and Right vectors are needed, I read, as well as a quaternion for rotations, and View and Projection matrices. I understand that an FPS camera, for instance, only rotates around the world Y axis and the camera's own Right axis, a 6DoF camera always rotates around its own axes, and an orbital camera just moves around the target at a set distance while always looking at that fixed point. The concepts are there; implementing this is just not trivial for me. Can anyone point out what I am missing, what I got wrong? I would really appreciate a tutorial, some piece of code, or just a plain explanation of the concepts. Thank you for reading, a frustrated coder.
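
    A camera really is just a position plus an orientation: the view matrix is the inverse of the camera's world transform, and the three types differ only in how the orientation is updated. An FPS camera keeps a yaw (about the world up axis) and a clamped pitch (about its own right axis); a 6DoF camera accumulates a quaternion by rotating about its own local axes; an orbital camera keeps a target point, a distance and two angles, placing the eye on a sphere around the target while always looking at it. Below is a sketch of the FPS flavour, written in Java rather than C#, that produces eye/target/up values ready to feed a look-at helper (SharpDX's Matrix.LookAtLH/RH, for instance); the right-handed, Y-up conventions are assumptions.

      // Sketch of an FPS camera: orientation is a yaw about world +Y plus a clamped pitch;
      // feed eye/target/up into your library's look-at helper to build the view matrix.
      public final class FpsCamera {
          public double x, y, z;           // camera position
          private double yaw, pitch;       // radians

          public void look(double dYaw, double dPitch) {
              yaw += dYaw;
              pitch = Math.max(-1.55, Math.min(1.55, pitch + dPitch));  // stop just short of +/-90 degrees
          }

          /** Unit view direction built from the two angles. */
          public double[] forward() {
              double cp = Math.cos(pitch);
              return new double[] { cp * Math.sin(yaw), Math.sin(pitch), cp * Math.cos(yaw) };
          }

          /** eye, target and up, ready for lookAt(eye, target, up); the view matrix
              is then the inverse of the camera's world transform. */
          public double[][] eyeTargetUp() {
              double[] f = forward();
              return new double[][] {
                  { x, y, z },
                  { x + f[0], y + f[1], z + f[2] },
                  { 0, 1, 0 }                       // world up is fine while pitch stays clamped
              };
          }
      }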

    Read the article

  • How do I cap rendering of tiles in a 2D game with SDL?

    - by farmdve
    I have some boilerplate code working: I basically have a tile-based map composed of just 3 colors and some walls, and I render it with SDL. The tiles are in a bmp file, but each tile inside it corresponds to an internal number for the type of tile (color or wall). I have pretty basic collision detection and it works; I can also detect continuous presses, which allows me to move pretty much anywhere I want. I also have a moving camera, which follows the object. The problem is that the tile-based map is bigger than the resolution, so not all of the map can be displayed on the screen, but it's still rendered. I would like to cap it, but since this is new to me, I pretty much have no idea how. Although I cannot post all the code (even though I am a newbie and the code is pretty basic, it's already quite a few lines), I can post what I tried to do: void set_camera() { //Center the camera over the dot camera.x = ( player.box.x + DOT_WIDTH / 2 ) - SCREEN_WIDTH / 2; camera.y = ( player.box.y + DOT_HEIGHT / 2 ) - SCREEN_HEIGHT / 2; //Keep the camera in bounds. if(camera.x < 0 ) { camera.x = 0; } if(camera.y < 0 ) { camera.y = 0; } if(camera.x > LEVEL_WIDTH - camera.w ) { camera.x = LEVEL_WIDTH - camera.w; } if(camera.y > LEVEL_HEIGHT - camera.h ) { camera.y = LEVEL_HEIGHT - camera.h; } } set_camera() is the function which calculates the camera position based on the player's position. I won't pretend I know much about it. Rectangle box = {0,0,0,0}; for(int t = 0; t < TOTAL_TILES; t++) { if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT)) apply_surface(box.x - camera.x, box.y - camera.y, surface, screen, &clips[tiles[t]]); box.x += TILE_WIDTH; //If we've gone too far if(box.x >= LEVEL_WIDTH) { //Move back box.x = 0; //Move to the next row box.y += TILE_HEIGHT; } } This is basically my render code. The for loop loops over 192 tiles stored in an int array, each with its own unique value describing the tile type (wall or one of three possible colored tiles). box is an SDL_Rect containing the current position of the tile, which is calculated on render. TILE_HEIGHT and TILE_WIDTH are of value 80. So the cap is determined by if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT)) However, this is just me playing with the values and seeing what doesn't break it. I pretty much have no idea how to calculate it. My screen resolution is 1024x768, and the tile map is of size 1280x960.
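
    The culling test needs to check that the tile's rectangle actually overlaps the camera's rectangle, which means all four sides combined with a logical AND; the condition in the question uses OR and only looks at two sides, so it lets nearly everything through. A sketch of the test, written in Java like the other examples on this page, but the boolean expression drops straight into the C loop above:

      // Sketch: a tile at (x, y) should be drawn only if it overlaps the camera rectangle.
      public final class TileCulling {
          static boolean tileVisible(int x, int y, int tileW, int tileH,
                                     int camX, int camY, int camW, int camH) {
              return x + tileW > camX && x < camX + camW     // overlaps horizontally
                  && y + tileH > camY && y < camY + camH;    // overlaps vertically
          }
          // In the question's render loop this becomes:
          // if (tileVisible(box.x, box.y, TILE_WIDTH, TILE_HEIGHT,
          //                 camera.x, camera.y, camera.w, camera.h))
          //     apply_surface(box.x - camera.x, box.y - camera.y, surface, screen, &clips[tiles[t]]);
      }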

    Read the article

  • import mini DV films from Samsung digital cam VP-D353 [USB]

    - by bobo
    I tried to import mini DV films from my old video camera (VP-D353), and it's not recognised by my Lubuntu (12.04). I tried dvgrab, which should work, but it doesn't. I found this tutorial: http://www.foscode.com/linux-minidv-usb-video-capture/ But it just says "waiting for DV". I don't really know what I should do now. Here is what I get for the camera when I run sudo lsusb: Bus 002 Device 013: ID 04e8:120f Samsung Electronics Co., Ltd Thanks

    Read the article

  • capturing video from ip camera

    - by Ruby
    I am trying to capture video from an IP camera into my application, and it's throwing this exception: com.sun.image.codec.jpeg.ImageFormatException: Not a JPEG file: starts with 0x0d 0x0a at sun.awt.image.codec.JPEGImageDecoderImpl.readJPEGStream(Native Method) at sun.awt.image.codec.JPEGImageDecoderImpl.decodeAsBufferedImage(Unknown Source) at test.AxisCamera1.readJPG(AxisCamera1.java:130) at test.AxisCamera1.readMJPGStream(AxisCamera1.java:121) at test.AxisCamera1.readStream(AxisCamera1.java:100) at test.AxisCamera1.run(AxisCamera1.java:171) at java.lang.Thread.run(Unknown Source) The exception is thrown at image = decoder.decodeAsBufferedImage(); Here is the code I am trying: private static final long serialVersionUID = 1L; public boolean useMJPGStream = true; public String jpgURL = "http://ip here/video.cgi/jpg/image.cgi?resolution=640x480"; public String mjpgURL = "http://ip here /video.cgi/mjpg/video.cgi?resolution=640x480"; DataInputStream dis; private BufferedImage image = null; public Dimension imageSize = null; public boolean connected = false; private boolean initCompleted = false; HttpURLConnection huc = null; Component parent; /** Creates a new instance of AxisCamera */ public AxisCamera1(Component parent_) { parent = parent_; } public void connect() { try { URL u = new URL(useMJPGStream ? mjpgURL : jpgURL); huc = (HttpURLConnection) u.openConnection(); // System.out.println(huc.getContentType()); InputStream is = huc.getInputStream(); connected = true; BufferedInputStream bis = new BufferedInputStream(is); dis = new DataInputStream(bis); if (!initCompleted) initDisplay(); } catch (IOException e) { // incase no connection exists wait and try // again, instead of printing the error try { huc.disconnect(); Thread.sleep(60); } catch (InterruptedException ie) { huc.disconnect(); connect(); } connect(); } catch (Exception e) { ; } } public void initDisplay() { // setup the display if (useMJPGStream) readMJPGStream(); else { readJPG(); disconnect(); } imageSize = new Dimension(image.getWidth(this), image.getHeight(this)); setPreferredSize(imageSize); parent.setSize(imageSize); parent.validate(); initCompleted = true; } public void disconnect() { try { if (connected) { dis.close(); connected = false; } } catch (Exception e) { ; } } public void paint(Graphics g) { // used to set the image on the panel if (image != null) g.drawImage(image, 0, 0, this); } public void readStream() { // the basic method to continuously read the // stream try { if (useMJPGStream) { while (true) { readMJPGStream(); parent.repaint(); } } else { while (true) { connect(); readJPG(); parent.repaint(); disconnect(); } } } catch (Exception e) { ; } } public void readMJPGStream() { // preprocess the mjpg stream to remove the // mjpg encapsulation readLine(3, dis); // discard the first 3 lines readJPG(); readLine(2, dis); // discard the last two lines } public void readJPG() { // read the embedded jpeg image try { JPEGImageDecoder decoder = JPEGCodec.createJPEGDecoder(dis); image = decoder.decodeAsBufferedImage(); } catch (Exception e) { e.printStackTrace(); disconnect(); } } public void readLine(int n, DataInputStream dis) { // used to strip out the // header lines for (int i = 0; i < n; i++) { readLine(dis); } } public void readLine(DataInputStream dis) { try { boolean end = false; String lineEnd = "\n"; // assumes that the end of the line is marked // with this byte[] lineEndBytes = lineEnd.getBytes(); byte[] byteBuf = new byte[lineEndBytes.length]; while (!end) { dis.read(byteBuf, 0, lineEndBytes.length); String t = new String(byteBuf); System.out.print(t); // uncomment if you want to see what the // lines actually look like if (t.equals(lineEnd)) end = true; } } catch (Exception e) { e.printStackTrace(); } } public void run() { System.out.println("in Run..................."); connect(); readStream(); } @SuppressWarnings("deprecation") public static void main(String[] args) { JFrame jframe = new JFrame(); jframe.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); AxisCamera1 axPanel = new AxisCamera1(jframe); new Thread(axPanel).start(); jframe.getContentPane().add(axPanel); jframe.pack(); jframe.show(); } } Any suggestions on what I am doing wrong here?
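
    The "starts with 0x0d 0x0a" message suggests the JPEG decoder is being handed the CRLF and multipart headers that precede each frame, so discarding a fixed three lines is not lining up with the actual part boundary. A more robust approach is to read header lines until the blank line that ends them, take the frame size from the Content-Length header (Axis cameras normally send one per part, though treat that as an assumption), read exactly that many bytes, and decode them with javax.imageio.ImageIO instead of the non-public com.sun JPEG classes. A sketch:

      // Sketch: read one frame from an MJPEG (multipart/x-mixed-replace) stream.
      // Assumes each part carries a Content-Length header; adapt if yours does not.
      import javax.imageio.ImageIO;
      import java.awt.image.BufferedImage;
      import java.io.ByteArrayInputStream;
      import java.io.DataInputStream;
      import java.io.EOFException;
      import java.io.IOException;

      public final class MjpegReader {
          public static BufferedImage readFrame(DataInputStream dis) throws IOException {
              // Skip blank lines and the boundary marker that precede the part headers.
              String line;
              do { line = readLine(dis); } while (line.isEmpty() || line.startsWith("--"));

              int contentLength = -1;
              while (!line.isEmpty()) {                       // headers end at an empty line
                  if (line.toLowerCase().startsWith("content-length")) {
                      contentLength = Integer.parseInt(line.substring(line.indexOf(':') + 1).trim());
                  }
                  line = readLine(dis);
              }
              if (contentLength < 0) throw new IOException("no Content-Length in part headers");

              byte[] jpeg = new byte[contentLength];
              dis.readFully(jpeg);                            // exactly one JPEG frame
              return ImageIO.read(new ByteArrayInputStream(jpeg));
          }

          private static String readLine(DataInputStream dis) throws IOException {
              StringBuilder sb = new StringBuilder();
              int b;
              while ((b = dis.read()) != -1 && b != '\n') {
                  if (b != '\r') sb.append((char) b);
              }
              if (b == -1 && sb.length() == 0) throw new EOFException("stream ended");
              return sb.toString();
          }
      }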

    Read the article

  • How does one make the camera on the iPhone appear from the app delegate? Is it possible?

    - by K-RAN
    I'm just playing around with a simple program that opens the camera. That's literally all that I want to do. I'm a beginner and I believe that I have the basics down in terms of UI management for the iPhone, so I decided to give this one a whirl. What I'm trying to do right now is... - (BOOL) application:(UIApplication*) application didFinishLaunchingWithOptions:(NSDictionary*) launchOptions { UIImagePickerController * camera = [[UIImagePickerController alloc] init]; camera.delegate = self; camera.sourceType = UIImagePickerControllerSourceTypeCamera; camera.allowsEditing = NO; camera.showsCameraControls = NO; [viewController presentModalViewController:camera animated:NO]; [window addSubview:viewController.view]; [window makeKeyAndVisible]; return YES; } So basically, initialize the camera, set some things and show it in the main view. I set the camera's delegate to self because this code is placed in the delegate class (and yes, the delegate class conforms to UIImagePickerControllerDelegate && UINavigationControllerDelegate). The main problem right now is that nothing appears on the screen. I have absolutely no idea what I'm doing wrong, especially since the program is building correctly with no errors or warnings... Any help is appreciated! Thanks a lot :D

    Read the article

  • How to use web camera in android emulator to capture a live image?

    - by Kumar
    Hi, as far as I know, the Android emulator doesn't have a camera. To capture a live image we have to use a web camera. I have seen code on the web site "http://www.tomgibara.com/android/camera-source" for using a web camera in the Android emulator to capture an image, but I don't know how to use this code. I am new to this field; can anyone help me with how to do it? Regards, s.kumaran.

    Read the article

  • Digital Signature Device Recommendations (Mini Tablet)

    - by blu
    I'd like to capture signatures of application users with a mini-tablet/little POS signature device. I welcome any first-hand experiences of which ones were good and which ones to stay away from. Off the top of my head I can think of a few features I'd like to see: a USB interface; not too expensive (I don't know, 100-200 dollars?); easy to integrate with a managed .NET application. Also, I realize most people, myself included, think of digitally signing assemblies with code rather than a mini-tablet device; if there is a more accurate phrase for this, please let me know. Thanks for any input.

    Read the article

  • Sell digital goods online

    - by Adam
    What's the best solution for a photographer wanting to sell image files online? I tried Zen Cart, but it's way over the top and its backend looks like a 3-year-old designed it. Is there a free solution out there? One that has easily modifiable templates and isn't too tedious for adding hundreds of pictures for sale? I'm seriously thinking that learning how to do it myself with the whole PayPal IPN thing might be the way to go. Suggestions are greatly appreciated! Adam

    Read the article

  • How to verify a digital signature with openssl

    - by Aaron Carlino
    I'm using a third-party credit card processing service (Paybox) that, after a successful transaction, redirects back to the website with a signature in the URL as a security measure to prevent people from manipulating data. It's supposed to prove that the request originated from this service. So my success URL looks something like this: /success.php?signature=[HUGE HASH] I have no idea where to start with verifying this signature. This service provides a public key, and I assume I need to create a private key, but I don't know much beyond that. I'm pretty good with Linux, and I know I'll have to run some openssl commands. I'm writing the verification script in PHP, which also has native openssl functions. If anyone could please push me in the right direction with some pseudo code, or even functional code, I'd be very grateful. Thanks.
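
    Verification only needs the service's public key, the exact bytes that were signed (here, the query string without the signature parameter), and the signature itself; no private key of your own is required. In PHP the usual route is openssl_pkey_get_public() to load the key and openssl_verify() to check the signature (Paybox signatures are, as far as I recall, RSA over SHA-1 and sent URL- plus Base64-encoded, but treat those details as assumptions and check their documentation). The sketch below shows the same flow in Java, the language used for the other examples on this page; the key path and algorithm are assumptions:

      // Sketch: verify an RSA signature over the data that was actually signed.
      // Key path, algorithm (SHA1withRSA) and Base64 encoding are assumptions;
      // check the provider's documentation for the exact parameters.
      import java.nio.charset.StandardCharsets;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.security.KeyFactory;
      import java.security.PublicKey;
      import java.security.Signature;
      import java.security.spec.X509EncodedKeySpec;
      import java.util.Base64;

      public final class VerifySignature {
          public static boolean verify(String signedData, String signatureBase64,
                                       String publicKeyPemPath) throws Exception {
              // Strip the PEM armour and decode the DER body of the public key.
              String pem = new String(Files.readAllBytes(Paths.get(publicKeyPemPath)),
                                      StandardCharsets.US_ASCII)
                      .replace("-----BEGIN PUBLIC KEY-----", "")
                      .replace("-----END PUBLIC KEY-----", "")
                      .replaceAll("\\s", "");
              PublicKey key = KeyFactory.getInstance("RSA")
                      .generatePublic(new X509EncodedKeySpec(Base64.getDecoder().decode(pem)));

              Signature sig = Signature.getInstance("SHA1withRSA");
              sig.initVerify(key);
              sig.update(signedData.getBytes(StandardCharsets.UTF_8));         // the data that was signed
              return sig.verify(Base64.getDecoder().decode(signatureBase64));  // true = genuine
          }
      }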

    Read the article

  • How to create digital signature that can not be used to reproduce the message twice

    - by freediver
    I am creating a client-server application and I'd like to send data from server to client securely. Using public/private key algorithms makes sense and in PHP we can use openssl_sign and openssl_verify functions to check that the data came by someone who has the private key. Now imagine that one of the actions sent by server to client is destructive in nature. If somebody uses an HTTP sniffer to catch this command (which will be signed properly) how can I further protect the communication to ensure that only commands coming from our server get processed by the client? I was thinking about using current UTC time as part of the encrypted data but client time might be off. Is there a simple solution to the problem?
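
    The standard fix is to make every signed message unique: put a nonce (a random one-time value) and a timestamp inside the signed payload, have the client remember the nonces it has already accepted, and reject anything whose timestamp falls outside an allowed clock-skew window (which also bounds how long nonces need to be remembered). If client clocks are too unreliable even for a generous window, the client can instead generate a fresh challenge itself and require the server to echo it inside the signed command. A sketch of the client-side check, with the payload layout and the five-minute window as assumptions, shown in Java for consistency with the other examples on this page:

      // Sketch: reject replayed or stale commands. The signature over
      // (nonce + timestamp + command) is assumed to have been verified already.
      import java.util.HashMap;
      import java.util.Map;

      public final class ReplayGuard {
          private static final long MAX_SKEW_MILLIS = 5 * 60 * 1000;   // tolerate 5 min of clock drift (assumed)
          private final Map<String, Long> seenNonces = new HashMap<>();

          public synchronized boolean accept(String nonce, long timestampMillis) {
              long now = System.currentTimeMillis();
              if (Math.abs(now - timestampMillis) > MAX_SKEW_MILLIS) return false;  // too old or too far ahead
              if (seenNonces.containsKey(nonce)) return false;                      // replay
              seenNonces.put(nonce, timestampMillis);
              // Nonces outside the window can no longer pass the timestamp check, so forget them.
              seenNonces.values().removeIf(t -> now - t > MAX_SKEW_MILLIS);
              return true;
          }
      }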

    Read the article

  • Hub Forum - Connecting Digital Influencers - Paris 10 & 11 October 2013

    - by Louisa Aggoune
    ORACLE sponsored the 4th edition of the HUB FORUM, which took place at the Espace Pierre Cardin. On 10-11 October 2013, more than 1,200 leaders from the worlds of digital, communications, marketing, advertising and innovation came together for two days of conferences at the 4th edition of HUBFORUM Paris, organised by the HUB Institute. It is the event that brings together decision-makers from the digital world, giving them the chance to meet the best communication experts of the moment and to exchange ideas about practices that have proven their worth. It offers an exceptional opportunity to discuss the future of digital marketing, new technologies and the media. The HUB FORUM in figures: 18,580 mentions; 3,520 live views; 80 speakers; 1,200 participants. On this occasion, Pascal Hary, Director of Sales Development, Customer eXperience & Social Europe, gave a talk on the theme #Social Trend 2014. To see the photo album: https://www.facebook.com/hubforum

    Read the article

  • How can I take a photo from my camera phone remotely? [closed]

    - by kurt
    Is there any app with which I could control the camera of my phone remotely (even Bluetooth would do)? I have a Nokia 5800 XpressMusic. The app could either be a stand-alone app installed on the mobile phone that could take snaps at predefined intervals, or an app that I can install on my PC and then use to control my phone's camera via Bluetooth, Wi-Fi or anything else. Is this possible?

    Read the article
