Search Results

Search found 5213 results on 209 pages for 'integrated camera'.


  • Issue in connecting Pro9000 camera directly with OMAP3530

    - by Vinay krishna
     I have a video phone application running on an OMAP3530 board. When I connect the camera (a Pro 9000) through a powered USB hub (In: 100-240 V, Out: 5 V, 1 A), everything works fine when I make a video call. But if I connect the camera directly to the OMAP3530 board and try to make a video call, the board does not send any of the locally captured video packets, and the PIP (Picture-in-Picture) is disabled.

    Read the article

  • Re. copying movie from USB card reader back to my camera

    - by Alice P
     I have a Canon Digital Ixus 860IS. I originally copied a short movie from my camera onto the computer via a USB card reader, and then copied it back onto the camera via the same card reader along with some photos. The photos copied back fine, but the movie, although it shows as having been copied, can't be seen. Any reason for this? Thanks

    Read the article

  • Replacement for HttpContext.Current.Request.ServerVariables["SERVER_NAME"] in Integrated Mode

    - by Donniel
     Using HttpContext.Current.Request.ServerVariables["SERVER_NAME"] in IIS7 integrated mode gives a "Request is not available in this context" error in Application_Start, as described at http://mvolo.com/blogs/serverside/archive/2007/11/10/Integrated-mode-Request-is-not-available-in-this-context-in-Application%5F5F00%5FStart.aspx Is there a replacement I can use in global.asax code for HttpContext.Current.Request.ServerVariables["SERVER_NAME"]? I'm after something analogous to using

       String strPath = HttpContext.Current.Server.MapPath(HttpRuntime.AppDomainAppVirtualPath);

     instead of

       String strPath = HttpContext.Current.Server.MapPath(HttpContext.Current.Request.ServerVariables["PATH_INFO"]);

    Read the article

  • Axis Aligned Billboard: how to make the object look at camera

    - by user19787
     I am trying to make an axis-aligned billboard with Pyglet. I have looked at several tutorials, but they only show me how to get the up, right, and look vectors. So far this is what I have:

       target = cam.pos
       look = norm(target - billboard.pos)
       right = norm(Vector3(0,1,0) * look)
       up = look * right
       gluLookAt(look.x, look.y, look.z,
                 self.pos.x, self.pos.y, self.pos.z,
                 up.x, up.y, up.z)

     This does nothing for me visibly. Any idea what I'm doing wrong?
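
     A minimal sketch of the usual fix (an illustration, not the linked answer): gluLookAt positions the camera rather than the object, so the billboard's rotation is better built directly from the right/up/look vectors and multiplied onto the modelview stack. This assumes pyglet 1.x's legacy OpenGL bindings (the same fixed-function pipeline the gluLookAt call above implies) and plain (x, y, z) tuples for the positions.

       import math
       from pyglet.gl import GLfloat, glMultMatrixf

       def billboard_matrix(cam_pos, obj_pos, axis=(0.0, 1.0, 0.0)):
           # look points from the billboard toward the camera, projected onto the
           # plane perpendicular to the fixed axis (axis-aligned billboarding)
           lx, ly, lz = (cam_pos[0] - obj_pos[0],
                         cam_pos[1] - obj_pos[1],
                         cam_pos[2] - obj_pos[2])
           d = lx * axis[0] + ly * axis[1] + lz * axis[2]
           lx, ly, lz = lx - d * axis[0], ly - d * axis[1], lz - d * axis[2]
           n = math.sqrt(lx * lx + ly * ly + lz * lz) or 1.0
           lx, ly, lz = lx / n, ly / n, lz / n
           # right = axis x look (a cross product, not a component-wise '*')
           rx = axis[1] * lz - axis[2] * ly
           ry = axis[2] * lx - axis[0] * lz
           rz = axis[0] * ly - axis[1] * lx
           # column-major 4x4: columns are right, axis (up), look, translation
           return (GLfloat * 16)(rx, ry, rz, 0.0,
                                 axis[0], axis[1], axis[2], 0.0,
                                 lx, ly, lz, 0.0,
                                 obj_pos[0], obj_pos[1], obj_pos[2], 1.0)

     Per frame: glPushMatrix(), glMultMatrixf(billboard_matrix(cam_pos, quad_pos)), draw the quad, glPopMatrix().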

    Read the article

  • How do I convert my matrix from OpenGL to Marmalade?

    - by King Snail
     I am using a third party rendering API, Marmalade, on top of OpenGL code, and I cannot get my matrices correct. One of the API's authors states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third party rendering API? I don't understand what they mean by a 180-degree rotation around Z: is it applied to the view matrix itself, or to the camera before the view matrix is built? Any code would be helpful, thanks.
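
     A minimal sketch of one common reading of that advice (an illustration, not Marmalade's documented behaviour; whether IwGx wants the flip on the view or the projection side is an assumption worth checking): pre-multiply the OpenGL view matrix by a 180-degree rotation about Z, which simply negates eye-space x and y so a y-down, top-left-origin screen sees the same scene.

       import numpy as np

       # RotZ(180 deg): cos = -1, sin = 0, so it is just a sign flip of x and y
       ROT_Z_180 = np.diag([-1.0, -1.0, 1.0, 1.0])

       def to_iwgx_view(gl_view):
           """gl_view: 4x4 OpenGL view matrix, column-vector convention.
           Returns the same view with eye-space x and y negated."""
           return ROT_Z_180 @ gl_view

       # the projection matrix is typically passed through unchanged:
       #   iwgx_view = to_iwgx_view(view); iwgx_proj = proj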

    Read the article

  • On a dual-GPU laptop, is using the discrete GPU ever more power efficient?

    - by Mahmoud Al-Qudsi
     Given a laptop with a dual integrated/discrete GPU configuration, is it ever more power efficient to use the discrete GPU instead of the integrated one? Obviously when writing an email or working on a spreadsheet, the integrated GPU will always use less power. But say you're doing something graphically moderate but not graphics-intensive: is there a point where it actually makes sense to fire up the discrete GPU, not for performance but for power-saving reasons?

     Off the top of my head, I can think of a scenario where the discrete GPU supports hardware decoding of a particular video codec; I'd imagine there is a "price point" where using the GPU saves more energy than decoding fully in software would. But most GPUs, integrated or discrete, pretty much decode only plain H.264. Maybe there is something more complicated, perhaps desktop/windowing animations or a Flash animation on a website (not an embedded Flash video), where the discrete GPU uses enough less power to make up for switching to it?

     I guess this question can be summed up as: can you say beyond doubt that, if you don't care about performance on a laptop with two GPUs, you should always use the integrated GPU for maximum battery life?

    Read the article

  • Inside the Guts of a DSLR

    - by Jason Fitzpatrick
     It's safe to assume that there is a lot more going on inside your modern DSLR than inside your grandfather's Kodak Brownie, but just how much hardware is packed into the small casing of the average DSLR is quite surprising. Over at iFixit they've done a teardown of Nikon's newest prosumer camera, the Nikon D600. The guts of the DSLR are absolutely bursting with hardware and flat-ribbon cable, as seen in the photo above. For a closer look at the individual parts, and to see it torn down further, hit up the link below. Nikon D600 Teardown [iFixit via Extreme Tech]

    Read the article

  • Determine corners of a specific plane in the frustum

    - by Takumi
     I'm working on a game with a 2D view in a 3D world; it's a kind of shoot 'em up. I have a spaceship at the center of the screen, and I want enemies to appear at the borders of my window. The problem is that I don't know how to determine the positions of those borders. For example, my camera is at (0,0,0) looking forward along (0,0,1), and I set my spaceship at (0,0,50). I also know the near plane (1) and the far plane (1000). I think I need to find the four corners of the plane in the frustum whose z position is 50, and with those corners I can determine the borders. But I don't know how to determine x and y.
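
     A minimal worked sketch (the 60-degree vertical field of view and 16:9 aspect ratio are assumptions, since the question only gives the near and far planes): for a symmetric perspective projection, the visible rectangle at distance d in front of the camera has half-height d * tan(fovy / 2) and half-width equal to that times the aspect ratio.

       import math

       def frustum_slice_corners(fovy_deg, aspect, distance):
           """Corners of the view frustum's cross-section at the given distance,
           for a camera at the origin looking down +Z (as in the question)."""
           half_h = distance * math.tan(math.radians(fovy_deg) / 2.0)
           half_w = half_h * aspect
           return [(-half_w, -half_h, distance),
                   ( half_w, -half_h, distance),
                   ( half_w,  half_h, distance),
                   (-half_w,  half_h, distance)]

       # spaceship plane at z = 50 with a 60-degree fovy and 16:9 aspect:
       #   half_h = 50 * tan(30 deg) ~= 28.9, half_w ~= 51.3
       # so enemies can spawn just outside x = +/-51.3 or y = +/-28.9 at z = 50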

    Read the article

  • Ubuntu DVR - what are the options?

    - by Alex D
     According to my research, the best solution for using Ubuntu Server as a DVR system would be ZoneMinder. Are there any alternatives to ZoneMinder out there? I'm not really happy that it only has a web interface to control and view my cameras, and it doesn't seem to have an option to record a video stream non-stop. Am I missing something in its configuration? The thing I'm really disappointed about is that I can't find a way to control my PTZ camera with it. What do manufacturers sell along with their standalone Linux-powered DVR systems?

    Read the article

  • Creating my own kill cam

    - by DalexL
     I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games, which show the actual killing scene from the killer's point of view.

     My thoughts:

       - I can't just start recording when people kill others, because I wouldn't know when to start the recording process; there is no way for me to accurately determine when somebody is about to kill someone else.
       - My only real idea so far is to keep a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10-second delay. That way, every kill cam would be 10 seconds long, and the victim's camera would just be moved to the second world, attached to their killer.

     My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can help!
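
     A minimal sketch of a common alternative to running a delayed duplicate world (an illustration, not an accepted standard): keep a rolling buffer of the last ~10 seconds of lightweight entity snapshots, and when a kill happens, copy that window out and replay it with the camera attached to the killer. The entity fields and the 30 Hz tick rate below are assumptions.

       import collections

       TICK_RATE = 30                       # snapshots per second (assumed)
       BUFFER_SECONDS = 10
       BUFFER_TICKS = TICK_RATE * BUFFER_SECONDS

       class KillCamRecorder:
           """Rolling window of world snapshots; old ticks fall off automatically."""

           def __init__(self):
               self.history = collections.deque(maxlen=BUFFER_TICKS)

           def record_tick(self, entities):
               # store only what the replay needs: id, transform, animation state
               snapshot = {e.id: (e.position, e.orientation, e.anim_state)
                           for e in entities}
               self.history.append(snapshot)

           def extract_killcam(self, killer_id):
               # copy the last 10 seconds; the replay camera follows killer_id
               return {"killer": killer_id, "frames": list(self.history)}

     Usage: call record_tick(world.entities) every game tick; on a kill event, call extract_killcam(killer.id) and play the returned frames back with the camera following the killer entity.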

    Read the article

  • What would be a good game making engine supporting Vector images?

    - by Qqwy
     I want to create a simple platforming game in which you are a square in a wonderful world. I would like this game to be playable in browsers. Basically I am searching for something similar to Flixel, but with the following features:

       - Supports vector graphics.
       - Allows zooming/rotating objects without producing huge amounts of lag as soon as more objects are in use (because I want to rotate the map around the player); in other words, it should preferably zoom the viewport/camera instead of the objects themselves.

     Does an engine like that exist?

    Read the article

  • How do I get started with fog type effects in a first person game?

    - by Dream Lane
     Hey guys, I'm currently using JME3 to learn 3D game development in Java, and I have run into a situation. I would like to add fog effects to my games, but I don't even know where to start. I know how to set the camera's far frustum to limit the render distance, but that simply produces a sharp cutoff. I'd like to fog it up a bit to make it feel more natural. I'm looking for an answer that points me in the correct direction; I'm not looking for specific code snippets or even JME3 engine specifics. I just want to get an idea of how this stuff works in general. Thanks!
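
     A minimal sketch of the distance-fog math most engines implement (an illustration of the general idea, not JME3-specific code; JME3 also ships its own fog support, if I recall correctly, so check its documentation before rolling your own): each fragment is blended toward the fog colour by a factor derived from its distance to the camera, so distant geometry fades out instead of being cut off.

       import math

       def exp2_fog_factor(distance, density):
           """GL_EXP2-style fog: 1.0 right at the camera, falling toward 0.0
           with distance, clamped to [0, 1]."""
           f = math.exp(-(density * distance) ** 2)
           return max(0.0, min(1.0, f))

       def apply_fog(fragment_color, fog_color, distance, density=0.02):
           # lerp toward the fog colour; far fragments end up almost pure fog
           f = exp2_fog_factor(distance, density)
           return tuple(f * c + (1.0 - f) * g
                        for c, g in zip(fragment_color, fog_color))

       # e.g. a fragment 100 units away with density 0.02:
       #   factor = exp(-(0.02 * 100)^2) = exp(-4) ~= 0.018, i.e. ~98% fog colour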

    Read the article

  • Rotate camera with mouse? [closed]

    - by ezio160324
     Once again, using tutorial 10 at NeHe. I want the code

       if (keys[VK_RIGHT])    // Is The Right Arrow Being Pressed?
       {
           yrot -= 1.5f;      // Rotate The Scene To The Left
       }
       if (keys[VK_LEFT])     // Is The Left Arrow Being Pressed?
       {
           yrot += 1.5f;      // Rotate The Scene To The Right
       }

     and

       if (keys[VK_PRIOR])
       {
           lookupdown -= 1.0f;
       }
       if (keys[VK_NEXT])
       {
           lookupdown += 1.0f;
       }

     to be driven by the mouse instead of the left/right arrows and Page Up/Page Down. I tried everything I could think of. Can anyone help? EDIT: I tried using the WM_MOUSEMOVE message; I just could not figure it out. EDIT 2: I am using pure OpenGL to do this, with no window management system or other libraries such as GLUT, GLFW, SDL, SFML, etc. Just OpenGL and GLEW. EDIT: The issue has been solved.

    Read the article

  • Webcam security camera software that runs as a service

    - by hurfdurf
     I've been looking for Windows webcam software that will run as a Windows service without any user login. The goal is to use the webcam as a cheap security camera and log the results to secure networked storage (a Windows share, not FTP). The requirements are:

       - Motion detection
       - Video capture
       - Runs as a service (should start recording immediately after reboot)

     Nice to have: round-robin storage, e.g. a 10 GB limit, with the oldest files overwritten/deleted when space gets low. I've read the other webcam questions but still haven't stumbled across anything suitable. Evaluations thus far:

       Title                    MotionDetect  Service  Snapshots  Video  SpaceLimit  License
       Yawcam                   Yes           Yes      Yes        No     No          GPL
       WebCam ZoneTrigger       Yes           No       Yes        Yes    No          Commercial
       Dorgem                   Yes           No       Yes        Yes    No          GPL
       AbelCam                  Yes           No       Yes        Yes    No          Commercial
       Logitech                 Yes           No       Yes        Yes    No          Paired with camera
       IspyConnect              Yes           No       Yes        Yes    Yes         Free
       SecureCam (SourceForge)  Yes           No       Yes        Yes    No          GPL
       Active WebCam            Yes           Yes(?)   Yes        Yes    Volume      Free Commercial
       WebCam Surveyor          Yes           No       Yes        Yes    No          Commercial
       WebCamsPy                NA            NA       NA         NA     NA          GPL

     Camera: Logitech Webcam Pro 9000. OS: Windows 7 32-bit. WebCamsPy failed to initialize, so it couldn't be tested.

     So far, the contenders: Active WebCam comes the closest, and claims to run as a service, but I haven't been able to get it to record after a cold boot even though a service is running. Yawcam can be set up as a service but doesn't record video. IspyConnect has exactly the type of space limit I want and looks great, but doesn't run as a service (it also seems to be a bit of a CPU hog). Any other suggestions? I'm locked into Windows, so I can't use Linux Motion, which looks almost perfect. Any pointers to rich Windows webcam/motion-detection libraries out there that could easily be turned into a command-line program would also be appreciated.

    Read the article

  • Windows Workflow Foundation - Application-Integrated Debugging

    - by user292103
    I've got a typical n-tier app that has a heavy workflow component to it, so I'm interested in using WWF. There's a server-side piece that runs as a Windows Service, and there's the client-side piece written in Silverlight. To have a really great, seamlessly integrated experience for my users, what I want is to incorporate both a workflow designer and a workflow debugger into the application. Not Visual Studio, but something tightly integrated right into the app itself. Using the Silverlight client, the user (probably more of a power user) can design workflows. But not only that, they can open a debugger from within the Silverlight client, set breakpoints (which are really remote breakpoints back to the Windows Service), catch in-process workflows, and step through them. Wouldn't that be great? I think I have some idea how I might go about incorporating an integrated designer (use a Silverlight diagramming component, save the diagram to .XAML, parse the .XAML to re-create the diagram, etc., etc.) but how on Earth would I do the debugger? I have no idea how I would do that part. Is there some kind of debugging support engineered into WWF?

    Read the article

  • Integrated Security on Reporting Services XML Datasource

    - by Nathan
     Hey all, I am working on setting up my report server to use a web service as an XML data source. I seem to be having authentication issues between the web service and the report when I choose to use integrated security. Here's what I have:

       1) A website with an exposed service. This website is configured to run ONLY with integrated security: all other modes are turned off AND anonymous access is disabled under directory security.
       2) Within the Web.config of the website, the authentication mode is set to Windows.
       3) The report data source is set up as an XML data source. I have the correct URL to the service and have it set to Windows integrated security.

     Since I am making a hop from the browser to the reporting server to the web service, I wonder if I am having a Kerberos issue, but I am not sure. When I try to access the service, I get a 401 error. Here are the IIS log entries I am generating:

       2011-01-07 14:52:12 W3SVC IP_ADDY POST /URL.asmx - 80 - IP_ADDY - 401 1 0
       2011-01-07 14:52:12 W3SVC IP_ADDY POST /URL.asmx - 80 - IP_ADDY - 401 1 5

     Has anyone worked out this issue before? Thanks!

    Read the article

  • Setting up an IP Camera with silverlight

    - by Sean
     I am trying to set up an IP camera and have it work through Silverlight. I am using both Microsoft Expression and Microsoft Visual Studio 2008. I am able to do encoding with a USB-connected webcam, but I cannot find a way to use the encoder to connect to an IP camera attached to our switch. Does anyone have experience setting up an IP camera to encode into the Silverlight framework?

    Read the article

  • Android Camera in Portrait on SurfaceView

    - by Prasanna
     Hello, I tried several things to get the camera preview to show up in portrait on a SurfaceView, but nothing worked. I am testing on a Droid running 2.0.1. I tried: 1) forcing the layout to be portrait with

       this.setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);

     and 2) using

       Camera.Parameters parameters = camera.getParameters();
       parameters.set("orientation", "portrait");
       parameters.setRotation(90);
       camera.setParameters(parameters);

     Is there something else I can try? If this is a bug in Android or the phone, how can I make sure that this is the case, so that I have proof to inform the client? Thanks, Prasanna

    Read the article

  • Android - Fail to connect to camera

    - by teepusink
     Hi, I'm using the Android APIDemo sample code. When I run the CameraPreview example, at first it was giving me an error (http://stackoverflow.com/questions/2556389/android-camera-functionality-howto). I traced that one down and the sample worked for a while. Now it no longer works; it says

       ERROR/AndroidRuntime(2949): java.lang.RuntimeException: Fail to connect to camera service

     What can be causing that? It happens when camera.open() is called. Thanks, Tee

    Read the article

  • get image from iphone, using phonegap camera api

    - by udhaya
     I'm new to Xcode and iPhone apps. I want to select an image on the iPhone (from the camera or the library) and send it to PHP via Ajax. http://wiki.phonegap.com/iPhone:-Camera-API I'm using the PhoneGap framework with the Xcode iPhone SDK, version 3.1.x. On clicking the button it calls the function with parameter 0 or 1, but it does not initialize the camera or display the library. I checked the simulator's virtual phone; there is no icon for the camera, but the pictures album is there. I used the same code as in the link above. What should I do, and what and how should I check? Are there any other functions for getting photos using PhoneGap?

    Read the article
