Search Results

Search found 2495 results on 100 pages for 'camera hacks'.

Page 29/100 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • How can I display the camera view in the main window in real time?

    - by Mongrel Warfare
    I'm trying to make an augmented reality application with a waypoint structure, like Yelp, and I'm wondering how to set up my main view so that it displays the camera view on the whole screen. I've heard of using the UIImagePickerController class, but I'm unsure how to set it up so that it doesn't actually take a picture and just stays in live view mode. Any help would be appreciated, thanks!

    Read the article

  • How do I get the data from a surveillance camera into storage I can stream from?

    - by radbyx
    Hi, my sister's house was robbed on Christmas evening :( I talked with her about building a surveillance system for her. The idea is to have a system that detects intruders, then sends an SMS to you while streaming the video to a private website. The hard part: how and where do I store the data from the camera so that it's streamable? I think I can manage the streaming, the website and the SMS server, but I need the data side (the foundation). Thanks, any help is much appreciated.

    Read the article

  • Android - How to prevent the phone screen from turning on when the volume or camera key is pressed?

    - by 2Real
    I have an activity that shows up when the phone screen goes to sleep/turns off, i.e. turns black. For some reason, the phone turns on when the volume buttons or the camera buttons are pressed. By "turns on", I mean the screen wakes up and comes back from the black-screen state. I've tried overriding dispatchKeyEvent(KeyEvent event), and the buttons are disabled in the activity, but they still wake up the phone.
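
    A minimal sketch of the dispatchKeyEvent approach mentioned above, assuming the activity shown over the black screen is your own (the class name LockOverlayActivity is made up). Note that on some handsets the framework wakes the screen before the key event ever reaches an activity, so consuming the event here may still not be enough:

        import android.app.Activity;
        import android.view.KeyEvent;

        public class LockOverlayActivity extends Activity {
            // Swallow the hardware keys so they never reach the default handling.
            @Override
            public boolean dispatchKeyEvent(KeyEvent event) {
                switch (event.getKeyCode()) {
                    case KeyEvent.KEYCODE_VOLUME_UP:
                    case KeyEvent.KEYCODE_VOLUME_DOWN:
                    case KeyEvent.KEYCODE_CAMERA:
                    case KeyEvent.KEYCODE_FOCUS:
                        return true; // consumed, not forwarded to super
                    default:
                        return super.dispatchKeyEvent(event);
                }
            }
        }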

    Read the article

  • Twitter 2 for Android crashes every time I try uploading multiple photos [closed]

    - by Hazz
    Hello, I'm using the new Twitter 2 on Android 2.1. Whenever I hit the button which enables me to upload multiple photos in a single tweet, I always get the error "The application Camera (process com.sonyericsson.camera) has stopped unexpectidly. Please try again". However, uploading a single photo using the camera button in Twitter have no problem, it works. My phone is Sony Ericsson x10 mini pro. I tried signing out and back in, same result. Anything I can do to fix this? This is the log info I got using Log Collector: 02-23 15:05:57.328 I/ActivityManager( 1240): Starting activity: Intent { act=com.twitter.android.post.status cmp=com.twitter.android/.PostActivity } 02-23 15:05:57.338 D/PhoneWindow(15095): couldn't save which view has focus because the focused view com.android.internal.policy.impl.PhoneWindow$DecorView@45726938 has no id. 02-23 15:05:57.688 I/ActivityManager( 1240): Displayed activity com.twitter.android/.PostActivity: 340 ms (total 340 ms) 02-23 15:05:59.018 I/ActivityManager( 1240): Starting activity: Intent { act=android.intent.action.PICK typ=vnd.android.cursor.dir/image cmp=com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity } 02-23 15:05:59.038 I/ActivityManager( 1240): Start proc com.sonyericsson.camera for activity com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity: pid=15113 uid=10057 gids={1006, 1015, 3003} 02-23 15:05:59.128 I/dalvikvm(15113): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38) 02-23 15:05:59.158 I/dalvikvm(15113): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=50) 02-23 15:05:59.448 I/ActivityManager( 1240): Displayed activity com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity: 423 ms (total 423 ms) 02-23 15:05:59.458 W/dalvikvm(15113): threadid=15: thread exiting with uncaught exception (group=0x4001e160) 02-23 15:05:59.458 E/AndroidRuntime(15113): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception 02-23 15:05:59.468 E/AndroidRuntime(15113): java.lang.RuntimeException: An error occured while executing doInBackground() 02-23 15:05:59.468 E/AndroidRuntime(15113): at android.os.AsyncTask$3.done(AsyncTask.java:200) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask.run(FutureTask.java:137) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.lang.Thread.run(Thread.java:1096) 02-23 15:05:59.468 E/AndroidRuntime(15113): Caused by: java.lang.IllegalArgumentException: Unsupported MIME type. 
02-23 15:05:59.468 E/AndroidRuntime(15113): at com.sonyericsson.album.grid.GridActivity$AlbumTask.doInBackground(GridActivity.java:202) 02-23 15:05:59.468 E/AndroidRuntime(15113): at com.sonyericsson.album.grid.GridActivity$AlbumTask.doInBackground(GridActivity.java:124) 02-23 15:05:59.468 E/AndroidRuntime(15113): at android.os.AsyncTask$2.call(AsyncTask.java:185) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) 02-23 15:05:59.468 E/AndroidRuntime(15113): ... 4 more 02-23 15:05:59.628 E/SemcCheckin(15113): Get crash dump level : java.io.FileNotFoundException: /data/semc-checkin/crashdump 02-23 15:05:59.628 W/ActivityManager( 1240): Unable to start service Intent { act=com.sonyericsson.android.jcrashcatcher.action.BUGREPORT_AUTO cmp=com.sonyericsson.android.jcrashcatcher/.JCrashCatcherService (has extras) }: not found 02-23 15:05:59.648 I/Process ( 1240): Sending signal. PID: 15113 SIG: 3 02-23 15:05:59.648 I/dalvikvm(15113): threadid=7: reacting to signal 3 02-23 15:05:59.778 I/dalvikvm(15113): Wrote stack trace to '/data/anr/traces.txt' 02-23 15:06:00.388 E/SemcCheckin( 1673): Get Crash Level : java.io.FileNotFoundException: /data/semc-checkin/crashdump 02-23 15:06:01.708 I/DumpStateReceiver( 1240): Added state dump to 1 crashes 02-23 15:06:02.008 D/iddd-events( 1117): Registering event com.sonyericsson.idd.probe.android.devicemonitor::ApplicationCrash with 4314 bytes payload. 02-23 15:06:06.968 D/dalvikvm( 1673): GC freed 661 objects / 126704 bytes in 124ms 02-23 15:06:11.928 D/dalvikvm( 1379): GC freed 19753 objects / 858832 bytes in 84ms 02-23 15:06:13.038 I/Process (15113): Sending signal. PID: 15113 SIG: 9 02-23 15:06:13.048 I/WindowManager( 1240): WIN DEATH: Window{4596ecc0 com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity paused=false} 02-23 15:06:13.048 I/ActivityManager( 1240): Process com.sonyericsson.camera (pid 15113) has died. 02-23 15:06:13.048 I/WindowManager( 1240): WIN DEATH: Window{459db5e8 com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity paused=false} 02-23 15:06:13.078 I/UsageStats( 1240): Unexpected resume of com.twitter.android while already resumed in com.sonyericsson.camera 02-23 15:06:13.098 W/InputManagerService( 1240): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@456e7168 02-23 15:06:21.278 D/dalvikvm( 1745): GC freed 2032 objects / 410848 bytes in 60ms

    Read the article

  • 3D Graphics with XNA Game Studio 4.0: bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • Looking for a kiosk-style / camera-store-easy photo memory card to CD/DVD burning program for a Windows 7 notebook? For a non-techie user.

    - by Rob
    I'm looking for a kiosk-style, camera-shop-easy program for burning photos from a memory card to CD/DVD, for a non-techie user: the kind of system you see in a camera shop, e.g. Jessops or Boots stores in the UK. This is for my Dad, who is adept at general PC usage as a notebook owner but would prefer something fairly simple. The task is burning photos to CD/DVD in their original .jpg file form, i.e. NOT as a CD or DVD video or slideshow. I'm guessing this might be possible in Picasa, but all the options available might be superfluous and confusing. He could probably learn to use that, but I thought I would try simpler options first. I'm looking for something that guides the user through the steps of the process, wizard style. Any suggestions? Platform: HP Windows 7 Home notebook with CD/DVD burner and SD memory card slot.

    Read the article

  • Is there a way to test HTTP Live Streaming via an iSight camera?

    - by bpapa
    I'm working on an iPhone app that will use HTTP Live Streaming. Using Apple's provided tools (particularly mediafilesegmenter), I'm able to successfully segment and serve an archived video. Now I want to test the live streaming side. I don't own any sort of camcorder; I just have the iSight built into my Mac. Is there a way to leverage this camera to test out live streaming? Run the iSight from the command line, maybe? If so, I need a port number for mediastreamsegmenter.

    Read the article

  • Optimal preferences for prefix queries with Oracle catalog (CTXCAT) index

    - by nw
    The documentation for Oracle Text gives this example of a prefix/substring preference setting for context and catalog indexes:

        begin
          ctx_ddl.create_preference('mywordlist', 'BASIC_WORDLIST');
          ctx_ddl.set_attribute('mywordlist','PREFIX_INDEX','TRUE');
          ctx_ddl.set_attribute('mywordlist','PREFIX_MIN_LENGTH', '3');
          ctx_ddl.set_attribute('mywordlist','PREFIX_MAX_LENGTH', '4');
          ctx_ddl.set_attribute('mywordlist','SUBSTRING_INDEX', 'YES');
        end;

    What I need to know is whether the substring_index attribute is necessary if I only ever issue prefix searches, such as:

        SELECT title FROM auction WHERE CATSEARCH(title, 'cam*', '') > 0;

        TITLE
        ---------------
        CANON CAMERA
        FUJI CAMERA
        NIKON CAMERA
        OLYMPUS CAMERA
        PENTAX CAMERA
        SONY CAMERA

        6 rows selected

    Read the article

  • Where can I get an example app that streams (live) video and audio from the camera on Nokia?

    - by Ole Jak
    Where can I get an example of streaming (live) video and audio from the camera for Nokia (the 5800, for example)? Suppose I want to create a live video streaming service app, so I'll have some cool server at the back end, and I know how to do that part. Suppose I have a stand-alone app for PCs and now I want to move on to mobile devices. I decided to start with Nokia because I have one and can do with it what I want (a Nokia 5800 XpressMusic). So I want to see a sample app that grabs the audio and video streams from the phone, synchronizes them, and sends a live stream to the server. I need any open-source sample (Java, C or C++) that will do this or something like it. Where can I get one?

    Read the article

  • How do I forward map a point in OpenCV using mapx & mapy?

    - by Charles
    Ultimately, I'm trying to determine the 3D location of a point I've identified in two cameras (using OpenCV 2.3.1, Windows 7, C++). I'm having trouble locating the 2D point for each camera from its mapx & mapy. I could not find in Bradski & Kaehler's OpenCV book how to do it.

    PROCESS
    1. calibrateCamera for each of the pair
    2. stereoCalibrate
    3. stereoRectify
    4. detect the blob I want to locate in 3D on each camera in 2D
    5. initUndistortRectifyMap for each camera
    6. Eventually, perspectiveTransform

    PROBLEM with 5: I have the x and y location of the point in the distorted and unrectified image for each camera (from #4), but I don't know how to get the undistorted and rectified x and y for each camera from the two maps initUndistortRectifyMap creates for each camera. I don't want to remap the whole image since I only want to learn the 3D location of one object for each frame.

    QUESTION: How do I forward map a point (get the undistorted and rectified x and y) with its two maps? Thanks for any help.

    Read the article

  • Where can I get an example app that streams (live) video and audio from the camera on Android?

    - by Ole Jak
    Where can I get an example of streaming (live) video and audio from the camera for Android? Suppose I want to create a live video streaming service app, so I'll have some cool server at the back end, and I know how to do that part. Suppose I have a stand-alone app for PCs and now I want to move on to mobile devices. So I want to see a sample app that grabs the audio and video streams from the phone, synchronizes them, encodes them somehow, and sends a live stream to the server. I need any open-source sample that will do this or something like it. Where can I get one?

    Read the article

  • What can I read from the iPad Camera Connection Kit?

    - by HELVETICADE
    I'm building a small controller device that I'd like to partner with a computer. I've settled on using OSC out from my custom-built hardware and am pretty satisfied with what I can get from WOscLib. Two goals I'd like to achieve are portability and a very good ratio between battery and computing power, and this has lured me towards using iPhoneOS to accomplish my goals. I think the iPad would suit my needs perfectly, except that using wifi to broadcast OSC out from my device requires a third device and would destroy the goal of portability, whilst also introducing potential latency and stability headaches. My question is pretty simple: can I push my OSC out FROM my controller TO an iPad via USB and the Camera Connection Kit? If I could accomplish this, the two major goals of my project would be fulfilled very nicely. This seems like it should be a simple little question, but researching it obsessively over the past few weeks has left me almost more uncertain than if I had done no research at all. I'd really like some more confidence before I go down this route, and it seems like it should be possible. Any insight would be very, very appreciated.

    Read the article

  • How to read time from recorded surveillance camera video?

    - by stressed_geek
    I have a problem where I have to read the time of recording from video recorded by a surveillance camera. The time shows up in the top-left area of the video. Below is a link to a screen grab of the area which shows the time. Also, the digit color (white/black) keeps changing over the duration of the video. http://i55.tinypic.com/2j5gca8.png Please point me in the right direction for approaching this problem. I am a Java programmer, so I would prefer an approach through Java. EDIT: Thanks unhillbilly for the comment. I had looked at the Ron Cemer OCR library and its performance is much below our requirement. Since the OCR performance is less than desired, I was planning to build a character set using screen grabs of all the digits, and to use some image/pixel comparison library to compare each frame's time against the character set, which will give a probabilistic result after comparison. So I was looking for a good image comparison library (I would be OK with a non-Java library which I can run from the command line). Also, any advice on the above approach would be really helpful.
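
    A rough sketch of the template-comparison idea described in the edit, assuming the timestamp region has already been cropped into one BufferedImage per digit cell and that a binarised template exists for each of the digits 0-9 (all names and sizes here are made-up placeholders, not an existing library):

        import java.awt.image.BufferedImage;

        public class DigitMatcher {

            // Threshold a digit cell on luminance, then normalise polarity so the
            // stroke pixels are always the minority; this sidesteps the white/black
            // colour flips mentioned above.
            static boolean[] binarise(BufferedImage cell) {
                int w = cell.getWidth(), h = cell.getHeight();
                boolean[] bits = new boolean[w * h];
                int onCount = 0;
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        int rgb = cell.getRGB(x, y);
                        int luma = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                        boolean on = luma > 128;
                        bits[y * w + x] = on;
                        if (on) onCount++;
                    }
                }
                if (onCount > bits.length / 2) {          // digit drawn dark-on-light:
                    for (int i = 0; i < bits.length; i++) // invert so "true" means stroke
                        bits[i] = !bits[i];
                }
                return bits;
            }

            // Compare one cell against every template and return the digit with the
            // highest pixel agreement -- the "probabilistic result" from the question.
            static int bestMatch(BufferedImage cell, boolean[][] digitTemplates) {
                boolean[] probe = binarise(cell);
                int best = -1, bestScore = -1;
                for (int digit = 0; digit < digitTemplates.length; digit++) {
                    int score = 0;
                    int n = Math.min(probe.length, digitTemplates[digit].length);
                    for (int i = 0; i < n; i++) {
                        if (probe[i] == digitTemplates[digit][i]) score++;
                    }
                    if (score > bestScore) { bestScore = score; best = digit; }
                }
                return best;
            }
        }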

    Read the article

  • Is it possible to send OSC commands to an iPad via the Camera Connection Kit?

    - by HELVETICADE
    I'm building a small controller device that I'd like to partner with a computer. I've settled on using OSC out from my custom-built hardware and am pretty satisfied with what I can get from WOscLib. Two goals I'd like to achieve are portability and a very nice ratio between battery and computing power, and this has lured me towards using iPhoneOS to accomplish my goals. I think the iPad would suit my needs perfectly, except that using wifi to broadcast OSC out from my device requires that device to be connected to a third device with a wifi chip, and this would destroy the goal of portability, whilst also introducing potential latency and stability headaches. My question is pretty simple: can I push OSC commands FROM my controller TO an iPad via USB and the Camera Connection Kit? If I could accomplish this, the two major goals of my project would be fulfilled very nicely. This seems like it should be a simple little question, but researching it obsessively over the past few weeks has left me almost more uncertain than if I had done no research at all. I'd really like some more confidence before I go down this route, and it seems like it should be possible. Any insight would be very, very appreciated.

    Read the article

  • Implementing Scrolling Background in LibGDX game

    - by Vishal Kumar
    I am making a game in LibGDX. After working on it for a whole day, I could not come up with a solution for a scrolling background. My screen width and height are 960 x 540, and I have a PNG image of 1024 x 540. I want to scroll the background in such a way that it continuously moves back with the camera's x, following the camera. I tried many alternatives (drawing the image twice, doing a lot of calculations, and so on) but finally ended up with this dirty code:

        if (bg_x2 >= -Assets.bg.getRegionWidth()) {
            // calculated it to position bg .. camera was initially at 15
            bg_x2 = (16 - 4 * camera.position.x);
            bg_x1 = bg_x2 + Assets.bg.getRegionWidth();
        } else {
            bg_x1 = (16 - 4 * camera.position.x) % 224; // this 16 is not proper
            // I think there can be other ways
            bg_x2 = bg_x1 + Assets.bg.getRegionWidth();
        }
        // drawing twice
        batch.draw(Assets.bg, bg_x2, bg_y);
        batch.draw(Assets.bg, bg_x1, bg_y);

    The simple idea is a scrolling background slightly larger than the screen, so that it looks smooth. Even after a lot of searching, I didn't find an online solution. Please help.
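
    A minimal sketch of the usual wrap-around approach, assuming an OrthographicCamera, that the batch draws in world coordinates (batch.setProjectionMatrix(camera.combined)), and the 1024-wide Assets.bg region described above; the class and parameter names are illustrative, not from the game:

        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;
        import com.badlogic.gdx.graphics.g2d.TextureRegion;

        public class ScrollingBackground {
            private final TextureRegion region; // e.g. Assets.bg, 1024 x 540
            private final float width;

            public ScrollingBackground(TextureRegion region) {
                this.region = region;
                this.width = region.getRegionWidth();
            }

            public void render(SpriteBatch batch, OrthographicCamera camera, float bgY) {
                // Left edge of what the camera can currently see.
                float viewLeft = camera.position.x - camera.viewportWidth / 2f;
                // Start of the image copy that contains viewLeft, i.e. viewLeft
                // snapped down to a multiple of the image width.
                float firstX = (float) Math.floor(viewLeft / width) * width;
                // Two copies side by side always cover a 960-wide viewport,
                // because the image (1024) is wider than the screen.
                batch.draw(region, firstX, bgY);
                batch.draw(region, firstX + width, bgY);
            }
        }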

    Read the article

  • Video Recording Not Working in ICS

    - by Nirav Ranpara
    I have implement code Record video in Android Phone . This code is working in 2.2 , 2.3 . not in ICS But when I checked in ICS code is not working ? here I posted code and xml file. videorecord.java import java.io.File; import java.io.IOException; import android.app.Activity; import android.app.AlertDialog; import android.content.Context; import android.content.DialogInterface; import android.content.Intent; import android.content.SharedPreferences; import android.hardware.Camera; import android.media.CamcorderProfile; import android.media.MediaRecorder; import android.os.Bundle; import android.os.CountDownTimer; import android.os.Environment; import android.util.Log; import android.view.Display; import android.view.KeyEvent; import android.view.SurfaceHolder; import android.view.SurfaceView; import android.view.View; import android.widget.EditText; import android.widget.FrameLayout; import android.widget.ImageView; import android.widget.LinearLayout; import android.widget.TextView; import android.widget.Toast; public class videorecord extends Activity{ SharedPreferences.Editor pre; String filename; CountDownTimer t; private Camera myCamera; private MyCameraSurfaceView myCameraSurfaceView; private MediaRecorder mediaRecorder; Integer cnt=0; LinearLayout myButton; TextView myButton1; SurfaceHolder surfaceHolder; boolean recording; private TextView txtcount; private ImageView btnplay; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); recording = false; setContentView(R.layout.videorecord); init(); myCamera = getCameraInstance(); if(myCamera == null){ } myCameraSurfaceView = new MyCameraSurfaceView(this, myCamera); FrameLayout myCameraPreview = (FrameLayout)findViewById(R.id.videoview); Display display = getWindowManager().getDefaultDisplay(); int width = display.getWidth(); int height = display.getHeight(); myCameraSurfaceView.setLayoutParams(new LinearLayout.LayoutParams(width, height-60)); myCameraPreview.addView(myCameraSurfaceView); myButton = (LinearLayout)findViewById(R.id.mybutton); btnplay.setOnClickListener(myButtonOnClickListener); } private void init() { txtcount = (TextView) findViewById(R.id.txtcounter); //myButton1 = (TextView) findViewById(R.id.mybutton1); btnplay = (ImageView)findViewById(R.id.btnplay); t = new CountDownTimer( Long.MAX_VALUE , 1000) { @Override public void onTick(long millisUntilFinished) { cnt++; String time = new Integer(cnt).toString(); long millis = cnt; int seconds = (int) (millis / 60); int minutes = seconds / 60; seconds = seconds % 60; txtcount.setText(String.format("%d:%02d:%02d", minutes, seconds,millis)); } @Override public void onFinish() { } }; } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { if ((keyCode == KeyEvent.KEYCODE_BACK)) { if(recording) { new AlertDialog.Builder(videorecord.this).setTitle("Do you want to save Video ?") .setPositiveButton("OK", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { filename(); //finish(); } }).setNegativeButton("Cancle", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { // TODO Auto-generated method stub } }).show(); } else { if ((keyCode == KeyEvent.KEYCODE_BACK)) { //Intent homeIntent= new Intent(Intent.ACTION_MAIN); //homeIntent.addCategory(Intent.CATEGORY_HOME); //homeIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); //startActivity(homeIntent); //this.finishActivity(1); finish(); } //moveTaskToBack(true); // finish(); return 
super.onKeyDown(keyCode, event); } } else { // Toast.makeText(getApplicationContext(), "asd", Toast.LENGTH_LONG).show(); android.os.Process.killProcess(android.os.Process.myPid()) ; } return super.onKeyDown(keyCode, event); } ImageView.OnClickListener myButtonOnClickListener = new ImageView.OnClickListener(){ public void onClick(View v) { if(recording){ Log.e("Record error", "error in recording ."); mediaRecorder.stop(); t.cancel(); filename(); releaseMediaRecorder(); }else{ releaseCamera(); Log.e("Record Stop error", "error in recording ."); // if(!prepareMediaRecorder()){ prepareMediaRecorder(); finish(); } mediaRecorder.start(); recording = true; // myButton1.setText("STOP Recording"); // btnplay.setImageResource(android.R.drawable.ic_media_pause); btnplay.setImageResource(R.drawable.stoprec); t.start(); } }}; private Camera getCameraInstance(){ Camera c = null; try { c = Camera.open(); } catch (Exception e){ } return c; } private void filename() { AlertDialog.Builder alert = new AlertDialog.Builder(this); alert.setTitle("Save Video"); alert.setMessage("Enter File Name"); final EditText input = new EditText(this); alert.setView(input); alert.setPositiveButton("Ok", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { if(input.getText().length()>=1) { filename = input.getText().toString(); File sdcard = new File(Environment.getExternalStorageDirectory() + "/VideoRecord"); File from = new File(sdcard,"null.mp4"); File to = new File(sdcard,filename+".mp4"); from.renameTo(to); SharedPreferences sp = videorecord.this.getSharedPreferences("data", MODE_WORLD_WRITEABLE); pre = sp.edit(); pre.clear(); pre.commit(); pre.putString("lastvideo", filename+".mp4"); pre.commit(); //btnplay.setImageResource(android.R.drawable.ic_media_play); btnplay.setImageResource(R.drawable.startrec); // Intent intent = new Intent(videorecord.this,StopVidoWatch_Activity.class); // startActivity(intent); Intent myIntent = new Intent(getApplicationContext(), StopVidoWatch_Activity.class).setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); startActivity(myIntent); } else { filename(); } } }); alert.setNegativeButton("Cancel", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { // Intent intent = new Intent(videorecord.this,StopVidoWatch_Activity.class); // startActivity(intent); File file = new File(Environment.getExternalStorageDirectory() + "/VideoRecord/null.mp4"); //boolean deleted = file.delete(); file.delete(); finish(); } }); alert.show(); } private boolean prepareMediaRecorder(){ myCamera = getCameraInstance(); mediaRecorder = new MediaRecorder(); myCamera.unlock(); mediaRecorder.setCamera(myCamera); mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER); mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH)); File folder = new File(Environment.getExternalStorageDirectory() + "/VideoRecord"); boolean success = false; if (!folder.exists()) { success = folder.mkdir(); } if (!success) { } else { } mediaRecorder.setOutputFile("/sdcard/VideoRecord/"+filename+".mp4"); mediaRecorder.setMaxDuration(60000); mediaRecorder.setMaxFileSize(5000000); Display display = getWindowManager().getDefaultDisplay(); int width = display.getHeight(); int height = display.getWidth(); String s = new String(); s= s.valueOf(width); String s1 = new String(); s1= s1.valueOf(height); // Toast.makeText(videorecord.this, "Width : " + s , 
Toast.LENGTH_LONG).show(); // Toast.makeText(videorecord.this, "Height : " + s1 , Toast.LENGTH_LONG).show(); mediaRecorder.setVideoSize(height, width); mediaRecorder.setPreviewDisplay(myCameraSurfaceView.getHolder().getSurface()); try { mediaRecorder.prepare(); } catch (IllegalStateException e) { releaseMediaRecorder(); return false; } catch (IOException e) { releaseMediaRecorder(); return false; } return true; } @Override protected void onPause() { super.onPause(); releaseMediaRecorder(); releaseCamera(); } private void releaseMediaRecorder() { if (mediaRecorder != null) { mediaRecorder.reset(); mediaRecorder.release(); mediaRecorder = null; myCamera.lock(); } } private void releaseCamera(){ if (myCamera != null){ myCamera.release(); myCamera = null; } } public class MyCameraSurfaceView extends SurfaceView implements SurfaceHolder.Callback{ private SurfaceHolder mHolder; private Camera mCamera; public MyCameraSurfaceView(Context context, Camera camera) { super(context); mCamera = camera; mHolder = getHolder(); mHolder.addCallback(this); mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } public void surfaceChanged(SurfaceHolder holder, int format, int weight, int height) { if (mHolder.getSurface() == null){ return; } try { mCamera.stopPreview(); } catch (Exception e){ } try { mCamera.setPreviewDisplay(mHolder); mCamera.startPreview(); } catch (Exception e){ } } public void surfaceCreated(SurfaceHolder holder) { try { mCamera.setPreviewDisplay(holder); mCamera.startPreview(); } catch (IOException e) { } } public void surfaceDestroyed(SurfaceHolder holder) { } } } videorecord.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <FrameLayout android:layout_width="fill_parent" android:layout_height="fill_parent" > <FrameLayout android:id="@+id/videoview" android:layout_width="fill_parent" android:layout_height="fill_parent"></FrameLayout> <LinearLayout android:id="@+id/mybutton" android:layout_width="fill_parent" android:layout_marginBottom="0dip" android:layout_height="wrap_content" android:orientation="horizontal" android:layout_weight="0" > <!-- <TextView android:text="START Recording" android:id="@+id/mybutton1" android:layout_height="wrap_content" android:layout_width="wrap_content" style="@style/savestyle" android:layout_weight="1" android:gravity="left" > </TextView> --> <ImageView android:layout_height="wrap_content" android:id="@+id/btnplay" android:padding="5dip" android:background="#A0000000" android:textColor="#ffffffff" android:layout_width="wrap_content" android:src="@drawable/startrec" /> </LinearLayout> <TextView android:text="00:00:00" android:id="@+id/txtcounter" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="right|bottom" android:padding="5dip" android:background="#A0000000" android:textColor="#ffffffff" /> </FrameLayout> <RelativeLayout android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="@color/bgcolor" > <LinearLayout android:layout_above="@+id/mybutton" android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="fill_parent" > </LinearLayout> </RelativeLayout> </LinearLayout>

    Read the article

  • Away3D & Directional Light w/ Rotating Meshes

    - by seethru
    This is likely a stupid error but I can't seem to find what I've done wrong. I've got a simple scene with 10 cylinders rotating at a default speed. If I grab one of these cylinders I can rotate it in the opposite direction or at a greater speed. I have a single directional light in the scene. It would appear that the directional light is only calculated at initialization and not on further frames. The shadow created by the light rotates with the cylinder giving the impression that the light is rotating when it isn't. Camera & Light Initialization _view = new View3D(); addChild(_view); _view.antiAlias = 4; _view.backgroundColor = 0xFFFFFF; _view.camera.z = -850; _view.camera.y = 0; _view.camera.x = 0; _view.camera.lookAt(new Vector3D()); _view.camera.lens = new PerspectiveLens(15); _view.mousePicker = PickingType.RAYCAST_BEST_HIT; _light = new DirectionalLight(); _light.z = -850; _light.direction = new Vector3D(1, 1, 1); _light.color = 0xFFFFFF; _light.ambient = 0.1; _light.diffuse = 0.7; _view.scene.addChild(_light); Mesh and Material creation var material:TextureMaterial = new TextureMaterial(createPow2Texture(sprite, _colors[i]) , true, false, true); material.animateUVs = true; material.lightPicker = _lightPicker; cylinder = new Mesh(new CylinderGeometry(radius, radius, 13, 70, 1, true, true), material); cylinder.subMeshes[0].scaleU = spriteWidth / sprite.width; cylinder.y = y; cylinder.mouseEnabled = true; cylinder.pickingCollider = PickingColliderType.AS3_BEST_HIT; cylinder.addEventListener(MouseEvent3D.MOUSE_OVER, onMouseOverMesh); cylinder.addEventListener(MouseEvent3D.MOUSE_MOVE, onMouseOverMesh); cylinder.addEventListener(MouseEvent3D.MOUSE_OUT, onMouseOutMesh); _cylinders.push(cylinder); Frame private function onEnterFrame(event:Event):void { for each (var mesh:Mesh in _cylinders) { if (mesh == _mouseOverMesh) continue; mesh.rotationY += 0.25; } _view.render(); }

    Read the article

  • AndEngine doesn't fill an image correctly on my device

    - by Guille
    I'm learning a little bit about AndEngine. I'm trying to follow a tutorial, but I can't get the background image to fill the screen correctly; it just appears on one side of my screen. My device is a Galaxy Nexus (1270x768, I think...). The image is 800x480. The code is:

        public EngineOptions onCreateEngineOptions() {
            camera = new Camera(0, 0, 800, 480);
            EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED,
                    new FillResolutionPolicy(), this.camera);
            engineOptions.getAudioOptions().setNeedsMusic(true).setNeedsSound(true);
            engineOptions.getRenderOptions().setMultiSampling(true);//.getConfigChooserOptions().setRequestedMultiSampling(true);
            engineOptions.setWakeLockOptions(WakeLockOptions.SCREEN_ON);
            return engineOptions;
        }

    I have been trying several values for the camera, but it doesn't fill the whole screen. Why?
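
    For what it's worth, a hedged sketch (not the poster's code) of one common fix: size the background sprite to the camera rather than to the texture's native 800x480, so it covers the whole 800x480 camera area regardless of the physical screen. The CAMERA_WIDTH/CAMERA_HEIGHT constants and backgroundTextureRegion are assumptions, and this assumes the GLES2 branch where (0, 0) is the top-left corner. Swapping FillResolutionPolicy for RatioResolutionPolicy(800, 480) is the other common tweak, if letterboxing is acceptable.

        // Inside onCreateScene(), assuming CAMERA_WIDTH = 800, CAMERA_HEIGHT = 480
        // and that backgroundTextureRegion was loaded in onCreateResources().
        Sprite background = new Sprite(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT,
                backgroundTextureRegion, this.getVertexBufferObjectManager());
        scene.setBackground(new SpriteBackground(background));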

    Read the article

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light, with falloff and attenuation. It seems to be working OK except I have a bit of a space problem. That is, whenever I move the camera the light moves to maintain the same relative position, rather than changing with the camera. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like here: My camera rotating around a point inside a box. The light is fixed in the centre up and its "look at" point in a fixed position in front of it. As you can see, as the camera rotates around the origin (always looking at the centre), so don't think the box is rotating (!). The lighting follows it around. To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space): // Compute eye-space light position. Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition; MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition); // Compute eye-space light direction vector. Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition); MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection); MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection); Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here are the vertex and pixel shaders: /////////////////////////////////////////////////// // Vertex Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Uniform Buffer Structures /////////////////////////////////////////////////// // Camera. layout (std140) uniform Camera { mat4 Camera_View; mat4 Camera_ViewInverseTranspose; mat4 Camera_Projection; }; // Matrices per model. layout (std140) uniform Model { mat4 Model_World; mat4 Model_WorldView; mat4 Model_WorldViewInverseTranspose; mat4 Model_WorldViewProjection; }; // Spotlight. 
layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Streams (per vertex) /////////////////////////////////////////////////// layout(location = 0) in vec3 attrib_Position; layout(location = 1) in vec3 attrib_Normal; layout(location = 2) in vec3 attrib_Tangent; layout(location = 3) in vec3 attrib_BiNormal; layout(location = 4) in vec2 attrib_Texture; /////////////////////////////////////////////////// // Output streams (per vertex) /////////////////////////////////////////////////// out vec3 attrib_Fragment_Normal; out vec4 attrib_Fragment_Position; out vec2 attrib_Fragment_Texture; out vec3 attrib_Fragment_Light; out vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main() { // Transform normal into eye space attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz; // Transform vertex into eye space (world * view * vertex = eye) vec4 position = Model_WorldView * vec4(attrib_Position, 1.0); // Compute vector from eye space vertex to light (light is in eye space already) attrib_Fragment_Light = Light_Position - position.xyz; // Compute vector from the vertex to the eye (which is now at the origin). attrib_Fragment_Eye = -position.xyz; // Output texture coord. attrib_Fragment_Texture = attrib_Texture; // Compute vertex position by applying camera projection. gl_Position = Camera_Projection * position; } and the pixel shader: /////////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////////// #version 420 /////////////////////////////////////////////////// // Samplers /////////////////////////////////////////////////// uniform sampler2D Map_Diffuse; /////////////////////////////////////////////////// // Global Uniforms /////////////////////////////////////////////////// // Material. layout (std140) uniform Material { vec4 Material_Ambient_Colour; vec4 Material_Diffuse_Colour; vec4 Material_Specular_Colour; vec4 Material_Emissive_Colour; float Material_Shininess; float Material_Strength; }; // Spotlight. layout (std140) uniform OmniLight { float Light_Intensity; vec3 Light_Position; vec3 Light_Direction; vec4 Light_Ambient_Colour; vec4 Light_Diffuse_Colour; vec4 Light_Specular_Colour; float Light_Attenuation_Min; float Light_Attenuation_Max; float Light_Cone_Min; float Light_Cone_Max; }; /////////////////////////////////////////////////// // Input streams (per vertex) /////////////////////////////////////////////////// in vec3 attrib_Fragment_Normal; in vec3 attrib_Fragment_Position; in vec2 attrib_Fragment_Texture; in vec3 attrib_Fragment_Light; in vec3 attrib_Fragment_Eye; /////////////////////////////////////////////////// // Result /////////////////////////////////////////////////// out vec4 Out_Colour; /////////////////////////////////////////////////// // Main /////////////////////////////////////////////////// void main(void) { // Compute N dot L. vec3 N = normalize(attrib_Fragment_Normal); vec3 L = normalize(attrib_Fragment_Light); vec3 E = normalize(attrib_Fragment_Eye); vec3 H = normalize(L + E); float NdotL = clamp(dot(L,N), 0.0, 1.0); float NdotH = clamp(dot(N,H), 0.0, 1.0); // Compute ambient term. 
vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour; // Diffuse. vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL; // Specular. float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength; vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity; // Light attenuation (so we don't have to use 1 - x, we step between Max and Min). float d = length(-attrib_Fragment_Light); float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d); // Adjust attenuation based on light cone. float LdotS = dot(-L, Light_Direction), CosI = Light_Cone_Min - Light_Cone_Max; attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0); // Final colour. Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation; }

    Read the article

  • Fitting a rectangle into screen with XNA

    - by alecnash
    I am drawing a rectangle with primitives in XNA. The width is:

        width = GraphicsDevice.Viewport.Width

    and the height is:

        height = GraphicsDevice.Viewport.Height

    I am trying to fit this rectangle to the screen (across different screens and devices), but I am not sure where to put the camera on the Z-axis. Sometimes the camera is too close and sometimes too far. This is what I am using to get the camera distance:

        // Height of pyramid
        float alpha = 0;
        float beta = 0;
        float gamma = 0;
        alpha = (float)Math.Sqrt((width / 2 * width / 2) + (height / 2 * height / 2));
        beta = height / ((float)Math.Cos(MathHelper.ToRadians(67.5f)) * 2);
        gamma = (float)Math.Sqrt(beta * beta - alpha * alpha);
        position = new Vector3(0, 0, gamma);

    Any idea where to put the camera on the Z-axis?
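
    For reference, a language-agnostic sketch (written here in Java) of the usual fit-to-view distance formula: with a perspective projection, the camera must sit (height / 2) / tan(fovY / 2) away for the rectangle's height to span the view exactly, and the analogous distance for its width; take the larger of the two. The 45-degree FOV in the example is an assumption, not taken from the question:

        public final class FitDistance {

            public static double distanceToFit(double width, double height,
                                               double verticalFovRadians, double aspect) {
                double halfFov = verticalFovRadians / 2.0;
                // Distance needed so the rectangle's height exactly spans the view.
                double fitHeight = (height / 2.0) / Math.tan(halfFov);
                // Horizontal half-angle follows from the aspect ratio of the projection.
                double horizontalHalfFov = Math.atan(Math.tan(halfFov) * aspect);
                double fitWidth = (width / 2.0) / Math.tan(horizontalHalfFov);
                // The larger of the two keeps the whole rectangle on screen.
                return Math.max(fitHeight, fitWidth);
            }

            public static void main(String[] args) {
                // Example with made-up numbers: a 960x540 rectangle, 45 degree vertical FOV.
                double d = distanceToFit(960, 540, Math.toRadians(45), 960.0 / 540.0);
                System.out.println("camera z = " + d);
            }
        }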

    Read the article

  • How do I create a third-person view using DXUTCamera in DX10?

    - by David
    I am creating a 3D flying game and using DXUTCamera for my view. I can get the camera to take on the character's position, but I would like to view my character in the third person. Here is my code for the first-person view:

        // Put the camera on the object.
        D3DXVECTOR3 viewerPos;
        D3DXVECTOR3 lookAtThis;
        D3DXVECTOR3 up ( 5.0f, 1.0f, 0.0f );
        D3DXVECTOR3 newUp;
        D3DXMATRIX matView;

        // Set the viewer's position to the position of the thing.
        viewerPos.x = character->x;
        viewerPos.y = character->y;
        viewerPos.z = character->z;

        // Create a new vector for the direction for the viewer to look
        character->setUpWorldMatrix();
        D3DXVECTOR3 newDir, lookAtPoint;
        D3DXVec3TransformCoord(&newDir, &character->initVecDir, &character->matAllRotations);

        // set lookatpoint
        D3DXVec3Normalize(&lookAtPoint, &newDir);
        lookAtPoint.x += viewerPos.x;
        lookAtPoint.y += viewerPos.y;
        lookAtPoint.z += viewerPos.z;

        g_Camera.SetViewParams(&viewerPos, &lookAtPoint);

    So does anyone have any ideas how I can move the camera to a third-person view? Preferably timed, so there is smooth motion in the camera movement. (I'm hoping I can just edit this code instead of bringing in another camera class.)

    Read the article

  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen coordinates to tile coordinates), and have obviously found a lot of the math beyond my grasp. I have come fairly close and what I have is workable, but I would like to improve on this algorithm as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM

    My isometric rendering algorithm is based on what is found at this stackoverflow question's answer, with the exception that my x and y axes are inverted (x increases down-right, while y increases up-right). Here is where I am converting from screen to tiles:

        // these next few lines convert the mouse pointer position from screen
        // coordinates to tile-grid coordinates. cameraOffset captures the current
        // mouse location and takes into consideration the camera's position on screen.
        System.Drawing.Point cameraOffset = new System.Drawing.Point( 0, 0 );
        cameraOffset.X = mouseLocation.X + (int)camera.Left;
        cameraOffset.Y = ( mouseLocation.Y + (int)camera.Top );

        // the camera-aware mouse coordinates are then further converted in an attempt
        // to select only the "tile" portion of the grid tiles, instead of the entire
        // rectangle. this algorithm gets close, but could use improvement.
        mouseTileLocation.X = ( cameraOffset.X + 2 * cameraOffset.Y ) / Global.TileWidth;
        mouseTileLocation.Y = -( ( 2 * cameraOffset.Y - cameraOffset.X ) / Global.TileWidth );

    Things to make note of:

    - mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer.
    - cameraOffset is the screen position of the mouse pointer that includes the position of the game camera.
    - mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer.

    If you check out the above link to youtube, you'll notice that the picking algorithm is off a bit. How can I improve on this?
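
    One thing worth trying, sketched below in Java since the math is language-neutral: do the same inverse mapping in floating point and take an explicit floor at the end, with an explicit origin offset for wherever tile (0,0) is actually drawn. Integer division truncates toward zero (which differs from floor once coordinates go negative) and silently absorbs any origin or half-tile offset, both of which can show up as a consistent down-and-right bias. The originX/originY values are placeholders you would tune to your renderer:

        public final class IsoPicker {
            /**
             * Same diamond-projection inverse as above (x runs down-right, y runs
             * up-right), kept in doubles until the final floor.
             */
            public static int[] screenToTile(double mouseX, double mouseY,
                                             double cameraLeft, double cameraTop,
                                             double tileWidth,
                                             double originX, double originY) {
                // Camera-aware mouse position, same role as cameraOffset above.
                double x = mouseX + cameraLeft - originX;
                double y = mouseY + cameraTop - originY;

                double tileX = (x + 2.0 * y) / tileWidth;
                double tileY = -((2.0 * y - x) / tileWidth);

                // floor() selects the tile the pointer is actually inside.
                return new int[] { (int) Math.floor(tileX), (int) Math.floor(tileY) };
            }
        }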

    Read the article

  • OpenGL ES object picking

    - by lacas
    I've seen a lot of OpenGL ES picking code, but nothing has worked. Can someone tell me what I am missing? My code is (from tutorials/forums):

        Vec3 far = Camera.getPosition();
        Vec3 near = Shared.opengl().getPickingRay(ev.getX(), ev.getY(), 0);
        Vec3 direction = far.sub(near);
        direction.normalize();
        Log.e("direction", direction.x+" "+direction.y+" "+direction.z);

        Ray mouseRay = new Ray(near, direction);

        for (int n=0; n<ObjectFactory.objects.size(); n++) {
            if (ObjectFactory.objects.get(n)!=null) {
                IObject obj = ObjectFactory.objects.get(n);
                float discriminant, b;
                float radius=0.1f;
                b = -mouseRay.getOrigin().dot(mouseRay.getDirection());
                discriminant = b * b - mouseRay.getOrigin().dot(mouseRay.getOrigin()) + radius*radius;
                discriminant = FloatMath.sqrt(discriminant);
                double x1 = b - discriminant;
                double x2 = b + discriminant;
                Log.e("asd", obj.getName() + " "+discriminant+" "+x1+" "+x2);
            }
        }

    My camera vectors:

        //cam
        Vec3 position = new Vec3(-obj.getPosX()+x, obj.getPosZ()-0.3f, obj.getPosY()+z);
        Vec3 direction = new Vec3(-obj.getPosX(), obj.getPosZ(), obj.getPosY());
        Vec3 up = new Vec3(0.0f, -1.0f, 0.0f);
        Camera.set(position, direction, up);

    And my picking code:

        public Vec3 getPickingRay(float mouseX, float mouseY, float mouseZ) {
            int[] viewport = getViewport();
            float[] modelview = getModelView();
            float[] projection = getProjection();
            float winX, winY;
            float[] position = new float[4];
            winX = (float)mouseX;
            winY = (float)Shared.screen.width - (float)mouseY;
            GLU.gluUnProject(winX, winY, mouseZ, modelview, 0, projection, 0, viewport, 0, position, 0);
            return new Vec3(position[0], position[1], position[2]);
        }

    My camera is moving all the time in 3D space, and my actors/models are moving too. The camera follows one actor/model, and the user can move the camera on a circle around this model. How can I change the above code so it works?
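
    One thing that stands out in the loop above is that the discriminant is computed from the ray origin alone, which is the ray/sphere test for a sphere sitting at the world origin; each object's own position never enters the test. Below is a self-contained sketch of the test with the sphere centre included, using plain float arrays instead of the custom Vec3 class:

        public final class RaySphere {

            /**
             * Returns the distance along the ray to the nearest hit in front of the
             * origin, or -1 if the ray misses. rayDir must be normalized.
             */
            public static float intersect(float[] rayOrigin, float[] rayDir,
                                          float[] center, float radius) {
                // Vector from the ray origin to the sphere centre.
                float lx = center[0] - rayOrigin[0];
                float ly = center[1] - rayOrigin[1];
                float lz = center[2] - rayOrigin[2];
                // Projection of that vector onto the ray direction.
                float b = lx * rayDir[0] + ly * rayDir[1] + lz * rayDir[2];
                float lenSq = lx * lx + ly * ly + lz * lz;
                float discriminant = b * b - lenSq + radius * radius;
                if (discriminant < 0) return -1f;   // ray misses the sphere
                float sqrtD = (float) Math.sqrt(discriminant);
                float t1 = b - sqrtD;
                float t2 = b + sqrtD;
                if (t2 < 0) return -1f;             // sphere is entirely behind the ray
                return t1 >= 0 ? t1 : t2;           // nearest hit in front of the origin
            }
        }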

    Read the article
