Search Results

Search found 5213 results on 209 pages for 'integrated camera'.


  • Should business objects be able to create their own DTOs?

    - by Sam
    Suppose I have the following class: class Camera { public Camera( double exposure, double brightness, double contrast, RegionOfInterest regionOfInterest) { this.exposure = exposure; this.brightness = brightness; this.contrast = contrast; this.regionOfInterest = regionOfInterest; } public void ConfigureAcquisitionFifo(IAcquisitionFifo acquisitionFifo) { // do stuff to the acquisition FIFO } readonly double exposure; readonly double brightness; readonly double contrast; readonly RegionOfInterest regionOfInterest; } ... and a DTO to transport the camera info across a service boundary (WCF), say, for viewing in a WinForms/WPF/Web app: using System.Runtime.Serialization; [DataContract] public class CameraData { [DataMember] public double Exposure { get; set; } [DataMember] public double Brightness { get; set; } [DataMember] public double Contrast { get; set; } [DataMember] public RegionOfInterestData RegionOfInterest { get; set; } } Now I can add a method to Camera to expose its data: class Camera { // blah blah public CameraData ToData() { var regionOfInterestData = regionOfInterest.ToData(); return new CameraData() { Exposure = exposure, Brightness = brightness, Contrast = contrast, RegionOfInterest = regionOfInterestData }; } } or, I can create a method that requires a special IReporter to be passed in for the Camera to expose its data to. This removes the dependency on the Contracts layer (Camera no longer has to know about CameraData): class Camera { // beep beep I'm a jeep public void ExposeToReporter(IReporter reporter) { reporter.GetCameraInfo(exposure, brightness, contrast, regionOfInterest); } } So which should I do? I prefer the second, but it requires the IReporter to have a CameraData field (which gets changed by GetCameraInfo()), which feels weird. Also, if there is an even better solution, please share with me! I'm still an object-oriented newb.
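    A minimal sketch (not from the original question) of how the second option might look end to end: a reporter implementation on the service side builds the DTO and exposes it through a property, so Camera itself never references the Contracts layer. Names below are hypothetical.

        // Sketch only: the reporter, not the domain class, knows about CameraData.
        public interface IReporter
        {
            void GetCameraInfo(
                double exposure,
                double brightness,
                double contrast,
                RegionOfInterest regionOfInterest);
        }

        public class CameraDataReporter : IReporter
        {
            // The DTO is produced here and read back by the caller.
            public CameraData Data { get; private set; }

            public void GetCameraInfo(
                double exposure,
                double brightness,
                double contrast,
                RegionOfInterest regionOfInterest)
            {
                Data = new CameraData
                {
                    Exposure = exposure,
                    Brightness = brightness,
                    Contrast = contrast,
                    RegionOfInterest = regionOfInterest.ToData()
                };
            }
        }

        // Usage at the service boundary:
        // var reporter = new CameraDataReporter();
        // camera.ExposeToReporter(reporter);
        // CameraData dto = reporter.Data;

    With this shape, only CameraDataReporter depends on both the domain type and the DTO, which is the usual trade-off of the visitor-style approach the question describes.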

    Read the article

  • Multiple marker icons, how to add to google mashup

    - by user351189
    I have created a Google maps mashup, where with a bit of input, I have managed to have a sidebar that links to a video icon/marker that then opens up an info window showing virtual tours. I would, however, like to put different coloured marker icons on the map depending on the category that the video is in. This would be easy enough to do, but my page is made up of a mixture of jQuery and JavaScript all calling the individual Flash files. Could someone help me with the code for adding extra marker icons for different categories? Here is the code: So, after the initial 'var camera;' point, there comes this: function addMarker(point, title, video, details) { var marker = new GMarker(point, {title: title, icon:camera}); GEvent.addListener(marker, "click", function() { if (details) { marker.openInfoWindowTabsHtml([new GInfoWindowTab("Video", video), new GInfoWindowTab("More", details)]); } else { marker.openInfoWindowHtml(video); } }); Then further down is the code for calling the individual marker image. I would like to add another image to this list - would I start out by calling the new object 'camera-red.image' or something similar? function initialize() { if (GBrowserIsCompatible()) { map = new GMap2(document.getElementById("mapDiv")); map.setCenter(new GLatLng(51.52484592590448, -0.13345599174499512), 17); map.setUIToDefault(); var uclvtSatMapType = createUclVTSatMapType(); map.addMapType(uclvtSatMapType); map.setMapType(uclvtSatMapType); camera = new GIcon(G_DEFAULT_ICON); camera.image = "ucl-video.png"; camera.iconSize = new GSize(32,37); camera.iconAnchor = new GPoint(16,35); camera.infoWindowAnchor = new GPoint(16,2); addMarkersToMap(); } The actual map can be found here: link text Thanks.
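    A minimal sketch (not from the original post) of one way to do this with the same Maps v2 calls already used above: clone the base GIcon, override only its image, and let addMarker pick an icon per category. The extra file name and category value are hypothetical.

        // Declared next to the existing 'var camera;':
        var cameraRed;

        // Inside initialize(), after the base 'camera' icon is configured:
        cameraRed = new GIcon(camera);          // copies size/anchor settings from the base icon
        cameraRed.image = "ucl-video-red.png";  // hypothetical image for the second category

        // Helper that picks an icon per category (category names are made up):
        function iconForCategory(category) {
            return (category === "tours") ? cameraRed : camera;
        }

        // addMarker then takes the category and hands the icon to GMarker:
        function addMarker(point, title, video, details, category) {
            var marker = new GMarker(point, {title: title, icon: iconForCategory(category)});
            // ...existing GEvent.addListener code unchanged...
        }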

    Read the article

  • Flash Camera Permissions Dialog: Why does it sometimes take more than one click to work?

    - by Emmett Shear
    Often, but not always, the first few clicks on the "Allow" button on the Flash camera permissions dialog are ignored. This is very frustrating for users. And for me, when I'm testing code! I'm afraid I don't really know what else to say about the problem to give more insight. My hunch is that it has something to do with getting focus on the embed element, but that is merely a guess. See Justin.tv for an example.

    Read the article

  • How can I display the camera view in the main window in real time?

    - by Mongrel Warfare
    I'm trying to make an augmented reality application with a waypoint structure, like Yelp, and I'm wondering how to set up my main view so that it displays the camera view on the whole screen. I've heard of using the UIImagePickerController Class, but I'm unsure how to manipulate the code so that it doesn't actually take a picture, but just stays in the view mode. Any help would be appreciated, thanks!
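    A minimal sketch (an assumption, not from the original post) of the usual trick with UIImagePickerController: hide the capture controls and supply an overlay view, so the picker just shows the live camera feed without taking a picture. The host controller and overlay are placeholders.

        // Sketch only: present a control-free camera preview full screen.
        import UIKit

        func presentCameraPreview(from host: UIViewController, overlay: UIView?) {
            guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }

            let picker = UIImagePickerController()
            picker.sourceType = .camera
            picker.showsCameraControls = false   // no shutter UI, so nothing is captured
            picker.cameraOverlayView = overlay   // draw AR waypoints/HUD here
            // For a truly edge-to-edge preview you may also need to scale
            // picker.cameraViewTransform, since the raw preview is 4:3.
            host.present(picker, animated: false)
        }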

    Read the article

  • How do I get the data from a surveillance camera into storage I can stream from?

    - by radbyx
    Hi, my sister's house was robbed on Christmas evening :( I talked with her about making a surveillance system for her. The idea is to have a system that detects intruders, then sends an SMS to you while streaming the video to a private website. The hard part: how and where do I store the data from the camera so it's streamable? I think I can manage the streaming, the website and the SMS server, but I need the data (the foundation). Thanks, any help is much appreciated.
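    One possible approach (an assumption on my part, not from the post): if the camera exposes an RTSP or HTTP feed, a small always-on box can pull it with ffmpeg and write HTTP Live Streaming segments straight into the web server's document root, so the segment directory is both the storage and the thing the private website streams from. The URL, paths and segment settings below are hypothetical.

        # Sketch: pull the camera feed and keep a rolling window of HLS segments
        # under the web server's document root (URL and paths are hypothetical).
        ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream \
               -c copy \
               -f hls -hls_time 4 -hls_list_size 15 \
               /var/www/private-site/stream/live.m3u8

    The private site then only has to serve live.m3u8, and the intrusion detector just fires the SMS and records which segments to keep.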

    Read the article

  • Android - How to prevent the phone screen from turning on when volume or camera key is pressed?

    - by 2Real
    I have an activity that shows up when the phone screen goes to sleep/turns off, i.e. turns black. For some reason, the phone turns on when the volume buttons or the camera buttons are pressed. By "turns on", I mean the screen wakes up or comes back from the black screen state. I've tried using dispatchKeyEvent(KeyEvent event), and the buttons are disabled in the activity, but they still wake up the phone.

    Read the article

  • Twitter 2 for Android crashes every time I try uploading multiple photos [closed]

    - by Hazz
    Hello, I'm using the new Twitter 2 on Android 2.1. Whenever I hit the button which enables me to upload multiple photos in a single tweet, I always get the error "The application Camera (process com.sonyericsson.camera) has stopped unexpectidly. Please try again". However, uploading a single photo using the camera button in Twitter have no problem, it works. My phone is Sony Ericsson x10 mini pro. I tried signing out and back in, same result. Anything I can do to fix this? This is the log info I got using Log Collector: 02-23 15:05:57.328 I/ActivityManager( 1240): Starting activity: Intent { act=com.twitter.android.post.status cmp=com.twitter.android/.PostActivity } 02-23 15:05:57.338 D/PhoneWindow(15095): couldn't save which view has focus because the focused view com.android.internal.policy.impl.PhoneWindow$DecorView@45726938 has no id. 02-23 15:05:57.688 I/ActivityManager( 1240): Displayed activity com.twitter.android/.PostActivity: 340 ms (total 340 ms) 02-23 15:05:59.018 I/ActivityManager( 1240): Starting activity: Intent { act=android.intent.action.PICK typ=vnd.android.cursor.dir/image cmp=com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity } 02-23 15:05:59.038 I/ActivityManager( 1240): Start proc com.sonyericsson.camera for activity com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity: pid=15113 uid=10057 gids={1006, 1015, 3003} 02-23 15:05:59.128 I/dalvikvm(15113): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38) 02-23 15:05:59.158 I/dalvikvm(15113): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=50) 02-23 15:05:59.448 I/ActivityManager( 1240): Displayed activity com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity: 423 ms (total 423 ms) 02-23 15:05:59.458 W/dalvikvm(15113): threadid=15: thread exiting with uncaught exception (group=0x4001e160) 02-23 15:05:59.458 E/AndroidRuntime(15113): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception 02-23 15:05:59.468 E/AndroidRuntime(15113): java.lang.RuntimeException: An error occured while executing doInBackground() 02-23 15:05:59.468 E/AndroidRuntime(15113): at android.os.AsyncTask$3.done(AsyncTask.java:200) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask.run(FutureTask.java:137) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.lang.Thread.run(Thread.java:1096) 02-23 15:05:59.468 E/AndroidRuntime(15113): Caused by: java.lang.IllegalArgumentException: Unsupported MIME type. 
02-23 15:05:59.468 E/AndroidRuntime(15113): at com.sonyericsson.album.grid.GridActivity$AlbumTask.doInBackground(GridActivity.java:202) 02-23 15:05:59.468 E/AndroidRuntime(15113): at com.sonyericsson.album.grid.GridActivity$AlbumTask.doInBackground(GridActivity.java:124) 02-23 15:05:59.468 E/AndroidRuntime(15113): at android.os.AsyncTask$2.call(AsyncTask.java:185) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) 02-23 15:05:59.468 E/AndroidRuntime(15113): ... 4 more 02-23 15:05:59.628 E/SemcCheckin(15113): Get crash dump level : java.io.FileNotFoundException: /data/semc-checkin/crashdump 02-23 15:05:59.628 W/ActivityManager( 1240): Unable to start service Intent { act=com.sonyericsson.android.jcrashcatcher.action.BUGREPORT_AUTO cmp=com.sonyericsson.android.jcrashcatcher/.JCrashCatcherService (has extras) }: not found 02-23 15:05:59.648 I/Process ( 1240): Sending signal. PID: 15113 SIG: 3 02-23 15:05:59.648 I/dalvikvm(15113): threadid=7: reacting to signal 3 02-23 15:05:59.778 I/dalvikvm(15113): Wrote stack trace to '/data/anr/traces.txt' 02-23 15:06:00.388 E/SemcCheckin( 1673): Get Crash Level : java.io.FileNotFoundException: /data/semc-checkin/crashdump 02-23 15:06:01.708 I/DumpStateReceiver( 1240): Added state dump to 1 crashes 02-23 15:06:02.008 D/iddd-events( 1117): Registering event com.sonyericsson.idd.probe.android.devicemonitor::ApplicationCrash with 4314 bytes payload. 02-23 15:06:06.968 D/dalvikvm( 1673): GC freed 661 objects / 126704 bytes in 124ms 02-23 15:06:11.928 D/dalvikvm( 1379): GC freed 19753 objects / 858832 bytes in 84ms 02-23 15:06:13.038 I/Process (15113): Sending signal. PID: 15113 SIG: 9 02-23 15:06:13.048 I/WindowManager( 1240): WIN DEATH: Window{4596ecc0 com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity paused=false} 02-23 15:06:13.048 I/ActivityManager( 1240): Process com.sonyericsson.camera (pid 15113) has died. 02-23 15:06:13.048 I/WindowManager( 1240): WIN DEATH: Window{459db5e8 com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity paused=false} 02-23 15:06:13.078 I/UsageStats( 1240): Unexpected resume of com.twitter.android while already resumed in com.sonyericsson.camera 02-23 15:06:13.098 W/InputManagerService( 1240): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@456e7168 02-23 15:06:21.278 D/dalvikvm( 1745): GC freed 2032 objects / 410848 bytes in 60ms

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • Looking for a kiosk-style / camera-store easy photo memory card to CD/DVD burning program for a Windows 7 notebook (for a non-techie user)?

    - by Rob
    I'm looking for a kiosk-style / camera-shop easy photo memory card to CD/DVD burning program, for a non-techie user. The kind of system you see in a camera shop/store, e.g. in the UK, Jessops and Boots stores. This is for my Dad, who is adept at general PC usage as a notebook owner but would prefer something fairly simple. The task is burning photos to CD/DVD in their original .jpg photo file form, i.e. NOT as a CD or DVD video or slideshow. I'm guessing this might be possible in Picasa, but all the options available might be superfluous and confusing. He could probably learn to use that, but I thought I would try simpler options first. I'm looking for something that guides the user through the steps/stages of the process, 'Wizard' style. Any suggestions? Platform: HP Windows 7 Home notebook with CD/DVD burner and SD memory card slot.

    Read the article

  • Is there a way to test HTTP Live Streaming via an iSight camera?

    - by bpapa
    I'm working on an iPhone app that will use HTTP Live Streaming. Using Apple's provided tools (particularly mediafilesegmenter), I'm able to successfully segment and serve an archived video. Now I want to test the live streaming side. I don't own any sort of camcorder; I just have the iSight built into my Mac. Is there a way to leverage this camera to test out live streaming? Run iSight from the command line, maybe? If so, I need a port number for mediastreamsegmenter.

    Read the article

  • Drupal workflow action access integrated with taxonomy access control?

    - by groovehunter
    Hi, I am building a DMS for our intranet and use a taxonomy hierarchy because we need access control that way. All company locations manage (upload, edit) their own documents but should be able to access all of them. This is inherited to the child terms and works fine. Additionally, we want a simple 3-step workflow (draft, published, archived). So I introduced roles for editor, publisher and docadmin and set permissions for the transitions, plus triggers to effectively (un)publish documents. But (of course) a user with the publisher role can do the transition for ALL documents, whereas we want a publisher for each company location (top taxonomy level, see above). Can this be achieved? Do I have to set it up myself (I guess the Rules module is appropriate for this), or is there another module that helps? Role inheritance was a guess, but that is only about roles (naturally). I use "module grants" and checked the first option. That is the direction my thoughts are going; I hope you get my idea and the problem. Drupal 6.16, current. Edit: I reread the docs and found, for example, http://drupal.org/node/408018 Revisioning for categorized content. I will reread that.

    Read the article

  • Optimal preferences for prefix queries with Oracle catalog (CTXCAT) index

    - by nw
    The documentation for Oracle Text gives this example of a prefix/substring preference setting for context and catalog indexes: begin ctx_ddl.create_preference('mywordlist', 'BASIC_WORDLIST'); ctx_ddl.set_attribute('mywordlist','PREFIX_INDEX','TRUE'); ctx_ddl.set_attribute('mywordlist','PREFIX_MIN_LENGTH', '3'); ctx_ddl.set_attribute('mywordlist','PREFIX_MAX_LENGTH', '4'); ctx_ddl.set_attribute('mywordlist','SUBSTRING_INDEX', 'YES'); end; What I need to know is whether the substring_index attribute is necessary if I only ever issue prefix searches, such as: SELECT title FROM auction WHERE CATSEARCH(title, 'cam*', '') > 0; TITLE --------------- CANON CAMERA FUJI CAMERA NIKON CAMERA OLYMPUS CAMERA PENTAX CAMERA SONY CAMERA 6 rows selected
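    For comparison, if only prefix queries like the one above are ever issued, a wordlist without the substring attribute would presumably look like the following sketch, built from the same ctx_ddl calls shown in the documentation excerpt (preference and index names are hypothetical, and it assumes the CTXCAT PARAMETERS clause accepts a WORDLIST preference, as the excerpt implies):

        begin
          ctx_ddl.create_preference('prefix_only_wordlist', 'BASIC_WORDLIST');
          ctx_ddl.set_attribute('prefix_only_wordlist', 'PREFIX_INDEX', 'TRUE');
          ctx_ddl.set_attribute('prefix_only_wordlist', 'PREFIX_MIN_LENGTH', '3');
          ctx_ddl.set_attribute('prefix_only_wordlist', 'PREFIX_MAX_LENGTH', '4');
        end;
        /

        -- Hypothetical catalog index that picks up the preference:
        CREATE INDEX auction_title_cat ON auction(title)
          INDEXTYPE IS CTXSYS.CTXCAT
          PARAMETERS ('WORDLIST prefix_only_wordlist');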

    Read the article

  • Is it possible to use Integrated Windows Auth when Server isn't on the domain?

    - by jskentzos
    Our production web servers ARE NOT part of the domain, but we'd like people to be able to log in automatically since they are logged into the domain on their PC. Is there any way to get the browser (IE7+) to send the appropriate information to the server (IIS6) so I can retrieve the ServerVariables["AUTH_USER"] or ServerVariables["LOGON_USER"]? I presume the answer is no, since if I set the security for Windows auth to "on" and anonymous access to "off", then the server wouldn't know what to do with any user information for a domain which it has no knowledge of. I just want to know for sure before I give the SSO team a "not possible" answer.

    Read the article

  • memristor is a new paradigm (fourth element in integrated circuits)? [closed]

    - by lsalamon
    The memristor will bring a new programming paradigm, opening enormous opportunities for machines to gain knowledge and creating a new path toward artificial intelligence. Do you believe that we are paving the way for the era of intelligent machines? More info: Brain-like systems? "As for the human brain-like characteristics, memristor technology could one day lead to computer systems that can remember and associate patterns in a way similar to how people do. This could be used to substantially improve facial recognition technology or to provide more complex biometric recognition systems that could more effectively restrict access to personal information. These same pattern-matching capabilities could enable appliances that learn from experience and computers that can make decisions." [EDITED] The way is open. News on the subject: Brain-Like Computer Closer to Realization

    Read the article

  • Where to get streaming (live) video and audio from camera example app for Nokia?

    - by Ole Jak
    Where can I get an example of streaming (live) video and audio from the camera for Nokia (the 5800, for example)? Suppose I want to create some live video streaming service app, so I'll have some cool server at the back end, and I know how to do that part. Suppose I have some stand-alone app for PCs; now I want to move on to mobile devices. So I decided to start with Nokia because I have one and can do with it what I want (a Nokia 5800 XpressMusic). I want to see some sample app grabbing audio and video streams from the phone, synchronizing them, and sending a LIVE stream to the server. I need any open-source sample (Java, C or C++) that'll do this or something like this. Where can I get one?

    Read the article

  • How do I forward map a point in OpenCV using mapx & mapy?

    - by Charles
    Ultimately, I'm trying to determine the 3D location of a point I've identified in two cameras (using OpenCV 2.3.1, Windows 7, C++). I'm having trouble locating the 2D point for each camera from its mapx & mapy. I could not find in Bradski & Kaehler's OpenCV book how to do it.
    PROCESS:
    1. calibrateCamera for each of the pair
    2. stereoCalibrate
    3. stereoRectify
    4. detect the blob I want to locate in 3D on each camera in 2D
    5. initUndistortRectifyMap for each camera
    6. Eventually, perspectiveTransform
    PROBLEM with 5: I have the x and y location of the point in the distorted and unrectified image for each camera (from #4), but I don't know how to get the undistorted and rectified x and y for each camera from the two maps initUndistortRectifyMap creates for each camera. I don't want to remap the whole image since I only want to learn the 3D location of one object for each frame.
    QUESTION: How do I forward map a point (get the undistorted and rectified x and y) with its two maps? Thanks for any help.
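    One way to do this for a single point (a sketch, not from the original post) is cv::undistortPoints, passing the R and P matrices returned by stereoRectify: with those supplied, the output is already the undistorted, rectified pixel position, so no full remap is needed.

        #include <opencv2/core/core.hpp>
        #include <opencv2/imgproc/imgproc.hpp>

        // blobPx: the distorted, unrectified pixel found in step 4 (one camera).
        // cameraMatrix/distCoeffs come from calibrateCamera; R and P are that
        // camera's rectification rotation and new projection matrix from stereoRectify.
        cv::Point2f rectifyPoint(const cv::Point2f& blobPx,
                                 const cv::Mat& cameraMatrix,
                                 const cv::Mat& distCoeffs,
                                 const cv::Mat& R,
                                 const cv::Mat& P)
        {
            std::vector<cv::Point2f> src(1, blobPx), dst;
            // With R and P supplied, the result is in pixel coordinates of the
            // rectified image; without them it would be in normalized coordinates.
            cv::undistortPoints(src, dst, cameraMatrix, distCoeffs, R, P);
            return dst[0];
        }

    The rectified x from both cameras then gives the disparity, which can go through perspectiveTransform with Q, as in the last step of the process above.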

    Read the article

  • Where to get streaming (live) video and audio from camera example app for Android?

    - by Ole Jak
    Where can I get an example of streaming (live) video and audio from the camera on Android? Suppose I want to create some live video streaming service app, so I'll have some cool server at the back end, and I know how to do that part. Suppose I have some stand-alone app for PCs; now I want to move on to mobile devices. So I want to see some sample app grabbing audio and video streams from the phone, synchronizing them, encoding them somehow, and sending a LIVE stream to the server. I need any open-source sample that will do this or something like this. Where can I get one?
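    There is no single official end-to-end sample, but a common rough starting point (a sketch with a hypothetical host and port, not a production pipeline) is to point MediaRecorder at a socket through a ParcelFileDescriptor, so the phone's hardware-encoded camera and microphone output goes straight to the server:

        import android.media.MediaRecorder;
        import android.os.ParcelFileDescriptor;
        import android.view.Surface;
        import java.net.Socket;

        public class SocketStreamer {
            private MediaRecorder recorder;

            // previewSurface comes from your SurfaceView; host and port are hypothetical.
            public void start(Surface previewSurface, String host, int port) throws Exception {
                Socket socket = new Socket(host, port);
                ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(socket);

                recorder = new MediaRecorder();
                recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
                recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder.setPreviewDisplay(previewSurface);
                // Send the encoder output to the socket instead of a local file.
                recorder.setOutputFile(pfd.getFileDescriptor());
                recorder.prepare();
                recorder.start();
            }

            public void stop() {
                if (recorder != null) {
                    recorder.stop();
                    recorder.release();
                }
            }
        }

    Note that the 3GP/MP4 container written this way is not directly playable as a live stream (its header is finalized at the end), so the server side has to re-mux or parse the elementary streams; open-source projects such as spydroid-ipcamera take this approach.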

    Read the article
