Search Results

Search found 358 results on 15 pages for 'normalize'.


  • backface culling error

    - by acrilige
    I'm writing a simple software renderer. My pipeline has a backface culling stage, but it looks like it has a bug (see picture). I perform culling right after the world transformation. (I can't embed the picture in the post because I don't have enough points, so I uploaded it here (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;
        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }
        for (Triangle& t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason?
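
    A likely culprit (not confirmed by the poster): with a perspective camera, testing face normals against a constant view direction is only correct at the centre of the screen; each triangle should be tested against the vector from the camera to the triangle itself. A minimal NumPy sketch of that per-triangle test (names hypothetical):

        import numpy as np

        def is_backfacing(v1, v2, v3, cam_pos):
            # Face normal from the triangle's winding order.
            normal = np.cross(v2 - v1, v3 - v1)
            # View vector from the camera to the triangle, not a fixed axis.
            to_tri = v1 - cam_pos
            # The face looks away when its normal points along the view vector.
            return np.dot(normal, to_tri) >= 0.0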

    Read the article

  • Normalizing the direction to check if able to move

    - by spartan2417
    I have a room with 4 walls along the x and z axes respectively. My player, who is in first person (therefore the camera), should have collision detection with these walls. I'm relatively new to this, so please bear with me. I believe the way to do this is to calculate the direction and distance to the wall from the camera and then normalize the direction. However, I can only get this far before I don't know what to do. I think you should work out the angle and direction you're facing? Here _dx and _dz are the small buffer in front of the camera.

        float CalcDirection(float Cam_x, float Cam_z, float Wall_x, float Wall_z)
        {
            // Calculate direction and distance to obstacle.
            float ob_dirx = Cam_x + _dx - Wall_x;
            float ob_dirz = Cam_z + _dz - Wall_z;
            float ob_dist = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);

            // Normalise directions.
            float ob_norm = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);
            ob_dirx = (ob_dirx)/ob_norm;
            ob_dirz = (ob_dirz)/ob_norm;

    Can anyone explain in layman's terms how I work out the angle?
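
    For a direction in the x/z plane, the angle usually comes from atan2 rather than from the normalized components directly, since atan2 handles all four quadrants. A small sketch of the idea (hypothetical names; angle measured from the +z axis):

        import math

        def angle_to_wall(cam_x, cam_z, wall_x, wall_z):
            dx = wall_x - cam_x
            dz = wall_z - cam_z
            # atan2 covers every quadrant and the dz == 0 case.
            return math.atan2(dx, dz)   # radians; use math.degrees() to convert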

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things), and we are using Python NLTK to do named-entity extraction to find interesting likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza." There's some machine learning in there too, to do entity resolution on terms like "soggy" to all manner of adjectives expressing nastiness. My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into tokens in a list, so my initial approach would be to normalize a float score between 0.0 and 1.0 generated from how far into the list the terms appear and how often they are repeated (a later mention of a term being worth less, an earlier one more; greater frequency, greater score; etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious whether anyone has had to solve a similar problem in grading search relevance across appreciable metrics (frequency, term location/collocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
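
    For what it's worth, one illustrative way to fold the three signals (term position, frequency, recency) into a single 0-1 score is exponential decay on both token index and post age; every constant below is an arbitrary assumption to be tuned, not an established convention:

        import math, time

        def score_post(tokens, query_terms, post_ts,
                       pos_decay=0.05, half_life_days=7.0):
            # Position/frequency component: every hit counts, earlier hits more.
            raw = sum(math.exp(-pos_decay * i)
                      for i, tok in enumerate(tokens) if tok in query_terms)
            relevance = raw / (1.0 + raw)            # squash into [0, 1)
            # Recency component: weight halves every half_life_days.
            age_days = (time.time() - post_ts) / 86400.0
            recency = 0.5 ** (age_days / half_life_days)
            return 0.7 * relevance + 0.3 * recency   # mixing weights are arbitrary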

    Read the article

  • Rotate a vector

    - by marc wellman
    I want my first-person camera to smoothly change its viewing direction from direction d1 to direction d2, where the latter direction is indicated by a target position t2. So far I have implemented a rotation that works fine, but the speed of the rotation slows down the closer the current direction gets to the desired one. This is what I want to avoid. Here are the two very simple methods I have written so far:

        // this method initiates the direction change and sets the parameters
        public void LookAt(Vector3 target)
        {
            _desiredDirection = target - _cameraPosition;
            _desiredDirection.Normalize();
            _rotation = new Matrix();
            _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
            _isLooking = true;
        }

        // this method gets executed by the Update() method if the _isLooking flag is up
        private void _lookingAt()
        {
            dist = Vector3.Distance(Direction, _desiredDirection);
            // check whether the current direction has reached the desired one
            if (dist >= 0.00001f)
            {
                _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
                _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, MathHelper.ToRadians(1));
                Direction = Vector3.TransformNormal(Direction, _rotation);
            }
            else
            {
                _onDirectionReached();
                _isLooking = false;
            }
        }

    Again, the rotation works fine; the camera reaches its desired direction. But the speed is not constant over the course of the movement - it slows down. How do I achieve a rotation with constant speed?
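
    A plausible cause, worth verifying against the XNA docs: Vector3.Cross of two nearly parallel unit vectors has length sin(angle), and an axis-angle rotation built from an unnormalized axis rotates by effectively less, so the step shrinks as the directions converge. Normalizing the axis each frame, or stepping a fixed angle toward the target, keeps the angular speed constant. A NumPy sketch of the fixed-step idea:

        import numpy as np

        def step_toward(direction, desired, step_rad=0.02):
            # Angle left to rotate through.
            cos_a = np.clip(np.dot(direction, desired), -1.0, 1.0)
            remaining = np.arccos(cos_a)
            if remaining <= step_rad:
                return desired                      # snap on the final step
            axis = np.cross(direction, desired)
            axis /= np.linalg.norm(axis)            # unit axis: constant step size
            # Rodrigues' rotation formula for a fixed step around the axis.
            c, s = np.cos(step_rad), np.sin(step_rad)
            return (direction * c + np.cross(axis, direction) * s
                    + axis * np.dot(axis, direction) * (1 - c))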

    Read the article

  • What's a good way to check that a player has clicked on an object in a 3D game?

    - by imja
    I'm programming a 3D game (using C++ and OpenGL), and I have a few 3D objects in the scene; we can say they are boxes for this example. I want to let the player click on those boxes to select them (i.e. they might change color), with the typical restriction that if more than one box is located where the user clicked, only the one closest to the camera gets selected. What would be the best way to do this? The fact that these objects go through several transforms before getting to window coordinates is what makes this a bit tricky. One approach I thought about was that when the player clicks on the screen, I could normalize the x,y coordinates of the mouse click and then transform the bounding box coordinates of the objects into clip space so that I could compare them to the normalized mouse coordinates. I guess I could then do some sort of ray-box collision test to see if any objects lie in the path of the mouse click. I'm afraid I might be overcomplicating it. Any better methods out there?
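
    The usual technique is the one hinted at, run in the opposite direction: unproject the click into a world-space ray (e.g. via the inverse view-projection matrix) and intersect that ray with each object's bounding box, keeping the nearest hit. A sketch of the ray-AABB "slab" test, assuming the ray is already in world space:

        import numpy as np

        def ray_hits_aabb(origin, direction, box_min, box_max):
            # Slab test: intersect the ray with each pair of axis-aligned planes.
            inv = 1.0 / direction                  # assumes no exactly-zero components
            t1 = (box_min - origin) * inv
            t2 = (box_max - origin) * inv
            t_near = np.max(np.minimum(t1, t2))
            t_far = np.min(np.maximum(t1, t2))
            # Hit if the intervals overlap in front of the origin;
            # t_near is the distance used to pick the closest box.
            return (t_near <= t_far and t_far >= 0.0), t_near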

    Read the article

  • Moving sprite from one vector to the other

    - by user2002495
    I'm developing a game where an enemy can shoot bullets towards the player. I'm using two vectors, normalized into a direction, to determine where the bullets will go. Here is the code where the enemy shoots:

        private void UpdateCommonBullet(GameTime gt)
        {
            foreach (CommonEnemyBullet ceb in bulletList)
            {
                ceb.pos += ceb.direction * 1.5f * (float)gt.ElapsedGameTime.TotalSeconds;
                if (ceb.pos.Y >= 600) ceb.hasFired = false;
            }
            for (int i = 0; i < bulletList.Count; i++)
            {
                if (!bulletList[i].hasFired)
                {
                    bulletList.RemoveAt(i);
                    i--;
                }
            }
        }

    And here is where I get the direction (in the constructor of the bullet):

        direction = Global.currentPos - this.pos;
        direction.Normalize();

    Global.currentPos is a Vector2 where the player is currently located, and it is updated every time the player moves. This all works fine except that the bullet won't go to the player's location. Instead, it tends to go to the "far right" of the player's position. I think the problem might be where the bullet (this.pos in the direction) is created (at the position of the enemy). But I have found no solution for it; please help me.
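
    Hedging, since the sprite drawing code isn't shown: a frequent cause of this exact "off to one side" symptom is computing the direction from top-left sprite corners rather than sprite centres, which offsets the aim by half a sprite in each dimension. A sketch of aiming centre-to-centre:

        def aim(bullet_pos, bullet_size, player_pos, player_size, speed):
            # Work with sprite centres, not top-left corners.
            bcx = bullet_pos[0] + bullet_size[0] / 2.0
            bcy = bullet_pos[1] + bullet_size[1] / 2.0
            pcx = player_pos[0] + player_size[0] / 2.0
            pcy = player_pos[1] + player_size[1] / 2.0
            dx, dy = pcx - bcx, pcy - bcy
            length = (dx * dx + dy * dy) ** 0.5 or 1.0   # avoid divide-by-zero
            return (speed * dx / length, speed * dy / length)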

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    I'm following the tutorials in 3D Graphics with XNA Game Studio 4.0, and I came up with a horrible effect when I tried to implement the light map: http://i.stack.imgur.com/BUWvU.jpg This effect shows up when I look towards the center of the house (and it moves with me). It has this shape because I'm using a sphere to represent the light; using other light shapes gives different results. I'm using a class PrelightingRenderer:

    using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } }

    which uses three effects. PPDepthNormal.fx:

    float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } }

    PPLight.fx:

    float4x4 WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } }

    PPShared.vsi has some common functions:

    float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); }

    And finally, from the Game class, I set things up in LoadContent with:

    effect = Content.Load<Effect>(@"Effects\PPModel"); models[0] = new CModel(Content.Load<Model>(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load<Model>(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List<CModel>(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List<PPPointLight>() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) };

    where PPModel.fx is:

    float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = <BasicTexture>; addressU = wrap; addressV = wrap; minfilter = anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = <LightTexture>; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } }

    I have no idea what's wrong... Googling around, I found that this tutorial may have some bugs, but I don't know whether the fault is in the light model (the sphere), in a shader, or in the PrelightingRenderer class. Any help is very much appreciated; thank you for reading!

    Read the article

  • iPhone compass tilt compensation

    - by m01d
    Hi, has anybody already programmed iPhone compass heading tilt compensation? I have got some approaches, but some help or a better solution would be cool!

    FIRST, I define a vector Ev, calculated as the cross product of Gv and Hv. Gv is a gravity vector built out of the accelerometer values, and Hv is a heading vector built out of the magnetometer values. Ev is perpendicular to Gv and Hv, so it points toward horizontal east.

    SECOND, I define a vector Rv, calculated as the cross product of Bv and Gv. Bv is my looking vector, defined as [0,0,-1]. Rv is perpendicular to Gv and Bv and always points to the right.

    THIRD, the angle between these two vectors, Ev and Rv, should be my corrected heading. To calculate the angle I take the dot product and from that the arccos:

        phi = arccos( (Ev . Rv) / (|Ev| * |Rv|) )

    Theoretically it should work, but maybe I have to normalize the vectors?! Has anybody got a solution for this? Thanks, m01d
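
    On the normalization question: yes, the cross products should be normalized (or the division by |Ev||Rv| kept, which amounts to the same thing), and atan2 is generally more robust than arccos near 0 and 180 degrees, besides giving a signed result. A NumPy sketch of the scheme described above (names hypothetical):

        import numpy as np

        def tilt_compensated_heading(g, h):
            b = np.array([0.0, 0.0, -1.0])      # looking vector Bv
            e = np.cross(g, h)                  # Ev: horizontal east
            e /= np.linalg.norm(e)
            r = np.cross(b, g)                  # Rv: device 'right'
            r /= np.linalg.norm(r)
            g_hat = g / np.linalg.norm(g)
            # Signed angle from Ev to Rv about the gravity axis; atan2
            # stays stable where arccos loses precision.
            return np.arctan2(np.dot(np.cross(e, r), g_hat), np.dot(e, r))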

    Read the article

  • Shadow volume shader optimization (GLSL)

    - by Soubok
    I'm wondering if there is a way to optimize this vertex shader. It projects a vertex (in the light direction) to the far plane if the vertex is in shadow.

        void main(void)
        {
            vec3 lightDir = (gl_ModelViewMatrix * gl_Vertex - gl_LightSource[0].position).xyz;
            // if the vertex is lit
            if ( dot(lightDir, gl_NormalMatrix * gl_Normal) < 0.01 )
            {
                // don't move it
                gl_Position = ftransform();
            }
            else
            {
                // move it far away, in the light direction
                vec4 fin = gl_ProjectionMatrix * ( gl_ModelViewMatrix * gl_Vertex + vec4(normalize(lightDir) * 100000.0, 0.0) );
                if ( fin.z > fin.w ) // if fin is behind the far plane
                    fin.z = fin.w;   // move it to the far plane (needed for the z-fail algo.)
                gl_Position = fin;
            }
        }

    Read the article

  • Extracting "((Adj|Noun)+|((Adj|Noun)*(Noun-Prep)?)(Adj|Noun)*)Noun" from Text (Justeson & Katz, 1995)

    - by ssuhan
    I would like to ask whether it is possible to extract ((Adj|Noun)+|((Adj|Noun)*(Noun-Prep)?)(Adj|Noun)*)Noun, as proposed by Justeson and Katz (1995), using the R package openNLP? That is, I would like to use this linguistic filter to extract candidate noun phrases. I cannot quite understand its meaning. Could you do me a favor and explain it, or translate the pattern into R? Many thanks. Maybe we can start from this sample code:

        library("openNLP")
        acq <- "This paper describes a novel optical thread plug gauge (OTPG) for internal thread inspection using machine vision. The OTPG is composed of a rigid industrial endoscope, a charge-coupled device camera, and a two degree-of-freedom motion control unit. A sequence of partial wall images of an internal thread are retrieved and reconstructed into a 2D unwrapped image. Then, a digital image processing and classification procedure is used to normalize, segment, and determine the quality of the internal thread."
        acqTag <- tagPOS(acq)
        acqTagSplit = strsplit(acqTag, " ")
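
    Read as a recipe, the filter accepts a run of adjectives and nouns, optionally containing one noun-preposition link in the middle, always ending in a noun ("optical thread plug gauge", "degree of freedom"). A hedged Python sketch of the same filter over (word, tag) pairs; the tag-to-letter map is a simplification of the Penn tagset:

        import re

        # A: adjective, N: noun, P: preposition -- anything else breaks a span.
        TAG_MAP = {'JJ': 'A', 'NN': 'N', 'NNS': 'N', 'NNP': 'N', 'IN': 'P'}
        JK_PATTERN = re.compile(r'((A|N)+|((A|N)*(NP)?)(A|N)*)N$')

        def is_candidate(tagged):
            letters = ''.join(TAG_MAP.get(tag, 'x') for _, tag in tagged)
            return bool(JK_PATTERN.match(letters))

        print(is_candidate([('optical', 'JJ'), ('thread', 'NN'),
                            ('plug', 'NN'), ('gauge', 'NN')]))   # True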

    Read the article

  • Reinforcement learning with neural networks

    - by Betamoo
    I am working on a project with RL & NN. I need to determine the action vector structure which will be fed to a neural network. I have 3 different actions (A & B & Nothing), each with different powers (e.g. A100, A50, B100, B50). I wonder what the best way is to feed these actions to a NN in order to yield the best results:

    1. Feed A/B to input 1, and the action power 100/50/Nothing to input 2?
    2. Feed A100/A50/Nothing to input 1, and B100/B50/Nothing to input 2?
    3. Feed A100/A50 to input 1, B100/B50 to input 2, and a Nothing flag to input 3?
    4. Also, should I feed 100 & 50 as-is, or normalize them to 2 & 1?

    I need reasons why to choose one method over the others. Any suggestions are welcome. Thanks
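
    One common encoding, offered as an illustration rather than the established answer: a one-hot action type plus a separately scaled power channel keeps the two concepts in independent inputs and all values in [0, 1], which networks generally train on more easily than raw magnitudes:

        def encode_action(action, power, max_power=100.0):
            # action in {'A', 'B', None}; power in {100, 50} or None.
            one_hot = [1.0 if action == 'A' else 0.0,
                       1.0 if action == 'B' else 0.0,
                       1.0 if action is None else 0.0]
            scaled = (power or 0.0) / max_power    # keep inputs in [0, 1]
            return one_hot + [scaled]

        print(encode_action('A', 50))   # [1.0, 0.0, 0.0, 0.5]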

    Read the article

  • Algorithm to generate radial gradient

    - by user146780
    I have this algorithm here for generating point-to-point linear gradients:

        pc = # the point you are coloring now
        p0 = # start point
        p1 = # end point

        v = p1 - p0
        d = Length(v)
        v = Normalize(v)   # or Scale(v, 1/d)
        v0 = pc - p0
        t = Dot(v0, v)
        t = Clamp(t/d, 0, 1)
        color = (start_color * t) + (end_color * (1 - t))

    It works very well for me. I was wondering if there is a similar algorithm to generate radial gradients. By similar, I mean one that solves for the color at point P rather than solving for P at a certain color (where P is the coordinate you are painting). Thanks
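
    The radial analogue is, if anything, simpler: the parameter t is the clamped, normalized distance from the gradient centre instead of a projection onto a line. A sketch solving for the color at P, in Python; swap the two endpoints if you prefer the interpolation convention used above:

        def radial_color(p, center, radius, start_color, end_color):
            # t = 0 at the center, 1 at (and beyond) the radius.
            d = ((p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2) ** 0.5
            t = min(max(d / radius, 0.0), 1.0)
            # Per-channel linear interpolation between the two colors.
            return tuple(s * (1 - t) + e * t
                         for s, e in zip(start_color, end_color))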

    Read the article

  • Mapping A Sphere To A Cube

    - by petrocket
    There is a special way of mapping a cube to a sphere described here: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html It is not your basic "normalize the point and you're done" approach, and it gives a much more evenly spaced mapping. I've tried to do the inverse of the mapping, going from sphere coords to cube coords, and have been unable to come up with the working equations. It's a rather complex system of equations with lots of square roots. Any math geniuses want to take a crack at it? Here are the equations in C++ code:

        sx = x * sqrtf(1.0f - y * y * 0.5f - z * z * 0.5f + y * y * z * z / 3.0f);
        sy = y * sqrtf(1.0f - z * z * 0.5f - x * x * 0.5f + z * z * x * x / 3.0f);
        sz = z * sqrtf(1.0f - x * x * 0.5f - y * y * 0.5f + x * x * y * y / 3.0f);

    sx, sy, sz are the sphere coords and x, y, z are the cube coords.
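
    Absent a closed-form inverse, one pragmatic route (an assumption, not a derivation) is numerical inversion: the forward mapping is smooth, and projecting the sphere point back onto the cube surface gives a good starting guess. A sketch using scipy:

        import numpy as np
        from scipy.optimize import fsolve

        def cube_to_sphere(c):
            x, y, z = c
            return np.array([
                x * np.sqrt(1 - y*y/2 - z*z/2 + y*y*z*z/3),
                y * np.sqrt(1 - z*z/2 - x*x/2 + z*z*x*x/3),
                z * np.sqrt(1 - x*x/2 - y*y/2 + x*x*y*y/3)])

        def sphere_to_cube(s):
            s = np.asarray(s, dtype=float)
            # Project onto the cube surface as an initial guess, then refine.
            guess = s / np.max(np.abs(s))
            return fsolve(lambda c: cube_to_sphere(c) - s, guess)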

    Read the article

  • translation/rotation of a HUD against a camera using vectors in Euclidian 3D space

    - by Jakob
    I've got 2 points in 3D space: the camera position and the camera lookAt. The camera movement is restricted akin to typical first-person shooter games: you can move the cam freely, tilt horizontally and up to 90 degrees vertically, but not roll. So now I want to draw a HUD on the screen, on which I can move the mouse freely, with the position of the cursor correctly translating into 3D space. The easy part was to draw something directly in front of the camera:

        V0 = camPos
        V1 = lookAt
        V2 = lookAt - camPos
        normalize V2
        multiply V2 according to camera frustum
        V3 = V0 + V2
        draw something at V3

    Now the part I don't get: I could use V3 and add to that the rotations of the cam combined with the x/y of the mouse cursor, somehow, right? That's what I want.
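
    The usual missing piece is the camera's right and up vectors, obtained from cross products with world-up; the cursor offsets then scale those directly, with no explicit rotation angles needed. A sketch assuming a y-up world and no roll, as the question stipulates (names hypothetical):

        import numpy as np

        def hud_point(cam_pos, look_at, mouse_x, mouse_y, depth=1.0):
            fwd = look_at - cam_pos
            fwd /= np.linalg.norm(fwd)
            world_up = np.array([0.0, 1.0, 0.0])
            right = np.cross(fwd, world_up)
            right /= np.linalg.norm(right)
            up = np.cross(right, fwd)               # already unit length
            # mouse_x / mouse_y in [-1, 1], pre-scaled by the frustum extents.
            return cam_pos + fwd * depth + right * mouse_x + up * mouse_y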

    Read the article

  • How to check if an apply-template filled variable is a string or (possibly) empty node in XSLT?

    - by calavera.info
    I need a test which would evaluate to true in two cases: (1) there is a string inside which contains any non-whitespace characters, or (2) there is any node (which can possibly be empty). The variable is filled with an apply-templates call result. I tried test="normalize-space($var)", but this doesn't cover the empty-tag possibility. I also tried simply test="$var", but this evaluates to true even for whitespace-only strings. By the way, "$var/*" produces an error ("Expression ...something I don't remember... node-set"), which I think is because of how the variable is instantiated from apply-templates. Is there any solution for this (even a multi-level decision)? EDIT: I forgot to say that this is for XSLT 1.0, and preferably without any EXSLT extensions or similar.

    Read the article

  • 2D Gaming - How to reflect a ball off the bat?

    - by sid
    Hi there. I am pretty new to XNA & game dev and am stuck on ball reflection. My ball reflects once it hits the bat, but only at one angle, no matter which angle the bat is at. Here's the code:

        if (BallRect.Intersects(BatRect))
        {
            Vector2 NormBallVelocity = Ball.velocity;
            NormBallVelocity.Normalize();
            NormBallVelocity = Vector2.Reflect(Ball.velocity, NormBallVelocity);
            Ball.velocity = NormBallVelocity;
        }

    The ball just retraces its path back. How do I make it look like the ball is reflecting off the bat? I have seen other posts, but they deal with 3D and I am too new to translate them into 2D terms... Thanks, Sid
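
    That symptom follows from the code: reflecting a velocity about its own normalized self just negates it. Vector2.Reflect expects the surface normal of the bat as its second argument, i.e. v' = v - 2(v.n)n. A 2D sketch with an explicit normal (bat angle hypothetical):

        import math

        def reflect(vx, vy, bat_angle_rad):
            # Unit normal perpendicular to the bat's surface.
            nx = -math.sin(bat_angle_rad)
            ny = math.cos(bat_angle_rad)
            d = vx * nx + vy * ny
            return vx - 2 * d * nx, vy - 2 * d * ny

        print(reflect(1.0, -1.0, 0.0))   # flat bat: (1.0, 1.0)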

    Read the article

  • ODBC and Windows Service

    - by DNS
    Hi, I'm new to Windows services and... you guessed it, I'm a bit stuck. Let me paint the picture: I'm running a timed service that uses an OdbcDataReader and SqlBulkCopy to (1) archive the data and (2) normalize the data on a SQL box. When I run this code in a Windows Forms project, it works fine. It also works when I change the DSN's Data Directory path to a local drive instead of the network share (I just simulated the environment locally). I'm obviously missing something. Any help will be appreciated. DNS

    Read the article

  • XSLT Sort Alphabetically & Numerically Problem

    - by Bryan
    I have a group of strings, i.e. g:lines = '9,1,306,LUCY,G,89'. I need the output to be: 1,9,89,306,G,LUCY. This is my current code:

        <xsl:for-each select="$all_alerts[g:problem!='normal_service'][g:service='bus']">
            <xsl:sort select="g:line"/>
            <xsl:sort select="number(g:line)" data-type="number"/>
            <xsl:value-of select="normalize-space(g:line)" /><xsl:text/>
            <xsl:if test="position()!=last()"><xsl:text>,&#160;</xsl:text></xsl:if>
        </xsl:for-each>

    I can only get it to display '1, 12, 306, 38, 9, G, LUCY' because the 2nd sort isn't being picked up. Can anyone help me out?
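
    The target ordering is numbers first (numerically), then everything else (alphabetically), which in XSLT 1.0 is typically expressed as consecutive xsl:sort keys: first on whether the value is a number, then on its numeric value, then on the string. The equivalent compound key, sketched in Python to make the intent concrete:

        lines = ['9', '1', '306', 'LUCY', 'G', '89']
        ordered = sorted(lines, key=lambda s: (not s.isdigit(),
                                               int(s) if s.isdigit() else 0,
                                               s))
        print(ordered)   # ['1', '9', '89', '306', 'G', 'LUCY']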

    Read the article

  • How to exclude read only fields using :input in jquery?

    - by melaos
    Hi guys. Thus far I've only been using some basic jQuery selectors and functions, but I'm looking at this clear-form function and I can't figure out how to change it so that hidden inputs and read-only inputs don't get cleared. Can anybody help? Thanks.

        function clearForm(form) {
            // iterate over all of the inputs for the form
            // element that was passed in
            $(':input', form).each(function() {
                var type = this.type;
                var tag = this.tagName.toLowerCase(); // normalize case
                // it's ok to reset the value attr of text inputs,
                // password inputs, and textareas
                if (type == 'text' || type == 'password' || tag == 'textarea')
                    this.value = "";
                // checkboxes and radios need to have their checked state cleared
                // but should *not* have their 'value' changed
                else if (type == 'checkbox' || type == 'radio')
                    this.checked = false;
                // select elements need to have their 'selectedIndex' property set to -1
                // (this works for both single and multiple select elements)
                else if (tag == 'select')
                    this.selectedIndex = -1;
            });
        };

    Read the article

  • Using xsl:key to store result of boolean expression

    - by hielsnoppe
    In my transformation there is an expression that some elements are repeatedly tested against. To reduce redundancy I'd like to encapsulate this in an xsl:key, like this (not working):

        <xsl:key name="td-is-empty" match="td"
                 use="not(./node()[normalize-space(.) or ./node()])" />

    The expected behaviour is for the key to yield a boolean value of true in case the expression evaluates successfully, and false otherwise. Then I'd like to use it as follows:

        <xsl:template match="td[not(key('td-is-empty', .))]" />

    Is this possible, and if yes, how?

    Read the article

  • How to calculate real-time stats?

    - by Diego Jancic
    I have a site with millions of users (well, actually it doesn't have any yet, but let's imagine), and I want to calculate some stats like "log-ins in the past hour". The problem is similar to the one described here: http://highscalability.com/blog/2008/4/19/how-to-build-a-real-time-analytics-system.html The simplest approach would be to do a select like this:

        select count(distinct user_id)
        from logs
        where date >= '20120601 1200' and date <= '20120601 1300'

    (Of course other conditions could apply to the stats, like log-ins per country.) This would of course be really slow, mainly if the table has millions (or even thousands) of rows, and I want to query this every time a page is displayed. How would you summarize the data? What should go into the (mem)cache? EDIT: I'm looking for a way to de-normalize the data, or to keep the cache up to date. For example, I could increment an in-memory variable every time someone logs in, but that would help me know the total amount of log-ins, not the "log-ins in the last hour". Hope it's clearer now.
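
    One common way to keep "distinct log-ins in the last hour" current without scanning the log table is a ring of per-minute buckets, each holding the user ids seen during that minute; the stat is the size of the union of the last 60 buckets. A minimal in-process sketch of the idea (a real deployment would more likely use Redis sets or HyperLogLog counters):

        import time
        from collections import defaultdict

        buckets = defaultdict(set)              # minute timestamp -> user ids

        def record_login(user_id, now=None):
            minute = int(now or time.time()) // 60
            buckets[minute].add(user_id)

        def logins_last_hour(now=None):
            minute = int(now or time.time()) // 60
            seen = set().union(*(buckets.get(m, set())
                                 for m in range(minute - 59, minute + 1)))
            # Evict buckets that can no longer contribute.
            for m in [m for m in buckets if m < minute - 59]:
                del buckets[m]
            return len(seen)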

    Read the article

  • update columns when value is numeric in tsql

    - by knittl
    I want to normalize date fields from an old, badly designed DB dump. I now need to update every row where the date field only contains the year:

        update table
        set date = '01.01.' + date
        where date like '____'
          and isnumeric(date) = 1
          and date >= 1950

    But this will not work, because SQL does not do short-circuit evaluation of boolean expressions; thus I get the error "error converting nvarchar '01.07.1989' to int". Is there a way to work around this? The column also contains strings with a length of 4 which are not numbers (????, 5/96, 70/8, etc.). The table only has 60000 rows.

    Read the article

  • Writing a post search algorithm.

    - by MdaG
    I'm trying to write a free-text search algorithm for finding specific posts on a wall (a similar kind of wall to the one Facebook uses). A user is supposed to be able to write some words in a search field and get hits on posts that contain the words, with the best match on top and the other posts in decreasing order according to match score. I'm using the edit distance (Levenshtein) e(x, y) = e to calculate the score for each post when compared to the query word x and post word y, according to:

        score(x, y) = 2^(2 - e) * (1 - min(e, |x|) / |x|)

    Each word in a post contributes to the total score for that specific post. This approach seems to work well when the posts are of roughly the same size, but sometimes certain large posts manage to rack up score solely by having a lot of words in them, while in practice not being relevant to the query. Am I approaching this problem in the wrong way, or is there some way to normalize the score that I haven't thought of?
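
    Length normalization is the standard remedy for exactly this bias; for instance, BM25-style pivoted normalization divides a document's raw score by a factor that grows with its length relative to the average. A sketch of applying that on top of an existing per-post score:

        def normalize_score(raw_score, post_len, avg_len, b=0.75):
            # b = 0 disables length normalization; b = 1 penalizes long
            # posts fully. 0.75 is the customary BM25 default.
            return raw_score / (1.0 - b + b * (post_len / avg_len))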

    Read the article

  • SQL Server - Storing multiple decimal values in a column?

    - by cshah
    I know that storing multiple values in a column is not a good idea. It violates first normal form, which states: no multi-valued attributes. Normalize, period... I am using SQL Server 2005. I have a table that requires storing a lower limit and an upper limit for a measurement; think of it as a minimum and maximum speed limit. The only problem is that I need the upper limit for only about 2% of the rows; I will only have data for the lower limit. I was thinking of storing both values in one column (sparse columns were introduced in 2008, so not an option for me). Is there a way...? Not sure about XML...

    Read the article

  • Faster way to convert from 24 bit wav pcm format to float?

    - by LMO
    I need to read data in from a wav file in 24-bit PCM format and convert it to float. I'm using Python 2.7.2. The wave package reads the data in as a string, so what I've tried is:

        # read in entire wav file
        wdata = f.readframes(nFrames)

        # unpack into signed integers and convert to float
        data = array.array('f')
        for i in range(0, nFrames*3, 3):
            data.append(float(struct.unpack('<i', '\x00' + wdata[i:i+3])[0]))

        # normalize sample values (the '\x00' prefix leaves samples
        # in the top 3 bytes of the int32, hence the 2^31 divisor)
        data = [x / 0x80000000 for x in data]

    This is quite a bit faster than my earlier approaches, but still quite slow. Can anyone suggest a more efficient method?
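
    If NumPy is available, the per-sample loop and struct.unpack calls can be replaced by one vectorized conversion; a sketch (assumes little-endian samples, which is what wave returns, and works on mono or interleaved data alike):

        import numpy as np

        def pcm24_to_float(raw):
            # Three bytes per sample, little-endian.
            b = np.frombuffer(raw, dtype=np.uint8).reshape(-1, 3).astype(np.int32)
            # Pack into the top 3 bytes of an int32, then arithmetic-shift
            # back down so the sign of the 24-bit sample is preserved.
            samples = ((b[:, 0] << 8) | (b[:, 1] << 16) | (b[:, 2] << 24)) >> 8
            # Scale to [-1.0, 1.0).
            return samples.astype(np.float32) / float(1 << 23)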

    Read the article
