Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • Precise Touch Screen Dragging Issue: Trouble Aligning with the Finger due to Different Screen Resolution

    - by David Dimalanta
    Please, I need your help. I'm trying to make a game that drags and drops a sprite/image so that the image follows my finger precisely, without any offset. When I test on the 900x1280 (X = 900, Y = 1280) screen of a Google Nexus 7 tablet, it follows precisely. However, when I test on a phone with a smaller resolution than 900x1280, my finger and the image no longer line up, although dragging still works. This is the code I use to drag the sprite with my finger inside touchDragged(): x = ((screenX + Gdx.input.getX())/2) - (fruit.width/2); y = ((camera_2.viewportHeight * multiplier) - ((screenY + Gdx.input.getY())/2) - (fruit.width/2)); The code above keeps the finger and the image/sprite together while dragging, but only at 900x1280. You may be wondering why camera_2.viewportHeight appears in my code. There are two reasons: to prevent inverted dragging (e.g. when you swipe your finger downwards, the sprite moves upward instead) and to serve as a baseline for reading coordinates...I think. I then added another orthographic camera named camera_1 and changed its settings; I use it to adjust the falling object in meters per pixel. It also seems to work independently on smartphones with smaller resolutions, and this is what I use here: show() camera_1 = new OrthographicCamera(); camera_1.viewportHeight = 280; // --> I set a smaller viewport height so that the object falls faster, decreasing the chance of drag force. camera_1.viewportWidth = 196; // --> Keep it as close as possible to the proportions of the original screen view. camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f); camera_1.update(); touchDragged() x = ((screenX + (camera_1.viewportWidth/Gdx.input.getX()))/2) - (fruit.width/2); y = ((camera_1.viewportHeight * multiplier) - ((screenY + (camera_1.viewportHeight/Gdx.input.getY()))/2) - (fruit.width/2)); But instead of the image/sprite following my finger closely, there is still a gap between the sprite/image and the finger, so the mapping apparently depends on the screen resolution. I'm trying to drag the blueberry sprite with my finger, and my expectation was not met: I want my finger and the sprite/image (blueberry) to stay together while dragging until I release it. Here's what it looks like: I still have to figure out how to make the sprite follow my finger closely while dragging, independently of the screen size.
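
    A common way to make dragging resolution-independent is to map the raw touch position into camera/world space using the ratio of the camera viewport to the physical screen size (libGDX cameras also provide unproject() for exactly this). The following is only a minimal sketch of that mapping in plain C++; the Screen/Viewport types, screenToWorld and fruitWidth are illustrative names, not part of libGDX or the question's code:

        #include <cstdio>

        struct Screen   { float width, height; };   // physical screen size in pixels
        struct Viewport { float width, height; };   // camera viewport in world units

        // Map a raw touch position (pixels, origin top-left) to world coordinates
        // (origin bottom-left), independent of the device resolution.
        void screenToWorld(float touchX, float touchY,
                           const Screen& screen, const Viewport& view,
                           float& worldX, float& worldY) {
            worldX = touchX * (view.width / screen.width);
            worldY = (screen.height - touchY) * (view.height / screen.height);   // flip Y
        }

        int main() {
            Screen   nexus7 {900.0f, 1280.0f};
            Viewport view   {196.0f, 280.0f};   // the question's camera_1 viewport
            float wx, wy;
            screenToWorld(450.0f, 640.0f, nexus7, view, wx, wy);   // centre of the screen
            float fruitWidth = 32.0f;                              // illustrative sprite size
            std::printf("sprite position: %.1f, %.1f\n", wx - fruitWidth / 2, wy - fruitWidth / 2);
        }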

    Read the article

  • How is constant buffer allocation handled in DX11?

    - by Marek
    I'm starting with DX11 and I'm not sure if I'm doing things right. I want to have both the pixel and vertex shader programs in one file. Both use some shared and some different constant buffers. So it looks like this: Shader.fx cbuffer ForVS : register(b0) { float4x4 wvp; }; cbuffer ForVSandPS : register(b1) { float4 stuff; float4 stuff2; }; cbuffer ForVS2 : register(b2) { float4 stuff; float4 stuff2; }; cbuffer ForPS : register(b3) { float4 stuff; float4 stuff2; }; .... And in code I use mContext->VSSetConstantBuffers( 0, 1, bufferVS); mContext->VSSetConstantBuffers( 1, 1, bufferVS_PS); mContext->VSSetConstantBuffers( 2, 1, bufferVS2); mContext->PSSetConstantBuffers( 1, 1, bufferVS_PS); mContext->PSSetConstantBuffers( 3, 1, bufferPS); The numbering of the buffers in the PS is what bugs me: is it alright to bind non-contiguous slots to a shader (in this example 1 and 3)? Does that mean the pixel shader still uses just two buffers, or does it initialize the slot 0 and 2 buffer pointers to empty? Thank you.

    Read the article

  • Basic procedural generated content works, but how could I do the same in reverse?

    - by andrew
    My 2D world is made up of blocks. At the moment, I create a block and assign it a number between 1 and 4. The number assigned to the nth block is always the same (i.e. if the player walks backwards or restarts the game) and is generated in the function below. As shown by this animation, the colours represent the number. function generate_data(n) math.randomseed(n) -- resets the seed so that the 'random' number for n is always the same math.random() -- fixes Lua random bug local no = math.random(4) --print(no, n) return no end Now I want to limit the next block's number - a block of 1 will always have a block 2 after it, while block 2 will have either a block 1, 2 or 3 after it, etc. Previously, all the block data was generated randomly at the start and then saved. This data was then loaded and used instead of being generated on the fly. While working this way, I could easily specify what the next block would be, and it would be saved for consistency. I have now removed this saving/loading in favour of procedural generation, as I realised that the save files would get very big after travelling far. Back to the present. While travelling forward (to the right), it is easy to limit what the next block's number will be: I can generate it at the same time as the other data. The problem is that when travelling backwards (to the left) I cannot think of a way to regenerate the previous block so that it is always the same. Does anyone have any ideas on how I could sort this out?
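
    One way to keep generation reversible is to make block n a pure function of its index: derive a deterministic pseudo-random value from n with a hash, and when the rules say "block n may only follow certain blocks", recompute block n-1 the same way instead of remembering it. Below is a rough C++ sketch of that idea; the hash, the four block types and the allowed-successor table are invented for illustration and are not taken from the question's Lua code:

        #include <cstdint>
        #include <cstdio>

        // Deterministic "random" value for a block index (splitmix64-style mixing).
        uint32_t hashIndex(uint64_t n) {
            n += 0x9E3779B97F4A7C15ULL;
            n = (n ^ (n >> 30)) * 0xBF58476D1CE4E5B9ULL;
            n = (n ^ (n >> 27)) * 0x94D049BB133111EBULL;
            return (uint32_t)(n ^ (n >> 31));
        }

        // Allowed successors for each of the 4 block types (illustrative rules).
        const int successors[4][3] = {
            {2, 2, 2},   // block 1 is always followed by block 2
            {1, 2, 3},   // block 2 may be followed by 1, 2 or 3
            {2, 3, 4},
            {1, 3, 4},
        };

        // Block 0 is unconstrained; every later block is chosen from the successors
        // of the (recomputed) previous block, so the result for any index is
        // identical whether the player is walking right or left.
        int blockType(uint64_t n) {
            if (n == 0) return 1 + hashIndex(0) % 4;
            int prev = blockType(n - 1);                    // fine for small n; cache or
            return successors[prev - 1][hashIndex(n) % 3];  // iterate from a checkpoint in practice
        }

        int main() {
            for (uint64_t i = 0; i < 10; ++i) std::printf("%d ", blockType(i));
            std::printf("\n");
        }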

    Read the article

  • Saving a list of points into a text file

    - by dylanisawesome1
    I recently posted a question about this, but was not really sure where to go. I've made some progress, and have generated some simple noise here: http://pastie.org/5408655 That works well enough for me, but I would really like to be able to save the points into an ASCII text file. Currently the file is formatted so that something like this: http://pastie.org/5409311 would create a square. I need to save in this format, with the points (and the lines connecting them) generated in the method above. Essentially, I need to write the array of points created in the first example to a text file formatted like the second example.
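
    The pastie links are not reproduced here, so the exact target format is unknown; purely as an illustration, this C++ sketch writes an array of points to an ASCII text file with one "x y" pair per line, consecutive points implicitly forming the connecting lines. Changing the output statement is all that should be needed for a different layout:

        #include <fstream>
        #include <vector>

        struct Point { double x, y; };

        // Write the generated points to an ASCII text file, one point per line.
        bool savePoints(const std::vector<Point>& pts, const char* path) {
            std::ofstream out(path);
            if (!out) return false;
            for (const Point& p : pts)
                out << p.x << " " << p.y << "\n";
            return static_cast<bool>(out);
        }

        int main() {
            // Four corners of a square as a stand-in for the generated noise points.
            std::vector<Point> square = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
            return savePoints(square, "points.txt") ? 0 : 1;
        }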

    Read the article

  • What is the best way to render a 2d game map?

    - by Deukalion
    I know efficiency is key in game programming, and I've had some experience with rendering a "map" before, but probably not in the best of ways. For a 2D top-down game (simply render the textures/tiles of the world, nothing else): say you have a map of 1000x1000 tiles (or whatever). If a tile isn't in the view of the camera, it shouldn't be rendered - it's that simple; there is no need to render a tile that won't be seen. But since you have up to 1000x1000 objects in your map, you probably don't want to loop through all 1000*1000 tiles just to see whether they're supposed to be rendered or not. Question: What is the best way to implement this efficiently, so that it can determine quickly which tiles are supposed to be rendered? Also, I'm not building my game around tiles rendered with a SpriteBatch, so there are no rectangles; the shapes can have different sizes and multiple points, say a curved object of 10 points with a texture inside that shape. Question: How do you determine whether this kind of object is "inside" the view of the camera? It's easy with a 48x48 rectangle - just check whether its X+Width or Y+Height is in the view of the camera - but it's different with multiple points. Simply put: how do I manage the code and the data efficiently so I don't have to loop through a million objects at the same time?
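
    For a uniform tile grid, the usual trick is to turn the camera rectangle directly into a range of tile indices, so only tiles that can be on screen are visited at all; for irregular shapes the same idea becomes a coarse grid (or quadtree) of axis-aligned bounding boxes tested against the camera. A minimal C++ sketch of both ideas, with made-up types (Camera, AABB, tileSize) that are not from the question:

        #include <algorithm>
        #include <cstdio>

        struct Camera { float x, y, width, height; };   // view rectangle in world units
        const int   mapW = 1000, mapH = 1000;
        const float tileSize = 48.0f;

        // Visit only the tiles overlapping the camera instead of all 1000x1000.
        void drawVisibleTiles(const Camera& cam) {
            int firstX = std::max(0,        (int)(cam.x / tileSize));
            int firstY = std::max(0,        (int)(cam.y / tileSize));
            int lastX  = std::min(mapW - 1, (int)((cam.x + cam.width)  / tileSize));
            int lastY  = std::min(mapH - 1, (int)((cam.y + cam.height) / tileSize));
            for (int ty = firstY; ty <= lastY; ++ty)
                for (int tx = firstX; tx <= lastX; ++tx)
                    std::printf("draw tile %d,%d\n", tx, ty);   // render call goes here
        }

        // For arbitrary many-point shapes: precompute each shape's AABB once and test
        // that against the camera rectangle (store shapes in grid cells or a quadtree
        // so only nearby AABBs are tested at all).
        struct AABB { float minX, minY, maxX, maxY; };
        bool visible(const AABB& b, const Camera& cam) {
            return b.maxX >= cam.x && b.minX <= cam.x + cam.width &&
                   b.maxY >= cam.y && b.minY <= cam.y + cam.height;
        }

        int main() { Camera cam{100.0f, 100.0f, 800.0f, 600.0f}; drawVisibleTiles(cam); }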

    Read the article

  • Matrix.CreateBillboard centre rotation problem

    - by Chris88
    I'm having an issue with Matrix.CreateBillboard and a textured quad where the center axis seems to be positioned incorrectly relative to the quad, which rotates around a center point. Using: BasicEffect quadEffect; Drawing the quad shape: Left = Vector3.Cross(Normal, Up); Vector3 uppercenter = (Up * height / 2) + origin; LowerLeft = uppercenter + (Left * width / 2); LowerRight = uppercenter - (Left * width / 2); UpperLeft = LowerLeft - (Up * height); UpperRight = LowerRight - (Up * height); where height and width are float values passed in (it draws a square). Draw method: quadEffect.View = camera.view; quadEffect.Projection = camera.projection; quadEffect.World = Matrix.CreateBillboard(Origin, camera.cameraPosition, Vector3.Up, camera.cameraDirection); GraphicsDevice.BlendState = BlendState.Additive; foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes) { pass.Apply(); GraphicsDevice.DrawUserIndexedPrimitives <VertexPositionNormalTexture>( PrimitiveType.TriangleList, Vertices, 0, 4, Indexes, 0, 2); } GraphicsDevice.BlendState = BlendState.Opaque; In the screenshots below I draw the image at Vector3(32f, 0f, 32f). They show the position of the quad in relation to the red cross, which marks where the quad should be drawn: http://i.imgur.com/YwRYj.jpg http://i.imgur.com/ZtoHL.jpg The quad rotates around the red cross position.

    Read the article

  • Low-level game engine renderer design

    - by Mark Ingram
    I'm piecing together the beginnings of an extremely basic engine which will let me draw arbitrary objects (SceneObject). I've got to the point where I'm creating a few sensible sounding classes, but as this is my first outing into game engines, I've got the feeling I'm overlooking things. I'm familiar with compartmentalising larger portions of the code so that individual sub-systems don't overly interact with each other, but I'm thinking more of the low-level stuff, starting from vertices and working up. So if I have a Vertex class, I can combine that with a list of indices to make a Mesh class. How does the engine determine identical meshes for objects? Or is that left to the level designer? Once we have a Mesh, that can be contained in the SceneObject class. And a list of SceneObject can be placed into the Scene to be drawn. Right now I'm only using OpenGL, but I'm aware that I don't want to be tying OpenGL calls right into base classes (for example, when updating the vertices in the Mesh, I don't want to be calling glBufferData etc.). Are there any good resources that discuss these issues? Are there any "common" hierarchies which should be used?
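
    One common way to keep glBufferData and friends out of the base classes is to give the engine an abstract "GPU mesh" interface that scene-level code talks to, with an OpenGL-specific class implementing it behind the scenes. The interfaces below are purely an illustration of that separation (all names are invented, and the OpenGL implementation is omitted):

        #include <memory>
        #include <vector>

        struct Vertex { float position[3]; float normal[3]; float uv[2]; };

        // CPU-side mesh data: what SceneObject and Scene know about.
        struct Mesh {
            std::vector<Vertex>   vertices;
            std::vector<unsigned> indices;
        };

        // Backend interface: the only layer that knows about a specific graphics API.
        class GpuMesh {
        public:
            virtual ~GpuMesh() = default;
            virtual void upload(const Mesh& mesh) = 0;   // e.g. glBufferData in the GL backend
            virtual void draw() const = 0;               // e.g. glDrawElements in the GL backend
        };

        class Renderer {
        public:
            virtual ~Renderer() = default;
            virtual std::unique_ptr<GpuMesh> createMesh(const Mesh& source) = 0;
        };

        struct SceneObject {
            std::shared_ptr<GpuMesh> gpuMesh;   // shared so identical meshes can be reused
            float transform[16];
        };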

    Read the article

  • How can I easily create cloud texture maps?

    - by EdwardTeach
    I am making 3d planets in my game; these will be viewed as "globes". Some of them will need cloud layers. I looked at various Blender tutorials for creating "earth", and for their cloud layers they use earth cloud maps from NASA. However I will be creating a fictional universe with many procedurally-generated planets. So I would like to use many variations. I'm hoping there's a way to procedurally generate cloud maps such as the NASA link. I will also need to create gas giants, so I will also need other kinds of cloud texture maps. If that is too difficult, I could fall back to creating several variations of cloud maps. For example, 3 for earth-like, 3 for gas giants, etc. So how do I statically create or programmatically generate such cloud maps?
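
    A common approach for procedural cloud layers is fractal noise: sum several octaves of a smooth noise function, then remap the result so values above some cover threshold become cloud. The sketch below uses simple hash-based value noise in C++ just to show the shape of the technique; real projects often use Perlin or simplex noise and an equirectangular or cube-map projection so the texture wraps cleanly around a sphere:

        #include <cmath>
        #include <cstdint>
        #include <cstdio>

        // Deterministic pseudo-random value in [0,1) for an integer lattice point.
        float lattice(int x, int y, uint32_t seed) {
            uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u + seed * 2246822519u;
            h = (h ^ (h >> 13)) * 1274126177u;
            return (h ^ (h >> 16)) / 4294967296.0f;
        }

        float smooth(float t) { return t * t * (3.0f - 2.0f * t); }

        // Bilinearly interpolated value noise.
        float valueNoise(float x, float y, uint32_t seed) {
            int ix = (int)std::floor(x), iy = (int)std::floor(y);
            float fx = smooth(x - ix), fy = smooth(y - iy);
            float a = lattice(ix, iy, seed),     b = lattice(ix + 1, iy, seed);
            float c = lattice(ix, iy + 1, seed), d = lattice(ix + 1, iy + 1, seed);
            float top = a + (b - a) * fx, bottom = c + (d - c) * fx;
            return top + (bottom - top) * fy;
        }

        // Fractal sum of octaves; 'cover' controls how much of the map is cloud.
        float cloud(float u, float v, uint32_t seed, float cover = 0.5f) {
            float sum = 0.0f, amp = 0.5f, freq = 4.0f;
            for (int o = 0; o < 5; ++o) {
                sum  += amp * valueNoise(u * freq, v * freq, seed + o);
                amp  *= 0.5f;
                freq *= 2.0f;
            }
            float c = sum - (1.0f - cover);
            return c < 0.0f ? 0.0f : c / cover;   // 0 = clear sky, 1 = dense cloud
        }

        int main() {   // write a small greyscale PGM cloud map as a demonstration
            const int w = 256, h = 128;
            std::printf("P2\n%d %d\n255\n", w, h);
            for (int y = 0; y < h; ++y) {
                for (int x = 0; x < w; ++x)
                    std::printf("%d ", (int)(255 * cloud((float)x / w, (float)y / h, 1234u)));
                std::printf("\n");
            }
        }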

    Read the article

  • I'm looking to learn how to apply traditional animation techniques to my graphics engine - are there any tutorials or online resources that can help?

    - by blueberryfields
    There are many traditional animation techniques - such as blurring of motion, motion along an elliptical curve rather than a straight line, counter-motion before beginning of movement - which help with creating the appearance of a realistic 3D animated character. I'm looking to incorporate tools and short cuts for some of these into my graphics engine, to make it easier for my end users to use these techniques in their animations. Is there a good resource listing the techniques and the principles behind them, especially how they might apply to a graphics engine or 3D animation?

    Read the article

  • Blender - creating bones from transform matrices

    - by user975135
    Notice: this is for the Blender 2.5/2.6 API. Back in the old days of the Blender 2.4 API, you could easily create a bone from a transform matrix in your 3D file, as EditBones had an attribute named "matrix", which was an armature-space matrix you could access and modify. The new 2.5+ API still has the "matrix" attribute for EditBones, but for some unknown reason it is now read-only. So how do you create EditBones from transform matrices? I could only find one thing: a new "transform()" function, which also takes a Matrix. It transforms the bone's head, tail, roll and envelope (when the matrix has a scale component). Perfect, but you already need to have some values (loc/rot/scale) for your bone; otherwise transforming with a matrix like this will give you nothing, and your bone will be a zero-sized bone which will be deleted by Blender. If you create default bone values first, like this: bone.tail = mathutils.Vector([0,1,0]) then transform() will work on your bone and it might seem to create correct bones. But setting a tail position actually generates a matrix itself, so after transform() your EditBone does not carry the matrix from your model file, but the product of your matrix with the bone's existing one. This can easily be proven by comparing the matrices read from the file with EditBone.matrix. Again, it might seem correct in Blender, but export your model and you will see your animations are messed up, as the bind pose rotations of the bones are wrong. I've tried to find an alternative way to assign the transformation matrix from my file to my EditBone, with no luck.

    Read the article

  • Precision loss when transforming from cartesian to isometric

    - by Justin Skiles
    My goal is to display a tile map in isometric projection. This tile map has 25 tiles across and 25 tiles down. Each tile is 32x32. See below for how I'm accomplishing this. World Space World Space to Screen Space Rotation (45 degrees) Using a 2D rotation matrix, I use the following: double rotation = Math.PI / 4; double rotatedX = (tileWorldX * Math.Cos(rotation)) - (tileWorldY * Math.Sin(rotation)); double rotatedY = (tileWorldX * Math.Sin(rotation)) + (tileWorldY * Math.Cos(rotation)); World Space to Screen Space Scale (Y-axis reduced by 50%) Here I simply scale down the Y value by a factor of 0.5. Problem And it works, kind of. There are some tiny 1px-2px gaps between some of the tiles when rendering. I think there's some precision loss somewhere, or I'm not understanding how to get these tiles to fit together perfectly. I'm not truncating or converting my values to non-decimal types until I absolutely have to (when I pass them to the render method, which only takes integers). I'm not sure how to guarantee pixel-perfect rendering precision when I'm rotating and scaling at a higher level of precision. Any advice? Do I need to supply more information?
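
    One way to avoid sub-pixel gaps entirely is to skip the floating-point rotate-then-scale step and map tile coordinates to screen coordinates with the usual integer 2:1 isometric formula, which bakes the same rotate-and-squash projection into exact pixel offsets so adjacent tiles always share edges. A small C++ illustration, assuming 64x32 on-screen diamonds for the 32x32 tiles; this is one possible fix, not necessarily the only one:

        #include <cstdio>

        const int tileW = 64, tileH = 32;   // on-screen diamond size for a 32x32 tile

        // Map integer tile coordinates to the pixel position of the tile's top corner.
        // Because everything stays in integers, adjacent tiles line up exactly and no
        // 1-2 px seams can appear.
        void tileToScreen(int tileX, int tileY, int originX, int originY,
                          int& screenX, int& screenY) {
            screenX = originX + (tileX - tileY) * (tileW / 2);
            screenY = originY + (tileX + tileY) * (tileH / 2);
        }

        int main() {
            int sx, sy;
            for (int y = 0; y < 3; ++y)
                for (int x = 0; x < 3; ++x) {
                    tileToScreen(x, y, 400, 0, sx, sy);
                    std::printf("tile %d,%d -> %d,%d\n", x, y, sx, sy);
                }
        }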

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • How do I draw video frames onto the screen permanently using XNA?

    - by izb
    I have an app that plays back a video and draws the video onto the screen at a moving position. When I run the app, the video moves around the screen as it plays. Here is my Draw method... protected override void Draw(GameTime gameTime) { Texture2D videoTexture = null; if (player.State != MediaState.Stopped) videoTexture = player.GetTexture(); if (videoTexture != null) { spriteBatch.Begin(); spriteBatch.Draw( videoTexture, new Rectangle(x++, 0, 400, 300), /* where x is a class member */ Color.White); spriteBatch.End(); } base.Draw(gameTime); } The video moves horizontally across the screen. This is not exactly what I expected, since I have no lines of code that clear the screen. My question is: why does it not leave a trail behind? Also, how would I make it leave a trail behind?

    Read the article

  • XNA, how do I draw two cubes standing in line, parallel to each other?

    - by user3535716
    I have a problem drawing two 3D cubes standing in line. In my code, I made a cube class, and in the game1 class I built two cubes: A on the right side, B on the left side. I also set up an FPS camera in the 3D world. The problem is that if I draw cube B (blue) first and then move the camera to the left side, towards cube B, cube A (red) still appears in front of B, which is clearly wrong. Some pictures should make this clearer. Then, when I move the camera to the other side, the situation looks like this: This is wrong - from this view the red cube A should be behind the blue one, B. Could somebody please help me? This is the Draw method in the Cube class: Matrix center = Matrix.CreateTranslation( new Vector3(-0.5f, -0.5f, -0.5f)); Matrix scale = Matrix.CreateScale(0.5f); Matrix translate = Matrix.CreateTranslation(location); effect.World = center * scale * translate; effect.View = camera.View; effect.Projection = camera.Projection; foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Apply(); device.SetVertexBuffer(cubeBuffer); RasterizerState rs = new RasterizerState(); rs.CullMode = CullMode.None; rs.FillMode = FillMode.Solid; device.RasterizerState = rs; device.DrawPrimitives( PrimitiveType.TriangleList, 0, cubeBuffer.VertexCount / 3); } This is the Draw method in game1: A.Draw(camera, effect); B.Draw(camera, effect);

    Read the article

  • Why are textures always square powers of two? What if they aren't?

    - by Keavon
    Why are the resolution of textures in games always a power of two (128x128, 256x256, 512x512, 1024x1024, etc.)? Wouldn't it be smart to save on the game's file size and make the texture exactly fit the UV unwrapped model? What would happen if there was a texture that was not a power of two? Would it be incorrect to have a texture be something like 256x512, or 512x1024? Or would this cause the problems that non-power-of-two textures may cause?

    Read the article

  • Starting with text based MUD/MUCK game

    - by Scott Ivie
    I've had this idea for a video game in my head for a long time, but I've never had the knowledge or time to get it done. I still don't really, but I am willing to dedicate a chunk of my time to this before it's too late. Recently I started studying Lua script for a program called "MUSH Client", which works with MU* telnet-style text games. I want to use the GUI capabilities of MUSH Client with a MU* server to create a basic game; I figured this could be a suitable starting place for me. But here is my dilemma: because I'm not very programmer-savvy yet, I don't know how to download/install/use the MU* server software. I was originally considering Protomuck because a few of the MU*s I was most impressed with began there. http://www.protomuck.org/ I downloaded it, but I guess I'm too used to GUI-style programs, so I'm having great difficulty figuring out what to do next. Does anyone have any suggestions? Does anyone even know what I'm talking about? heh..

    Read the article

  • OpenGLES GLSL Shader attributes always bound to 0

    - by codemonkey
    So I have a very simple vertex shader as follows #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } Which I load, as well as the fragment shader: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } Which I then load, compile, and link to my shader program. I check for link status using glGetProgramiv(shaderProgram, GL_LINK_STATUS, &shaderSuccess); which returns GL_TRUE so I think its ok. However, when I query the active attributes and uniforms using #ifdef DEBUG int totalAttributes = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &totalAttributes); for(int i=0; i<totalAttributes; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveAttrib(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetAttribLocation(shaderProgram, name); fprintf(stderr, "Attribute %s is bound at %d\n", name, location); } int totalUniforms = -1; glGetProgramiv(shaderProgram, GL_ACTIVE_UNIFORMS, &totalUniforms); for(int i=0; i<totalUniforms; ++i) { int name_len=-1, num=-1; GLenum type = GL_ZERO; char name[100]; glGetActiveUniform(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name ); name[name_len] = 0; GLuint location = glGetUniformLocation(shaderProgram, name); fprintf(stderr, "Uniform %s is bound at %d\n", name, location); } #endif I get: Attribute inColor is bound at 0 Attribute position is bound at 1 Uniform mvp is bound at 0 Which leads to failure when trying to use the shader to render the objects. I have tried switching the order of declaration of position & inColor, but still, only position is bound with the other two giving 0 Can someone please explain why this is happening? Thanks
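
    For what it's worth, attribute and uniform locations live in separate spaces, so an attribute at 0 and a uniform at 0 do not clash; the usual patterns are either to force known locations with glBindAttribLocation before glLinkProgram, or to query the locations after linking and pass them to glVertexAttribPointer. A hedged C++ sketch of both patterns; the interleaved position + colour vertex layout is assumed, not taken from the question:

        #include <GLES2/gl2.h>   // or the desktop GL headers, depending on the platform

        // Option 1: fix the locations yourself, *before* glLinkProgram.
        void bindFixedLocations(GLuint shaderProgram) {
            glBindAttribLocation(shaderProgram, 0, "position");
            glBindAttribLocation(shaderProgram, 1, "inColor");
            glLinkProgram(shaderProgram);
        }

        // Option 2: query the locations after linking and use them when pointing the
        // vertex data; glGetAttribLocation returns a signed GLint, and -1 means the
        // attribute is not active in the linked program.
        void setupVertexArrays(GLuint shaderProgram) {
            GLint posLoc   = glGetAttribLocation(shaderProgram, "position");
            GLint colorLoc = glGetAttribLocation(shaderProgram, "inColor");
            if (posLoc >= 0) {
                glEnableVertexAttribArray((GLuint)posLoc);
                glVertexAttribPointer((GLuint)posLoc, 3, GL_FLOAT, GL_FALSE,
                                      6 * sizeof(float), (const void*)0);
            }
            if (colorLoc >= 0) {
                glEnableVertexAttribArray((GLuint)colorLoc);
                glVertexAttribPointer((GLuint)colorLoc, 3, GL_FLOAT, GL_FALSE,
                                      6 * sizeof(float), (const void*)(3 * sizeof(float)));
            }
        }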

    Read the article

  • Annoying flickering of vertices and edges (possible z-fighting)

    - by Belgin
    I'm trying to make a software z-buffer implementation, however, after I generate the z-buffer and proceed with the vertex culling, I get pretty severe discrepancies between the vertex depth and the depth of the buffer at their projected coordinates on the screen (i.e. zbuffer[v.xp][v.yp] != v.z, where xp and yp are the projected x and y coordinates of the vertex v), sometimes by a small fraction of a unit and sometimes by 2 or 3 units. Here's what I think is happening: Each triangle's data structure holds the plane's (that is defined by the triangle) coefficients (a, b, c, d) computed from its three vertices from their normal: void computeNormal(Vertex *v1, Vertex *v2, Vertex *v3, double *a, double *b, double *c) { double a1 = v1 -> x - v2 -> x; double a2 = v1 -> y - v2 -> y; double a3 = v1 -> z - v2 -> z; double b1 = v3 -> x - v2 -> x; double b2 = v3 -> y - v2 -> y; double b3 = v3 -> z - v2 -> z; *a = a2*b3 - a3*b2; *b = -(a1*b3 - a3*b1); *c = a1*b2 - a2*b1; } void computePlane(Poly *p) { double x = p -> verts[0] -> x; double y = p -> verts[0] -> y; double z = p -> verts[0] -> z; computeNormal(p -> verts[0], p -> verts[1], p -> verts[2], &p -> a, &p -> b, &p -> c); p -> d = p -> a * x + p -> b * y + p -> c * z; } The z-buffer just holds the smallest depth at the respective xy coordinate by somewhat casting rays to the polygon (I haven't quite got interpolation right yet so I'm using this slower method until I do) and determining the z coordinate from the reversed perspective projection formulas (which I got from here: double z = -(b*Ez*y + a*Ez*x - d*Ez)/(b*y + a*x + c*Ez - b*Ey - a*Ex); Where x and y are the pixel's coordinates on the screen; a, b, c, and d are the planes coefficients; Ex, Ey, and Ez are the eye's (camera's) coordinates. This last formula does not accurately give the exact vertices' z coordinate at their projected x and y coordinates on the screen, probably because of some floating point inaccuracy (i.e. I've seen it return something like 3.001 when the vertex's z-coordinate was actually 2.998). Here is the portion of code that hides the vertices that shouldn't be visible: for(i = 0; i < shape.nverts; ++i) { double dist = shape.verts[i].z; if(z_buffer[shape.verts[i].yp][shape.verts[i].xp].z < dist) shape.verts[i].visible = 0; else shape.verts[i].visible = 1; } How do I solve this issue? EDIT I've implemented the near and far planes of the frustum, with 24 bit accuracy, and now I have some questions: Is this what I have to do this in order to resolve the flickering? When I compare the z value of the vertex with the z value in the buffer, do I have to convert the z value of the vertex to z' using the formula, or do I convert the value in the buffer back to the original z, and how do I do that? What are some decent values for near and far? Thanks in advance.
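
    Exact equality between a vertex's stored z and the depth reconstructed from the plane equation will almost never hold in floating point, so visibility tests like this are normally done with a small bias, and with a near/far depth buffer both sides of the comparison should be in the same depth space. A minimal C++ sketch of the biased comparison; the epsilon is illustrative and would need tuning, and the linear depth mapping is a simplification of what a perspective projection actually stores:

        #include <cstdio>

        // Depth-buffer visibility test with a tolerance instead of exact equality.
        // 'bias' absorbs the small error between the analytically reconstructed depth
        // and the vertex's own z (e.g. the 2.998 vs 3.001 case in the question).
        bool vertexVisible(double vertexDepth, double bufferDepth, double bias = 1e-3) {
            return vertexDepth <= bufferDepth + bias;
        }

        // If the buffer stores a normalized depth in [0,1] between the near and far
        // planes, convert the vertex's z the same way before comparing; a linear
        // mapping is shown only for illustration.
        double normalizeDepth(double z, double zNear, double zFar) {
            return (z - zNear) / (zFar - zNear);
        }

        int main() {
            std::printf("%s\n", vertexVisible(3.001, 2.998) ? "visible" : "hidden");
        }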

    Read the article

  • Doing a passable 4X game AI

    - by Extrakun
    I am coding a rather "simple" 4X game (if a 4X game can be simple). It's indie in scope, and I am wondering if there's any way to come up with a passable AI without spending months coding it. The game has three major decision-making portions: spending of production points, spending of movement points and spending of tech points (basically there are 3 different 'currencies'; currency unspent at the end of a turn is not saved). Spend Production Points: Upgrade a planet (increase its tech and production), Build ships (3 types). Move ships from planet to planet (costing Movement Points): Move to attack, Move to fortify. Research Tech (a tech can be partially researched, as in Master of Orion). The plan for me right now is a brute-force approach. There are basically 4 broad options for the player - Upgrade planet(s) to increase their production and tech output, Conquer as many planets as possible, Secure as many planets as possible, Get to a certain tech as soon as possible. For each decision, I will iterate through the possible options and come up with a score, and then the AI will choose the decision with the highest score. Right now I have no idea how to 'mix decisions' - that is, for example, when the AI wishes to upgrade and conquer planets at the same time. I suppose I could have another layer of logic which does a brute-force optimization on a combination of those 4 decisions.... At least, that's my plan if I can't think of anything better. Is there any faster way to make a passable AI? I don't need a very good one that rivals Deep Blue, just something that gives the illusion of intelligence. This is my first time doing an AI on this scale, so I dare not try anything too grand. So far I have experience with FSMs, DFS, BFS and A*.
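
    A lightweight way to get this kind of passable AI is utility scoring: give each broad strategy a weight (the AI's current priorities), score every concrete action under each strategy, and pick the action with the best weighted sum, so upgrading and conquering mix naturally instead of being chosen exclusively. An illustrative C++ sketch with invented scoring values standing in for real game state:

        #include <cstdio>
        #include <vector>

        struct Action { const char* name; double expand, conquer, fortify, research; };

        // Weights describe how much the AI currently cares about each broad goal; they
        // can shift over the game (e.g. more 'conquer' when it is militarily ahead).
        struct Weights { double expand, conquer, fortify, research; };

        double score(const Action& a, const Weights& w) {
            return a.expand * w.expand + a.conquer * w.conquer +
                   a.fortify * w.fortify + a.research * w.research;
        }

        const Action* chooseAction(const std::vector<Action>& options, const Weights& w) {
            const Action* best = nullptr;
            double bestScore = -1e300;
            for (const Action& a : options) {
                double s = score(a, w);
                if (s > bestScore) { bestScore = s; best = &a; }
            }
            return best;
        }

        int main() {
            // Per-goal scores for each candidate spend would come from the game state.
            std::vector<Action> options = {
                {"upgrade home planet",  0.8, 0.1, 0.3, 0.4},
                {"build attack ships",   0.1, 0.9, 0.2, 0.0},
                {"research weapon tech", 0.0, 0.5, 0.1, 0.9},
            };
            Weights personality {0.5, 1.0, 0.3, 0.6};   // an aggressive, tech-leaning AI
            std::printf("AI chooses: %s\n", chooseAction(options, personality)->name);
        }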

    Read the article

  • Calculating orbit and approach velocities

    - by Mob
    I have drones in my game that need to approach a node, orbit it, and shoot at it. The problem is that I want to stay away from a real physics simulation, meaning I don't want to give the node and the drone a mass and the drone's thrusters a force. I just want to find the best way to approach and then enter orbit. There was a pretty good answer about using bezier curves and doing it that way, but that is essentially a tween between two fixed points, and the nodes are also moving as the drones enter orbit.
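
    Without a physics simulation, a simple kinematic approach works: steer straight at the (moving) node until the drone is near the orbit radius, then each frame move along the tangent of the circle around the node while nudging the radius back toward the desired value. A rough C++ sketch of that per-frame update; all names and constants are illustrative:

        #include <cmath>
        #include <cstdio>

        struct Vec2 { float x, y; };
        Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
        Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
        Vec2 operator*(Vec2 a, float s) { return {a.x * s, a.y * s}; }
        float length(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }

        // Move 'drone' toward / around 'node' at a fixed speed, with no mass or forces.
        void updateDrone(Vec2& drone, Vec2 node, float orbitRadius, float speed, float dt) {
            Vec2  toNode = node - drone;
            float dist   = length(toNode);
            Vec2  radial = toNode * (1.0f / dist);     // unit vector toward the node
            Vec2  tangent{-radial.y, radial.x};        // perpendicular: orbit direction

            Vec2 dir;
            if (dist > orbitRadius * 1.5f) {
                dir = radial;                          // far away: fly straight in
            } else {
                float correction = (dist - orbitRadius) / orbitRadius;   // + means too far out
                dir = tangent + radial * correction;   // orbit while correcting the radius
                dir = dir * (1.0f / length(dir));
            }
            drone = drone + dir * (speed * dt);
        }

        int main() {
            Vec2 drone{0, 0}, node{100, 0};
            for (int i = 0; i < 300; ++i) updateDrone(drone, node, 30.0f, 40.0f, 1.0f / 60.0f);
            std::printf("distance to node after 5s: %.1f\n", length(node - drone));
        }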

    Read the article

  • Making more complicated systems (entity-component-system model question)

    - by winch
    I'm using a model where entities are collections of components and components are just data. All the logic goes into systems which operate on components. Making basic systems (for rendering and collision handling) was easy. But how do I do more complicated systems? For example, in a CollisionSystem I can check whether entity A collides with entity B. I have this code in the CollisionSystem for checking if B damages A: if(collides(a, b)) { HealthComponent* hc = a->get<HealthComponent>(); hc->reduceHealth(b->get<DamageComponent>()->getDamage()); } But I feel that this code shouldn't belong in the CollisionSystem. Where should code like this live, and which additional systems should I create to make this code generic?
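
    A common way to keep the CollisionSystem generic is to have it only report "A touched B" as an event and let a separate DamageSystem (or any other interested system) react to those events using the components it cares about. A compact C++ sketch of that idea; the Entity and component accessors are simplified stand-ins for whatever the real framework provides:

        #include <vector>

        struct HealthComponent { int hp = 10; void reduceHealth(int d) { hp -= d; } };
        struct DamageComponent { int damage = 3; int getDamage() const { return damage; } };

        struct Entity {
            HealthComponent* health = nullptr;   // stand-in for entity->get<T>()
            DamageComponent* damage = nullptr;
        };

        struct CollisionEvent { Entity* a; Entity* b; };

        // CollisionSystem only detects overlaps and records events; it knows nothing
        // about health or damage.
        struct CollisionSystem {
            std::vector<CollisionEvent> events;
            void report(Entity* a, Entity* b) { events.push_back({a, b}); }
        };

        // DamageSystem reacts to collision events and touches only the components
        // relevant to damage.
        struct DamageSystem {
            void update(const std::vector<CollisionEvent>& events) {
                for (const CollisionEvent& e : events)
                    if (e.a->health && e.b->damage)
                        e.a->health->reduceHealth(e.b->damage->getDamage());
            }
        };

        int main() {
            HealthComponent hc; DamageComponent dc;
            Entity player{&hc, nullptr}, spikes{nullptr, &dc};
            CollisionSystem collisions; DamageSystem damage;
            collisions.report(&player, &spikes);   // produced by the real overlap test
            damage.update(collisions.events);      // player.hp is now 7
        }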

    Read the article

  • Designing rules to fight smallpox in Civ-style TBS games

    - by Williham Totland
    TL;DR: How do you design a ruleset for a Civ-style TBS game that prevents city smallpox from being a profitable or viable strategy? Long version: Civ-style games are pretty great. Bringing a civilization from cradle to grave is a great endeavor, and practicing diplomacy with hard-line human players is fun and challenging. In theory. In practice, however, many of these games has, especially in multiplayer, exactly one viable strategy: City smallpox, a.k.a. infinite city spread, a.k.a. covering all available space with 1-citizen cities, packed as tight as they will go. I suppose this could count as emergent gameplay, but still; it could hardly be considered to be in the spirit of the class of game. The Civilization series, of course, is stuck in their more or less fixed rule sets, established with Civilization. Yes, there have been major changes in some respects, but the rules pertaining to city building and maintenance have stayed pretty similar. So the question, then: If you build a ruleset for a TBS from the ground up; what rules should be in place to prevent Infinite City Sprawl from being a viable strategy? Or should ICS be a viable strategy?

    Read the article

  • How can I improve my animation?

    - by sharethis
    My first approaches to animation for my game relied mostly on sine and cosine functions with time as the parameter. Here is an example of a very basic jump I implemented: if(jumping) { height = sin(now() - time); if(height < 0) jumping = false; // player landed player.position.z = height; } if(keydown(SPACE) && !jumping) { jumping = true; time = now(); // store the starting time } So my player jumps along a perfect sine curve. That seems quite natural, because he slows down as he reaches the top position and speeds up again during the fall. But patching every animation together out of sine and cosine quickly reaches its limits. How can I improve my animations and provide a more abstract layer?
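
    A usual step up from hand-written sine patches is to separate "what value animates from A to B over how long" from "the shape of the motion", i.e. tweens plus interchangeable easing curves; the sine jump then becomes just one easing function among many. A small C++ sketch of that abstraction; the function and type names are illustrative:

        #include <cmath>
        #include <cstdio>

        // Easing functions map normalized time t in [0,1] to a normalized progress value.
        float easeLinear(float t)    { return t; }
        float easeSineInOut(float t) { return 0.5f * (1.0f - std::cos(t * 3.14159265f)); }
        float easeQuadOut(float t)   { return 1.0f - (1.0f - t) * (1.0f - t); }

        // One animated value: interpolate from 'from' to 'to' over 'duration' seconds
        // using whichever easing curve is plugged in.
        struct Tween {
            float from, to, duration;
            float (*ease)(float);
            float valueAt(float elapsed) const {
                float t = elapsed / duration;
                if (t < 0) t = 0; if (t > 1) t = 1;
                return from + (to - from) * ease(t);
            }
        };

        int main() {
            Tween jumpUp {0.0f, 2.0f, 0.4f, easeSineInOut};   // player height over 0.4 s
            for (float t = 0.0f; t <= 0.4f; t += 0.1f)
                std::printf("t=%.1f height=%.2f\n", t, jumpUp.valueAt(t));
        }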

    Read the article

  • Understanding Box2d Restitution & Bouncing

    - by layzrr
    I'm currently trying to implement a bouncing basketball in my game using Box2d (jBox2d technically), but I'm a bit confused about restitution. While trying to create the ball in the testbed first, I ran into infinite bouncing, as described in this question, though obviously not with my own implementation. The Box2d manual describes restitution as follows: Restitution is used to make objects bounce. The restitution value is usually set to be between 0 and 1. Consider dropping a ball on a table. A value of zero means the ball won't bounce. This is called an inelastic collision. A value of one means the ball's velocity will be exactly reflected. This is called a perfectly elastic collision. My confusion lies in the fact that I am still getting infinite bouncing with restitution values of 0.75/0.8. The same behavior can be seen in the testbed under Collision Watching - Varying Restitution, on the 6th and 7th balls. I believe the last one has a restitution of 1, which makes sense, but I don't understand why the second-to-last ball bounces infinitely (as is happening with the working basketball I've created). I am looking to understand the restitution concept more fully, as well as to find a solution to infinite bouncing within the Box2d framework. My instinct was to sleep objects that appear to be moving in very small increments, but this seems like a misuse of the engine. Should I just work with lower restitution values altogether?
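
    One pragmatic workaround, rather than fighting the restitution model itself, is to damp out the bounce manually once it becomes imperceptible: after each step, if the ball is touching something and barely moving, zero its velocity and let it sleep. A hedged C++ Box2D sketch of that idea; the threshold is invented and would need tuning, and jBox2D mirrors these calls closely:

        #include <box2d/box2d.h>   // older versions use <Box2D/Box2D.h>

        // Kill imperceptible residual bounces: once the ball is touching something and
        // its speed has dropped below a small threshold, stop it and let it sleep.
        // Intended to be called once per simulation step for the basketball body.
        void settleBall(b2Body* ball, float minSpeed = 0.05f) {
            bool touching = false;
            for (b2ContactEdge* ce = ball->GetContactList(); ce; ce = ce->next)
                if (ce->contact->IsTouching()) { touching = true; break; }

            if (touching && ball->GetLinearVelocity().LengthSquared() < minSpeed * minSpeed) {
                ball->SetLinearVelocity(b2Vec2(0.0f, 0.0f));
                ball->SetAwake(false);   // allow Box2D to put the body to sleep
            }
        }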

    Read the article

  • Best way to detect if vec3 is between vec3(x) and vec3(y) in glsl

    - by elect
    As the title says, I am sampling from a texture, and if the color is roughly gray (between vec3(.8) and vec3(.9)) and a uniform is 1, I need to substitute that color with another one. I am not a GLSL veteran, but I am pretty sure there is a more elegant and compact (not to mention faster) way than this: vec3 textureColor = texture(texture0, oUV); if(settings.w == 1 && textureColor.r > .8 && textureColor.r < .9 && textureColor.g > .8 && textureColor.g < .9 && textureColor.b > .8 && textureColor.b < .9)

    Read the article
