Search Results

Search found 33291 results on 1332 pages for 'development environment'.

Page 497 of 1332

  • Sanity checks vs file sizes

    - by Richard Fabian
    In your game assets, do you make room for explicit sanity checks, or do you have some generally expected bounds which you assert? I've been thinking about how we compress data and concluded that it's much better to have the former, and less of the latter. If your data can exceed its normal valid ranges, but doing so is an error, then surely that implies you're not compressing the data well enough? What do you do to find out whether your data is compressed as far as it can be, and what do you use to ensure your data isn't corrupted and is an official release? EDIT: I'm not interested in sanity-checking the file size, but rather in how you manage your sanity checks: do you account for the extra size the checks require by storing explicit extra data, or by allowing the data enough file space (data member size) to fall out of its valid range, so it can be checked merely by looking at the asset in memory after loading?

    Read the article
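
    A minimal sketch of the explicit-bounds approach discussed above, in C#. The field name and the limit are made up for illustration; the point is that a value stored wider than its gameplay range can be validated right after loading.

      using System.Diagnostics;
      using System.IO;

      static class TileIndexLoader
      {
          const int MaxTileIndex = 200;   // assumed gameplay limit, smaller than the stored byte's 255

          public static int Read(BinaryReader reader)
          {
              int value = reader.ReadByte();
              // The spare range (201..255) is "wasted" compression-wise, but it is
              // exactly what lets this assert catch corruption or format drift.
              Debug.Assert(value <= MaxTileIndex, "Tile index out of range");
              return value;
          }
      }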

  • DOT implementation

    - by Denis Ermolin
    I have some DoT (damage over time) implementation problems. My game runs at 30 FPS. The current implementation is: let's say the hero casts a spell which does 1 damage per second. So on every frame I do (pseudo code): damage_done = getRandomDamage() * delta_time; I accumulate the damage and, when the rounded value becomes more than 0, subtract it from the current health, and so on. With 30 FPS and 1 DPS each frame contributes 1/30 = 0.033... We know that floats are not precise enough to sum 30 repeating decimals and get exactly 1 at the end. But HP is a discrete value, and that's why 1 DPS will not have dealt 1 damage after 1 second: the accumulated value will be 0.9999... It's not such a big deal when you have 100000 DPS; being off by 1 damage is not noticeable. But what if I have 1.5 DPS? How do modern RPGs implement DoTs?

    Read the article
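
    A minimal sketch of one common fix for the problem above: accumulate fractional damage each frame and only apply the whole-number part, carrying the remainder forward so nothing is lost to rounding. (C#; names are illustrative, not taken from any particular engine.)

      class DamageOverTime
      {
          private float accumulator;          // carries the fractional remainder between frames
          public float DamagePerSecond = 1f;

          // Call once per frame; returns the integer damage to apply this frame.
          public int Tick(float deltaSeconds)
          {
              accumulator += DamagePerSecond * deltaSeconds;
              int whole = (int)accumulator;   // truncate toward zero
              accumulator -= whole;           // keep the remainder for later frames
              return whole;
          }
      }

    Over a full second at 30 FPS this applies 0 damage on most frames and 1 on the frame where the accumulator crosses 1, so the total converges on the intended DPS even for awkward values like 1.5.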

  • Game state management (Game, Menu, Titlescreen, etc)

    - by munchor
    Basically, in every single game I've made so far, I always have a variable like "current_state", which can be "game", "titlescreen", "gameoverscreen", etc. And then in my Update function I have a huge: if current_state == "game" game stuff ... else if current_state == "titlescreen" ... However, I don't feel like this is a professional/clean way of handling states. Any ideas on how to do this in a better way? Or is this the standard way?

    Read the article
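
    A rough sketch of the state-pattern alternative to the if/else chain described above: the current state is an object, and each screen owns its own Update/Draw. (C#; class names are illustrative.)

      interface IGameState
      {
          void Update(float dt);
          void Draw();
      }

      class TitleScreen : IGameState
      {
          private readonly StateMachine owner;
          public TitleScreen(StateMachine owner) { this.owner = owner; }

          public void Update(float dt)
          {
              // e.g. on "start pressed": owner.Change(new PlayingState(owner));
          }

          public void Draw() { /* draw the title screen */ }
      }

      class StateMachine
      {
          private IGameState current;
          public StateMachine() { current = new TitleScreen(this); }

          public void Change(IGameState next) { current = next; }
          public void Update(float dt) { current.Update(dt); }
          public void Draw() { current.Draw(); }
      }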

  • Physics not synchronizing correctly over the network when using Bullet

    - by Lucas
    I'm trying to implement a client/server physics system using Bullet, but I'm having problems getting things to sync up. I've implemented a custom motion state which reads and writes the transform of my game objects, and it works locally, but I've tried two different approaches for networked games: Dynamic objects on the client that also exist on the server (e.g. not random debris and other unimportant stuff) are made kinematic. This works correctly, but the objects don't move very smoothly. Objects are dynamic on both sides, but after each message from the server saying that the object has moved, I set the linear and angular velocity to the values from the server and call btRigidBody::proceedToTransform with the transform from the server. I also call btCollisionObject::activate(true); to force the object to update. My intent with method 2 was basically to do method 1, but hijacking Bullet to do a poor man's prediction instead of writing my own smoothing for method 1. However, this doesn't seem to work (for reasons that are not 100% clear to me even when stepping through Bullet), and the objects sometimes end up in different places. Am I heading in the right direction? Bullet seems to have its own interpolation code built in. Can that help me make method 1 work better? Or is my method 2 code not working because I am accidentally stomping on that?

    Read the article
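
    A generic way to take the jerkiness out of method 1 is to blend the client-side proxy toward the latest server snapshot each frame instead of snapping to it. This sketch deliberately avoids Bullet's API and uses XNA-style math types, so treat it as an outline rather than a drop-in fix; the blend rate is a tuning value.

      using Microsoft.Xna.Framework;   // Vector3, Quaternion, MathHelper

      struct ServerSnapshot { public Vector3 Position; public Quaternion Rotation; }

      static class NetSmoothing
      {
          // Move the local kinematic proxy part of the way toward the authoritative
          // state every frame; 10f per second is an assumed tuning constant.
          public static void Apply(ref Vector3 position, ref Quaternion rotation,
                                   ServerSnapshot server, float dt)
          {
              float t = MathHelper.Min(1f, 10f * dt);
              position = Vector3.Lerp(position, server.Position, t);
              rotation = Quaternion.Slerp(rotation, server.Rotation, t);
          }
      }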

  • How do I dynamically reload content files?

    - by Kikaimaru
    Is there a relatively simple way to dynamically reload content files, such as effect files? I know I can do the following: detect the change to the file, run the content pipeline to rebuild that specific file, unload ALL content that was loaded, load all content again, and use double references to reference content files. The problem is with step 3 (and step 2 isn't that nice either). I need to unload everything because if I have a model Hero.x which references the Model.fx effect, and I change the Model.fx file, I need to reload the Hero.x file, which will then call LoadExternalReference on Model.fx. Has anyone managed to make this work without rewriting the whole ContentManager (and every ContentReader) and tracking calls to LoadExternalReference?

    Read the article
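
    One workaround that avoids rewriting ContentManager is to give hot-reloadable assets (such as effects) their own ContentManager instance, since Unload() is all-or-nothing per manager. A hedged sketch; rebuilding the .xnb and re-pointing models at the new Effect instance is still up to the caller.

      using Microsoft.Xna.Framework.Content;
      using Microsoft.Xna.Framework.Graphics;

      ContentManager effectContent;   // owns only the assets we want to reload
      Effect modelEffect;

      void LoadEffects(IServiceProvider services)
      {
          effectContent = new ContentManager(services, "Content");
          modelEffect = effectContent.Load<Effect>("Model");
      }

      void ReloadEffects()
      {
          effectContent.Unload();                            // disposes only this manager's assets
          modelEffect = effectContent.Load<Effect>("Model"); // a freshly rebuilt .xnb is assumed
          // Any model that cached the old Effect must be handed modelEffect again here;
          // LoadExternalReference only ties them together at load time.
      }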

  • Custom extensible file format for 2d tiled maps

    - by Christian Ivicevic
    I have implemented much of my game logic by now, but I still create my maps with nasty for-loops on the fly, just to have something to work with. Now I want to move on and do some research on how to (de)serialize this data. (I am not looking for a map editor; I am speaking of the map file itself.) For now I am looking for suggestions and resources on how to implement a custom file format for my maps, which should provide the following functionality (based on the MoSCoW method): Must have: extensibility and backward compatibility; handling of different layers; metadata on whether a tile is solid or can be passed through; special serialization of entities/triggers with associated properties/metadata. Could have: some kind of inclusion of the tileset to prevent having scattered files/tilesets. I am developing in C++ (using SDL) and targeting only Windows. Any useful help, tips or suggestions would be appreciated!

    Read the article
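
    A common starting point for the "must haves" above is a chunk-based binary layout: every section carries an id, a per-chunk version and its byte length, so old readers can skip chunks they don't understand. Sketched in C# with BinaryWriter for brevity; the same layout is straightforward to write from C++/SDL.

      using System.IO;

      static class MapChunks
      {
          // id is a 4-character tag such as "LAYR" (tile layers) or "ENTS" (entities).
          public static void Write(BinaryWriter w, string id, ushort version, byte[] payload)
          {
              w.Write(id.ToCharArray(), 0, 4);   // fixed-size tag
              w.Write(version);                  // per-chunk version for backward compatibility
              w.Write(payload.Length);           // length prefix lets unknown chunks be skipped
              w.Write(payload);
          }
      }

    A reader loops over chunks, dispatches on the tag and seeks past anything unknown, which covers extensibility, layers, per-tile solidity flags and embedded tilesets with the same mechanism.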

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because: Each particle is a point sprite in a single mesh, so I can't use the scene graph's ability to sort transparent nodes. The system's node should still be sorted properly, though. Particle positions are computed on the shader from the initial velocity, acceleration and time. In order to sort the system I would have to perform all these computations on the CPU, which is something I want to avoid. Sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation. Alpha testing seems to be fast enough on GLES 2.0 and works fine for non-transparent but "masked" textures. Still, it's not enough for semi-transparent particles. How would you handle this?

    Read the article

  • Multi-Threaded Pipelined Game Engine Data Synchronization Questions

    - by Douglas
    Let's say I'm setting up a worker-pool-based game engine with pipelining, and that I have 4 stages in my pipeline: Stage 1: Physics; Stage 2: AI/Input; Stage 3: Game Logic; Stage 4: Rendering. Now let's say that the physics detects a collision between a bullet and a character in stage 1. Two frames later the game logic may choose to remove that bullet from the simulation, yet none of the other pipeline stages' copies of the data will get this information. How do this sort of thing, and other things like it, get handled? Do you generally apply changes like this to every pipeline stage's data at the end of a frame?

    Read the article
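
    One common answer to the question above is that stages never mutate shared state directly: they record commands (such as "despawn bullet 42") which are applied to every stage's copy of the world during the frame-boundary sync. A rough C# sketch; the types are illustrative.

      using System.Collections.Concurrent;

      class WorldState
      {
          public void Remove(int entityId) { /* drop the entity from this stage's copy */ }
      }

      abstract class WorldCommand
      {
          public abstract void Apply(WorldState state);
      }

      class DespawnCommand : WorldCommand
      {
          public int EntityId;
          public override void Apply(WorldState state) { state.Remove(EntityId); }
      }

      class FrameSync
      {
          private readonly ConcurrentQueue<WorldCommand> pending = new ConcurrentQueue<WorldCommand>();

          public void Enqueue(WorldCommand cmd) { pending.Enqueue(cmd); }

          // Run once per frame, after every stage has finished its work.
          public void ApplyTo(params WorldState[] stageCopies)
          {
              WorldCommand cmd;
              while (pending.TryDequeue(out cmd))
                  foreach (WorldState copy in stageCopies)
                      cmd.Apply(copy);
          }
      }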

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • Collision Detection with SAT: False Collision for Diagonal Movement Towards Vertical Tile-Walls?

    - by Macks
    Edit: Problem solved! Big thanks to Jonathan, who pointed me in the right direction. Sean describes the method I used in a different thread; big thanks to him too! :) Here is how I solved my problem: if a collision is registered by my SAT method, only fire the collision event on my character if there are no neighbouring solid tiles in the direction of the returned minimum translation vector. I'm developing my first tile-based 2D game with JavaScript. To learn the basics, I decided to write my own "game engine". I have successfully implemented collision detection using the separating axis theorem, but I've run into a problem that I can't quite wrap my head around. If I press the [up] and [left] arrow keys simultaneously, my character moves diagonally towards the upper left. If he hits a horizontal wall, he'll just keep moving in the x-direction. The same goes for [up] and [right] as well as downward-diagonal movements; it works as intended: http://i.stack.imgur.com/aiZjI.png Diagonal movement works fine for horizontal walls, for both left and right movement. However: this does not work for vertical walls. Instead of keeping his movement in the y-direction, he'll just stop as soon as he "enters" a new tile on the y-axis. So for some reason SAT thinks my character is colliding vertically with tiles from vertical walls: http://i.stack.imgur.com/XBEKR.png My character stops because he thinks that he is colliding vertically with tiles from the wall on the right. This only occurs when: moving in the top-right direction towards the right wall; moving in the top-left direction towards the left wall. Bottom-right and bottom-left movement work: the character keeps moving in the y-direction as intended. Is this inherent to the way SAT works, or is there a problem with my implementation? What can I do to solve my problem? Oh yeah, my character is displayed as a circle but he's actually a rectangular polygon for the collision detection. Thank you very much for your help.

    Read the article
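
    A rough sketch of the fix described in the edit above, expressed in C# over an assumed tile grid: after SAT reports a collision, the event only fires if the neighbouring tile in the direction of the minimum translation vector is open.

      using System;

      static class TileCollision
      {
          // solid[x, y] marks solid tiles; mtvX/mtvY is the minimum translation vector.
          public static bool ShouldResolve(int tileX, int tileY,
                                           float mtvX, float mtvY, bool[,] solid)
          {
              int nx = tileX + Math.Sign(mtvX);
              int ny = tileY + Math.Sign(mtvY);

              bool inBounds = nx >= 0 && ny >= 0 &&
                              nx < solid.GetLength(0) && ny < solid.GetLength(1);

              // Only resolve when the tile we would be pushed toward is empty;
              // an edge shared with another solid tile cannot really be hit.
              return !inBounds || !solid[nx, ny];
          }
      }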

  • Load Texture From Image Content In Runtime

    - by Austin Brunkhorst
    Basically I wrote a world editor for a game I'm working on. Looking ahead, I was brainstorming ways to save the created world including the tile-sets (this game will rely on a tile engine). I was hoping to save the image data of each tile-set in the same file containing the tile positions, etc. and load the image data into a Texture with XNA. Is it possible? Something like this is what I'm going for. Texture2D tileset = Content.LoadFromString<Texture2D>("png tileset data");

    Read the article
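
    XNA 4.0 has no Content.LoadFromString, but Texture2D.FromStream can build a texture from raw PNG/JPEG bytes at runtime, which fits the "image data stored inside the map file" idea. A small sketch; note that FromStream does not premultiply alpha the way the content pipeline does, so blending may need adjusting.

      using System.IO;
      using Microsoft.Xna.Framework.Graphics;

      static class TilesetLoader
      {
          // pngBytes would come from the map file (for example a base64 field decoded first).
          public static Texture2D Load(GraphicsDevice device, byte[] pngBytes)
          {
              using (var stream = new MemoryStream(pngBytes))
                  return Texture2D.FromStream(device, stream);
          }
      }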

  • What different ways are there to model restitution in a physics engine?

    - by Mikael Högström
    In my physics engine I give each body a restitution value between 0 and 1. When two bodies collide, there seem to be different views on how the restitution of the collision should be calculated. To me the most intuitive approach seems to be to take the average of the two, but some engines seem to take only the larger one. Are there other ways to do it? Also, could the closing velocity or some other parameter come into effect?

    Read the article
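
    For reference, the combine rules most often seen are average, maximum and multiply; engines in the PhysX/Unity family expose the choice as a per-material combine setting, so it is treated as a design decision rather than a single correct formula. A tiny C# sketch of the alternatives:

      using System;

      enum RestitutionMix { Average, Maximum, Multiply }

      static class Restitution
      {
          public static float Combine(float a, float b, RestitutionMix mix)
          {
              switch (mix)
              {
                  case RestitutionMix.Maximum:  return Math.Max(a, b);
                  case RestitutionMix.Multiply: return a * b;
                  default:                      return 0.5f * (a + b);
              }
          }
      }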

  • Relative cam movement and momentum on arbitrary surface

    - by user29244
    I have been working on a game for quite long, think sonic classic physics in 3D or tony hawk psx, with unity3D. However I'm stuck at the most fundamental aspect of movement. The requirement is that I need to move the character in mario 64 fashion (or sonic adventure) aka relative cam input: the camera's forward direction always point input forward the screen, left or right input point toward left or right of the screen. when input are resting, the camera direction is independent from the character direction and the camera can orbit the character when input are pressed the character rotate itself until his direction align with the direction the input is pointing at. It's super easy to do as long your movement are parallel to the global horizontal (or any world axis). However when you try to do this on arbitrary surface (think moving along complex curved surface) with the character sticking to the surface normal (basically moving on wall and ceiling freely), it seems harder. What I want is to achieve the same finesse of movement than in mario but on arbitrary angled surfaces. There is more problem (jumping and transitioning back to the real world alignment and then back on a surface while keeping momentum) but so far I didn't even take off the basics. So far I have accomplish moving along the curved surface and the relative cam input, but for some reason direction fail all the time (point number 3, the character align slowly to the input direction). Do you have an idea how to achieve that? Here is the code and some demo so far: The demo: https://dl.dropbox.com/u/24530447/flash%20build/litesonicengine/LiteSonicEngine5.html Camera code: using UnityEngine; using System.Collections; public class CameraDrive : MonoBehaviour { public GameObject targetObject; public Transform camPivot, camTarget, camRoot, relcamdirDebug; float rot = 0; //---------------------------------------------------------------------------------------------------------- void Start() { this.transform.position = targetObject.transform.position; this.transform.rotation = targetObject.transform.rotation; } void FixedUpdate() { //the pivot system camRoot.position = targetObject.transform.position; //input on pivot orientation rot = 0; float mouse_x = Input.GetAxisRaw( "camera_analog_X" ); // rot = rot + ( 0.1f * Time.deltaTime * mouse_x ); // wrapAngle( rot ); // //when the target object rotate, it rotate too, this should not happen UpdateOrientation(this.transform.forward,targetObject.transform.up); camRoot.transform.RotateAround(camRoot.transform.up,rot); //debug the relcam dir RelativeCamDirection() ; //this camera this.transform.position = camPivot.position; //set the camera to the pivot this.transform.LookAt( camTarget.position ); // } //---------------------------------------------------------------------------------------------------------- public float wrapAngle ( float Degree ) { while (Degree < 0.0f) { Degree = Degree + 360.0f; } while (Degree >= 360.0f) { Degree = Degree - 360.0f; } return Degree; } private void UpdateOrientation( Vector3 forward_vector, Vector3 ground_normal ) { Vector3 projected_forward_to_normal_surface = forward_vector - ( Vector3.Dot( forward_vector, ground_normal ) ) * ground_normal; camRoot.transform.rotation = Quaternion.LookRotation( projected_forward_to_normal_surface, ground_normal ); } float GetOffsetAngle( float targetAngle, float DestAngle ) { return ((targetAngle - DestAngle + 180)% 360) - 180; } 
//---------------------------------------------------------------------------------------------------------- void OnDrawGizmos() { Gizmos.DrawCube( camPivot.transform.position, new Vector3(1,1,1) ); Gizmos.DrawCube( camTarget.transform.position, new Vector3(1,5,1) ); Gizmos.DrawCube( camRoot.transform.position, new Vector3(1,1,1) ); } void OnGUI() { GUI.Label(new Rect(0,80,1000,20*10), "targetObject.transform.up : " + targetObject.transform.up.ToString()); GUI.Label(new Rect(0,100,1000,20*10), "target euler : " + targetObject.transform.eulerAngles.y.ToString()); GUI.Label(new Rect(0,100,1000,20*10), "rot : " + rot.ToString()); } //---------------------------------------------------------------------------------------------------------- void RelativeCamDirection() { float input_vertical_movement = Input.GetAxisRaw( "Vertical" ), input_horizontal_movement = Input.GetAxisRaw( "Horizontal" ); Vector3 relative_forward = Vector3.forward, relative_right = Vector3.right, relative_direction = ( relative_forward * input_vertical_movement ) + ( relative_right * input_horizontal_movement ) ; MovementController MC = targetObject.GetComponent<MovementController>(); MC.motion = relative_direction.normalized * MC.acceleration * Time.fixedDeltaTime; MC.motion = this.transform.TransformDirection( MC.motion ); //MC.transform.Rotate(Vector3.up, input_horizontal_movement * 10f * Time.fixedDeltaTime); } } Mouvement code: using UnityEngine; using System.Collections; public class MovementController : MonoBehaviour { public float deadZoneValue = 0.1f, angle, acceleration = 50.0f; public Vector3 motion ; //-------------------------------------------------------------------------------------------- void OnGUI() { GUILayout.Label( "transform.rotation : " + transform.rotation ); GUILayout.Label( "transform.position : " + transform.position ); GUILayout.Label( "angle : " + angle ); } void FixedUpdate () { Ray ground_check_ray = new Ray( gameObject.transform.position, -gameObject.transform.up ); RaycastHit raycast_result; Rigidbody rigid_body = gameObject.rigidbody; if ( Physics.Raycast( ground_check_ray, out raycast_result ) ) { Vector3 next_position; //UpdateOrientation( gameObject.transform.forward, raycast_result.normal ); UpdateOrientation( gameObject.transform.forward, raycast_result.normal ); next_position = GetNextPosition( raycast_result.point ); rigid_body.MovePosition( next_position ); } } //-------------------------------------------------------------------------------------------- private void UpdateOrientation( Vector3 forward_vector, Vector3 ground_normal ) { Vector3 projected_forward_to_normal_surface = forward_vector - ( Vector3.Dot( forward_vector, ground_normal ) ) * ground_normal; transform.rotation = Quaternion.LookRotation( projected_forward_to_normal_surface, ground_normal ); } private Vector3 GetNextPosition( Vector3 current_ground_position ) { Vector3 next_position; // //-------------------------------------------------------------------- // angle = 0; // Vector3 dir = this.transform.InverseTransformDirection(motion); // angle = Vector3.Angle(Vector3.forward, dir);// * 1f * Time.fixedDeltaTime; // // if(angle > 0) this.transform.Rotate(0,angle,0); // //-------------------------------------------------------------------- next_position = current_ground_position + gameObject.transform.up * 0.5f + motion ; return next_position; } } Some observation: I have the correct input, I have the correct translation in the camera direction ... 
but whenever I attempt to slowly lerp the direction of the character towards the input direction, all I get is a wild spin! Sadly, I also discovered that strafing to the right (immediately at the beginning, without moving forward) hits a major singularity that traps the character on the equator! I'm totally lost and crushed (I have already built a much more featured version which fails on the same aspect).

    Read the article

  • Is there any way to enable the HiDef graphics profile property on a Silverlight 5 3d Web App?

    - by Daniel
    I have an XNA Windows Game that uses the HiDef profile to load complex fbx and obj files. Trying to move it over to a Silverlight 3d Web App, Silverlight seems to only want to use the Reach profile, and I get an error that the Reach profile does not support a sufficient number of primitive draws per call. Is there any way to change to HiDef in Silverlight 5? It is not in the project properties and attempting to change it in mainpage.xaml.cs only gives me the option of setting it to Reach.

    Read the article

  • How to follow object on CatmullRomSplines at constant speed (e.g. train and train carriage)?

    - by Simon
    I have a CatmullRomSpline, and using the very good example at https://github.com/libgdx/libgdx/wiki/Path-interface-%26-Splines I have my object moving at an even pace over the spline. Using a simple train and carriage example, I now want to have the carriage follow the train at the same speed as the train (not jolting along as it does with my code below). This leads into my main questions: How can I make the carriage have the same constant speed as the train and make it non jerky (it has something to do with the derivative I think, I don't understand how that part works)? Why do I need to divide by the line length to convert to metres per second, and is that correct? It wasn't done in the linked examples? I have used the example I linked to above, and modified for my specific example: private void process(CatmullRomSpline catmullRomSpline) { // Render path with precision of 1000 points renderPath(catmullRomSpline, 1000); float length = catmullRomSpline.approxLength(catmullRomSpline.spanCount * 1000); // Render the "train" Vector2 trainDerivative = new Vector2(); Vector2 trainLocation = new Vector2(); catmullRomSpline.derivativeAt(trainDerivative, current); // For some reason need to divide by length to convert from pixel speed to metres per second but I do not // really understand why I need it, it wasn't done in the examples??????? current += (Gdx.graphics.getDeltaTime() * speed / length) / trainDerivative.len(); catmullRomSpline.valueAt(trainLocation, current); renderCircleAtLocation(trainLocation); if (current >= 1) { current -= 1; } // Render the "carriage" Vector2 carriageLocation = new Vector2(); float carriagePercentageCovered = (((current * length) - 1f) / length); // I would like it to follow at 1 metre behind carriagePercentageCovered = Math.max(carriagePercentageCovered, 0); catmullRomSpline.valueAt(carriageLocation, carriagePercentageCovered); renderCircleAtLocation(carriageLocation); } private void renderPath(CatmullRomSpline catmullRomSpline, int k) { // catMulPoints would normally be cached when initialising, but for sake of example... Vector2[] catMulPoints = new Vector2[k]; for (int i = 0; i < k; ++i) { catMulPoints[i] = new Vector2(); catmullRomSpline.valueAt(catMulPoints[i], ((float) i) / ((float) k - 1)); } SHAPE_RENDERER.begin(ShapeRenderer.ShapeType.Line); SHAPE_RENDERER.setColor(Color.NAVY); for (int i = 0; i < k - 1; ++i) { SHAPE_RENDERER.line((Vector2) catMulPoints[i], (Vector2) catMulPoints[i + 1]); } SHAPE_RENDERER.end(); } private void renderCircleAtLocation(Vector2 location) { SHAPE_RENDERER.begin(ShapeRenderer.ShapeType.Filled); SHAPE_RENDERER.setColor(Color.YELLOW); SHAPE_RENDERER.circle(location.x, location.y, .5f); SHAPE_RENDERER.end(); } To create a decent sized CatmullRomSpline for testing this out: Vector2[] controlPoints = makeControlPointsArray(); CatmullRomSpline myCatmull = new CatmullRomSpline(controlPoints, false); .... 
private Vector2[] makeControlPointsArray() { Vector2[] pointsArray = new Vector2[78]; pointsArray[0] = new Vector2(1.681817f, 10.379999f); pointsArray[1] = new Vector2(2.045455f, 10.379999f); pointsArray[2] = new Vector2(2.663636f, 10.479999f); pointsArray[3] = new Vector2(3.027272f, 10.700000f); pointsArray[4] = new Vector2(3.663636f, 10.939999f); pointsArray[5] = new Vector2(4.245455f, 10.899999f); pointsArray[6] = new Vector2(4.736363f, 10.720000f); pointsArray[7] = new Vector2(4.754545f, 10.339999f); pointsArray[8] = new Vector2(4.518181f, 9.860000f); pointsArray[9] = new Vector2(3.790908f, 9.340000f); pointsArray[10] = new Vector2(3.172727f, 8.739999f); pointsArray[11] = new Vector2(3.300000f, 8.340000f); pointsArray[12] = new Vector2(3.700000f, 8.159999f); pointsArray[13] = new Vector2(4.227272f, 8.520000f); pointsArray[14] = new Vector2(4.681818f, 8.819999f); pointsArray[15] = new Vector2(5.081817f, 9.200000f); pointsArray[16] = new Vector2(5.463636f, 9.460000f); pointsArray[17] = new Vector2(5.972727f, 9.300000f); pointsArray[18] = new Vector2(6.063636f, 8.780000f); pointsArray[19] = new Vector2(6.027272f, 8.259999f); pointsArray[20] = new Vector2(5.700000f, 7.739999f); pointsArray[21] = new Vector2(5.300000f, 7.440000f); pointsArray[22] = new Vector2(4.645454f, 7.179999f); pointsArray[23] = new Vector2(4.136363f, 6.940000f); pointsArray[24] = new Vector2(3.427272f, 6.720000f); pointsArray[25] = new Vector2(2.572727f, 6.559999f); pointsArray[26] = new Vector2(1.900000f, 7.100000f); pointsArray[27] = new Vector2(2.336362f, 7.440000f); pointsArray[28] = new Vector2(2.590908f, 7.940000f); pointsArray[29] = new Vector2(2.318181f, 8.500000f); pointsArray[30] = new Vector2(1.663636f, 8.599999f); pointsArray[31] = new Vector2(1.209090f, 8.299999f); pointsArray[32] = new Vector2(1.118181f, 7.700000f); pointsArray[33] = new Vector2(1.045455f, 6.880000f); pointsArray[34] = new Vector2(1.154545f, 6.100000f); pointsArray[35] = new Vector2(1.281817f, 5.580000f); pointsArray[36] = new Vector2(1.700000f, 5.320000f); pointsArray[37] = new Vector2(2.190908f, 5.199999f); pointsArray[38] = new Vector2(2.900000f, 5.100000f); pointsArray[39] = new Vector2(3.700000f, 5.100000f); pointsArray[40] = new Vector2(4.372727f, 5.220000f); pointsArray[41] = new Vector2(4.827272f, 5.220000f); pointsArray[42] = new Vector2(5.463636f, 5.160000f); pointsArray[43] = new Vector2(5.554545f, 4.700000f); pointsArray[44] = new Vector2(5.245453f, 4.340000f); pointsArray[45] = new Vector2(4.445455f, 4.280000f); pointsArray[46] = new Vector2(3.609091f, 4.260000f); pointsArray[47] = new Vector2(2.718181f, 4.160000f); pointsArray[48] = new Vector2(1.990908f, 4.140000f); pointsArray[49] = new Vector2(1.427272f, 3.980000f); pointsArray[50] = new Vector2(1.609090f, 3.580000f); pointsArray[51] = new Vector2(2.136363f, 3.440000f); pointsArray[52] = new Vector2(3.227272f, 3.280000f); pointsArray[53] = new Vector2(3.972727f, 3.340000f); pointsArray[54] = new Vector2(5.027272f, 3.360000f); pointsArray[55] = new Vector2(5.718181f, 3.460000f); pointsArray[56] = new Vector2(6.100000f, 4.240000f); pointsArray[57] = new Vector2(6.209091f, 4.500000f); pointsArray[58] = new Vector2(6.118181f, 5.320000f); pointsArray[59] = new Vector2(5.772727f, 5.920000f); pointsArray[60] = new Vector2(4.881817f, 6.140000f); pointsArray[61] = new Vector2(5.318181f, 6.580000f); pointsArray[62] = new Vector2(6.263636f, 7.020000f); pointsArray[63] = new Vector2(6.645453f, 7.420000f); pointsArray[64] = new Vector2(6.681817f, 8.179999f); pointsArray[65] = new 
Vector2(6.627272f, 9.080000f); pointsArray[66] = new Vector2(6.572727f, 9.699999f); pointsArray[67] = new Vector2(6.263636f, 10.820000f); pointsArray[68] = new Vector2(5.754546f, 11.479999f); pointsArray[69] = new Vector2(4.536363f, 11.599998f); pointsArray[70] = new Vector2(3.572727f, 11.700000f); pointsArray[71] = new Vector2(2.809090f, 11.660000f); pointsArray[72] = new Vector2(1.445455f, 11.559999f); pointsArray[73] = new Vector2(0.936363f, 11.280000f); pointsArray[74] = new Vector2(0.754545f, 10.879999f); pointsArray[75] = new Vector2(0.700000f, 9.939999f); pointsArray[76] = new Vector2(0.918181f, 9.620000f); pointsArray[77] = new Vector2(1.463636f, 9.600000f); return pointsArray; } Disclaimer: My math is very rusty, so please explain in lay mans terms....

    Read the article

  • Exporting .jar files with Jarsplice

    - by SystemNetworks
    Help! I'm using Mac OS X 10.8 Mountain Lion and Eclipse, with the Slick and LWJGL libraries. When I first exported the project it produced a .jar file. I followed some YouTube tutorials (different ones; they don't use Slick) and it worked for them. I don't know why it doesn't work for me. Should I include slick-util too? I didn't even use LWJGL directly, by the way. Please help!!! Jars I used (libraries): Slick, LWJGL (I didn't use it). Tutorials I followed: TheCodingUniverse (exporting), TheNewBoston (the code and set-up). Programs I used: Eclipse IDE, Java, Jarsplice. No warnings or errors were found; everything looks perfect. But nothing shows up on the screen whenever I run the jar (after Jarsplice). Help!!!

    Read the article

  • How do I set quad buffering with JOGL 2.0?

    - by tony danza
    I'm trying to create a 3d renderer for stereo vision with quad buffering with Processing/Java. The hardware I'm using is ready for this so that's not the problem. I had a stereo.jar library in jogl 1.0 working for Processing 1.5, but now I have to use Processing 2.0 and jogl 2.0 therefore I have to adapt the library. Some things are changed in the source code of Jogl and Processing and I'm having a hard time trying to figure out how to tell Processing I want to use quad buffering. Here's the previous code: public class Theatre extends PGraphicsOpenGL{ protected void allocate() { if (context == null) { // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them GLCapabilities capabilities = new GLCapabilities(); // Starting in release 0158, OpenGL smoothing is always enabled if (!hints[DISABLE_OPENGL_2X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(2); } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); } capabilities.setStereo(true); // get a rendering surface and a context for this canvas GLDrawableFactory factory = GLDrawableFactory.getFactory(); drawable = factory.getGLDrawable(parent, capabilities, null); context = drawable.createContext(null); // need to get proper opengl context since will be needed below gl = context.getGL(); // Flag defaults to be reset on the next trip into beginDraw(). settingsInited = false; } else { // The following three lines are a fix for Bug #1176 // http://dev.processing.org/bugs/show_bug.cgi?id=1176 context.destroy(); context = drawable.createContext(null); gl = context.getGL(); reapplySettings(); } } } This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying: PGraphicsOpenGL pg = ((PGraphicsOpenGL)g); pgl = pg.beginPGL(); gl = pgl.gl; glu = pg.pgl.glu; gl2 = pgl.gl.getGL2(); GLProfile profile = GLProfile.get(GLProfile.GL2); GLCapabilities capabilities = new GLCapabilities(profile); capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); capabilities.setStereo(true); GLDrawableFactory factory = GLDrawableFactory.getFactory(profile); If I go on, I should do something like this: drawable = factory.getGLDrawable(parent, capabilities, null); but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try this: gl2.glDrawBuffer(GL.GL_BACK_RIGHT); it obviously doesn't work :/ Thanks.

    Read the article

  • How would you code an AI engine to allow communication in any programming language?

    - by Tokyo Dan
    I developed a two-player iPhone board game. Computer players (AI) can either be local (in the game code) or remote, running on a server. In the second case, both the client and server code are written in Lua. On the server, the actual AI code is separate from the TCP socket code and the coroutine code (which spawns a separate instance of the AI for each connecting client). I want to isolate the AI code further, so that that part can be a module coded by anyone in their language of choice. How can I do this? What techniques/technology would enable communication between the Lua TCP socket/coroutine code and the AI module?

    Read the article
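
    One language-agnostic option for the isolation described above is to keep the wire format dumb: the Lua server and the AI module exchange newline-delimited JSON over the existing TCP socket, so the AI can be written in anything that can open a socket. A rough client-side sketch in C#; the message shapes are made up for illustration.

      using System.IO;
      using System.Net.Sockets;

      static class AiClient
      {
          public static void Run(string host, int port)
          {
              using (var client = new TcpClient(host, port))
              using (var stream = client.GetStream())
              using (var reader = new StreamReader(stream))
              using (var writer = new StreamWriter(stream) { AutoFlush = true })
              {
                  string line;
                  while ((line = reader.ReadLine()) != null)   // one JSON message per line
                  {
                      // Parse the board state from 'line', run the AI, reply with a move.
                      writer.WriteLine("{\"move\":{\"x\":3,\"y\":4}}");
                  }
              }
          }
      }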

  • How is constant buffer allocation handled in DX11?

    - by Marek
    I'm starting with DX11 and I'm not sure if I'm doing the things right. I want to have both pixel and vertex shader program in one file. Both use some shared and some different constant buffers. So it looks like this: Shader.fx cbuffer ForVS : register(b0) { float4x4 wvp; }; cbuffer ForVSandPS : register(b1) { float4 stuff; float4 stuff2; }; cbuffer ForVS2 : register(b2) { float4 stuff; float4 stuff2; }; cbuffer ForPS : register(b3) { float4 stuff; float4 stuff2; }; .... And in code I use mContext->VSSetConstantBuffers( 0, 1, bufferVS); mContext->VSSetConstantBuffers( 1, 1, bufferVS_PS); mContext->VSSetConstantBuffers( 2, 1, bufferVS2); mContext->PSSetConstantBuffers( 1, 1, bufferVS_PS); mContext->PSSetConstantBuffers( 3, 1, bufferPS); The numbering of buffers in PS is what bugs me, is it alright to bind random slots to shaders (in this example 1 and 3)? Does that mean it still uses just two buffers or does it initialize 0 and 2 buffer pointers to empty? Thank you.

    Read the article

  • Love2D engine for Lua; What about 3D?

    - by shadowprotocol
    Lua has been really awesome to learn, it's so simple. I really enjoy scripting languages, and I had an equally enjoyable time learning Python. The Love engine, http://love2d.org/, is really awesome, but I'm looking for something that can handle 3D as well. Is there anything that accommodates 3D in Lua? I'm still intrigued by the particle system of LOVE anyway and may just turn my idea into a 2D project with Particle lighting :) EDIT: I removed comments about Python - I want this to be a Lua topic. Thanks

    Read the article

  • How do I fix these compiler errors in Apple Crunch?

    - by BluFire
    I've been looking around and I finally got the full source code for a game called Apple-Crunch from Google Code. But when I put it into my project, the source code included so many errors in the class files such as: cannot be resolved into a type the constructor is undefined the method method() is undefined for the type Sprite class.java I downloaded the source directly from the command-line and noticed errors popping up on my project. Since I couldn't figure out how to import the actual folder into my workspace (it wouldn't show up on existing projects) I decided to copy and overwrite the folders into the project. The errors were still there so I looked at the class files and noticed that the classes with errors extended from RokonActivity. I then proceeded to add to the libs folder the Rokon library in hopes to fix the errors. Sadly it didn't work and now I don't what to do to fix the errors. How do I fix the errors without having to manually change the code? The source code should be fully functional so why are there errors?

    Read the article

  • Designing a spawning system

    - by Vlad
    I played this game recently: http://www.kongregate.com/games/JuicyBeast/knightmare-tower and I am amazed by the way different monsters are being spawned. I personally developed my own shooter game, and I added a time-based but also count-based spawning system. By count-based I mean that when there are 5 enemies on stage, spawning stops. But this is just one example. My question is how these spawning mechanisms are built: is there some pattern or theory behind them? Are there some online materials/pages where I can improve my knowledge? To summarize, let's say we have 6 types of monsters. I start the game and kill monsters of types 1, 2 and 3 all the time. Once I pass the first ceiling, like in the game above, monster type 4 appears. And so on. As I progress through the game, the same system of 6 monster types stays, but they become more and more resilient and dangerous, so I must also improve to be able to destroy the same monsters, now stronger. My question is simple: are there theories or writings on developing this type of intelligent system? Note: This is a general question, not tied to a specific game or to how exactly the game should work. I am capable of programming my own mechanisms, but I think I need some help. Thanks.

    Read the article
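
    A small sketch of one common pattern for this: a data-driven spawn table where each monster type has a stage (or time) at which it becomes eligible and a weight that controls how often it is picked; scaling the monsters' stats with progress is then handled separately. (C#; the numbers and the "stage" notion are placeholders.)

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class SpawnEntry
      {
          public string MonsterType;
          public int MinStage;     // e.g. type 4 only becomes eligible after the first ceiling
          public float Weight;     // relative chance once eligible
      }

      static class Spawner
      {
          public static string Pick(List<SpawnEntry> table, int stage, Random rng)
          {
              var eligible = table.Where(e => stage >= e.MinStage).ToList();
              float roll = (float)rng.NextDouble() * eligible.Sum(e => e.Weight);
              foreach (var e in eligible)
              {
                  roll -= e.Weight;
                  if (roll <= 0f) return e.MonsterType;
              }
              return eligible[eligible.Count - 1].MonsterType;   // numerical safety net
          }
      }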

  • HLSL How to flip geometry horizontally

    - by cubrman
    I want to flip my asymmetric 3d model horizontally in the vertex shader alongside an arbitrary plane parallel to the YZ plane. This should switch everything for the model from the left hand side to the right hand side (like flipping it in Photoshop). Doing it in pixel shader would be a huge computational cost (extra RT, more fullscreen samples...), so it must be done in the vertex shader. Once more: this is NOT reflection, i need to flip THE WHOLE MODEL. I thought I could simply do the following: Turn off culling. Run the following code in the vertex shader: input.Position = mul(input.Position, World); // World[3][0] holds x value of the model's pivot in the World. if (input.Position.x <= World[3][0]) input.Position.x += World[3][0] - input.Position.x; else input.Position.x -= input.Position.x - World[3][0]; ... The model is never drawn. Where am I wrong? I presume that messes up the index buffer. Can something be done about it? P.S. it's INSANELY HARD to format code here. Thanks to Panda I found my problem. SOLUTION: // Do thins before anything else in the vertex shader. Position.x *= -1; // To invert alongside the object's YZ plane.

    Read the article

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to my sprite's movement (to smooth things out), the collision no longer works. This is how things worked before extrapolation: However, after I implement my extrapolation, the collision routine breaks. I am assuming this is because it is acting upon the new coordinate that has been produced by the extrapolation routine (which is situated in my render call ). After I apply my extrapolation How to correct this behaviour? I've tried puting an extra collision check just after extrapolation - this does seem to clear up a lot of the problems but I've ruled this out because putting logic into my rendering is out of the question. I've also tried making a copy of the spritesX position, extrapolating that and drawing using that rather than the original, thus leaving the original intact for the logic to pick up on - this seems a better option, but it still produces some weird effects when colliding with walls. I'm pretty sure this also isn't the correct way to deal with this. I've found a couple of similar questions on here but the answers haven't helped me. This is my extrapolation code: public void onDrawFrame(GL10 gl) { //Set/Re-set loop back to 0 to start counting again loops=0; while(System.currentTimeMillis() > nextGameTick && loops < maxFrameskip){ SceneManager.getInstance().getCurrentScene().updateLogic(); nextGameTick+=skipTicks; timeCorrection += (1000d/ticksPerSecond) % 1; nextGameTick+=timeCorrection; timeCorrection %=1; loops++; tics++; } extrapolation = (float)(System.currentTimeMillis() + skipTicks - nextGameTick) / (float)skipTicks; render(extrapolation); } Applying extrapolation render(float extrapolation){ //This example shows extrapolation for X axis only. Y position (spriteScreenY is assumed to be valid) extrapolatedPosX = spriteGridX+(SpriteXVelocity*dt)*extrapolation; spriteScreenPosX = extrapolationPosX * screenWidth; drawSprite(spriteScreenX, spriteScreenY); } Edit As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with.... this has it's own problems. Firstly, regardless of the copying, when the sprite is moving, it's super-smooth, when it stops, it's wobbling slightly left/right - as it's still extrapolating it's position based on the time. Is this normal behavior and can we 'turn it off' when the sprite stops? I've tried having flags for left / right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help. If the user is pressing say, the right button and the sprite is moving right, when it hits a wall, if the user continues to hold the right button down, the sprite will keep animating to the right, while being stopped by the wall (therefore not actually moving), however because the right flag is still set and also because the collision routine is constantly moving the sprite out of the wall, it still appear to the code (not the player) that the sprite is still moving, and therefore extrapolation continues. So what the player would see, is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do it's thing....... Hope this help

    Read the article

  • From where do game engines measure the location of an object?

    - by Player
    I have started making my first game (a pong game) with Ruby (Gosu). I'm trying to detect the collision of two images using their locations, by comparing the location of one object (a ball) to another (a player). For example: if (@player.x - @ball.x).abs <= 184 && (@player.y - @ball.y).abs <= 40 then @ball.vx = -@ball.vx and @ball.vy = -@ball.vy. But my problem is that with these numbers the collision sometimes triggers when the ball is only near the player, even though the dimensions of the player are correct. So my question is: from where do the x values start counting? Is it from the center of the image or from its corner? (i.e. when you draw the image at a specific x, y, z, what are these values relative to on the image?)

    Read the article
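
    For the origin question above: in Gosu, Image#draw places the image's top-left corner at the given x/y, while Image#draw_rot centres the image on x/y by default, so which point @player.x refers to depends on the draw call used. Rather than hard-coded distance thresholds, an axis-aligned overlap test using the images' actual widths and heights is usually less error-prone; a small C# sketch of the idea (the Ruby translation is direct):

      static class Aabb
      {
          // All coordinates are top-left based; widths and heights come from the images.
          public static bool Overlaps(float ax, float ay, float aw, float ah,
                                      float bx, float by, float bw, float bh)
          {
              return ax < bx + bw && bx < ax + aw &&
                     ay < by + bh && by < ay + ah;
          }
      }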
