Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • Moving from XNA/C# to DirectX/C++, quite confused

    - by misiMe
    I made some games with XNA/C# for Windows Phone and Windows 8. Since XNA is dead and Visual Studio no longer supports it (I have to target Windows Phone 7.1 to build with XNA), I want to start learning something more "consistent in time" and improve my skills. I'm a little confused about the possibilities, because C++/DirectX alone seems difficult, so I found some high-level libraries to help: the DirectX Toolkit and Cocos2D. My questions are:
    - What happens when they "die", like XNA did?
    - Is the C++ approach more "professional" than C#/XNA, and why?
    - Is the C++ approach more portable?
    - Is the C++ approach more resistant to the passage of time?
    - Are there any performance considerations between the DirectX Toolkit and Cocos2D?
    I ask because every game software house in my country seems to look for skilled C++ programmers.

    Read the article

  • Moving in a diamond - enemy gets stuck

    - by Fibericon
    I have an enemy that I would like to move as follows:
    - Start at (0, 200, 0)
    - Move to (200, 0, 0)
    - Move to (0, -200, 0)
    - Move to (-200, 0, 0)
    - Move to start point, repeat as long as it remains active.
    This is what I've done to achieve that:
      if (position.X < 200 && position.Y > 0) {
          Velocity = new Vector3(1, -1, 0) * speed;
      } else if (position.X >= 200 && position.Y <= 0 && position.Y > -200) {
          Velocity = new Vector3(-1, -1, 0) * speed;
      } else if (position.X <= 0 && position.Y <= -200) {
          Velocity = new Vector3(-1, 1, 0) * speed;
      } else {
          Velocity = new Vector3(1, 1, 0) * speed;
      }
    It moves to the second point, but then gets stuck and appears to vibrate in place. How should I be doing this?
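    A common way to sidestep per-quadrant velocity logic is to steer toward an explicit list of waypoints and advance to the next one on arrival. A minimal XNA-style sketch (the waypoint array, arrival check and speed value are illustrative, not taken from the question):
      using Microsoft.Xna.Framework;

      public class DiamondMover {
          // Corners of the diamond, visited in order and wrapped around.
          private readonly Vector3[] waypoints = {
              new Vector3(0, 200, 0),
              new Vector3(200, 0, 0),
              new Vector3(0, -200, 0),
              new Vector3(-200, 0, 0),
          };
          private int current = 0;

          public Vector3 Position = new Vector3(0, 200, 0);
          public Vector3 Velocity;
          public float Speed = 60f; // units per second (illustrative value)

          public void Update(float elapsedSeconds) {
              Vector3 toTarget = waypoints[current] - Position;
              float step = Speed * elapsedSeconds;
              if (toTarget.Length() <= step) {
                  // Arrived (or would overshoot this frame): snap to the corner and aim at the next one.
                  Position = waypoints[current];
                  current = (current + 1) % waypoints.Length;
                  return;
              }
              // Otherwise keep moving straight toward the current corner.
              Velocity = Vector3.Normalize(toTarget) * Speed;
              Position += Velocity * elapsedSeconds;
          }
      }
    Because the target is a point rather than a region, there is no boundary condition for the enemy to oscillate around.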

    Read the article

  • Blender - creating bones from transform matrices

    - by user975135
    Notice: this is for the Blender 2.5/2.6 API. Back in the old days of the Blender 2.4 API, you could easily create a bone from a transform matrix in your 3D file, as EditBones had an attribute named "matrix": an armature-space matrix you could access and modify. The new 2.5+ API still has the "matrix" attribute on EditBones, but for some unknown reason it is now read-only. So how do you create EditBones from transform matrices? I could only find one thing: a new transform() function, which also takes a Matrix and "transforms the bones head, tail, roll and envelope (when the matrix has a scale component)". Perfect, but you already need to have some values (loc/rot/scale) on your bone, otherwise transforming with a matrix like this gives you nothing; your bone will be a zero-sized bone, which Blender deletes. If you create default bone values first, like this:
      bone.tail = mathutils.Vector([0, 1, 0])
    then transform() will work on your bone and it might seem to create correct bones. But setting a tail position actually generates a matrix itself, so after transform() you don't get the matrix from your model file on your EditBone, but the product of your matrix with the bone's existing one. This can easily be proven by comparing the matrices read from the file with EditBone.matrix. Again, it might look correct in Blender, but now export your model and you will see that your animations are messed up, because the bind-pose rotations of the bones are wrong. I've tried to find an alternative way to assign the transformation matrix from my file to my EditBone, with no luck.

    Read the article

  • Remove gravity from single body

    - by Siddharth
    I have multiple bodies in my AndEngine game world. All of them are affected by gravity, but I want one specific body not to be affected by it. After some research I found that I should use the body.setGravityScale(0) method for this, but I can't find that method in my AndEngine physics extension, so please provide guidance on how to get access to it. Any other approach to the problem would also be welcome. Thank you! I currently apply the following code for reverse gravity:
      final Vector2 vec = new Vector2(0, -SensorManager.GRAVITY_EARTH * bulletBody.getMass());
      bulletBody.applyForce(vec, bulletBody.getWorldCenter());

    Read the article

  • Units issue when exporting from 3DS Max to XNA

    - by miguelSantirso
    I am working on an XNA game where we have defined that 1 XNA unit equals 1 meter. I then set meters as the system unit in 3DS Max and set the units to meters in the FBX exporter. However, when I export my models, they are much bigger in the game. Am I missing something? What should I do to avoid problems with my units? Investigating the FBX file, I noticed that it has two values called UnitScaleFactor and OriginalUnitScaleFactor. They are both 100 when I export the files... and if I manually change UnitScaleFactor to 1, it works fine :S

    Read the article

  • Math > Logic for a Logarithmic Score Meter

    - by oodavid
    I'm trying to implement a score meter whereby I specify a maximum value (say 15,000) and I can render values on it in a logarithmic manner, i.e.:
      +------+---+--+-++    +------+---+--+-++
      |==              |    |======          |
      +------+---+--+-++    +------+---+--+-++
            200 pts              1,000 pts
      +------+---+--+-++    +------+---+--+-++
      |=============   |    |================|
      +------+---+--+-++    +------+---+--+-++
           5,000 pts             15,000 pts
    The upper bound needs to be variable, and I need to be able to convert a score to a percentage. Using the above mockup as an example:
      score2pct(15000, 200)   = 0.2
      score2pct(15000, 1000)  = 0.4
      score2pct(15000, 5000)  = 0.8
      score2pct(15000, 15000) = 1
    Does anyone have any pointers for me?
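    One possible mapping, sketched below rather than taken from the question: fix a lower reference score s0 at which the meter reads empty and interpolate on a logarithmic scale up to the variable maximum. The s0 constant is an assumption and would need tuning (possibly together with an exponent on the ratio) to approximate the exact sample percentages above:
      using System;

      public static class ScoreMeter {
          // s0 is the score at which the meter reads empty - an assumed tuning constant,
          // not something given in the question. Requires maxScore > s0.
          public static double Score2Pct(double maxScore, double score, double s0 = 100.0) {
              if (score <= s0) return 0.0;
              double pct = Math.Log(score / s0) / Math.Log(maxScore / s0);
              return Math.Min(pct, 1.0);
          }
      }
    This guarantees Score2Pct(max, max) = 1, rises quickly for small scores and flattens toward the top, which matches the log-spaced tick marks in the mockup.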

    Read the article

  • Logic in Entity Components Systems

    - by aaron
    I'm making a game that uses an entity/component architecture, basically a port of the Artemis framework to C++. The problem arises when I try to make a PlayerControllerComponent. My original idea was this:
      class PlayerControllerComponent : Component {
      public:
          virtual void update() = 0;
      };

      class FpsPlayerControllerComponent : PlayerControllerComponent {
      public:
          void update() {
              // handle input
          }
      };
    and to have a system that updates PlayerControllerComponents, but I found out that the Artemis framework does not look at subclasses the way I thought it would. So, all in all, my question is: should I make the framework aware of subclasses, or should I add a new Component-like object that is used for logic?

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating the size with this formula, to make particle sizes proportional to the distance to the camera:
      gl_PointSize = in_point_size * some_factor / distance_to_camera
    But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach to organising per-particle data? Should I use buffers or vertex arrays, or is there no difference because I have to update all particles' data each time? About the particle simulation: where should I do it, on the CPU or in the vertex processor? Something tells me that a mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without the particle size limitation, for mobile devices, would be appreciated. (The animation of the camera passing through particles should look realistic.)

    Read the article

  • slick2d missiles

    - by kirchhoff
    Hey, I'm making a game in Java with Slick2D and I want to create planes which shoot:
      int maxBullets = 40;
      static int bullet = 0;
      Missile missile[] = new Missile[maxBullets];
    I want to create/move my missiles in the most efficient way, and I would appreciate your advice:
      public void shoot() throws SlickException {
          if (bullet < maxBullets) {
              if (missile[bullet] != null) {
                  missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
              } else {
                  missile[bullet] = new Missile("resources/missile.png", plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
              }
          } else {
              bullet = 0;
              missile[bullet].resetLocation(plane.getCentreX(), plane.getCentreY(), plane.image.getRotation());
          }
          bullet++;
      }
    I created the resetLocation method in my Missile class in order to avoid loading the resource again. Is that correct? In the update method I've got this to move all the missiles:
      if (bullet > 0 && bullet < maxBullets) {
          float hyp = 0.4f * delta;
          if (bullet == 1) {
              missile[0].move(hyp);
          } else {
              for (int x = 0; x < bullet; x++) {
                  missile[x].move(hyp);
              }
          }
      }

    Read the article

  • How to convert pitch and yaw to x, y, z rotations?

    - by Aaron Anodide
    I'm a beginner using XNA to try and make a 3D Asteroids game. I'm really close to having my space ship drive around as if it had thrusters for pitch and yaw. The problem is that I can't quite figure out how to translate the rotations; for instance, when I pitch forward 45 degrees and then start to turn, rotation should be applied to all three axes to get the "diagonal yaw" - right? I thought I had it right with the calculations below, but they cause a partly pitched-forward ship to wobble instead of turn... :( So my question is: how do you calculate the X, Y and Z rotations for an object in terms of pitch and yaw? Here are the current (almost working) calculations for the rotation acceleration:
      float accel = .75f;
      // Thrust +Y / Forward
      if (currentKeyboardState.IsKeyDown(Keys.I)) {
          this.ship.AccelerationY += (float)Math.Cos(this.ship.RotationZ) * accel;
          this.ship.AccelerationX += (float)Math.Sin(this.ship.RotationZ) * -accel;
          this.ship.AccelerationZ += (float)Math.Sin(this.ship.RotationX) * accel;
      }
      // Rotation +Z / Yaw
      if (currentKeyboardState.IsKeyDown(Keys.J)) {
          this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * accel;
          this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * accel;
          this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * accel;
      }
      // Rotation -Z / Yaw
      if (currentKeyboardState.IsKeyDown(Keys.K)) {
          this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * -accel;
          this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * -accel;
          this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * -accel;
      }
      // Rotation +X / Pitch
      if (currentKeyboardState.IsKeyDown(Keys.F)) {
          this.ship.RotationAccelerationX += accel;
      }
      // Rotation -X / Pitch
      if (currentKeyboardState.IsKeyDown(Keys.D)) {
          this.ship.RotationAccelerationX -= accel;
      }
    I'm combining that with drawing code that applies the rotation to the model:
      public void Draw(Matrix world, Matrix view, Matrix projection, TimeSpan elsapsedTime) {
          float seconds = (float)elsapsedTime.TotalSeconds;
          // update velocity based on acceleration
          this.VelocityX += this.AccelerationX * seconds;
          this.VelocityY += this.AccelerationY * seconds;
          this.VelocityZ += this.AccelerationZ * seconds;
          // update position based on velocity
          this.PositionX += this.VelocityX * seconds;
          this.PositionY += this.VelocityY * seconds;
          this.PositionZ += this.VelocityZ * seconds;
          // update rotational velocity based on rotational acceleration
          this.RotationVelocityX += this.RotationAccelerationX * seconds;
          this.RotationVelocityY += this.RotationAccelerationY * seconds;
          this.RotationVelocityZ += this.RotationAccelerationZ * seconds;
          // update rotation based on rotational velocity
          this.RotationX += this.RotationVelocityX * seconds;
          this.RotationY += this.RotationVelocityY * seconds;
          this.RotationZ += this.RotationVelocityZ * seconds;

          Matrix translation = Matrix.CreateTranslation(PositionX, PositionY, PositionZ);
          Matrix rotation = Matrix.CreateRotationX(RotationX) *
                            Matrix.CreateRotationY(RotationY) *
                            Matrix.CreateRotationZ(RotationZ);
          model.Root.Transform = rotation * translation * world;
          model.CopyAbsoluteBoneTransformsTo(boneTransforms);

          foreach (ModelMesh mesh in model.Meshes) {
              foreach (BasicEffect effect in mesh.Effects) {
                  effect.World = boneTransforms[mesh.ParentBone.Index];
                  effect.View = view;
                  effect.Projection = projection;
                  effect.EnableDefaultLighting();
              }
              mesh.Draw();
          }
      }
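    A sketch of one common alternative (not the poster's code; the class and method names are illustrative): instead of integrating separate Euler angles and rebuilding CreateRotationX/Y/Z each frame, accumulate the ship's orientation in a single matrix and apply each frame's pitch/yaw as incremental rotations about the ship's current local axes. This produces the "diagonal yaw" automatically and avoids the wobble that comes from composing independently integrated angles:
      using Microsoft.Xna.Framework;

      public class ShipOrientation {
          // Accumulated orientation; replaces the separate RotationX/Y/Z angles.
          public Matrix Orientation = Matrix.Identity;

          public void ApplyPitchYaw(float pitchRadians, float yawRadians) {
              // The ship's local axes expressed in world space (XNA uses row vectors).
              Vector3 right = Vector3.Transform(Vector3.Right, Orientation);
              Vector3 up = Vector3.Transform(Vector3.Up, Orientation);

              // Compose this frame's incremental rotations onto the current orientation.
              Orientation = Orientation
                          * Matrix.CreateFromAxisAngle(right, pitchRadians)
                          * Matrix.CreateFromAxisAngle(up, yawRadians);
          }
      }
    In the Draw method above, the rotation matrix would then simply be this accumulated Orientation (re-orthonormalised occasionally, or stored as a quaternion, to limit numerical drift) instead of the product of CreateRotationX/Y/Z.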

    Read the article

  • What are the reasons for MMOs to have level caps [on hold]

    - by SamStephens
    In many MMOs, players' character progression is artificially capped, e.g. at level 60, 90, 100 or whatever. Why do MMOs have these level caps in the first place? Why not just allow characters to continue to arbitrary levels with a mathematically designed leveling system that keeps the leveling experience interesting and endless? Answers to this question may help us to see the reasons behind the feature and decide if and how it should be implemented in our MMOs.

    Read the article

  • DOT implementation

    - by Denis Ermolin
    I have some DOT (damage over time) implementation problems. My game runs at 30 FPS. The current implementation is: let's say the hero casts a spell which does 1 damage per second. So on every frame I do (pseudo code):
      damage_done = getRandomDamage() * delta_time;
    I accumulate the damage, and when it becomes more than 0 I subtract the rounded damage from the current health, and so on. With 30 FPS and 1 DPS the per-frame value is 1/30 = 0.033... We know that floats are not precise enough to sum thirty such repeating decimals and get exactly 1 at the end. But HP is a discrete value, and that's why 1 DPS will not deal 1 damage after 1 second: the value will be 0.9999... It's not a big deal when you have 100,000 DPS - +/- 1 damage will not be noticeable. But what if I have 1.5 DPS? How do modern RPGs implement DOTs?
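    A sketch of one common fix (assuming per-frame updates with a float delta; the names are illustrative, not from the question): accumulate fractional damage and apply only the whole-number part to the discrete HP, carrying the remainder forward so nothing is lost to rounding:
      public class DamageOverTime {
          private float accumulator;          // fractional damage carried between frames
          public float DamagePerSecond = 1f;  // illustrative value

          // Call once per frame; returns the whole damage to subtract from HP this frame.
          public int Tick(float deltaSeconds) {
              accumulator += DamagePerSecond * deltaSeconds;
              int whole = (int)accumulator;    // whole points ready to be applied
              accumulator -= whole;            // keep the fractional remainder
              return whole;
          }
      }
    Over any one-second window the applied damage then sums to DamagePerSecond within one point, regardless of frame rate, and fractional rates such as 1.5 DPS work the same way.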

    Read the article

  • Using AppendBuffers in Unity for terrain generation

    - by Wardy
    Like many others, I figured I would try and make the most of the monster processing power of the GPU, but I'm having trouble getting the basics in place. CPU code:
      using UnityEngine;
      using System.Collections;

      public class Test : MonoBehaviour {
          public ComputeShader Generator;
          public MeshTopology Topology;

          void OnEnable() {
              var computedMeshPoints = ComputeMesh();
              CreateMeshFrom(computedMeshPoints);
          }

          private Vector3[] ComputeMesh() {
              var size = (32 * 32) * 4; // 4 points added for each x,z pos
              var buffer = new ComputeBuffer(size, 12, ComputeBufferType.Append);
              Generator.SetBuffer(0, "vertexBuffer", buffer);
              Generator.Dispatch(0, 1, 1, 1);
              var results = new Vector3[size];
              buffer.GetData(results);
              buffer.Dispose();
              return results;
          }

          private void CreateMeshFrom(Vector3[] generatedPoints) {
              var filter = GetComponent<MeshFilter>();
              var renderer = GetComponent<MeshRenderer>();
              if (generatedPoints.Length > 0) {
                  var mesh = new Mesh { vertices = generatedPoints };
                  var colors = new Color[generatedPoints.Length];
                  var indices = new int[generatedPoints.Length];
                  // TODO: build this differently based on the topology of the mesh being generated
                  for (int i = 0; i < indices.Length; i++) {
                      indices[i] = i;
                      colors[i] = Color.blue;
                  }
                  mesh.SetIndices(indices, Topology, 0);
                  mesh.colors = colors;
                  mesh.RecalculateNormals();
                  mesh.Optimize();
                  mesh.RecalculateBounds();
                  filter.sharedMesh = mesh;
              } else {
                  filter.sharedMesh = null;
              }
          }
      }
    GPU code:
      #pragma kernel Generate

      AppendStructuredBuffer<float3> vertexBuffer : register(u0);

      void genVertsAt(uint2 xzPos) {
          // TODO: put some height generation code here.
          // could even run marching cubes / dual contouring code.
          float3 corner1 = float3( xzPos[0],     0, xzPos[1]     );
          float3 corner2 = float3( xzPos[0] + 1, 0, xzPos[1]     );
          float3 corner3 = float3( xzPos[0],     0, xzPos[1] + 1 );
          float3 corner4 = float3( xzPos[0] + 1, 0, xzPos[1] + 1 );
          vertexBuffer.Append(corner1);
          vertexBuffer.Append(corner2);
          vertexBuffer.Append(corner3);
          vertexBuffer.Append(corner4);
      }

      [numthreads(32, 1, 32)]
      void Generate (uint3 threadId : SV_GroupThreadID, uint3 groupId : SV_GroupID) {
          uint2 currentXZ = unint2( groupId.x * 32 + threadId.x, groupId.z * 32 + threadId.z);
          genVertsAt(currentXZ);
      }
    Can anyone explain why, when I call buffer.GetData(results) on the CPU after the compute dispatch call, my buffer is full of Vector3(0,0,0)? I'm not expecting any y values yet, but I would expect a bunch of thread indexes in the x,z values of the Vector3 array. I'm not getting any errors in any of this code, which suggests it's correct syntax-wise, but maybe the issue is a logical bug. Also: yes, I know I'm generating 4,000 Vector3's and then basically round-tripping them. However, the purpose of this code is purely to learn how round-tripping works between the CPU and GPU in Unity.

    Read the article

  • Drawing a random x,y grid of objects within a perspective

    - by T Reddy
    I'm wrapping my head around OpenGL ES 2.0 and I think I'm trying to do something very simple, but the math may be eluding me. I created a simple, flat-ish cylinder in Blender that is 2 units in diameter. I want to create an arbitrary grid of these, edge to edge (think of a checker board). I'm using a 3D perspective with GLKit:
      CGSize size = [[self view] bounds].size;
      _projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f),
                                                    size.width / size.height, 0.1f, 100.0f);
    I managed to manually get all of these cylinders drawn on the screen just fine. However, I would like to understand how I can programmatically "fit" all of the cylinders on the screen at the same time, given the camera location, screen size, cylinder diameter, and the number of rows/columns. The net effect is that for small grids (e.g. 5x5) the objects are closer to the camera, and for large grids (e.g. 30x30) the objects are farther away. In either case, all of the cylinders are visible.
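    A sketch of the usual derivation for the camera distance (the symbols are mine, not from the question): for an N x N grid of tiles of diameter d centred in front of the camera, the half-extent to fit is N*d/2; with vertical field of view fov_y and aspect ratio a (width/height), the distance from the camera to the grid plane must satisfy both the horizontal and vertical constraints:
      \[
      \mathrm{dist} \;=\; \max\!\left(
          \frac{N d / 2}{a \,\tan(\mathrm{fov}_y / 2)},\;
          \frac{N d / 2}{\tan(\mathrm{fov}_y / 2)}
      \right)
      \]
    With fov_y = 45 degrees and d = 2 as in the question, and assuming a landscape aspect ratio, a 5x5 grid fits at roughly 5 / tan(22.5 deg) = 12.1 units, while a 30x30 grid needs about 72.4 (plus whatever margin looks good).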

    Read the article

  • Keypress detection won't work after seemingly unrelated code change

    - by LukeZaz
    I'm trying to have the Enter key cause a new 'map' to generate for my game, but for whatever reason, after implementing full-screen, the input check won't work anymore. I tried removing the new code and only pressing one key at a time, but it still won't work. Here's the check code and the method it uses, along with the newMap method:
      public class Game1 : Microsoft.Xna.Framework.Game {
          // ...
          protected override void Update(GameTime gameTime) {
              // ...
              // Check if Enter was pressed - if so, generate a new map
              if (CheckInput(Keys.Enter, 1)) {
                  blocks = newMap(map, blocks, console);
              }
              // ...
          }

          // Method: Checks if a key is/was pressed
          public bool CheckInput(Keys key, int checkType) {
              // Get current keyboard state
              KeyboardState newState = Keyboard.GetState();
              bool retType = false; // Return type
              if (checkType == 0) {
                  // Check Type: Is key currently down?
                  if (newState.IsKeyDown(key)) { retType = true; } else { retType = false; }
              } else if (checkType == 1) {
                  // Check Type: Was the key pressed?
                  if (newState.IsKeyDown(key)) {
                      if (!oldState.IsKeyDown(key)) {
                          // Key was just pressed
                          retType = true;
                      } else {
                          // Key was already pressed, return false
                          retType = false;
                      }
                  }
              }
              // Save keyboard state
              oldState = newState;
              // Return result
              if (retType == true) { return true; } else { return false; }
          }

          // Method: Generate a new map
          public List<Block> newMap(Map map, List<Block> blockList, Console console) {
              // Create new map block coordinates
              List<Vector2> positions = new List<Vector2>();
              positions = map.generateMap(console);
              // Clear list and reallocate memory previously used up by it
              blockList.Clear();
              blockList.TrimExcess();
              // Add new blocks to the list using positions created by generateMap()
              foreach (Vector2 pos in positions) {
                  blockList.Add(new Block() { Position = pos, Texture = dirtTex });
              }
              // Return modified list
              return blockList;
          }
          // ...
      }
    and the generateMap code:
      // Generate a list of Vector2 positions for blocks
      public List<Vector2> generateMap(Console console, int method = 0) {
          ScreenTileWidth = gDevice.Viewport.Width / 16;
          ScreenTileHeight = gDevice.Viewport.Height / 16;
          maxHeight = gDevice.Viewport.Height;
          List<Vector2> blockLocations = new List<Vector2>();
          if (useScreenSize == true) {
              Width = ScreenTileWidth;
              Height = ScreenTileHeight;
          } else {
              maxHeight = Height;
          }
          // For debugging purposes, startHeight is set to a hopefully-unreachable value -
          // if it returns this, something is wrong
          int startHeight = -500;

          // Methods of land generation

          /// <summary>
          /// Third version land generation
          /// Generates a base land height as the second version does, but also generates
          /// a 'max change' value which determines how much the land can raise or lower by,
          /// which it now does by a random amount during generation
          /// </summary>
          if (method == 0) {
              // Get the land height
              startHeight = rnd.Next(1, maxHeight);
              int maxChange = rnd.Next(1, 5); // Amount ground will raise/lower by
              int curHeight = startHeight;
              for (int w = 0; w < Width; w++) {
                  // Run a chance to lower/raise ground level
                  int changeBy = rnd.Next(1, maxChange);
                  int doChange = rnd.Next(0, 3);
                  if (doChange == 1 && !(curHeight <= (1 + maxChange))) {
                      curHeight = curHeight - changeBy;
                  } else if (doChange == 2 && !(curHeight >= (29 - maxChange))) {
                      curHeight = curHeight + changeBy;
                  }
                  for (int h = curHeight; h < Height; h++) {
                      // Location variables
                      float x = w * 16;
                      float y = h * 16;
                      blockLocations.Add(new Vector2(x, y));
                  }
              }
              console.newMsg("[INFO] Cur, height change maximum: " + maxChange.ToString());
          }
          /// <summary>
          /// Second version land generator
          /// Generates a solid mass of land starting at a random height derived from
          /// either screen height or a provided height value
          /// </summary>
          else if (method == 1) {
              // Get the land height
              startHeight = rnd.Next(0, 30);
              for (int w = 0; w < Width; w++) {
                  for (int h = startHeight; h < ScreenTileHeight; h++) {
                      // Location variables
                      float x = w * 16;
                      float y = h * 16;
                      // Add a tile at set location
                      blockLocations.Add(new Vector2(x, y));
                  }
              }
          }
          /// <summary>
          /// First version land generator
          /// Generates land completely randomly, either across the screen or in a box
          /// set by the Width and Height values
          /// </summary>
          else {
              // For each tile in the map...
              for (int w = 0; w < Width; w++) {
                  for (int h = 0; h < Height; h++) {
                      // Location variables
                      float x = w * 16;
                      float y = h * 16;
                      // ...decide whether or not to place a tile...
                      if (rnd.Next(0, 2) == 1) {
                          // ...and if so, add a tile at that location.
                          blockLocations.Add(new Vector2(x, y));
                      }
                  }
              }
          }
          console.newMsg("[INFO] Cur, base height: " + startHeight.ToString());
          return blockLocations;
      }
    I never touched any of the above code when it broke - changing keys doesn't seem to fix it either. Despite this, I have camera movement set up inside another Game1 method that uses WASD, and it works perfectly. All I did was add a few lines of code here:
      private int BackBufferWidth = 1280; // Added these variables
      private int BackBufferHeight = 800;

      public Game1() {
          graphics = new GraphicsDeviceManager(this);
          graphics.PreferredBackBufferWidth = BackBufferWidth;   // and this
          graphics.PreferredBackBufferHeight = BackBufferHeight; // this
          Content.RootDirectory = "Content";
          this.graphics.IsFullScreen = true;                     // and this
      }
    When I try adding a console line to be printed when the key is pressed, it seems the if is never even triggered, despite the correct key being pressed.
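    One thing worth checking (a hedged guess, not a confirmed diagnosis): CheckInput overwrites oldState on every call, so if any other code in the same frame also calls it - the WASD camera handling, for instance - the "was just pressed" comparison for Enter is made against a state captured moments earlier in the same frame rather than in the previous frame. A sketch of the usual pattern, polling the keyboard once per Update and saving the old state once at the end:
      // Poll once per frame; all key checks share the same pair of states.
      private KeyboardState currentKeyboard;
      private KeyboardState previousKeyboard;

      protected override void Update(GameTime gameTime) {
          currentKeyboard = Keyboard.GetState();

          // "Was just pressed" = down now, up last frame.
          if (currentKeyboard.IsKeyDown(Keys.Enter) && previousKeyboard.IsKeyUp(Keys.Enter)) {
              blocks = newMap(map, blocks, console);
          }

          // ... other input handling reads currentKeyboard/previousKeyboard ...

          previousKeyboard = currentKeyboard; // save once, at the very end
          base.Update(gameTime);
      }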

    Read the article

  • Precision loss when transforming from cartesian to isometric

    - by Justin Skiles
    My goal is to display a tile map in isometric projection. The tile map has 25 tiles across and 25 tiles down, and each tile is 32x32. See below for how I'm accomplishing this.
    World space to screen space rotation (45 degrees): using a 2D rotation matrix, I do the following:
      double rotation = Math.PI / 4;
      double rotatedX = (tileWorldX * Math.Cos(rotation)) - (tileWorldY * Math.Sin(rotation));
      double rotatedY = (tileWorldX * Math.Sin(rotation)) + (tileWorldY * Math.Cos(rotation));
    World space to screen space scale (Y-axis reduced by 50%): here I simply scale down the Y value by a factor of 0.5.
    Problem: it works, kind of. There are some tiny 1-2 px gaps between some of the tiles when rendering. I think there's some precision loss somewhere, or I'm not understanding how to get these tiles to fit together perfectly. I'm not truncating or converting my values to non-decimal types until I absolutely have to (when I pass them to the render method, which only takes integers). I'm not sure how to guarantee pixel-perfect rendering precision when I'm rotating and scaling at a higher level of precision. Any advice? Do I need to supply more information?
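    One approach that avoids rounding rotated floats altogether is to map tile indices directly to integer screen coordinates, so neighbouring tiles share exact pixel edges. A sketch (the method name and the assumption that each tile's on-screen diamond footprint is tileW x tileH with a 2:1 ratio are mine, not from the question):
      using Microsoft.Xna.Framework;

      public static class IsoMapper {
          // Maps a (col, row) tile index straight to integer screen coordinates.
          // Because everything stays in integers, adjacent tiles line up with no seams.
          public static Point TileToScreen(int col, int row, int tileW, int tileH) {
              int screenX = (col - row) * (tileW / 2);
              int screenY = (col + row) * (tileH / 2);
              return new Point(screenX, screenY);
          }
      }
    Any camera scrolling offset can then be added after the integer mapping, keeping the per-tile positions exact.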

    Read the article

  • Move projectile in direction the gun is facing

    - by Manderin87
    I am attempting to have a projectile follow the direction a gun is facing. When using the following code, I am unable to make the projectile go in the right direction:
      float speed = .5f;
      float dX = (float) -Math.cos(Math.toRadians(degree)) * speed;
      float dY = (float) Math.sin(Math.toRadians(degree)) * speed;
    Can anyone tell me what I am doing wrong? degree is the direction the gun is facing, in degrees.
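    A sketch of the convention to check against (the axis assumptions are mine, and it is written in C#/XNA style even though the question is Java; the math is identical): with 0 degrees pointing along +X and angles increasing clockwise on a screen whose Y axis grows downward, the facing vector is plain cos/sin with no negation. A sign flip or a 90-degree offset is only needed if the sprite's zero angle or the Y axis points the other way:
      using System;
      using Microsoft.Xna.Framework;

      public static class Aim {
          // Velocity along a facing angle, assuming 0 degrees = +X ("east") and
          // angles increasing clockwise with screen Y growing downward (assumptions).
          public static Vector2 VelocityFromAngle(float degrees, float speed) {
              float radians = MathHelper.ToRadians(degrees);
              return new Vector2((float)Math.Cos(radians) * speed,
                                 (float)Math.Sin(radians) * speed);
          }
          // If the sprite art points up at 0 degrees, or Y grows upward, use
          // (degrees - 90) or negate the Y component instead of negating the cosine.
      }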

    Read the article

  • Point[] and Tri "could not be found"

    - by Craig Dannehl
    Hi, I'm trying to learn how to load a .obj file using OpenTK in Windows Forms. I have seen a lot of examples out there, but I see that almost everyone uses List and Point[]. The code examples show these highlighted as if the IDE knows what they are, for example:
      List<Tri> tris = new List<Tri>();
    but mine just returns "The type or namespace name 'Tri' could not be found". Is there an include I need to add, or a using I am missing? Currently I have this:
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.IO;
      using System.Drawing;
      using OpenTK;
      using OpenTK.Graphics;
      using OpenTK.Graphics.OpenGL;
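    For what it's worth, Tri is usually not an OpenTK type at all but a small helper declared inside each tutorial's own code, which is why no using directive brings it in. A purely illustrative sketch of such a type (the fields are assumptions, not OpenTK API):
      // Illustrative helper - tutorials typically declare their own triangle type like this.
      public struct Tri {
          public int A, B, C; // indices of the triangle's three vertices in the mesh

          public Tri(int a, int b, int c) {
              A = a;
              B = b;
              C = c;
          }
      }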

    Read the article

  • Keeping rotation between two objects

    - by user99
    In my XNA game I have two objects that collide. When the first object collides with the other, it is able to latch on to it and move it about the world. I am having a problem with the math here (math isn't my strong point). I currently have the second object latch on to the first and move around with it, but I cannot get it to keep its original direction. So, if the object is facing up, it should keep this direction relative to how it is being rotated with the original item. Any tips on how I could best achieve this?
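    A sketch of the usual way to do this (the helper names and the assumption that each object exposes a world matrix are mine, not from the question): capture the second object's transform relative to the first once, at the moment it latches on, and re-apply that fixed local transform against the first object's current world matrix every frame, so the latched object keeps its own facing while following:
      using Microsoft.Xna.Framework;

      public static class LatchHelper {
          // Capture once, when the objects latch together: the child's transform
          // expressed in the parent's local space (XNA row-vector convention).
          public static Matrix CaptureLocal(Matrix childWorld, Matrix parentWorld) {
              return childWorld * Matrix.Invert(parentWorld);
          }

          // Every frame afterwards: the child follows the parent but keeps its own facing.
          public static Matrix FollowParent(Matrix childLocal, Matrix parentWorld) {
              return childLocal * parentWorld;
          }
      }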

    Read the article

  • D3D9 Alpha Blending on the surfaces

    - by Indeera
    I have a surface (OffScreenPlain or RenderTarget with D3DFMT_A8R8G8B8) to which I copy pixels (ARGB) from a third-party function. Before the pixel copy, the bits are accessed by LockRect. This surface is then StretchRect'ed to the backbuffer, which is also D3DFMT_A8R8G8B8. The surface and backbuffer have different dimensions. Filtering is set to D3DTEXF_NONE. Just after creating the D3D device I set the following render states:
      D3DRS_ALPHABLENDENABLE -> TRUE
      D3DRS_BLENDOP          -> D3DBLENDOP_ADD
      D3DRS_SRCBLEND         -> D3DBLEND_SRCALPHA
      D3DRS_DESTBLEND        -> D3DBLEND_INVSRCALPHA
    But I see no alpha blending happening. I've verified that alpha is specified in the pixels. As a simple test I created a vertex buffer and drew a triangle (DrawPrimitive), which does display with alpha blending. In this test the surface was StretchRect'ed first and then DrawPrimitive was called; the surface content displays without alpha blending while the triangle displays with it. What am I missing here? Thanks

    Read the article

  • Slick & NiftyGUI. Nifty initialize exception

    - by Romeo
    I found myself in trouble when trying to run a Slick game with a Nifty game state. This is the code:
      @Override
      protected void initGameAndGUI(GameContainer container, StateBasedGame game) throws SlickException {
          initNifty(container, game);
      }
    If I run this I get: java.lang.IllegalStateException: The NiftyGUI was already initialized. Its illegal to do so twice. If I delete the call to initNifty() I get another exception: java.lang.IllegalStateException: NiftyGUI was not initialized.

    Read the article

  • Collision Detection with SAT: False Collision for Diagonal Movement Towards Vertical Tile-Walls?

    - by Macks
    Edit: Problem solved! Big thanks to Jonathan, who pointed me in the right direction. Sean describes the method I used in a different thread. Also big thanks to him! :) Here is how I solved my problem: if a collision is registered by my SAT method, only fire the collision event on my character if there are no neighbouring solid tiles in the direction of the returned minimum translation vector.
    I'm developing my first tile-based 2D game with Javascript. To learn the basics, I decided to write my own "game engine". I have successfully implemented collision detection using the separating axis theorem, but I've run into a problem that I can't quite wrap my head around. If I press the [up] and [left] arrow keys simultaneously, my character moves diagonally towards the upper left. If he hits a horizontal wall, he'll just keep moving in the x-direction. The same goes for [up] and [left] as well as downward-diagonal movements; it works as intended: http://i.stack.imgur.com/aiZjI.png (diagonal movement works fine for horizontal walls, for both left and right movement). However, this does not work for vertical walls. Instead of keeping his movement in the y-direction, he'll just stop as soon as he "enters" a new tile on the y-axis. So for some reason SAT thinks my character is colliding vertically with tiles from vertical walls: http://i.stack.imgur.com/XBEKR.png (my character stops because he thinks he is colliding vertically with tiles from the wall on the right). This only occurs when:
    - moving in the top-right direction towards the right wall
    - moving in the top-left direction towards the left wall
    Bottom-right and bottom-left movement work: the character keeps moving in the y-direction as intended. Is this inherent to the way SAT works, or is there a problem with my implementation? What can I do to solve my problem? Oh yeah, my character is displayed as a circle but he's actually a rectangular polygon for the collision detection. Thank you very much for your help.

    Read the article

  • Load Texture From Image Content at Runtime

    - by Austin Brunkhorst
    Basically, I wrote a world editor for a game I'm working on. Looking ahead, I was brainstorming ways to save the created world, including the tile-sets (this game will rely on a tile engine). I was hoping to save the image data of each tile-set in the same file that contains the tile positions, etc., and load the image data into a Texture with XNA. Is it possible? Something like this is what I'm going for:
      Texture2D tileset = Content.LoadFromString<Texture2D>("png tileset data");
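    The ContentManager can't load from a string, but XNA 4.0's Texture2D.FromStream can decode PNG/JPG bytes at runtime. A sketch, assuming the tile-set's raw PNG bytes were written into the world file and read back into a byte array (the helper name is illustrative):
      using System.IO;
      using Microsoft.Xna.Framework.Graphics;

      public static class TilesetLoader {
          // pngBytes: the tile-set's PNG data as read back out of the world file (assumption).
          public static Texture2D Load(GraphicsDevice device, byte[] pngBytes) {
              using (var stream = new MemoryStream(pngBytes)) {
                  return Texture2D.FromStream(device, stream);
              }
          }
      }
    One caveat worth noting: FromStream does not premultiply alpha the way the content pipeline does, so transparent tiles may need their alpha premultiplied manually after loading.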

    Read the article

  • How do game engines stop pixel seams appearing at adjacent mesh boundaries due to FP imprecision?

    - by ufomorace
    Graphics cards are mathematically imprecise, so when meshes are joined at their borders the graphics card often makes mistakes and decides that some pixels at the seam represent neither object, and unwanted pixels appear. It's natural behaviour on all graphics cards. How are such worries avoided in professional games? Batching? Shaders? Different tangent vectors? Merging? Overlapping seams? Dark backgrounds? Extra vertices at borders? Z precision? Camera distance tweaks? Screencap of a fix that ended up not working:

    Read the article

  • Drawing a huge model - how to regain performance?

    - by marc wellman
    I have a huge model I want to draw in my XNA application, but because of its size I am experiencing a tremendous loss of performance. The model has about 50,000,000 edges and is 205 MB on disk in DirectX format. Please don't ask whether this model has to be that big - yes, it does! Is there a way to transfer the model directly to my GPU so that the GPU does the drawing, like when setting a VertexBuffer as a source:
      graphicsDevice.Vertices[1].SetSource(_instanceBuffers[i], 0, _sizeofMatrix);
    because when I try to fill a VertexBuffer with all the vertices I get an OutOfMemoryException.

    Read the article
