Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.

Page 478/1016 | < Previous Page | 474 475 476 477 478 479 480 481 482 483 484 485  | Next Page >

  • C++: Checking if an object faces a point (within a certain range)

    - by bojoradarial
    I have been working on a shooter game in C++, and am trying to add a feature whereby missiles shot must be within 90 degrees (PI/2 radians) of the direction the ship is facing. The missiles will be shot towards the mouse. My idea is that the ship's angle of rotation is compared with the angle between the ship and the mouse (std::atan2(mouseY - shipY, mouseX - shipX)), and if the difference is less than PI/4 (45 degrees) then the missile can be fired. However, I can't seem to get this to work. The ship's angle of rotation is increased and decreased with the A and D keys, so it is possible that it isn't between 0 and 2*PI, hence the use of fmod() below. Code: float userRotation = std::fmod(user->Angle(), 6.28318f); if (std::abs(userRotation - missileAngle) > 0.78f) return; Any help would be appreciated. Thanks!
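
    A detail that often trips this comparison up is that the raw difference between two angles can lie anywhere in (-2π, 2π), so comparing its absolute value against π/4 fails whenever the two angles straddle the 0/2π seam. Below is a minimal sketch of the usual fix - wrapping the difference into [-π, π] before the comparison. It is written in C# rather than the question's C++ purely for illustration, and the names are hypothetical:

        using System;

        static class AimCheck
        {
            // Wrap an angle difference into [-PI, PI] so the comparison works even
            // when one angle is near 0 and the other is near 2*PI.
            static float WrapAngle(float a)
            {
                while (a > (float)Math.PI) a -= 2f * (float)Math.PI;
                while (a < -(float)Math.PI) a += 2f * (float)Math.PI;
                return a;
            }

            // True if the ship may fire: the mouse lies within +/- 45 degrees
            // of the direction the ship is facing.
            public static bool CanFire(float shipAngle, float missileAngle)
            {
                float diff = WrapAngle(missileAngle - shipAngle);
                return Math.Abs(diff) <= (float)Math.PI / 4f;
            }
        }

    With the difference wrapped like this, the fmod() on the ship's rotation is no longer strictly necessary, since any accumulated rotation is normalised away before the comparison.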

    Read the article

  • Cocos2D: Upgrading from OpenGL ES 1.1 to 2.0

    - by Alex
    I have recently started upgrading my iOS game to the latest Cocos2D (2.0 rc), and I am having some difficulties upgrading my texture generation code to OpenGL ES 2.0. In the old version I generated images with this code: CCRenderTexture *rt = [CCRenderTexture renderTextureWithWidth:WIDTH height:HEIGHT]; [rt beginWithClear:bgColor.r g:bgColor.g b:bgColor.b a:bgColor.a]; glDisable(GL_TEXTURE_2D); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glVertexPointer(2, GL_FLOAT, 0, verts); glColorPointer(4, GL_FLOAT, 0, colors); glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)nVerts); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glEnable(GL_TEXTURE_2D); [rt end]; But since OpenGL ES 2.0 works differently, this code won't work. What is the best way to use the new OpenGL?

    Read the article

  • Big level objects collision system for 2d game

    - by Aristarhys
    I read through many variants today and picked up some general knowledge, so here are the steps of my thinking in pictures (horrible paint.net ones). We need a grid system so that we only check things that are nearby: a cheap check cuts out the expensive one, and only at the last stage do we run a deep check such as per-pixel collision. Step 1 - Let p1 and p2 be some sprites; let's first just do a circle collision check. Because of the large distance between p1 and p2 this fails, so we don't need to test more deeply. But if we have not 2 but 20 objects, why should we even circle-test something so far outside our view? Step 2 - Add a basic column system. Now we don't bother with p2 if it's in a column far from p1's column, so we don't even do the circle test. But p3 is in the same column, so we do a circle test, which of course fails. Step 3 - Improve the column system into a grid system with a cell size matching the p1, p2, p3 collision boxes, so we also cut out things far above or below p1. And this is all great until a BIG OBJECT comes along, some kind of platform that is much bigger than a grid cell. The circle test for it will succeed, but the deep check against the whole big object will fail, and that's the part I can't get. How do I store the grid position of a big object? Like four grid coordinates, one for each of its corners? And if one of them is close to p1, do a circle check against the centre of the big object, then a deep check if that succeeds? Am I doing it wrong? My possible solution:
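
    One common answer to the "big object" case is not to store a single cell for it at all: insert the object into every grid cell its AABB overlaps, so a nearby small object finds it through whichever cell they share. A rough sketch of that idea follows; the types and names are hypothetical, not from the question's code, and world coordinates are assumed non-negative to keep the index math simple:

        using System.Collections.Generic;

        class Box { public float MinX, MinY, MaxX, MaxY; }

        class BroadPhaseGrid
        {
            const float CellSize = 32f;
            // Each cell keeps a list of the objects whose AABB touches it.
            readonly Dictionary<long, List<Box>> cells = new Dictionary<long, List<Box>>();

            static long Key(int cx, int cy) { return ((long)cx << 32) | (uint)cy; }

            public void Insert(Box b)
            {
                int x0 = (int)(b.MinX / CellSize), x1 = (int)(b.MaxX / CellSize);
                int y0 = (int)(b.MinY / CellSize), y1 = (int)(b.MaxY / CellSize);
                // A big platform simply lands in many cells; a small sprite lands in one.
                for (int x = x0; x <= x1; x++)
                    for (int y = y0; y <= y1; y++)
                    {
                        long key = Key(x, y);
                        List<Box> list;
                        if (!cells.TryGetValue(key, out list))
                            cells[key] = list = new List<Box>();
                        list.Add(b);
                    }
            }
        }

    The query side mirrors this: look up only the cells p1 touches, circle-test the candidates found there, and run the deep check only on the survivors.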

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking... The framebuffer objects: FBO 1 has a color attachment and a depth attachment. FBO 2 has a color attachment. The render passes: Render g-buffer: normals and depth (used by outline & DoF blur shaders); output to FBO no. 1. Render solid geometry, bold outlines (as in toon shader), and fog; output to FBO no. 2. (can all render via a single fragment shader -- I think.) (optional) DoF blur the scene; output to the default frame buffer OR ELSE render FBO2 directly to default frame buffer. (optional) Mesh wireframes; composite over what's already in the default framebuffer. Does this order seem viable? Any obvious mistakes?

    Read the article

  • About floating point precision and why do we still use it

    - by system_is_b0rken
    Floating point has always been troublesome for precision on large worlds. This article explains what goes on behind the scenes and offers the obvious alternative - fixed-point numbers. Some facts are really impressive, like: "Well 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is: why do we still use floating point if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are they so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
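
    For a feel of the numbers in that quote: a signed 64-bit integer holds about 9.2×10^18 steps, and Pluto's 7.4 billion km is about 7.4×10^18 micrometers, so a plain long counting micrometers really does span that distance with headroom. A tiny C# check of just that arithmetic (illustration only, not any engine's actual representation):

        using System;

        class FixedPointHeadroom
        {
            static void Main()
            {
                // 7.4 billion km expressed in micrometers: 7.4e9 km * 1e9 um/km = 7.4e18 um.
                const double plutoKm = 7.4e9;
                double plutoMicrometers = plutoKm * 1e9;

                // long.MaxValue is roughly 9.22e18, so micrometer resolution fits.
                Console.WriteLine(plutoMicrometers <= (double)long.MaxValue); // True
            }
        }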

    Read the article

  • How to make a player stay within bounds of world with 2D Camera

    - by Craig
    Im creating a simple top down survival game. At the moment, i have the sprite which is a ship and moves by rotating left or right then going forward in that direction. I have implemented a 2D camera, its always centered on the player. However, when i move towards the bounds of the world that the sprite is in it just keeps on going :( How to i sort it that it stops at the edge of the world and cant go beyond it? Cheers :) Below is the main game class using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace GamesCoursework_1 { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; // player variables Texture2D Ship; Vector2 Ship_Position; float Ship_Rotation = 0.0f; Vector2 Ship_Origin; Vector2 Ship_Velocity; const float tangentialVelocity = 4f; float friction = 0.05f; static Point CameraViewport = new Point(800, 800); Camera2d cam = new Camera2d((int)CameraViewport.X, (int)CameraViewport.Y); //Size of world static Point worldSize = new Point(1600, 1600); // Screen variables static Point worldCenter = new Point(worldSize.X / 2, worldSize.Y / 2); Rectangle playerBounds = new Rectangle(CameraViewport.X / 2, CameraViewport.Y / 2, worldSize.X - CameraViewport.X, worldSize.Y - CameraViewport.Y); Rectangle worldBounds = new Rectangle(0, 0, worldSize.X, worldSize.Y); Texture2D background; public Game1() { graphics = new GraphicsDeviceManager(this); graphics.PreferredBackBufferWidth = CameraViewport.X; graphics.PreferredBackBufferHeight = CameraViewport.Y; Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here Ship = Content.Load<Texture2D>("Ship"); Ship_Origin.X = Ship.Width / 2; Ship_Origin.Y = Ship.Height / 2; background = Content.Load<Texture2D>("aus"); Ship_Position = new Vector2(worldCenter.X, worldCenter.Y); cam.Pos = Ship_Position; cam.Zoom = 1f; } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here Ship_Position = Ship_Velocity + Ship_Position; keyPressed(); base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // TODO: Add your drawing code here spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null,null, cam.get_transformation(GraphicsDevice)); spriteBatch.Draw(background, Vector2.Zero, Color.White); spriteBatch.Draw(Ship, Ship_Position, Ship.Bounds, Color.White, Ship_Rotation, Ship_Origin, 1.0f, SpriteEffects.None, 0f); spriteBatch.End(); base.Draw(gameTime); } private void Ship_Move(Vector2 move) { Ship_Position += move; } private void keyPressed() { KeyboardState keyState; // Move right keyState = Keyboard.GetState(); if (keyState.IsKeyDown(Keys.Right)) { Ship_Rotation = Ship_Rotation + 0.1f; } if (keyState.IsKeyDown(Keys.Left)) { Ship_Rotation = Ship_Rotation - 0.1f; } if (keyState.IsKeyDown(Keys.Up)) { Ship_Velocity.X = (float)Math.Cos(Ship_Rotation) * tangentialVelocity; Ship_Velocity.Y = (float)Math.Sin(Ship_Rotation) * tangentialVelocity; if ((int)Ship_Position.Y < playerBounds.Bottom && (int)Ship_Position.Y > playerBounds.Top) cam._pos.Y = Ship_Position.Y; if ((int)Ship_Position.X > playerBounds.Left && (int)Ship_Position.X < playerBounds.Right) cam._pos.X = Ship_Position.X; //tried world bounds here if (!worldBounds.Contains(new Point((int)Ship_Position.X, (int)Ship_Position.Y))) Ship_Position -= new Vector2(0.0f, -tangentialVelocity * 2); if (!worldBounds.Contains(new Point((int)Ship_Position.X, (int)Ship_Position.Y))) Ship_Position -= new Vector2(0.0f, 2 * tangentialVelocity); } else if(Ship_Velocity != Vector2.Zero) { float i = Ship_Velocity.X; float j = Ship_Velocity.Y; Ship_Velocity.X = i -= friction * i; Ship_Velocity.Y = j -= friction * j; if ((int)Ship_Position.Y < playerBounds.Bottom && (int)Ship_Position.Y > playerBounds.Top) cam._pos.Y = Ship_Position.Y; if ((int)Ship_Position.X > playerBounds.Left && (int)Ship_Position.X < playerBounds.Right) cam._pos.X = Ship_Position.X; } if (keyState.IsKeyDown(Keys.Q)) { if (cam.Zoom < 2f) cam.Zoom += 0.05f; } if (keyState.IsKeyDown(Keys.A)) { if (cam.Zoom > 0.3f) cam.Zoom -= 0.05f; } } } } my 2d camera class using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; namespace GamesCoursework_1 { public class Camera2d { protected float _zoom; // Camera Zoom public Matrix _transform; // Matrix Transform public Vector2 _pos; // Camera Position protected float _rotation; // Camera Rotation public int _viewportWidth, _viewportHeight; // viewport size public Camera2d(int ViewportWidth, int ViewportHeight) { _zoom = 1.0f; _rotation = 0.0f; _pos = Vector2.Zero; _viewportWidth = ViewportWidth; _viewportHeight = ViewportHeight; } // Sets and gets zoom public float Zoom { get { return _zoom; } set { _zoom = value; if (_zoom < 0.1f) _zoom = 0.1f; } // Negative zoom will flip image } public float Rotation { get { return _rotation; } set { _rotation = value; } } // Auxiliary function to move the camera public 
void Move(Vector2 amount) { _pos += amount; } // Get set position public Vector2 Pos { get { return _pos; } set { _pos = value; } } public Matrix get_transformation(GraphicsDevice graphicsDevice) { _transform = // Thanks to o KB o for this solution Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) * Matrix.CreateRotationZ(Rotation) * Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) * Matrix.CreateTranslation(new Vector3(_viewportWidth * 0.5f, _viewportHeight * 0.5f, 0)); return _transform; } } }
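
    A common way to get the behaviour asked for above is to clamp Ship_Position against the world rectangle right after the velocity is applied each frame, and keep the existing playerBounds test so the camera stops following near the edges. A sketch of the clamp, assuming the same field names as the question (Ship, Ship_Position, worldSize) and the centre-of-sprite origin used in Draw; it has not been tested against the full project:

        // A sketch for Game1: keep the ship inside the world rectangle, allowing for
        // half the sprite size so the texture never pokes past the edge.
        private void ClampToWorld()
        {
            float halfW = Ship.Width * 0.5f;
            float halfH = Ship.Height * 0.5f;
            Ship_Position.X = MathHelper.Clamp(Ship_Position.X, halfW, worldSize.X - halfW);
            Ship_Position.Y = MathHelper.Clamp(Ship_Position.Y, halfH, worldSize.Y - halfH);
        }
        // Called from Update(), immediately after Ship_Position = Ship_Velocity + Ship_Position;

    With the position clamped, the separate worldBounds.Contains corrections inside keyPressed() are no longer needed.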

    Read the article

  • Javascript A* path finding

    - by Veyha
    I am trying to learn A* path finding. I am using this library - https://github.com/qiao/PathFinding.js But there is one thing I don't understand how to do. To find a path from player.x/player.y (player.x and player.y are both 0) to 10/10 I use this code var path = finder.findPath(player.x, player.y, 10, 10, grid); This gives an array of where I need to move, but how do I apply this array to my player.x and player.y? The path structure looks like this path = [[0, 0], [1, 0], [1, 1], ..., [10, 10]]

    Read the article

  • How to move an object using X and Y coordinates in JavaScript

    - by Geroy290
    I am making a 2D game with JavaScript and HTML5 and am trying to move an image that I have drawn with JavaScript like so: //canvas var c = document.getElementById("gameCanvas"); var ctx = c.getContext("2d"); //baseball var baseball = new Image(); baseball.onload = function() { ctx.drawImage(baseball, 400, 425); }; baseball.src = "baseball2.png"; I'm not sure how I would move it, though. I have seen that many people just use something like ballX and ballY, but I don't understand where the actual x and y definitions come from. Here is my code so far: http://jsfiddle.net/xRfua/ I have a different image source, but it is a local source so I couldn't include it. Thanks in advance for any help!

    Read the article

  • How can I create animated card graphics like in Hearthstone?

    - by Appeltaart
    In the game Hearthstone, there are cards with animated images on them. A few examples: http://www.hearthhead.com/card=281/argent-commander http://www.hearthhead.com/card=469/blood-imp The animations seem to be composed of multiple effects: particle systems; fading sprites in and out or rotating them; simple scrolling textures; a distortion effect, very evident in the cape and hair of example 1; and swirling smoke effects, like the light in example 1 and the green/purple glow in example 2. The first three elements are trivial; what I'd like to know is how the last two could be done. Can this even be done in real time in a game, or are they pre-rendered animations?

    Read the article

  • Previewing a Demo Level in Mobile for UDK?

    - by Reno Yeo
    I've already clicked on "Emulate Mobile Features" and everything has been compiled. I've also set the mobile previewer settings to the iPhone 4's dimensions and features. However, when I click on the mobile previewer, a new window pops up but it goes into a "Not Responding" state after a while. Is there anything I'm doing wrong? To be honest, I'm wary of UDK's learning curve, but I am interested in developing a game with it.

    Read the article

  • How do I approach this collision model?

    - by PeeS
    This is the game level prototype I have already implemented. It has a few objects per room, to let me finally add some collision detection/response code. VIDEO As you can probably see, every object inside has its own AABB; even the room itself has an AABB, so the player is effectively 'inside the room AABB'. My player will be exactly inside the room, so he has to collide correctly with those AABBs: when he hits any of the objects inside, he should get a proper collision response from their AABBs. Now I would like to hear what kind of collision approach I should choose here. How do I approach this kind of thing: AABB-to-AABB collision detection, and when that is positive, go to AABB-triangle to find the proper plane normal and calculate the response? AABB-to-AABB, and when positive, do an AABB-AABB side check to find the proper plane normal and calculate the response? Anything else? How do you do this? Many thanks.
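
    For the "AABB-AABB side check" branch, a common trick is to compute the overlap on each axis and push the player out along the axis of least penetration; the sign of that axis gives the contact normal, so no triangle test is needed for box-shaped props. A small sketch of that idea with hypothetical types (System.Numerics vectors here, not the poster's engine):

        using System;
        using System.Numerics;

        static class AabbResponse
        {
            // Returns the smallest translation that separates box A (the player) from box B,
            // or Vector3.Zero when they do not overlap. The single non-zero axis of the
            // result is the contact normal direction.
            public static Vector3 MinimumTranslation(Vector3 aMin, Vector3 aMax,
                                                     Vector3 bMin, Vector3 bMax)
            {
                float dx1 = bMax.X - aMin.X, dx2 = bMin.X - aMax.X;
                float dy1 = bMax.Y - aMin.Y, dy2 = bMin.Y - aMax.Y;
                float dz1 = bMax.Z - aMin.Z, dz2 = bMin.Z - aMax.Z;
                if (dx1 < 0 || dx2 > 0 || dy1 < 0 || dy2 > 0 || dz1 < 0 || dz2 > 0)
                    return Vector3.Zero; // separated on at least one axis

                // Pick the axis with the smallest overlap and push out along it.
                float px = Math.Abs(dx1) < Math.Abs(dx2) ? dx1 : dx2;
                float py = Math.Abs(dy1) < Math.Abs(dy2) ? dy1 : dy2;
                float pz = Math.Abs(dz1) < Math.Abs(dz2) ? dz1 : dz2;
                if (Math.Abs(px) <= Math.Abs(py) && Math.Abs(px) <= Math.Abs(pz))
                    return new Vector3(px, 0, 0);
                if (Math.Abs(py) <= Math.Abs(pz))
                    return new Vector3(0, py, 0);
                return new Vector3(0, 0, pz);
            }
        }

    The returned vector both separates the player and identifies the face that was hit, which is the "proper plane normal" for box-shaped obstacles; an AABB-triangle pass would only be needed for props that are genuinely not box-shaped.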

    Read the article

  • andEngine dynamic sprites

    - by Blucreation
    I've just started with andEngine in the past week, and I only started learning Java/Android three weeks ago. I can use a for loop to add multiple sprites to the screen, but when I try to check collisions on them it only works for one and not the rest. I want to be able to add a specific number of sprites made from the same texture to the scene, add collision detection to them, and also make them slide across the screen (I'm making a game where you avoid the obstacles). My simple code: private void createobstacle(float pX, float pY) { obstacle = new AnimatedSprite(pX, pY, this.mObjTextureRegion.deepCopy(), getVertexBufferObjectManager()); obstacle.setScale(MathUtils.random(0.5f, 3f)); scene.attachChild(obstacle); } private void createobstacle(int num) { for(int i=0; i<=num; i++ ) { final float xPos = MathUtils.random(30.0f, (CAMERA_WIDTH - 30.0f)); final float yPos = MathUtils.random(30.0f, (CAMERA_HEIGHT - 30.0f)); createobstacle(xPos, yPos); } } I've read about arrays but I cannot find any tutorials about what I'm stuck on. Any help would be great!

    Read the article

  • What game systems exist which uses camera input?

    - by Marc Pilgaard
    My group and I are in the middle of a semester project in which we are researching which game systems use a camera as input or as an interactive medium. We would like some help listing game systems that use camera input, as it seems hard to find other examples. Currently we know that webcam browser games use camera input (Newgrounds webcam games), as well as the Xbox Kinect. I know this question seems rather vague, but I still hope some people are able to help.

    Read the article

  • What should I worry about when changing OpenGL origin to upper left of screen?

    - by derivative
    For self education, I'm writing a 2D platformer engine in C++ using SDL / OpenGL. I initially began with pure SDL using the tutorials on sdltutorials.com and lazyfoo.net, but I'm now rendering in an OpenGL context (specifically immediate mode but I'm learning about VAOs/VBOs) and using SDL for interface, audio, etc. SDL uses a coordinate system with the origin in the upper left of the screen and the positive y-axis pointing down. It's easy to set up my orthographic projection in OpenGL to mirror this. I know that texture coordinates are a right-hand system with values from 0 to 1 -- flipping the texture vertically before rendering (well, flip the file before loading) yields textures that render correctly... which is fine if I'm drawing the entire texture, but ultimately I'll be using tilesets and can imagine problems. What should I be concerned about in terms of rendering when I do this? If anybody has any advice or they've done this themselves and can point out future pitfalls, that would be great, but really any thoughts would be appreciated.

    Read the article

  • Preventing item duplication?

    - by PuppyKevin
    For my game, there are two types of items - stackable and nonstackable. Nonstackable items get assigned a unique ID that stays with them forever. A character ID is associated with the item, as is a state (CHANGED, UNCHANGED, NEW, REMOVED). The character ID and state are used for item-saving purposes. Stackable items have one unique ID for the entire stack. For example: 5 potions (stacked on top of each other) have one unique ID. When dropping a nonstackable item, the state gets set to REMOVED, and the unique ID and character ID don't change. If it is picked up by another player, the state gets set to NEW, and the character ID gets changed to the new character's ID. When dropping all items in a stack of stackable items (for example, 5 potions out of 5), it behaves just like a nonstackable item. When dropping some of a stack of stackable items (for example, 3 potions out of 5)... I really have no clue what to do. The 3 dropped potions have the state REMOVED, but the same unique ID and character ID. If another player picks them up, they have no choice but to obtain a new unique ID, their state gets changed to NEW, and their character ID changes to the new one. If the dropping player picks them back up, they'd just be re-added to the stack. There are two issues with that, though. 1. If the player who dropped the 3 potions picks them back up, there's no way to tell whether they legitimately dropped the items or whether they're duped items. 2. If another player picks up the 3 potions (assuming they're duped), there's no way to know whether they're duped or not. My question is: how can I create a system that detects duplicated items, for both nonstackable and stackable items?
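
    One possible shape for the stackable case, sketched below purely as an illustration and not as the question's existing system: make the split itself the moment a new unique ID is minted, server-side, and log the parent/child link. Then "same ID, different owner" never exists, and a later audit can sum a parent's children against what it actually contained to spot duplicates. All names here are hypothetical:

        using System;
        using System.Collections.Generic;

        class ItemStack
        {
            public Guid Id = Guid.NewGuid();   // unique ID issued once, at creation or split time
            public int ItemType;
            public int Quantity;
        }

        class SplitRecord
        {
            public Guid Parent, Child;
            public int Quantity;
            public DateTime At;
        }

        static class StackOps
        {
            public static readonly List<SplitRecord> SplitLog = new List<SplitRecord>();

            // Dropping part of a stack creates a brand-new stack with its own ID and
            // records where it came from, so provenance is always checkable.
            public static ItemStack Split(ItemStack source, int amount)
            {
                if (amount <= 0 || amount > source.Quantity)
                    throw new ArgumentOutOfRangeException("amount");
                source.Quantity -= amount;
                var dropped = new ItemStack { ItemType = source.ItemType, Quantity = amount };
                SplitLog.Add(new SplitRecord { Parent = source.Id, Child = dropped.Id,
                                               Quantity = amount, At = DateTime.UtcNow });
                return dropped;
            }
        }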

    Read the article

  • OpenGL lighting with dynamic geometry

    - by Tank
    I'm currently thinking hard about how to implement lighting in my game. The geometry is quite dynamic (fixed 3D grid with custom geometry in each cell) and needs some light to get more depth and in general look nicer. A scene in my game always contains sunlight and local light sources like lamps (point lights). One can move underground, so sunlight must be able to illuminate as far as it can get. Here's a render of a typical situation: The lamp is positioned behind the wall to the top, and in the hollow cube there's a hole in the back, so that light can shine through. (I don't want soft shadows, this is just for illustration) While spending the whole day searching through Google, I stumbled on some keywords like deferred rendering, forward rendering, ambient occlusion, screen space ambient occlusion etc. Some articles/tutorials even refer to "normal shading", but to be honest I don't really have an idea to even do simple shading. OpenGL of course has a fixed lighting pipeline with 8 possible light sources. However they just illuminate all vertices without checking for occluding geometry. I'd be very thankful if someone could give me some pointers into the right direction. I don't need complete solutions or similar, just good sources with information understandable for someone with nearly no lighting experience (preferably with OpenGL).

    Read the article

  • I Don't Understand Anything About Randomly Generated Worlds [closed]

    - by Alex Larsen
    What tools do I need to make a Minecraft-like generated world? I heard about Perlin noise and Simplex, but I don't understand anything about them. So far all I found on the internet was a Simplex version for C#, and all it has is functions, and this is what I get: Console.WriteLine(Noise.Generate(SomeNumber, SomeNumber, SumNumber)); Outputs random floats. I'm really lost. I don't understand the whole random generated worlds concept. Can someone help me? And if I use the noise thing I don't understand how to use it.
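
    The missing piece is usually that the noise function is not called with arbitrary numbers but with scaled world coordinates, and its output is not used directly but mapped to something like a column height. A minimal heightmap sketch built around the same kind of call the question shows; it assumes Noise.Generate(x, y) is the 2D overload of the C# Simplex port being used and returns roughly -1..1, and the scale and height range are made-up values:

        // For every (x, z) column, sample 2D noise at a low frequency and turn the
        // result into a block height. Nearby columns get nearby heights, which is
        // what makes the output look like terrain rather than static.
        static int[,] BuildHeightMap(int width, int depth)
        {
            const float scale = 0.05f;   // smaller = smoother hills, larger = noisier
            const int maxHeight = 32;
            var height = new int[width, depth];

            for (int x = 0; x < width; x++)
                for (int z = 0; z < depth; z++)
                {
                    float n = Noise.Generate(x * scale, z * scale);     // assumed ~[-1, 1]
                    height[x, z] = (int)((n + 1f) * 0.5f * maxHeight);  // map to [0, maxHeight]
                    // Every block with y <= height[x, z] is solid; everything above is air.
                }
            return height;
        }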

    Read the article

  • PNG file loading error in ImageMagick

    - by khanhhh89
    I'm trying to understand the tutorial 16 at http://ogldev.atspace.co.uk, which requires the image processing library ImageMagick. But when I run the tutorial, I encountered an following error: freeglut: failed to change scree settings Error loading textures 'test.png': no decode delegates for this image format 'C:/../appdata/magick-6024a_cIJcw90t-j'@error/constitute.c/ReadImage/552 I searched for google and found that my ImageMagick library do not have PNG Delegaes, but when I checked for the information of ImageMagick, it sees PNG in its delegate lists. Command line: convert -configure Result: LIB_VERSION 0x687 DELEGATES: bzlib, freetype, jpeg, jp2, lcms, png, tiff, x11, xml, wmf, zlib Could you explain to me this error, thanks so much!

    Read the article

  • Any ideas on reducing lag in terrain generation?

    - by l5p4ngl312
    Ok so here's the deal. I've written an isometric engine that generates terrain based on camera values using 2D Perlin noise. I planned on doing 3D, but first I need to work out the lag issues I'm having. I will try to explain how I am doing this so that maybe someone can spot where I am going wrong. I know it should not be this laggy. There is the abstract class Block which right now just contains render(). BlockGrass, etc. extend this class and each has code in the render function to create a textured quad at the given position. Then there is the class Chunk which has the functions Generate() and setBlocksInArea(). Generate uses 2D Perlin noise to make a height map and stores the heights in a 2D array. It stores the positions of each block it generates in blockarray[x][y][z]. The chunks are 8x8x128. In the main game class there is a 3D array called blocksInArea. The blocks in this array are what gets rendered. When a chunk generates, it adds its blocks to this array at the correct index. It is like this so chunks can be saved to the hard drive (even though they aren't yet) but there can still be optimization with the rendering that you wouldn't have if you rendered each chunk separately. Here's where the laggy part comes in: When the camera moves to a new chunk, a row of chunks generates on the end of the axis that the camera moved on. But it still has to move the other chunks up/down in the blocksInArea (render) array. It does this by calculating the new position in the array and doing the Chunk.setBlocksInArea(): for(int x = 0; x < 8; x++){ for(int y = 0; y < 8; y++){ nx = x+(coordX - camCoordX)*8 ny = y+(coordY - camCoordY)*8 for(int z = 0; z < height[x][y]; z++){ blockarray[x][y][z] = Game.blocksInArea[nx][ny][z]; } } } My reasoning was that this would be much faster than doing the Perlin noise all over again, but there are still little spikes of lag when you move between chunks. Edit: Would it be possible to create a 3-dimensional array list so that shifting chunks within the array would not be necessary?
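
    One way to avoid copying every chunk when the camera crosses a boundary is to stop storing chunks at camera-relative indices at all: keep a fixed-size array and map world chunk coordinates into it with a modulo, so a chunk never moves - its slot is simply overwritten when a far-away chunk reuses it. A sketch of that indexing, with hypothetical names rather than the poster's classes:

        // World chunk coordinate -> slot in a fixed ring of loaded chunks.
        // With an 8x8 ring, moving the camera one chunk over only regenerates the
        // single new row of chunks; nothing else is touched or copied.
        const int Ring = 8;

        static int Slot(int worldChunkCoord)
        {
            int s = worldChunkCoord % Ring;
            return s < 0 ? s + Ring : s;   // keep the index positive for negative coordinates
        }

        // chunks[Slot(cx), Slot(cy)] holds the chunk at world chunk coords (cx, cy);
        // storing cx/cy inside each chunk lets a stale slot be detected and rebuilt.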

    Read the article

  • Optimizing drawing of cubes

    - by Christian Frantz
    After googling for hours I've come to a few conclusions, I need to either rewrite my whole cube class, or figure out how to use hardware instancing. I can draw up to 2500 cubes with little lag, but after that my fps drops. Is there a way I can use my class for hardware instancing? Or would I be better off rewriting my class for optimization? public class Cube { public GraphicsDevice device; public VertexBuffer cubeVertexBuffer; public Cube(GraphicsDevice graphicsDevice) { device = graphicsDevice; var vertices = new List<VertexPositionTexture>(); BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1)); BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1)); BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0)); BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0)); BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1)); BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0)); cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly); cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray()); } private void BuildFace(List<VertexPositionTexture> vertices, Vector3 p1, Vector3 p2) { vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 1, 0)); vertices.Add(BuildVertex(p1.X, p2.Y, p1.Z, 1, 1)); vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 0, 1)); vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 0, 1)); vertices.Add(BuildVertex(p2.X, p1.Y, p2.Z, 0, 0)); vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 1, 0)); } private void BuildFaceHorizontal(List<VertexPositionTexture> vertices, Vector3 p1, Vector3 p2) { vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 0, 1)); vertices.Add(BuildVertex(p2.X, p1.Y, p1.Z, 1, 1)); vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 1, 0)); vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 0, 1)); vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 1, 0)); vertices.Add(BuildVertex(p1.X, p1.Y, p2.Z, 0, 0)); } private VertexPositionTexture BuildVertex(float x, float y, float z, float u, float v) { return new VertexPositionTexture(new Vector3(x, y, z), new Vector2(u, v)); } public void Draw(BasicEffect effect) { foreach (EffectPass pass in effect.CurrentTechnique.Passes) { pass.Apply(); device.SetVertexBuffer(cubeVertexBuffer); device.DrawPrimitives(PrimitiveType.TriangleList, 0, cubeVertexBuffer.VertexCount / 3); } } } The following class is a list that draws the cubes. 
public class DrawableList<T> : DrawableGameComponent { private BasicEffect effect; private ThirdPersonCam camera; private class Entity { public Vector3 Position { get; set; } public Matrix Orientation { get; set; } public Texture2D Texture { get; set; } } private Cube cube; private List<Entity> entities = new List<Entity>(); public DrawableList(Game game, ThirdPersonCam camera, BasicEffect effect) : base(game) { this.effect = effect; cube = new Cube(game.GraphicsDevice); this.camera = camera; } public void Add(Vector3 position, Matrix orientation, Texture2D texture) { entities.Add(new Entity() { Position = position, Orientation = orientation, Texture = texture }); } public override void Draw(GameTime gameTime) { foreach (var item in entities) { effect.VertexColorEnabled = false; effect.TextureEnabled = true; effect.Texture = item.Texture; Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f)); Matrix scale = Matrix.CreateScale(1f); Matrix translate = Matrix.CreateTranslation(item.Position); effect.World = center * scale * translate; effect.View = camera.view; effect.Projection = camera.proj; effect.FogEnabled = true; effect.FogColor = Color.CornflowerBlue.ToVector3(); effect.FogStart = 1.0f; effect.FogEnd = 50.0f; cube.Draw(effect); } base.Draw(gameTime); } } } There are probably many reasons that my fps is so slow, but I can't seem to figure out how to fix it. I've looked at techcraft as well, but what I have is too specific to what I want the outcome to be to just rewrite everything from scratch
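
    Before going all the way to hardware instancing, the biggest cost in the code above is likely the one-draw-call-per-cube loop (plus re-setting the effect every iteration). A common intermediate step is to bake all cube vertices, pre-translated, into one big vertex buffer and issue a single draw call. A rough sketch of that idea follows; cubeTemplateVertices (the 36 vertices of one unit cube, kept around after building them once) is hypothetical, it assumes all cubes share one texture, and it has not been tested against the full project:

        // Build one buffer containing every cube, with each cube's world position
        // already added to its vertices, so the whole field draws in one call.
        var all = new List<VertexPositionTexture>();
        foreach (var item in entities)
            foreach (var v in cubeTemplateVertices)
                all.Add(new VertexPositionTexture(v.Position + item.Position, v.TextureCoordinate));

        var batchBuffer = new VertexBuffer(GraphicsDevice, VertexPositionTexture.VertexDeclaration,
                                           all.Count, BufferUsage.WriteOnly);
        batchBuffer.SetData(all.ToArray());

        // At draw time, set the effect parameters once (identity world matrix), then:
        // GraphicsDevice.SetVertexBuffer(batchBuffer);
        // GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, all.Count / 3);

    The buffer only needs rebuilding when cubes are added or removed; if cubes use different textures, a texture atlas (with per-cube UVs baked in the same loop) keeps it to one draw call.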

    Read the article

  • How to make my simple round sprite look right in XNA

    - by Joshua Perina
    Ok, I'm very new to graphics programming (but not new to coding). I'm trying to load a simple image in XNA which I can do fine. It is a simple round circle which I made in photoshop. The problem is the edges show up rough when I draw it on the screen even with the exact size. The anti-aliasing is missing. I'm sure I'm missing something very simple: GraphicsDevice.Clear(Color.Black); // TODO: Add your drawing code here spriteBatch.Begin(); spriteBatch.Draw(circle, new Rectangle(10, 10, 10, 10), Color.White); spriteBatch.End(); Couldn't post picture because I'm a first time poster. But my smooth png circle has rough edges. So I found that if I added: spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.NonPremultiplied); I can get a smooth image when the image is the same size as the original png. But if I want to scale that image up or down then the rough edges return. How do I get XNA to smoothly resize my simple round image to a smaller size without getting the rough edges?
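
    If the roughness only appears when the destination rectangle differs from the source size, it is usually the sampler and mip chain rather than the image: shrinking a large circle down to 10x10 with plain bilinear filtering undersamples the edge. One reasonable combination (a suggestion, not the only fix) is to enable "Generate Mipmaps" on the texture in the content processor and ask for linear filtering in Begin:

        spriteBatch.Begin(
            SpriteSortMode.Deferred,
            BlendState.NonPremultiplied,   // matches the question's current content settings
            SamplerState.LinearClamp,      // linear min/mag/mip filtering when scaling
            null, null);
        spriteBatch.Draw(circle, new Rectangle(10, 10, 10, 10), Color.White);
        spriteBatch.End();

    Alternatively, authoring the source PNG close to the size it will be drawn at avoids heavy minification in the first place.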

    Read the article

  • Repairing back-facing triangles without user input

    - by LTR
    My 3D application works with user-imported 3D models. Frequently, those models have a few faces pointing in the wrong direction. (For example, there is a 3D roof and a few triangles of that roof face the inside of the building.) I want to repair those automatically. We can make several assumptions about these 3D models: they are completely closed without holes, and the camera is always on the outside. My idea: shoot 500 rays from every triangle outwards in all directions. From the back side of the triangle, all rays will hit another part of the model. From the front side, at least one ray will hit nothing. Is there a better algorithm? Are there any papers about something like this?

    Read the article

  • Detect if square in grid is within a diamond shape

    - by myrkos
    So I have a game in which basically everything is a square inside a big grid. It's easy to check if a square is inside a box whose center is another square: *** x *o* --> x is not in o's square *** **x *o* --> x IS in o's square *** This can be done by simply subtracting the coordinates of o and x, then taking the largest coordinate of that and comparing it with the half side length. Now I want to do the same thing but check if x is in o's diamond, like so: * **x **o** --> x IS in o's diamond *** * What would be the best way to check if a square is in another square's surrounding diamond-shaped area, given the diamond's half width/height?
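
    The diamond is exactly the set of squares within a given Manhattan distance (|dx| + |dy|), just as the square case is the Chebyshev distance (max(|dx|, |dy|)), so the test is one line. A tiny sketch:

        using System;

        static class DiamondTest
        {
            // True if (x, y) lies inside the diamond of the given half-extent centred on (ox, oy).
            public static bool InDiamond(int ox, int oy, int x, int y, int halfExtent)
            {
                return Math.Abs(x - ox) + Math.Abs(y - oy) <= halfExtent;
            }
        }

    For the second picture above, the marked square is offset (1, -1) from o, so |1| + |-1| = 2, which is within a half-extent of 2.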

    Read the article

  • How to track many in-game statistics

    - by Alex Schearer
    I am looking to track many in-game events, e.g. the score of each move, how many moves are taken, what types of moves, etc. A lot of stats can simply be tracked with a counter. In some cases I need to aggregate data in order to calculate the value (e.g. most common move). How are you tracking in-game stats for your games? How do you avoid creating a class with tens or hundreds of fields? How do you avoid littering the code with tracking invocations? How do you abstract the aggregate data so as to avoid rewriting it for each scenario?
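
    One way to keep tracking out of the gameplay code and avoid a class with a hundred fields is to funnel everything through a couple of calls keyed by name, and derive aggregates on demand. A bare-bones sketch of that shape; the names are illustrative only:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class StatsTracker
        {
            // Raw counters and per-event samples; aggregates are computed on demand,
            // so adding a new statistic does not touch gameplay code.
            readonly Dictionary<string, int> counters = new Dictionary<string, int>();
            readonly Dictionary<string, List<double>> samples = new Dictionary<string, List<double>>();

            public void Count(string name, int by = 1)
            {
                int c;
                counters.TryGetValue(name, out c);
                counters[name] = c + by;
            }

            public void Sample(string name, double value)
            {
                List<double> list;
                if (!samples.TryGetValue(name, out list))
                    samples[name] = list = new List<double>();
                list.Add(value);
            }

            public int Total(string name)
            {
                int c;
                return counters.TryGetValue(name, out c) ? c : 0;
            }

            public double Average(string name)
            {
                List<double> list;
                return samples.TryGetValue(name, out list) && list.Count > 0 ? list.Average() : 0.0;
            }
        }

        // Usage per move, for example:
        //   stats.Count("moves"); stats.Count("move." + moveType); stats.Sample("moveScore", score);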

    Read the article

  • How do I design a game framework for fast reaction to user input?

    - by Miro
    I've played some games at around 30 fps, and some of them had a slow reaction time - around 0.1 s. I didn't know why. Now that I'm designing my framework for a cross-platform game, I know why: they were probably preparing the next frame while rendering the previous one. RENDER 1 | RENDER 2 | RENDER 3 | RENDER 4 PREPARE 2 | PREPARE 3 | PREPARE 4 | PREPARE 5 I see the first frame while the second frame is being rendered and the third frame is being prepared. If I react to the first frame during that time, the result shows up in the fourth frame, so it takes 3/FPS seconds for results to appear. At 30 fps that would be 100 ms, which is quite bad. So I'm wondering: how should I design my framework so that it responds to user input quickly?
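
    The practical counterpart of that diagram is to keep the pipeline short: sample input immediately before simulating the frame that is about to be drawn, and avoid queuing more than one prepared frame ahead of the one being rendered. A skeletal single-threaded loop showing where the input read has to sit; the function and type names here are placeholders, not a real API:

        // One frame of the loop: the frame that is drawn is the frame that was
        // simulated from input read a moment earlier, so the worst case is roughly
        // one frame of latency plus the display's own delay.
        while (running)
        {
            InputState input = PollInput();   // read input as late as possible
            world.Update(input, dt);          // simulate using this frame's input
            renderer.Draw(world);             // draw the state just produced
            renderer.Present();               // avoid letting extra prepared frames queue up here
        }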

    Read the article
