Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 526/1039

  • How to set sprite source coordinates?

    - by ChaosDev
    I am creating my own sprite drawer with DX11 in C++. It works fine, but I don't know how to apply a source rectangle to the texture coordinates of the rendering surface (for animation sprite sheets):

        // source = (0, 0, 32, 64); // RECT
        D3DXVECTOR2 t0 = D3DXVECTOR2(1.0f, 0.0f);
        D3DXVECTOR2 t1 = D3DXVECTOR2(1.0f, 1.0f);
        D3DXVECTOR2 t2 = D3DXVECTOR2(0.0f, 1.0f);
        D3DXVECTOR2 t3 = D3DXVECTOR2(0.0f, 1.0f);
        D3DXVECTOR2 t4 = D3DXVECTOR2(0.0f, 0.0f);
        D3DXVECTOR2 t5 = D3DXVECTOR2(1.0f, 0.0f);

        VertexPositionColorTexture vertices[] =
        {
            { D3DXVECTOR3(dest.left + dest.right, dest.top,               z), D3DXVECTOR4(1,1,1,1), t0 },
            { D3DXVECTOR3(dest.left + dest.right, dest.top + dest.bottom, z), D3DXVECTOR4(1,1,1,1), t1 },
            { D3DXVECTOR3(dest.left,              dest.top + dest.bottom, z), D3DXVECTOR4(1,1,1,1), t2 },
            { D3DXVECTOR3(dest.left,              dest.top + dest.bottom, z), D3DXVECTOR4(1,1,1,1), t3 },
            { D3DXVECTOR3(dest.left,              dest.top,               z), D3DXVECTOR4(1,1,1,1), t4 },
            { D3DXVECTOR3(dest.left + dest.right, dest.top,               z), D3DXVECTOR4(1,1,1,1), t5 },
        };
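
    For what it's worth, the usual mapping divides the source rectangle's pixel edges by the sheet's dimensions to get normalized UVs, keeping the same winding as the quad above. A hedged sketch in C# with XNA-style Vector2/Rectangle (the math is language-independent; all names here are illustrative, not from the original post):

        // Convert a source rectangle in pixels to normalized texture coordinates.
        // sheetWidth/sheetHeight are the full sprite sheet's dimensions (assumed).
        static Vector2[] SourceRectToUVs(Rectangle src, float sheetWidth, float sheetHeight)
        {
            float u0 = src.Left / sheetWidth,  v0 = src.Top / sheetHeight;
            float u1 = src.Right / sheetWidth, v1 = src.Bottom / sheetHeight;
            // Same vertex order as t0..t5 in the question.
            return new[]
            {
                new Vector2(u1, v0), new Vector2(u1, v1), new Vector2(u0, v1),
                new Vector2(u0, v1), new Vector2(u0, v0), new Vector2(u1, v0),
            };
        }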

    Read the article

  • Text on a model

    - by alecnash
    I am trying to put some text on a model, and I want it to be dynamic. After some research I settled on drawing the text onto a texture and then setting that texture on the model. I use something like this:

        public static Texture2D SpriteFontTextToTexture(SpriteFont font, string text, Color backgroundColor, Color textColor)
        {
            Size = font.MeasureString(text);
            RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice, (int)Size.X, (int)Size.Y);
            GraphicsDevice.SetRenderTarget(renderTarget);
            GraphicsDevice.Clear(Color.Transparent);
            Spritbatch.Begin();
            // have to redo the ColorTexture
            Spritbatch.Draw(ColorTexture.Create(GraphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White);
            Spritbatch.DrawString(font, text, Vector2.Zero, textColor);
            Spritbatch.End();
            GraphicsDevice.SetRenderTarget(null);
            return renderTarget;
        }

    When I was working with primitives rather than models, everything worked fine because I set the texture exactly where I wanted it, but with the model (a RoundedRect 3D button) the text ends up misplaced (see the screenshot in the original post). Is there a way to have the text centered on just one side?
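
    One approach, as a hedged sketch (assuming the goal is text centered on one face): size the render target to the face's full texture rather than to the string, and center the string inside it, so the model's existing UV mapping places it mid-face. faceTexSize and spriteBatch are assumed names:

        // Render the text centered into a face-sized target (XNA 4.0).
        Vector2 textSize = font.MeasureString(text);
        var target = new RenderTarget2D(GraphicsDevice, faceTexSize, faceTexSize);
        GraphicsDevice.SetRenderTarget(target);
        GraphicsDevice.Clear(backgroundColor);
        spriteBatch.Begin();
        spriteBatch.DrawString(font, text,
            new Vector2((faceTexSize - textSize.X) / 2f, (faceTexSize - textSize.Y) / 2f),
            textColor);
        spriteBatch.End();
        GraphicsDevice.SetRenderTarget(null);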

    Read the article

  • Create Adventure Game Scene/Room/Backdrop from Real Photo

    - by Lyuben
    Is there suitable software, or a good tutorial, for creating 2D rooms/scenery for adventure games from real photos? Is it possible to achieve good results using photos, or will a hand-drawn style always be the best choice? Thank you! --- EDIT --- I want to clarify that I'm particularly interested in the art creation process, not in the environment in which to build games. I'm writing the game in Java for Android, but I don't think it matters. Also, I'm not trying to decide whether the game will have photorealistic rooms: I want to achieve 2D pixelated, old-school background scenes, and I wonder whether they can be made from photos, because I cannot draw them myself. For example, can I shoot a scene with my camera and then make it look something like the image at the following link: PIXEL ART FOREST. I know I cannot get the same quality as absolutely hand-drawn pixel art, but I'm looking for a decent technique/tutorial/software to get somewhat close.

    Read the article

  • Animations not accepted in animator

    - by Lautaro
    In the official Unity Animator State Machine tutorial video, animation clips are dragged out of the assets folder and dropped into the Animator. I have a 3D model that I bought online to experiment with that comes with animations, and I added a custom-made animation as well. These all work well in my demo project. But when I add an Animator to the assets and try to drag and drop animations onto it, it doesn't work: I get a forbidden sign as a mouse pointer. I tried to add animations through the Inspector, but that does not work either. The tutorial makes it seem so easy and says nothing about which animations can be used. What am I doing wrong?

    Read the article

  • Why won't LibGDX's main class initialize in the Android launcher?

    - by BluFire
    So I was searching for different approaches that might suit my programming and came across LibGDX. Naturally I looked at the tutorial, and I followed the steps word for word, except for naming the classes. In the end, I was able to create the desktop launcher for the game, but not the Android launcher. The error is: "Cannot instantiate the type Game" (Game is the name of my class). I got the tutorial from http://steigert.blogspot.com.au/2012/02/1-libgdx-tutorial-introduction.html - the link in the tutorial is the original, but it uses JOGL instead of LWJGL.

    Read the article

  • Rotate OpenGL mesh relative to camera

    - by shuall
    I have a cube in OpenGL. Its position is determined by multiplying its model matrix, the view matrix, and the projection matrix, and passing the result to the shader, as per this tutorial: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/. I want to rotate it relative to the camera. The only way I can think of to get the correct axis is to multiply the inverse of the model matrix (because that's where all the previous rotations and transforms are stored) by the view matrix by the axis of rotation (x or y). I feel like there has to be a better way, perhaps using something other than the model, view and projection matrices, or maybe I'm doing something wrong; that's what all the tutorials I've seen use. PS: I'm also trying to stick to OpenGL 4 core. Edit: if quaternions would fix my problems, could someone point me to a good tutorial/example for switching from 4x4 matrices to quaternions? I'm a little daunted by the task.
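
    For reference, a common way to express this without computing the axis at all: apply the incremental rotation in view (camera) space, i.e. M' = V^-1 * R * V * M in column-vector convention. A hedged sketch using XNA-style row-vector matrices, where the order reverses (world, view and angle are assumed names):

        // Rotate 'world' about the camera's X axis by 'angle' radians,
        // by composing the rotation in view space: world * V * R * V^-1.
        Matrix rotInView = Matrix.CreateRotationX(angle);
        world = world * view * rotInView * Matrix.Invert(view);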

    Read the article

  • Artifacts when drawing particles with alpha

    - by piotrek
    I want to draw some particles in my game, but when I draw one particle above another, the alpha channel of the one on top "clears" the previously drawn particle. I set up blending in OpenGL this way:

        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    My fragment shader for particles is very simple:

        precision highp float;
        precision highp int;

        uniform sampler2D u_maps[2];
        varying vec2 v_texture;
        uniform float opaque;
        uniform vec3 colorize;

        void main()
        {
            vec4 texColor = texture2D(u_maps[0], v_texture);
            gl_FragColor.rgb = texColor.rgb * colorize.rgb;
            gl_FragColor.a = texColor.a * opaque;
        }

    I attach a screenshot of the artifact (see the original post). Do you know what I did wrong? I use OpenGL ES 2.0.
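
    A common cause of exactly this artifact (a guess, without seeing the draw order) is writing particle depths to the depth buffer: particles drawn first then occlude the ones drawn behind them, and the blended edges cut rectangular holes. A hedged sketch of the usual mitigation, in OpenTK-style C# bindings (the same calls exist in any GL binding):

        // Keep depth *testing* on so particles sit behind solid geometry,
        // but stop particles from *writing* depth so they can't occlude each other.
        GL.Enable(EnableCap.Blend);
        GL.BlendFunc(BlendingFactor.SrcAlpha, BlendingFactor.OneMinusSrcAlpha);
        GL.DepthMask(false);      // disable depth writes
        DrawParticles();          // assumed name for the particle pass
        GL.DepthMask(true);       // restore before drawing opaque geometry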

    Read the article

  • How can I get a 2D texture to rotate like a compass in XNA?

    - by IronGiraffe
    I'm working on a small maze puzzle game and I'm trying to add a compass to make it somewhat easier for the player to find their way around the maze. The problem is that I'm using XNA's Draw method to rotate the arrow, and I don't really know how to get it to rotate properly. What I need it to do is point toward the exit from the player's position, but I'm not sure how to do that. Does anyone know how I can do this, or is there a better way?
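
    A hedged sketch of one way to do it in XNA (exitPosition, playerPosition, needleTexture and compassCenter are assumed names; if the arrow art points up rather than right, add MathHelper.PiOver2 to the angle):

        // Point the needle from the player toward the exit.
        Vector2 toExit = exitPosition - playerPosition;
        float angle = (float)Math.Atan2(toExit.Y, toExit.X);
        spriteBatch.Draw(needleTexture, compassCenter, null, Color.White,
            angle,                                                            // rotation in radians
            new Vector2(needleTexture.Width / 2f, needleTexture.Height / 2f), // spin about the center
            1f, SpriteEffects.None, 0f);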

    Read the article

  • Should I be using a game engine?

    - by Kyle
    I'm an experienced programmer, but I'm completely new to making games. I'm thinking of making an iPhone game similar to a 2D tower defense game. In the web programming world, it would be a big waste of time to build a website without some sort of web framework (e.g. Ruby on Rails). Is the same true for making games? Do people mostly use a framework/game engine to make a game? If so, what are the popular ones for iOS?

    Read the article

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm, and now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, like this:

        ---------. P2
                 |
                 |
        P1 .---------

    My agents have a velocity, a position, etc. Could you tell me how to implement avoidance for my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();
            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                //TODO
            }
            return force;
        }

    I then take all the forces returned by my boid functions and apply them to my agent. I just need to know how to do the same with my walls. Thanks for your help.
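
    A hedged sketch of one standard approach (in C# with XNA-style Vector2, mirroring the C++ signature above): project a "feeler" ahead of the agent, and for every wall the feeler gets too close to, push away along the direction from the wall to the feeler. ClosestPointOnSegment, feelerLength and wallMargin are assumed names:

        Vector2 RepulsionFromWalls(Vector2 position, Vector2 velocity, List<Wall> walls)
        {
            Vector2 force = Vector2.Zero;
            Vector2 feeler = position + Vector2.Normalize(velocity) * feelerLength; // probe ahead
            foreach (var wall in walls)
            {
                // Nearest point to the feeler on the segment wall.P1-wall.P2 (assumed helper).
                Vector2 nearest = ClosestPointOnSegment(wall.P1, wall.P2, feeler);
                float dist = Vector2.Distance(feeler, nearest);
                if (dist < wallMargin) // heading too close to this wall
                    force += Vector2.Normalize(feeler - nearest) * (wallMargin - dist);
            }
            return force;
        }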

    Read the article

  • Algorithm for procedural city generation?

    - by Zove Games
    I am planning to make a (simple) procedural city generator in Java, and I need ideas on what algorithm to use for the layout and for the actual buildings. The city will mostly have skyscrapers, nothing very complex. For the layout I already have a simple algorithm implemented:

      1. Create a Map with java.awt.Point keys and Integer values.
      2. Fill it with all the points in the city's bounds, with -1 (unassigned) as the value.
      3. Shuffle the map and assign the first 10 of the keys IDs (from 1 to 10).
      4. Loop until all points have IDs: for each point, assign it the ID of an adjacent assigned point; if two or more assigned points border it, choose one of their IDs at random.
      5. You end up with 10 random regions. Make roads along the borders of these regions.
      6. Fill the inside of each region with a randomly spaced and randomly rotated grid.

    PROBLEM: this is not the fastest way to do it. What algorithm should I use for the layout? And what should I use for each building's design? I don't even know how I'm going to do that yet (fractals, maybe). I just need some ideas, not actual code.
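
    For what it's worth, the region-growing step above is essentially a multi-source flood fill, and running it as a breadth-first search visits each cell once instead of re-scanning the whole grid every pass. A hedged sketch in C# (grid is an int[w, h] seeded with -1, with the 10 seed cells already assigned and enqueued):

        var frontier = new Queue<(int x, int y)>();
        // ... seed 10 random cells with IDs 1..10 and enqueue them ...
        while (frontier.Count > 0)
        {
            var (x, y) = frontier.Dequeue();
            foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
            {
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || grid[nx, ny] != -1)
                    continue;
                grid[nx, ny] = grid[x, y];   // inherit the neighbor's region ID
                frontier.Enqueue((nx, ny));
            }
        }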

    Read the article

  • Sorting for 2D Drawing

    - by Nexian
    Okie, I've looked through quite a few similar questions but still feel the need to ask mine specifically (I know, crazy). Anyhoo: I am drawing a game in 2D (isometric). My objects have their own arrays (i.e. Tiles[], Objects[], Particles[], etc.), and I want a draw[] array to hold anything that will be drawn. Because it is 2D, I assume I must prioritise depth over any other sorting or things will look weird. My game is turn-based, so tiles and objects won't be changing position every frame; particles probably will. So I am thinking I can populate the draw[] array (probably a vector?) with what is on-screen and have it add/remove tile, object and particle references when I pan the screen or when a tile or object is specifically moved. I have no idea yet how often I'll need to update for particles. I want to do this because my game may have many thousands of objects, and I want to iterate over as few as possible when drawing. I plan to give each element a depth value to sort by. So, my questions: Does the above method sound like a good way to deal with the actual drawing? What is the most efficient way to sort a vector? Most of the time it won't require efficiency, but for panning the screen it will, and I imagine it may happen quite often if I have many particles on screen moving across multiple tiles. For reference, my screen will be drawing about 2,800 objects at any one time. When panning, it will be adding/removing about ~200 elements every second, and each new element will need adding in the correct location based on depth.
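
    A hedged sketch of the two sorting modes this implies (shown in C#; a std::sort / std::lower_bound pair is the direct C++ equivalent; drawList, item and depthComparer are assumed names): a full depth sort for the rare global reorder, and a binary-search insert for the ~200 elements per second that arrive while panning:

        // Full sort, O(n log n) - fine occasionally for ~2,800 items.
        drawList.Sort((a, b) => a.Depth.CompareTo(b.Depth));

        // Incremental insert while panning: O(log n) search plus an O(n) shift.
        int i = drawList.BinarySearch(item, depthComparer); // depthComparer: an assumed IComparer
        drawList.Insert(i < 0 ? ~i : i, item);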

    Read the article

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using that to update a texture each frame. Everything is in place except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared toward video processing), and I am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries: by virtue of using the same resources as VLC Player it is known to be very capable, and it also renders to blocks of memory, but the API (at least of the one on CodePlex) is more a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely:

      - Renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily.
      - Super simple: all that is needed is a way to load, jump to and render a frame programmatically. Ideally it would use the system's codecs and not require an assortment of plugins.
      - Permissive license (LGPL or freer).
      - .NET bindings at least; all the better if it is natively managed.

    Can anyone suggest a lightweight .NET library that can take a video file and spit out some frames into a byte[]?

    Read the article

  • Unity: Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics and I'd like to know the right way to render some 2D figures at different points on a wider face of a 3D object. My 3D object is just a cube representing a poker table. I have 2D PNGs for player placeholders and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with one big picture containing all the placeholder figures, but that would waste memory and thus be less efficient. What do you suggest?
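
    A hedged sketch of one option in Unity (assuming the table-top texture is import-flagged readable, and px/py are the placeholder's pixel coordinates on that texture): stamp each placeholder into the face's texture once, instead of storing a big pre-composited image. Note this overwrites pixels rather than alpha-blending them:

        Color[] src = placeholder.GetPixels();                            // placeholder: a Texture2D
        tableTex.SetPixels(px, py, placeholder.width, placeholder.height, src);
        tableTex.Apply();                                                 // upload changes to the GPU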

    Read the article

  • What library should I use for 2D Geometry? [closed]

    - by Luka
    I've been working on a 2D game in Java, but found that Java just didn't cut it for me and had forced me into a lot of bad design choices, so I've decided to port all my work to C++. The main reason I've decided to change to C++ is that I had reached a point where I had 3 geometry libraries (the native one, one from the game engine and one to handle "complex" polygons), none of which worked very well together, and I couldn't keep track of them. I'm new to C++, but I know all the basics. My question is: what would be a good geometry library to use? Ideally it should handle integer and decimal data types, and have point, line, and polygon classes which are able to check for intersection and containment. Thanks in advance, Luka

    Read the article

  • iOS + cocos2d: how to account for sprite positions across the different device dimensions in a universal app?

    - by fuzzlog
    All the questions I've seen regarding iOS universal apps (with or without cocos2d) deal with how to add graphics to a universal app. My question is: how does the code need to be written to ensure that sprites appear appropriately on screen, given that an iPhone 5's resolution is not proportional to an iPad's? Is it just a bunch of "if" statements and duplicate code, or do iOS/cocos2d provide common function calls that will place sprites at an appropriate position?
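
    A hedged sketch of the usual device-independent pattern (the idea is framework-agnostic; shown in C# syntax with assumed names): express positions as fractions of the current screen size rather than absolute pixels, so one code path covers every device, with "if" statements reserved for genuinely different layouts:

        // fx, fy in [0, 1]: (0.5, 0.5) is the screen center on any device.
        Vector2 ScreenPoint(float fx, float fy) =>
            new Vector2(fx * screenSize.X, fy * screenSize.Y);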

    Read the article

  • BoundingSpheres move when they should not

    - by NDraskovic
    I have an XNA 4.0 project in which I load a file that contains the type and coordinates of items I need to draw to the screen. I also need to check whether one particular type (the only movable one) is passing in front of, or through, other items. This is the code I use to load the configuration:

        if (ks.IsKeyDown(Microsoft.Xna.Framework.Input.Keys.L))
        {
            this.GraphicsDevice.Clear(Color.CornflowerBlue);
            Otvaranje.ShowDialog();
            try
            {
                using (StreamReader sr = new StreamReader(Otvaranje.FileName))
                {
                    String linija;
                    while ((linija = sr.ReadLine()) != null)
                    {
                        red = linija.Split(',');
                        model = red[0];
                        x = red[1];
                        y = red[2];
                        z = red[3];
                        elementi.Add(Convert.ToInt32(model));
                        podatci.Add(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)));
                        sfere.Add(new BoundingSphere(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)), 1f));
                    }
                }
            }
            catch (Exception ex)
            {
                Window.Title = ex.ToString();
            }
        }

    "Otvaranje" is an OpenFileDialog, "elementi" is a List (it determines the type of item to draw), "podatci" is a List (it determines the location where items are drawn) and "sfere" is a List of BoundingSpheres. I have solved the picking algorithm (checking for ray and bounding sphere intersection) and it works fine, but the collision detection does not. While using picking, I noticed that the BoundingSpheres move even though the objects they correspond to do not. The movable object is drawn with the world1 matrix, and the static objects are drawn with the world2 matrix (world1 and world2 have the same values; I separated them so that the static elements would not move when the movable one does). The problem is that when I move the item I want, all BoundingSpheres move accordingly. How can I move only the BoundingSphere that corresponds to that particular item, and leave the rest where they are?
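
    A hedged guess at the fix (without seeing the collision test): if the test applies world1 to every sphere, all of them will follow the movable item. One alternative is to keep the stored spheres in world space and, when the movable item shifts, rebuild only its own sphere from its original center (i is the assumed index of the movable item in sfere):

        // Recompute just the movable item's sphere; the others stay untouched.
        sfere[i] = new BoundingSphere(
            Vector3.Transform(podatci[i], world1),   // center moved into world space
            sfere[i].Radius);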

    Read the article

  • How do I simplify a 2D game grid for level management while keeping its by-pixel features?

    - by Eric Thoma
    (I cross-posted this from Stack Overflow as this seems to be a more appropriate forum. I've looked around a little here and did not find an answer, so I hope this is not a recurring question.) This is a question about 2D world design. I am playing around with creating a 2D bird's-eye-view shooter game, and I am looking to make the game sleek and advanced. I hope to write physics so projectiles have momentum and knock-down properties. I am immediately running into the problem of world design: I need a way to have level files that store everything there is about a level. This is easiest with a grid of objects, but there are thin walls and other objects that don't seem to fit into a traditional grid cell. I want to be able to fit all these together so I can streamline level design, so I don't have to specify the exact pixel-specific start and end of a wall. There doesn't seem to be an obvious translation from level file to game without forcing myself into a Pac-Man-like scenario, meaning one where the game feels boxy and discrete; there is a contrast between the (relatively) smoothly moving characters and finite jumps in a grid. I would appreciate an answer that describes implementation options or points me to resources that do. I would also appreciate references to sites that teach game design. The language I am using is Java (although I would love to use C or C++, but I can never find convenient resources in those languages). Thank you for any answers. Please leave any questions in the space below; I will be able to answer them later tonight (28th Nov).
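
    A hedged sketch of one common compromise (shown in C#; the idea is language-independent): keep the coarse grid for level files, but let each cell carry per-edge wall flags and let entities hold float positions, so authoring stays cell-based while movement and collision stay pixel-smooth:

        struct Cell
        {
            public int TileId;            // what to draw
            public bool WallN, WallE;     // thin walls stored on two edges;
                                          // south/west walls belong to the neighbors
        }

        class Entity
        {
            const float CellSize = 32f;   // assumed tile size in pixels
            public float X, Y;            // free, pixel-smooth position
            public (int cx, int cy) CellUnder => ((int)(X / CellSize), (int)(Y / CellSize));
        }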

    Read the article

  • Do I need the 'w' component in my Vector class?

    - by bobobobo
    Assume you're writing matrix code that handles rotation, translation, etc. for 3D space. The transformation matrices have to be 4x4 to fit the translation component in. However, you don't actually need to store a w component in the vector, do you? Even in perspective division, you can simply compute and store w outside of the vector, and perspective-divide before returning from the method. For example:

        // post-multiply: vec2 = matrix * vector
        Vector operator*( const Matrix & a, const Vector& v )
        {
            Vector r ;
            // do matrix mult
            r.x = a._11*v.x + a._12*v.y ...
            real w = a._41*v.x + a._42*v.y ...

            // perspective divide
            r /= w ;
            return r ;
        }

    Is there a point in storing w in the Vector class?
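
    One consideration, for what it's worth: an explicit w lets the same multiply treat positions (w = 1, affected by translation) and directions such as normals (w = 0, unaffected) uniformly. A hedged sketch using XNA-style types for illustration:

        // transformed = M * (x, y, z, 1) -> rotated, scaled and translated
        static Vector4 TransformPoint(Matrix m, Vector3 p)     => Vector4.Transform(new Vector4(p, 1f), m);
        // transformed = M * (x, y, z, 0) -> rotated and scaled only (a direction)
        static Vector4 TransformDirection(Matrix m, Vector3 d) => Vector4.Transform(new Vector4(d, 0f), m);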

    Read the article

  • How can I cleanly and elegantly handle data and dependencies between classes

    - by Neophyte
    I'm working on a 2D top-down game in SFML 2, and need to find an elegant way to make everything work and fit together. Allow me to explain. I have a number of classes that inherit from an abstract base that provides a draw method and an update method. In the game loop, I call update and then draw on each class; I imagine this is a pretty common approach. I have classes for tiles, collisions, the player, and a resource manager that contains all the tiles/images/textures. Due to the way input works in SFML, I decided to have each class handle input (if required) in its update call. Up until now I have been passing in dependencies as needed; for example, in the player class, when a movement key is pressed, I call a method on the collision class to check whether the position the player wants to move to would be a collision, and only move the player if there is no collision. This works fine for the most part, but I believe it can be done better; I'm just not sure how. I now have more complex things to implement, e.g. a player walks up to an object on the ground, presses a key to pick it up/loot it, and it then shows up in the inventory. This means that a few things need to happen:

      1. On keypress, check whether the player is in range of a lootable item, else do not proceed.
      2. Find the item.
      3. Update the item's sprite texture from its default texture to a "looted" texture.
      4. Update the collision for the item: it might have changed shape or been removed completely.
      5. Update the inventory with the added item.

    How do I make everything communicate? With my current system I will end up with my classes going out of scope and method calls to each other all over the place. I could tie up all the classes in one big manager and give each one a reference to the parent manager class, but this seems only slightly better. Any help/advice would be greatly appreciated! If anything is unclear, I'm happy to expand on things.
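
    One common answer to this kind of coupling is a message/event bus, sketched here in C# with hedging (the pattern translates directly to C++; ItemLooted and the subscribing systems are assumed names). The looted-item flow becomes: the player publishes ItemLooted, and the sprite, collision, and inventory systems each subscribe independently, so none of them hold references to each other:

        using System;
        using System.Collections.Generic;

        class EventBus
        {
            readonly Dictionary<Type, List<Delegate>> handlers = new Dictionary<Type, List<Delegate>>();

            public void Subscribe<T>(Action<T> handler)
            {
                if (!handlers.TryGetValue(typeof(T), out var list))
                    handlers[typeof(T)] = list = new List<Delegate>();
                list.Add(handler);
            }

            public void Publish<T>(T message)
            {
                if (handlers.TryGetValue(typeof(T), out var list))
                    foreach (Action<T> h in list) h(message);
            }
        }

        // Usage: bus.Subscribe<ItemLooted>(e => inventory.Add(e.Item));
        //        bus.Publish(new ItemLooted(item));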

    Read the article

  • GLSL per pixel lighting with custom light type

    - by Justin
    Ok, I am having a big problem here. I just got into GLSL yesterday, so the code will be terrible, I'm sure. Basically, I am attempting to make a light that can be passed into the fragment shader (for learning purposes). I have four input values: one for the position of the light, one for the color, one for the distance it can travel, and one for the intensity. I want to find the distance between the light and the fragment, then calculate the color from there. The code I have gives me a simply gorgeous ring of light that gets twisted and widened as the matrix is modified. I love the results, but they are not even close to what I am after. I want the light to move with all of the vertices, so it is always in the same place in relation to the objects. I can easily take it from there, but getting that to work seems to be impossible with my current structure. Can somebody give me a few pointers (pun not intended)?

    Vertex shader:

        attribute vec4 position;
        attribute vec4 color;
        attribute vec2 textureCoordinates;

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        void main()
        {
            vec4 ECposition = gl_ModelViewMatrix * gl_Vertex;
            vec3 tnorm = normalize(vec3(gl_NormalMatrix * gl_Normal));

            fposition = ftransform();
            gl_Position = fposition;
            gl_TexCoord[0] = gl_MultiTexCoord0;
            fposition = ECposition;

            lightPosition = vec4(0.0, 0.0, 5.0, 0.0) * gl_ModelViewMatrix * gl_Vertex;
            lightDistance = 5.0;
            lightIntensity = 1.0;
            lightColor = vec4(0.2, 0.2, 0.2, 1.0);
        }

    Fragment shader:

        varying vec4 colorVarying;
        varying vec2 texturePosition;
        varying vec4 fposition;
        varying vec4 lightPosition;
        varying float lightDistance;
        varying float lightIntensity;
        varying vec4 lightColor;

        uniform sampler2D texture;

        void main()
        {
            float l_distance = sqrt((gl_FragCoord.x * lightPosition.x) +
                                    (gl_FragCoord.y * lightPosition.y) +
                                    (gl_FragCoord.z * lightPosition.z));
            float l_value = lightIntensity / (l_distance / lightDistance);
            vec4 l_color = vec4(l_value * lightColor.r, l_value * lightColor.g,
                                l_value * lightColor.b, l_value * lightColor.a);

            vec4 color = texture2D(texture, gl_TexCoord[0].st);
            gl_FragColor = l_color * color;
            //gl_FragColor = fposition;
        }
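
    For reference, the distance-based falloff a shader like this usually wants measures the Euclidean distance between the fragment and the light, with both positions expressed in the same space (eye space, say), rather than a product involving gl_FragCoord:

        d = || p_frag - p_light || = sqrt( (x_f - x_l)^2 + (y_f - y_l)^2 + (z_f - z_l)^2 )

    which in this shader's terms would compare fposition (already in eye space) against a light position transformed by the modelview matrix alone, not additionally multiplied by gl_Vertex.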

    Read the article

  • Java 2D World question

    - by Munkybunky
    I have a 2D world background made up of a grid of graphics, which I display on screen through an 800x600 viewport, and it all works. My question: I have the following code to convert the mouse co-ordinates to world co-ordinates, then world co-ordinates to grid co-ordinates, then grid co-ordinates to screen co-ordinates.

        // Add camerax to mouse screen co-ords to convert to world co-ords.
        int cursorx_world = (int)camerax + (int)GameInput.mousex;
        // World co-ords / blocksize give grid co-ords.
        int cursorx_grid = cursorx_world / blocksize;
        int cursorx_screen = -(int)camerax + (cursorx_grid * blocksize);

    So is there any way I can convert straight from mouse screen co-ords to screen co-ordinates?
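
    For what it's worth, the three steps collapse algebraically: screen = -camerax + blocksize * floor((camerax + mousex) / blocksize), which with integer arithmetic is just the world position snapped down to the grid and shifted back by the camera. A hedged one-liner (valid in both Java and C#, assuming non-negative world co-ordinates, since % behaves differently for negatives):

        int cursorx_screen = (int)GameInput.mousex
                           - (((int)camerax + (int)GameInput.mousex) % blocksize);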

    Read the article

  • In 3D camera math, calculate what Z depth is pixel unity for a given FOV

    - by badweasel
    I am working in iOS and OpenGL ES 2.0. Through trial and error I've figured out a frustum where, at a specific z depth, pixels drawn are 1:1 with my source textures: 1 pixel in my texture is 1 pixel on the screen. For 2D games this is good. Of course it means that I also factor in things like the size of the quad and the size of the texture. For example, if my sprite is a quad 32x32 pixels, the quad is 3.2 units wide and tall, and the texcoords are 32 divided by the size of the texture, wide and tall. Then the frustum is:

        matrixFrustum(-(float)backingWidth/frustumScale, (float)backingWidth/frustumScale,
                      -(float)backingHeight/frustumScale, (float)backingHeight/frustumScale,
                      40, 1000, mProjection);

    where frustumScale is 800 for a retina screen. Then at a distance of 800 from the camera the sprite is pixel-for-pixel the same as in Photoshop. For 3D games I sometimes still want to be able to do this, but depending on the scene I sometimes need the FOV to be different. I'm looking for a way to figure out what Z depth will achieve this same pixel unity for a given FOV. For this my mProjection is set using:

        matrixPerspective(cameraFOV, near, far, (float)backingWidth / (float)backingHeight, mProjection);

    With testing I found that at an FOV of 45.0 a Z of 38.5 is very close to pixel unity, and at an FOV of 30.0 a Z of 59.5 is about right. But how can I calculate a value that is spot on? Here's my matrixPerspective code:

        void matrixPerspective(float angle, float near, float far, float aspect, mat4 m)
        {
            //float size = near * tanf(angle / 360.0 * M_PI);
            float size = near * tanf(degreesToRadians(angle) / 2.0);
            float left = -size, right = size, bottom = -size / aspect, top = size / aspect;

            // Unused values in perspective formula.
            m[1] = m[2] = m[3] = m[4] = 0;
            m[6] = m[7] = m[12] = m[13] = m[15] = 0;

            // Perspective formula.
            m[0] = 2 * near / (right - left);
            m[5] = 2 * near / (top - bottom);
            m[8] = (right + left) / (right - left);
            m[9] = (top + bottom) / (top - bottom);
            m[10] = -(far + near) / (far - near);
            m[11] = -1;
            m[14] = -(2 * far * near) / (far - near);
        }

    And my mView is set using:

        lookAtMatrix(cameraPos, camLookAt, camUpVector, mView);

    * UPDATE *

    I'm going to leave this here in case anyone has a different solution, can explain how they do it, or can explain why this works. This is what I figured out. In my system I use a 10th-scale of units to pixels on non-retina displays and a 20th-scale on retina displays. The iPhone is 640 pixels wide on retina and 320 pixels wide on non-retina (obsolete). So if I want something to be the full screen width, I divide by 20 to get the OpenGL unit width, then divide that by 2 to get the left and right unit positions: something 32 units wide centered on the screen goes from -16 to +16. Believe it or not, I have an Excel spreadsheet do all this math for me and output all the vertex data for my sprite sheet. It's an arbitrary convention I made up: 0.1 units = 1 non-retina pixel or 2 retina pixels. I could have made it 0.01 units = 2 pixels, and someday I might switch to that, but for now it's the other. So the width of the screen in units is 32.0, which means the left-most pixel is at -16.0 and the right-most at +16.0. After messing around a bit, I figured out that if I take the [0] value of an identity modelViewProjection matrix and multiply it by 16, I get the depth required for 1:1 pixels. I don't know why, and I don't know whether the 16 is related to the screen size or just a lucky guess. But I did a test where I placed a sprite at that calculated depth and varied the FOV through all the valid values, and the object stays steady on screen with 1:1 pixels. So now I'm just calculating the unityDepth that way. If someone gives me a better answer I'll checkmark it.
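
    For what it's worth, the observed numbers match a closed form. In matrixPerspective above the FOV is applied horizontally (left/right = +-near*tan(fov/2)), so m[0] = 2*near/(right - left) = 1/tan(fov/2). A point at eye-space depth z lands at normalized device coordinate x' = x * m[0] / z, and pixel unity means the screen's half-width in units (16 here) must map exactly to x' = 1:

        z = (w/2) / tan(fov/2)        with w = 32 units

    which gives z = 38.63 at fov = 45 and z = 59.71 at fov = 30, matching the measured 38.5 and 59.5. It also explains the identity-matrix trick: with identity model and view matrices, the [0] element of the MVP is exactly 1/tan(fov/2), so multiplying it by 16 computes the same formula.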

    Read the article

  • Where must I put .xnb files in a MonoGame project using VS2010?

    - by user23899
    My problem is described in "The Content Pipeline" paragraph of http://blogs.msdn.com/b/bobfamiliar/archive/2012/08/07/windows-8-xna-and-monogame-part-3-code-migration-and-windows-8-feature-support.aspx#comments - the author describes how to fix it in VS2012 by putting the .xnb files into the \AppX\Content folder, but I use VS2010 with the MonoGame templates, and there is no folder like that. So where must I put these assets for the game to run correctly?

    Read the article

  • Snapping an angle to the closest cardinal direction

    - by Josh E
    I'm developing a 2D sprite-based game, and I'm finding that I'm having trouble making the sprites rotate correctly. In a nutshell, I've got sprite sheets for each of 5 directions (the other 3 come from just flipping the sprite horizontally), and I need to clamp the velocity/rotation of the sprite to one of those directions. My sprite class has a pre-computed list of radians corresponding to the cardinal directions, like this:

        protected readonly List<float> CardinalDirections = new List<float>
        {
            MathHelper.PiOver4,
            MathHelper.PiOver2,
            MathHelper.PiOver2 + MathHelper.PiOver4,
            MathHelper.Pi,
            -MathHelper.PiOver4,
            -MathHelper.PiOver2,
            -MathHelper.PiOver2 + -MathHelper.PiOver4,
            -MathHelper.Pi,
        };

    Here's the positional update code:

        if (velocity == Vector2.Zero)
            return;

        var rot = ((float)Math.Atan2(velocity.Y, velocity.X));
        TurretRotation = SnapPositionToGrid(rot);

        var snappedX = (float)Math.Cos(TurretRotation);
        var snappedY = (float)Math.Sin(TurretRotation);
        var rotVector = new Vector2(snappedX, snappedY);
        velocity *= rotVector;
        // ...snip

        private float SnapPositionToGrid(float rotationToSnap)
        {
            if (rotationToSnap == 0)
                return 0.0f;

            var targetRotation = CardinalDirections.First(x => (x - rotationToSnap >= -0.01 && x - rotationToSnap <= 0.01));
            return (float)Math.Round(targetRotation, 3);
        }

    What am I doing wrong here? I know that the SnapPositionToGrid method is far from what it needs to be - the .First(..) call is on purpose so that it throws on no match, but I have no idea how I would go about accomplishing this, and unfortunately Google hasn't helped much either. Am I thinking about this the wrong way, or is the answer staring me in the face?
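
    A hedged sketch of an alternative that avoids the lookup table entirely (C#/XNA): snapping to a multiple of 45 degrees is just divide, round, multiply, and it never throws because every angle has a nearest multiple:

        // Snap an angle in radians to the nearest of the 8 cardinal/diagonal directions.
        private static float SnapToCardinal(float rotation)
        {
            return (float)Math.Round(rotation / MathHelper.PiOver4) * MathHelper.PiOver4;
        }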

    Read the article
