Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.


  • I need advice on creating 3D walk cycles in XNA

    - by Zetar
    I want to purchase a number of 3D models from TurboSquid and animate them in an XNA game. I wrote a lot of games from 1985-1999 and have recently become involved with XNA. Now I would like to port one of my old games to the Xbox. I do have a background in 3D animation, but that was years ago. What is the current method for animating a walk cycle with a 3D model and using it inside XNA? Is there a book, software or a tutorial that you can recommend? Thanks in advance, and sorry for such a broad and currently naive question.

    Read the article

  • Free movement in a tile-based isometric game

    - by xtr486
    Is there a reasonably easy way to implement free movement in a tile-based isometric game? By that I mean the player wouldn't just instantly jump from one tile to another, or be "snapped" to the grid (for example, if the movement between tiles were animated but you were locked from doing anything before the animation finishes). I'm a complete beginner at anything related to game programming, but with the help of this site and some other resources it was quite easy to do the basic stuff; I just haven't been able to find any useful resources for this particular problem. Currently I've improvised something close to this: http://jsfiddle.net/KwW5b/4/ (WASD movement). The idea for the movement was to use the mouse map to detect when the player has moved to a different tile and then flip the offsets, and for the most part it works correctly (each corner makes the player move to the wrong location: see http://www.youtube.com/watch?v=0xr15IaOhrI, probably because I couldn't get the full mouse map working properly), but I have no illusions that it is even close to a good/sane solution. Anyway, it's mostly just to demonstrate what kind of thing I'd like to implement.

    Read the article

  • What's the difference between Pygame's Sound and Music classes?

    - by Southpaw Hare
    What are the key differences between the Sound and Music classes in Pygame? What are the limitations of each? In which situations would you use one or the other? Is there a benefit to using them in an unintuitive way, such as using Sound objects to play music files or vice versa? Are there specific issues with channel limitations, and do one or both risk being dropped from their channel unreliably? What are the risks of playing music as a Sound?
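
    For reference, here is a minimal sketch of the two APIs being compared (the file names are placeholders): pygame.mixer.Sound decodes the whole file into memory and plays it on one of the mixer's channels, while pygame.mixer.music streams a single track from disk and is controlled through module-level functions rather than a Channel object.

        import pygame

        pygame.init()
        pygame.mixer.init()

        # Sound: fully decoded into memory, played on one of the mixer's channels.
        shot = pygame.mixer.Sound("shot.wav")    # placeholder file name
        channel = shot.play()                    # returns the Channel it was assigned to

        # Music: streamed from disk, one track at a time, no Channel object.
        pygame.mixer.music.load("theme.ogg")     # placeholder file name
        pygame.mixer.music.play(-1)              # -1 loops indefinitely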

    Read the article

  • Complex, yet simple crafting system model

    - by KatShot
    I'm working on an arcade shooter/slasher, and the main logline is "Kick 'em with everything you want". There are not many enemies in the GDD; the main focus is on tons of weapons and gadgets to cause mayhem. To get a weapon, you need to craft it, and right now the crafting system looks simple:
    1) You have three slots for weapon parts (A, B, C).
    2) You collect misc weapon parts, and when you have at least one for every slot, you can craft a weapon (for example, if you have A1, B1, B2, B3 and C1, you can craft these models: A1B1C1, A1B2C1, A1B3C1).
    To me this crafting system is too simple, because weapon parts will just fall from the top of the screen often enough. That's why I'm thinking about adding some more levels to the crafting system, like resources (collect 10 scrap pieces to make part A1 or C3), etc. My question is: how can I add some more complex, yet still simple and transparent, levels to the crafting system?
    Update: For example, in Minecraft or Terraria the first 5-10 crafting recipes are quite transparent and simple IMHO. But then it turns into a huge mess trying to understand how to craft this or that (for example, a fishing rod).

    Read the article

  • Possible to map mouse coordinates to isometric tiles with this coordinate system?

    - by plukich
    I'm trying to implement mouse interaction in a 2D isometric game, but I'm not sure if it's possible given the coordinate system used for the tile maps in the game. I've read some helpful articles like this one: How to convert mouse coordinates to isometric indexes? However, this game's coordinate system is "jagged", for lack of a better word, and looks like this: Is it even possible to map mouse coordinates to this successfully, since the y-axis can't be drawn on this tile map as a straight line? I've thought about doing odd-y-value translations and even-y-value translations with two different matrices, but that only makes sense going from tile to screen, not from screen to tile.
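
    One way to sidestep the missing straight y-axis is to not invert the transform at all: compute a rough neighbourhood from the mouse position and then point-test each candidate diamond. A sketch, assuming a staggered layout where odd rows are shifted right by half a tile; TILE_W, TILE_H and pickTile are made-up names:

        #include <cmath>

        const int TILE_W = 64;   // diamond width in pixels (assumed)
        const int TILE_H = 32;   // diamond height in pixels (assumed)

        struct Tile { int x, y; };   // map indices; (-1, -1) means no hit

        // Tile (tx, ty) is a TILE_W x TILE_H diamond whose bounding box starts at
        // (tx * TILE_W + (ty & 1) * TILE_W / 2, ty * TILE_H / 2).
        Tile pickTile(int mouseX, int mouseY)
        {
            int roughX = mouseX / TILE_W;
            int roughY = mouseY / (TILE_H / 2);
            for (int ty = roughY - 2; ty <= roughY + 1; ++ty) {
                for (int tx = roughX - 1; tx <= roughX + 1; ++tx) {
                    if (tx < 0 || ty < 0) continue;
                    float cx = tx * TILE_W + (ty & 1) * (TILE_W / 2.0f) + TILE_W / 2.0f;
                    float cy = ty * (TILE_H / 2.0f) + TILE_H / 2.0f;
                    float dx = std::fabs(mouseX - cx) / (TILE_W / 2.0f);
                    float dy = std::fabs(mouseY - cy) / (TILE_H / 2.0f);
                    if (dx + dy <= 1.0f)       // inside this tile's diamond
                        return { tx, ty };
                }
            }
            return { -1, -1 };
        }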

    Read the article

  • What do you use to create sprite graphics? [closed]

    - by SimpleRookie
    Possible Duplicate: What tools do you use for 2D art/sprite creation? What do you folks suggest for creating sprite graphics and sprite sheets? I fiddle with Pixelformer and Tile Studio. Pixelformer has a kickin' interface; it is quick and easy to make graphics, but a bit cumbersome if you want to make a sprite map. Tile Studio is an interesting mix of tiles and maps, but it is a bit buggy and basic. The Adobe programs just don't really seem to handle tiny graphics well. (There is a previous posting of this question, but it is a year old and I was hoping for further/updated input from the community.)

    Read the article

  • (Android) How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer with OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory this works like a charm, but somehow the coordinates are off. For testing I drew something at (0, 0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:
        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];
        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer
        gl.glGenTextures(1, renderTex, 0); // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
        texBuffer = ByteBuffer.allocateDirect(buf.length * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);
        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);
    Before I draw, I do this:
        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);
        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);
        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
    I set texW = 1024 and texH = 512. When I render this texture full-screen, with a lightmask (size 200x200) placed at (0, 0) and at (texW/2, texH/2), you can see that the coordinate system doesn't seem to start at (0, 0): that light overlaps the screen edge, and the images are not drawn as squares (my lightcone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
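
    One thing the snippet does not show is the viewport and projection in effect while the FBO is bound. A common cause of offset, stretched offscreen results is reusing the screen-sized viewport for a 1024x512 target, so here is a hedged sketch of matching them to the FBO. This is an assumption about the cause, not a confirmed fix, and screenW/screenH are placeholders for your surface size:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glViewport(0, 0, texW, texH);            // match the FBO, not the screen
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrthof(0, texW, texH, 0, -1, 1);       // pixel coordinates, origin at top-left
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
        // ... draw the lightmask here ...
        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
        gl.glViewport(0, 0, screenW, screenH);      // restore the on-screen viewport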

    Read the article

  • Designing for visually impaired gamers

    - by Aku
    Globally the number of people of all ages visually impaired is estimated to be 285 million, of whom 39 million are blind. — World Health Organisation, 2010. (That's 4.2% and 0.6% of the world population.) Most videogames put a strong emphasis on visuals in their content delivery. Visually impaired gamers are largely left out. How do I design a game to be accessible to visually impaired gamers?

    Read the article

  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I actually use? And does it matter whether I'm using DirectX or OpenGL? Edit: The final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. Sure, I'll only use w*h vertices in the indexing, and I know there is some memory wasted there, but I want to know if it also affects FPS or not. (Note that the mesh is only generated once each time you play the game.)

    Read the article

  • How to pause and unpause the animation of a sprite?

    - by user1609578
    My game has a sprite representing a character. When the character picks up an item, the sprite should stop moving for a period of time. I use a CCBezier action to make the sprite move, like this: sprite->runAction(x). Now I want the sprite to stop its current action (the movement) and later resume it. I can make the sprite stop by using sprite->stopAction(x), but if I do that, I can't resume the movement. How can I do that?
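
    Assuming this is cocos2d-x (which the CCBezier action suggests), one common approach is to pause the node's actions rather than stop them; pausing keeps the running action and its elapsed time, so resuming continues the curve where it left off. A sketch:

        // when the item is picked up:
        sprite->pauseSchedulerAndActions();

        // when the effect ends (e.g. from a delayed callback or timer):
        sprite->resumeSchedulerAndActions();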

    Read the article

  • Rotating object around moving object/player in 2D

    - by Boston
    I am trying to implement a camera that rotates the world around the player. I have found many solutions online for rotating an object about the origin, or about an arbitrary point: translate the point to be rotated about to the origin, perform the rotation, translate back, then draw. I have gotten this working for rotation around the origin as well as around a fixed point, and rotating objects around the player works as well, provided the player does not move. However, if the objects are rotated around the player by some non-zero angle and the player then moves, the rotated objects move as well. I have probably done a poor job explaining this, so here's an image: http://i.imgur.com/1n63iWR.gif And here's the code for the behavior:
        renderx = (Ox - Px) * cos(camAngle) - (Oy - Py) * sin(camAngle) + Px;
        rendery = (Ox - Px) * sin(camAngle) + (Oy - Py) * cos(camAngle) + Py;
    where (Ox, Oy) is the actual position of the object to be rotated and (Px, Py) is the actual position of the player. Any ideas? I am using C++ with SDL 2.0.
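
    For comparison, here is a sketch of a complete world-to-screen camera transform (C++; screenCX/screenCY, i.e. the screen centre, are assumptions, not from the post). The point is that everything drawn, the player included, goes through the same function, with the player's current position as the pivot every frame, so moving the player never shifts already-rotated objects relative to each other:

        #include <cmath>

        struct Vec2 { float x, y; };

        Vec2 worldToScreen(Vec2 world, Vec2 player, float camAngle,
                           float screenCX, float screenCY)
        {
            float dx = world.x - player.x;   // position relative to the player
            float dy = world.y - player.y;
            float rx = dx * std::cos(camAngle) - dy * std::sin(camAngle);
            float ry = dx * std::sin(camAngle) + dy * std::cos(camAngle);
            return { rx + screenCX, ry + screenCY };   // player always at the screen centre
        }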

    Read the article

  • How can a pygame image be colored?

    - by Juicy
    I'm writing a 2d particle system for a game in Pygame[1]. For the particles, I have an image surface loaded from a file -- basically a white primitive drawn over a transparent background. I'd like the particle engine to emit variously colored particles, but I'm not sure how to tell Pygame to color the surface. I've looked through what passes for documentation, but I'm having trouble finding anything. [1] Yeah, I don't really like Pygame, but my course insists I write this project in Python.
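
    One common approach for exactly this case (a white shape on a transparent background) is a multiplicative fill: copy the surface and multiply it by the tint colour, so white pixels become the colour while the alpha channel is preserved. A sketch; the file name and tint values are placeholders:

        import pygame

        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        particle = pygame.image.load("particle.png").convert_alpha()   # white on transparent

        def tinted(surface, color):
            """Return a copy of `surface` multiplied by `color`."""
            copy = surface.copy()
            copy.fill(color + (255,), special_flags=pygame.BLEND_RGBA_MULT)
            return copy

        red_particle = tinted(particle, (255, 64, 64))
        screen.blit(red_particle, (100, 100))
        pygame.display.flip()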

    Read the article

  • Game 30% done on HTML5. Maybe it was a bad idea. Should I change to Unity3d? [on hold]

    - by Dokkat
    I'm creating a 3D game in HTML5. It's 30% complete, and the hard part is already coded. The server runs on node.js. Now I'm realizing that maybe it was not a wise choice, because:
    Three.js still has many bugs, and I don't see the same thing on every machine; each browser and OS can give different results. I'm afraid my clients will have a lot of trouble getting my game to run properly.
    I have tons of sprites and models in my game, and I wonder if my clients will have to load them all again every time they want to play.
    I wonder if a node.js server will be fast enough to handle it, and I'm afraid it won't be scalable.
    What would you advise me? Should I continue and finish the game in HTML5, or is it better to remake it with something else, like Unity3d for the client and (what?) for the server?

    Read the article

  • How do I connect the seams between my terrain?

    - by gnomgrol
    I'm using C++ and D3D11 and I'm trying to create a (pretty) large terrain, let's say 4096x4096, maybe larger. I've got the basics of terrain creation working and have already split the terrain into chunks. But when I render them (every chunk has its own vertex and index buffer, as well as its own heightmap), there are still little pieces missing between them. I've read a lot about LOD (level of detail) and geometry mipmaps (GMM), but I can't really implement the theory I've read. At the moment it looks like this: I could really use some help; everything is welcome. If you have some good tutorials on any of this, please share them.
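
    Independently of LOD, a frequent cause of such gaps is that each chunk's private heightmap disagrees with its neighbour along the shared edge, so the border vertices end up at different heights. A sketch of the usual fix: build every chunk from one global heightmap and sample the shared border row/column twice, so neighbouring chunks get identical edge positions (the names and chunk size here are made up for illustration):

        #include <vector>

        const int CHUNK = 64;                    // quads per chunk edge (assumed)

        struct Vertex { float x, y, z; };

        std::vector<Vertex> buildChunk(int cx, int cz,
                                       const std::vector<float>& globalHeights,
                                       int mapSize /* height samples per side */)
        {
            std::vector<Vertex> verts;
            verts.reserve((CHUNK + 1) * (CHUNK + 1));
            for (int z = 0; z <= CHUNK; ++z)      // note <=: the last row/column is
                for (int x = 0; x <= CHUNK; ++x)  // the same sample the neighbour uses
                {
                    int gx = cx * CHUNK + x;
                    int gz = cz * CHUNK + z;
                    float h = globalHeights[gz * mapSize + gx];
                    verts.push_back({ float(gx), h, float(gz) });
                }
            return verts;
        }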

    Read the article

  • Change density of the body dynamically

    - by Siddharth
    In my game, I want to change the density of a body when it collides with other objects. I found something like the following to change the density, but I could not find any hint about what to do after that, so could someone please help?
        Fixture fixture = goldenBoxArrayList.get(i)
                .getGoldenBoxBody()
                .getFixtureList().get(0);
        fixture.setDensity(0.5f);
    After setting the fixture's density I was not able to apply it to the body.
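
    Assuming a Box2D-based engine (AndEngine and libgdx expose the same API), changing a fixture's density does not recompute the body's mass by itself; the body has to be told to rebuild its mass data. A sketch continuing the snippet above:

        Fixture fixture = goldenBoxArrayList.get(i)
                .getGoldenBoxBody()
                .getFixtureList().get(0);
        fixture.setDensity(0.5f);
        fixture.getBody().resetMassData();   // recompute mass/inertia from the new density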

    Read the article

  • How to detect and collide two elastic line segments?

    - by Tautrimas
    There are 4 moving physical nodes in 3D space. They are paired by two elastic line segments / strings (1-2 and 3-4). Part I: How do I detect a collision between the two segments? Part II: At the moment of collision, a fifth node is created at the intersection point, and now you have a force-based graph. The 5th node (the bend point) can slide along the strings, as it would in the real world. Given the new coordinates of the 4 nodes, how do I calculate the position of the 5th node on the next frame? I assume the string force on the nodes to be F = -k * x, where x is the string length. All I have come up with is that the force between 5 and 1 equals the force between 5 and 2 (and likewise for 3 and 4). What other properties are there?
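
    For Part I, two segments in 3D almost never intersect exactly, so "collision" is usually tested as "closest distance below some radius". A sketch of the standard closest-point-between-segments computation (C++; assumes neither segment has zero length; the struct and function names are made up):

        #include <algorithm>
        #include <cmath>

        struct Vec3 {
            float x, y, z;
            Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
            Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
            Vec3 operator*(float s)       const { return { x * s, y * s, z * s }; }
        };
        float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Shortest distance between segment p1-p2 and segment p3-p4.
        float segmentDistance(Vec3 p1, Vec3 p2, Vec3 p3, Vec3 p4)
        {
            Vec3 d1 = p2 - p1, d2 = p4 - p3, r = p1 - p3;
            float a = dot(d1, d1), e = dot(d2, d2);
            float b = dot(d1, d2), c = dot(d1, r), f = dot(d2, r);
            float denom = a * e - b * b;                         // always >= 0
            float s = (denom > 1e-6f)
                      ? std::clamp((b * f - c * e) / denom, 0.0f, 1.0f)
                      : 0.0f;                                    // segments nearly parallel
            float t = (b * s + f) / e;
            if (t < 0.0f)      { t = 0.0f; s = std::clamp(-c / a,      0.0f, 1.0f); }
            else if (t > 1.0f) { t = 1.0f; s = std::clamp((b - c) / a, 0.0f, 1.0f); }
            Vec3 c1 = p1 + d1 * s, c2 = p3 + d2 * t;
            Vec3 d = c1 - c2;
            return std::sqrt(dot(d, d));
        }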

    Read the article

  • Why do I have to divide the origin of a quad by 4 instead of 2?

    - by vinzBad
    I'm currently transitioning from C#/XNA to C#/OpenTK, but I'm getting stuck on the basics. I have this Sprite class:
        public static bool EnableDebugDraw = true;
        public float X;
        public float Y;
        public float OriginX = 0;
        public float OriginY = 0;
        public float Width = 0.1f;
        public float Height = 0.1f;
        public Color TintColor = Color.Red;
        float _layerDepth = 0f;

        public void Render()
        {
            Vector2[] corners =
            {
                new Vector2(X - OriginX, Y - OriginY),                   // top left
                new Vector2(X + Width - OriginX, Y - OriginY),           // top right
                new Vector2(X + Width - OriginX, Y + Height - OriginY),  // bottom right
                new Vector2(X - OriginX, Y + Height - OriginY)           // bottom left
            };
            GL.Color3(TintColor);
            GL.Begin(BeginMode.Quads);
            {
                for (int i = 0; i < 4; i++)
                    GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth);
            }
            GL.End();

            if (EnableDebugDraw)
            {
                GL.Color3(Color.Violet);
                GL.PointSize(3);
                GL.Begin(BeginMode.Points);
                {
                    for (int i = 0; i < 4; i++)
                        GL.Vertex2(corners[i]);
                }
                GL.End();

                GL.Color3(Color.Green);
                GL.Begin(BeginMode.Points);
                GL.Vertex2(X + OriginX, Y + OriginY);
                GL.End();
            }
        }
    With the following setup I try to set the origin of the quad to the middle of the quad:
        _sprite.OriginX = _sprite.Width / 2;
        _sprite.OriginY = _sprite.Height / 2;
    but this sets the origin to the upper right corner of the quad, so I have to use
        _sprite.OriginX = _sprite.Width / 4;
        _sprite.OriginY = _sprite.Height / 4;
    However, this is not the intended behaviour. Could you advise me how to fix this?

    Read the article

  • Unity, Unrealistic Sphere On Inclined Plane

    - by user1086516
    So I am trying to model a ball rolling down an inclined surface in Unity, based on what I am observing in real life, but it is still quite far off:
    In Unity it takes the ball about 3 seconds to travel from one place to another specified place, where in real life it only takes 1 second.
    The ball isn't as quick to react to the incline as in real life (even though I have tried giving the ball and surface low or zero friction values).
    The ball does not accelerate nearly as fast as it does in real life.
    What do I do to give the ball more realistic behavior? I have tried messing around with mass, physics materials, drag, and angular drag on the ball and surface, but it doesn't seem to be helping.

    Read the article

  • What is the correct install process to set up Node.js with the Windows Azure Emulator

    - by PazoozaTest Pazman
    This question is related to this question: Node.js running under IIS Express keeps crashing, for which I need help reinstalling and getting node.js up and running in the Windows emulator. I am reinstalling my machine: a Toshiba laptop, 2 GB RAM, 32-bit processor. What is the correct procedure, from start to finish, to get node.js development working? So far nothing has worked, and the emulator (IIS Express) worker process keeps crashing; no matter how many instances, they all end up crashing. Up until two weeks ago my node development was working fine, but I had to do a reinstall, and since then I haven't been doing any node.js development on the Windows emulator because the latest June 2012 Azure SDK for Node.js is buggy. These are the steps I have taken:
    1) Reformat HD
    2) Insert Windows 7 N SP1 CD
    3) Reboot machine into CD installation
    4) Follow and wait until Windows 7 is installed
    5) Run Add/Remove Programs + enable IIS + IIS management tools
    6) Run Windows Update (installed about 53 updates)
    7) Go here: http://www.windowsazure.com/en-us/develop/nodejs/
    8) Click Windows Installer June 2012 and install Windows Azure SDK for Node.js - June 2012
    9) Run Azure PowerShell
    10) Navigate to c:\node\testSite\webrole1
    11) Launch the site: start-azureemulator -launch
    12) Play around on the website (then crash!)
    Problem signature:
        Problem Event Name: APPCRASH
        Application Name: iisexpress.exe
        Application Version: 8.0.8298.0
        Application Timestamp: 4f620349
        Fault Module Name: iiscore.dll
        Fault Module Version: 8.0.8298.0
        Fault Module Timestamp: 4f63b65c
        Exception Code: c0000005
        Exception Offset: 00021767
        OS Version: 6.1.7601.2.1.0.256.28
        Locale ID: 1033
        Additional Information 1: f66d
        Additional Information 2: f66d807b515d6b2dc6f28f66db769a01
        Additional Information 3: 7b2f
        Additional Information 4: 7b2f6797d07ebc2c23f2b227e779722e
    Am I missing a step in my reinstall process? Do I have all the required files to do node.js Windows Azure emulator development? Why is IIS Express crashing all the time? Can I still do node.js Windows Azure emulator development without using IIS Express, and instead use the local IIS 7.x that ships with my Windows 7 N (SP1)?

    Read the article

  • How many threads should an Android game use?

    - by kvance
    At minimum, an OpenGL Android game has a UI thread and a renderer thread created by GLSurfaceView. Renderer.onDrawFrame() should be doing a minimum of work to get the highest FPS. The physics, AI, etc. don't need to run every frame, so we can put those in another thread. Now we have:
    Renderer thread - update animations and draw polys
    Game thread - game logic and periodic physics, AI, etc. updates
    UI thread - Android UI interaction only
    Since you don't ever want to block the UI thread, I run one more thread for the game logic. Maybe that's not necessary, though? Is there ever a reason to run game logic in the renderer thread?

    Read the article

  • What are the challenges and benefits of writing games with a functional language?

    - by McMuttons
    While I know that functional languages aren't the most commonly used for writing games, there are a lot of benefits associated with them that seem like they would be interesting in any programming context. The ease of parallelization in particular seems like it could be very useful as the focus moves toward more and more processors. Also, with F# as a new member of the .NET family, it can be used directly with XNA, for example, which lowers the threshold quite a bit compared to going with Lisp, Haskell, Erlang, etc. If anyone has experience writing games with functional code, what have turned out to be the positives and negatives? What was it suited for, and what not? Edit: Finding it hard to decide that there's a single good answer for this, so it's probably better suited as a community wiki post.

    Read the article

  • C++ Parallel Asynchronous task

    - by Doodlemeat
    I am currently building a randomly generated terrain game where terrain is created automatically around the player. I am experiencing lag while generation is active, as I am running quite heavy tasks for post-processing and creating physics bodies. So I thought of using a parallel, asynchronous task to do the post-processing for me, but I have no idea how to do that. I have searched for C++ std::async, but I believe that is not what I want: in the examples I found, a task returns something, whereas I want the task to modify objects in the main program. This is what I want:
        // Main program
        // Chunks that need to be processed.
        // NOTE! These chunks are already generated, but need post-processing only!
        std::vector<Chunk*> unprocessedChunks;
    And then my task could look something like this, running as a loop that constantly checks whether there are chunks to process:
        // Asynced task
        if (unprocessedChunks.size() > 0)
        {
            processChunk(unprocessedChunks.pop());
        }
    I know it's nowhere near as easy as I wrote it, but it would be a huge help if you could push me in the right direction. In Java, I could type something like this: asynced_task = startAsyncTask(new PostProcessTask()); and that task would run until I do this: asynced_task.cancel();
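
    Below is a sketch of the pattern described above: a background worker thread draining a shared, mutex-protected queue of chunks (C++11, standard library only). Chunk and processChunk are the poster's names; everything else is made up for illustration, and processChunk is stubbed out so the sketch compiles on its own:

        #include <atomic>
        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <thread>

        struct Chunk { /* terrain data */ };
        void processChunk(Chunk* c) { /* heavy post-processing goes here */ }

        std::queue<Chunk*>      unprocessedChunks;
        std::mutex              queueMutex;
        std::condition_variable queueCv;
        std::atomic<bool>       running{true};

        void workerLoop()                      // runs on the background thread
        {
            while (running) {
                Chunk* chunk = nullptr;
                {
                    std::unique_lock<std::mutex> lock(queueMutex);
                    queueCv.wait(lock, [] { return !unprocessedChunks.empty() || !running; });
                    if (!running) break;
                    chunk = unprocessedChunks.front();
                    unprocessedChunks.pop();
                }
                processChunk(chunk);           // heavy work happens outside the lock
            }
        }

        // Main thread: hand a freshly generated chunk to the worker.
        void enqueueChunk(Chunk* chunk)
        {
            {
                std::lock_guard<std::mutex> lock(queueMutex);
                unprocessedChunks.push(chunk);
            }
            queueCv.notify_one();
        }

        int main()
        {
            std::thread worker(workerLoop);
            enqueueChunk(new Chunk);           // in the game this happens as terrain is generated
            // ... run the game loop ...
            running = false;                   // the equivalent of the Java cancel()
            queueCv.notify_all();
            worker.join();
        }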

    Read the article

  • Shared pointers causing weird behaviour

    - by Setzer22
    I have the following code in SFML 2.1. ResourceManager class:
        shared_ptr<Sprite> ResourceManager::getSprite(string name)
        {
            shared_ptr<Texture> texture(new Texture);
            if (!texture->loadFromFile(resPath + spritesPath + name))
                throw new NotSuchFileException();
            shared_ptr<Sprite> sprite(new Sprite(*texture));
            return sprite;
        }
    Main method (I'll omit most of the irrelevant code):
        shared_ptr<Sprite> sprite = ResourceManager::getSprite("sprite.png");
        ...
        while (renderWindow.isOpen())
            renderWindow.draw(*sprite);
    Oddly enough, this makes my sprite render completely white, but if I do this instead:
        shared_ptr<Sprite> ResourceManager::getSprite(string name)
        {
            Texture* texture = new Texture;   // <------- from shared pointer to raw pointer
            if (!texture->loadFromFile(resPath + spritesPath + name))
                throw new NotSuchFileException();
            shared_ptr<Sprite> sprite(new Sprite(*texture));
            return sprite;
        }
    it works perfectly. So what's happening here? I assumed the shared pointer would work just like a pointer. Could it be that it's getting deleted? My main method keeps a reference to the sprite, so I don't really understand what's going on here. EDIT: I'm perfectly aware that deleting the sprite won't delete the texture, and that this creates a memory leak I'd have to handle; that's why I'm trying to use smart pointers in the first place...
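
    sf::Sprite stores only a pointer to its texture, so if the shared_ptr<Texture> above is the texture's sole owner, the texture is destroyed when getSprite returns and the sprite falls back to rendering a white quad; the raw-pointer version "works" only because it leaks the texture. One way to keep the texture alive for as long as it is needed is to cache it in the manager. A sketch (the cache layout and error type are illustrative, and the resource-path handling from the post is simplified away):

        #include <SFML/Graphics.hpp>
        #include <map>
        #include <memory>
        #include <stdexcept>
        #include <string>

        class ResourceManager {
        public:
            static std::shared_ptr<sf::Sprite> getSprite(const std::string& name)
            {
                auto& texture = textures()[name];
                if (!texture) {
                    texture = std::make_shared<sf::Texture>();
                    if (!texture->loadFromFile(name))        // prepend your resource path here
                        throw std::runtime_error("No such file: " + name);
                }
                // The sprite references the cached texture, which the map keeps alive.
                return std::make_shared<sf::Sprite>(*texture);
            }

        private:
            static std::map<std::string, std::shared_ptr<sf::Texture>>& textures()
            {
                static std::map<std::string, std::shared_ptr<sf::Texture>> cache;
                return cache;
            }
        };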

    Read the article

  • RPG Equipped Item System

    - by Jimmt
    I'm making a 2D RPG with libgdx and Java. I have an Inventory class with an Array of Items, and now I want to be able to equip items on the player. Would it be more manageable to:
    have every item carry an "equipped" boolean flag,
    have an "equipped" array in the player class, or
    have individual equipped fields in the player class, e.g.
        private Item equippedWeapon;
        private Item equippedArmor;

        public void equipWeapon(Item weapon) {
            equippedWeapon = weapon;
        }
    Or just another way completely? Thanks.
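
    One more option worth comparing against the three above: key the equipped items by a slot enum, which avoids both per-item flags and a separate field per slot. A sketch in plain Java (EquipSlot, Item and Equipment are illustrative names, not from the post):

        import java.util.EnumMap;
        import java.util.Map;

        enum EquipSlot { WEAPON, ARMOR, ACCESSORY }

        class Item {
            final String name;
            final EquipSlot slot;
            Item(String name, EquipSlot slot) { this.name = name; this.slot = slot; }
        }

        class Equipment {
            private final Map<EquipSlot, Item> equipped = new EnumMap<>(EquipSlot.class);

            /** Equip the item in its slot; returns whatever was there before (or null). */
            Item equip(Item item) { return equipped.put(item.slot, item); }

            Item get(EquipSlot slot) { return equipped.get(slot); }
        }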

    Read the article
