Search Results

Search found 35343 results on 1414 pages for 'development tools'.

  • Character coding / programming

    - by Jery
    Lately I have tried a few times to create characters for some games, but at a certain point (especially when collision detection came in) everything became messy and the interaction between characters, the world and certain items had a lot of bugs. So here is my question: how do you usually keep track of the actions your character is allowed to do, or, more generally, do you have any links or advice on how to set up a character efficiently? I'm working on a character right now who should at least be able to run, jump, pick up items and use different fighting animations. Most ideas I have come up with so far use some kind of action.priority / action.duration system to determine what is possible and what is not, or an "action manager" which defines, for every action, what is possible from that action on, but it all doesn't work that well together.
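
    A minimal sketch of the action.priority / action.duration idea mentioned above (C++; the class and member names are invented for illustration, not taken from any engine): a new action is only accepted if the current one has finished or the new one outranks it.

        #include <string>

        // Hypothetical action record: priority and duration are the two fields the
        // question already mentions; everything else is made up for the sketch.
        struct Action {
            std::string name = "idle";
            int priority     = 0;      // higher wins
            float duration   = 0.0f;   // seconds the action locks the character
            float elapsed    = 0.0f;
        };

        class ActionManager {
        public:
            // Accept the new action only if the current one is finished or outranked.
            bool request(const Action& next) {
                bool currentDone = current.elapsed >= current.duration;
                if (!currentDone && next.priority <= current.priority) return false;
                current = next;
                return true;
            }
            void update(float dt) { current.elapsed += dt; }
            const Action& active() const { return current; }
        private:
            Action current;            // starts as "idle", priority 0, already "done"
        };

        // Usage idea: a hit reaction {"hit", 3, 0.4f} interrupts running {"run", 1, 0.0f},
        // but a new "run" request is rejected until the hit reaction's 0.4 s are over.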

  • Does Windows 8 still support DirectX 9?

    - by SullY
    Does Windows 8 still support DirectX 9? I ask because I was looking through some samples written in C++ and DirectX 9 made for Windows 8, and they weren't written the way I know it (see http://directxtutorial.com/Lesson.aspx?lessonid=111-4-2). For example, initialising the device with COM smart pointers: ComPtr<ID3D11Device1> dev; ComPtr<ID3D11DeviceContext1> devcon; It's just weird, because I know it the old way: ID3D11Device *dev; ID3D11DeviceContext *devcon; (I hope you understand what I mean.) I hope it hasn't changed completely now that they have released their new OS.
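
    For what it's worth, the interfaces quoted (ID3D11Device1, ID3D11DeviceContext1) are Direct3D 11.1, not DirectX 9; ComPtr is just a COM smart pointer that calls Release() automatically. A small sketch of how the two styles relate (assumes the Windows 8 SDK headers and d3d11.lib; error handling trimmed):

        #include <d3d11_1.h>
        #include <wrl/client.h>
        #pragma comment(lib, "d3d11.lib")
        using Microsoft::WRL::ComPtr;

        int main()
        {
            // Same call as with raw pointers; ComPtr::GetAddressOf() hands out the
            // ID3D11Device** / ID3D11DeviceContext** the C-style API expects.
            ComPtr<ID3D11Device>        dev;
            ComPtr<ID3D11DeviceContext> devcon;
            HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                           nullptr, 0, D3D11_SDK_VERSION,
                                           dev.GetAddressOf(), nullptr, devcon.GetAddressOf());

            // The "1" interfaces are obtained by QueryInterface; ComPtr::As wraps that.
            ComPtr<ID3D11Device1>        dev1;
            ComPtr<ID3D11DeviceContext1> devcon1;
            if (SUCCEEDED(hr)) {
                dev.As(&dev1);
                devcon.As(&devcon1);
            }
            return 0;   // no Release() calls needed: the ComPtrs clean up here
        }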

  • Low CPU/Memory/Memory-bandwidth Pathfinding (maybe like in Warcraft 1)

    - by Valmond
    Dijkstra and A* are all nice and popular, but what kind of algorithm was used in Warcraft 1 for pathfinding? I remember that the enemy could get trapped in bowl-like caverns, which means there were (most probably) no full-path calculations from start to end. If I recall correctly, the algorithm could be something like this: A) move towards the enemy until you succeed or hit a wall; B) if blocked by a wall, follow the wall until you can move towards the enemy without being blocked, and then do A) again. But I'd like to know, if someone knows :-) [edit] As explained to Byte56, I'm searching for a low CPU/memory/memory-bandwidth algorithm and wanted to know whether Warcraft had some special secrets to deliver (I've never seen that kind of pathfinding elsewhere); I hope that is more in keeping with the Stack Exchange rules.
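
    A toy sketch of that two-phase idea in C++ (not Warcraft's actual code; the map and tile size are invented): move straight at the target, and when the direct step is blocked, try directions rotated further and further away from the desired one. Note that a greedy rule like this is exactly the kind of scheme that gets trapped in bowl-shaped caverns, which matches the behaviour described above.

        // Tiny hard-coded map so the sketch is self-contained: '.' walkable, '#' wall.
        const int W = 8, H = 8;
        const char MAP[H][W + 1] = {
            "........", "..####..", "..#..#..", "..#..#..",
            "..#..#..", "..####..", "........", "........"
        };

        bool walkable(int x, int y) {
            return x >= 0 && y >= 0 && x < W && y < H && MAP[y][x] == '.';
        }

        struct Unit { int x = 0, y = 0; };

        // One movement step: desired direction first (phase A), then neighbours of it
        // in alternating order (+1, -1, +2, -2, ...) as a crude wall slide (phase B).
        void stepTowards(Unit& u, int tx, int ty) {
            static const int dirs[8][2] = {{1,0},{1,1},{0,1},{-1,1},{-1,0},{-1,-1},{0,-1},{1,-1}};
            int dx = (tx > u.x) - (tx < u.x);              // clamp to -1, 0 or +1 per axis
            int dy = (ty > u.y) - (ty < u.y);
            if (dx == 0 && dy == 0) return;                // already there

            int want = 0;
            for (int i = 0; i < 8; ++i)
                if (dirs[i][0] == dx && dirs[i][1] == dy) { want = i; break; }

            for (int off = 0; off < 8; ++off) {
                int i = (want + (off % 2 ? (off + 1) / 2 : -(off / 2)) + 8) % 8;
                if (walkable(u.x + dirs[i][0], u.y + dirs[i][1])) {
                    u.x += dirs[i][0];
                    u.y += dirs[i][1];
                    return;                                // moved one tile, done this tick
                }
            }                                              // completely boxed in: stay put
        }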

  • RasterizerState set to null after calling DrawText in Nuclex

    - by ProgrammerAtWork
    I have the following code in XNA:

        // class members
        Text t1;
        Text t2;
        Text t3;

        // init (DebugFont is a size-24 vector font)
        t1 = MM.DebugFont24.Fill("hello");
        t1 = MM.DebugFont24.Extrude("hello");
        t2 = MM.DebugFont24.Fill("hello");
        t2 = MM.DebugFont24.Extrude("hello");
        t3 = MM.DebugFont24.Fill("hello");
        t3 = MM.DebugFont24.Extrude("hello");

        // Draw
        TextBatch test = new TextBatch(MM.GD);
        test.DrawText(t1, Color.Red);
        test.DrawText(t2, Color.Red);
        test.DrawText(t3, Color.Red);
        test.End();

    After the second call to the TextBatch, the RasterizerState of the GraphicsDevice is set to null, but I don't get any runtime errors or any indication that something is wrong. Is this supposed to happen, or am I doing something wrong? I've since discovered that this happened because culling was set to None while I was rendering textures.

  • Box2Dweb very slow on node.js

    - by Peteris
    I'm using Box2Dweb on node.js. I have a rotated box object that I apply an impulse to in order to move it around. The timestep is set at 50ms; however, it bumps up to 100ms and even 200ms as soon as I add any more edges or boxes. Here are the edges I would like to use as bounds around the playing area:

        // Computing the corners
        var upLeft   = new b2Vec2(0, 0),
            lowLeft  = new b2Vec2(0, height),
            lowRight = new b2Vec2(width, height),
            upRight  = new b2Vec2(width, 0)

        // Edges bounding the visible game area
        var edgeFixDef = new b2FixtureDef
        edgeFixDef.friction = 0.5
        edgeFixDef.restitution = 0.2
        edgeFixDef.shape = new b2PolygonShape

        var edgeBodyDef = new b2BodyDef;
        edgeBodyDef.type = b2Body.b2_staticBody

        edgeFixDef.shape.SetAsEdge(upLeft, lowLeft)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
        edgeFixDef.shape.SetAsEdge(lowLeft, lowRight)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
        edgeFixDef.shape.SetAsEdge(lowRight, upRight)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)
        edgeFixDef.shape.SetAsEdge(upRight, upLeft)
        world.CreateBody(edgeBodyDef).CreateFixture(edgeFixDef)

    Can Box2D really become this slow for even two bodies, or is there some pitfall? It would be very surprising given all the demos which successfully use tens of objects.
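
    Two static edge bodies should not slow Box2D down noticeably, so the numbers above suggest the step time is being inflated by wall-clock jitter or garbage-collection pauses rather than by the physics itself; that is a guess, not a diagnosis. The usual defensive pattern is a fixed-step accumulator, sketched below in C++ for neutrality (the same structure works in JavaScript with Date.now() or process.hrtime()); the world.Step call is commented out because it stands in for whatever physics API is in use.

        #include <chrono>

        // Fixed-step loop: the simulation always advances in identical 50 ms steps,
        // and a slow frame is absorbed by running extra steps (up to a cap) instead
        // of stretching the step size itself.
        void runLoop(/* b2World& world */)
        {
            using clock = std::chrono::steady_clock;
            const double dt = 0.05;                       // 50 ms, as in the question
            double accumulator = 0.0;
            auto previous = clock::now();

            for (;;) {
                auto now = clock::now();
                accumulator += std::chrono::duration<double>(now - previous).count();
                previous = now;

                int steps = 0;
                while (accumulator >= dt && steps < 5) {  // cap avoids a death spiral
                    // world.Step(dt, 8, 3);              // velocity/position iterations
                    accumulator -= dt;
                    ++steps;
                }
                // render / broadcast the state here, interpolating by accumulator/dt
            }
        }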

  • Storing Tiled Level Data in J2ME game

    - by Alex
    I'm developing a J2ME game which uses tiled backgrounds for the levels. My question is how I should store this tile information in my game. At the moment it is stored as an array, with each number representing a different tile from the tile sheet. This works well enough; however, I don't like the fact that it is hard-coded into the game, because (at least in my opinion) it makes it harder to edit the levels or design new ones. I was also thinking that it would be difficult if you wanted to add a 'level pack', although I'm not sure how that would be achieved; it's not something I was planning on doing, I'm just curious. I was wondering if there is a way I could store the level data in some external file and then load it into the game. The problem is I don't know what the limitations are for J2ME regarding file I/O: can it read in any file like Java can? I am aware of the RMS, but from my experience I don't think this would work (unless I am mistaken). Also, would loading the data in this way be too big a performance hit? Or is there another way I can achieve what I am trying to do? As I said, the way I have it at the moment works fine, and if this is the only viable option then it will suffice.
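
    A couple of platform facts that bear on this (stated from memory, so verify against the MIDP docs): resources packaged inside the JAR can be read with Class.getResourceAsStream on CLDC, so level data can ship as a plain data file rather than as code or RMS records, while access to arbitrary files on the device needs the optional JSR-75 FileConnection API. The sketch below is a language-neutral illustration (C++ rather than J2ME, and the format is invented) of parsing a whitespace-separated grid of tile indices into the same kind of array the game already uses; parsing a level this size is cheap compared with rendering it.

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Row-major tile indices plus the grid dimensions.
        struct TileMap {
            int width = 0, height = 0;
            std::vector<int> tiles;                 // tiles[y * width + x]
        };

        // Parse lines like "0 0 1 1" / "0 2 2 1" into a TileMap. Returns false on a
        // missing file or a ragged grid.
        bool loadTileMap(const std::string& path, TileMap& out)
        {
            std::ifstream in(path);
            if (!in) return false;

            std::string line;
            while (std::getline(in, line)) {
                std::istringstream row(line);
                int id, count = 0;
                while (row >> id) { out.tiles.push_back(id); ++count; }
                if (count == 0) continue;           // ignore blank lines
                if (out.width == 0) out.width = count;
                if (count != out.width) return false;
                ++out.height;
            }
            return out.width > 0;
        }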

  • rotate opengl mesh relative to camera

    - by shuall
    I have a cube in OpenGL. Its position is determined by multiplying its specific model matrix, the view matrix and the projection matrix, and then passing that to the shader, as per this tutorial (http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/). I want to rotate it relative to the camera. The only way I can think of to get the correct axis is by multiplying the inverse of the model matrix (because that's where all the previous rotations and transforms are stored) by the view matrix and then by the axis of rotation (x or y). I feel like there has to be a better way to do this, like using something other than model, view and projection matrices, or maybe I'm doing something wrong; that's what all the tutorials I've seen use. PS: I'm also trying to stick to OpenGL 4 core functionality. Edit: if quaternions would fix my problems, could someone point me to a good tutorial/example for switching from 4x4 matrices to quaternions? I'm a little daunted by the task.
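
    One common way to express "rotate relative to the camera" without touching the inverse of the model matrix is to pull the camera's world-space axes out of the view matrix and rotate the model about those axes, pivoting on the object's own position. A sketch with GLM (column-vector convention; the object position is assumed to be stored separately from the matrix), which works the same whether the rotation is later converted to quaternions or not:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Rotate 'model' by 'angle' radians about the camera's right axis, pivoting
        // around the object's own position so it appears to spin in place on screen.
        glm::mat4 rotateRelativeToCamera(const glm::mat4& model,
                                         const glm::mat4& view,
                                         const glm::vec3& objectPos,
                                         float angle)
        {
            // For a rigid view transform, the rows of its 3x3 part are the camera's
            // right/up/forward axes expressed in world space (GLM is column-major,
            // so view[col][row]).
            glm::vec3 camRight(view[0][0], view[1][0], view[2][0]);

            glm::mat4 toOrigin = glm::translate(glm::mat4(1.0f), -objectPos);
            glm::mat4 back     = glm::translate(glm::mat4(1.0f),  objectPos);
            glm::mat4 rot      = glm::rotate(glm::mat4(1.0f), angle, camRight);

            // World-space rotation applied on top of the existing model transform.
            return back * rot * toOrigin * model;
        }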

  • Open Source Analysis

    - by BluFire
    There is a lot of code in open-source projects, and looking at all of it is time-consuming and can be confusing to a novice like me. Are there any sections of open-source projects that should be focused on? What should I focus on when I look at the code? I'm asking this in general because if I asked it specifically, the question would only apply to one or two projects rather than an entire group of projects ranging across different types of games and levels of difficulty.

  • System hangs at glReadPixels call with GL_TEXTURE_2D_ARRAY for texturing

    - by Roshan
    I am calling glReadPixels after a glDrawArrays call. I am rendering a geometry with a 3D texture on it, using GL_TEXTURE_2D_ARRAY as the target. My system hangs at the glReadPixels call. When I use GL_TEXTURE_3D as the target, the issue does not occur and the framebuffer contents are read back correctly.

        glReadPixels(0, 0, GetViewportWidth(), GetViewportHeight(), GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)rendered_pixels);

    I am using SNORM textures with GL_BYTE data in the glTexImage3D call, and I am not calling glPixelStorei; could that be the cause? What should the parameter for the pixel-store call be?
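
    On the narrow glPixelStorei question only: the pack parameter that affects glReadPixels is GL_PACK_ALIGNMENT (default 4). It only matters when a row of the destination buffer is not a multiple of 4 bytes (for example GL_RGB reads of odd widths); for tightly packed GL_RGBA / GL_UNSIGNED_BYTE reads it makes no difference, so it is unlikely to explain a hang by itself. For reference, a typical read looks like this (plain desktop GL; loader and context setup assumed):

        #include <vector>
        #include <glad/glad.h>   // or whatever GL loader the project already uses

        // Read the currently bound framebuffer into a tightly packed RGBA8 buffer.
        std::vector<unsigned char> readFramebuffer(int width, int height)
        {
            std::vector<unsigned char> pixels(static_cast<std::size_t>(width) * height * 4);

            glPixelStorei(GL_PACK_ALIGNMENT, 1);   // no row padding in the destination
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

            return pixels;
        }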

  • Premultiplying matrices with Perspective destroys them

    - by Shadows In Rain
    If I apply world_to_camera, perspective and camera_to_screen to my mesh one after another, everything is okay. But if I premultiply the given matrices (i.e. transform = world_to_camera * perspective * camera_to_screen) before applying them, then it seems like only the perspective has any effect. If it is important: my 3D framework was written from scratch (as a test project for a job interview), but it works flawlessly, or at least I think so. So, the question: is this expected behaviour, or is my implementation wrong?
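
    Two things commonly break here, and both are guesses since the framework's conventions aren't shown. First, the combined matrix has to be built in the order that matches the vector convention: with column vectors the product reads right to left, with row vectors left to right. Second, if the pipeline performs the perspective divide between the perspective and camera_to_screen stages, the three matrices cannot be folded into one at all, because the screen-mapping translation would then also get divided by w. A sketch of the ordering point only (GLM used just for the matrix type):

        #include <glm/glm.hpp>

        // Column-vector convention (v' = M * v): the matrix applied first sits on the
        // right, so v_screen = camera_to_screen * perspective * world_to_camera * v_world.
        glm::mat4 combinedColumnVector(const glm::mat4& world_to_camera,
                                       const glm::mat4& perspective,
                                       const glm::mat4& camera_to_screen)
        {
            return camera_to_screen * perspective * world_to_camera;
        }

        // Row-vector convention (v' = v * M): the same chain is written in the
        // opposite order, which matches the product quoted in the question.
        glm::mat4 combinedRowVector(const glm::mat4& world_to_camera,
                                    const glm::mat4& perspective,
                                    const glm::mat4& camera_to_screen)
        {
            return world_to_camera * perspective * camera_to_screen;
        }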

  • In a 2D tile-based game, how should NPCs and tiles reference each other?

    - by lezebulon
    I'm making a tile engine for 2D games (seen from the top). Basically the world is composed of a grid of tiles. Now I want to add, for instance, NPCs that can move around the map. What do you think is best: 1) each tile has a pointer to the NPC that is on it, or a NULL pointer; 2) keeping a list of NPCs, each storing the coordinates of the tile it is on; 3) something else? Option 1) is faster for collision detection, but it uses much more memory and it is slower to find all the NPCs on a map; 2) is the opposite. Thanks.
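
    A common middle ground between 1) and 2) is to keep the NPC list as the authoritative container and maintain a sparse tile-to-occupants index next to it, updated whenever an NPC changes tile: collision checks stay O(1) per tile, iterating all NPCs stays cheap, and empty tiles cost nothing. A small C++ sketch (names invented):

        #include <algorithm>
        #include <unordered_map>
        #include <vector>

        // Sparse tile -> occupant index kept next to the authoritative NPC list.
        // NPC objects themselves live in whatever container the game already has.
        class TileOccupancy {
        public:
            void add(int npcId, int x, int y)    { grid[key(x, y)].push_back(npcId); }
            void remove(int npcId, int x, int y) {
                auto& cell = grid[key(x, y)];
                cell.erase(std::remove(cell.begin(), cell.end(), npcId), cell.end());
            }
            void move(int npcId, int fromX, int fromY, int toX, int toY) {
                remove(npcId, fromX, fromY);
                add(npcId, toX, toY);
            }
            // Who is standing on this tile? Empty if nobody (most tiles never appear here).
            const std::vector<int>& at(int x, int y) const {
                static const std::vector<int> none;
                auto it = grid.find(key(x, y));
                return it == grid.end() ? none : it->second;
            }
        private:
            static long long key(int x, int y) {
                return (static_cast<long long>(x) << 32) ^ static_cast<unsigned int>(y);
            }
            std::unordered_map<long long, std::vector<int>> grid;
        };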

  • How to design a game engine in an object-oriented language?

    - by chuzzum
    Whenever I try to write a game in any object-oriented language, the first problem I always face (after thinking about what kind of game to write) is how to design the engine. Even if I'm using existing libraries or frameworks like SDL, I still find myself having to make certain decisions for every game, like whether to use a state machine to manage menus, what kind of class to use for resource loading, etc. What is a good design, and how would it be implemented? What are some trade-offs that have to be made, and what are their pros and cons?
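
    The menu-management decision mentioned above is usually answered with a stack of game states owned by a thin main loop, so a pause screen can sit on top of gameplay and pop off again. A minimal C++ sketch of just that one decision (interface names are invented, and this is one option among many, not "the" engine design):

        #include <memory>
        #include <vector>

        // Each screen/state implements the same three hooks; the engine only talks to
        // the top of the stack, so a pause menu can sit on top of gameplay.
        struct GameState {
            virtual ~GameState() = default;
            virtual void handleInput() = 0;
            virtual void update(float dt) = 0;
            virtual void render() = 0;
        };

        class StateStack {
        public:
            void push(std::unique_ptr<GameState> s) { states.push_back(std::move(s)); }
            void pop()                              { if (!states.empty()) states.pop_back(); }

            void frame(float dt) {
                if (states.empty()) return;
                states.back()->handleInput();
                states.back()->update(dt);
                states.back()->render();   // or render lower states too for overlays
            }
        private:
            std::vector<std::unique_ptr<GameState>> states;
        };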

  • How do I create weapon attachments?

    - by Tron86
    I am developing a game for XNA and I am trying to create a weapon attachment for my player model. My player model is loaded from the .md3 format, which has tags for attachment points. I am able to get the tag for my model's hand, and I am also able to get the tag for my weapon's handle. For each tag I can get the rotation and position, and this is how I am calculating the player's world matrix:

        Model.worldMatrix = Matrix.CreateScale(Model.scale) *
                            Matrix.CreateRotationX(-MathHelper.PiOver2) *
                            Matrix.CreateRotationY(MathHelper.PiOver2);

    Pretty simple: the player model has a scale and its orientation (it loads in on its side, so I just use a 90-degree X-axis rotation, plus a Y-axis rotation to face away from the camera). I then calculate the torso tag on the lower body, which gives me a local coordinate at the waist. Then I take that matrix and calculate tag_weapon in the upper body. This gives me the hand position in local space. I also get the rotation matrix from that tag, which I store for later use. All this seems to work fine. Now I move on to my weapon:

        Matrix weaponWorld = Matrix.CreateScale(CurrentWeapon.scale) *
                             Matrix.CreateRotationX(-MathHelper.PiOver2) *
                             TagRotationMatrix *
                             Matrix.CreateTranslation(HandTag.Position) *
                             Matrix.CreateRotationY(PlayerRotation) *
                             Matrix.CreateTranslation(CollisionBody.Position) *

    You may notice the weapon matrix gets rotated by 90 degrees on the X axis as well; this is because the weapons load in on their sides too. Once again this seems pretty simple and follows the SRT order I keep reading about. My TagRotationMatrix is the hand's rotation, HandTag.Position is its position in local space, CreateRotationY(PlayerRotation) is the player's rotation in world space, and CollisionBody.Position is the player's world location. Everything seems to be in order, and it almost works in game. However, when the gun spawns and follows the player's hand, it seems to be flipped on an axis every couple of frames, almost like the X or Y axis is being inverted and then put right back. It's hard to explain and I am totally stumped. Even removing all my X-axis fixes does nothing to solve the problem. Hopefully I explained everything well enough, as I am a bit new to this! Thanks!
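
    This does not diagnose the flipping (with md3 tags that often comes from the tag data itself, e.g. interpolating the per-frame tag rotation matrices component-wise, which can momentarily produce a mirrored matrix), but it may help to build the chain as "weapon offset, then tag local transform, then the player's full world matrix", so the player's rotation and translation cannot end up applied in a different order on the weapon than on the body. A generic sketch with GLM; note GLM uses column vectors, so the chain below is written in the reverse of XNA's row-vector order:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // childWorld = parentWorld * tagLocal * childOffset   (column-vector order).
        // In XNA (row vectors) the same chain reads childOffset * tagLocal * parentWorld.
        glm::mat4 attachToTag(const glm::mat4& parentWorld,   // player: scale * orientation * position
                              const glm::mat4& tagRotation,   // hand tag rotation, local to the player
                              const glm::vec3& tagPosition,   // hand tag position, local to the player
                              const glm::mat4& childOffset)   // weapon's own fix-up (scale, axis flip)
        {
            glm::mat4 tagLocal = glm::translate(glm::mat4(1.0f), tagPosition) * tagRotation;
            return parentWorld * tagLocal * childOffset;
        }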

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using this to update a texture each frame. Everything is in place except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared towards video processing), and I am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries; by virtue of using the same resources as VLC Player it is known to be very capable, and it also renders to blocks of memory, but the API (at least of the one on CodePlex) is more a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely:

    - renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily;
    - super simple: all that is needed is a way to load, jump and render a frame programmatically; ideally it would use the system's codecs and not require an assortment of plugins;
    - permissive licence (LGPL or freer);
    - .NET bindings at least; all the better if it is natively managed.

    Can anyone suggest a lightweight .NET library that can take a video file and spit out some frames into a byte[]?

  • Can I name a team with the name of their city to avoid trademark issues?

    - by Paul
    I was wondering: if you want to make an NBA game for smartphones without the license held by EA, the first solution seems to be to give your teams different names, such as "Chicragro Brulls" (this is just an example), but would it be possible to simply call a team by the name of its city, as in "Chicago vs. Dallas"? I know the first solution was the one chosen by Pro Evolution Soccer; do you know of any other games that don't use a license?

  • Getting the front buffer into a gfx mem surface (Dx9)

    - by lapin
    I'm using DirectX 9 to acquire the front buffer. There are a couple of ways I know of to get at it: GetRenderTargetData() and GetFrontBufferData(). The MSDN pages on both of these API calls state that the data is copied from device memory to system memory. I'd like to copy the front buffer surface directly to another graphics-memory surface, as I have other manipulations to perform on the acquired surface before returning it to system memory. I'm creating a D3DUSAGE_DYNAMIC texture (a graphics-memory texture) and calling GetFrontBufferData() to write the front buffer to my texture's surface 0. Is this valid? Will the operation remain in graphics memory, or will it need to move to system memory and then back to graphics memory? If that is the case, is what I'm trying to achieve possible?
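
    For reference (paraphrasing the D3D9 docs from memory, so double-check them): GetFrontBufferData requires its destination to be an offscreen plain surface created in D3DPOOL_SYSTEMMEM with format D3DFMT_A8R8G8B8, and GetRenderTargetData likewise wants a system-memory destination, so neither call will leave the data in video memory; a GPU-side path would instead go through StretchRect on the back buffer before it is presented. A minimal sketch of the plain front-buffer grab (error handling reduced to early returns):

        #include <d3d9.h>

        // Grab the front buffer into a system-memory surface (the only destination
        // GetFrontBufferData accepts); further GPU-side processing would then need
        // a separate upload, e.g. UpdateSurface back into a D3DPOOL_DEFAULT surface.
        IDirect3DSurface9* captureFrontBuffer(IDirect3DDevice9* device, UINT width, UINT height)
        {
            IDirect3DSurface9* sysmem = nullptr;
            if (FAILED(device->CreateOffscreenPlainSurface(width, height,
                                                           D3DFMT_A8R8G8B8,
                                                           D3DPOOL_SYSTEMMEM,
                                                           &sysmem, nullptr)))
                return nullptr;

            if (FAILED(device->GetFrontBufferData(0, sysmem))) {   // 0 = first swap chain
                sysmem->Release();
                return nullptr;
            }
            return sysmem;   // caller Release()s when done
        }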

  • XNA Windows Phone 7 Sprite movement

    - by Darren Gaughan
    I'm working on a Windows Phone game and I'm having difficulty with the sprite movement. What I want to do is make the sprite gradually move to the position that was touched on screen, when there is only one quick touch and release. At the minute all I can do is either make the sprite jump instantly to the touch location, or move it along to the touch location only while the touch is held down. Code for jumping to the touch location:

        TouchCollection touchCollection = TouchPanel.GetState();
        foreach (TouchLocation tl in touchCollection)
        {
            if ((tl.State == TouchLocationState.Pressed) || (tl.State == TouchLocationState.Moved))
            {
                Vector2 newPos = new Vector2(tl.Position.X, tl.Position.Y);
                if (position != newPos)
                {
                    while (position.X < newPos.X)
                    {
                        position.X += (float)theGameTime.ElapsedGameTime.Milliseconds / 10.0f * spriteDirectionRight;
                    }
                }
            }
        }

    Code to gradually move along while the touch is held:

        TouchCollection touchCollection = TouchPanel.GetState();
        foreach (TouchLocation tl in touchCollection)
        {
            if ((tl.State == TouchLocationState.Pressed) || (tl.State == TouchLocationState.Moved))
            {
                Vector2 newPos = new Vector2(tl.Position.X, tl.Position.Y);
                if (position != newPos)
                {
                    position.X += (float)theGameTime.ElapsedGameTime.Milliseconds / 10.0f * spriteDirectionRight;
                }
            }
        }

    These are in the Update() method of the Sprite class.
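
    The usual structure for "tap once, then glide there" is to store the touch position as a target when the release is seen, and then, on every Update, advance toward that stored target by speed multiplied by elapsed time until it is reached, rather than moving inside the touch handler itself. A language-neutral sketch of the per-frame step (C++ rather than XNA; the clamp at the end stops the sprite overshooting the target):

        #include <cmath>

        struct Vec2 { float x = 0.0f, y = 0.0f; };

        // Advance 'position' toward 'target' by at most maxStep = speed * dt.
        // Call once per frame; when the remaining distance is smaller than the
        // step, snap to the target so the sprite stops exactly there.
        Vec2 moveTowards(Vec2 position, Vec2 target, float speed, float dt)
        {
            float dx = target.x - position.x;
            float dy = target.y - position.y;
            float dist = std::sqrt(dx * dx + dy * dy);
            float maxStep = speed * dt;

            if (dist <= maxStep || dist == 0.0f) return target;
            position.x += dx / dist * maxStep;
            position.y += dy / dist * maxStep;
            return position;
        }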

  • Deforming surfaces

    - by Constantin
    I'm trying to implement a deforming physics behaviour for level surfaces, but so far I have no idea how to start with the implementation. Regardless of the shape of the surface (planes, cubes, spheres…), I want to have small indentations at the positions of game entities (players, enemies, objects…). It's kind of complicated to explain, so I illustrated what I'm talking about (here is an example with a sphere). So, the surfaces should be able to deform themselves a little bit (to appear like a really soft bed or sofa). My surfaces probably need a high vertex count to get a smooth deformation, but my big problem is the math for calculating this deformation… I'm programming in C/C++ with OpenGL, but I will be fine with any advice in the right direction. Any help would be highly appreciated.
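
    A common starting point is a purely visual deformation: for every vertex near a contact point, push it in against its normal by an amount that falls off smoothly with distance, and relax it back when the entity leaves. A C++ sketch of just the per-vertex displacement (the radius and depth are tuning parameters, and with the high vertex counts mentioned above this is typically moved into a vertex shader, with the contact points passed as uniforms):

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Vec3 { float x, y, z; };

        static float smoothFalloff(float dist, float radius)
        {
            if (dist >= radius) return 0.0f;
            float t = 1.0f - dist / radius;       // 1 at the centre, 0 at the edge
            return t * t * (3.0f - 2.0f * t);     // smoothstep-shaped bump
        }

        // Push vertices inward (against their normals) around 'contact'.
        void dent(std::vector<Vec3>& positions, const std::vector<Vec3>& normals,
                  const Vec3& contact, float radius, float depth)
        {
            for (std::size_t i = 0; i < positions.size(); ++i) {
                float dx = positions[i].x - contact.x;
                float dy = positions[i].y - contact.y;
                float dz = positions[i].z - contact.z;
                float d  = std::sqrt(dx * dx + dy * dy + dz * dz);
                float k  = depth * smoothFalloff(d, radius);
                positions[i].x -= normals[i].x * k;
                positions[i].y -= normals[i].y * k;
                positions[i].z -= normals[i].z * k;
            }
        }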

  • Adapting Javascript game for mobile

    - by Cardin
    I'm currently developing a JavaScript web game for desktop users. It is a sort of tower-defence game that relies on mouse input only, developed on canvas using EaselJS. In the future, or perhaps simultaneously, I would like to adapt the game for mobile devices. I can see at least three potential areas in the shift from desktop to mobile: 1. resolution and UI rearrangement, 2. converting mouse events to touch events, 3. distribution as a native app wrapper or on the mobile web. What would be the best strategy to facilitate this desktop-to-mobile conversion? For example, should I try to code the game for both platforms, or port the game UI over to mobile by branching the code base? Should I just publish on the mobile web, or wrap the game in a native app framework? And if I were to code for both platforms using the same codebase, should I register both click and touch events, or remap click events to touch using dispatchEvent?

  • jump pads problem

    - by Pasquale Sada
    I'm trying to make a character jump onto a landing pad that is above him. Here is the formula I've used (everything is pretty much self-explanatory, except perhaps character_MaxForce, which is the total force the character can jump with):

        deltaPosition = target - character_position;
        sqrtTerm = Sqrt(2 * -gravity.y * deltaPosition.y + MaxYVelocity * character_MaxForce);
        time = (MaxYVelocity - sqrtTerm) / gravity.y;
        speedSq = jumpVelocity.x * jumpVelocity.x + jumpVelocity.z * jumpVelocity.z;

        // if speedSq < (character_MaxForce * character_MaxForce) we have the right time,
        // so we can store the value
        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;

        // otherwise we try the other solution
        time = (MaxYVelocity + sqrtTerm) / gravity.y;
        // and then store it
        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;

        jumpVelocity.y = MaxYVelocity;
        rigidbody_velocity = jumpVelocity;

    The problem is that the character jumps away from the landing pad, or sometimes he jumps too far and never hits it.
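
    For comparison, the standard closed form for this kind of jump: with upward launch speed v_y and gravity g (negative), the height equation dy = v_y*t + 0.5*g*t^2 gives t = (-v_y +/- sqrt(v_y^2 + 2*g*dy)) / g, and the horizontal velocity is then simply dx / t and dz / t. Note that the term under the square root differs from the sqrtTerm above, which may be part of the problem. A C++ sketch that tries the descending solution first (names and axis conventions are assumptions):

        #include <cmath>
        #include <optional>

        struct Vec3 { float x, y, z; };

        // Launch velocity needed to reach 'target' from 'start', given a fixed upward
        // launch speed maxYVelocity and gravity g (negative, e.g. -9.81). Returns
        // nothing if the pad is too high to reach with that vertical speed.
        std::optional<Vec3> jumpVelocity(Vec3 start, Vec3 target,
                                         float maxYVelocity, float g,
                                         float maxHorizontalSpeed)
        {
            float dx = target.x - start.x;
            float dy = target.y - start.y;
            float dz = target.z - start.z;

            float disc = maxYVelocity * maxYVelocity + 2.0f * g * dy;
            if (disc < 0.0f) return std::nullopt;              // can't reach that height

            // Two times at which the jump crosses the pad's height: on the way up and
            // on the way down. Prefer the later one (landing from above).
            float tUp   = (-maxYVelocity + std::sqrt(disc)) / g;
            float tDown = (-maxYVelocity - std::sqrt(disc)) / g;

            const float candidates[2] = { tDown, tUp };
            for (float t : candidates) {
                if (t <= 0.0f) continue;
                Vec3 v{ dx / t, maxYVelocity, dz / t };
                if (v.x * v.x + v.z * v.z <= maxHorizontalSpeed * maxHorizontalSpeed)
                    return v;
            }
            return std::nullopt;                               // horizontally out of range
        }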

  • Information about rendering, batches, the graphical card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague; it's hard to describe what I'm really looking for, but here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background knowledge, I'm clueless. I'm using XNA, so it'd be nice if the theory could be related to that. So what I actually want to know is: what happens when, and where (CPU/GPU), when you perform specific draw actions? What is a batch? What influence do effects, projections etc. have? Is data persisted on the graphics card, or is it transferred over at every step? When there's talk about bandwidth, does that mean the graphics card's internal bandwidth, or the pipeline from the CPU to the GPU? Note: I'm not actually looking for information on how the drawing process itself happens; that's the GPU's business. I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, so I can adapt my architecture and practices to that. Any articles (possibly with code examples), information, links or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)

  • What is a good way to test demand for a new game platform?

    - by user15256
    I'm working on a game platform that turns your iPhone, Android device or iPad into a steering wheel for racing games (like Need for Speed and Dirt 3) and flight simulators, for example. I'd love to find smart ways of figuring out whether gamers would like something like this. I originally asked this question over on the gaming SE, and it was about getflypad.com. A lot of the tech is built and most of it is doable; the question here is how to test demand and find out whether gamers actually want this.

  • Adding a small slide when player releases left/right key

    - by Dave
    The aim is for the player object to slow down and then stop, instead of just stopping dead. The following code works OK when the player is not jumping, but the player gets stuck in an object if they are in the air when the key is released. Left-key released event:

        if hsp = 0 exit;
        hspeed = -3;
        friction = 0.20;
        if obj_Player.hspeed = 0
        {
            hspeed = 0;
        }

    Right-key released event:

        if hsp = 0 exit;
        hspeed = +3;
        friction = 0.20;
        if obj_Player.hspeed = 0
        {
            hspeed = 0;
        }

    And here's the horizontal collision code, for interest:

        if (place_meeting(x+hsp, y, obj_bound))
        {
            while (!place_meeting(x+sign(hsp), y, obj_bound))
            {
                x += sign(hsp);
            }
            hsp = 0;
        }
        x += hsp;

    Any help would be much appreciated. Thanks.
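
    One way to sidestep the interaction between the built-in friction and the collision code is to do the slide manually in the step event: when no key is held, shrink hspeed toward zero by a fixed deceleration, and only while the player is on the ground, so an airborne release keeps its momentum instead of being pushed into walls. A language-neutral sketch of that per-step rule (C++ rather than GML; onGround is assumed to come from the existing collision check):

        #include <algorithm>
        #include <cmath>

        // Per-step horizontal speed update. 'inputDir' is -1, 0 or +1 from the keys.
        float updateHSpeed(float hspeed, int inputDir, bool onGround,
                           float accel, float decel, float maxSpeed)
        {
            if (inputDir != 0) {
                hspeed += inputDir * accel;
                hspeed = std::clamp(hspeed, -maxSpeed, maxSpeed);
            } else if (onGround) {
                // Slide: shrink toward zero, never past it, and only when grounded so
                // releasing a key mid-air keeps the current momentum.
                float drop = std::min(std::fabs(hspeed), decel);
                hspeed -= std::copysign(drop, hspeed);
            }
            return hspeed;
        }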

  • Direct3D - Zooming into Mouse Position

    - by roohan
    I'm trying to implement my camera class for a simulation, but I can't figure out how to zoom into my world based on the mouse position; I mean that the object under the mouse cursor should remain at the same screen position. My zooming looks like this:

        VOID ZoomIn(D3DXMATRIX& WorldMatrix, FLOAT const& MouseX, FLOAT const& MouseY)
        {
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);
        }

    I passed the world matrix to the function because I had the idea of moving my drawing origin according to the mouse position, but I can't work out how to calculate the offset to move the drawing origin by. Has anyone got an idea how to calculate this? Thanks in advance.

    SOLVED: OK, I solved my problem. Here is the code, if anyone is interested:

        VOID CAMERA2D::ZoomIn(FLOAT const& MouseX, FLOAT const& MouseY)
        {
            // Get the settings of the current viewport.
            D3DVIEWPORT9 ViewPort;
            this->Direct3DDevice->GetViewport(&ViewPort);

            // Convert the screen coordinates of the mouse to world-space coordinates.
            D3DXVECTOR3 VectorOne;
            D3DXVECTOR3 VectorTwo;
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ = 0.0f;
            float WorldX = ((WorldZ - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY = ((WorldZ - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Move the camera into the screen.
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);

            // Calculate the world-space vector again, based on the new view matrix.
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                              &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ2 = 0.0f;
            float WorldX2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Create a temporary translation matrix for calculating the origin offset.
            D3DXMATRIX TranslationMatrix;
            D3DXMatrixIdentity(&TranslationMatrix);

            // Calculate the origin offset.
            D3DXMatrixTranslation(&TranslationMatrix, WorldX2 - WorldX, WorldY2 - WorldY, 0.0f);

            // Add the offset to the camera's world matrix.
            this->WorldMatrix = this->WorldMatrix * TranslationMatrix;
        }

    Maybe someone has an even better solution than mine.

  • iOS + cocos2d: how to account for sprite's position for the different device dimensions in an universal app?

    - by fuzzlog
    All the questions I've seen regarding iOS universal apps (with or without cocos2d) deal with how to add graphics to a universal app. My question is: how does the code need to be written to ensure that the sprites appear at appropriate positions on screen, given that an iPhone 5's resolution is not proportional to an iPad's? Is it just a bunch of "if" statements and duplicate code, or do iOS/cocos2d provide common function calls that will place the sprites at an appropriate position?
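
    The usual approach is to avoid hard-coding pixel positions altogether: query the visible size from the director (cocos2d exposes it as winSize) and express layout as fractions of it, or as offsets from edges and centres, converting only at placement time. A trivial sketch of the helper (plain C++ rather than cocos2d, with invented struct names):

        struct Size  { float width, height; };
        struct Point { float x, y; };

        // Place things by fractions of the visible area instead of absolute pixels,
        // so the same call works across iPhone and iPad aspect ratios.
        Point relativePosition(Size winSize, float fracX, float fracY)
        {
            return Point{ winSize.width * fracX, winSize.height * fracY };
        }

        // Example: centred horizontally, 10% up from the bottom of whatever screen we got.
        // Point p = relativePosition(winSize, 0.5f, 0.10f);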
