Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.


  • Need efficient way to keep enemy from getting hit multiple times by same source

    - by TenFour04
    My game's a simple 2D one, but this probably applies to many types of scenarios. Suppose my player has a sword, or a gun that shoots a projectile that can pass through and hit multiple enemies. While the sword is swinging, there is a duration where I am checking for the sword making contact with any enemy on every frame. But once an enemy is hit by that sword, I don't want him to continue getting hit over and over as the sword follows through. (I do want the sword to continue checking whether it is hitting other enemies.)

    I've thought of a couple of different approaches (below), but they don't seem like good ones to me. I'm looking for a way that doesn't force cross-referencing (I don't want the enemy to have to send a message back to the sword/projectile), and I'd like to avoid generating/resetting multiple array lists with every attack.

    1. Each time the sword swings it generates a unique ID (maybe by just incrementing a global static long). Every enemy keeps a list of IDs of swipes or projectiles that have already hit them, so the enemy knows not to get hurt by something multiple times. Downside: every enemy may have a big list to compare against, so projectiles and sword swipes would have to broadcast their end-of-life to all enemies and cause a search-and-remove on every enemy's array list. Seems kind of slow.

    2. Each sword swipe or projectile keeps its own list of enemies that it has already hit, so it knows not to apply damage. Downsides: I have to generate a new list (probably pull one from a pool and clear it) every time a sword is swung or a projectile is shot. Also, this breaks modularity, because now the sword has to send a message to the enemy, and the enemy has to send a message back to the sword. Seems to me that two-way streets like this are a great way to create very difficult-to-find bugs.
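
    For what it's worth, the first idea can be kept one-directional without unbounded lists: give each enemy a small fixed-size ring buffer of recently seen attack IDs, sized to the maximum number of attacks that can plausibly overlap one enemy. Old entries are simply overwritten, so attacks never have to broadcast their end-of-life. A hedged sketch (all names hypothetical):

        import java.util.Arrays;

        class Enemy {
            private static final int REMEMBERED_HITS = 8;  // tune to max overlapping attacks
            private final long[] recentHits = new long[REMEMBERED_HITS];
            private int next = 0;

            Enemy() { Arrays.fill(recentHits, -1L); }

            /** Returns true and records the hit if this attack ID is new to us. */
            boolean tryHit(long attackId) {
                for (long id : recentHits)
                    if (id == attackId) return false;  // already hit by this swing
                recentHits[next] = attackId;           // overwrite the oldest entry
                next = (next + 1) % REMEMBERED_HITS;
                return true;
            }
        }

        class Sword {
            private static long nextAttackId = 0;      // global swing counter
            private long currentAttackId;

            void startSwing() { currentAttackId = nextAttackId++; }

            void onContact(Enemy e) {
                if (e.tryHit(currentAttackId)) {
                    // apply damage once per swing; later contacts are ignored
                }
            }
        }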

    Read the article

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how to best implement this in a component-based game architecture. Say I have a player character that can be stationary, running, and swinging a sword. The player can enter the sword-swing state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches:

    1. Create a single AI component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI components.

    2. Break the behavior up into components, each identifying a specific state. I then get a StandComponent, WalkComponent and SwingComponent. To enforce the transition rules, I have to couple the components. SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, swinging a sword occasionally, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare: each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character.

    The ideal situation would be that a designer can build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible. Summing it all up: should I lob all AI logic into one component, or break each logical state into separate components to create entity variants more easily?
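
    A middle ground worth sketching (hypothetical names throughout): keep one generic FSM component per entity and make the states pluggable data, so designers compose enemies by choosing which states go into the machine, while the transition rule ("a non-interruptible state blocks changes") lives in exactly one place:

        import java.util.HashMap;
        import java.util.Map;

        interface State {
            /** Returns the name of the state to switch to, or null to stay. */
            String update(float dt);
            boolean canInterrupt();  // e.g. a sword swing returns false
        }

        class StateMachineComponent {
            private final Map<String, State> states = new HashMap<>();
            private State current;

            void addState(String name, State s) { states.put(name, s); }

            void requestState(String name) {
                State next = states.get(name);
                if (next != null && (current == null || current.canInterrupt()))
                    current = next;  // a swing in progress blocks the change
            }

            void update(float dt) {
                if (current == null) return;
                String next = current.update(dt);  // finished states name a follow-up
                if (next != null) requestState(next);
            }
        }

    An enemy that only stands and occasionally swings simply gets a machine with those two states registered; no component has to know which other components exist.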

    Read the article

  • Failing Screen Resize Method

    - by StrongJoshua
    So I want my game to draw to a specific "optimal" size and then be stretched to fit screens that are a different size. I'm using LibGDX and figured that I could just draw everything to a FrameBuffer and then resize that buffer to the appropriate size when drawing it to the actual display. However, my method does not work; it just results in a black screen with the top-right quarter of the screen white. intermediary is the FBO, interMatrix is a Matrix4, and camera is an OrthographicCamera.

        @Override
        public void render() {
            // update actors
            currentStage.act();

            // render to intermediary buffer
            batch.setProjectionMatrix(interMatrix);
            intermediary.begin();
            batch.begin();
            currentStage.draw();
            batch.flush();
            intermediary.end();

            // resize to actual width and height
            Sprite s = new Sprite(intermediary.getColorBufferTexture());
            s.flip(true, false);
            batch.setProjectionMatrix(camera.combined);
            batch.draw(s.getTexture(), 0, 0, width, height);
            batch.end();
        }

    These are the constructors for the objects mentioned above (GAME_WIDTH and GAME_HEIGHT are the "optimal" settings; width and height are the actual sizes, which are the same when running on desktop):

        intermediary = new FrameBuffer(Format.RGBA8888, GAME_WIDTH, GAME_HEIGHT, false);
        interMatrix = new Matrix4();
        camera = new OrthographicCamera(width, height);
        interMatrix.setToOrtho2D(0, 0, GAME_WIDTH, GAME_HEIGHT);

    Is there a better way of doing this? Or is this a viable option, and if so, how do I fix what I have?
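
    For comparison, here is a minimal sketch of the pattern that usually works, assuming the fields from the question (intermediary, batch, camera, width, height) and that camera.setToOrtho(false, width, height) was called so the origin is the bottom-left corner. The two differences: everything drawn into the FBO happens strictly between begin() and end(), and the colour texture is drawn with flipY = true, since a framebuffer texture is stored upside down relative to screen space:

        @Override
        public void render() {
            currentStage.act();

            intermediary.begin();                // draw at GAME_WIDTH x GAME_HEIGHT
            Gdx.gl.glClearColor(0, 0, 0, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            currentStage.draw();                 // the stage manages its own batch
            intermediary.end();

            Texture tex = intermediary.getColorBufferTexture();
            batch.setProjectionMatrix(camera.combined);
            batch.begin();
            // draw(texture, x, y, w, h, srcX, srcY, srcW, srcH, flipX, flipY)
            batch.draw(tex, 0, 0, width, height,
                    0, 0, tex.getWidth(), tex.getHeight(), false, true);
            batch.end();
        }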

    Read the article

  • Wall jumping collision detection anomaly

    - by Nanor
    I'm creating a game where the player ascends a tower by wall jumping his way to the top. When the player has collided with the right wall they can only jump left, and vice versa. Here is my current implementation:

        if (wallCollision() == "left") {
            player.setPosX(0);
            player.setVelX(0);
            ignoreCollisions = true;
            player.setCanJump(true);
            player.setFacingLeft(false);
        } else if (wallCollision() == "right") {
            player.setPosX(screenWidth - playerWidth * 2);
            player.setVelX(0);
            ignoreCollisions = true;
            player.setCanJump(true);
            player.setFacingLeft(true);
        } else {
            player.setVelY(player.getVelY() + gravity);
        }

    and

        private String wallCollision() {
            if (player.getPosX() < playerWidth && !ignoreCollisions)
                return "left";
            else if (player.getPosX() > screenWidth - playerWidth * 2 && !ignoreCollisions)
                return "right";
            else {
                timeToJump += Gdx.graphics.getDeltaTime();
                if (timeToJump > 0.50f) {
                    timeToJump = 0;
                    ignoreCollisions = false;
                }
                return "jumping";
            }
        }

    If the player is colliding with the left wall, it will switch between the states "left" and "jumping" repeatedly, because the variable ignoreCollisions is toggled back and forth by the collision checks. This gives a chance to either jump as intended or simply ascend vertically instead of diagonally. I can't figure out an implementation that will reliably make the player jump as intended. Does anyone have any pointers?
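
    One way to remove the timer entirely, sketched against the question's player API: latch the wall contact in an explicit state that only a jump clears, so the collision check cannot flip-flop between frames. The enum and method names are hypothetical:

        enum WallState { ON_LEFT_WALL, ON_RIGHT_WALL, AIRBORNE }

        WallState wallState = WallState.AIRBORNE;

        void updateWalls() {
            if (wallState != WallState.AIRBORNE) return;  // stuck to a wall until jump()
            if (player.getPosX() <= 0) {
                wallState = WallState.ON_LEFT_WALL;
                player.setPosX(0);
                player.setVelX(0);
                player.setCanJump(true);
                player.setFacingLeft(false);
            } else if (player.getPosX() >= screenWidth - playerWidth * 2) {
                wallState = WallState.ON_RIGHT_WALL;
                player.setPosX(screenWidth - playerWidth * 2);
                player.setVelX(0);
                player.setCanJump(true);
                player.setFacingLeft(true);
            } else {
                player.setVelY(player.getVelY() + gravity);  // free fall
            }
        }

        void jump() {
            if (wallState == WallState.AIRBORNE) return;
            // apply the diagonal impulse away from the wall here, then:
            wallState = WallState.AIRBORNE;  // wall collisions become meaningful again
        }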

    Read the article

  • Connecting 2 Vertices in 3DS Max?

    - by Reanimation
    How do you connect two vertices in 3DS Max 2013? I have two vertices which I wish to connect with a line to create an edge (actually several). I have tried everything I can think of and done several Google searches, but they only turn up the method for older versions, which says to use the "connect" button... But I can't find the connect button in my version (see below). This is what my menu looks like: These are the vertices I'm trying to connect: Basically, I've edited an STL file and deleted some edges and vertices. Now I want to fill the gaps and triangulate what's left. Thanks.

    Read the article

  • AI agents with FSM: a question regarding this

    - by Prog
    Finite State Machines implemented with the State design pattern are a common way to design AI agents. I am familiar with the State design pattern and know how to implement it. However, I have a question regarding how this is used in games to design AI agents. Please consider a class Monster that represents an AI agent. Simplified, it looks like this:

        class Monster {
            State state;
            // other fields omitted

            public void update() { // called every game-loop cycle
                state.execute(this);
            }

            public void setState(State state) {
                this.state = state;
            }

            // irrelevant stuff omitted
        }

    There are several State subclasses that implement execute() differently. So far, the classic State pattern. Here's my question: AI agents are subject to environmental effects, and other objects communicate with them. For example, an AI agent might tell another AI agent to attack (i.e. agent.attack()), or a fireball might tell an AI agent to fall down. This means that the agent must have methods such as attack() and fallDown(), or more generally some message-receiving mechanism to understand such messages. My question is divided into two parts:

    1. Please say if this is correct: with an FSM, the current State of the agent should be the one taking care of such method calls - i.e. the agent delegates to the current state upon every event. Correct? Or wrong?

    2. If correct, then how is this done? Are all states obligated (by their superclass) to implement methods such as attack(), fallDown(), etc., so the agent can always delegate to them on almost every event? Or is it done in some other way?
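
    On part 2, one common arrangement (a sketch, not the one true way) is for the State superclass to give every event handler a default implementation, so concrete states override only the events they care about and the agent can delegate blindly:

        abstract class State {
            void execute(Monster m) {}  // per-frame behaviour; default: do nothing
            void attack(Monster m) {}   // default reactions are no-ops, so states
            void fallDown(Monster m) {} // override only the events that matter to them
        }

        class Monster {
            private State state = new IdleState();
            void setState(State s) { state = s; }
            void update() { state.execute(this); }
            void attack() { state.attack(this); }      // delegate every event
            void fallDown() { state.fallDown(this); }  // to the current state
        }

        class IdleState extends State {
            @Override void attack(Monster m) { m.setState(new AngryState()); }
        }

        class AngryState extends State {
            @Override void execute(Monster m) { /* chase the player here */ }
            @Override void fallDown(Monster m) { m.setState(new IdleState()); }
        }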

    Read the article

  • How to move an object along a circumference of another object?

    - by Lumis
    I am so out of math that it hurts, but for some of you this should be a piece of cake. I want to move an object around another along its edge, or circumference, on a simple circular path. At the moment my game algorithm knows how to move and position a sprite just at the edge of an obstacle, and now it waits for the next point to move to, depending on various conditions. So the mathematical problem here is how to get (aX, aY) and (bX, bY), when I know the centre (cX, cY), the object position (oX, oY) and the distance required to move (d).
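
    Since the path is a circle of radius r (the distance from the centre to the object), moving an arc length d just offsets the current angle by d / r. A small sketch in Java (any language's atan2/cos/sin work the same way):

        double r = Math.hypot(oX - cX, oY - cY);      // radius: centre to object
        double theta = Math.atan2(oY - cY, oX - cX);  // current angle of the object
        double phi = d / r;                           // arc length d spans angle d / r

        double aX = cX + r * Math.cos(theta + phi);   // d further counter-clockwise
        double aY = cY + r * Math.sin(theta + phi);
        double bX = cX + r * Math.cos(theta - phi);   // d further clockwise
        double bY = cY + r * Math.sin(theta - phi);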

    Read the article

  • Is the TCP protocol good enough for real-time multiplayer games?

    - by kevin42
    Back in the day, TCP connections over dialup/ISDN/slow broadband resulted in choppy, laggy games, because a single dropped packet forced a retransmission and stalled everything queued behind it. That meant a lot of game developers had to implement their own reliability layer on top of UDP, or they used UDP for messages that could be dropped or received out of order, and a parallel TCP connection for information that had to be reliable. Given that the average user has a faster network connection now, can a real-time game such as an FPS give good performance over a TCP connection?
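
    If TCP is tried for this, one commonly cited mitigation is disabling Nagle's algorithm, so small position updates go out immediately instead of being held back for coalescing; the head-of-line stall after a lost packet remains, though. A minimal sketch (host and port are hypothetical):

        import java.io.IOException;
        import java.net.Socket;

        public class GameConnection {
            public static Socket open(String host, int port) throws IOException {
                Socket socket = new Socket(host, port);
                // Send small updates immediately rather than letting Nagle's
                // algorithm coalesce them; this removes one source of latency
                // but not the retransmission stall after a dropped packet.
                socket.setTcpNoDelay(true);
                return socket;
            }
        }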

    Read the article

  • Blending effect on textures

    - by joecks
    Hi, I am trying to build screen animations like flickering, interlacing and color separation, similar to old-style malfunctioning Amiga screens. The intended effects are shown in this video. I am using libgdx, and I have already discovered the Universal Tween Engine, which helps a lot for building transitional animations, but how should I approach those blending effects? Any suggestions? I will make my question more specific once I have learned more about libgdx, but maybe you could give me some hints already. Thanks!
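
    One way to approach the colour-separation part in libgdx, sketched under the assumption that the scene has already been rendered into a FrameBuffer named fbo and that batch and offset already exist: draw the scene texture three times with additive blending, tinting each pass to a single channel and nudging it horizontally. Animating offset per frame gives the flicker. (FBO texture flipping is omitted for brevity.)

        Texture scene = fbo.getColorBufferTexture();
        batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE);  // additive blending
        batch.begin();
        batch.setColor(1, 0, 0, 1);                        // red channel only
        batch.draw(scene, -offset, 0);
        batch.setColor(0, 1, 0, 1);                        // green channel
        batch.draw(scene, 0, 0);
        batch.setColor(0, 0, 1, 1);                        // blue channel
        batch.draw(scene, offset, 0);
        batch.setColor(1, 1, 1, 1);                        // restore the tint
        batch.end();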

    Read the article

  • Java 2D Tile Collision

    - by opiop65
    I have been working on a way to do collision detection forever and just can't figure it out. Here's my simple 2D array:

        for (int x = 0; x < 16; x++) {
            for (int y = 0; y < 16; y++) {
                map[x][y] = AIR;
                if (map[x][y] == AIR) {
                    air.draw(x * tilesize, y * tilesize);
                }
            }
        }
        for (int x = 0; x < 16; x++) {
            for (int y = 6; y < 16; y++) {
                map[x][y] = GRASS;
                if (map[x][y] == GRASS) {
                    grass.draw(x * tilesize, y * tilesize);
                }
            }
        }
        for (int x = 0; x < 16; x++) {
            for (int y = 8; y < 16; y++) {
                map[x][y] = STONE;
                if (map[x][y] == STONE) {
                    stone.draw(x * tilesize, y * tilesize);
                }
            }
        }

    I want to do it with rectangles, using the intersects() method, but how would I go about adding rectangles to all the tiles?

    Edit: My player moves like this:

        if (input.isKeyDown(Input.KEY_W)) {
            shiftY -= delta * speed;
            idY = (int) shiftY;
            if (shift == true) {
                shiftY -= delta * runspeed;
            }
            if (isColliding == true) {
                shiftY += delta * speed;
            }
        }
        if (input.isKeyDown(Input.KEY_S)) {
            shiftY += delta * speed;
            idY = (int) shiftY;
            if (shift == true) {
                shiftY += delta * runspeed;
            }
            if (isColliding == true) {
                shiftY -= delta * speed;
            }
        }
        if (input.isKeyDown(Input.KEY_A)) {
            steve = left;
            shiftX -= delta * speed;
            idX = (int) shiftX;
            if (shift == true) {
                shiftX -= delta * runspeed;
            }
            if (isColliding == true) {
                shiftX += delta * speed;
            }
        }
        if (input.isKeyDown(Input.KEY_D)) {
            steve = right;
            shiftX += delta * speed;
            idX = (int) shiftX;
            if (shift == true) {
                shiftX += delta * runspeed;
            }
            if (isColliding == true) {
                shiftX -= delta * speed;
            }
        }

    (I have tried my own collision code, but it's horrible. It doesn't work in the slightest.)
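
    One way to avoid attaching a Rectangle to every tile, sketched with java.awt.Rectangle against the question's map, tilesize, GRASS and STONE (the player parameters are hypothetical): derive the tile indices the player's rectangle overlaps, and test only those few tiles.

        import java.awt.Rectangle;

        boolean collides(float px, float py, int playerW, int playerH) {
            Rectangle playerRect = new Rectangle((int) px, (int) py, playerW, playerH);
            // only the tiles under the player's rectangle can collide
            int left   = Math.max(0, (int) px / tilesize);
            int top    = Math.max(0, (int) py / tilesize);
            int right  = Math.min(15, (int) (px + playerW) / tilesize);
            int bottom = Math.min(15, (int) (py + playerH) / tilesize);
            for (int x = left; x <= right; x++) {
                for (int y = top; y <= bottom; y++) {
                    if (map[x][y] == GRASS || map[x][y] == STONE) {  // solid tiles
                        Rectangle tile = new Rectangle(x * tilesize, y * tilesize,
                                                       tilesize, tilesize);
                        if (playerRect.intersects(tile)) return true;
                    }
                }
            }
            return false;
        }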

    Read the article

  • Isometric Camera trouble - can't rotate or move correctly

    - by Deukalion
    I'm trying to create a 3D editor, but I've been having some trouble with the camera and understanding each component. I've created two cameras that work OK, but now I'm trying to implement an isometric camera in XNA, without success on the rotation and movement of the camera. All I get working is zoom. (Cube with x = 3f, y = 3f, z = 1f in the center.) This is the constructor for my IsometricCamera (it inherits from ICamera, with methods for rotation, movement and zoom, and properties for the World/View/Projection matrices):

        public IsometricCamera3D(GraphicsDevice device, float startClip = -1000f, float endClip = 1000f)
        {
            matrix_projection = Matrix.CreateOrthographic(device.Viewport.Width, device.Viewport.Height, startClip, endClip);
            rotation = Vector3.Zero;
            matrix_view = Matrix.CreateScale(zoom) *
                Matrix.CreateRotationY(MathHelper.ToRadians(45 + 180)) *
                Matrix.CreateRotationX(MathHelper.ToRadians(30)) *
                Matrix.CreateRotationZ(MathHelper.ToRadians(120)) *
                Matrix.CreateTranslation(rotation.X, rotation.Y, rotation.Z);
        }

    The problem is that when I rotate it, all that happens is that the cube gets more or less shiny, and nothing else happens. What is wrong, and how should I create my view matrix to move and rotate it correctly? Rotate, Move and Zoom look like MethodName(Vector3 rotation/movement) and Zoom(float value); they just increase the value, then call an update that recreates the view matrix according to the code in the constructor. Currently, in my editor I use middle button + mouse movement to rotate the camera, but it's not working like the other cameras. In my default camera I use the World matrix to move, but I guess that's not the best way to go, which is why I'm trying this.
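
    For reference, a hedged note rather than a definitive fix: a hand-built view matrix is usually the inverse of the camera's world transform, so in XNA's row-vector convention the negated camera position goes into the translation and the angles go into the rotations, whereas the constructor above feeds the rotation vector into CreateTranslation. One common form is

        V = T(-\mathbf{p}_{\mathrm{cam}}) \; R_z(-\gamma) \, R_x(-\beta) \, R_y(-\alpha) \, S(\mathrm{zoom})

    so that updating the angles changes only the rotation factors and updating the camera position changes only the translation.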

    Read the article

  • Phone complains that identical GLSL struct definition differs in vert/frag programs

    - by stephelton
    When I provide the following struct definition in linked frag and vert shaders, my phone (Samsung Vibrant / Android 2.2) complains that the definition differs.

        struct Light {
            mediump vec3 _position;
            lowp vec4 _ambient;
            lowp vec4 _diffuse;
            lowp vec4 _specular;
            bool _isDirectional;
            mediump vec3 _attenuation; // constant, linear, and quadratic components
        };

        uniform Light u_light;

    I know the struct is identical because it's included from another file. These shaders work on a Linux implementation and on my Android 3.0 tablet. Both shaders declare "precision mediump float;". The exact error is:

        Uniform variable u_light type/precision does not match in vertex and fragment shader

    Am I doing anything wrong here, or is my phone's implementation broken? Any advice (other than to file a bug report)?

    Read the article

  • What are the steps taken by this GLSL code?

    - by user827992
        void main(void)
        {
            vec2 pos = mod(gl_FragCoord.xy, vec2(50.0)) - vec2(25.0);
            float dist_squared = dot(pos, pos);

            gl_FragColor = (dist_squared < 400.0)
                ? vec4(.90, .90, .90, 1.0)
                : vec4(.20, .20, .40, 1.0);
        }

    Taken from http://people.freedesktop.org/~idr/OpenGL_tutorials/03-fragment-intro.html. Now, this looks really trivial and simple, but my problem is with the mod function. This function is taking two vec2s as inputs, but it is supposed to take just two atomic arguments according to the official documentation; also, this function makes implicit use of the floor function, which again only accepts one atomic argument. Can someone explain this to me step by step and point out what I'm not getting here? Is it some kind of OpenGL trick, or an OpenGL math trick? In the GLSL docs I always find an explicit reference to the types accepted by a function, and vec2 isn't there.
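
    For what it's worth, the GLSL specification declares both mod and floor for the genType family (float, vec2, vec3, vec4) and applies them componentwise, so the page linked above appears to document only the scalar form. Per component,

        \mathrm{mod}(x, y) = x - y \left\lfloor x / y \right\rfloor

    so for a fragment at, say, gl_FragCoord.xy = (130.5, 40.5):

        \mathrm{mod}\big((130.5,\,40.5),\,(50,\,50)\big) = (30.5,\,40.5), \qquad
        \mathrm{pos} = (5.5,\,15.5), \qquad
        5.5^2 + 15.5^2 = 270.5 < 400

    which lands inside one of the light discs.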

    Read the article

  • Premultiplying matrices with Perspective destroys them

    - by Shadows In Rain
    If I apply world_to_camera, perspective and camera_to_screen to my mesh, everything is okay. But if I premultiply the given matrices (i.e. transform = world_to_camera * perspective * camera_to_screen) before applying, then it seems like only perspective has an effect. If it is important: my 3D framework was written from scratch (a test project for a job interview), but it works flawlessly, or at least I think so. So, the question: is this expected behaviour, or is my implementation wrong?
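
    One hypothesis worth checking (assuming nothing about the framework beyond what is stated): the order of a premultiplied chain depends on whether vectors are treated as columns or rows, and mixing the two conventions makes some factors effectively drop out. With column vectors the combined matrix is built in the reverse of the application order,

        v' = M_{cs}\,\big(M_{p}\,(M_{wc}\,v)\big) = \big(M_{cs}\,M_{p}\,M_{wc}\big)\,v

    whereas with row vectors it is

        v' = \big((v\,M_{wc})\,M_{p}\big)\,M_{cs} = v\,\big(M_{wc}\,M_{p}\,M_{cs}\big)

    so transform = world_to_camera * perspective * camera_to_screen is the row-vector order; if the framework multiplies column vectors, the factors need to be reversed.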

    Read the article

  • Drawing visible tiles - side scrolling

    - by Troubleshoot
    Currently I'm calling drawMap every time repaint is called. This is the code I've written for my drawMap method so far:

        public void drawMap(Graphics2D g2d) {
            float cameraX = Player.getX() - (Frame.CANVAS_WIDTH / 2);
            float cameraY = Player.getY() - (Frame.CANVAS_HEIGHT / 2);
            int tileX = (int) cameraX;
            int tileY = (int) cameraY;
            int xIndent = 0, yIndent = 0;
            int a = 0, b = 0;

            while (tileX % TILE_SIZE != 0) {
                tileX--;
                xIndent++;
            }
            while (tileY % TILE_SIZE != 0) {
                tileY--;
                yIndent++;
            }

            for (int y = tileY; y < tileY + Frame.CANVAS_HEIGHT; y += Map.TILE_SIZE) {
                for (int x = tileX; x < tileX + Frame.CANVAS_WIDTH; x += Map.TILE_SIZE) {
                    if ((y / TILE_SIZE < 0 || x / TILE_SIZE < 0) || (y / TILE_SIZE > columnSize))
                        break;
                    g2d.drawImage(map[y / TILE_SIZE][x / TILE_SIZE], a - xIndent, b - yIndent, null);
                    a += TILE_SIZE;
                }
                a = 0;
                b += TILE_SIZE;
            }
        }

    The idea behind this is that it gets the camera position and draws the map relative to the player position. However, instead of the player staying in the center of the screen all the time, the player actually moves away from the center as the map scrolls to the right, and moves back towards the center as it scrolls to the left. I've been trying to pinpoint what I've done wrong, but I can't seem to find it. My code also seems quite messy, so am I doing this the correct way?
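
    A hedged sketch of what this arithmetic usually boils down to: compute the first visible tile with floor division, then place every tile at its world position minus the camera position, so the player's screen position stays fixed by construction. Names follow the question; the bounds check is simplified:

        public void drawMap(Graphics2D g2d) {
            float cameraX = Player.getX() - Frame.CANVAS_WIDTH / 2f;
            float cameraY = Player.getY() - Frame.CANVAS_HEIGHT / 2f;

            int firstTileX = (int) Math.floor(cameraX / TILE_SIZE);
            int firstTileY = (int) Math.floor(cameraY / TILE_SIZE);
            int tilesAcross = Frame.CANVAS_WIDTH / TILE_SIZE + 2;  // +2 covers both edges
            int tilesDown = Frame.CANVAS_HEIGHT / TILE_SIZE + 2;

            for (int ty = firstTileY; ty < firstTileY + tilesDown; ty++) {
                for (int tx = firstTileX; tx < firstTileX + tilesAcross; tx++) {
                    if (tx < 0 || ty < 0 || ty >= map.length || tx >= map[0].length)
                        continue;
                    // world position minus camera position = screen position
                    g2d.drawImage(map[ty][tx],
                                  (int) (tx * TILE_SIZE - cameraX),
                                  (int) (ty * TILE_SIZE - cameraY), null);
                }
            }
        }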

    Read the article

  • Random enemy placement on a 2d grid

    - by Robb
    I want to place my items and enemies randomly (or as randomly as possible). At the moment I use XNA's Random class to generate a number up to 800 for X and up to 600 for Y. It feels like enemies spawn more towards the top of the map than in the middle or bottom. I do not seed the generator; maybe that is something to consider. Are there other techniques that can improve random enemy placement on a 2D grid?
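
    For reference, a seeded generator also makes spawn layouts reproducible while investigating the bias. Shown here in Java for illustration (the class and method names are hypothetical); .NET's System.Random behaves the same way for this purpose, producing uniform values in [0, bound):

        import java.util.Random;

        public class SpawnPoints {
            private final Random rng = new Random(12345L);  // fixed seed: reproducible runs

            // nextInt(bound) is uniform over [0, bound), so visible bias usually
            // comes from spawn logic or coordinate mapping, not the generator
            public int[] randomSpawn(int mapWidth, int mapHeight) {
                return new int[] { rng.nextInt(mapWidth), rng.nextInt(mapHeight) };
            }
        }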

    Read the article

  • Which Kinect package for PC takes care of motion tracking too?

    - by Extrakun
    I am aware that there are open-source drivers for interfacing the Kinect with a PC. My question is: the drivers at OpenKinect seem to provide only the images and depth data (from reading their wiki and API), so it seems you need to provide your own imaging solution. Is there any all-in-one package, with samples/sources, that not only grabs images from the Kinect but also does the imaging/motion detection for you?

    Read the article

  • C++ DirectX 11 D3DXVECTOR3 doesn't allow me to divide it

    - by Miguel P
    If I have a simple vector3 like this:

        D3DXVECTOR3 inversevector = D3DXVECTOR3((pos + lookat_pos));

    it works perfectly! But let's say I wanted to multiply it by:

        Speed * (float) timeHandler.GetDelta()

    So:

        D3DXVECTOR3 inversevector = D3DXVECTOR3((pos + lookat_pos) * Speed * (float) timeHandler.GetDelta());

    Now this fails completely. I've used this snippet before, but for some weird reason it simply won't work (the vector somehow drives x, y, z to 0 or almost 0, no idea why). Do you have any idea why?

    Read the article

  • How to point a sprite's direction towards the mouse or an object [duplicate]

    - by Irfan Dahir
    This question was marked as a duplicate of "Rotating To Face a Point" (1 answer).

    I need some help with rotating sprites towards the mouse. I'm currently using the library Allegro 5.x. The rotation of the sprite works, but it's constantly inaccurate: it's always a few degrees off from the mouse, to the left. Can anyone please help me with this? Thank you.

    P.S. I got help with the rotation function from here: http://www.gamefromscratch.com/post/2012/11/18/GameDev-math-recipes-Rotating-to-face-a-point.aspx. Although it's in JavaScript, the math function is the same. Also, adding the following does not fix it:

        if (angle < 0) {
            angle = 360 - (-angle);
        }

    The code:

        #include <allegro5\allegro.h>
        #include <allegro5\allegro_image.h>
        #include "math.h"

        int main(void)
        {
            int width = 640;
            int height = 480;
            bool exit = false;
            int shipW = 0;
            int shipH = 0;

            ALLEGRO_DISPLAY *display = NULL;
            ALLEGRO_EVENT_QUEUE *event_queue = NULL;
            ALLEGRO_BITMAP *ship = NULL;

            if (!al_init())
                return -1;

            display = al_create_display(width, height);
            if (!display)
                return -1;

            al_install_keyboard();
            al_install_mouse();
            al_init_image_addon();
            al_set_new_bitmap_flags(ALLEGRO_MIN_LINEAR | ALLEGRO_MAG_LINEAR); // smoother rotate

            ship = al_load_bitmap("ship.bmp");
            shipH = al_get_bitmap_height(ship);
            shipW = al_get_bitmap_width(ship);
            int shipx = width / 2 - shipW / 2;
            int shipy = height / 2 - shipH / 2;
            int mx = width / 2;
            int my = height / 2;
            al_set_mouse_xy(display, mx, my);

            event_queue = al_create_event_queue();
            al_register_event_source(event_queue, al_get_mouse_event_source());
            al_register_event_source(event_queue, al_get_keyboard_event_source());
            //al_hide_mouse_cursor(display);

            float angle;

            while (!exit)
            {
                ALLEGRO_EVENT ev;
                al_wait_for_event(event_queue, &ev);
                if (ev.type == ALLEGRO_EVENT_KEY_UP)
                {
                    switch (ev.keyboard.keycode)
                    {
                    case ALLEGRO_KEY_ESCAPE:
                        exit = true;
                        break;
                    /*case ALLEGRO_KEY_LEFT: degree -= 10; break;
                    case ALLEGRO_KEY_RIGHT: degree += 10; break;*/
                    case ALLEGRO_KEY_W: shipy -= 10; break;
                    case ALLEGRO_KEY_S: shipy += 10; break;
                    case ALLEGRO_KEY_A: shipx -= 10; break;
                    case ALLEGRO_KEY_D: shipx += 10; break;
                    }
                }
                else if (ev.type == ALLEGRO_EVENT_MOUSE_AXES)
                {
                    mx = ev.mouse.x;
                    my = ev.mouse.y;
                    angle = atan2(my - shipy, mx - shipx);
                }

                // al_draw_bitmap(ship, shipx, shipy, 0);
                // al_draw_rotated_bitmap(ship, shipW/2, shipH/2, shipx, shipy, degree * 3.142/180, 0);
                // I pass the angle directly because Allegro works in radians;
                // multiplying by 180/3.142 made the rotation go haywire.
                al_draw_rotated_bitmap(ship, shipW / 2, shipH / 2, shipx, shipy, angle, 0);
                al_flip_display();
                al_clear_to_color(al_map_rgb(0, 0, 0));
            }

            al_destroy_bitmap(ship);
            al_destroy_event_queue(event_queue);
            al_destroy_display(display);
            return 0;
        }

    EDIT: This was marked as a duplicate by a moderator. I'd like to say that this isn't the same as that question. I'm a total beginner at game programming; I had a look at the other topic and had difficulty understanding it. Please understand this, thank you. :/ Also, while printing the angle I got strange values. Here is a screenshot: http://img34.imageshack.us/img34/7396/fzuq.jpg. Which is weird, because aren't angles supposed to be 360 degrees only?
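
    Two things stand out, offered as guesses rather than a definitive diagnosis: al_draw_rotated_bitmap rotates the ship about its centre (shipW/2, shipH/2), but the angle is measured from the ship's top-left corner (shipx, shipy), which produces a constant offset; and atan2 returns radians in the range -pi to pi, which is why the printed values don't look like 0-360 degrees. A sketch of the correction, in Java syntax (the math carries over to the C++ code unchanged):

        // measure the angle from the same point the sprite rotates around
        double centerX = shipx + shipW / 2.0;  // rotation pivot, not top-left
        double centerY = shipy + shipH / 2.0;
        double angle = Math.atan2(my - centerY, mx - centerX);  // radians, -pi..pi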

    Read the article

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflections up and down depending on the state of the wave's height. Naturally, after examining the brief paragraph of code:

        // calculating correction that shifts reflection up/down according to water wave Y position
        float4 projected_waveheight = mul(float4(input.positionWS.x, input.positionWS.y, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;
        projected_waveheight = mul(float4(input.positionWS.x, -0.8, input.positionWS.z, 1), g_ModelViewProjectionMatrix);
        waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;
        reflection_disturbance.y = max(-0.15, waveheight_correction + reflection_disturbance.y);

    my first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing, so the water is rendered as if nothing were there, or just the sky. Now, that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline. My problem is that I cannot really pinpoint the logic behind it. It projects the actual world-space position of a point of the wave/water geometry, multiplies by -0.5, and then takes another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?).

    Clues in the code seem to indicate it was derived by trial and error, because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide):

        float waveheight_correction = -0.5 * projected_waveheight.y / projected_waveheight.w;

    And then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them:

        waveheight_correction += 0.5 * projected_waveheight.y / projected_waveheight.w;

    By removing the divide by 2, I see no difference in quality (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y; why is that? This redundancy and the seemingly arbitrary choice of -0.8 and -0.15 lead me to conclude that this might be a combination of heuristics and guesswork. Is there a logical underpinning to this, or is it just a desperate hack? Here is an exaggeration of the initial problem which the code fragment fixes, observed at the lowest tessellation level. Hopefully it might spark an idea I'm missing. The -0.8 might be a reference height from which to deduce how much to disturb the texture coordinates sampling the planarly reflected geometry render, and -0.15 might be the lower bound, a security measure.

    Read the article

  • Where can I train my game AI skills? (any upcoming competition?) [on hold]

    - by user1671710
    There are two main options: building an AI plugin for an existing game, or entering a competition. Do you have any concrete tips? Is there any competition opening soon? From my research, the competitions are:

    - http://aichallenge.org (last held 2011)
    - http://www.battlecode.org (January 2014)
    - http://webdocs.cs.ualberta.ca/~cdavid/starcraftaicomp/index.shtml (August 2014)
    - http://aibirds.org (maybe summer 2014)
    - http://www.marioai.org (last held 2012)
    - http://www.ice.ci.ritsumei.ac.jp/~ftgaic/index.htm (last held 2013)
    - http://www.pacman-vs-ghosts.net (last held 2012)
    - http://ai-contest2013.gameloft.com/index/contest (last held 2013)
    - http://www.botprize.org/ (last held 2012)

    And maybe more; these are from a quick search. Obviously there were many competitions this year, but it is difficult to keep track of them. The main question is: do you have any information on any currently running AI competition?

    Read the article

  • XNA - Obtaining depth from the scene's render target?

    - by user1423893
    I'm currently rendering my scene to a render target so it can be used for rendering methods such as post-processing and order-independent transparency.

        rtScene = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight,
            false,
            SurfaceFormat.Rgba64,
            DepthFormat.Depth24Stencil8, // required for objects to be drawn correctly (e.g. wireframe model surrounding model)
            0,
            RenderTargetUsage.PreserveContents
        );

    I am required to use RenderTargetUsage.PreserveContents so that the same render target can be rendered to multiple times, once for each of the draw methods below:

    DrawBackground
    DrawDeferred
    DrawForward
    DrawTransparent

    The problem is that DrawTransparent requires a copy of the scene's depth as a texture. Is there any way to obtain this from the scene render target above (rtScene)? I can't have more than one render target with RenderTargetUsage.PreserveContents, as this causes problems on hardware such as the Xbox 360, so rendering the depth to a separate render target at the same time as I render the scene isn't possible as far as I can tell. Would I be able to get around this problem by "ping-ponging" two render targets (using the more compatible RenderTargetUsage.DiscardContents) and using the result for the depth texture?

    Read the article

  • How to set sprite source coordinates?

    - by ChaosDev
    I am creating my own sprite drawer with DX11 in C++. It works fine, but I don't know how to apply a source rectangle to the texture coordinates of the rendering surface (for animation sprite sheets).

        // source = (0, 0, 32, 64); // RECT
        D3DXVECTOR2 t0 = D3DXVECTOR2(1.0f, 0.0f);
        D3DXVECTOR2 t1 = D3DXVECTOR2(1.0f, 1.0f);
        D3DXVECTOR2 t2 = D3DXVECTOR2(0.0f, 1.0f);
        D3DXVECTOR2 t3 = D3DXVECTOR2(0.0f, 1.0f);
        D3DXVECTOR2 t4 = D3DXVECTOR2(0.0f, 0.0f);
        D3DXVECTOR2 t5 = D3DXVECTOR2(1.0f, 0.0f);

        VertexPositionColorTexture vertices[] =
        {
            { D3DXVECTOR3(dest.left + dest.right, dest.top,               z), D3DXVECTOR4(1, 1, 1, 1), t0 },
            { D3DXVECTOR3(dest.left + dest.right, dest.top + dest.bottom, z), D3DXVECTOR4(1, 1, 1, 1), t1 },
            { D3DXVECTOR3(dest.left,              dest.top + dest.bottom, z), D3DXVECTOR4(1, 1, 1, 1), t2 },
            { D3DXVECTOR3(dest.left,              dest.top + dest.bottom, z), D3DXVECTOR4(1, 1, 1, 1), t3 },
            { D3DXVECTOR3(dest.left,              dest.top,               z), D3DXVECTOR4(1, 1, 1, 1), t4 },
            { D3DXVECTOR3(dest.left + dest.right, dest.top,               z), D3DXVECTOR4(1, 1, 1, 1), t5 },
        };
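
    The usual approach is to divide the source rectangle by the texture's pixel size to get normalised UVs, then use those in place of the 0/1 constants. A hedged sketch in Java syntax (a hypothetical helper; the arithmetic transfers directly to the C++ code):

        final class SpriteUV {
            // Convert a pixel-space source rect on a texW x texH texture to UVs.
            static float[] sourceToUV(int left, int top, int right, int bottom,
                                      int texW, int texH) {
                float u0 = left  / (float) texW, v0 = top    / (float) texH;
                float u1 = right / (float) texW, v1 = bottom / (float) texH;
                // corners in the question's order t0..t5 (two triangles)
                return new float[] {
                    u1, v0,   u1, v1,   u0, v1,   // t0, t1, t2
                    u0, v1,   u0, v0,   u1, v0    // t3, t4, t5
                };
            }
        }

    For example, source = (0, 0, 32, 64) on a 128 x 128 sheet gives u in [0, 0.25] and v in [0, 0.5], so only that frame of the sheet is sampled.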

    Read the article

  • Best way to solve tile drawing in 2D side scroller?

    - by TheCompBoy
    What I still can't figure out is which would be the saner, easier and faster way to draw the map on the screen. I will use many tiles for my maps in my side scroller. The problem is: should I make the maps as whole images, like one .png file for each map, or should I draw the tiles in code, like a for loop in C++? Which way is most recommended, or where can I read about which way is best?
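
    For what it's worth, the loop-over-indices approach is the one usually recommended: the map becomes small data (tile indices) plus a handful of shared tile images, instead of one large bitmap per map, and it can be edited as data. A hedged sketch in Java syntax (drawTile, tileImages, TILE_SIZE and SCREEN_WIDTH are hypothetical; the structure is identical in C++):

        // map[y][x] holds an index into tileImages; only tiles in view get drawn
        void drawVisibleTiles(int[][] map, int cameraX) {
            int firstTile = Math.max(0, cameraX / TILE_SIZE);
            int lastTile = (cameraX + SCREEN_WIDTH) / TILE_SIZE + 1;
            for (int y = 0; y < map.length; y++) {
                int last = Math.min(map[y].length - 1, lastTile);
                for (int x = firstTile; x <= last; x++) {
                    drawTile(tileImages[map[y][x]],
                             x * TILE_SIZE - cameraX,  // world minus camera = screen
                             y * TILE_SIZE);
                }
            }
        }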

    Read the article

  • Jump pads problem

    - by Pasquale Sada
    I'm trying to make a character jump onto a landing pad that sits above him. Here is the formula I've used (everything is pretty much self-explanatory, except maybe character_MaxForce, which is the total force the character can jump with):

        deltaPosition = target - character_position;
        sqrtTerm = Sqrt(2 * -gravity.y * deltaPosition.y + MaxYVelocity * character_MaxForce);
        time = (MaxYVelocity - sqrtTerm) / gravity.y;
        speedSq = jumpVelocity.x * jumpVelocity.x + jumpVelocity.z * jumpVelocity.z;

    If speedSq < (character_MaxForce * character_MaxForce), we have the right time, so we can store the value:

        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;

    Otherwise we try the other solution:

        time = (MaxYVelocity + sqrtTerm) / gravity.y;

    and then store it:

        jumpVelocity.x = deltaPosition.x / time;
        jumpVelocity.z = deltaPosition.z / time;
        jumpVelocity.y = MaxYVelocity;
        rigidbody_velocity = jumpVelocity;

    The problem is that the character jumps away from the landing pad, or sometimes he jumps too far and never hits the landing pad.
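
    If this is meant to be the standard closed-form jump solution, the vertical axis alone determines the jump time via a quadratic; a hedged reconstruction, with v_y = MaxYVelocity and g_y = gravity.y (negative):

        \Delta y = v_y\, t + \tfrac{1}{2}\, g_y\, t^2
        \quad\Longrightarrow\quad
        t = \frac{-v_y \pm \sqrt{\,v_y^2 + 2\, g_y\, \Delta y\,}}{g_y}

    and then v_x = \Delta x / t, v_z = \Delta z / t, checked against the speed limit before the slower root is tried. Two spots where the code above differs from this and may be worth checking: the discriminant contains v_y squared, where the code uses MaxYVelocity * character_MaxForce (these only agree when the two quantities are equal), and the quadratic gives -v_y in the numerator, where the code uses +MaxYVelocity.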

    Read the article
