Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.


  • How to move the camera behind a model with the same angle in XNA?

    - by Mehdi Bugnard
    I am having difficulty moving my camera behind an object in a 3D world. I want to create two view modes: 1) first person (FPS); 2) an external view behind the character (second person). I searched the net for examples, but they do not work in my project. Here is the code used to change the view when F2 is pressed:

        //Camera
        double X1 = this.camera.PositionX;
        double X2 = this.player.Position.X;
        double Z1 = this.camera.PositionZ;
        double Z2 = this.player.Position.Z;

        //Verify that the user is not holding F2 down
        if (!this.camera.IsF2TurnedInBoucle)
        {
            // If the view mode is the second person
            if (this.camera.ViewCamera_type == CameraSimples.ChangeView.SecondPerson)
            {
                this.camera.ViewCamera_type = CameraSimples.ChangeView.firstPerson;

                //Calculate position - ?? Here is my problem
                double direction = Math.Atan2(X2 - X1, Z2 - Z1) * 180.0 / 3.14159265;

                //Calculate angle - ?? Here is my problem
                this.camera.position = ..
                this.camera.rotation = ..
                this.camera.MouseRadian_LeftrightRot = (float)direction;
            }
            //If the view mode is first person
            else
            {
                //....
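    One common way to place a chase camera is to derive the offset from the player's facing angle (a minimal sketch, assuming the player exposes a position and a yaw angle; the names and tuning values are illustrative, not from the code above):

        // Sketch: compute a view matrix a fixed distance behind the player.
        // playerPosition and playerYaw are assumed inputs.
        Matrix ChaseCamera(Vector3 playerPosition, float playerYaw)
        {
            const float distanceBehind = 5f; // illustrative tuning values
            const float heightAbove = 2f;

            // Unit vector in the direction the player faces, on the XZ plane.
            Vector3 forward = new Vector3((float)Math.Sin(playerYaw), 0f, (float)Math.Cos(playerYaw));

            Vector3 cameraPosition = playerPosition - forward * distanceBehind + Vector3.Up * heightAbove;

            // Looking at the player keeps the camera's angle locked to the player's.
            return Matrix.CreateLookAt(cameraPosition, playerPosition, Vector3.Up);
        }

    Because the offset is derived from the player's yaw, the camera keeps the same angle relative to the character as it turns.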


  • How to do geometric projection shadows?

    - by John Murdoch
    I have decided that since my game world is mostly flat I don't need better shadows than geometric projections - at least for now. The only problem is I don't even know how to do those properly - that is, how to produce a 4x4 matrix which would render shadows for my objects (that is, I guess, project them onto a horizontal XZ plane). I would like a light source at infinity (e.g., the sun at some point in the sky) and thus a parallel projection. My current code does something that looks almost right for small flying objects, but is actually a very crude approximation, as it doesn't project the objects onto the ground but simply moves them there (I think). It also wrongly assumes the sun is always at the zenith (projecting straight down).

        Gdx.gl20.glEnable(GL10.GL_BLEND);
        Gdx.gl20.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);

        // shells
        shellTexture.bind();
        shader.begin();
        for (ShellState state : shellStates.values()) {
            transform.set(camera.combined);
            transform.mul(state.transform);
            shader.setUniformMatrix("u_worldView", transform);
            shader.setUniformi("u_texture", 0);
            shellMesh.render(shader, GL10.GL_TRIANGLES);
        }
        shader.end();

        // shadows
        shader.begin();
        for (ShellState state : shellStates.values()) {
            transform.set(camera.combined);
            m4.set(state.transform);
            state.transform.getTranslation(v3);
            // TODO HACK: +0.5f ensures the shadow appears above the ground; this is
            // overall a hack, as we are just moving the shell to the surface instead
            // of projecting it onto the surface!
            m4.translate(0, -v3.y + 0.5f, 0);
            transform.mul(m4);
            shader.setUniformMatrix("u_worldView", transform);
            shader.setUniformi("u_texture", 0);
            // TODO: make the shadow black somehow
            shellMesh.render(shader, GL10.GL_TRIANGLES);
        }
        shader.end();
        Gdx.gl.glDisable(GL10.GL_BLEND);

    So my questions are:
    a) What is the proper way to produce a Matrix4 to pass to OpenGL which would render the shadows for my objects?
    b) I am supposed to use another fragment shader for the shadows, which would paint them in semi-transparent grey - correct?
    c) The limitation of this simplistic approach is that whenever there is some object on the ground (i.e. it is not flat) the shadows will not be drawn - correct?
    d) Do I need to add something very small to the y (up) coordinate to avoid z-fighting with the ground textures? Or is the fact that they are semi-transparent enough to resolve that problem?
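    For question (a), the standard construction projects each vertex along the light direction onto the plane y = 0; since that mapping is linear, it fits in a matrix. A minimal sketch, shown in C#/XNA row-vector convention for consistency with the other examples here (the light direction L is an assumed input; libGDX's Matrix4 can be filled with the same coefficients):

        // Sketch: planar projection of vertices along light direction L onto y = 0.
        // Derivation: p' = p - (p.Y / L.Y) * L, which is linear in p, hence a matrix.
        // L is an assumed directional-light vector (pointing down from the sun).
        Vector3 L = Vector3.Normalize(new Vector3(0.4f, -1f, 0.2f));

        Matrix shadow = new Matrix(
            1f,         0f, 0f,         0f,
            -L.X / L.Y, 0f, -L.Z / L.Y, 0f, // the y component smears x and z, and y itself maps to 0
            0f,         0f, 1f,         0f,
            0f,         0f, 0f,         1f);

        // world * shadow flattens the model onto the ground plane
        // (world = the object's normal world matrix).
        Matrix shadowWorld = world * shadow;

    Drawing the flattened mesh with a shader that outputs semi-transparent black then covers (b); XNA itself also provides Matrix.CreateShadow for the general any-plane case.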


  • What tools should I consider if my aim is to make a game available to as many platforms as possible?

    - by Kenji Kina
    We're planning on developing a 2D, grid-based puzzle game, and although it's still very early in the planning stages, we'd like to make our decisions well from the beginning. Our strategy is to make the game available on as many platforms as possible, for example PCs (Windows, Mac and/or Linux), mobile phones (iPhone and/or Android-based phones), and game consoles (XBLA and/or PSN). PC will have the emphasis, but I believe that's the most flexible platform, so it shouldn't be a problem. So, what programming language, game engine, frameworks and all-around tools would be best suited to our goal? P.S.: I'm betting a single set of tools won't cover ALL of them, and that there will still be some kind of "translating" effort for some platforms, but we'd like to know which the most far-reaching ones are.


  • Making a game "resize-safe"

    - by CPP_Person
    It's one thing to get the graphics aligned perfectly; it's another to do this for every single resolution without taking too much time and/or making the code unreadable due to its size. Games like Battlefield 3 and Minecraft seem to manage this. But what do they do to keep things from stretching or going off the screen? I don't know any algorithms for this. I'd like some help on this topic. I've always programmed games that only handle a single resolution, so any help would be appreciated.
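    One common algorithm is a fixed "virtual" resolution scaled uniformly to the real one, with letterboxing for mismatched aspect ratios. A minimal sketch in C#/XNA terms (the numbers and variable names are illustrative):

        // Sketch: design against one virtual resolution, scale once at draw time.
        const float VirtualWidth = 1280f;
        const float VirtualHeight = 720f;

        float scale = Math.Min(actualWidth / VirtualWidth, actualHeight / VirtualHeight); // uniform, no stretching

        // Center the scaled view; the leftover margins become letterbox bars.
        Vector2 offset = new Vector2(
            (actualWidth - VirtualWidth * scale) / 2f,
            (actualHeight - VirtualHeight * scale) / 2f);

        Matrix transform = Matrix.CreateScale(scale, scale, 1f)
                         * Matrix.CreateTranslation(offset.X, offset.Y, 0f);

        // In XNA, pass the matrix to SpriteBatch.Begin so all 2D drawing is scaled once.
        spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, transform);

    All game logic and layout then works in virtual coordinates, so nothing in the gameplay code knows or cares about the real resolution.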


  • Multiplayer mobile games and coping with high latency

    - by liortal
    I'm currently researching a design for an online (realtime) mobile multiplayer game. As such, I'm taking into consideration that latency (lag) is going to be high - perhaps higher than on PCs/consoles. I'd like to know if there are ways to overcome this or minimize the issues of high latency. The model I'll be using is peer-to-peer (using Photon Cloud to broadcast messages to all other players). How do I deal with a scenario where a message about a local object's state at time t will only get to other players at t + HUGE_LAG?
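    The usual mitigation is dead reckoning: send state plus velocity, extrapolate on the receiver, and blend out corrections rather than snapping. A minimal sketch in C# (all names are illustrative; it assumes the sender's timestamp is roughly comparable to the receiver's clock):

        // Sketch: each state message carries position, velocity and a send time.
        struct StateMessage
        {
            public Vector2 Position;
            public Vector2 Velocity;
            public double SentTime; // sender's clock, assumed roughly synchronized
        }

        Vector2 Extrapolate(StateMessage msg, double now)
        {
            float age = (float)(now - msg.SentTime);  // roughly the observed latency
            return msg.Position + msg.Velocity * age; // predict where it is by now
        }

        // On each received update, hide the correction over a few frames:
        // displayed = Vector2.Lerp(displayed, Extrapolate(msg, now), 0.2f);

    The remote object is then drawn where it probably is now, not where it was HUGE_LAG ago, at the cost of occasional visible corrections when it changes course.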


  • XNA Windows Phone: black textures in release mode

    - by Lukasz Kajstura
    I just made a 3D game in XNA for Windows Phone 7. I built it in release mode in Visual Studio 2010, and suddenly when I run the game there are no textures on the models - two models are black and one is transparent. The models are in .X format, exported from 3ds Max, and have textures in .jpg which are also added to the game content. I set the build action to None and everything worked fine in debug mode. When I change to release mode - black textures. When I set the build action to Compile it gives me this warning, and still no textures:

        Asset was built 2 times with different settings: using TextureImporter and TextureProcessor
        using TextureImporter and TextureProcessor, referenced by...

    What can I do?


  • LibGdx drawing weird behaviour

    - by Ryckes
    I am seeing strange behaviour when rendering TextureRegions in my game, but only when it is paused. I am making a game for Android, in Java with LibGdx. When I comment out the line drawLevelPaused() everything works fine, both running and paused. When it is not commented out, everything works fine until I pause the game: it then draws those two rectangles, but the ships may not be shown; and if I comment out drawShips() and drawTarget() (just trying things), one of the planets may disappear, or if I change the order, other things disappear and the ones that disappeared before are rendered again. I can't find a way to fix this behaviour - I beg your help, and I hope it's my mistake and not a LibGdx issue. I use OpenGL ES 2.0, as stated in AndroidManifest.xml, if that is of any help. Thank you in advance. My Screen render method (the game loop) is as follows:

        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(0.1f, 0.1f, 0.1f, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            controller.update(delta);
            renderer.render();
        }

    When the world state is PAUSED, controller.update() does nothing at all (there is a switch in it). And renderer.render() is as follows:

        public void render() {
            int worldState = this.world.getWorldState();
            updateCamera();
            spriteBatch.begin();
            drawPlanets();
            drawTarget();
            drawShips();
            if (worldState == World.PAUSED) {
                drawLevelPaused();
            } else if (worldState == World.LEVEL_WON) {
                drawLevelWin();
            }
            spriteBatch.end();
        }

    And those methods are:

        private void updateCamera() {
            this.offset = world.getCameraOffset();
        }

        private void drawPlanets() {
            for (Planet planet : this.world.getPlanets()) {
                this.spriteBatch.draw(this.textures.getTexture(planet.getTexture()),
                    (planet.getPosition().x - this.offset[0]) * ppuX,
                    (planet.getPosition().y - this.offset[1]) * ppuY);
            }
        }

        private void drawTarget() {
            Target target = this.world.getTarget();
            this.spriteBatch.draw(this.textures.getTexture(target.getTexture()),
                (target.getPosition().x - this.offset[0]) * ppuX,
                (target.getPosition().y - this.offset[1]) * ppuY);
        }

        private void drawShips() {
            for (Ship ship : this.world.getShips()) {
                this.spriteBatch.draw(this.textures.getTexture(ship.getTexture()),
                    (ship.getPosition().x - this.offset[0]) * ppuX,
                    (ship.getPosition().y - this.offset[1]) * ppuY,
                    ship.getBounds().width * ppuX / 2, ship.getBounds().height * ppuY / 2,
                    ship.getBounds().width * ppuX, ship.getBounds().height * ppuY,
                    1.0f, 1.0f, ship.getAngle() - 90.0f);
            }
            if (this.world.getStillShipVisibility()) {
                Ship ship = this.world.getStillShip();
                Arrow arrow = this.world.getArrow();
                this.spriteBatch.draw(this.textures.getTexture(ship.getTexture()),
                    (ship.getPosition().x - this.offset[0]) * ppuX,
                    (ship.getPosition().y - this.offset[1]) * ppuY,
                    ship.getBounds().width * ppuX / 2, ship.getBounds().height * ppuY / 2,
                    ship.getBounds().width * ppuX, ship.getBounds().height * ppuY,
                    1f, 1f, ship.getAngle() - 90f);
                this.spriteBatch.draw(this.textures.getTexture(arrow.getTexture()),
                    (ship.getCenter().x - this.offset[0] - arrow.getBounds().width / 2) * ppuX,
                    (ship.getCenter().y - this.offset[1]) * ppuY,
                    arrow.getBounds().width * ppuX / 2, 0,
                    arrow.getBounds().width * ppuX, arrow.getBounds().height * ppuY,
                    1f, arrow.getRate(), ship.getAngle() - 90f);
            }
        }

        private void drawLevelPaused() {
            // Note: this uses a ShapeRenderer while spriteBatch.begin()/end() is still active.
            this.shapeRenderer.begin(ShapeType.FilledRectangle);
            this.shapeRenderer.setColor(0f, 0f, 0f, 0.8f);
            this.shapeRenderer.filledRect(0, 0,
                this.width / this.ppuX, PAUSE_MARGIN_HEIGHT / this.ppuY);
            this.shapeRenderer.filledRect(0, (this.height - PAUSE_MARGIN_HEIGHT) / this.ppuY,
                this.width / this.ppuX, PAUSE_MARGIN_HEIGHT / this.ppuY);
            this.shapeRenderer.end();
            for (Button button : this.world.getPauseButtons()) {
                this.spriteBatch.draw(this.textures.getTexture(button.getTexture()),
                    (button.getPosition().x - this.offset[0]) * this.ppuX,
                    (button.getPosition().y - this.offset[1]) * this.ppuY);
            }
        }


  • Undefined fireball movement behavior

    - by optimisez
    Demonstration video. What I am trying to do: after the player shoots 10 fireballs, delete all the fireball objects and recreate a new set of 10 fireball objects. I did that, but there is a weird bug: sometimes a fireball will come out from the top and move to the right after shooting a few times. All 10 fireballs should follow the player at all times, and every fireball should come out from the player, even after a new set of fireballs is recreated. Any ideas how to fix it?

    Variables:

        struct gameObject {
            float X;
            float Y;
            int length;
            int height;
            bool action;
        };

        // Fireball
        #define FIREBALL_NUM 10
        LPDIRECT3DTEXTURE9 fireball = NULL;
        RECT fireballRect;
        gameObject *fireballDest = new gameObject[FIREBALL_NUM];
        int iFireBallAnimation;
        int fireballCount = 0;

    Set up fireball:

        void setUpFireBall()
        {
            // Initialize destination rectangle, rectangle height and length
            for (int i = 0; i < FIREBALL_NUM; i++)
            {
                fireballDest[i].X = 0;
                fireballDest[i].Y = 0;
                fireballDest[i].length = fireballRect.right - fireballRect.left;
                fireballDest[i].height = fireballRect.bottom - fireballRect.top;
            }
            iFireBallAnimation = fireballRect.right - fireballRect.left;

            // Initialize booleans
            for (int i = 0; i < FIREBALL_NUM; i++)
            {
                fireballDest[i].action = false;
            }
        }

    Initialize fireball:

        void initFireball()
        {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "fireball.png", 512, 512,
                D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT,
                D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 0), NULL, NULL, &fireball);

            // Initialize source rectangle
            fireballRect.left = 0;
            fireballRect.top = 256;
            fireballRect.right = 64;
            fireballRect.bottom = 320;

            setUpFireBall();
        }

    Update fireball:

        void update()
        {
            updateAnimation();
            updateAI();
            updatePhysics();
            updateGameState();
        }

        void updatePhysics()
        {
            motion();
            collison();
        }

        void motion()
        {
            playerMove();
            playerJump();
            playerGravity();
            shootFireball();
            fireballFollowPlayer();
        }

        void shootFireball()
        {
            if (keyArr['Z'])
                fireballDest[fireballCount].action = true;

            if (fireballDest[fireballCount].action)
            {
                fireballDest[fireballCount].X += 10;
                if (fireballDest[fireballCount].X > 800)
                    fireballCount++;
            }
        }

        void fireballFollowPlayer()
        {
            for (int i = 0; i < FIREBALL_NUM; i++)
            {
                if (fireballDest[i].action == false)
                {
                    fireballDest[i].X = playerDest.X - 30;
                    fireballDest[i].Y = playerDest.Y - 14;
                }
            }
        }

        void updateGameState()
        {
            // When no more fireballs are left, rearm the fireballs
            if (fireballCount == FIREBALL_NUM)
            {
                delete[] fireballDest;
                fireballDest = new gameObject[FIREBALL_NUM];
                fireballCount = 0;
                setUpFireBall();
            }
        }

    Render fireball:

        void renderFireball()
        {
            for (int i = 0; i < FIREBALL_NUM; i++)
            {
                if (fireballDest[i].action)
                    sprite->Draw(fireball, &fireballRect, NULL,
                        &D3DXVECTOR3(fireballDest[i].X, fireballDest[i].Y, 0),
                        D3DCOLOR_XRGB(255, 255, 255));
            }
        }


  • Direct3D - Code structure

    - by marcg11
    I'm learning DirectX in a master's degree and they taught us to have a GraphicsLayer class which is the one connecting with the Direct3D library. That way this class is completely independent from the other classes (my game classes), meaning that changing the renderer to OpenGL wouldn't require much effort - only changing the GraphicsLayer. This class has its LoadAssets and Paint methods, but I have a question: they told us to load all the assets inside this class, which means all of these calls will be in the LoadAssets method:

        D3DXCreateTextureFromFileEx(g_pD3DDevice, "tiles.png", 0, 0, 1, 0,
            D3DFMT_UNKNOWN, D3DPOOL_DEFAULT, D3DX_FILTER_NONE, D3DX_FILTER_NONE,
            NULL, NULL, NULL, &texTiles);
        // And more resources to load
        //...

    texTiles, as you see, is an LPDIRECT3DTEXTURE9 instance declared in GraphicsLayer.h. So my question is: how do you manage all the resources? Do I have to declare all my game textures in the .h even if I'm not using them? How would you load only the resources a given scene needs, and draw them, in a well-structured way?
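    One common pattern is a resource manager keyed by file name, so the graphics layer owns the loading but scenes only declare what they need. A minimal sketch (shown in C# for consistency with the other examples here; the same shape works in C++ with a std::map, and the Texture type and LoadFromFile call are illustrative stands-ins for the API-specific pieces):

        // Sketch: a texture cache. Scenes ask for assets by name; the manager
        // loads each file once and hands back the shared handle.
        class TextureCache
        {
            private readonly Dictionary<string, Texture> loaded = new Dictionary<string, Texture>();

            public Texture Get(string fileName)
            {
                Texture tex;
                if (!loaded.TryGetValue(fileName, out tex))
                {
                    tex = LoadFromFile(fileName); // wraps D3DXCreateTextureFromFileEx or equivalent
                    loaded.Add(fileName, tex);
                }
                return tex;
            }

            // Unload everything when a scene ends, so only active assets stay resident.
            public void Clear()
            {
                foreach (var tex in loaded.Values) tex.Dispose();
                loaded.Clear();
            }
        }

    With this shape, nothing needs to be declared in the header per texture; each scene just lists the file names it wants loaded.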


  • Calculate an AABB for bone animated model

    - by Byte56
    I have a model whose initial bounding box is calculated by finding the maximum and minimum on the x, y and z axes, producing a correct result. The vertices are then stored in a VBO and only altered with matrices for rotation and bone animation. Currently the bounds are not updated when the model is altered, so the animated and rotated model keeps the same bounds as before, which no longer accurately represent it. So my question is: how can I calculate the bounding box using the armature matrices and the rotation/translation matrices of each model? Keep in mind the modified vertex data is not available, because those calculations are performed on the GPU in the shader. The end result I want is an accurate AABB that represents the animated model for picking/basic collision checks.
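    One approach that avoids reading vertices back from the GPU: precompute a bind-pose AABB per bone, then each frame transform that box's eight corners by the bone's current matrix on the CPU and merge the results. A minimal sketch in C#/XNA terms (boneBindBoxes and boneMatrices are assumed inputs, not from the question):

        // Sketch: merge the transformed corners of every bone's bind-pose box.
        BoundingBox ComputeAnimatedBounds(BoundingBox[] boneBindBoxes, Matrix[] boneMatrices)
        {
            Vector3 min = new Vector3(float.MaxValue);
            Vector3 max = new Vector3(float.MinValue);

            for (int i = 0; i < boneBindBoxes.Length; i++)
            {
                foreach (Vector3 corner in boneBindBoxes[i].GetCorners())
                {
                    Vector3 p = Vector3.Transform(corner, boneMatrices[i]); // bone + world transform
                    min = Vector3.Min(min, p);
                    max = Vector3.Max(max, p);
                }
            }
            return new BoundingBox(min, max);
        }

    The result is conservative (slightly larger than the true mesh bounds), but it follows the animation exactly and costs only 8 transforms per bone per frame.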


  • How would I be able to get a game over screen using the pause function?

    - by Joachim Velzel
    I am having problems with my snake game: when the snake collides with itself it draws a "game over" image in the background, but only while it is colliding with itself. I want it to behave like the pause function, so that as soon as the snake collides with itself it draws an image on the screen and stops the gameplay. And then, how would you be able to restart or quit the game? I just have this for the detection at the moment:

        if (snakeHeadRectangle.Intersects(snakeBodyRectangleArray[bodyNumber]))
        {
            spriteBatch.Draw(textureGameOver, gameOverPosition, Color.White);
        }

    Thanks
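    The usual fix is to latch a persistent game state instead of drawing only on the frame where the collision happens. A minimal sketch in C#/XNA (ResetGame is a hypothetical helper; the key bindings are illustrative):

        // Sketch: once the state flips to GameOver it stays there until the
        // player chooses to restart or quit.
        enum GameState { Playing, GameOver }
        GameState state = GameState.Playing;

        // In Update():
        if (state == GameState.Playing)
        {
            if (snakeHeadRectangle.Intersects(snakeBodyRectangleArray[bodyNumber]))
                state = GameState.GameOver; // latch it; stop updating the snake
        }
        else // GameState.GameOver: only listen for restart/quit input
        {
            if (Keyboard.GetState().IsKeyDown(Keys.Enter)) { ResetGame(); state = GameState.Playing; }
            if (Keyboard.GetState().IsKeyDown(Keys.Escape)) Exit();
        }

        // In Draw(): always draw the board; overlay the image while in GameOver.
        if (state == GameState.GameOver)
            spriteBatch.Draw(textureGameOver, gameOverPosition, Color.White);

    Because Update skips the snake logic while in GameOver, the gameplay freezes exactly like a pause, and restart/quit become simple input checks in that state.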


  • Displaying performance data per engine subsystem

    - by liortal
    Our game (Android based) traces how long it takes to do the world logic updates and how long it takes to render a frame to the device screen. These traces are collected every frame and displayed at a constant interval (currently every second). I've seen games where on-screen data for various engine subsystems is displayed along with the time each consumes, either as text or as horizontal colored bars. I am wondering how to implement such a feature.
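    One way to structure this: wrap each subsystem call in a timer and keep a smoothed average per name, then draw the numbers (or bars scaled to them) once per interval. A minimal sketch, in C# here for consistency with the other examples (the same pattern works with any monotonic timer; names are illustrative):

        // Sketch: per-subsystem timing with an exponential moving average.
        var timings = new Dictionary<string, double>();
        var clock = new System.Diagnostics.Stopwatch();

        void Measure(string name, Action subsystem)
        {
            clock.Restart();
            subsystem();
            double ms = clock.Elapsed.TotalMilliseconds;

            // Smooth the value so the on-screen display stays readable.
            double old;
            timings.TryGetValue(name, out old);
            timings[name] = old * 0.9 + ms * 0.1;
        }

        // Per frame:
        Measure("logic",  () => world.Update(dt));
        Measure("render", () => renderer.Draw());

        // At the display interval, draw each entry as text, or as a bar whose
        // width is proportional to timings[name].

    The bar variant is just the same data: pick a scale (say, the frame budget of 16.6 ms = full width) and draw one filled rectangle per entry.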


  • Playing repeated sound in Java

    - by Diogo Schneider
    I'm trying to play sounds in a Java game with the following code:

        AudioStream audioStream = new AudioStream(stream);
        AudioPlayer.player.start(audioStream);

    The stream variable is just an InputStream for the resource. The first time this code is called the sound is played as expected, but the second time the program just hangs - not even an exception is thrown. I don't know what's going on or how to prevent it. If I close either stream or audioStream after the above code, the program doesn't hang, but no sound is ever played at all. Any tips are welcome, thanks.


  • Trying to set up first DirectX project (don't understand the error)

    - by user1157885
    I've just started learning DirectX with the book "3D Game Programming with DirectX". I just finished setting up all the paths and adding the code to the project, which I think I did correctly, but I get this massive error which I don't really understand and which is hard to google. Could someone tell me what it means and how to fix it?

        Error 1 error TRK0002: Failed to execute command:
        ""C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Utilities\bin\x64\fxc.exe"
        /nologo /Emain /Fo "C:\Desktop\DirectX 11 Projects\box\Win32Project2\Debug\color.cso"
        /Od /Zi "....\Book Files\3DGameProg\DVD\Code\Chapter 6 Drawing in Direct3D\Box\FX\color.fx"".
        The handle is invalid.


  • GetData() error creating framebuffer

    - by Lelezeus
    I'm currently porting a game written in C# with the XNA library to Android with MonoGame. I have a Texture2D and I'm trying to get an array of uint this way:

        Texture2D textureDeform = game.Content.Load<Texture2D>("Texture/terrain");
        uint[] pixelDeformData = new uint[textureDeform.Width * textureDeform.Height];
        textureDeform.GetData(pixelDeformData, 0, textureDeform.Width * textureDeform.Height);

    I get the following exception:

        System.Exception: Error creating framebuffer: Zero
          at Microsoft.Xna.Framework.Graphics.Texture2D.GetTextureData (Int32 ThreadPriorityLevel) [0x00000] in :0

    I found that the problem is in private byte[] GetTextureData(int ThreadPriorityLevel), when creating the framebuffer:

        private byte[] GetTextureData(int ThreadPriorityLevel)
        {
            int framebufferId = -1;
            int renderBufferID = -1;
            GL.GenFramebuffers(1, ref framebufferId); // framebufferId is still -1; why can't it be created?
            GraphicsExtensions.CheckGLError();
            GL.BindFramebuffer(All.Framebuffer, framebufferId);
            GraphicsExtensions.CheckGLError();
            //renderBufferIDs = new int[currentRenderTargets];
            GL.GenRenderbuffers(1, ref renderBufferID);
            GraphicsExtensions.CheckGLError();
            // attach the texture to the FBO color attachment point
            GL.FramebufferTexture2D(All.Framebuffer, All.ColorAttachment0, All.Texture2D, this.glTexture, 0);
            GraphicsExtensions.CheckGLError();
            // create a renderbuffer object to store depth info
            GL.BindRenderbuffer(All.Renderbuffer, renderBufferID);
            GraphicsExtensions.CheckGLError();
            GL.RenderbufferStorage(All.Renderbuffer, All.DepthComponent24Oes, Width, Height);
            GraphicsExtensions.CheckGLError();
            // attach the renderbuffer to the depth attachment point
            GL.FramebufferRenderbuffer(All.Framebuffer, All.DepthAttachment, All.Renderbuffer, renderBufferID);
            GraphicsExtensions.CheckGLError();
            All status = GL.CheckFramebufferStatus(All.Framebuffer);
            if (status != All.FramebufferComplete)
                throw new Exception("Error creating framebuffer: " + status);
            ...
        }

    The framebufferId is still -1; it seems the framebuffer could not be generated, and I don't know why. Any help would be appreciated - thank you in advance.


  • Loading files during run time

    - by NDraskovic
    I made a content pipeline extension (using this tutorial) in an XNA 4.0 game. I altered some aspects so it serves my needs better, but the basic idea still applies. Now I want to go a step further and enable my game to be changed at run time. The file I am loading through my content pipeline extension is very simple; it only contains decimal numbers, so I want to enable the user to change that file at will and reload it while the game is running (without recompiling, as I have had to do so far). This file is a very simplified version of a level editor, meaning that it contains rows like:

        1 1,5 1,78 -3,6

    Here, the first number determines the object that will be drawn to the scene, and the other three numbers are the coordinates where that object will be placed. So, how can I change the file that contains these numbers so that the game loads it and redraws the scene accordingly? Thanks
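    A minimal sketch of loading such a file at run time with plain System.IO instead of the content pipeline (names are illustrative; it assumes one "id x y z" row per line with comma decimal separators, as above):

        struct LevelEntry
        {
            public int Id;
            public Vector3 Position;
        }

        List<LevelEntry> LoadLevel(string path)
        {
            var entries = new List<LevelEntry>();
            foreach (string line in System.IO.File.ReadAllLines(path))
            {
                string[] parts = line.Split(' ');
                if (parts.Length < 4) continue; // skip blank/malformed rows

                LevelEntry e;
                e.Id = int.Parse(parts[0]);
                // "1,5" -> 1.5f regardless of the machine's locale
                e.Position = new Vector3(
                    float.Parse(parts[1].Replace(',', '.'), System.Globalization.CultureInfo.InvariantCulture),
                    float.Parse(parts[2].Replace(',', '.'), System.Globalization.CultureInfo.InvariantCulture),
                    float.Parse(parts[3].Replace(',', '.'), System.Globalization.CultureInfo.InvariantCulture));
                entries.Add(e);
            }
            return entries;
        }

    Calling LoadLevel again (say, on an F5 key press) and rebuilding the scene from the returned list gives the reload-without-recompiling behaviour; the content pipeline version can stay as the shipping path.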


  • How do I access the "Deny" message from a Lidgren client?

    - by TJ Mott
    I'm using the Lidgren v3 network library for a UDP client/server networking model. On the server end, I'm initializing a NetServer object with the NetIncomingMessage.ConnectionApproval message type enabled. The client is able to connect successfully, and the first packet it sends is a login packet containing a username and password supplied by the user. The server receives that, does some black magic to authenticate, and everything works up to that point. If the login fails, the server calls NetIncomingMessage.SenderConnection.Deny("Invalid Login Credentials"). I want to know how to properly receive this deny message on the client. I am getting the message: it shows up with a message type of NetIncomingMessage.StatusChanged. But if I call ReadString on that message, I get a corrupted version of the string I passed to the Deny method on the server. The type of corruption varies - I've seen odd characters in there, but in every case it's truncated and way shorter than the string I entered. Any ideas? The official documentation is sparse on this topic. I could use pointers from anyone who has successfully used the Lidgren library and its Accept or Deny methods. Also, if I don't do any authentication and just Approve() the connection every time, everything works fine and I get reliable two-way UDP traffic. (And lastly, Stack Exchange said I don't have enough reputation to use the "Lidgren" tag.)
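    For what it's worth, a StatusChanged message in Lidgren carries a status byte before the reason string, so calling ReadString first starts reading mid-message - which matches the truncated/odd-character corruption described above. A sketch of the usual client-side handling (ShowLoginError is a hypothetical UI hook):

        // Sketch: read the status byte first, then the reason string.
        NetIncomingMessage msg;
        while ((msg = client.ReadMessage()) != null)
        {
            switch (msg.MessageType)
            {
                case NetIncomingMessageType.StatusChanged:
                    var status = (NetConnectionStatus)msg.ReadByte();
                    string reason = msg.ReadString(); // "Invalid Login Credentials"
                    if (status == NetConnectionStatus.Disconnected)
                        ShowLoginError(reason);
                    break;
            }
            client.Recycle(msg);
        }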


  • Tessellation Texture Coordinates

    - by Stuart Martin
    First, some info: I'm using DirectX 11 and C++, and I'm a fairly good programmer, but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have hit a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. When I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and finally into the pixel shader for use in rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as the texturing. Below is a copy of my hull shader, which I think does not work correctly because of the way I am passing the data through. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding;
        };

        struct HullInputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
        };

        struct ConstantOutputType
        {
            float edges[3] : SV_TessFactor;
            float inside : SV_InsideTessFactor;
        };

        struct HullOutputType
        {
            float3 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
            float3 tangent : TANGENT;
            float3 binormal : BINORMAL;
            float2 tex2 : TEXCOORD1;
            float4 depthPosition : TEXCOORD2;
        };

        ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
        {
            ConstantOutputType output;
            output.edges[0] = tessellationAmount;
            output.edges[1] = tessellationAmount;
            output.edges[2] = tessellationAmount;
            output.inside = tessellationAmount;
            return output;
        }

        [domain("tri")]
        [partitioning("integer")]
        [outputtopology("triangle_cw")]
        [outputcontrolpoints(3)]
        [patchconstantfunc("ColorPatchConstantFunction")]
        HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
        {
            HullOutputType output;
            output.position = patch[pointId].position;
            output.tex = patch[pointId].tex;
            output.tex2 = patch[pointId].tex2;
            output.normal = patch[pointId].normal;
            output.tangent = patch[pointId].tangent;
            output.binormal = patch[pointId].binormal;
            return output;
        }

    Edited to include the domain shader:

        [domain("tri")]
        PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
        {
            float3 vertexPosition;
            PixelInputType output;

            // Determine the position of the new vertex.
            vertexPosition = uvwCoord.x * patch[0].position
                           + uvwCoord.y * patch[1].position
                           + uvwCoord.z * patch[2].position;

            output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);
            output.depthPosition = output.position;

            output.tex = patch[0].tex;
            output.tex2 = patch[0].tex2;
            output.normal = patch[0].normal;
            output.tangent = patch[0].tangent;
            output.binormal = patch[0].binormal;

            return output;
        }
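    For reference, the flat patches are consistent with the last block of the domain shader taking every attribute from patch[0] alone: every generated vertex in a patch then shares one texture coordinate. The usual approach is to interpolate the attributes with the barycentric coordinates, exactly as the position already is - i.e. (a sketch of the arithmetic, not the asker's code; normals and tangents should be renormalized afterwards):

        tex(u, v, w) = u * patch[0].tex + v * patch[1].tex + w * patch[2].tex,   where (u, v, w) = uvwCoord

    and likewise for tex2, normal, tangent and binormal.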


  • DirectWrite Producing Strange Artifacts?

    - by smoth190
    I've written the basis of my UI system around Direct2D. I like it because it's fast and easy to use (even if I had to do some messy work to get it to work with DirectX 11). However, I notice that when using DirectWrite I get strange problems with my text: the "e" is a little screwed up, and overall it looks a little bumpy. This only happens with certain fonts in certain sizes, and with certain arrangements of letters. This particular example is Verdana in size-16 font. Can I fix this? It's pretty annoying to have to change all my words and fonts because of this problem.


  • Stuck at enemy movement

    - by Syed
    I am making a TD game in Unity. Initially I made all of my enemy movement frame-rate dependent. Say I had one grid point at -22.65 and the next at -21.1, diagrammatically:

        (-22.65) _________ (-21.1) _______ (-21.1 + 1.55) ......

    So the distance between two points on the x axis is 1.55, divided into 25 jumps, with each enemy jumping 0.062 every frame. On reaching the next grid point, the enemy finds its path again. All went fine until I got the requirement for fast-forward and pause features. I used Unity's timeScale property, but it won't work, as the jumps are frame dependent. I also tried doubling the enemy's speed when the fast-forward button is clicked at any time; the issue is that the enemy's jumps are now fewer and it fails to land exactly on the next grid point. Could someone suggest a solution to my problem? Do I need to change the enemy movement code to make it frame-rate independent? I need each enemy to land on specific grid points, and later I also need to slow down a single enemy's speed when a tower fires on it. Thanks
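    Yes - the frame-rate independent form is distance = speed * Time.deltaTime, and in Unity Time.deltaTime already respects Time.timeScale, so pause (timeScale = 0) and fast-forward (timeScale = 2) come for free, while a per-enemy multiplier handles tower slow effects. A minimal sketch (names are illustrative; NextGridPoint is a hypothetical pathfinding hook):

        using UnityEngine;

        public class EnemyMover : MonoBehaviour
        {
            public float speed = 1.55f;        // world units per second
            public float speedMultiplier = 1f; // towers can lower this per enemy
            public Vector3 target;             // next grid point

            void Update()
            {
                transform.position = Vector3.MoveTowards(
                    transform.position, target,
                    speed * speedMultiplier * Time.deltaTime);

                // MoveTowards never overshoots, so arrival at the grid point is exact.
                if (transform.position == target)
                    target = NextGridPoint();
            }

            Vector3 NextGridPoint() { /* pathfinding hook */ return target; }
        }

    Because MoveTowards clamps the step, the enemy always lands exactly on the grid point regardless of frame rate or time scale, which removes the "fails to reach the next point" problem entirely.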


  • Picture rendered from above and below using an Orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. The model I render is a simple "T" shape constructed from two cubes. The cubes have the same dimensions and the same Y (height) coordinate; see Figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except mirrored over the y axis). However, when I render using a very low-resolution render target (25x25), the position (in pixels) of the "T" is different when rendered from above than when rendered from below - see Figures 2 and 3. The pink blocks are not part of the original rendering; I've added them so you can easily count/see the differences. (Figure 2: the T rendered from above. Figure 3: the T rendered from below.) This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same "up" vector for both of my cameras, my bias only shows on the x axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras. Here I initialize the camera and compute what I believe to be half a pixel; boundsDimX and boundsDimZ are the dimensions of a slightly enlarged bounding box around the model, which I also use as the width and height of the orthographic camera's view volume:

        Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
        Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0,
                                        boundsDimY / (float)renderTarget.Height) * 0.5f;

    This is the code where I set the camera position and camera look-ats:

        // Position camera
        if (downwards)
        {
            float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X, // possibly adjust by half a pixel?
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
        }
        else
        {
            float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X,
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
        }

    Main question: how do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)? (Figure 1: the original model rendered in Blender.)
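    One thing worth checking (a sketch of the arithmetic, using the question's own variables): a half-texel expressed in world units for this camera is

        halfPixel.x = 0.5 * boundsDimX / renderTarget.Width
        halfPixel.z = 0.5 * boundsDimZ / renderTarget.Height

    and because the two cameras look along opposite directions with the same up vector, their screen-space x axes point in opposite world directions. Any texel-center bias (and therefore the compensating shift) lands on opposite sides for the two renders, so shifting both cameras the same way in world space cannot cancel it - the downward and upward cameras would need the half-texel shift applied with opposite signs on x.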


  • Z Order in 2D with orthographic projection and texture atlas

    - by Carbon Crystal
    I am working on a 2D game in OpenGL ES and have a question about z-order together with a texture atlas. I am using an orthographic projection because I want pixel-perfect rendering of 2D sprites. However, from what I can determine, the draw order is really the only thing that determines which textures (sprites) appear above or below their neighbours. That is, the "z-index" is a function of the order in which the textures are drawn, as opposed to the z coordinate on the vertex array being drawn. So: I have a texture atlas to save binding multiple textures for each draw call, but this immediately creates a problem if there is more than one atlas in play. If I need to draw textures from more than one atlas (typically the case if I have too many sprites to fit in a single atlas of reasonable size), then I can't maintain a "draw order" across atlases unless I want to bind/unbind the atlas textures more than once - which kind of defeats the purpose. Does anyone have any clues as to the best approach here? Currently I'm running under the assumption that I will have to declare a few fixed "depths" (e.g. foreground, background, etc.) in my 2D scene and assume that the z-order for sprites at a given depth is the same. Then I can have as many atlases as I need at each depth and simply draw the depths in order (along with their associated atlases). I'd love to hear what other people are doing.
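    The fixed-depths idea generalizes to sorting queued draws by (layer, atlas): layers still stack correctly, and within a layer the draws are grouped by atlas so each atlas is bound at most once per layer. A minimal sketch in C# (BindAtlas and DrawSprite are hypothetical hooks):

        // Sketch: sort by layer first, atlas second, then flush in order.
        struct SpriteDraw
        {
            public int Layer;    // 0 = background; higher draws later (on top)
            public int AtlasId;  // which texture atlas the sprite's region lives in
            public int SpriteId; // region within that atlas
        }

        void Flush(List<SpriteDraw> sprites)
        {
            sprites.Sort((a, b) => a.Layer != b.Layer
                ? a.Layer.CompareTo(b.Layer)
                : a.AtlasId.CompareTo(b.AtlasId));

            int bound = -1;
            foreach (SpriteDraw s in sprites)
            {
                if (s.AtlasId != bound)
                {
                    BindAtlas(s.AtlasId); // rebind only when the atlas changes
                    bound = s.AtlasId;
                }
                DrawSprite(s.SpriteId);
            }
        }

    Worst case this is one bind per atlas per layer, which is usually a handful per frame rather than one per sprite.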


  • Best practices with vertices in OpenGL

    - by Darestium
    What is the best practice for storing vertex data in OpenGL? I.e.:

        struct VertexColored
        {
        public:
            GLfloat position[];
            GLfloat normal[];
            byte colours[];
        };

        class Terrian
        {
        private:
            GLuint vbo_vertices;
            GLuint vbo_normals;
            GLuint vbo_colors;
            GLuint ibo_elements;
            VertexColored vertices[];
        };

    or having them stored separately in the required class, like:

        class Terrian
        {
        private:
            GLfloat vertices[];
            GLfloat normals[];
            GLfloat colors[];
            GLuint vbo_vertices;
            GLuint vbo_normals;
            GLuint vbo_colors;
            GLuint ibo_elements;
        };
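    Interleaving the attributes in one struct and one VBO is generally preferred for cache locality, since all of a vertex's attributes are fetched together. A sketch of an interleaved layout (shown in C# for consistency with the other examples here; in C++ the packed struct and the per-attribute offset/stride pairs look the same):

        using System.Runtime.InteropServices;

        // Sketch: one packed struct per vertex, one buffer for all vertices.
        // Position, normal and colour sit in adjacent bytes, so fetching a
        // vertex touches one contiguous region instead of three buffers.
        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        struct VertexColored
        {
            public float PosX, PosY, PosZ;    // byte offset 0
            public float NormX, NormY, NormZ; // byte offset 12
            public byte R, G, B, A;           // byte offset 24; total stride = 28
        }

        // Each attribute pointer is then described by (offset, stride) against
        // the single VBO: position -> (0, 28), normal -> (12, 28), colour -> (24, 28).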


  • Seeking a C/C++ OBJ geometry reader/writer that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation; i.e., read geometry, immediately write it, and a diff of the source OBJ and the one just written will be identical. Every OBJ-writing utility I've been able to find online fails this test. I am writing small command-line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nat Robbin's GLUT library includes the GLM library, which both converts quads to triangles and reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! The redundant vertices are there on purpose, to stay cache-friendly on our weird little low-memory mobile devices.
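    Given how strict the round-trip requirement is, one workable approach is to skip a geometry library entirely: keep every source line verbatim and parse only the records the tool needs, rewriting just the lines it deliberately changes. A minimal sketch (in C# for consistency with the other examples here; the file names are illustrative):

        using System;
        using System.Globalization;
        using System.IO;

        // Sketch: untouched lines round-trip byte-for-byte; only "v " records
        // that the tool actually modifies get reformatted.
        var inv = CultureInfo.InvariantCulture;
        string[] lines = File.ReadAllLines("model.obj");

        for (int i = 0; i < lines.Length; i++)
        {
            if (!lines[i].StartsWith("v ")) continue; // leave f/vt/vn/comments untouched

            string[] p = lines[i].Split((char[])null, StringSplitOptions.RemoveEmptyEntries);
            float x = float.Parse(p[1], inv);
            float y = float.Parse(p[2], inv);
            float z = float.Parse(p[3], inv);

            // ... transform (x, y, z) here; only reassign lines[i] for vertices
            // you actually changed, so everything else keeps its exact text ...
            lines[i] = string.Format(inv, "v {0} {1} {2}", x, y, z);
        }

        File.WriteAllLines("model_out.obj", lines);

    Because the face, normal and texcoord lines are never re-serialized, the topology, ordering and redundant vertices survive exactly as authored.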


  • Blur gets displaced compared to original image

    - by user1294203
    I have implemented SSAO, and I'm using a blur step to smooth it out. The problem is that the blurred texture is slightly displaced compared to the original. I'm blurring using a 4x4 kernel, since that was my noise kernel in the SSAO pass. The following is the blur shader:

        float result = 0.0;
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
                result += texture(aoSampler, TexCoord + offset).r;
            }
        }
        out_AO = vec4(vec3(0.0), result * 0.0625);

    where TEXEL_SIZE is one over my window resolution. I was thinking this was an error based on how OpenGL counts texel centers, so I tried displacing the texture coordinate by 0.5 * TEXEL_SIZE, but there was still a slight displacement. The texture input to my blur shader has these wrap parameters:

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

    When I tell the blur shader to just output the value of the pixel, the result is not displaced, so it must have something to do with how the neighbouring pixels are sampled. Any thoughts?
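    A note on the arithmetic, which accounts for a fixed displacement: the loop samples offsets of 0, 1, 2 and 3 texels in +x and +y only, so the average it computes is centered 1.5 texels up and to the right of the pixel being shaded. Centering the window instead means sampling at

        offset_i = (i - 1.5) * TEXEL_SIZE,   i in {0, 1, 2, 3}   (i.e. -1.5, -0.5, +0.5, +1.5 texels)

    in each axis, which removes the shift while keeping the same 4x4 footprint; a half-texel nudge alone cannot fix a 1.5-texel bias.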

