Search Results

  • Rotating a view of a chunked 2d tilemap

    - by Danie Clawson
    I'm working on a top-down (oblique) tile-based engine. I would like the tiles to have a definable height in the world, with characters being occluded by them, etc. This has led to a desire to be able to "rotate" the view of the world, even though I'm using all hand-drawn graphics and blitting. Therefore, I need to rotate the actual world itself, or change how the camera traverses these arrays. How can, or should, I create individual rotations of 90 degrees when I have multi-dimensional arrays? Is it faster to actually rotate the array, to access it differently, or to create pre-computed accessor arrays, something like how my chunks work? How can I rotate an individual chunk, or set of chunks? Currently I establish my tile grid like this (tile height not included):

        function Surface(WIDTH, HEIGHT) {
            WIDTH = Math.max(WIDTH - (WIDTH % TPC), TPC);
            HEIGHT = Math.max(HEIGHT - (HEIGHT % TPC), TPC);
            this.tiles = [];
            this.chunks = [];
            // Establish tiles
            for (var x = 0; x < WIDTH; x++) {
                var col = [],
                    ch_x = Math.floor(x / TPC);
                if (!this.chunks[ch_x]) this.chunks.push([]);
                for (var y = 0; y < HEIGHT; y++) {
                    var tile = new Tile(x, y),
                        ch_y = Math.floor(y / TPC);
                    if (!this.chunks[ch_x][ch_y]) this.chunks[ch_x].push([]);
                    this.chunks[ch_x][ch_y].push(tile);
                    col.push(tile);
                }
                this.tiles.push(col);
            }
        };

    Even some basic advice on my data structure would be much appreciated.
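
    For the access question, a hedged C# sketch (Tile and a square n by n grid are stand-ins for the asker's structures): leave the array alone and remap indices per quarter turn, which costs nothing per frame and works per chunk as well as per map.

        // View rotation by index remapping on a square n x n grid,
        // x = column, y = row, quarter turns clockwise.
        Tile GetRotated(Tile[,] tiles, int n, int x, int y, int turns)
        {
            switch (turns & 3)
            {
                case 1:  return tiles[y, n - 1 - x];          // 90 degrees CW
                case 2:  return tiles[n - 1 - x, n - 1 - y];  // 180 degrees
                case 3:  return tiles[n - 1 - y, x];          // 270 degrees
                default: return tiles[x, y];
            }
        }

    Physically rotating the array only pays off if reads vastly outnumber rotations; the remap keeps chunk storage untouched.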

  • How to fix bad Collada produced by FBX?

    - by David
    I tried to use the FBX SDK (2011.3.1) to load FBX files and save them as Collada files, in order to be able to import FBX files into Panda3D. Unfortunately the resulting Collada files are not usable, for several reasons, among them:

    - There's a Maya-specific extra technique in the diffuse element:

        <diffuse>
          <texture texture="Map__2-image" texcoord="CHANNEL0">
            <extra>
              <technique profile="MAYA">
                <wrapU sid="wrapU0">TRUE</wrapU>
                <wrapV sid="wrapV0">TRUE</wrapV>
                <blend_mode>ADD</blend_mode>
              </technique>
            </extra>
          </texture>
        </diffuse>

    - It assigns a texcoord channel name that isn't referenced anywhere else in the file (in the previous code sample, no geometry uses "CHANNEL0"...)
    - Every polygon is exported twice, a first time with a basic material (only diffuse color, specular color, etc.) and a second time with a textured material -- this doubles the number of polygons of each model for no good reason.

    Anyway, the resulting Collada file cannot be opened correctly with either OpenCOLLADA or Panda3D's "dae2egg". Does anyone have experience with how to "fix" it and make it understandable by common and well-reputed Collada importers such as OpenCOLLADA?
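
    One hedged workaround is to post-process the exported .dae and strip the vendor blocks before import; a minimal C# sketch with System.Xml.Linq (file names are placeholders, and this addresses only the MAYA extras, not the duplicated polygons):

        using System.Linq;
        using System.Xml.Linq;

        class ColladaCleaner
        {
            static void Main()
            {
                XNamespace ns = "http://www.collada.org/2005/11/COLLADASchema";
                var doc = XDocument.Load("model.dae");   // placeholder path
                // Remove every <extra> whose <technique> carries profile="MAYA"
                doc.Descendants(ns + "extra")
                   .Where(e => e.Elements(ns + "technique")
                                .Any(t => (string)t.Attribute("profile") == "MAYA"))
                   .ToList()
                   .ForEach(e => e.Remove());
                doc.Save("model_clean.dae");
            }
        }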

  • Pygame surfaces and their Rects

    - by Jaka Novak
    I am trying to understand how pygame surfaces work. I am confused about the Rect position of a Surface object. If I blit a surface onto the screen at some position, the Surface is drawn at the right position, but the Rect of the surface is still at position (0, 0)... I tried writing my own surface class with a new rect, but I am not sure that is the right solution. My goal is to be able to move a surface like an image, with rect.move() or something like that. If there is any solution to do that I would be happy to read it. Thanks for the answer and for taking the time to read this awful English. In case it helps, I wrote some code to illustrate my problem (run it first, then uncomment the two lines of code and run again to see the difference):

        import pygame
        from pygame.locals import *

        class SurfaceR(pygame.Surface):
            def __init__(self, size, position):
                pygame.Surface.__init__(self, size)
                self.rect = pygame.Rect(position, size)
                self.position = position
                self.size = size

            def get_rect(self):
                return self.rect

        def main():
            pygame.init()
            screen = pygame.display.set_mode((640, 480))
            pygame.display.set_caption("Screen!?")
            clock = pygame.time.Clock()
            fps = 30
            white = (255, 255, 255)
            red = (255, 0, 0)
            green = (0, 255, 0)
            blue = (0, 0, 255)
            surface = pygame.Surface((70, 200))
            surface.fill(red)
            surface_re = SurfaceR((300, 50), (100, 300))
            surface_re.fill(blue)
            while True:
                for event in pygame.event.get():
                    if event.type == QUIT:
                        return 0
                screen.blit(surface, (100, 50))
                screen.blit(surface_re, surface_re.position)
                #pygame.draw.rect(screen, white, surface.get_rect())
                #pygame.draw.rect(screen, white, surface_re.get_rect())
                pygame.display.update()
                clock.tick(fps)

        if __name__ == "__main__":
            main()

  • Extract derived 3D scaling from a 3D Sprite to set to a 2D billboard

    - by Bill Kotsias
    I am trying to get the derived position and scaling of a 3D Sprite and set them on a 2D Sprite. I have managed to do the first part like this:

        var p:Point = sprite3d.local3DToGlobal(new Vector3D(0, 0, 0));
        billboard.x = p.x;
        billboard.y = p.y;

    But I can't get the scaling part right. I am trying this:

        var mat:Matrix3D = sprite3d.transform.getRelativeMatrix3D(stage); // get derived matrix(?)
        var scaleV:Vector3D = mat.decompose()[2]; // get scaling vector from derived matrix
        var scale:Number = scaleV.length;
        billboard.scaleX = scale;
        billboard.scaleY = scale;

    ...but the result is apparently wrong. P.S. One might ask what I am trying to achieve: I am trying to create "billboard" 3D sprites, i.e. sprites which are affected by all 3D transformations except rotations, so that they always face the "camera".
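
    One hedged observation: for a uniform scale s the decomposed scale vector is (s, s, s), and its length is s times sqrt(3), which inflates the billboard by roughly 73%. Using a single component (or the average) instead of the vector's length avoids that; equivalently, per-axis scale can be read as the lengths of the matrix basis rows. A C# sketch with System.Numerics standing in for the Flash Matrix3D:

        using System;
        using System.Numerics;

        static class ScaleExtraction
        {
            // The scale factors of an affine matrix are the lengths of its
            // basis rows (System.Numerics uses row vectors).
            public static Vector3 GetScale(in Matrix4x4 m)
            {
                float sx = new Vector3(m.M11, m.M12, m.M13).Length();
                float sy = new Vector3(m.M21, m.M22, m.M23).Length();
                float sz = new Vector3(m.M31, m.M32, m.M33).Length();
                return new Vector3(sx, sy, sz);
            }
        }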

  • How to get GameElements (RigidBody) size in Unity

    - by Shivan Dragon
    I've made a prefab consisting of a Cube which I've first scaled to more resemble a brick. There's also a Rigidbody added to the cube (in the prefab). Now I want to use that prefab in a C# script to make a wall out of multiple bricks. My question is: how can I access the dimensions of my brick (the width, height and z-dimension size) so that in my script I can place bricks one next to the other (and then one on top of the other)? I've looked at the documentation for GameObject and Rigidbody but I can't find anything helpful. Just for reference, my script so far is:

        public GameObject brick;

        void Start () {
            Instantiate(this.brick, new Vector3(0.01326297f, -30.07855f, 100f), Quaternion.identity);
            // int brickWidth = this.brick.????;
        }
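
    One hedged option, assuming the prefab's cube has a Renderer (a scaled default Cube does): measure an instance's world-space bounds and step by that size. A sketch with hypothetical names:

        using UnityEngine;

        public class Wall : MonoBehaviour
        {
            public GameObject brick;

            void Start()
            {
                // Instantiate one brick and measure it; bounds are in world units.
                GameObject first = (GameObject)Instantiate(brick, Vector3.zero, Quaternion.identity);
                Vector3 size = first.GetComponent<Renderer>().bounds.size;

                // Lay a row of ten bricks side by side along x (a sketch).
                for (int i = 1; i < 10; i++)
                    Instantiate(brick, new Vector3(i * size.x, 0f, 0f), Quaternion.identity);
            }
        }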

  • creating bounding box list

    - by Christian Frantz
    I'm trying to create a list of bounding boxes, one for each cube drawn, so I can intersect the boxes with a ray cast from my mouse position, but I have no idea how. I've created a list that stores the boxes, but how do I get the values for each box?

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                cubes.Add(new Vector3(x, map[x, z], z), Matrix.Identity, grass);
                boxList.Add(something here);
            }
        }

        public Cube(GraphicsDevice graphicsDevice)
        {
            device = graphicsDevice;
            var vertices = new List<VertexPositionTexture>();
            BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1));
            BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1));
            BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0));
            BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0));
            BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1));
            BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0));
            cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration,
                vertices.Count, BufferUsage.WriteOnly);
            cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        }

    There aren't any clearly defined variables for the bounds of each cube created, so where do I create the bounding box from?
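
    Since each cube is built on unit faces starting at its position, its bounds run from that position to position + (1, 1, 1). A hedged sketch reusing the asker's loop (XNA types; cubes, map and grass are the asker's variables):

        // Each cube occupies the unit cell starting at its position.
        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                Vector3 pos = new Vector3(x, map[x, z], z);
                cubes.Add(pos, Matrix.Identity, grass);
                boxList.Add(new BoundingBox(pos, pos + Vector3.One));
            }
        }

        // Later, picking: a Ray built from the mouse can be tested per box.
        // float? hit = ray.Intersects(boxList[i]);   // null means no hit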

  • OpenGL - Calculating camera view matrix

    - by Karle
    Problem: I am calculating the model, view and projection matrices independently, to be used in my shader as follows:

        gl_Position = projection * view * model * vec4(in_Position, 1.0);

    When I try to calculate my camera's view matrix, the Z axis is flipped and my camera seems like it is looking backwards. My program is written in C# using the OpenTK library.

    Translation (working): I've created a test scene, and from my understanding of the OpenGL coordinate system the objects are positioned correctly. The model matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(modelPosition);
        Matrix4 model = translation;

    The view matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(-cameraPosition);
        Matrix4 view = translation;

    Rotation (not working): I now want to create the camera's rotation matrix. To do this I use the camera's right, up and forward vectors:

        // Hard-coded example orientation:
        // normally calculated from up and forward,
        // similar to a look-at camera.
        Vector3 r = Vector3.UnitX;
        Vector3 u = Vector3.UnitY;
        Vector3 f = -Vector3.UnitZ;
        Matrix4 rot = new Matrix4(
            r.X, r.Y, r.Z, 0,
            u.X, u.Y, u.Z, 0,
            f.X, f.Y, f.Z, 0,
            0.0f, 0.0f, 0.0f, 1.0f);

    I know that multiplying by the identity matrix would produce no rotation; this is clearly not the identity matrix and therefore will apply some rotation. I thought that because this orientation is aligned with the OpenGL coordinate system it should produce no rotation. Is this the wrong way to calculate the rotation matrix? I then create my view matrix as:

        // OpenTK is row-major, so the order of operations is reversed:
        Matrix4 view = translation * rot;

    Rotation almost works now, but the -Z/+Z axis has been flipped, with the green cube now appearing closer to the camera. It seems like the camera is looking backwards, especially if I move it around. My goal is to store the position and orientation of all objects (including the camera) as:

        Vector3 position;
        Vector3 up;
        Vector3 forward;

    Apologies for writing such a long question, and thank you in advance. I've tried following tutorials/guides from many sites but I keep ending up with something wrong.

    Edit -- projection matrix set-up:

        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
            (float)(0.5 * Math.PI),
            (float)display.Width / display.Height,
            0.1f, 1000.0f);
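
    A hedged observation: the rotation part of a view matrix is the inverse of the camera's world orientation, and for a pure rotation the inverse is the transpose; also, a right-handed view conventionally uses the backward vector (-f) as its third basis, which is one source of the Z flip. A sketch of both options in OpenTK (names follow the question):

        // Option 1: build the rotation transposed (basis vectors as columns),
        // using backward = -f as the third basis for a right-handed view.
        Vector3 b = -f;
        Matrix4 rot = new Matrix4(
            r.X, u.X, b.X, 0f,
            r.Y, u.Y, b.Y, 0f,
            r.Z, u.Z, b.Z, 0f,
            0f,  0f,  0f,  1f);
        Matrix4 view = Matrix4.CreateTranslation(-cameraPosition) * rot;

        // Option 2: let OpenTK derive the whole thing from position/forward/up.
        Matrix4 view2 = Matrix4.LookAt(cameraPosition, cameraPosition + f, u);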

  • How to label a cuboid?

    - by usha
    Hi, this is how my 3D cuboid looks; I have attached the complete code. I want to label this cuboid with different names across its sides. How is this possible using OpenGL on Android?

        public class MyGLRenderer implements Renderer {
            Context context;
            Cuboid rect;
            private float mCubeRotation;

            public MyGLRenderer(Context context) {
                rect = new Cuboid();
                this.context = context;
            }

            public void onDrawFrame(GL10 gl) {
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                gl.glLoadIdentity();                          // Reset the model-view matrix
                gl.glTranslatef(0.2f, 0.0f, -8.0f);           // Translate right and into the screen
                gl.glScalef(0.8f, 0.8f, 0.8f);                // Scale down
                gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f); // Rotate about the axis (1,1,1)
                rect.draw(gl);
                mCubeRotation -= 0.15f;
            }

            public void onSurfaceChanged(GL10 gl, int width, int height) {
                if (height == 0) height = 1;                  // Prevent divide by zero
                float aspect = (float) width / height;
                // Set the viewport (display area) to cover the entire window
                gl.glViewport(0, 0, width, height);
                // Set up perspective projection, with aspect ratio matching the viewport
                gl.glMatrixMode(GL10.GL_PROJECTION);
                gl.glLoadIdentity();
                GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f);
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glLoadIdentity();
            }

            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);      // Clear color: black
                gl.glClearDepthf(1.0f);                       // Depth clear-value: farthest
                gl.glEnable(GL10.GL_DEPTH_TEST);              // Depth buffer for hidden surface removal
                gl.glDepthFunc(GL10.GL_LEQUAL);
                gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
                gl.glShadeModel(GL10.GL_SMOOTH);              // Smooth color shading
                gl.glDisable(GL10.GL_DITHER);                 // Disable dithering for performance
            }
        }

        public class Cuboid {
            private FloatBuffer mVertexBuffer;
            private FloatBuffer mColorBuffer;
            private ByteBuffer mIndexBuffer;

            private float vertices[] = {   // width, height, depth
                -2.5f, -1.0f, -1.0f,   1.0f, -1.0f, -1.0f,
                 1.0f,  1.0f, -1.0f,  -2.5f,  1.0f, -1.0f,
                -2.5f, -1.0f,  1.0f,   1.0f, -1.0f,  1.0f,
                 1.0f,  1.0f,  1.0f,  -2.5f,  1.0f,  1.0f
            };
            private float colors[] = {     // R, G, B, A color per vertex
                0.0f, 1.0f, 0.0f, 1.0f,   0.0f, 1.0f, 0.0f, 1.0f,
                1.0f, 0.5f, 0.0f, 1.0f,   1.0f, 0.5f, 0.0f, 1.0f,
                1.0f, 0.0f, 0.0f, 1.0f,   1.0f, 0.0f, 0.0f, 1.0f,
                0.0f, 0.0f, 1.0f, 1.0f,   1.0f, 0.0f, 1.0f, 1.0f
            };
            private byte indices[] = {     // vertices 0-7 combined into faces
                0, 4, 5, 0, 5, 1,   1, 5, 6, 1, 6, 2,
                2, 6, 7, 2, 7, 3,   3, 7, 4, 3, 4, 0,
                4, 7, 6, 4, 6, 5,   3, 0, 1, 3, 1, 2
            };

            public Cuboid() {
                ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                mVertexBuffer = byteBuf.asFloatBuffer();
                mVertexBuffer.put(vertices);
                mVertexBuffer.position(0);
                byteBuf = ByteBuffer.allocateDirect(colors.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                mColorBuffer = byteBuf.asFloatBuffer();
                mColorBuffer.put(colors);
                mColorBuffer.position(0);
                mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
                mIndexBuffer.put(indices);
                mIndexBuffer.position(0);
            }

            public void draw(GL10 gl) {
                gl.glFrontFace(GL10.GL_CW);
                gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
                gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
                gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
            }
        }

        public class Draw3drect extends Activity {
            private GLSurfaceView glView;   // Use GLSurfaceView

            // Callback when the activity is started, to initialize the view
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                glView = new GLSurfaceView(this);           // Allocate a GLSurfaceView
                glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer
                this.setContentView(glView);
            }

            // Callback when the activity is going into the background
            @Override
            protected void onPause() {
                super.onPause();
                glView.onPause();
            }

            // Callback after onPause()
            @Override
            protected void onResume() {
                super.onResume();
                glView.onResume();
            }
        }

  • Trouble with UV Mapping Blender => Unity 3

    - by Lea Hayes
    For some reason I am getting nasty grey edges around the edges of rendered 3D models that are not present in Blender. I seem to be able to solve the problem by reducing the size of the UV coordinates within the part of the texture that is to be mapped. But this means that:

    - I am wasting valuable texture space
    - I lose accuracy in drawn UV maps

    Could I be doing something wrong, perhaps a setting in Unity that needs changing? I have watched countless tutorials which demonstrate Blender's default generated UV coordinates with "Texture Paint", perfectly aligned in Unity. Here is an illustration of the problem:

    - Left: approximately 15 pixels of margin on each side of the UV coordinates
    - Right: approximately 3 pixels of margin on each side of the UV coordinates

    Note: the texture image resolution is 1024x1024.
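
    Grey fringes at UV island borders are commonly texture bleeding: bilinear filtering and mipmaps sample texels just outside the island, and the default Repeat wrap can even pull from the opposite edge. A hedged Unity sketch of the sampler-side mitigations (padding the islands in the texture itself remains the robust fix):

        using UnityEngine;

        public class TextureFixExample : MonoBehaviour
        {
            public Texture2D modelTexture;   // assumed assigned in the Inspector

            void Start()
            {
                // Stop sampling from wrapping around to the far edge,
                // and keep filtering simple while diagnosing the fringe.
                modelTexture.wrapMode = TextureWrapMode.Clamp;
                modelTexture.filterMode = FilterMode.Bilinear;
            }
        }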

  • Cocos2d-x 3.0 animation frame by frame

    - by Narek
    As I understand it, animations are actions. Now I need to play an animation frame by frame. Say I have an animation of N frames, where each frame should be played after a delay of t. I want to play the animation frame by frame, with each frame advancing the animation's state. How can I do this? And what about playing actions frame by frame, advancing their state in general? I ask because I use an ECS, and I deal with frames. P.S. I want to do something like this:

        Action * a = MoveTo(initialPoint, finalPoint, durationOfAnimation);
        a->play(0.001 seconds);
        a->play(0.003 seconds);
        a->play(0.02 seconds);
        a->play(0.67 seconds);
        a->play(0.06 seconds);

    And see the animation.

  • Unity: Spin wheels to move vehicle

    - by Paul Manta
    I am just getting started with Unity and I'd like to ask a question. If I have a "Vehicle" object that has two children, "FrontWheel" and "BackWheel" (both wheels are cylinders), how should I set everything up such that I can move the entire vehicle by turning its wheels? When I apply a torque to "FrontWheel", the vehicle starts to move, but instead of the whole thing moving together, the chassis rolls on the cylinders and eventually falls off. How can I prevent it from doing that?
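
    A hedged sketch of one common setup, assuming Rigidbodies on both chassis and wheels: hinge each wheel to the chassis about its spin axis and drive the hinge motor, so the wheels and body stay coupled instead of the chassis rolling off.

        using UnityEngine;

        public class WheelDrive : MonoBehaviour
        {
            public Rigidbody chassis;        // the vehicle body
            public float targetVelocity = 300f;
            public float motorForce = 50f;

            void Start()
            {
                // The wheel this script sits on gets hinged to the chassis.
                HingeJoint hinge = gameObject.AddComponent<HingeJoint>();
                hinge.connectedBody = chassis;
                hinge.axis = Vector3.right;  // spin axis, assumed local X

                JointMotor motor = hinge.motor;
                motor.targetVelocity = targetVelocity;
                motor.force = motorForce;
                hinge.motor = motor;
                hinge.useMotor = true;
            }
        }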

  • extrapolating object state based on updates

    - by user494461
    I have a networked multi-user collaborative application. To maintain a consistent virtual world, I send updates for objects from a master peer to a guest peer. The update state contains the x, y, z coordinates of the object's center and its rotation matrix (the CHAI3D API uses a 3x3 matrix), at a 30 Hz frequency. I want to reduce this update rate, with a predictor running on both peers: when the predicted value drifts outside, say, a 10% error compared to the master peer's object's actual state, the master peer triggers a state update. For position I send velocity along with the position updates, so the guest peer can extrapolate position. Like velocity for position, what parameter should I use for rotation extrapolation?
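
    The rotational analogue of velocity is angular velocity. A hedged sketch (System.Numerics; assumes the master sends omega as an axis-times-speed vector in rad/s, alongside the orientation converted from the 3x3 matrix to a quaternion):

        using System;
        using System.Numerics;

        static class RotationExtrapolation
        {
            // Advance orientation q by angular velocity w (rad/s) over dt
            // seconds: rotate by angle |w|*dt about axis w/|w|.
            public static Quaternion Extrapolate(Quaternion q, Vector3 w, float dt)
            {
                float angle = w.Length() * dt;
                if (angle < 1e-8f) return q;          // effectively no spin
                Vector3 axis = Vector3.Normalize(w);
                Quaternion step = Quaternion.CreateFromAxisAngle(axis, angle);
                // Note: quaternion multiplication order differs between
                // conventions; flip the operands if rotation drifts wrongly.
                return Quaternion.Normalize(step * q);
            }
        }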

  • Circular Bullet Spread not Even

    - by SoulBeaver
    I'm creating a bullet shooter much in the style of Touhou. Right now I want a very simple circular shot to be fired from the enemy. As you can see in the picture, the spacing is very uneven, which isn't very good if you want to survive. The code I'm using is this:

        private function shoot() : void
        {
            const BULLETS_PER_WAVE : int = 72;
            var interval : Number = BULLETS_PER_WAVE / 360;

            for (var i : int = 0; i < BULLETS_PER_WAVE; ++i)
            {
                var xSpeed : Number = GameConstants.BULLET_NORMAL_SPEED_X * Math.sin(i * interval);
                var ySpeed : Number = GameConstants.BULLET_NORMAL_SPEED_Y * Math.cos(i * interval);
                BulletFactory.createNormalBullet(bulletColor_, alice_.center, xSpeed, ySpeed);
            }
            canShoot_ = false;
            cooldownTimer_.start();
        }

    I imagine my mistake is in the sin/cos functions, but I'm not entirely sure what's wrong.
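
    The angle step is the likely culprit: BULLETS_PER_WAVE / 360 works out to 0.2, and Math.sin/Math.cos take radians anyway, so the 72 bullets sweep i * 0.2 rad (about 2.3 uneven turns) rather than one even circle. An even circle steps by 2*pi divided by the bullet count. A C# sketch of the corrected loop (bulletSpeed, center and SpawnBullet are hypothetical stand-ins):

        // Corrected spread: 72 bullets stepped by 2*pi / 72 radians.
        void Shoot()
        {
            const int bulletsPerWave = 72;
            double step = 2.0 * Math.PI / bulletsPerWave;

            for (int i = 0; i < bulletsPerWave; i++)
            {
                double angle = i * step;
                double xSpeed = bulletSpeed * Math.Sin(angle);
                double ySpeed = bulletSpeed * Math.Cos(angle);
                SpawnBullet(center, xSpeed, ySpeed);   // hypothetical helper
            }
        }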

  • Billboarding restricted to an axis (cylindrical)

    - by user8709
    I have successfully created a GLSL shader for a billboarding effect. I want to tweak this to restrict the billboarding to an arbitrary axis, i.e. a billboarded quad rotates itself only about the y-axis. I use the y-axis as an example, but essentially I would like this to be an arbitrary axis. Can anyone show me how to modify my existing shader below, or, if I need to start from scratch, point me towards some resources that could be helpful?

        precision mediump float;

        uniform mat4 u_modelViewProjectionMat;
        uniform mat4 u_modelMat;
        uniform mat4 u_viewTransposeMat;
        uniform vec3 u_axis; // <-- !!! the arbitrary axis to restrict rotation around

        attribute vec3 a_position0;
        attribute vec2 a_texCoord0;

        varying vec2 v_texCoord0;

        void main()
        {
            vec3 pos = (a_position0.x * u_viewTransposeMat[0]
                      + a_position0.y * u_viewTransposeMat[1]).xyz;
            vec4 position = vec4(pos, 1.0);
            v_texCoord0 = a_texCoord0;
            gl_Position = u_modelViewProjectionMat * position;
        }
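
    The usual construction for an axis-constrained ("cylindrical") billboard: keep the constrained axis as the quad's up vector and build the right vector from the cross product of that axis and the to-camera direction. A hedged sketch of the basis math on the CPU side (System.Numerics; camPos, quadPos and axis are assumptions):

        using System.Numerics;

        static class CylindricalBillboard
        {
            // Right/up basis for a quad at quadPos that rotates only
            // around "axis" while facing camPos as best it can.
            public static (Vector3 right, Vector3 up) Basis(
                Vector3 quadPos, Vector3 camPos, Vector3 axis)
            {
                Vector3 up = Vector3.Normalize(axis);
                Vector3 toCam = Vector3.Normalize(camPos - quadPos);
                Vector3 right = Vector3.Cross(up, toCam);
                // Degenerate when the camera sits along the axis; fall back.
                if (right.LengthSquared() < 1e-8f) right = Vector3.UnitX;
                right = Vector3.Normalize(right);
                // Each corner is quadPos + x*right + y*up for offsets (x, y).
                return (right, up);
            }
        }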

  • Pix for visual studio express 2012 (Desktop)

    - by JohnB
    (Originally asked on Stack Overflow.) Using Visual C++ Express 2010 for Direct3D you have to download the DirectX SDK, and there is a tool called PIX for debugging shaders, looking at 3D resources, etc. With Visual Studio 2012 Express the DirectX SDK is included in the Windows SDK that comes with it, but this does not seem to include the winpix.exe tool. Is this very useful tool still available? I guess I can still use the one from the previous SDK, but it seems wrong to install the entire SDK just for that tool. Is there a version for VS2012 Express that I'm missing?

  • The View-Matrix and Alternative Calculations

    - by P. Avery
    I'm working on a radiosity processor in DirectX 9. The process requires that the camera be placed at the center of a mesh face and a 'screenshot' be taken facing 5 different directions: forward, up, down, left and right. The problem is that when the mesh face is facing up (look vector: 0, 1, 0), a view matrix cannot be determined using the standard trigonometry functions:

        Matrix4 LookAt(Vector3 eye, Vector3 target, Vector3 up)
        {
            // The "look-at" vector.
            Vector3 zaxis = normal(target - eye);
            // The "right" vector.
            Vector3 xaxis = normal(cross(up, zaxis));
            // The "up" vector.
            Vector3 yaxis = cross(zaxis, xaxis);

            // Create a 4x4 orientation matrix from the right, up, and at vectors
            Matrix4 orientation = {
                xaxis.x, yaxis.x, zaxis.x, 0,
                xaxis.y, yaxis.y, zaxis.y, 0,
                xaxis.z, yaxis.z, zaxis.z, 0,
                0,       0,       0,       1
            };

            // Create a 4x4 translation matrix by negating the eye position.
            Matrix4 translation = {
                1,      0,      0,      0,
                0,      1,      0,      0,
                0,      0,      1,      0,
                -eye.x, -eye.y, -eye.z, 1
            };

            // Combine the orientation and translation to compute the view matrix
            return (translation * orientation);
        }

    The above function comes from http://3dgep.com/?p=1700. Is there a mathematical approach to this problem?

    Edit: a problem occurs when setting the view matrix to the up or down directions; here is an example of the problem when facing down:

        D3DXVECTOR4 vPos(3, 3, 3, 1), vEye(1.5, 3, 3, 1), vLook(0, -1, 0, 1),
                    vRight(1, 0, 0, 1), vUp(0, 0, 1, 1);
        D3DXMATRIX mV, mP;
        D3DXMatrixPerspectiveFovLH(&mP, D3DX_PI / 2, 1, 0.5f, 2000.0f);
        D3DXMatrixIdentity(&mV);
        memcpy((void*)&mV._11, (void*)&vRight,  sizeof(D3DXVECTOR3));
        memcpy((void*)&mV._21, (void*)&vUp,     sizeof(D3DXVECTOR3));
        memcpy((void*)&mV._31, (void*)&vLook,   sizeof(D3DXVECTOR3));
        memcpy((void*)&mV._41, (void*)&(-vEye), sizeof(D3DXVECTOR3));
        D3DXVec4Transform(&vPos, &vPos, &(mV * mP));

    Results: vPos = D3DXVECTOR3(1.5, -6, -0.5, 0). This vertex is not properly processed by the shader: since its homogeneous w value is 0, it cannot be normalized to a position within device space...
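
    The failure case is cross(up, zaxis) vanishing when the look direction is parallel to the up reference. A common hedged fix is to substitute an alternate up vector for the straight-up and straight-down captures; a sketch with System.Numerics types (not the asker's Matrix4/D3DX types):

        using System;
        using System.Numerics;

        static class SafeLookAt
        {
            // When look is (anti)parallel to the default up, the basis
            // collapses; swap in any axis that is not parallel to look.
            public static Matrix4x4 View(Vector3 eye, Vector3 target)
            {
                Vector3 look = Vector3.Normalize(target - eye);
                Vector3 up = Vector3.UnitY;
                if (Math.Abs(Vector3.Dot(look, up)) > 0.999f)
                    up = Vector3.UnitZ;
                return Matrix4x4.CreateLookAt(eye, target, up);
            }
        }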

  • How do I add shadow mapping?

    - by Jasper Creyf
    How do I add shadow mapping? I don't care if it uses GLSL, it just has to work. I have been searching on stencil shadows and shadow mapping, and all the examples given did nothing; to be clear, that means not even a single shadow was rendered. If you know how to add stencil shadows or shadow mapping, then please show some Java code, and if you're using GLSL then please show the code for that too.

  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal-map textures; even though I've got it to work using borrowed code, I want to understand it. I have a terrain based on a heightmap. I'm generating a mesh of triangles at load time and rendering that mesh. Now for each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows; have I got this right?

    - normal is a unit vector facing outwards from the surface of the triangle. For a vertex I take the average of the normals of the triangles using that vertex.
    - tangent is a unit vector in the direction of the 'u' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the x direction. So I should be able to calculate it as simply the difference between vertices in the x direction (normalized).
    - bitangent is a unit vector in the direction of the 'v' coordinates of the texture map. By the same reasoning, this should be the vector along the surface in the y direction, again the (normalized) difference between vertices in the y direction.

    However, the code I have borrowed seems much more complicated than this and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I thought from the above, and it simply doesn't work: the normals are clearly not working for lighting. Have I misunderstood something? Or can someone explain to me the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, where the u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
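
    For reference, the borrowed code is most likely the general per-triangle construction below, which solves the UV-to-position mapping exactly and only reduces to plain x/y edge differences when u and v really are a pure scale of x and y; having the general form makes it easier to check that assumption. A hedged System.Numerics sketch:

        using System.Numerics;

        static class TangentSpace
        {
            // Solve E1 = dU1*T + dV1*B and E2 = dU2*T + dV2*B for the
            // tangent T and bitangent B of one triangle.
            public static (Vector3 t, Vector3 b) TriangleTangent(
                Vector3 p0, Vector3 p1, Vector3 p2,
                Vector2 uv0, Vector2 uv1, Vector2 uv2)
            {
                Vector3 e1 = p1 - p0, e2 = p2 - p0;
                Vector2 d1 = uv1 - uv0, d2 = uv2 - uv0;
                float r = 1.0f / (d1.X * d2.Y - d2.X * d1.Y); // assumes non-degenerate UVs
                Vector3 t = Vector3.Normalize((e1 * d2.Y - e2 * d1.Y) * r);
                Vector3 b = Vector3.Normalize((e2 * d1.X - e1 * d2.X) * r);
                return (t, b);
            }
        }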

  • XNA - Render texture to a rendertarget 2d via SpriteBatch error

    - by Jared B
    I have simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D...

        private void drawScene(GameTime g)
        {
            GraphicsDevice.Clear(skyColor);
            GraphicsDevice.SetRenderTarget(targetScene);
            drawSunAndMoon();
            effect.Fog = true;
            GraphicsDevice.SetVertexBuffer(line);
            effect.MainEffect.CurrentTechnique.Passes[0].Apply();
            GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
            GraphicsDevice.SetRenderTarget(null);
            SceneTexture = targetScene;
        }

        private void drawPostProcessing(GameTime g)
        {
            effect.SceneTexture = SceneTexture;
            GraphicsDevice.SetRenderTarget(targetBloom);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
            {
                if (Bloom)
                    effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(targetScene,
                    new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height),
                    Color.White);
            }
            spriteBatch.End();
            BloomTexture = targetBloom;
            GraphicsDevice.SetRenderTarget(null);
        }

    Both methods are called from Draw(GameTime gameTime): first drawScene, then drawPostProcessing. The thing is, I can't run the code because "the render target must not be set on the device when it is used as a texture", raised at the line

        spriteBatch.Draw(targetScene, new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height), Color.White);

    I already found the solution, which is to draw the actual render target (targetScene) to a texture so it doesn't create a reference to the loaded render target. However, to my knowledge, the only way of doing this is to write:

        GraphicsDevice.SetRenderTarget(OutputTarget);
        SpriteBatch.Draw(InputTarget, ...);
        GraphicsDevice.SetRenderTarget(null);

    which encounters the same exact problem I'm having right now. So, the question I'm asking is: how would I render InputTarget to OutputTarget without reference issues?
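
    For what it's worth, the rule in XNA 4 is that a render target may be sampled as a texture as long as it is not the target currently set on the device, so two distinct targets can be ping-ponged; the error usually means the sampled target is still bound somewhere (device target or an effect parameter). A hedged sketch of the pattern (names are placeholders):

        // Pass 1: render the scene into targetScene.
        GraphicsDevice.SetRenderTarget(targetScene);
        GraphicsDevice.Clear(Color.CornflowerBlue);
        // ... draw the world here ...

        // Pass 2: sample targetScene while writing into a DIFFERENT target.
        GraphicsDevice.SetRenderTarget(targetBloom);
        spriteBatch.Begin();
        spriteBatch.Draw(targetScene, GraphicsDevice.Viewport.Bounds, Color.White);
        spriteBatch.End();

        // Back to the backbuffer; targetBloom can now be sampled in turn.
        GraphicsDevice.SetRenderTarget(null);
        spriteBatch.Begin();
        spriteBatch.Draw(targetBloom, GraphicsDevice.Viewport.Bounds, Color.White);
        spriteBatch.End();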

  • How to copy depth buffer to CPU memory in DirectX?

    - by Ashwin
    I have code in OpenGL that uses glReadPixels to copy the depth buffer to a CPU memory buffer:

        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, dbuf);

    How do I achieve the same in DirectX? I have looked at a similar question which gives the solution for copying the RGB buffer. I've tried to write similar code to copy the depth buffer:

        IDirect3DSurface9* d3dSurface;
        d3dDevice->GetDepthStencilSurface(&d3dSurface);

        D3DSURFACE_DESC d3dSurfaceDesc;
        d3dSurface->GetDesc(&d3dSurfaceDesc);

        IDirect3DSurface9* d3dOffSurface;
        d3dDevice->CreateOffscreenPlainSurface(
            d3dSurfaceDesc.Width, d3dSurfaceDesc.Height,
            D3DFMT_D32F_LOCKABLE, D3DPOOL_SCRATCH,
            &d3dOffSurface, NULL);

        // FAILS: D3DERR_INVALIDCALL
        D3DXLoadSurfaceFromSurface(
            d3dOffSurface, NULL, NULL,
            d3dSurface, NULL, NULL,
            D3DX_FILTER_NONE, 0);

        // Copy from offscreen surface to CPU memory ...

    The code fails on the call to D3DXLoadSurfaceFromSurface: it returns the error value D3DERR_INVALIDCALL. What is wrong with my code?

  • Running an a single action on multiple sprites at the same time

    - by Stephen
    Ok so I have created a spiraling animation for a football and I want to be able to run it on 2 sprites at the same time. This is what I have done:

        CCAnimation* footballAnim = [CCAnimation animationWithFrame:@"Football"
                                                         frameCount:60
                                                              delay:0.005f];
        spiral = [CCAnimate actionWithAnimation:footballAnim];
        CCRepeatForever* repeat = [CCRepeatForever actionWithAction:spiral];
        [Sprite1 runAction:repeat];
        [Sprite2 runAction:repeat];

    but it only runs the action on the first sprite. What am I doing wrong?

  • Arbitrary Rotation about a Sphere

    - by Der
    I'm coding a mechanic which allows a user to move around the surface of a sphere. The position on the sphere is currently stored as theta and phi, where theta is the angle between the z-axis and the xz projection of the current position (i.e. rotation about the y axis), and phi is the angle from the y-axis to the position. I explained that poorly, but it is essentially theta = yaw, phi = pitch:

        Vector3 position = new Vector3(0, 0, 1);
        position.X = (float)Math.Sin(phi) * (float)Math.Sin(theta);
        position.Y = (float)Math.Sin(phi) * (float)Math.Cos(theta);
        position.Z = (float)Math.Cos(phi);
        position *= r;

    I believe this is accurate, however I could be wrong. I need to be able to move in an arbitrary pseudo two dimensional direction around the surface of a sphere at the origin of world space with radius r. For example, holding W should move around the sphere in an upwards direction relative to the orientation of the player. I believe I should be using a Quaternion to represent the position/orientation on the sphere, but I can't think of the correct way of doing it. Spherical geometry is not my strong suit. Essentially, I need to fill in the following block:

        public void Move(Direction dir)
        {
            switch (dir)
            {
                case Direction.Left:
                    // update quaternion to rotate left
                    break;
                case Direction.Right:
                    // update quaternion to rotate right
                    break;
                case Direction.Up:
                    // update quaternion to rotate upward
                    break;
                case Direction.Down:
                    // update quaternion to rotate downward
                    break;
            }
        }
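
    A hedged sketch of the quaternion version (XNA-style math types; the step size is an assumption): keep a single orientation quaternion, derive position from it, and move by composing small axis-angle rotations about the player's current right/forward axes.

        Quaternion orientation = Quaternion.Identity;
        const float step = 0.02f;   // radians per move; tune to taste

        public void Move(Direction dir)
        {
            // Current local axes of the player, in world space.
            Vector3 right = Vector3.Transform(Vector3.UnitX, orientation);
            Vector3 forward = Vector3.Transform(Vector3.UnitY, orientation);

            Vector3 axis;
            switch (dir)
            {
                case Direction.Up:    axis = right;    break;
                case Direction.Down:  axis = -right;   break;
                case Direction.Left:  axis = forward;  break;
                default:              axis = -forward; break;
            }
            orientation = Quaternion.Normalize(
                Quaternion.CreateFromAxisAngle(axis, step) * orientation);
        }

        // Position on the sphere of radius r is the rotated "out" axis:
        public Vector3 Position(float r)
        {
            return Vector3.Transform(Vector3.UnitZ, orientation) * r;
        }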

  • With 2 superposed cameras at different depths, switching their culling masks between layers to implement object-selective antialiasing

    - by user36845
    We superposed two cameras, one of which uses AA as a post-processing effect (AA filtering is cancelled). The camera with the AA effect has depth 0 and the camera with no effect has depth 1, as can be seen in the 5th and 6th pictures. The objects seen on the left are in layer 1 and the ones on the right are in layer 2. We then wrote a script that switches the culling masks of the cameras between the two layers at the push of buttons 1 and 2 respectively, and accomplishes object-selective antialiasing, as seen in the first three pictures. (The way the two cameras separately switch culling masks between layers is illustrated in pictures 7, 8 and 9.)

    However, after making the environment 3D (see pictures 1-4) by parenting the 2 cameras under a First-Person Controller, we started moving around in the environment and stumbled upon a big issue: when we look at the objects from such an angle as in the 4th picture, and we want to apply antialiasing to the first object (the object on the left), which now stands closer to our cameras, the culling mask of the 1st camera (at depth 0) has to be switched to that object's layer, while the second object has to be in the culling mask of the 2nd camera (at depth 1). And since the two image outputs of the two superposed cameras are laid on top of one another, we obtain the erroneous/unrealistic result of the object farther in the back appearing closer to the camera than the front object (see the 4th picture).

    We already tried switching the depths of the cameras so that the 1st camera (with AA) has depth 1 and the second has depth 0, BUT the camera with the AA effect applies the effect to its full view. So the camera with the AA effect always has to remain at the lowest depth, and the layer of the object to be antialiased then has to be assigned to the culling mask of the AA camera; otherwise all objects in the AA camera's view (the two cubes in our case) become antialiased, which we don't want. So, how can we resolve this?

    The pictures are below and in the comments, since each post can have 2 pics:

    - Pic 1. No button is pushed: both objects appear aliased.
    - Pic 2. Button 1 is pushed: the left (1st) object is antialiased; the 2nd object remains aliased.
    - Pic 3. Button 2 is pushed: the right (2nd) object is antialiased; the 1st object remains aliased.
    - Pic 4. The problematic result in 3D, when using two superposed cameras with different depths.
    - Pic 5. Camera 1's properties: uses AA post-processing, depth 0.
    - Pic 6. Camera 2's properties: no AA post-processing, depth 1.
    - Pic 7. When no button is pushed, both objects are in the culling mask of Camera 2 and are aliased.
    - Pic 8. When button 1 is pushed, camera 1 (bottom) shows the 1st object and camera 2 (top) shows the 2nd.
    - Pic 9. When button 2 is pushed, camera 1 (bottom) shows the 2nd object and camera 2 (top) shows the 1st.

  • Platformer gravity where gravity is greater than tile size

    - by Sara
    I am making a simple grid-tile-based platformer with basic physics. I have 16px tiles, and after playing with gravity it seems that, to get a nice quick Mario-like jump feel, the player ends up moving faster than 16px per update near the ground. The problem is that they clip through the first layer of tiles before collisions are detected; then, when I move the player to the top of the colliding tile, they snap to the bottom-most tile. I have tried limiting their maximum velocity to less than 16px, but it does not look right. Are there any standard approaches to solving this? Thanks.
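
    A standard approach is to substep: split each frame's displacement into chunks no larger than one tile and run collision detection after each chunk, so nothing can tunnel through a 16px layer. A hedged sketch (Player and ResolveTileCollisions are hypothetical):

        using System;

        // Move in substeps no larger than one tile so fast falls cannot
        // skip over a tile between collision checks.
        const float TileSize = 16f;

        void MoveWithSubsteps(Player player, float dx, float dy)
        {
            float distance = MathF.Sqrt(dx * dx + dy * dy);
            int steps = Math.Max(1, (int)MathF.Ceiling(distance / TileSize));
            float sx = dx / steps, sy = dy / steps;

            for (int i = 0; i < steps; i++)
            {
                player.X += sx;
                player.Y += sy;
                if (ResolveTileCollisions(player))   // hypothetical resolver
                    break;                           // stop at first contact
            }
        }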

  • Renderbuffer to GLSL shader?

    - by Dan
    I have software that performs volume rendering through a raycasting approach. The raycasting shader writes the raycasted volume depth, through gl_FragDepth, into a framebuffer object that I bind before calling the shader. The problem is that I would like to use this depth in another shader that I call later on. The only way I've figured out to do that is to bind the framebuffer once the raycasting has finished, read the depth map back with something like

        glReadPixels(0, 0, m_winSize.x, m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);

    and write it to a 2D texture as usual:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, pixels);

    Then I pass this 2D texture, which contains a simple depth map, to the other shader. However, I am not entirely sure this is the proper way to do it. Is there any way to pass the framebuffer that I fill up in my raycasting shader to the other shader?
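
    The CPU round trip (glReadPixels plus glTexImage2D) can likely be avoided by attaching a depth texture, rather than a renderbuffer, to the FBO: the raycasting pass then writes gl_FragDepth straight into the texture, and the later shader simply samples it. A hedged sketch in OpenTK-style C# (the GL calls mirror the C API; fbo, width and height are placeholders):

        using OpenTK.Graphics.OpenGL;

        // Create a depth texture and attach it as the FBO's depth buffer.
        int depthTex = GL.GenTexture();
        GL.BindTexture(TextureTarget.Texture2D, depthTex);
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.DepthComponent24,
                      width, height, 0, PixelFormat.DepthComponent, PixelType.Float,
                      System.IntPtr.Zero);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter,
                        (int)TextureMinFilter.Nearest);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter,
                        (int)TextureMagFilter.Nearest);

        GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo);   // fbo assumed created
        GL.FramebufferTexture2D(FramebufferTarget.Framebuffer,
                                FramebufferAttachment.DepthAttachment,
                                TextureTarget.Texture2D, depthTex, 0);

        // Pass 1 renders into fbo (gl_FragDepth fills depthTex). Pass 2:
        GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
        GL.ActiveTexture(TextureUnit.Texture0);
        GL.BindTexture(TextureTarget.Texture2D, depthTex);
        // In GLSL: uniform sampler2D u_depth; float d = texture2D(u_depth, uv).r;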
