Search Results

Search found 19281 results on 772 pages for 'blender game engine'.


  • Manipulating Perlin Noise

    - by Numeri
    I've been learning about procedurally generated content lately (in particular, Perlin noise). Perlin noise works great for making things like landscapes, height maps, and stuff like that. But now I am trying to generate structures more like mountain ranges (in 2D, as 3D would be way over my head right now) or underground veins of ore. I can't manage to manipulate Perlin noise to do this. Making a cut-off point (i.e. using only the tops of the 'mountains' of a heightmap) wouldn't work, because I would get lumps of mountains/veins. Any suggestions? Thanks, Numeri
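
    One common manipulation for exactly this (long connected ridges and veins rather than round blobs) is ridged noise: fold the noise with an absolute value so its zero-crossings become sharp crests. A sketch in Java, assuming a hypothetical perlin(x, y) sampler returning values in [-1, 1]:

        // Ridged-multifractal sketch; perlin(x, y) is an assumed sampler in [-1, 1].
        static double ridged(double x, double y, int octaves) {
            double sum = 0, freq = 1, amp = 1;
            for (int i = 0; i < octaves; i++) {
                double n = 1.0 - Math.abs(perlin(x * freq, y * freq)); // crest where noise crosses 0
                sum += n * n * amp;   // squaring sharpens the ridge lines
                freq *= 2;            // finer detail each octave...
                amp *= 0.5;           // ...with less influence
            }
            return sum;
        }

    Thresholding the result carves out ranges and veins that stay connected, because the crests follow the zero-contours of the noise, which are naturally continuous curves rather than isolated lumps.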


  • OpenGL ES 2.0 example for JOGL

    - by fjdutoit
    I've scoured the internet for the last few hours looking for an example of how to run even the most basic OpenGL ES 2 example using JOGL but "by Jupiter!" it has been a total fail. I tried converting the Android example from the OpenGL ES 2.0 Programming Guide examples (and at the same time looking at the WebGL example -- which worked fine) yet without any success. Are there any examples out there? If anyone else wants some extra help regarding this question see this thread on the official Jogamp forum.
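
    For reference, a minimal JOGL 2.x skeleton that requests an ES2-class context looks roughly like the sketch below. This is not authoritative: package names moved from javax.media.opengl to com.jogamp.opengl across JOGL releases, so adjust the imports to your version.

        import com.jogamp.newt.opengl.GLWindow;
        import com.jogamp.opengl.*; // JOGL 2.3+; older builds used javax.media.opengl

        public class ES2Demo implements GLEventListener {
            public static void main(String[] args) {
                GLProfile profile = GLProfile.get(GLProfile.GL2ES2); // ES2-compatible profile
                GLWindow window = GLWindow.create(new GLCapabilities(profile));
                window.addGLEventListener(new ES2Demo());
                window.setSize(640, 480);
                window.setVisible(true);
            }
            public void init(GLAutoDrawable d) {
                // compile shaders here via d.getGL().getGL2ES2()
            }
            public void display(GLAutoDrawable d) {
                GL2ES2 gl = d.getGL().getGL2ES2(); // only ES2-level entry points
                gl.glClearColor(0f, 0f, 0.3f, 1f);
                gl.glClear(GL.GL_COLOR_BUFFER_BIT);
            }
            public void reshape(GLAutoDrawable d, int x, int y, int w, int h) {}
            public void dispose(GLAutoDrawable d) {}
        }

    The GL2ES2 interface is the piece the Android and WebGL samples map onto: it exposes only the programmable-pipeline calls, so ports of the OpenGL ES 2.0 Programming Guide examples mostly translate call for call.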


  • How can I transform a Point2f with a matrix on Android?

    - by Vivendi
    I'm developing for Android and I'm using the android.renderscript.Matrix3f class to do some calculations. What I need to do now is something like:

        mat.transform(pointIn, pointOut);

    That is, I need to transform a point by a given matrix. In AWT I would simply do this:

        AffineTransform t = new AffineTransform();
        Point2D.Float p = new Point2D.Float();
        t.transform(p, p);

    But in Android I now have this:

        Matrix3f t = new Matrix3f();
        PointF p = new PointF();
        // Now I need to transform it somehow...

    The Matrix3f class in Android doesn't have a Matrix.transform(Point2D ptSrc, Point2D ptDst) method, so I guess I have to do the transformation manually, but I'm not really sure how that works. From what I've seen, it's something like a translate and then a rotate? Could anyone please tell me how to do this in code?
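
    If renderscript's Matrix3f is really required, applying it by hand is just two dot products plus the translation column. A hedged sketch (double-check get()'s column/row argument order against the docs; and note that android.graphics.Matrix.mapPoints() already does this for you if you are not tied to renderscript):

        import android.graphics.PointF;
        import android.renderscript.Matrix3f;

        // Hypothetical helper: treats m as a 2D affine transform whose
        // translation lives in the third column; get(col, row) assumed.
        static PointF transform(Matrix3f m, PointF p) {
            float x = m.get(0, 0) * p.x + m.get(1, 0) * p.y + m.get(2, 0);
            float y = m.get(0, 1) * p.x + m.get(1, 1) * p.y + m.get(2, 1);
            return new PointF(x, y);
        }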


  • How do I get the child of a unique parent in ActionScript?

    - by Koen
    My question is about targeting a child with a unique parent. For example, let's say I have a box people can move called box_mc, and 3 platforms it can jump on called:

        Platform_1
        Platform_2
        Platform_3

    All of these platforms have a child element called hit:

        Platform_1
            hit
        Platform_2
            hit
        Platform_3
            hit

    I use an array and a for each statement to detect if box_mc hits one of the platforms' children:

        var obj_arr:Array = [Platform_1, Platform_2, Platform_3];
        for each (obj in obj_arr) {
            if (box_mc.hitTestObject(obj.hit)) {
                trace(obj + " " + obj.hit);
                box_mc.y = obj.hit.y - box_mc.height;
            }
        }

    obj seems to output the unique parent it is hitting, but obj.hit outputs hit, so my theory is that it is applying the change of y to all the children called hit on the stage. Would it be possible to only detect the child of that specific parent?


  • Absorption 2D image effect

    - by Ed.
    I want to create a specific 2D image effect. It consists of modifying a sprite so it looks like it is being zoomed toward a point, or "absorbed" by that point. I'm not really sure what the technical name of this effect is, so I cannot explain it correctly. Here you can see a video of what I'm talking about; it is the effect when the character absorbs the three glyphs: http://www.youtube.com/watch?v=PIo-GddsMcU&t=4m45s What is the name of this effect? How can I implement it with XNA for 2D textures/sprites?
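
    It's usually described informally as a "suck in" or vortex/implosion effect; there isn't one canonical name. A platform-agnostic per-frame sketch (Java-style; in XNA the resulting position, scale, and rotation feed straight into SpriteBatch.Draw, and the fields t, duration, startX/startY, targetX/targetY, and spinSpeed are assumed state):

        // Pull the sprite toward (targetX, targetY), shrinking and spinning as it goes.
        void update(float dt) {
            t = Math.min(1f, t + dt / duration);        // normalized progress 0..1
            float ease = t * t;                         // ease-in: accelerates into the point
            x = startX + (targetX - startX) * ease;     // move toward the absorb point
            y = startY + (targetY - startY) * ease;
            scale = 1f - ease;                          // vanish on arrival
            rotation += spinSpeed * dt * (1f + 3f * ease); // optional: spin up near the end
        }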


  • OpenGL ES Loading

    - by kuroutadori
    I want to know the usual way to structure loading of rendering resources. Take a button: when the application is loaded, a texture is loaded which has the image of the button on it. When the button is tapped, a loader is added to a queue, which is processed on the render thread. It then loads up an array buffer with vertices and texture coords when render is called, adds itself to a render tree, and then renders. The render function looks like this:

        void render() {
            update();
            mBaseRenderer->render();
        }

    update() is where the queue is checked to see if anything needs loading, and mBaseRenderer->render() draws the render tree. What I am asking, then, is: should I even have update() there at all, and instead have everything preloaded before it renders? And if I can load on demand, for instance when there is a tap, how can it be done? (My current code causes a dequeue buffer error (Unknown error: -75), which I assume has to do with OpenGL ES and the context.)
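
    Loading on demand is a legitimate design; the usual rule is that anything touching GL objects must run on the render thread while the context is current, so the tap handler only enqueues work and the render loop drains the queue. A Java-flavored sketch of that pattern (the C++ version is the same shape; button.loadBuffers() and baseRenderer are hypothetical stand-ins):

        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;

        final Queue<Runnable> glTasks = new ConcurrentLinkedQueue<>();

        void onTap() {                            // UI thread: no GL calls here
            glTasks.add(() -> button.loadBuffers());
        }

        void render() {                           // render thread, context current
            Runnable task;
            while ((task = glTasks.poll()) != null) {
                task.run();                       // buffer/texture creation is safe here
            }
            baseRenderer.render();                // then draw the tree as before
        }

    So keeping update() inside render() is not something to eliminate. Errors like the quoted dequeue buffer failure typically point at GL or window work happening off the context thread (or before the surface exists), rather than at the queue idea itself.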


  • Scrolling a WriteableBitmap

    - by Skoder
    I need to simulate my background scrolling but I want to avoid moving my actual image control. Instead, I'd like to use a WriteableBitmap and a blitting method. What would be the way to simulate an image scrolling upwards? I've tried various things but I can't seem to get my head around the logic:

        // X pos, Y pos, width, height
        Rect src = new Rect(0, scrollSpeed, 480, height);
        Rect dest = new Rect(0, 700 - scrollSpeed, 480, height);
        // destination rect, source WriteableBitmap, source rect, blend mode
        wb.Blit(dest, wbSource, src, BlendMode.None);
        scrollSpeed += 5;
        if (scrollSpeed > 700) scrollSpeed = 0;

    If height is 10, the image is quite fuzzy, and more so if the height is 1. If the height is taller, the image is clearer, but it only seems to do a one-to-one copy. How can I 'scroll' the image so that it looks like it's moving up in a continuous loop? (The height of the screen is 700.)
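
    The trick is to treat it as a wrap-around copy with full-height pieces rather than one thin strip: each frame, the lower part of the source fills the top of the screen and the upper part wraps underneath. The rect math, as a Java-style sketch (each blit(...) stands for one wb.Blit(destRect, wbSource, srcRect, BlendMode.None) call from the question; W = 480 and H = 700 are assumed from it):

        void drawScrolledBackground(int scroll) {
            int off = scroll % H;
            // Source rows [off, H) appear at the top of the screen...
            blit(0, 0,       off, H - off);   // destX, destY, srcY, copyHeight
            // ...and source rows [0, off) wrap around to the bottom.
            if (off > 0)
                blit(0, H - off,  0,  off);
        }

        // blit(destX, destY, srcY, h) == wb.Blit(new Rect(destX, destY, W, h),
        //                                        wbSource,
        //                                        new Rect(0, srcY, W, h),
        //                                        BlendMode.None)

    Copying full-height regions every frame is also why the thin-strip version looked fuzzy: a 1- or 10-pixel rect only ever repaints the same narrow band instead of redrawing the whole image at a new offset.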


  • Texturize a shape of multiple triangles in 2D

    - by Deukalion
    This is an example of a shape consisting of multiple points, triangles and, eventually, a shape: the red dots are Vector3 (X, Y, Z) or Vector2 (X, Y). If I have a texture of a certain size, how do I texture this area in the best way so that the texture inside the shape matches the shape and does not overlap anywhere? Perhaps also with a chance to scale the texture in case it's too small or too big for the shape, but still so that it gets rendered correctly. Do I treat the shape as a rectangle? Figure out its four corners? Or do I calculate the distance between Center - (Texture Width / 2) and each point, to see how many times the texture fits along that axis and estimate at what coordinates the texture should be at that certain point? I've looked at texture mapping but haven't found any concrete examples that explain it well; the 0.0-1.0 values for texture coordinates are also confusing.
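
    The usual answer for a flat 2D shape is a planar map over its bounding box: each vertex's UV is just its position normalized into the box, which by construction fills the 0..1 range with no overlap; scaling the texture is then just multiplying those UVs by a tiling factor. A sketch:

        // One (u, v) pair per vertex, from the shape's bounding box.
        static float[][] planarUVs(float[][] verts) {   // verts[i] = {x, y}
            float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE;
            float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
            for (float[] p : verts) {
                minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
                minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
            }
            float w = maxX - minX, h = maxY - minY;
            float[][] uvs = new float[verts.length][2];
            for (int i = 0; i < verts.length; i++) {
                uvs[i][0] = (verts[i][0] - minX) / w;   // u runs 0..1 across the box
                uvs[i][1] = (verts[i][1] - minY) / h;   // v runs 0..1 down the box
            }
            return uvs; // multiply by (boxWidth / textureWidth) etc. to tile instead of stretch
        }

    The 0.0-1.0 values are exactly this: (0, 0) is one corner of the texture and (1, 1) the opposite corner; every vertex picks the texel its normalized position lands on, and the rasterizer interpolates between vertices, which is why nothing can overlap inside a triangle.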


  • Arbitrary projection matrix from 6 arbitrary frustum planes

    - by Doub
    A projection matrix represents a transformation from camera view space to the rendering system's clip space. In other words, it defines the transformation between a 6-sided frustum and the clip cube. glOrtho and glFrustum use only six parameters to define such a projection, but impose several constraints on the frustum that will get projected to the clip cube: the near and far planes are parallel, the left and right planes intersect on a vertical line, and the top and bottom planes intersect on a horizontal line, both lines being parallel to the near and far planes. I'd like to lift these restrictions. So, from the definition of the six frustum side planes (in whatever representation you see fit), how can I compute a general projection matrix?
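
    A useful starting point, sketched here under OpenGL clip conventions, is the standard plane-extraction identities (Gribb/Hartmann): writing the matrix rows as \mathbf{r}_1, \dots, \mathbf{r}_4 and the planes as row 4-vectors,

        \text{left}   = \mathbf{r}_4 + \mathbf{r}_1, \qquad \text{right} = \mathbf{r}_4 - \mathbf{r}_1,
        \text{bottom} = \mathbf{r}_4 + \mathbf{r}_2, \qquad \text{top}   = \mathbf{r}_4 - \mathbf{r}_2,
        \text{near}   = \mathbf{r}_4 + \mathbf{r}_3, \qquad \text{far}   = \mathbf{r}_4 - \mathbf{r}_3.

    Running these backwards gives \mathbf{r}_4 = \tfrac{1}{2}(L + R) and \mathbf{r}_1 = \tfrac{1}{2}(L - R), and similarly for the other two pairs. The catch is that each input plane is only defined up to a positive scale, so you first solve for scale factors s_i satisfying

        s_L L + s_R R \;=\; s_B B + s_T T \;=\; s_N N + s_F F \;(= 2\,\mathbf{r}_4),

    a small linear system that is solvable up to one overall factor (the usual homogeneous freedom) whenever the six planes actually bound a projective frustum. This is a sketch of the derivation rather than battle-tested code, but it reduces the problem to standard linear algebra.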


  • 2D Rectangle Collision Response with Multiple Rectangles

    - by Justin Skiles
    Similar to: Collision rectangle response. I have a level made up of tiles where the edges of the level are made up of collidable rectangles, and the player's collision box is represented by a rectangle as well. The player can move in 8 directions; the player's velocity is equal in X and Y and constant. Each update, I check the player's collision against all tiles that are within a certain distance. When the player collides with a rectangle, I find the intersection depth and resolve along the shallowest axis, followed by the other axis (both axes are resolved within the same update). See below for two examples of situations where I am having trouble.

    Moving up-left against the left wall: the player is colliding with two tiles. The intersection depth is equal on both axes for the top tile, and shallower on the X axis for the middle tile. Because the player is moving up the wall, the player should slide upward along the wall. This works properly as long as the rectangle with the shallower depth is evaluated first; if the equal-depth rectangle is evaluated first, there is a chance the player becomes stuck.

    Moving up-left against the top wall: an identical scenario, except that the collision is with the top wall. The same problem occurs at the corners when intersection depth is equal on both axes.

    I guess my overall question is: how can I ensure that collision response occurs on tiles that have non-equal intersection depth before tiles that have equal intersection depth, to get around the weirdness that occurs at these corners? Sean's answer in the linked question was good, but his solution required having different velocity components in a certain direction. My situation has equal velocities, so there's no good way to tell which direction to resolve at corners. I hope I have made my explanation clear.
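
    One workable scheme, as a hedged sketch (Rect and its depthX/depthY signed-penetration helpers are hypothetical): gather all overlapping tiles first, resolve the most "lopsided" overlaps before the ambiguous equal-depth ones, and re-test before each resolution so corner tiles that the slide already separated are skipped:

        import java.util.List;

        static void resolve(Rect player, List<Rect> tiles) {
            // Tiles where one axis is clearly the shallow one come first;
            // equal-depth corner tiles come last.
            tiles.sort((a, b) -> Float.compare(lopsidedness(player, b),
                                               lopsidedness(player, a)));
            for (Rect t : tiles) {
                float dx = t.depthX(player);        // signed X penetration, 0 if separated
                float dy = t.depthY(player);        // signed Y penetration
                if (dx == 0f || dy == 0f) continue; // earlier pushes already separated us
                if (Math.abs(dx) < Math.abs(dy)) player.x += dx; // resolve shallow axis only
                else                             player.y += dy;
            }
        }

        static float lopsidedness(Rect p, Rect t) {
            return Math.abs(Math.abs(t.depthX(p)) - Math.abs(t.depthY(p)));
        }

    Sorting by the difference of the axis depths gives exactly the ordering the question asks for, and the re-test usually makes the corner case moot: by the time an equal-depth tile is reached, sliding along the wall has already separated the player from it.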


  • Win API Basic Paint program

    - by Tom Burman
    Just trying to learn a bit of the Win API. I'm trying to make a basic drawing app, a bit like MS Paint. For the time being I'm trying to get one function to work: when you left-click and drag the mouse around the screen, a line is drawn behind the mouse. Here's what I have so far, but for some reason: 1) the line starts drawing straight away rather than waiting for the left click, and 2) the line isn't solid, it's very dotty.

        case WM_MOUSEMOVE:
        {
            if (MK_LBUTTON) {
                hdc = GetDC(hwnd);
                hPen = CreatePen(PS_SOLID, 5, RGB(0, 0, 255));
                SelectObject(hdc, hPen);
                int x = LOWORD(lParam);
                int y = HIWORD(lParam);
                MoveToEx(hdc, x, y, NULL);
                LineTo(hdc, LOWORD(lParam), HIWORD(lParam));
                ReleaseDC(hwnd, hdc);
            }
            else break;
        }
        }

    Thanks for any help!
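
    Both symptoms have well-known causes. MK_LBUTTON is a non-zero constant, so if (MK_LBUTTON) is always true; the check needs to be if (wParam & MK_LBUTTON). And calling MoveToEx to the current mouse position right before LineTo draws one zero-length segment per message, hence the dots; the previous point has to be remembered and the line drawn from it to the new one. The same two fixes in a Java/AWT sketch (the Win32 version follows the identical pattern, with a POINT kept across WM_MOUSEMOVE messages):

        import java.awt.*;
        import java.awt.event.*;
        import javax.swing.*;

        class ScribblePanel extends JPanel {
            private Point last; // previous mouse position; null when not dragging

            ScribblePanel() {
                addMouseListener(new MouseAdapter() {
                    public void mousePressed(MouseEvent e)  { last = e.getPoint(); } // stroke start
                    public void mouseReleased(MouseEvent e) { last = null; }         // stroke end
                });
                addMouseMotionListener(new MouseMotionAdapter() {
                    public void mouseDragged(MouseEvent e) { // fires only while a button is held
                        Graphics2D g = (Graphics2D) getGraphics(); // fine for a sketch; a real app draws into a backing image
                        g.setColor(Color.BLUE);
                        g.setStroke(new BasicStroke(5));
                        g.drawLine(last.x, last.y, e.getX(), e.getY()); // connect to the previous point
                        g.dispose();
                        last = e.getPoint(); // new point becomes the stroke's tail
                    }
                });
            }

            public static void main(String[] args) {
                JFrame f = new JFrame("Scribble");
                f.add(new ScribblePanel());
                f.setSize(480, 360);
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.setVisible(true);
            }
        }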


  • XNA shield effect with a primitive sphere problem

    - by Sparky41
    I'm having an issue with a shield effect I'm trying to develop. I want a shield effect that surrounds part of a model, like this: http://i.imgur.com/jPvrf.png I currently have this: http://i.imgur.com/Jdin7.png (The red lines are a simple test texture, a black background with a red cross in it: http://i.imgur.com/ODtzk.png where the smaller cross in the middle shows the contact point.) This sphere is drawn via a primitive (DrawIndexedPrimitives). This is how I calculate the pieces of the sphere, using a class I've called Sphere (based on the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d):

        public class Sphere
        {
            // During the process of constructing a primitive model, vertex
            // and index data is stored on the CPU in these managed lists.
            List<VertexPositionNormal> vertices = new List<VertexPositionNormal>();
            List<ushort> indices = new List<ushort>();

            // Once all the geometry has been specified, the InitializePrimitive
            // method copies the vertex and index data into these buffers, which
            // store it on the GPU ready for efficient rendering.
            VertexBuffer vertexBuffer;
            IndexBuffer indexBuffer;
            BasicEffect basicEffect;

            public Vector3 position = Vector3.Zero;
            public Matrix RotationMatrix = Matrix.Identity;
            public Texture2D texture;

            /// <summary>
            /// Constructs a new sphere primitive,
            /// with the specified size and tessellation level.
            /// </summary>
            public Sphere(float diameter, int tessellation, Texture2D text,
                          float up, float down, float portstar, float frontback)
            {
                texture = text;
                if (tessellation < 3) throw new ArgumentOutOfRangeException("tessellation");

                int verticalSegments = tessellation;
                int horizontalSegments = tessellation * 2;
                float radius = diameter / 2;

                // Start with a single vertex at the bottom of the sphere.
                AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero); // bottom position (5)

                // Create rings of vertices at progressively higher latitudes.
                for (int i = 0; i < verticalSegments - 1; i++)
                {
                    float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2;
                    float dy = (float)Math.Sin(latitude / up); // (up) (5)
                    float dxz = (float)Math.Cos(latitude);

                    // Create a single ring of vertices at this latitude.
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        float longitude = j * MathHelper.TwoPi / horizontalSegments;
                        float dx = (float)(Math.Cos(longitude) * dxz) / portstar;  // port and starboard (right) (2)
                        float dz = (float)(Math.Sin(longitude) * dxz) * frontback; // front and back (1.4)
                        Vector3 normal = new Vector3(dx, dy, dz);
                        AddVertex(normal * radius, normal, new Vector2(j, i));
                    }
                }

                // Finish with a single vertex at the top of the sphere.
                AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One); // top position (5)

                // Create a fan connecting the bottom vertex to the bottom latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(0);
                    AddIndex(1 + (i + 1) % horizontalSegments);
                    AddIndex(1 + i);
                }

                // Fill the sphere body with triangles joining each pair of latitude rings.
                for (int i = 0; i < verticalSegments - 2; i++)
                {
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        int nextI = i + 1;
                        int nextJ = (j + 1) % horizontalSegments;
                        AddIndex(1 + i * horizontalSegments + j);
                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);
                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);
                    }
                }

                // Create a fan connecting the top vertex to the top latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(CurrentVertex - 1);
                    AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments);
                    AddIndex(CurrentVertex - 2 - i);
                }
                // InitializePrimitive(graphicsDevice);
            }

            /// <summary>
            /// Adds a new vertex to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate)
            {
                vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate));
            }

            /// <summary>
            /// Adds a new index to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddIndex(int index)
            {
                if (index > ushort.MaxValue) throw new ArgumentOutOfRangeException("index");
                indices.Add((ushort)index);
            }

            /// <summary>
            /// Queries the index of the current vertex. This starts at
            /// zero, and increments every time AddVertex is called.
            /// </summary>
            protected int CurrentVertex
            {
                get { return vertices.Count; }
            }

            public void InitializePrimitive(GraphicsDevice graphicsDevice)
            {
                // Create a vertex declaration, describing the format of our vertex data.
                // Create a vertex buffer, and copy our vertex data into it.
                vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormal),
                                                vertices.Count, BufferUsage.None);
                vertexBuffer.SetData(vertices.ToArray());

                // Create an index buffer, and copy our index data into it.
                indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort),
                                              indices.Count, BufferUsage.None);
                indexBuffer.SetData(indices.ToArray());

                // Create a BasicEffect, which will be used to render the primitive.
                basicEffect = new BasicEffect(graphicsDevice);
                // basicEffect.EnableDefaultLighting();
            }

            /// <summary>
            /// Draws the primitive model, using the specified effect. Unlike the other
            /// Draw overload where you just specify the world/view/projection matrices
            /// and color, this method does not set any renderstates, so you must make
            /// sure all states are set to sensible values before you call it.
            /// </summary>
            public void Draw(Effect effect)
            {
                GraphicsDevice graphicsDevice = effect.GraphicsDevice;

                // Set our vertex declaration, vertex buffer, and index buffer.
                graphicsDevice.SetVertexBuffer(vertexBuffer);
                graphicsDevice.Indices = indexBuffer;
                graphicsDevice.BlendState = BlendState.Additive;

                foreach (EffectPass effectPass in effect.CurrentTechnique.Passes)
                {
                    effectPass.Apply();
                    int primitiveCount = indices.Count / 3;
                    graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                                         vertices.Count, 0, primitiveCount);
                }
                graphicsDevice.BlendState = BlendState.Opaque;
            }

            /// <summary>
            /// Draws the primitive model, using a BasicEffect shader with default
            /// lighting. Unlike the other Draw overload where you specify a custom
            /// effect, this method sets important renderstates to sensible values
            /// for 3D model rendering, so you do not need to set these states before
            /// you call it.
            /// </summary>
            public void Draw(Camera camera, Color color)
            {
                // Set BasicEffect parameters.
                basicEffect.World = GetWorld();
                basicEffect.View = camera.view;
                basicEffect.Projection = camera.projection;
                basicEffect.DiffuseColor = color.ToVector3();
                basicEffect.TextureEnabled = true;
                basicEffect.Texture = texture;

                GraphicsDevice device = basicEffect.GraphicsDevice;
                device.DepthStencilState = DepthStencilState.Default;

                if (color.A < 255)
                {
                    // Set renderstates for alpha blended rendering.
                    device.BlendState = BlendState.AlphaBlend;
                }
                else
                {
                    // Set renderstates for opaque rendering.
                    device.BlendState = BlendState.Opaque;
                }

                // Draw the model, using BasicEffect.
                Draw(basicEffect);
            }

            public virtual Matrix GetWorld()
            {
                return /*world*/ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position);
            }
        }

        public struct VertexPositionNormal : IVertexType
        {
            public Vector3 Position;
            public Vector3 Normal;
            public Vector2 TextureCoordinate;

            /// <summary>
            /// Constructor.
            /// </summary>
            public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor)
            {
                Position = position;
                Normal = normal;
                TextureCoordinate = textCoor;
            }

            /// <summary>
            /// A VertexDeclaration object, which contains information about the vertex
            /// elements contained within this struct.
            /// </summary>
            public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
                new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
            );

            VertexDeclaration IVertexType.VertexDeclaration
            {
                get { return VertexPositionNormal.VertexDeclaration; }
            }
        }

    A simple call to the class initialises it; the Draw method is called from the master draw method in the GameComponent. My current thoughts on this are: the direction of the weapon hitting the ship is used to get the middle position for the texture, and the texture is then wrapped around the drawn sphere based on this point of contact. Problem is, I'm not sure how to do this. Can anyone help, or if you have a better idea please tell me, I'm open to opinions. :-) Thanks.
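
    For the texture-wrapping idea, the cleanest variant of "center the texture at the contact point" is to leave the UVs alone and rotate the whole sphere so that the direction the texture's center is painted at faces the hit direction. The axis-angle between the two directions gives the rotation (in XNA it would feed Matrix.CreateFromAxisAngle, assigned to RotationMatrix above); a hedged Java-style sketch of just the math:

        // Rotation taking unit vector 'ref' (where the texture center is painted)
        // onto unit vector 'hit' (the contact direction), as {axisX, axisY, axisZ, angle}.
        static float[] rotationBetween(float[] ref, float[] hit) {
            float[] axis = {
                ref[1] * hit[2] - ref[2] * hit[1],   // cross product ref x hit
                ref[2] * hit[0] - ref[0] * hit[2],
                ref[0] * hit[1] - ref[1] * hit[0]
            };
            float len = (float) Math.sqrt(axis[0]*axis[0] + axis[1]*axis[1] + axis[2]*axis[2]);
            float d = ref[0]*hit[0] + ref[1]*hit[1] + ref[2]*hit[2];
            float angle = (float) Math.acos(Math.max(-1f, Math.min(1f, d)));
            if (len < 1e-6f) // parallel or opposite: pick any perpendicular axis
                return new float[] { 0f, 1f, 0f, d > 0 ? 0f : (float) Math.PI };
            return new float[] { axis[0] / len, axis[1] / len, axis[2] / len, angle };
        }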


  • Black Screen: How to set Projection/View Matrix

    - by Lisa
    I have a Windows Phone 8 C#/XAML with DirectX component project. I'm rendering some particles, but each particle is a rectangle versus a square (as I've set the vertices to be positions equally offset from each other). I used an identity matrix in the view and projection matrix. I decided to add the window's aspect ratio to prevent the rectangles. But now I get a black screen; none of the particles are rendered now. I don't know what's wrong with my matrices. Can anyone see the problem? These are the default matrices in Microsoft's project example.

    View matrix:

        XMVECTOR eye = XMVectorSet(0.0f, 0.7f, 1.5f, 0.0f);
        XMVECTOR at = XMVectorSet(0.0f, -0.1f, 0.0f, 0.0f);
        XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
        XMStoreFloat4x4(&m_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up)));

    Projection matrix:

        void CubeRenderer::CreateWindowSizeDependentResources()
        {
            Direct3DBase::CreateWindowSizeDependentResources();
            float aspectRatio = m_windowBounds.Width / m_windowBounds.Height;
            float fovAngleY = 70.0f * XM_PI / 180.0f;
            if (aspectRatio < 1.0f)
            {
                fovAngleY /= aspectRatio;
            }
            XMStoreFloat4x4(&m_constantBufferData.projection,
                XMMatrixTranspose(XMMatrixPerspectiveFovRH(fovAngleY, aspectRatio, 0.01f, 100.0f)));
        }

    I've tried modifying them to use cocos2d-x's WP8 example:

        XMMATRIX identityMatrix = XMMatrixIdentity();
        float fovy = 60.0f;
        float aspect = m_windowBounds.Width / m_windowBounds.Height;
        float zNear = 0.1f;
        float zFar = 100.0f;
        float xmin, xmax, ymin, ymax;
        ymax = zNear * tanf(fovy * XM_PI / 360);
        ymin = -ymax;
        xmin = ymin * aspect;
        xmax = ymax * aspect;
        XMMATRIX tmpMatrix = XMMatrixPerspectiveOffCenterRH(xmin, xmax, ymin, ymax, zNear, zFar);
        XMMATRIX projectionMatrix = XMMatrixMultiply(tmpMatrix, identityMatrix);

        // View matrix
        float fEyeX = m_windowBounds.Width * 0.5f;
        float fEyeY = m_windowBounds.Height * 0.5f;
        float fEyeZ = m_windowBounds.Height / 1.1566f;
        float fLookAtX = m_windowBounds.Width * 0.5f;
        float fLookAtY = m_windowBounds.Height * 0.5f;
        float fLookAtZ = 0.0f;
        float fUpX = 0.0f;
        float fUpY = 1.0f;
        float fUpZ = 0.0f;
        XMMATRIX tmpMatrix2 = XMMatrixLookAtRH(XMVectorSet(fEyeX, fEyeY, fEyeZ, 0.f),
                                               XMVectorSet(fLookAtX, fLookAtY, fLookAtZ, 0.f),
                                               XMVectorSet(fUpX, fUpY, fUpZ, 0.f));
        XMMATRIX viewMatrix = XMMatrixMultiply(tmpMatrix2, identityMatrix);
        XMStoreFloat4x4(&m_constantBufferData.view, viewMatrix);

    Vertex shader:

        cbuffer ModelViewProjectionConstantBuffer : register(b0)
        {
            //matrix model;
            matrix view;
            matrix projection;
        };

        struct VertexInputType
        {
            float4 position : POSITION;
            float2 tex : TEXCOORD0;
            float4 color : COLOR;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float4 color : COLOR;
        };

        PixelInputType main(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;
            //=====================================
            // TODO: ADDED for testing
            input.position.z = 0.0f;
            //=====================================

            // Calculate the position of the vertex against the world, view, and projection matrices.
            //output.position = mul(input.position, model);
            output.position = mul(input.position, view);
            output.position = mul(output.position, projection);

            // Store the texture coordinates for the pixel shader.
            output.tex = input.tex;

            // Store the particle color for the pixel shader.
            output.color = input.color;

            return output;
        }

    Before I render the shader, I set the view/projection matrices into the constant buffer:

        void ParticleRenderer::SetShaderParameters()
        {
            ViewProjectionConstantBuffer* dataPtr;
            D3D11_MAPPED_SUBRESOURCE mappedResource;
            DX::ThrowIfFailed(m_d3dContext->Map(m_constantBuffer.Get(), 0,
                D3D11_MAP_WRITE_DISCARD, 0, &mappedResource));
            dataPtr = (ViewProjectionConstantBuffer*)mappedResource.pData;
            dataPtr->view = m_constantBufferData.view;
            dataPtr->projection = m_constantBufferData.projection;
            m_d3dContext->Unmap(m_constantBuffer.Get(), 0);

            // Now set the constant buffer in the vertex shader with the updated values.
            m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());

            // Set shader texture resource in the pixel shader.
            m_d3dContext->PSSetShaderResources(0, 1, &m_textureView);
        }

    Nothing, black screen... I tried so many different look-at, eye, and up vectors. I tried transposing the matrices. I've set the particle center position to always be (0, 0, 0); I tried different positions too, just to make sure they're not being rendered offscreen.


  • DX11 - Weird shader behavior with and without branching

    - by Martin Perry
    I have found a problem in my shader code which I don't know how to solve. I want to rewrite this code without ifs:

        tmp = <evaluate>; // result is 0 or 1 (nothing else)
        if (tmp == 1) val = X1;
        if (tmp == 0) val = X2;

    I rewrote it this way, but this piece of code doesn't work correctly:

        tmp = <evaluate>; // result is 0 or 1 (nothing else)
        val = tmp * X1;
        val = !tmp * X2;

    However, if I change it to the following, it works fine, but it is useless because of the if, which needs to be eliminated:

        tmp = <evaluate>; // result is 0 or 1 (nothing else)
        val = tmp * X1;
        if (!tmp) val = !tmp * X2;

    I honestly don't understand it. I tried compilation with no and full optimization; the result is the same.
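
    The middle version fails for a simple reason: the two statements are plain assignments, so the second overwrites the first, and val always ends up as !tmp * X2 regardless of tmp. The two selects have to be summed into one expression. A sketch of the arithmetic (Java syntax for illustration; in HLSL the same thing is the built-in lerp, for which the compiler emits no branch):

        // Branchless two-way select, assuming tmp is exactly 0 or 1.
        static float select(float tmp, float x1, float x2) {
            return tmp * x1 + (1f - tmp) * x2;  // tmp == 1 -> x1, tmp == 0 -> x2
        }
        // HLSL equivalent: val = lerp(X2, X1, tmp);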


  • Get coordinates of an ArrayList

    - by opiop65
    Here's my map class:

        public class Map {
            public static final int CLEAR = 0;
            public static final ArrayList<Integer> STONE = new ArrayList<Integer>();
            public static final int GRASS = 2;
            public static final int DIRT = 3;

            public static final int WIDTH = 32;
            public static final int HEIGHT = 24;
            public static final int TILE_SIZE = 25;

            // static int[][] map = new int[WIDTH][HEIGHT];
            ArrayList<ArrayList<Integer>> map = new ArrayList<ArrayList<Integer>>(WIDTH * HEIGHT);

            enum tiles { air, grass, stone, dirt }

            Image air, grass, stone, dirt;
            Random rand = new Random();

            public Map() {
                /* default map */
                /*
                for (int y = 0; y < WIDTH; y++) {
                    map[y][y] = (rand.nextInt(2));
                    System.out.println(map[y][y]);
                }
                */
                /*
                for (int y = 18; y < HEIGHT; y++) {
                    for (int x = 0; x < WIDTH; x++) { map[x][y] = STONE; }
                }
                for (int y = 18; y < 19; y++) {
                    for (int x = 0; x < WIDTH; x++) { map[x][y] = GRASS; }
                }
                for (int y = 19; y < 20; y++) {
                    for (int x = 0; x < WIDTH; x++) { map[x][y] = DIRT; }
                }
                */
                for (int y = 0; y < HEIGHT; y++) {
                    for (int x = 0; x < WIDTH; x++) {
                        map.set(x * WIDTH + y, STONE);
                    }
                }
                try {
                    init(null, null);
                } catch (SlickException e) {
                    e.printStackTrace();
                }
                render(null, null, null);
            }

            public void init(GameContainer gc, StateBasedGame sbg) throws SlickException {
                air = new Image("res/air.png");
                grass = new Image("res/grass.png");
                stone = new Image("res/stone.png");
                dirt = new Image("res/dirt.png");
            }

            public void render(GameContainer gc, StateBasedGame sbg, Graphics g) {
                for (int x = 0; x < WIDTH; x++) {
                    for (int y = 0; y < HEIGHT; y++) {
                        switch (map.get(x * WIDTH + y)) {
                            case CLEAR: air.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break;
                            case STONE: stone.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break;
                            case GRASS: grass.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break;
                            case DIRT: dirt.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break;
                        }
                    }
                }
            }

            public static boolean blocked(float x, float y) {
                return map[(int) x][(int) y] == STONE;
            }

            public static Rectangle blockBounds(int x, int y) {
                return (new Rectangle(x, y, TILE_SIZE, TILE_SIZE));
            }
        }

    Specifically I am looking at this:

        for (int x = 0; x < WIDTH; x++) {
            for (int y = 0; y < HEIGHT; y++) {
                switch (map.get(x * WIDTH + y).intValue()) {
                    case CLEAR: air.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break;
                    case STONE: stone.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break;
                    case GRASS: grass.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break;
                    case DIRT: dirt.draw(x * TILE_SIZE, y * TILE_SIZE, TILE_SIZE, TILE_SIZE); break;
                }
            }
        }

    How can I access the coordinates of my ArrayList map and then draw the tiles to the screen? Thanks!
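
    As a hedged sketch of just the indexing: a WIDTH x HEIGHT grid stored in one flat list needs index = y * WIDTH + x; the multiplier has to be the length of the other axis, and with x * WIDTH + y here (WIDTH = 32, HEIGHT = 24) the indices run past the end of the list. The list also has to be populated before set() or get() work, since new ArrayList<>(WIDTH * HEIGHT) only reserves capacity and is still empty. And tile IDs should be plain ints (an ArrayList-valued STONE cannot appear in a switch):

        import java.util.ArrayList;
        import java.util.List;

        static final int CLEAR = 0, STONE = 1, GRASS = 2, DIRT = 3; // plain int IDs
        static final int WIDTH = 32, HEIGHT = 24;
        static final List<Integer> map = new ArrayList<>(WIDTH * HEIGHT);

        static void initMap() {
            for (int i = 0; i < WIDTH * HEIGHT; i++) map.add(STONE); // fill first!
        }
        static int  tileAt (int x, int y)           { return map.get(y * WIDTH + x); }
        static void setTile(int x, int y, int tile) { map.set(y * WIDTH + x, tile); }

    With that in place, the render loop's switch works unchanged by calling tileAt(x, y), and blocked() becomes tileAt((int) x, (int) y) == STONE.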


  • Deferred Rendering With Diffuse,Specular, and Normal maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model, which can be found here, as my sandbox. Note I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp. I extract all geometry information, including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file:

        - Ambient/diffuse/specular/emissive reflectivity coefficients (Ka, Kd, Ks, Ke)
        - Shininess
        - Diffuse map
        - Specular map
        - Normal map

    I understand that I must render vertex attributes such as position, normals, and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but do you not require the diffuse, specular, and normal maps, and therefore lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents, and normals into g-buffers, then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.


  • Problem with Ogre::Camera lookAt function when target is directly below.

    - by PigBen
    I am trying to make a class which controls a camera. It's pretty basic right now; it looks like this:

        class HoveringCameraController
        {
        public:
            void init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height);
            void update(Ogre::Real time_delta);

        private:
            Ogre::Camera * camera_;
            AnimatedBody * target_;
            Ogre::Real height_;
        };

    HoveringCameraController.cpp:

        void HoveringCameraController::init(Ogre::Camera & camera, AnimatedBody & target, Ogre::Real height)
        {
            camera_ = &camera;
            target_ = &target;
            height_ = height;
            update(0.0);
        }

        void HoveringCameraController::update(Ogre::Real time_delta)
        {
            auto position = target_->getPosition();
            position.y += height_;
            camera_->setPosition(position);
            camera_->lookAt(target_->getPosition());
        }

    AnimatedBody is just a class that encapsulates an entity, its animations, and a scene node; the getPosition function is simply forwarded to its scene node. What I want (for now) is for the camera to simply follow the AnimatedBody overhead at the distance given (the height parameter) and look down at it. It follows the object around, but it doesn't look straight down; it's tilted quite a bit in the positive Z direction. Does anybody have any idea why it would do that? If I change this line:

        position.y += height_;

    to this:

        position.x += height_;

    or this:

        position.z += height_;

    it does exactly what I would expect: it follows the object from the side or front, and looks directly at it.


  • Euler angles to Cartesian Coordinates for use with gluLookAt

    - by notrodash
    I have searched all of the internet but just couldn't find the answer. I am using LibGDX and this is part of my code that loops over and over:

        public void render() {
            GL11 gl = Gdx.gl11;
            float centerX = (float) Math.cos(yaw) * (float) Math.cos(pitch);
            float centerY = (float) Math.sin(yaw) * (float) Math.cos(pitch);
            float centerZ = (float) Math.sin(pitch);
            System.out.println(centerX + " " + centerY + " " + centerZ + " ~ " + GDXRacing.camera.position.x + " " + GDXRacing.camera.position.y + " " + GDXRacing.camera.position.z);
            Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0);
            if (Gdx.input.isKeyPressed(Keys.A)) { yaw--; }
            if (Gdx.input.isKeyPressed(Keys.D)) { yaw++; }
        }

    I might just be bad at the math, but I don't get it. Does someone have a good explanation and an idea about how to deal with this? I am trying to make a first-person camera. By the way, the camera is translated by +10 on the Z axis. Currently when I run the application, everything shakes in a clockwise/anticlockwise motion, depending on whether I increase or decrease the yaw value. [edit] And with this code:

        public void render() {
            GL11 gl = Gdx.gl11;
            float centerX = (float) (MathUtils.cosDeg(yaw) * 4);
            float centerY = 0;
            float centerZ = (float) (MathUtils.sinDeg(yaw) * 4);
            System.out.println(centerX + " " + centerY + " " + centerZ + " ~ " + GDXRacing.camera.position.x + " " + GDXRacing.camera.position.y + " " + GDXRacing.camera.position.z);
            Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0);
            if (Gdx.input.isKeyPressed(Keys.A)) { yaw--; }
            if (Gdx.input.isKeyPressed(Keys.D)) { yaw++; }
        }

    it slowly swings from the left to the right. This approach worked for turning left and right for 2D games, though. What am I doing wrong? Thank you.
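
    Two likely culprits, as a sketch of the fix: gluLookAt's center arguments are a point, not a direction, so the direction vector has to be added to the eye position; and yaw/pitch here are in degrees while Math.cos/Math.sin expect radians. With a Y-up convention (matching the (0, 1, 0) up vector):

        float yawRad   = (float) Math.toRadians(yaw);
        float pitchRad = (float) Math.toRadians(pitch);
        // Unit view direction from the Euler angles (Y-up)...
        float dirX = (float) (Math.cos(pitchRad) * Math.cos(yawRad));
        float dirY = (float) Math.sin(pitchRad);
        float dirZ = (float) (Math.cos(pitchRad) * Math.sin(yawRad));
        // ...and the look-at target is one unit ahead of the eye:
        Gdx.glu.gluLookAt(gl,
                GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z,
                GDXRacing.camera.position.x + dirX,
                GDXRacing.camera.position.y + dirY,
                GDXRacing.camera.position.z + dirZ,
                0, 1, 0);

    With the original code the center stays near the origin while the eye sits at z = +10, so nudging yaw swings a fixed target point around the eye, which shows up as exactly the shaking described above.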


  • 3D Model not translating correctly (visually)

    - by ChocoMan
    In my first image, my model displays correctly. But when I move the model's position along the Z axis (forward), I get the second image, yet the Y axis doesn't change. And if I keep going, the model disappears into the ground. Any suggestions as to how I can get the model to translate properly visually? Here is how I'm calling the model and the terrain in Draw():

        cameraPosition = new Vector3(camX, camY, camZ);

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[mShockwave.Bones.Count];
        mShockwave.CopyAbsoluteBoneTransformsTo(transforms);
        Matrix[] ttransforms = new Matrix[terrain.Bones.Count];
        terrain.CopyAbsoluteBoneTransformsTo(ttransforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in mShockwave.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(modelRotation)
                    * Matrix.CreateTranslation(modelPosition);
                // Looking at the model (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            // Draw the mesh, using the effects set above.
            prepare3d();
            mesh.Draw();
        }

        // Terrain test
        foreach (ModelMesh meshT in terrain.Meshes)
        {
            foreach (BasicEffect effect in meshT.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = ttransforms[meshT.ParentBone.Index]
                    * Matrix.CreateRotationY(0)
                    * Matrix.CreateTranslation(terrainPosition);
                // Looking at the model (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            // Draw the mesh, using the effects set above.
            prepare3d();
            meshT.Draw();
            DrawText();
        }
        base.Draw(gameTime);

    I'm suspecting that there may be something wrong with how I'm handling my camera. The model rotates fine on its Y axis.


  • Direct2D Transform

    - by James
    I have a beginner question about Direct2D transforms. I have a 20 x 10 bitmap that I would like to draw in different orientations. To start, I would like to draw it vertically, with a destination rectangle of, say (left, top, right, bottom):

        (300, 300, 310, 320)

    The bitmap is wider than it is tall (20 x 10), but when I draw it vertically it will appear taller than it is wide (10 x 20). I know that I can use a rotation matrix like so:

        m_pRenderTarget->SetTransform(
            D2D1::Matrix3x2F::Rotation(90.0f, D2D1::Point2F(<center of shape>)));

    But when I use this method to rotate my shape, the destination rectangle is still wider than it is tall. Maybe it would look something like this:

        (280, 290, 300, 300)

    The destination rectangle is 20 x 10 but the bitmap appears on the screen as 10 x 20, so I can't look at the destination rectangle in the debugger and compare it to (300, 300, 310, 320). Is there any simple way to say "I want to rotate it so that the image is rendered to exactly this destination rectangle after the rotation"? In this case, I would like to say "Please rotate the bitmap so that it appears on the screen at this location: (300, 300, 310, 320)". If I can't do that, is there any way to find out the 10 x 20 destination rectangle where the bitmap is actually being rendered to the screen?
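
    One way to get "render exactly into this rect after rotation": a rotation about the destination rect's center keeps that center fixed, so draw the bitmap into the rect that shares the target's center but has width and height swapped, then apply the 90-degree rotation about that center (in Direct2D, the SetTransform with Matrix3x2F::Rotation shown above, followed by DrawBitmap). The rect arithmetic, as a platform-agnostic sketch:

        // Desired on-screen rect AFTER a 90-degree rotation, as (l, t, r, b).
        // Returns the rect to draw into BEFORE the rotation is applied.
        static float[] preRotationRect(float l, float t, float r, float b) {
            float cx = (l + r) / 2, cy = (t + b) / 2;       // shared center
            float halfW = (b - t) / 2, halfH = (r - l) / 2; // extents swap under 90 degrees
            return new float[] { cx - halfW, cy - halfH, cx + halfW, cy + halfH };
        }

    For the example: the desired 10 x 20 rect (300, 300, 310, 320) has center (305, 310), so the bitmap is drawn 20 x 10 into (295, 305, 315, 315) and the rotation carries it onto the target. Running the same equations forward answers the second question too, i.e. where a given pre-rotation rect ends up on screen.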


  • OpenGL ES 2 on Android: native window

    - by ThreaderSlash
    According to the OGLES specification, we have the following definition:

        EGLSurface eglCreateWindowSurface(EGLDisplay display, EGLConfig config,
                                          NativeWindowType native_window,
                                          EGLint const * attrib_list)

    More details here: http://www.khronos.org/opengles/documentation/opengles1_0/html/eglCreateWindowSurface.html

    And also by definition:

        int32_t ANativeWindow_setBuffersGeometry(ANativeWindow* window,
                                                 int32_t width, int32_t height, int32_t format);

    More details here: http://mobilepearls.com/labs/native-android-api

    I am running a native Android app on OGLES 2 and debugging it on a Samsung Nexus device. For setting up the 3D scene graph environment, the following variables are defined:

        struct android_app {
            ...
            ANativeWindow* window;
        };
        android_app* mApplication;
        ...
        mApplication = &pApplication;

    And to initialize the app, we run the commands in the code:

        ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat);
        mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL);

    Funny to say, the command ANativeWindow_setBuffersGeometry behaves as expected and works fine according to its definition, accepting all the parameters sent to it. But eglCreateWindowSurface does not accept the parameter mApplication->window, as it should according to its definition. Instead, it looks for the following input:

        EGLNativeWindowType hWnd;
        mSurface = eglCreateWindowSurface(mDisplay, lConfig, hWnd, NULL);

    As an alternative, I considered using instead:

        NativeWindowType hWnd = android_createDisplaySurface();

    But the debugger says:

        Function 'android_createDisplaySurface' could not be resolved

    Is android_createDisplaySurface compatible only with OGLES 1 and not OGLES 2? Can someone tell if there is a way to convert mApplication->window, so that the data from the android_app gets accepted by the window surface?


  • How to Align Gun with Bullets

    - by Shane
    I have a top-down 2D shooter. I have an image of a player holding a gun that rotates to face the mouse. Please note that the gun isn't a separate image tethered to the player, but rather part of the player. Right now, bullets are created at the player's x and y. This works when the player is facing the right way, but not when they rotate: the bullets move in the right direction, but don't come from the gun. How can I fix this? TL;DR: when the player rotates, bullets don't come from the gun.

        public void fire() {
            angle = sprite.getRotation();
            System.out.println(angle);
            x = sprite.getX();
            y = sprite.getY();
            Bullet b = new Bullet(x, y, angle);
            Utils.world.addBullet(b);
        }
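
    The standard fix is to store the muzzle's offset from the sprite's rotation origin (measured with the sprite unrotated, facing right) and rotate that offset by the current angle when spawning. A sketch, assuming libGDX-style degrees from getRotation() and hypothetical muzzleX/muzzleY constants:

        public void fire() {
            float angle = sprite.getRotation();            // degrees
            float rad = (float) Math.toRadians(angle);
            float cos = (float) Math.cos(rad), sin = (float) Math.sin(rad);
            // (muzzleX, muzzleY): gun-tip offset from the rotation origin, facing right.
            float bx = sprite.getX() + sprite.getOriginX() + muzzleX * cos - muzzleY * sin;
            float by = sprite.getY() + sprite.getOriginY() + muzzleX * sin + muzzleY * cos;
            Utils.world.addBullet(new Bullet(bx, by, angle));
        }

    The 2D rotation of the offset is the whole trick: the bullet now spawns at the gun tip for every facing, while the angle passed to the bullet stays exactly as before.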


  • Point[] and Tri "could not be found"

    - by Craig Dannehl
    Hi, I'm trying to learn how to load a .obj file using OpenTK in Windows Forms. I have seen a lot of examples out there, but I see almost everyone uses List<Tri> and Point[]. Code examples show these highlighted as if their IDE knows what they are, for example:

        List<Tri> tris = new List<Tri>();

    but mine just returns "The type or namespace name 'Tri' could not be found". Is there an include I need to add or a using I am missing? Currently I have this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Drawing;
        using OpenTK;
        using OpenTK.Graphics;
        using OpenTK.Graphics.OpenGL;


  • Is there a way to export all the images of my tweening effect in Flash?

    - by Paul
    I'm using Flash to create the animation of my character in 2D (I'm just beginning). Is it possible to make a tween effect of a character and then automatically export all the images/frames? So far it's a bit tedious: I create my tweening effect, then I put a keyframe for each frame I want to copy and paste, then I select the movieclips and shapes and copy and paste them into another Flash document, position those clips at the exact same location as the previous image, then erase the previous image and export the image... for 30 frames! Is there any faster way? Thanks


  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic" in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated, so nodes are generated automatically for crossings, which are themselves interconnected by edges. When a character/agent spawns into the world, it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far.

    Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI, I looked up several papers/articles on steering behavior, but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case, because the agents follow strictly defined paths) than of situations like one agent leaving a dead end while another one wants to enter exactly the same one, or two agents meeting at a bottleneck which only allows one agent to pass at a time, where both need to pass it (according to the optimal route found before) and must find a way to let the other one pass first. So basically the main aspect of the problem would be predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean.

    Do you have any recommendations for me on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!
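
    One approach that handles both the dead-end and bottleneck cases is resource reservation layered on top of A*: model each one-agent-at-a-time stretch of road as a resource that must be acquired before entry, and have agents that fail to acquire it either wait at the entrance or re-plan with that edge's cost raised so traffic routes around the jam. A minimal sketch:

        import java.util.HashMap;
        import java.util.Map;

        // Guards segments (bottlenecks, dead ends) that admit one agent at a time.
        class SegmentReservations {
            private final Map<Integer, Integer> owner = new HashMap<>(); // segmentId -> agentId

            /** Try to claim a segment; true means this agent may enter now. */
            synchronized boolean tryEnter(int segmentId, int agentId) {
                Integer current = owner.putIfAbsent(segmentId, agentId);
                return current == null || current == agentId;
            }

            /** Release once the agent has fully left the segment. */
            synchronized void leave(int segmentId, int agentId) {
                owner.remove(segmentId, agentId);
            }
        }

    The time-windowed generalization of this idea, where agents reserve cells for time intervals along their planned route and the planner searches in space-time, is covered in David Silver's "Cooperative Pathfinding" paper, which is a good entry point for exactly the look-ahead and deadlock-prediction part of the problem.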

