Search Results

Search found 16410 results on 657 pages for 'game component'.

Page 345 of 657

  • libgdx draw issue and animation

    - by johnny-b
    It seems as though I cannot get the draw method to work. bullet.draw(batcher) does nothing, and I cannot understand why, because the bullet is a Sprite. I have made a Sprite[] and added it as an animation. Could that be the problem? I tried

        batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime), bullet.getX(), bullet.getY(),
                bullet.getOriginX() / 2, bullet.getOriginY() / 2, bullet.getWidth(), bullet.getHeight(),
                1, 1, bullet.getRotation());

    but that doesn't work. The only call that draws anything is

        batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime), bullet.getX(), bullet.getY());

    Below is the code.

        // this is in an Asset class
        texture = new Texture(Gdx.files.internal("SpriteN1.png"));
        texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);

        bullet1 = new Sprite(texture, 380, 350, 45, 20);
        bullet1.flip(false, true);
        bullet2 = new Sprite(texture, 425, 350, 45, 20);
        bullet2.flip(false, true);

        Sprite[] bullets = { bullet1, bullet2 };
        bulletAnimation = new Animation(0.06f, bullets);
        bulletAnimation.setPlayMode(Animation.PlayMode.LOOP);

        // this is the GameRenderer class
        public class GameRenderer {
            private Bullet bullet;
            private Ball ball;

            public GameRenderer(GameWorld world) {
                myWorld = world;
                cam = new OrthographicCamera();
                cam.setToOrtho(true, 480, 320);

                batcher = new SpriteBatch();
                // Attach batcher to camera
                batcher.setProjectionMatrix(cam.combined);

                shapeRenderer = new ShapeRenderer();
                shapeRenderer.setProjectionMatrix(cam.combined);

                // Call helper methods to initialize instance variables
                initGameObjects();
                initAssets();
            }

            private void initGameObjects() {
                ball = GameWorld.getBall();
                bullet = myWorld.getBullet();
                scroller = myWorld.getScroller();
            }

            private void initAssets() {
                ballAnimation = AssetLoader.ballAnimation;
                bulletAnimation = AssetLoader.bulletAnimation;
            }

            public void render(float runTime) {
                Gdx.gl.glClearColor(0, 0, 0, 1);
                Gdx.gl.glClear(GL30.GL_COLOR_BUFFER_BIT);

                batcher.begin();
                // Disable transparency; this is good for performance when drawing
                // images that do not require transparency.
                batcher.disableBlending();
                // The ball needs transparency, so we enable it again.
                batcher.enableBlending();

                batcher.draw(AssetLoader.ballAnimation.getKeyFrame(runTime),
                        ball.getX(), ball.getY(), ball.getWidth(), ball.getHeight());
                batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime),
                        bullet.getX(), bullet.getY());

                // End SpriteBatch
                batcher.end();
            }
        }

        // this is the GameWorld class
        public class GameWorld {
            public static Ball ball;
            private Bullet bullet;
            private ScrollHandler scroller;

            public GameWorld() {
                ball = new Ball(480, 273, 32, 32);
                bullet = new Bullet(10, 10);
                scroller = new ScrollHandler(0);
            }

            public void update(float delta) {
                ball.update(delta);
                bullet.update(delta);
                scroller.update(delta);
            }

            public static Ball getBall() { return ball; }
            public ScrollHandler getScroller() { return scroller; }
            public Bullet getBullet() { return bullet; }
        }

    Is there any way to make the sprite work?
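
    For reference, a hedged sketch of the rotated draw call using the libgdx overload draw(TextureRegion, x, y, originX, originY, width, height, scaleX, scaleY, rotation), with the origin placed at half the bullet's width and height so the frame rotates about its centre. Whether this is the actual cause of the blank output depends on what Bullet returns for its size, origin and rotation, which isn't shown above:

        // Sketch only: draw the current animation frame rotated about its centre.
        // Assumes the AssetLoader.bulletAnimation and bullet accessors used above.
        TextureRegion frame = AssetLoader.bulletAnimation.getKeyFrame(runTime);
        batcher.draw(frame,
                bullet.getX(), bullet.getY(),            // position
                bullet.getWidth() / 2f,                  // originX: rotate about the centre
                bullet.getHeight() / 2f,                 // originY
                bullet.getWidth(), bullet.getHeight(),   // size to draw at
                1f, 1f,                                  // scale
                bullet.getRotation());                   // rotation in degrees

    If getWidth() or getHeight() on the Bullet returns 0, every overload that takes a size will draw nothing, which would explain why only the short position-only overload appears to work.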


  • Directional and orientation problem

    - by Ahmed Saleh
    I have drawn 5 tentacles, shown in red. I drew them on a 2D circle, positioned at 5 of that circle's vertices. (The circle itself is never drawn; I only use it to simplify the problem.) Now I want to attach that circle, with the tentacles, underneath the jellyfish. There is a problem with the current code, but I don't know what it is. You can see that the circle is parallel to the base of the jellyfish. I want it shifted so that it sits inside the jellyfish, but I don't know how. I tried multiplying the direction vector to extend it, but that didn't work.

        // One tentacle is constructed from nodes.
        // Get the direction from node 0 to node 39 of the first tentacle.
        Vec3f dir = m_tentacle[0]->geNodesPos()[0] - m_tentacle[0]->geNodesPos()[39];

        // Draw the circle with the tentacles on it
        Vec3f pos = m_SpherePos;
        drawCircle(pos, dir, 30, m_tentacle.size());
        for (int i = 0; i < m_tentacle.size(); i++) {
            m_tentacle[i]->Draw();
        }

        // Draw the jellyfish and orient it on the 2D circle
        gl::pushMatrices();
            Quatf q;
            // quaternion that rotates the jellyfish to line up with the tentacles
            q.set(Vec3f(0, -1, 0), Vec3f(dir.x, dir.y, dir.z));
            // translate it to the position of the whole creature every frame
            gl::translate(m_SpherePos.x, m_SpherePos.y, m_SpherePos.z);
            gl::rotate(q);
            // draw the jellyfish at centre (0, 0, 0)
            drawHemiSphere(Vec3f(0, 0, 0), m_iRadius, 90);
        gl::popMatrices();


  • How to properly implement alpha blending in a complex 3D scene

    - by Gajet
    I know this question might sound easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far.

    First I thought about sorting objects by depth. This simply fails because objects are not simple shapes; they may have curves and may loop inside each other, so I can't always tell which one is closer to the camera.

    Then I thought about sorting triangles, but that can also fail. Though I'm not sure how I would implement it, there is a rare case that again causes problems: two triangles passing through each other. Again, no one can tell which one is nearer.

    The next thing was using the depth buffer; after all, the main reason we have a depth buffer is the sorting problems I mentioned. But now we get another problem: since objects can be transparent, more than one object may be visible in a single pixel. For which object should I store the pixel depth? I then thought I could store only the depth of the front-most object and use that to determine how to blend subsequent draw calls at that pixel. But again there was a problem: think of two semi-transparent planes with a solid plane between them. If I render the solid plane last, one can still see the most distant plane. Note that I was going to merge every two planes until only one colour was left for that pixel. Obviously I could use the sorting methods here too, but they fail for the same reasons I explained above.

    Finally, the only thing I can imagine working is to render all objects into different render targets, then sort those layers and display the final output. But this time I don't know how to implement that algorithm.
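
    As a point of reference (not the poster's method), the baseline most engines start from is: draw opaque geometry first with depth test and depth writes on, then draw transparent geometry sorted back-to-front with depth writes off and blending on. A minimal sketch, with Renderable as a hypothetical interface standing in for whatever the scene actually stores; it deliberately ignores the interpenetration cases raised above, which is exactly where techniques such as depth peeling or per-pixel linked lists (order-independent transparency) come in:

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical minimal type for the sketch.
        interface Renderable {
            boolean isTransparent();
            double distanceToCamera();
            void draw();
        }

        final class TransparencyPass {
            static void render(List<Renderable> scene) {
                List<Renderable> opaque = new ArrayList<>();
                List<Renderable> transparent = new ArrayList<>();
                for (Renderable r : scene) {
                    (r.isTransparent() ? transparent : opaque).add(r);
                }
                // Opaque pass: depth test + depth writes on (set in the graphics API).
                for (Renderable r : opaque) {
                    r.draw();
                }
                // Transparent pass: sort farthest-first, depth test on, depth writes off,
                // blending on. This is only an approximation and cannot resolve
                // interpenetrating transparent surfaces.
                transparent.sort((a, b) -> Double.compare(b.distanceToCamera(), a.distanceToCamera()));
                for (Renderable r : transparent) {
                    r.draw();
                }
            }
        }

    The per-object sort is exactly the first method dismissed above, so it breaks in the same cases; its value is that, combined with the depth buffer from the opaque pass, it is cheap and good enough for many scenes before reaching for full order-independent transparency.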


  • DX10 sprite and pixel shader

    - by Alex Farber
    I am using ID3DX10Sprite to draw a 2D image on the screen. The 3D scene contains only one textured sprite placed over the whole window area. The render method looks like this:

        m_pDevice->ClearRenderTargetView(...);
        m_pSprite->Begin(D3DX10_SPRITE_SORT_TEXTURE);
        m_pSprite->DrawSpritesImmediate(&m_SpriteDefinition, 1, 0, 0);
        m_pSprite->End();

    Now I want to apply some transformations to the sprite texture in a shader. Currently the program doesn't use a shader at all. How can I add a pixel shader to a program with this structure? Inside the shader, I need to set all colour channels equal to the red channel and multiply the pixel values by some coefficient. Something like this:

        float4 TexturePixelShader(PixelInputType input) : SV_Target
        {
            float4 textureColor;
            textureColor = shaderTexture.Sample(SampleType, input.tex);

            textureColor.x = textureColor.x * coefficient;
            textureColor.y = textureColor.x;
            textureColor.z = textureColor.x;

            return textureColor;
        }


  • Initial direction of interception between two moving vehicles?

    - by Larolaro
    I'm working on a bit of projectile prediction for my AI and I'm looking for some ideas; any input? If a blue vehicle is moving in a fixed direction at a constant speed of X m/s, and a stationary orange vehicle has a rocket that travels at Y m/s, in which initial direction would the orange vehicle have to fire the rocket for it to hit the blue vehicle at the earliest possible time? Thanks for reading!
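
    For what it's worth, this intercept problem has a standard closed-form treatment: with relative target position P, target velocity V and rocket speed s, solve |P + Vt| = s·t for the smallest positive t and aim at the target's predicted position. A minimal sketch, assuming constant velocities in a 2D plane; the class and method names are made up for illustration:

        final class Intercept {
            // Returns the aim direction (unit vector) for a projectile of speed s fired
            // from the origin at a target with relative position (px, py) and velocity
            // (vx, vy), or null if no interception is possible.
            static double[] interceptDirection(double px, double py,
                                               double vx, double vy, double s) {
                double a = vx * vx + vy * vy - s * s;
                double b = 2 * (px * vx + py * vy);
                double c = px * px + py * py;

                double t;
                if (Math.abs(a) < 1e-9) {               // target speed ~ projectile speed
                    if (Math.abs(b) < 1e-9) return null;
                    t = -c / b;
                } else {
                    double disc = b * b - 4 * a * c;
                    if (disc < 0) return null;          // rocket too slow to ever hit
                    double sq = Math.sqrt(disc);
                    double t1 = (-b - sq) / (2 * a);
                    double t2 = (-b + sq) / (2 * a);
                    t = Math.min(t1, t2);
                    if (t < 0) t = Math.max(t1, t2);    // take the earliest positive root
                }
                if (t <= 0) return null;

                // Aim at where the target will be at time t.
                double aimX = px + vx * t;
                double aimY = py + vy * t;
                double len = Math.sqrt(aimX * aimX + aimY * aimY);
                return new double[] { aimX / len, aimY / len };
            }
        }

    The quadratic comes from squaring |P + Vt| = s·t, giving (V·V - s²)t² + 2(P·V)t + P·P = 0; a negative discriminant means the rocket can never catch the target, which is worth handling explicitly in the AI.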


  • How can I generate signed distance fields (2D) in real time, fast?

    - by heishe
    In a previous question, it was suggested that signed distance fields can be precomputed, loaded at runtime and used from there. For reasons I will explain at the end of this question (for people interested), I need to create the distance fields in real time. There are some papers out there on methods that are supposed to be viable in real-time environments, such as Chamfer distance transforms and Voronoi-diagram-approximation based transforms (as suggested in this presentation by the Pixeljunk Shooter dev), but I (and, it can be assumed, a lot of other people) have a very hard time actually putting them to use, since they're usually long, heavy on math and not very algorithmic in their explanation. What algorithm would you suggest for creating the distance fields in real time (preferably on the GPU), especially considering the resulting quality of the distance fields? Since I'm looking for an actual explanation/tutorial as opposed to a link to just another paper or slide deck, this question will receive a bounty once it's eligible for one :-). Here's why I need to do it in real time: There's something else:
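
    One GPU-friendly option that is often suggested for this kind of real-time generation is the jump flooding algorithm (JFA), which computes an approximate distance field in O(log n) full-screen passes. Below is a hedged CPU sketch of the idea, assuming a binary seed mask as input; on the GPU each pass becomes a full-screen shader pass over a ping-pong texture, and a second flood over the inverted mask turns the result into a signed field. This is offered as an illustration, not as the method from the presentation mentioned above:

        final class JumpFlood {
            // seeds[y][x] == true marks pixels that lie on the shape.
            // Returns an (approximate) unsigned distance to the nearest seed.
            static float[][] distanceField(boolean[][] seeds, int w, int h) {
                // nearest[y][x] = {sx, sy} of the closest seed found so far, or null.
                int[][][] nearest = new int[h][w][];
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                        if (seeds[y][x]) nearest[y][x] = new int[] { x, y };

                // Passes with halving step sizes; each pixel inspects 9 samples at +/- step.
                for (int step = Math.max(w, h) / 2; step >= 1; step /= 2) {
                    int[][][] next = new int[h][w][];
                    for (int y = 0; y < h; y++) {
                        for (int x = 0; x < w; x++) {
                            int[] best = nearest[y][x];
                            for (int dy = -1; dy <= 1; dy++) {
                                for (int dx = -1; dx <= 1; dx++) {
                                    int nx = x + dx * step, ny = y + dy * step;
                                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                                    int[] cand = nearest[ny][nx];
                                    if (cand == null) continue;
                                    if (best == null || dist2(x, y, cand) < dist2(x, y, best))
                                        best = cand;
                                }
                            }
                            next[y][x] = best;
                        }
                    }
                    nearest = next;
                }

                float[][] field = new float[h][w];
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                        field[y][x] = nearest[y][x] == null
                                ? Float.MAX_VALUE
                                : (float) Math.sqrt(dist2(x, y, nearest[y][x]));
                return field;
            }

            private static int dist2(int x, int y, int[] seed) {
                int dx = x - seed[0], dy = y - seed[1];
                return dx * dx + dy * dy;
            }
        }

    The result is not exact (a pixel can occasionally miss its true nearest seed), but the error is small and the pass count is logarithmic in the resolution, which is why JFA tends to come up for per-frame fields.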


  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of material like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to reduce draw calls. What I can't seem to understand is how chunks actually save draw calls, given that the logic appears to be something like this ...

        Without chunks:
            foreach voxel in myvoxels
                DrawIfVisible()

        With chunks:
            foreach chunk in mychunks
                DrawIfVisible()
            which then does ...
                foreach voxel in myvoxels
                    DrawIfVisible()

    So surely you have saved nothing?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk can determine whether a large number of voxels are visible or not, saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me ... the fewer of those, the faster the application.

    EDIT: In case it makes any difference, I will probably be using XNA (DirectX, not OpenGL) for my engine, so don't take the example in the link above as my choice of technology. But this question is such that I doubt it matters.
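
    For what it's worth, the draw-call saving usually comes from a step the pseudo-code above leaves out: each chunk is baked into a single mesh (one vertex/index buffer), so the per-frame loop issues one draw call per chunk rather than one per voxel, and interior faces are culled while that mesh is built. A hedged sketch of the baking step, assuming a simple boolean occupancy array; addFace is a placeholder left unimplemented:

        import java.util.ArrayList;
        import java.util.List;

        final class ChunkMesher {
            // Builds one flat vertex array for a whole chunk; upload it to a vertex
            // buffer once and redraw it every frame until a voxel in the chunk changes.
            static float[] buildChunkVertices(boolean[][][] solid, int size) {
                List<Float> verts = new ArrayList<>();
                for (int x = 0; x < size; x++)
                    for (int y = 0; y < size; y++)
                        for (int z = 0; z < size; z++) {
                            if (!solid[x][y][z]) continue;
                            // Emit only faces that touch air; interior faces are skipped,
                            // which is where most of the geometry is saved.
                            if (x == 0 || !solid[x - 1][y][z]) addFace(verts, x, y, z, -1, 0, 0);
                            if (x == size - 1 || !solid[x + 1][y][z]) addFace(verts, x, y, z, 1, 0, 0);
                            if (y == 0 || !solid[x][y - 1][z]) addFace(verts, x, y, z, 0, -1, 0);
                            if (y == size - 1 || !solid[x][y + 1][z]) addFace(verts, x, y, z, 0, 1, 0);
                            if (z == 0 || !solid[x][y][z - 1]) addFace(verts, x, y, z, 0, 0, -1);
                            if (z == size - 1 || !solid[x][y][z + 1]) addFace(verts, x, y, z, 0, 0, 1);
                        }
                float[] out = new float[verts.size()];
                for (int i = 0; i < out.length; i++) out[i] = verts.get(i);
                return out;
            }

            private static void addFace(List<Float> verts, int x, int y, int z,
                                        int nx, int ny, int nz) {
                // Placeholder: append the vertices of the face of the cube at (x, y, z)
                // facing direction (nx, ny, nz); omitted here for brevity.
            }
        }

    The buffer is rebuilt only when a voxel in the chunk changes, which is why chunk size becomes a trade-off between rebuild cost and draw-call count.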


  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question. If I was to use multiple images in a single texture for a 3D cube, how would I go about re-using each vertex (having 8 total vs 24)? With a single buffer of 8 vertices, I don't see how I'd properly reuse the UV values. Any help on that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy, the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the above 3D version has me quite befuddled.


  • Strange behavior of RigidBody with gravity and impulse applied

    - by Heisenbug
    I'm doing some experiments trying to figure out how physics works in Unity. I created a cube mesh with a BoxCollider and a Rigidbody. The cube is lying on a mesh plane with a BoxCollider. I'm trying to update the object's position by applying a force to its Rigidbody. Inside the script's FixedUpdate function I'm doing the following:

        public void FixedUpdate()
        {
            if (leftButtonPressed())
                this.rigidbody.AddForce(this.transform.forward * this.forceStrength, ForceMode.Impulse);
        }

    Although the object is aligned with the world axes and the force is applied along the Z axis, it performs quite a large rotation around its Y axis. Since I didn't modify the centre of mass or the BoxCollider's position and dimensions, all values should be fine. If I remove gravity and let the object fly without touching the plane, the problem doesn't appear. So I suppose it's related to the friction between the objects, but I can't understand exactly what the problem is. Why does this happen? What's my mistake? How can I fix it, or what's the right way to move an object on a plane with a force impulse?


  • Check if an object is facing another based on angles

    - by Isaiah
    I already have something that calculates the bearing angle to get one object to face another: you give it the positions and it returns the angle needed for one to face the other. Now I need to figure out how to tell whether one object is facing toward another within a specified field, and I can't find any information about how to do this. The objects are obj1 and obj2. Their angles are at obj1.angle and obj2.angle. Their positions are at obj1.pos and obj2.pos, in the format [x, y]. The angle to have one face directly at another is found with direction(obj1.pos, obj2.pos). I want to set the function up like isfacing(obj1, obj2, area) {...} and return true/false depending on whether obj2 is within the specified field area around the angle needed to see it directly. I've got a base like this:

        var isfacing = function (obj1, obj2, area) {
            var toface = direction(obj1.pos, obj2.pos);
            if (toface + area >= obj1.angle && obj1.angle >= toface - area) {
                return true;
            }
            return false;
        }

    But my problem is that the angles are in 360 degrees, never above 360 and never below 0. How can I account for that here? If the first object's angle is, say, 0 and I subtract a field area of 20 or so, it will check whether it's less than -20! If I fix the -20 it becomes 340, but x < 340 isn't what I want; I'd have to test x > 340 in that case. Is there someone out there with more sleep than I have who can help a new dev pulling an all-nighter just to get enemies to know whether they're attacking in the right direction? I hope I'm just making this harder than it is. I'd just make them always face the main character if the producer didn't want attacks from behind to work while blocking, in which case I'll need the function above anyway. I've tried to give as much info as I think would help. Also, this is in 2D.
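
    One way around the 0/360 seam (sketched here in Java rather than the JavaScript above, purely as an illustration) is to compare the wrapped difference between the two angles against half the field width, instead of testing raw lower and upper bounds:

        final class Facing {
            // True if angleToTarget lies within +/- halfField of objAngle, degrees.
            static boolean isFacing(double objAngle, double angleToTarget, double halfField) {
                double diff = Math.abs(normalize(angleToTarget - objAngle));
                return diff <= halfField;
            }

            // Wrap any angle into the range (-180, 180].
            static double normalize(double degrees) {
                double a = degrees % 360.0;
                if (a > 180.0) a -= 360.0;
                if (a <= -180.0) a += 360.0;
                return a;
            }
        }

    With this, isFacing(0, 340, 20) is true because the wrapped difference between 340 and 0 is 20, which is exactly the case the raw bounds check gets wrong.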


  • How to use mount points in MilkShape models?

    - by vividos
    I have bought the Warriors & Commoners model pack from Frogames. The pack contains (among other formats) two animated models and several non-animated objects (axe, shield, pilosities, etc.) in MilkShape3D format. I looked at the official "MilkShape 3D Viewer v2.0" (msViewer2.zip at http://www.chumba.ch/chumbalum-soft/ms3d/download.html) source code and implemented loading the model and calculating the joint matrices, and everything looks fine. In the model there are several joints that are designated as the "mount points" for the static objects like the axe and shield. I now want to "put" the axe into the hand of the animated model, and I couldn't quite figure out how. I put the animated vertices in a VBO that gets updated every frame (I know I should do this with a shader, but I haven't had time yet). I put the static vertices in another VBO that I want to keep static and not update every frame. I then tried to render the animated vertices first and use the joint matrix of the "mount joint" to calculate the location of the static object. I tried many things, and what appears to be right is to transpose the joint matrix and then use glMultMatrix() to transform the modelview matrix. For some objects, like the axe, this works, but not for others, e.g. the pilosities. Now my question: how is this generally implemented when using bone/joint models, and especially with MilkShape3D models? Am I on the right track?


  • Collision checking problem on a Tiled map

    - by nosferat
    I'm working on a Pacman-styled dungeon crawler, using the free Oryx sprites. I created the map using Tiled, separating the floor, walls and treasure into three different layers. After importing the map into libGDX, it renders fine. I also added the player character; for now it just moves in one direction, the player cannot control it yet. I wanted to add collision, and I was planning to do this by checking whether the player's new position is on a wall tile. Therefore, as you can see in the following code snippet, I get the type of the appropriate tile, and if it is not zero (since on that layer there is nothing except wall tiles) it is a collision and the player cannot move further:

        final Vector2 newPos = charController.move(warrior.getX(), warrior.getY());
        if (!collided(newPos)) {
            warrior.setPosition(newPos.x, newPos.y);
            warrior.flip(charController.flipX(), charController.flipY());
        }
        [..]

        private boolean collided(Vector2 newPos) {
            int row = (int) Math.floor((newPos.x / 32));
            int col = (int) Math.floor((newPos.y / 32));
            int tileType = tiledMap.layers.get(1).tiles[row][col];
            if (tileType == 0) {
                return false;
            }
            return true;
        }

    The character only moves one tile with this code. If I reduce the col value by two, it moves two more tiles. I think the problem is around indexing, but I'm totally confused, because the origin of libGDX's coordinate system is in the bottom left corner of the screen, and I don't know whether the tiles array is indexed the same way or not. The size of the map is 19x21 tiles and looks like the following (the starting position of the player is marked in blue):
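
    For comparison, a hedged sketch of the lookup with the axes made explicit, assuming the layer is indexed as tiles[row][col] with row 0 at the top of the map (the usual Tiled convention); if the loader stores rows bottom-up instead, drop the flip. Note that the snippet above derives row from x and col from y, which is the first thing worth ruling out:

        final class TileLookup {
            static final int TILE_SIZE = 32;
            static final int MAP_ROWS = 21;   // the 19x21 map from the post

            // worldX/worldY are the player's position in world units.
            static boolean collided(float worldX, float worldY, int[][] wallTiles) {
                int col = (int) Math.floor(worldX / TILE_SIZE);             // x -> column
                int rowFromBottom = (int) Math.floor(worldY / TILE_SIZE);   // y -> row
                int row = (MAP_ROWS - 1) - rowFromBottom;                   // flip: world y grows upwards
                if (row < 0 || row >= wallTiles.length
                        || col < 0 || col >= wallTiles[row].length) {
                    return true;                                            // treat out-of-bounds as solid
                }
                return wallTiles[row][col] != 0;
            }
        }

    Printing row and col each frame next to the player's on-screen position is usually enough to see which of the two mappings the loaded tile array actually uses.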


  • Can't export Blender model for use in jMonkeyEngine SDK

    - by Nathan Sabruka
    I have a scene rendered in Blender called "civ1.blend" which contains multiple materials (for example, I have one called "white"). I want to use this model in jMonkeyEngine, so I used the OGRE exporter to create .scene and .material files. This gives me, for example, a civ1.scene file and a white.material file. However, when I then try to import civ1.scene into the jMonkeyEngine SDK, I get an error along the lines of "Cannot find material file 'civ1.material'". Like I said, I have a white.material file, but I do not have a civ1.material file. Has anyone encountered this problem? How do I fix it?


  • Wavefront mesh: determine which face a point belongs to?

    - by Mina Samy
    I have a 3D mesh in a Wavefront .obj file. Is there any algorithm that takes an arbitrary point's coordinates as input and determines which face of the mesh that point belongs to? The mesh is rendered on the screen and the user clicks on it, and I want to determine which part of the mesh the user has clicked on. Here's the code using libGDX:

        Vector3 intersection = new Vector3();
        Ray ray = camera.getPickRay(x, y);
        // vertices is an array that holds the coordinates of the mesh
        boolean ok = Intersector.intersectRayTriangles(ray, vertices, intersection);

    Thanks
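
    Intersector.intersectRayTriangles only reports that some triangle was hit; to know which face, one option is to test the triangles one at a time and keep the hit closest to the ray origin. A hedged sketch, assuming vertices is a flat x,y,z array laid out as consecutive triangles (9 floats per face); with an indexed mesh or interleaved normals/UVs the stride and indexing change accordingly:

        import com.badlogic.gdx.math.Intersector;
        import com.badlogic.gdx.math.Vector3;
        import com.badlogic.gdx.math.collision.Ray;

        final class FacePicker {
            // Returns the index of the face the pick ray hits, or -1 if none.
            static int pickFace(Ray ray, float[] vertices) {
                Vector3 a = new Vector3(), b = new Vector3(), c = new Vector3();
                Vector3 hit = new Vector3();
                int bestFace = -1;
                float bestDist2 = Float.MAX_VALUE;
                for (int face = 0; face * 9 + 8 < vertices.length; face++) {
                    int i = face * 9;
                    a.set(vertices[i], vertices[i + 1], vertices[i + 2]);
                    b.set(vertices[i + 3], vertices[i + 4], vertices[i + 5]);
                    c.set(vertices[i + 6], vertices[i + 7], vertices[i + 8]);
                    if (Intersector.intersectRayTriangle(ray, a, b, c, hit)) {
                        float d2 = hit.dst2(ray.origin);
                        if (d2 < bestDist2) {        // keep the closest hit to the camera
                            bestDist2 = d2;
                            bestFace = face;
                        }
                    }
                }
                return bestFace;
            }
        }

    The face index can then be mapped back to the .obj face list (or to whatever grouping of faces counts as a "part" of the model).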


  • The right way to add images to Monogame/Windows

    - by ashes999
    I'm starting out with MonoGame. For now, I'm only targeting Windows (desktop, not Windows 8 specifically). I've used a couple of XNA products in the past (raw XNA, FlatRedBall, SilverSprite), so I may have a misunderstanding about how I should add images to my content. How do I add images to my project?

    Currently, I created a new MonoGame project, added a folder called "Content," and added images under there; the only caveat is that I need to set the Copy to Output Directory action to one of the Copy options. It seems strange, because my "raw" XNA project just last week had a Content project in it (an XNA Framework Content Pipeline project, according to VS2010), which compiled my images to XNB (I think). It seems like MonoGame doesn't use the same content pipeline, but I'm not sure.

    Edit: My question is not "how do I get the XNA content pipeline to work with MonoGame." My question is "why would I want to use the XNA content pipeline in MonoGame?" Because there are (at least) two solutions that I see today:

    1. Add the images to the MonoGame project and set their Copy to Output Directory options to copy.
    2. Add an XNA content pipeline project, add my images to that instead, and reference it from my MonoGame project.

    Which solution should I use, and why? I currently have a working version using the first option.


  • MCP 1.7.10 Java class navigation

    - by Elias Benevedes
    So, I'm new to the Minecraft modding community and trying to understand where to start. I've attempted it before, but dropped it because of the complexity of getting started and the lack of a site like this to help. (Mind that I'm also semi-new to Java, but have worked extensively in JavaScript and Python; I understand how Java differs from the two.) I have downloaded MCP 9.08 (which decompiles 1.7.10) and decompiled Minecraft. I'm looking to mod the client, so I didn't supply it with a server jar. Everything seemed to work fine during decompilation (the only error was that it couldn't find the server jar). I can find my files in /mcp908/src/minecraft/net/minecraft. However, if I open up one of the classes in, say, block, I see a bunch of variables starting with p_ and ending with _. Is there any way to make these variables more decipherable, so I can understand what's going on and learn by example? Thank you.


  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from issuing each individual draw call down to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error:

        The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing.

    This makes absolutely no sense to me, as my VertexDeclaration is:

        public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
            new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
        );

    which clearly contains the position information. I am attempting to draw with the following lines:

        VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration,
            c.VertexList.Count, BufferUsage.WriteOnly);
        IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
        vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
        ib.SetData<int>(c.IndexList.ToArray());
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);

    where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else is relevant to this problem. Note that the large commented sections are from an old version of the drawing code.


  • How to implement an intelligent enemy in a shoot-em-up?

    - by bummzack
    Imagine a very simple shoot-em-up, something we all know: you're the player (green). Your movement is restricted to the X axis. Our enemy (or enemies) is at the top of the screen; his movement is also restricted to the X axis. The player fires bullets (yellow) at the enemy.

    I'd like to implement an AI for the enemy that is really good at avoiding the player's bullets. My first idea was to divide the screen into discrete sections and assign weights to them. There are two weights: the "bullet weight" (grey) is the danger imposed by a bullet. The closer the bullet is to the enemy, the higher the bullet weight (0..1, where 1 is the highest danger). Lanes without a bullet have a weight of 0. The second weight is the "distance weight" (lime-green): for every lane I add 0.2 movement cost (this value is somewhat arbitrary and could be tweaked). Then I simply add the weights (white) and go to the lane with the lowest weight (red). But this approach has an obvious flaw: it can easily miss local minima, because the optimal place to go might simply be between two incoming bullets (as denoted by the white arrow).

    So here's what I'm looking for:

    - It should find a way through the bullet storm, even when there is no place that isn't threatened by a bullet.
    - The enemy should reliably dodge bullets by picking an optimal (or almost optimal) solution.
    - The algorithm should be able to factor in bullet movement speed (bullets might move with different velocities).
    - There should be ways to tweak the algorithm so that different levels of difficulty can be applied (from dumb to super-intelligent enemies).
    - The algorithm should allow different goals: the enemy doesn't only want to evade bullets, he should also be able to shoot the player. That means positions from which the enemy can fire at the player should be preferred when dodging bullets.

    So how would you tackle this? Contrary to other games of this genre, I'd like to have only a few, but very "skilled", enemies instead of masses of dumb ones.
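
    One way to get past the local-minimum problem is to plan over time instead of over a single frame: forward-simulate the bullets for a short horizon, fill a danger grid indexed by (time step, lane), and run a small dynamic program in which the enemy may stay or move one lane per step. A hedged sketch; the danger and movement costs are arbitrary tuning knobs, and firing positions can be encouraged by giving them a small negative cost in the grid. Different bullet speeds are handled naturally, because the simulation that fills the grid simply advances each bullet by its own velocity per step:

        final class DodgePlanner {
            // danger[t][lane] = cost of occupying `lane` at time step t (large if a
            // simulated bullet will be there, small negative for good firing spots).
            // Returns the lane to move toward for the next step.
            static int planNextLane(double[][] danger, int currentLane, double moveCost) {
                int steps = danger.length;
                if (steps < 2) return currentLane;
                int lanes = danger[0].length;

                double[][] best = new double[steps][lanes];
                System.arraycopy(danger[steps - 1], 0, best[steps - 1], 0, lanes);

                // Work backwards: from each cell the enemy may stay or move one lane.
                for (int t = steps - 2; t >= 0; t--) {
                    for (int l = 0; l < lanes; l++) {
                        double stay = best[t + 1][l];
                        double left = l > 0 ? best[t + 1][l - 1] + moveCost : Double.MAX_VALUE;
                        double right = l < lanes - 1 ? best[t + 1][l + 1] + moveCost : Double.MAX_VALUE;
                        best[t][l] = danger[t][l] + Math.min(stay, Math.min(left, right));
                    }
                }

                // The enemy sits at currentLane now and will occupy the chosen lane at t = 1.
                double stay = best[1][currentLane];
                double left = currentLane > 0 ? best[1][currentLane - 1] + moveCost : Double.MAX_VALUE;
                double right = currentLane < lanes - 1 ? best[1][currentLane + 1] + moveCost : Double.MAX_VALUE;
                if (left < stay && left <= right) return currentLane - 1;
                if (right < stay && right < left) return currentLane + 1;
                return currentLane;
            }
        }

    Difficulty can then be tuned by shortening the horizon, coarsening the lanes, or adding noise to the chosen move, which maps directly onto the "dumb to super-intelligent" requirement above.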


  • Algorithms for rainfall + river creation in procedurally generated terrain

    - by Peck
    I've recently become fascinated by the things that can be done with procedurally generated terrain and have started experimenting with world building a bit. I'd like to be able to make worlds something like Dwarf Fortress, with biomes created by meshing together various maps. The first step has been done: using the diamond-square algorithm I've created some nice heightmaps. As the next step I would like to add some water features and have them generated somewhat realistically from rainfall. I've read about a few different approaches, such as starting at the high points of the map and "stepping" down to the lowest neighbouring point, pooling/eroding as the water works its way down to sea level. Are there any documented algorithms for this, or are they more off the cuff? Would love any advice/thoughts.
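
    The "step down to the lowest neighbour" idea is usually implemented as a flow-accumulation pass, which is documented in the terrain-generation and GIS literature rather than being purely off the cuff. A hedged sketch over a heightmap grid: every cell contributes one unit of rain and passes its accumulated flow to its lowest lower neighbour; cells whose flow exceeds a threshold become rivers, and pits are where a fuller version would pool water into lakes and spill over:

        final class FlowMap {
            static int[][] flowAccumulation(float[][] height) {
                int h = height.length, w = height[0].length;
                int[][] flow = new int[h][w];

                // Visit cells from highest to lowest so upstream flow is already known.
                Integer[] order = new Integer[w * h];
                for (int i = 0; i < order.length; i++) order[i] = i;
                java.util.Arrays.sort(order, (a, b) ->
                        Float.compare(height[b / w][b % w], height[a / w][a % w]));

                for (int idx : order) {
                    int y = idx / w, x = idx % w;
                    flow[y][x] += 1;                        // one unit of rain falls here
                    int bx = -1, by = -1;
                    float best = height[y][x];
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dx = -1; dx <= 1; dx++) {
                            int nx = x + dx, ny = y + dy;
                            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                            if (height[ny][nx] < best) { best = height[ny][nx]; bx = nx; by = ny; }
                        }
                    if (bx >= 0) flow[by][bx] += flow[y][x]; // pass everything downhill
                    // else: a pit; a fuller version would fill a lake here and spill over.
                }
                return flow;                                // flow[y][x] > threshold => river
            }
        }

    Carving the riverbeds slightly into the heightmap after this pass (or running a droplet-based hydraulic erosion step) is a common follow-up that makes the rivers read as valleys rather than painted lines.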


  • Unity: problem with colliding instances of the same object

    - by Kuba Sienkiewicz
    I want to check whether an object's instance is overlapping with another instance (any spawned object with any other spawned object, not necessarily the same object). I'm doing this by detecting collisions between bodies. But I have a problem: spawned objects (instances) detect collisions with everything except other spawned objects. I've checked the collision layers etc. All of the spawned objects have rigidbodies and mesh colliders. Also, when I attach my script to another body and touch that body with an instanced object, the collision is detected. So the problem only shows up in collisions between spawned objects. One more piece of information: the script, rigidbody and collider are attached to a child of the main object.

        using UnityEngine;
        using System.Collections;

        public class CantPlace : MonoBehaviour {
            public bool collided = false;

            // Use this for initialization
            void Start () {
            }

            // Update is called once per frame
            void Update () {
                //Debug.Log (collided);
            }

            void OnTriggerEnter(Collider collider) {
                //if (true) {
                //foreach (Transform child in this.transform) {
                //    if (child.name == "Cylinder") {
                //collided = true;
                Color c;
                c = this.renderer.material.color;
                c.g = 0f;
                c.b = 1f;
                c.r = 0f;
                this.renderer.material.color = c;
                Debug.Log (collider.name);
                //}
                //    }
                //}
                //foreach (ContactPoint contact in collision.contacts) {
                //    Debug.DrawRay(contact.point, contact.normal, Color.red, 15f);
                //}
            }
        }


  • Rendering output to an arbitrary quadrilateral

    - by Trainee4Life
    I want to render output on a device to an arbitrary quadrilateral, i.e. project a texture onto a quad. What are the possible ways I could implement this? So far, I have investigated:

    - Drawing a textured quadrilateral: quads look odd because they are composed of triangles, and the distortion looks wrong. The issue I'm facing has been discussed here and here as well.
    - Setting a transformation on the device: I need help getting this implemented.
    - Pixel shaders: I have not been able to produce the desired effect.

    Any help would be much appreciated.


  • Is it possible to construct a cube with fewer than 24 vertices?

    - by Telanor
    I have a cube-based world like Minecraft and I'm wondering if there's a way to construct a cube with fewer than 24 vertices so I can reduce memory usage. It doesn't seem possible to me, for two reasons: the normals wouldn't come out right, and per-face textures wouldn't work. Is this the case, or am I wrong? Maybe there's some fancy new DX11 tech that can help? Edit: Just to clarify, I have two requirements: I need surface normals for each cube face in order to do proper lighting, and I need a way to address a different index in a texture array for each cube face.


  • SSAO implementation

    - by Irbis
    I'm trying to implement SSAO based on this tutorial: link. I use deferred rendering and world-space coordinates for the shading calculations. When filling the G-buffer, the vertex shader output looks like this:

        worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
        normal = normalize(normalModelMatrix * inNormal);
        gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculation I render the scene as a full-screen quad and save an occlusion parameter in a texture. (Vertex positions in world space: link. Normals in world space: link.) SSAO implementation:

        subroutine (RenderPassType)
        void ssao()
        {
            vec2 texCoord = CalcTexCoord();
            vec3 worldPos = texture(texture0, texCoord).xyz;
            vec3 normal = normalize(texture(texture1, texCoord).xyz);

            vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
            vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            mat3 tbn = mat3(tangent, bitangent, normal);

            float occlusion = 0.0;
            float radius = 4.0;

            for (int i = 0; i < kernelSize; ++i)
            {
                vec3 pix = tbn * kernel[i];
                pix = pix * radius + worldPos;

                vec4 offset = vec4(pix, 1.0);
                offset = ProjectionMatrix * ViewMatrix * offset;
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;

                float sample_depth = texture(texture0, offset.xy).z;
                float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
                occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
            }

            outputColor = vec4(occlusion, occlusion, occlusion, 1);
        }

    That code gives the following results: camera looking towards -z in world space: link; camera looking towards +z in world space: link. I wonder whether it is even possible to use world coordinates in the above code. When I move the camera I get different results, because world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas where there is no occlusion, so the ground should have non-white areas only near the object.


  • Are there any open source projects for car engine sound simulation?

    - by Petteri Hietavirta
    I have been thinking about how to create realistic sound for a car. The main sound is the engine, then all kinds of wind, road and suspension sounds. Are there any open source projects for engine sound simulation? Simply pitching up a sample does not sound too great. The ideal would be something that allows me to pick the type of the engine (i.e. inline-4 vs V8), add extras like turbo/supercharger whine, and finally set the load and RPM. Edit: something like http://www.sonory.org/examples.html


  • Finding the shorter turning direction towards a target

    - by A.B.
    I'm trying to implement a type of movement where the object gradually turns to face the target. The problem I've run into is figuring out which turning direction is faster. The following code works until the object's orientation crosses the -PI or PI threshold, at which point it starts turning in the opposite direction.

        void moveToPoint(sf::Vector2f destination)
        {
            if (destination == position) return;

            auto distance = distanceBetweenPoints(position, destination);
            auto direction = angleBetweenPoints(position, destination);

            /// Decides whether incrementing or decrementing orientation is faster
            /// the next line is the problem
            if (atan2(sin(direction - rotation), cos(direction - rotation)) > 0) {
                /// Increment rotation
                rotation += rotation_speed;
            } else {
                /// Decrement rotation
                rotation -= rotation_speed;
            }

            if (distance < movement_speed) {
                position = destination;
            } else {
                position.x = position.x + movement_speed * cos(rotation);
                position.y = position.y + movement_speed * sin(rotation);
            }

            updateGraphics();
        }

    'rotation' and 'rotation_speed' are implemented as a custom data type for radians which cannot have values lower than -PI or greater than PI. Any excess or deficit "wraps around"; for example, -3.2 becomes roughly 3.08.
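
    For reference, a common way to phrase the decision (sketched in Java rather than the SFML/C++ above, purely as an illustration) is to wrap the signed difference between target and current rotation into (-PI, PI] and then clamp the step so the turn never overshoots, which also removes jitter once the object is almost aligned:

        final class Turning {
            // Wrap any angle into the range (-PI, PI].
            static double wrap(double a) {
                a = Math.IEEEremainder(a, 2.0 * Math.PI);
                if (a <= -Math.PI) a += 2.0 * Math.PI;
                return a;
            }

            // Returns the new rotation after turning toward `target` by at most `maxStep`.
            static double turnToward(double rotation, double target, double maxStep) {
                double diff = wrap(target - rotation);               // shortest signed difference
                double step = Math.max(-maxStep, Math.min(maxStep, diff));
                return wrap(rotation + step);
            }
        }

    For example, turning from 3.0 toward -3.0 gives a wrapped difference of about +0.28, so the object turns up through PI instead of sweeping the long way around, which is exactly the case where the raw sign of (direction - rotation) misleads.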

