Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • What has the most efficient intersection test against an AABB tree - OBB, Cylinder or Capsule?

    - by identitycrisisuk
    I'm currently trying to find collisions in 3D between a volume tighter than an AABB and a tree of AABB volumes. I just need to know whether they are intersecting; no closest distance or collision response is required. An OBB, cylinder or capsule would all roughly fit this purpose. The cylinder and capsule were the first shapes I thought of, but I have found little information online about detecting intersections with them. Am I right in thinking that they would always be more complex to run Separating Axis Tests on, even though they might seem like simpler shapes? I figure that by the time I get my head around SAT for curved shapes I could have done the whole thing with OBBs, but I wanted to find out for sure.
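    For reference, a minimal sketch of the closest-point style test that avoids SAT for the curved shapes, assuming a world-space AABB: a sphere overlaps a box when the box's closest point to the sphere centre lies within the radius, and a capsule extends this by using the closest point on its core segment instead of a single centre. All names below are illustrative, not from the original post.

        #include <algorithm>

        struct Vec3 { float x, y, z; };
        struct AABB { Vec3 min, max; };

        // Squared distance from a point to an AABB: clamp the point onto the box.
        static float SqDistPointAABB(const Vec3& p, const AABB& b)
        {
            const float px[3] = { p.x, p.y, p.z };
            const float mn[3] = { b.min.x, b.min.y, b.min.z };
            const float mx[3] = { b.max.x, b.max.y, b.max.z };
            float d = 0.0f;
            for (int i = 0; i < 3; ++i) {
                float v = std::min(std::max(px[i], mn[i]), mx[i]); // closest point on the box
                d += (px[i] - v) * (px[i] - v);
            }
            return d;
        }

        // Sphere-vs-AABB overlap; a capsule test would run this with the closest
        // point on the capsule's segment to the box in place of 'centre'.
        static bool SphereOverlapsAABB(const Vec3& centre, float radius, const AABB& box)
        {
            return SqDistPointAABB(centre, box) <= radius * radius;
        }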

    Read the article

  • Opengl + SDL linking error

    - by me2loveit2
    I am trying to load an image as a texture with OpenGL using C++ in Visual Studio 2010. I researched a couple of hours online and found the SDL library, then I implemented a simple example and got a linking error I cannot seem to figure out. The error log is here:
        1>Build started 10/20/2012 12:09:17 AM.
        1>InitializeBuildStatus:
        1>  Touching "Debug\texture mapping test.unsuccessfulbuild".
        1>ClCompile:
        1>  All outputs are up-to-date.
        1>  texture mapping test.cpp
        1>ManifestResourceCompile:
        1>  All outputs are up-to-date.
        1>texture mapping test.obj : error LNK2019: unresolved external symbol _IMG_Load referenced in function "void __cdecl display(void)" (?display@@YAXXZ)
        1>MSVCRTD.lib(crtexe.obj) : error LNK2019: unresolved external symbol main referenced in function __tmainCRTStartup
        1>C:\Users\Me\Documents\Visual Studio 2010\Projects\Programming projects\texture mapping test\Debug\texture mapping test.exe : fatal error LNK1120: 2 unresolved externals
        1>Build FAILED.
        1>Time Elapsed 00:00:02.45
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
    Can someone please help me!! I am at a desperate point right now. I downloaded SDL and copied all the .h files into C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Include. I added the .lib (x86) files into C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Lib (as a note, I tried the (x64) files too but got the exact same error), and the .dll (x86) into C:\Windows\System32. For implementing textures, I used the simple sample code from: http://www.sdltutorials.com/sdl-tip-sdl-surface-to-opengl-texture Please let me know if you can see me doing something wrong, or know how I can fix this!! Thanks, Phil
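    A hedged sketch of what typically resolves these two LNK2019 errors with this kind of setup: _IMG_Load lives in SDL_image.lib, and the missing main usually means SDLmain.lib is not linked (SDL redefines main via SDL_main, so the entry point must have the full argc/argv signature). The library names assume the classic SDL 1.2 + SDL_image distribution the linked tutorial uses; adjust them to whatever import libraries ship with your SDL build.

        // Link the SDL import libraries (or add them under
        // Project Properties -> Linker -> Input -> Additional Dependencies).
        #pragma comment(lib, "SDL.lib")
        #pragma comment(lib, "SDLmain.lib")   // entry point glue; addresses "unresolved external symbol main"
        #pragma comment(lib, "SDL_image.lib") // provides IMG_Load; addresses "_IMG_Load"

        #include <SDL.h>
        #include <SDL_image.h>

        // SDL #defines main to SDL_main on Windows, so use this exact signature.
        int main(int argc, char* argv[])
        {
            /* ... init, load texture, run ... */
            return 0;
        }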

    Read the article

  • Why do I have an error when adding states in slick?

    - by SystemNetworks
    When I was going to create another state I had an error. This is my code:
        public static final int play2 = 3;
    and
        public Game(String gamename){
            this.addState(new mission(play2));
        }
    and
        public void initStatesList(GameContainer gc) throws SlickException{
            this.getState(play2).init(gc, this);
        }
    I have an error on the addState line in the code above. I don't know where the problem is, but if you want the whole code it is here:
        package javagame;
        import org.newdawn.slick.*;
        import org.newdawn.slick.state.*;
        public class Game extends StateBasedGame{
            public static final String gamename = "NET FRONT";
            public static final int menu = 0;
            public static final int play = 1;
            public static final int train = 2;
            public static final int play2 = 3;
            public Game(String gamename){
                super(gamename);
                this.addState(new Menu(menu));
                this.addState(new Play(play));
                this.addState(new train(train));
                this.addState(new mission(play2));
            }
            public void initStatesList(GameContainer gc) throws SlickException{
                this.getState(menu).init(gc, this);
                this.getState(play).init(gc, this);
                this.getState(train).init(gc, this);
                this.enterState(menu);
                this.getState(play2).init(gc, this);
            }
            public static void main(String[] args) {
                try{
                    AppGameContainer app = new AppGameContainer(new Game(gamename));
                    app.setDisplayMode(1500, 1000, false);
                    app.start();
                }catch(SlickException e){
                    e.printStackTrace();
                }
            }
        }
        //SYSTEM NETWORKS(C) 2012 NET FRONT

    Read the article

  • How can I make Maya export a mesh as double-sided?

    - by bobobobo
    I'm exporting from Maya 2009 to OBJ. The mesh I'm exporting has "Double Sided" checked in its Render Stats, but when the polygon is exported, only a single side is actually exported. What really needs to happen is that for each double-sided polygon, two polygons are exported, facing in opposite directions. I can do this manually, but is there a way to make the OBJ exporter do it for me?
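    If the exporter won't do it, the duplication described above is mechanical: for every face in the OBJ, emit a second copy with the vertex order reversed so it faces the other way. A rough post-processing sketch, assuming triangulated faces with plain `f v1 v2 v3` records (normals and quads are not handled here):

        #include <fstream>
        #include <sstream>
        #include <string>

        // Append a reversed-winding copy of every triangle face so each polygon
        // is effectively double sided. All other records pass through untouched.
        void DoubleSideObj(const std::string& inPath, const std::string& outPath)
        {
            std::ifstream in(inPath);
            std::ofstream out(outPath);
            std::string line;
            while (std::getline(in, line)) {
                out << line << "\n";
                if (line.rfind("f ", 0) == 0) {              // a face record
                    std::istringstream ss(line.substr(2));
                    std::string a, b, c;
                    if (ss >> a >> b >> c)
                        out << "f " << a << " " << c << " " << b << "\n"; // reversed winding
                }
            }
        }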

    Read the article

  • How do I pass vertex and color positions to OpenGL shaders?

    - by smoth190
    I've been trying to get this to work for the past two days, telling myself I wouldn't ask for help. I think you can see where that got me... I thought I'd try my hand at a little OpenGL, because DirectX is complex and depressing. I picked OpenGL 3.x because, even though I have an OpenGL 4 graphics card, not all my friends do, and I like to let them use my programs. There aren't really any great tutorials for OpenGL 3; most are just "type this and this will happen--the end". I'm trying to just draw a simple triangle, and so far, all I have is a blank screen with my clear color (when I set the draw type to GL_POINTS I just get a black dot). I have no idea what the problem is, so I'll just slap down the code. Here is the function that creates the triangle:
        void CEntityRenderable::CreateBuffers()
        {
            m_vertices = new Vertex3D[3];
            m_vertexCount = 3;
            m_vertices[0].x = -1.0f; m_vertices[0].y = -1.0f; m_vertices[0].z = -5.0f;
            m_vertices[0].r = 1.0f;  m_vertices[0].g = 0.0f;  m_vertices[0].b = 0.0f;  m_vertices[0].a = 1.0f;
            m_vertices[1].x = 1.0f;  m_vertices[1].y = -1.0f; m_vertices[1].z = -5.0f;
            m_vertices[1].r = 1.0f;  m_vertices[1].g = 0.0f;  m_vertices[1].b = 0.0f;  m_vertices[1].a = 1.0f;
            m_vertices[2].x = 0.0f;  m_vertices[2].y = 1.0f;  m_vertices[2].z = -5.0f;
            m_vertices[2].r = 1.0f;  m_vertices[2].g = 0.0f;  m_vertices[2].b = 0.0f;  m_vertices[2].a = 1.0f;
            //Create the VAO
            glGenVertexArrays(1, &m_vaoID);
            //Bind the VAO
            glBindVertexArray(m_vaoID);
            //Create a vertex buffer
            glGenBuffers(1, &m_vboID);
            //Bind the buffer
            glBindBuffer(GL_ARRAY_BUFFER, m_vboID);
            //Set the buffers data
            glBufferData(GL_ARRAY_BUFFER, sizeof(m_vertices), m_vertices, GL_STATIC_DRAW);
            //Set its usage
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3D), 0);
            glVertexAttribPointer(1, 4, GL_FLOAT, GL_TRUE, sizeof(Vertex3D), (void*)(3*sizeof(float)));
            //Enable
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            //Check for errors
            if(glGetError() != GL_NO_ERROR)
            {
                Error("Failed to create VBO: %s", gluErrorString(glGetError()));
            }
            //Unbind...
            glBindVertexArray(0);
        }
    The Vertex3D struct is as such...
        struct Vertex3D
        {
            Vertex3D() : x(0), y(0), z(0), r(0), g(0), b(0), a(1) {}
            float x, y, z;
            float r, g, b, a;
        };
    And finally the render function:
        void CEntityRenderable::RenderEntity()
        {
            //Render...
            glBindVertexArray(m_vaoID); //Use our attribs
            glDrawArrays(GL_POINTS, 0, m_vertexCount);
            glBindVertexArray(0); //unbind
            OnRender();
        }
    (And yes, I am binding and unbinding the shader. That is just in a different place.) I think my problem is that I haven't fully wrapped my mind around this whole VertexAttribArray thing (the only thing I liked better in DirectX was input layouts D:). This is my vertex shader:
        #version 330
        //Matrices
        uniform mat4 projectionMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 modelMatrix;
        //In values
        layout(location = 0) in vec3 position;
        layout(location = 1) in vec3 color;
        //Out values
        out vec3 frag_color;
        //Main shader
        void main(void)
        {
            //Position in world
            gl_Position = vec4(position, 1.0);
            //gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);
            //No color changes
            frag_color = color;
        }
    As you can see, I've disabled the matrices, because that just makes debugging this thing so much harder. I tried to debug using glslDevil, but my program just crashes right before the shaders are created... so I gave up with that. This is my first shot at OpenGL since the good old days of LWJGL, but that was when I didn't even know what a shader was. Thanks for your help :)
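    One thing that stands out in the quoted code, offered as an observation rather than a guaranteed fix: m_vertices is a Vertex3D* allocated with new[], so sizeof(m_vertices) is the size of the pointer, not of the three vertices, and glBufferData uploads only 4 or 8 bytes. Sizing the buffer explicitly avoids that. Note also that with the matrices disabled, a z of -5.0 lies outside the -w..w clip range, so the untransformed triangle would be clipped even with a correctly filled buffer.

        // Upload all three vertices, not sizeof(a pointer).
        glBufferData(GL_ARRAY_BUFFER,
                     m_vertexCount * sizeof(Vertex3D),   // 3 vertices * 7 floats each
                     m_vertices,
                     GL_STATIC_DRAW);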

    Read the article

  • Using box2d DrawDebugData with multi layer scene ?

    - by Mr.Gando
    In my game, a Scene is composed of several layers. Each layer has different camera transformations. This way I can have a layer at z=3 (GUI), z=2 (Monsters), z=1 (scrolling background), and these 3 layers compose my whole Scene. My render loop looks something like:
        renderLayer()
            applyTransformations()
            renderVisibleEntities()
            renderChildLayers()
        end
    If I call DrawDebugData() in the render loop, the whole b2world debug data will be rendered once for each layer in my scene, which generates a mess because the "debug boxes" get duplicated, some of them get the camera transformations applied and some of them don't, etc. What I would like to do is make DrawDebugData draw only certain debug boxes. That way, I could call something like b2world->DrawDebugDataForLayer(int layer_id) on each layer, like:
        renderLayer()
            applyTransformations()
            renderVisibleEntities()
            //Only render my corresponding layer debug data
            b2world->DrawDebugDataForLayer(layer_id)
            renderChildLayers()
        end
    Is there a way to subclass b2World so I could add this functionality (specific to my game)? If not, what would be the best way to achieve this? (Cocos2d uses a similar scene graph approach and Box2D, but I'm not sure if debugDraw works in Cocos2d...) Thanks
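    Short of patching the library, one workable sketch (an assumption about this kind of layer setup, not the poster's code; kPhysicsLayerId is a hypothetical name) is to let exactly one layer own the physics debug pass, so the world is drawn once under a single, known camera transform:

        // Inside renderLayer(), after applyTransformations():
        // only the layer that owns the b2World draws the physics debug shapes,
        // so they are rendered once and inherit that layer's camera transform.
        const int kPhysicsLayerId = 2;   // hypothetical id of the layer that owns the b2World
        if (layer_id == kPhysicsLayerId) {
            b2world->DrawDebugData();    // drawn exactly once, under this layer's transform
        }

    If per-layer filtering of individual bodies is really needed, walking b2World::GetBodyList() and drawing only the bodies whose user data tags them with the current layer avoids subclassing b2World at all.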

    Read the article

  • Character with several colliders and rigidbodies

    - by Lautaro
    I am making a PvP fighting game. This is the GameObject hierarchy of the player character. Player contains:
        Legs
        Sword
        Torso
        Head
    I want to be able to:
        Register impacts of the sword on a specific body part
        Use AddForce on the whole player entity when a body part is struck
        Change the animation of the player that owns the sword that hit
    Questions:
        Is it correct that the only rigidbody should be on the root Player GameObject?
        Is it correct that the body parts should have colliders and be triggers?
        Is it correct that the swords should have colliders but not be triggers?

    Read the article

  • Algorithm to simplify building/structural meshes

    - by morpheus
    I am looking for an algorithm to simplify the meshes of buildings or similar structures. EDIT: I had made a comment that Hoppe's algorithm tends to make meshes more and more spherical with simplification, but I am not sure about it, so I am deleting the comment. Buildings, in contrast, should tend to become more and more rectangular with increasing simplification. The D3DX extensions for D3D in version 9.0 (d3dx9.lib) used to have classes to do progressive mesh simplification. See: http://doc.51windows.net/Directx9_SDK/?url=/directx9_sdk/graphics/reference/d3dx/functions/mesh/d3dxgeneratepmesh.htm http://msdn.microsoft.com/en-us/library/windows/desktop/bb281243(v=vs.85).aspx

    Read the article

  • Partial Shader Signatures HLSL D3D11 C++

    - by ThePhD
    I had been debugging a problem I was having in a single shader file with 2 functions in it. I'm using DirectX 11, vs_5_0 and ps_5_0. I have stripped it down to its basic components to understand what was going wrong with the shaders, because the different named components of the Pixel and Vertex shaders were swapping the data being input: void QuadVertex ( inout float4 position : SV_Position, inout float4 color : COLOR0, inout float2 tex : TEXCOORD0 ) { // ViewProject is a 4x4 matrix, // just included here to show the simple passthrough of the data position = mul(position, ViewProjection); } And a Pixel Shader: float4 QuadPixel ( float4 color : COLOR0, float2 tex : TEXCOORD0 ) : SV_Target0 { // Color is filled with position data and tex is // filled with color values from the Vertex Shader return color; } The ID3D11InputLayout and associated C++ code correctly compiles the shaders and sets them up with some simple primitive data: data[0].Position.x = 0.0f * 210; data[0].Position.y = 1.0f * 160; data[0].Position.z = 0.0f; data[1].Position.x = 0.0f * 210; data[1].Position.y = 0.0f * 160; data[1].Position.z = 0.0f; data[2].Position.x = 1.0f * 210; data[2].Position.y = 1.0f * 160; data[2].Position.z = 0.0f; data[0].Colour = Colors::Red; data[1].Colour = Colors::Red; data[2].Colour = Colors::Red; data[0].Texture = Vector2::Zero; data[1].Texture = Vector2::Zero; data[2].Texture = Vector2::Zero; When used with the shader, the float4 color always ended up with the position data, and the float2 tex always ended up with the color data. After a moment, I figured out that the shader's input and output signatures needed to be in the correct order and the correct format and be laid out in the exact order of the output from the Vertex Shader, regardless of the semantics: float4 QuadPixel ( float4 pos : SV_Position, float4 color : COLOR0, float2 tex : TEXCOORD0 ) : SV_Target0 { return color; } After finding this out, My question is: Why don't the semantics map the appropriate components when going from Vertex Shader to Pixel Shader? Is there any way that I can make it so certain semantics are always mapped to other semantics, or do I always have to follow the rigid Shader Signature (in this case, Position, Color, and Texture) ? As a side note for why I'm asking: I know that when using XNA, my shader signatures for functions could differ in position and even drop items from Vertex Shader to Pixel Shader function parameters, having only the COLOR0 and TEXCOORD0 components being used (and it would still match up correctly). However, I also know that XNA relied on DX9 (and maybe a little DX10) implementation, and that maybe this kind of flexibility no longer exists in DX11?

    Read the article

  • 3d Picking under reticle

    - by Wolftousen
    I'm currently trying to work out some 3D picking code that I started years ago, but then lost interest in once the assignment was completed (this part wasn't actually part of the assignment). I am not using the mouse coords for picking; I'm just using the position in 3D space and a ray directly out from there. A small hitch, though, is that I want to use a cone and not a ray. Here are the variables I'm using:
        float iReticleSlope = 95/3000; //inverse reticle slope
        float baseReticle = 1;         //radius of the reticle at z = 0
        float maxRange = 3000;         //max range to target
        Quaternion orientation;        //the cameras orientation
        Vector3d position;             //the cameras position
    Then I loop through each object in the world:
        Vector3d transformed; //object position after transformations
        float d, r;           //holder variables
        for(i = 0; i < objects.length; i++)
        {
            transformed = position - objects[i].position; //transform the position relative to camera
            orientation.multiply(transformed);            //orient the object relative to the camera
            if(transformed.z < 0)
            {
                d = sqrt(transformed[0] * transformed[0] + transformed[1] * transformed[1]);
                r = -transformed[2] * iReticleSlope + objects[i].radius;
                if(d < r && -transformed[2] - objects[i].radius <= maxRange)
                {
                    //the object is under the reticle
                }
                else
                {
                    //the object is not under the reticle
                }
            }
            else
            {
                //the object is not under the reticle
            }
        }
    Now this all works fine and dandy until the window ratio doesn't match the resolution ratio. Is there any simple way to account for that?
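    One detail worth flagging in the quoted snippet (an observation, not the aspect-ratio fix being asked about): 95/3000 is integer division in C-family languages, so iReticleSlope silently becomes 0 and the reticle cone collapses to a cylinder of radius objects[i].radius. Writing the constants as floats keeps the intended slope:

        float iReticleSlope = 95.0f / 3000.0f; // inverse reticle slope; 95/3000 would truncate to 0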

    Read the article

  • Keypress Left is called twice in Update when key is pressed only once

    - by Simran kaur
    I have a piece of code that changes the position of the player when the left key is pressed. It is inside the Update() function. I know Update is called multiple times, but since I have an if statement to check whether the left arrow is pressed, it should update only once. I have tested with a print statement that when the key is pressed once, it gets called twice. Problem: the position is updated twice when the key is pressed only once. This is the structure of my code:
        void Update() {
            if (Input.GetKeyDown (KeyCode.LeftArrow)) {
                print ("PRESSEEEEEEEEEEEEEEEEEEDDDDDDDDDDDDDD");
            }
        }
    I looked it up on the web and what was suggested is this:
        if (Event.current.type == EventType.KeyDown && Event.current.keyCode == KeyCode.LeftArrow) {
            print("pressed");
        }
    But it gives me an error that says: Object reference not set to an instance of an object. How can I fix this?

    Read the article

  • Fast determination of whether objects are onscreen in 2D

    - by Ben Ezard
    So currently, I have this in each object's renderer's update method:
        float a = transform.position.x * Main.scale;
        float b = transform.position.y * Main.scale;
        float c = Camera.main.transform.position.x * Main.scale;
        float d = Camera.main.transform.position.y * Main.scale;
        onscreen = a + width - c > 0 && a - c < GameView.width && b + height - d > 0 && b - d < GameView.height;
    transform.position is a 2D vector containing the game engine's definition of where the object is - this is then multiplied by Main.scale to translate that coordinate into actual screen space. Similarly, Camera.main.transform.position is the in-engine representation of where the main camera is, and this is also multiplied by Main.scale. The problem is, as my game is tile-based, thousands of these updates get called every frame, just to determine whether or not each object should be drawn - how can I improve this please?
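    Since the objects live on a tile grid, one common improvement is to invert the loop: compute the camera's visible rectangle once per frame and only touch the tiles whose indices fall inside it, so the per-object test disappears entirely. A rough sketch under that assumption (the names are illustrative, not from the poster's engine):

        #include <algorithm>
        #include <cmath>

        // Visible tile index range for an axis-aligned camera over a uniform grid.
        struct TileRange { int x0, y0, x1, y1; };

        TileRange VisibleTiles(float camX, float camY,    // camera top-left in world units
                               float viewW, float viewH,  // view size in world units
                               float tileSize, int gridW, int gridH)
        {
            TileRange r;
            r.x0 = std::max(0, (int)std::floor(camX / tileSize));
            r.y0 = std::max(0, (int)std::floor(camY / tileSize));
            r.x1 = std::min(gridW - 1, (int)std::floor((camX + viewW) / tileSize));
            r.y1 = std::min(gridH - 1, (int)std::floor((camY + viewH) / tileSize));
            return r;   // draw only tiles with indices x0..x1, y0..y1
        }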

    Read the article

  • How to export a C++ class library to C# using a dll?

    - by SICGames2013
    In my previous revision of the game engine I exported the major functions for the game editor to C#. Now I'm beginning to revise the game engine as a static library. There is already a dynamic library created in C++ that uses DLLEXPORT for C#. Just now I want to test out the newer functions, so I created a DLL file from C++. Because the DLL contains classes, I was wondering how I would be able to use DLL Export. Would I do this:
        [DLLEXPORT("GameEngine.dll", EntryPoint="SomeClass", Conventional=_stdcall)]
        static extern void functionFromClass();
    I have a feeling it's probably DLLImport and not DLLExport. How would I go about this? Another way I was thinking: because I already have the C++ DLL prepared to go with the C# class library, I could just keep the new engine as a lib and link that lib with the old C++ DLL. Wouldn't the EntryPoint be able to point to the class the function is in?
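    P/Invoke can only bind to C-style exports, not to C++ classes directly, so the usual pattern is a flat extern "C" wrapper around the class on the C++ side, consumed from C# with [DllImport]. A hedged sketch of the C++ side only (SomeClass and the wrapper names are placeholders, not the poster's engine):

        // GameEngineExports.cpp -- flat C interface around a C++ class for P/Invoke.
        #define GAME_API extern "C" __declspec(dllexport)

        class SomeClass {
        public:
            void FunctionFromClass() { /* ... */ }
        };

        GAME_API SomeClass* SomeClass_Create()                         { return new SomeClass(); }
        GAME_API void       SomeClass_Destroy(SomeClass* p)            { delete p; }
        GAME_API void       SomeClass_FunctionFromClass(SomeClass* p)  { p->FunctionFromClass(); }

        // C# side, for reference:
        //   [DllImport("GameEngine.dll")] static extern IntPtr SomeClass_Create();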

    Read the article

  • Alternatives to the GPL

    - by Bane
    I made a game, and I am currently making a game engine. I want them both to be completely free and open source. What license should I choose? I was reading a bit about the GPL, but that seems to be more suited for system code and libraries, AFAIK, as it doesn't permit the use of code in proprietary software - which, in turn, implies that the code can be used in the first place. I can see that, obviously, game engines can be considered libraries, and therefore be used, but what about game code? Is there an alternative to the GPL?

    Read the article

  • PhysX Capsule Character Controller floating above ground

    - by Jannie
    I am using PhysX Version 3.0.2 in the simulation package I'm working on, and I've encountered some bizarre behavior with the capsule character controller. When I set the controller's height and radius to the appropriate values (r = 0.25, h = 1.86) it behaves correctly (moving along the ground, colliding with other objects, and so on) except that the capsule itself is floating above the ground. The actor will then bump his head when trying to get through a door, since the capsule is the correct height but also floating above the ground. This image should illustrate what I'm going on about: one can clearly see that the rest of the scene has its collision bodies wrapped correctly; it's just the capsule that's going wrong! The stop-gap I've implemented is creating a smaller capsule and giving it an offset, but I need to implement ray-picking for the controller next, so the capsule has to surround the character model properly. Here's my character creation code (with height = 1.86f and radius = 0.25f):
        NxController* D3DPhysXManager::CreateCharacterController( std::string l_stdsControllerName, float l_fHeight, float l_fRadius, D3DXVECTOR3 l_v3Position )
        {
            NxCapsuleControllerDesc l_CapsuleControllerDescription;
            l_CapsuleControllerDescription.height = l_fHeight;
            l_CapsuleControllerDescription.radius = l_fRadius;
            l_CapsuleControllerDescription.position.set( l_v3Position.x, l_v3Position.y, l_v3Position.z );
            l_CapsuleControllerDescription.callback = &this->m_ControllerHitReport;
            NxController* l_pController = this->m_pControllerManager->createController( this->m_pScene, l_CapsuleControllerDescription );
            this->m_pControllerMap.insert( l_ControllerValuePair( l_stdsControllerName, l_pController ) );
            return l_pController;
        }
    Any help at all would be appreciated; I just can't figure this one out! P.S. I've found a couple of (rather old) threads describing the same issue, but it seems they couldn't find a solution either. Here are the links:
        http://forum-archive.developer.nvidia.com/index.php?showtopic=6409
        http://forum-archive.developer.nvidia.com/index.php?showtopic=3272
        http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=23003

    Read the article

  • Scrolling Box2D DebugDraw

    - by onedayitwillmake
    I'm developing a game using Box2D (the javascript implementation, Box2DWeb), and I would like to know how I can pan the debug draw. I know the usual answer is "don't use debug draw, it's just for debugging" - I'm only using it for debugging, but not all my objects are on the same screen, and I'd like to see where they are in the physics representation. How can I pan the debug drawing? As you can see, the debug draw output is shown in the top left, but it only shows a small part of the world. Here is an example of what I mean: http://onedayitwillmake.com/ChuClone/ The game is open source, if you'd like to poke through and point out anything I'm doing that is obviously wrong: https://github.com/onedayitwillmake/ChuClone

    Read the article

  • Game Editor - When screen is clicked, how do you identify which object that is clicked?

    - by Deukalion
    I'm trying to create a game editor, currently just placing different types of shapes and such. I'm doing this in Windows Forms while drawing the 3D with XNA. So, if I have a couple of shapes on the screen and I click the screen, I want to be able to identify which of these objects was clicked. What is the best method for this? If two objects are one behind the other, it should recognize the one in front and not the one behind it; and if I rotate the camera and click on the one behind, it should identify that one and not the first one. Are there any smart ways to go about this?

    Read the article

  • Path Modifier in Tower Of Defense Game

    - by Siddharth
    I implemented a PathModifier for the path of each enemy in my tower defense game, and I give the PathModifier a fixed time in which the enemy completes its path, like the following code describes:
        new PathModifier(speed, path);
    Here speed defines the time to complete the path. But my problem in a tower defense game is that there is a tower which slows down the movement of the enemy, and in that particular situation I am stuck. Please, can someone give me some guidance on what to do in this situation?
    EDIT:
        Path path = new Path(wayPointList.size());
        for (int j = 0; j < wayPointList.size(); j++) {
            Point point = grid.getCellPoint(wayPointList.get(j).getRow(), wayPointList.get(j).getCol());
            path.to(point.x, point.y);
        }

    Read the article

  • Non-unique display names?

    - by Davy8
    I know of at least one big-title game (Starcraft II) that doesn't require unique display names, so it would seem like it can work in at least some circumstances. Under what situations does allowing non-unique display names work well? When does it not work well? Does it come down to whether or not impersonation of someone else is a problem? The reason I believe it works for Starcraft II is that there isn't any kind of in-game trading of virtual goods, and other than "for kicks" there isn't much incentive to impersonate someone else in the game. There are also ladder rankings, so even trying to impersonate a pro is easily detectable unless you're at a similar skill level. What are some other cases where it makes sense to specifically allow or disallow duplicate display names? (I have no idea what to tag this as. I went with game-design because I needed at least 1 tag and I don't have rep to create new ones yet.)

    Read the article

  • Blender 2.64, what are the actual hot-keys for certain actions

    - by Shivan Dragon
    I know this sounds mega lame, but I've looked for the hotkeys for certain actions, first in the application's User Settings (where I didn't find them), then in the official documentation (where I did find some of them, but they're not the right ones): http://wiki.blender.org/index.php/Doc:2.4/Manual/3D_interaction/Transform_Control/Manipulators (Ctrl - Alt - S is recommended for Scale, but instead it opens the Save As... window - I think these changed in the latest versions, but they forgot to update the docs). So then, what are the hotkeys for:
        selecting the translate manipulator
        selecting the rotate manipulator
        selecting the scale manipulator
    In Edit mode:
        select vertex (editing)
        select edges (editing)
        select faces (editing)
    Thanks.

    Read the article

  • Design: How to model / where to store relational data between classes

    - by Walker
    I'm trying to figure out the best design here, and I can see multiple approaches, but none that seems "right." There are three relevant classes here: Base, TradingPost, and Resource. Each Base has a TradingPost which can offer various Resources depending on the Base's tech level. Where is the right place to store the minimum tech level a base must possess to offer any given resource? A database seems like overkill. Putting it in each subclass of Resource seems wrong--that's not an intrinsic property of the Resource. Do I have a mediating class, and if so, how does it work? It's important that I not be duplicating code; that I have one place where I set the required tech level for a given item. Essentially, where does this data belong? P.S. Feel free to change the title; I struggled to come up with one that fits.
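    One way to keep this data in a single place, sketched under the post's own class names (the map-based registry itself is an assumption, not something from the question), is to let the TradingPost own a lookup from resource type to minimum tech level, so neither Base nor the Resource subclasses duplicate it:

        #include <string>
        #include <unordered_map>

        // Single authoritative table: resource id -> minimum tech level needed to offer it.
        class TradingPost {
        public:
            void SetMinTechLevel(const std::string& resourceId, int level) { minTech_[resourceId] = level; }

            bool CanOffer(const std::string& resourceId, int baseTechLevel) const {
                auto it = minTech_.find(resourceId);
                return it != minTech_.end() && baseTechLevel >= it->second;
            }

        private:
            std::unordered_map<std::string, int> minTech_;
        };

        // Usage sketch: post.SetMinTechLevel("fusion_cells", 3);
        //               post.CanOffer("fusion_cells", base.TechLevel());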

    Read the article

  • OpenGL depth texture wrong

    - by CoffeeandCode
    I have been writing a game engine for a while now and have decided to reconstruct my positions from depth... but how I read the depth seems to be wrong :/ What is wrong in my rendering? How I init my depth texture in the FBO gl::BindTexture(gl::TEXTURE_2D, this->textures[0]); // Depth gl::TexImage2D( gl::TEXTURE_2D, 0, gl::DEPTH32F_STENCIL8, width, height, 0, gl::DEPTH_STENCIL, gl::FLOAT_32_UNSIGNED_INT_24_8_REV, nullptr ); gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, gl::NEAREST); gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, gl::NEAREST); gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_S, gl::CLAMP_TO_EDGE); gl::TexParameterf(gl::TEXTURE_2D, gl::TEXTURE_WRAP_T, gl::CLAMP_TO_EDGE); gl::FramebufferTexture2D( gl::FRAMEBUFFER, gl::DEPTH_STENCIL_ATTACHMENT, gl::TEXTURE_2D, this->textures[0], 0 ); Linear depth readings in my shader Vertex #version 150 layout(location = 0) in vec3 position; layout(location = 1) in vec2 uv; out vec2 uv_f; void main(){ uv_f = uv; gl_Position = vec4(position, 1.0); } Fragment (where the issue probably is) #version 150\n uniform sampler2D depth_texture; in vec2 uv_f; out vec4 Screen; void main(){ float n = 0.00001; float f = 100.0; float z = texture(depth_texture, uv_f).x; float linear_depth = (n * z)/(f - z * (f - n)); Screen = vec4(linear_depth); // It ISN'T because I don't separate alpha } When Rendered so gamedev.stackexchange, what's wrong with my rendering/glsl?

    Read the article

  • Pathfinding with MicroPather : costs calculations with sectors and portals

    - by Adan
    Hello, I'm considering using micropather to help me with pathfinding. I'm not using a discrete map: I'm working in 2D with sectors and portals. However, I'm just wondering what the best way is to compute costs with this library in this context. Just to be clearer about the geometrical shapes I'm using: sectors are basically convex polygons, and portals are segments that lie on a sector's edge. MicroPather exposes a pure virtual Graph class that you must inherit from and override 3 functions. I understand how pathfinding works, so there's no problem overriding those functions. Right now, my implementation gives me results, i.e. I'm able to find a path in my map, but I'm not sure I'm using an optimal solution. For the AdjacentCost method: I just take the distance between the sectors' centers as the cost. I think a better solution would be to use the portal between the two sectors, compute its center, and then the cost would be: distance( sector A center, portal center ) + distance( sector B center, portal center ). I'm pretty sure the approximation I'm using with just the sectors' centers is enough for most cases, but maybe with thin and long sectors that are perpendicular, this approximation could mislead the A* algorithm. For the LeastCostEstimate method: I just use the distance between the two sectors' centers. So, as you understand, I'm always working with the sectors' centers, and it's working fine. And I'm pretty sure there's a better way to do this. Any suggestions or feedback? Thanks in advance!
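    A small sketch of the through-the-portal cost described above, assuming each stored adjacency knows its shared portal segment; this is the value an AdjacentCost override would report for the neighbouring sector (the types and names here are illustrative, not MicroPather's):

        #include <cmath>

        struct Vec2 { float x, y; };

        static float Dist(const Vec2& a, const Vec2& b)
        {
            return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
        }

        // Cost of moving from sector A to sector B through their shared portal:
        // go from A's centre to the portal's midpoint, then on to B's centre.
        float AdjacentCostThroughPortal(const Vec2& centreA, const Vec2& centreB,
                                        const Vec2& portalP0, const Vec2& portalP1)
        {
            Vec2 portalMid { (portalP0.x + portalP1.x) * 0.5f,
                             (portalP0.y + portalP1.y) * 0.5f };
            return Dist(centreA, portalMid) + Dist(portalMid, centreB);
        }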

    Read the article

  • Best way to go for simple online multi-player games?

    - by Mr_CryptoPrime
    I want to create a trivia game for my website. The graphic design does not have to be too fancy, probably no more advanced than a typical flash game. It needs to be secure because I want users to be able to play for real money. It also needs to run fast so users don't spend their time frustrated with game freezing. Compatibility, as with almost all online products, is key because of the large target market. I am most acquainted with Java programming, but I don't want to do it in Java if there is something much better. I am assuming I will have to utilize a variety of different languages in order for everything to come together. If someone could point out the main structure of everything so I could get a good start that would be great! 1) Language choice for simple secure online multiplayer games? 2) Perhaps use a database like MySQL, stored on a secure server for the trivia questions? 3) Free educational resources and even simpler projects to practice? Any ideas or suggestions would be helpful...Thanks!

    Read the article

  • Sensor based vs. AABB based collision

    - by Hillel
    I'm trying to write a simple collision system, which will probably be primarily used for 2D platformers, and I've been planning out an AABB system for a few weeks now, which will work seamlessly with my grid data structure optimization. I picked AABB because I want a simple system, but I also want it to be perfect. Now, I've been hearing a lot lately about a different method to handle collision, using sensors, which are placed in the important parts of the entity. I understand it's a good way to handle slopes, better than AABB collision. The thing is, I can't find a basic explanation of how it works, let alone a comparison of it and the AABB method. If someone could explain it to me, or point me to a good tutorial, I'd very much appreciate it, and also a comparison of the advantages and disadvantages of the two techniques would be nice.
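    For what it's worth, a bare-bones illustration of the sensor idea, under the usual platformer assumptions (a solid-tile grid and a handful of probe points attached to the entity): each sensor is just a point sampled against the world, and slopes work because the foot sensors can report different ground heights. Everything here is a sketch, not a statement of how any particular engine does it.

        #include <vector>

        struct Vec2 { float x, y; };

        // The world, reduced to "is this point inside a solid tile?" (y grows downward).
        struct TileMap {
            float tileSize;
            std::vector<std::vector<bool>> solid;            // solid[row][col]
            bool SolidAt(Vec2 p) const {
                int col = (int)(p.x / tileSize), row = (int)(p.y / tileSize);
                if (row < 0 || col < 0 || row >= (int)solid.size() || col >= (int)solid[0].size()) return false;
                return solid[row][col];
            }
        };

        // Sensor-style ground check: two probes just below the entity's feet,
        // where pos is the top-centre of the entity's box. A full controller
        // would add head and side sensors handled the same way.
        bool OnGround(const TileMap& map, Vec2 pos, float halfWidth, float height)
        {
            Vec2 leftFoot  { pos.x - halfWidth * 0.8f, pos.y + height + 1.0f };
            Vec2 rightFoot { pos.x + halfWidth * 0.8f, pos.y + height + 1.0f };
            return map.SolidAt(leftFoot) || map.SolidAt(rightFoot);
        }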

    Read the article
