Search Results

Search found 29201 results on 1169 pages for 'game development'.

Page 533 of 1169

  • Fast pixelshader 2D raytracing

    - by heishe
    I'd like to do a simple 2D shadow calculation by rendering my environment into a texture and then using raytracing to determine which pixels of the texture are not visible to the point light (simply handed to the shader as a vec2 position). A simple brute-force algorithm per pixel would look like this:

        line_segment = line segment between current pixel of texture and light source
        for each pixel in the texture:
        {
            if pixel is not just empty space && pixel is on line_segment
                output = black
            else
                output = normal color of the pixel
        }

    This is, of course, probably not the fastest way to do it. The question is: what are faster ways to do it, or what optimizations can be applied to this technique?
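
    One common optimization is to march along the ray from the pixel toward the light and stop at the first occluder, instead of testing every texel of the texture against the segment. Below is a minimal CPU-side sketch of that per-pixel test in C++, assuming a hypothetical occupied(x, y) lookup into the environment texture (in a fragment shader this would be a texture fetch in a loop):

        #include <cmath>
        #include <functional>

        // Early-exit visibility test: march from the pixel toward the light and
        // stop at the first blocking texel. "occupied" is a hypothetical lookup.
        bool visibleFromLight(float px, float py, float lx, float ly,
                              const std::function<bool(int, int)>& occupied)
        {
            float dx = lx - px, dy = ly - py;
            int steps = static_cast<int>(std::sqrt(dx * dx + dy * dy)); // ~1 sample per texel
            for (int i = 1; i < steps; ++i)
            {
                float t = static_cast<float>(i) / steps;
                if (occupied(static_cast<int>(px + dx * t), static_cast<int>(py + dy * t)))
                    return false; // first occluder found, this pixel is in shadow
            }
            return true;
        }

    Other speedups that are sometimes used are running the shadow pass at a reduced resolution and upscaling, or building a 1D polar occlusion map around the light so each direction is traced only once.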

    Read the article

  • RGB values from image into a one dimension array in c#

    - by velocityxyz
    I was wondering if there is a way to read RGB values from an image into a one-dimensional array in C#. In case that doesn't make sense, in Java I would do something like this:

        int[] pixels;
        BufferedImage image = ImageIO.read(getClass().getResourceAsStream("asdfghjkl.png"));
        int w = image.getWidth();
        int h = image.getHeight();
        pixels = new int[w * h];
        image.getRGB(0, 0, w, h, pixels, 0, w);

    So any help would be great, or if you can point me in the right direction, that'd be great.

    Read the article

  • Animate from end frame of one animation to end frame of another Unity3d/C#

    - by Timothy Williams
    So I have two legacy FBX animations: Animation A and Animation B. What I'm looking to do is fade from A to B regardless of the current frame A is on. Using animation.CrossFade() will play A in reverse until it reaches frame 0, then play B forward. What I want instead is to blend from the current frame of A to the end frame of B, probably via some sort of lerp between the facial position in A and the facial position in the last frame of B. Does anyone know how I might accomplish this, either via a built-in function or potentially a lerp of some sort?

    Read the article

  • How can I do Mouse Selection In OpenGL 3.0?

    - by NoobScratcher
    Hello, I'm a pretty good programmer: I've made my own 2D games in SDL and built a GUI in 3D using both old and modern OpenGL. But I'm having problems trying to click 3D models in OpenGL, and I honestly have no idea what to do. Do I read back the area that I've clicked? Or what do I do? I'm 100% sure this has been asked before, but I just don't know where to start. Using: OpenGL 3.0, Win32 API, C++.
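
    For what it's worth, the old GL_SELECT picking mode is gone from modern OpenGL, so the two approaches generally used are colour picking (render each object with a unique colour into an offscreen buffer and read back the pixel under the cursor) and ray picking (unproject the mouse position into a world-space ray and test it against each object's bounding volume). A rough sketch of building such a ray with GLM, which is an assumed dependency here and not part of the original question:

        #include <glm/glm.hpp>

        struct Ray { glm::vec3 origin, direction; };

        // Build a world-space picking ray from a window-space mouse position.
        Ray pickingRay(float mouseX, float mouseY, float viewportW, float viewportH,
                       const glm::mat4& view, const glm::mat4& projection)
        {
            // Convert the cursor to normalised device coordinates (note the flipped Y).
            float ndcX = 2.0f * mouseX / viewportW - 1.0f;
            float ndcY = 1.0f - 2.0f * mouseY / viewportH;

            glm::mat4 invVP = glm::inverse(projection * view);
            glm::vec4 nearPoint = invVP * glm::vec4(ndcX, ndcY, -1.0f, 1.0f); // near plane
            glm::vec4 farPoint  = invVP * glm::vec4(ndcX, ndcY,  1.0f, 1.0f); // far plane
            nearPoint /= nearPoint.w;
            farPoint  /= farPoint.w;

            return { glm::vec3(nearPoint),
                     glm::normalize(glm::vec3(farPoint - nearPoint)) };
        }

    The resulting ray can then be intersected with bounding spheres or boxes around the models to find what was clicked.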

    Read the article

  • Bullet Physics: Transform body after adding

    - by Mathias Hölzl
    I would like to transform a rigid body after adding it to the btDiscreteDynamicsWorld. When I use the CF_KINEMATIC_OBJECT flag I am able to transform it, but it behaves as static (no collision response/gravity). When I don't use the CF_KINEMATIC_OBJECT flag, the transform doesn't get applied. So how do I transform non-static objects in Bullet? Demo code:

        btBoxShape* colShape = new btBoxShape(btVector3(SCALING*1, SCALING*1, SCALING*1));

        /// Create Dynamic Objects
        btTransform startTransform;
        startTransform.setIdentity();

        btScalar mass(1.f);

        // rigidbody is dynamic if and only if mass is non zero, otherwise static
        bool isDynamic = (mass != 0.f);

        btVector3 localInertia(0,0,0);
        if (isDynamic)
            colShape->calculateLocalInertia(mass, localInertia);

        btDefaultMotionState* myMotionState = new btDefaultMotionState();
        btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState, colShape, localInertia);
        btRigidBody* body = new btRigidBody(rbInfo);
        body->setCollisionFlags(body->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
        body->setActivationState(DISABLE_DEACTIVATION);

        m_dynamicsWorld->addRigidBody(body);

        startTransform.setOrigin(SCALING*btVector3(btScalar(0), btScalar(20), btScalar(0)));
        body->getMotionState()->setWorldTransform(startTransform);
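
    One possible direction, sketched below against the body created above (this is an assumption, not a confirmed fix): for a dynamic body, Bullet reads the motion state as an output for interpolation rather than as an input, so teleporting the body usually means setting the transform on the body itself as well, clearing any stale velocities, and waking the body up again.

        // Sketch: teleporting a dynamic rigid body that is already in the world.
        // "body" is the btRigidBody created above; CF_KINEMATIC_OBJECT is NOT set.
        btTransform newTransform;
        newTransform.setIdentity();
        newTransform.setOrigin(SCALING * btVector3(0, 20, 0));

        body->setWorldTransform(newTransform);                      // move the collision object itself
        if (body->getMotionState())
            body->getMotionState()->setWorldTransform(newTransform); // keep the graphics side in sync
        body->setLinearVelocity(btVector3(0, 0, 0));                 // optional: drop any old motion
        body->setAngularVelocity(btVector3(0, 0, 0));
        body->activate(true);                                        // wake it up so gravity applies again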

    Read the article

  • Draw multiple objects with textures

    - by Simplex
    I want to draw cubes using textures.

        void OperateWithMainMatrix(ESContext* esContext, GLfloat offsetX, GLfloat offsetY, GLfloat offsetZ)
        {
            UserData *userData = (UserData*) esContext->userData;
            ESMatrix modelview;
            ESMatrix perspective;

            // Manipulation with matrix
            ...

            // cubeFaces holds the cube's vertex coordinates
            glVertexAttribPointer(userData->positionLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces);
            // normals (used in the fragment shader for texturing)
            glVertexAttribPointer(userData->normalLoc, 3, GL_FLOAT, GL_FALSE, 0, cubeFaces);
            glEnableVertexAttribArray(userData->positionLoc);
            glEnableVertexAttribArray(userData->normalLoc);

            // Load the MVP matrix
            glUniformMatrix4fv(userData->mvpLoc, 1, GL_FALSE, (GLfloat*)&userData->mvpMatrix.m[0][0]);

            // Bind base map
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_CUBE_MAP, userData->baseMapTexId);

            // Set the base map sampler to texture unit 0
            glUniform1i(userData->baseMapLoc, 0);

            // Draw the cube
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }

    (The coordinate transformation happens inside OperateWithMainMatrix().) Then the Draw() function is called:

        void Draw(ESContext *esContext)
        {
            UserData *userData = esContext->userData;

            // Set the viewport
            glViewport(0, 0, esContext->width, esContext->height);

            // Clear the color buffer
            glClear(GL_COLOR_BUFFER_BIT);

            // Use the program object
            glUseProgram(userData->programObject);

            OperateWithMainMatrix(esContext, 0.0f, 0.0f, 0.0f);
            eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
        }

    This works fine, but if I try to draw multiple cubes (for example with the following code):

        void Draw(ESContext *esContext)
        {
            ...
            // Use the program object
            glUseProgram(userData->programObject);

            OperateWithMainMatrix(esContext, 2.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, 1.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, 0.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, -1.0f, 0.0f, 0.0f);
            OperateWithMainMatrix(esContext, -2.0f, 0.0f, 0.0f);

            eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
        }

    then the side faces overlap the frontal faces: the side face of the right cube overlaps the frontal face of the center cube. How can I remove this effect and display multiple cubes without it?
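
    For reference, this kind of face bleed-through is exactly what depth testing is meant to prevent; without a depth buffer, later draw calls simply overwrite earlier ones. A sketch of the usual setup, assuming the EGL/esUtil context was created with a depth buffer:

        // Assumes the context was created with a depth buffer
        // (e.g. a nonzero EGL_DEPTH_SIZE / the framework's depth flag).
        void Init(ESContext* esContext)
        {
            glEnable(GL_DEPTH_TEST);   // keep fragments that are nearer to the camera
            glDepthFunc(GL_LESS);
        }

        void Draw(ESContext* esContext)
        {
            glViewport(0, 0, esContext->width, esContext->height);

            // Clear depth as well as colour every frame.
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // ... draw the cubes as before ...
        }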

    Read the article

  • Why does Unity3D crash in VirtualBox?

    - by FakeRainBrigand
    I'm running Unity3D in a virtual instance of Windows, using the Virtual Box software on Linux. I have guest additions installed with DirectX support. I've tried using Windows XP SP3 32-bit, and Windows 7 64bit. My host is Ubuntu 12.04 64bit. I installed and registered Unity on both. It loads up fine, and then crashes my entire VirtualBox instance (equivalent of a computer shutting off with no warning).

    Read the article

  • Pathfinding in a multi-goal, multi-agent environment

    - by Rohan Agrawal
    I have an environment in which I have multiple agents (a), multiple goals (g) and obstacles (o).

        . . . a o . . . . . . . o . g . . a . . . . . . . . . . o . . . . o o o o . g . . o . . . . . . . o . . . . o . . . . o o o o a

    What would be an appropriate algorithm for pathfinding in this environment? The only thing I can think of right now is to run a separate A* search for each goal, but I don't think that's very efficient.
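
    One option worth considering when every agent just needs to reach its nearest goal: instead of one A* per agent/goal pair, run a single backward search seeded with all goals at cost 0 (a multi-source BFS or Dijkstra), then let each agent walk downhill along the resulting distance field. A sketch on a uniform-cost grid in C++; the grid encoding used here is an assumption based on the map above:

        #include <climits>
        #include <queue>
        #include <string>
        #include <utility>
        #include <vector>

        // Multi-source BFS: distance from every free cell to its nearest goal.
        // grid[y][x] == 'o' is an obstacle, 'g' is a goal; every move costs 1.
        std::vector<std::vector<int>> distanceToNearestGoal(const std::vector<std::string>& grid)
        {
            const int h = static_cast<int>(grid.size());
            const int w = static_cast<int>(grid[0].size());
            std::vector<std::vector<int>> dist(h, std::vector<int>(w, INT_MAX));
            std::queue<std::pair<int, int>> frontier;

            // Seed the search with every goal at distance 0.
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                    if (grid[y][x] == 'g') { dist[y][x] = 0; frontier.push({x, y}); }

            const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
            while (!frontier.empty())
            {
                auto [x, y] = frontier.front();
                frontier.pop();
                for (int d = 0; d < 4; ++d)
                {
                    int nx = x + dx[d], ny = y + dy[d];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (grid[ny][nx] == 'o' || dist[ny][nx] != INT_MAX) continue;
                    dist[ny][nx] = dist[y][x] + 1;    // expand outward from the goals
                    frontier.push({nx, ny});
                }
            }
            return dist;  // each agent follows strictly decreasing values to reach a goal
        }

    The cost is a single sweep over the map regardless of how many agents there are, which is usually much cheaper than one A* per agent.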

    Read the article

  • Mesh with quads to triangle mesh

    - by scape
    I want to use Blender for making models, yet I realize some of the polygons are not triangles but quads or larger n-gons (example: the cylinder top and bottom). I could export the mesh as a basic mesh file, import it into an OpenGL application and work out rendering the quads as tris, but anything with more than 4 vertex indices is beyond me. Is it typical to convert the mesh to a triangle-based mesh inside Blender before exporting it? I actually tried this through the quads_convert_to_tris method in a Blender Python script, and the top of the cylinder does not look symmetrical. What is typically done to render a loaded mesh as triangles?

    Read the article

  • Render an image with separate layers for shadows/reflections in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on a ground plane in the center. Because of the lights and the ground material, there is some shadow and reflection on the ground surrounding the object. How can I render an image containing 3 separate layers: the object, the ground, and the reflection/shadow on the ground? Which format should I use for this (it should include all 3 layers, and I should be able to enable/disable them in Photoshop)? How do I define or prepare those layers so they are rendered as image layers?

    Read the article

  • Combine 3D objects in XNA 4

    - by Christoph
    Currently I am writing on my thesis for university, the theme I am working on is 3D Visualization of hierarchical structures using cone trees. I want to do is to draw a cone and arrange a number of spheres at the bottom of the cone. The spheres should be arranged according to the radius and the number of spheres correctly. As you can imagine I need a lot of these cone/sphere combinations. First Attempt I was able to find some tutorials that helped with drawing cones and spheres. Cone public Cone(GraphicsDevice device, float height, int tessellation, string name, List<Sphere> children) { //prepare children and calculate the children spacing and radius of the cone if (children == null || children.Count == 0) { throw new ArgumentNullException("children"); } this.Height = height; this.Name = name; this.Children = children; //create the cone if (tessellation < 3) { throw new ArgumentOutOfRangeException("tessellation"); } //Create a ring of triangels around the outside of the cones bottom for (int i = 0; i < tessellation; i++) { Vector3 normal = this.GetCircleVector(i, tessellation); // add the vertices for the top of the cone base.AddVertex(Vector3.Up * height, normal); //add the bottom circle base.AddVertex(normal * this.radius + Vector3.Down * height, normal); //Add indices base.AddIndex(i * 2); base.AddIndex(i * 2 + 1); base.AddIndex((i * 2 + 2) % (tessellation * 2)); base.AddIndex(i * 2 + 1); base.AddIndex((i * 2 + 3) % (tessellation * 2)); base.AddIndex((i * 2 + 2) % (tessellation * 2)); } //create flate triangle to seal the bottom this.CreateCap(tessellation, height, this.Radius, Vector3.Down); base.InitializePrimitive(device); } Sphere public void Initialize(GraphicsDevice device, Vector3 qi) { int verticalSegments = this.Tesselation; int horizontalSegments = this.Tesselation * 2; //single vertex on the bottom base.AddVertex((qi * this.Radius) + this.lowering, Vector3.Down); for (int i = 0; i < verticalSegments; i++) { float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude); float dxz = (float)Math.Cos(latitude); //Create a singe ring of latitudes for (int j = 0; j < horizontalSegments; j++) { float longitude = j * MathHelper.TwoPi / horizontalSegments; float dx = (float)Math.Cos(longitude) * dxz; float dz = (float)Math.Sin(longitude) * dxz; Vector3 normal = new Vector3(dx, dy, dz); base.AddVertex(normal * this.Radius, normal); } } // Finish with a single vertex at the top of the sphere. AddVertex((qi * this.Radius) + this.lowering, Vector3.Up); // Create a fan connecting the bottom vertex to the bottom latitude ring. for (int i = 0; i < horizontalSegments; i++) { AddIndex(0); AddIndex(1 + (i + 1) % horizontalSegments); AddIndex(1 + i); } // Fill the sphere body with triangles joining each pair of latitude rings. for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; base.AddIndex(1 + i * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); } } // Create a fan connecting the top vertex to the top latitude ring. 
for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(CurrentVertex - 1); base.AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments); base.AddIndex(CurrentVertex - 2 - i); } base.InitializePrimitive(device); } The tricky part now is to arrange the spheres at the bottom of the cone. I tried is to draw just the cone and then draw the spheres. I need a lot of these cones, so it would be pretty hard to calculate all the positions correctly. Second Attempt So the second try was to generate a object that builds all vertices of the cone and all of the spheres at once. So I was hoping to render a cone with all its spheres arranged correctly. After a short debug I found out that the cone is created and the first sphere, when it turn of the second sphere I am running into an OutOfBoundsException of ushort.MaxValue. Cone and Spheres public ConeWithSpheres(GraphicsDevice device, float height, float coneDiameter, float sphereDiameter, int coneTessellation, int sphereTessellation, int numberOfSpheres) { if (coneTessellation < 3) { throw new ArgumentException(string.Format("{0} is to small for the tessellation of the cone. The number must be greater or equal to 3", coneTessellation)); } if (sphereTessellation < 3) { throw new ArgumentException(string.Format("{0} is to small for the tessellation of the sphere. The number must be greater or equal to 3", sphereTessellation)); } //set properties this.Height = height; this.ConeDiameter = coneDiameter; this.SphereDiameter = sphereDiameter; this.NumberOfChildren = numberOfSpheres; //end set properties //generate the cone this.GenerateCone(device, coneTessellation); //generate the spheres //vector that defines the Y position of the sphere on the cones bottom Vector3 lowering = new Vector3(0, 0.888f, 0); this.GenerateSpheres(device, sphereTessellation, numberOfSpheres, lowering); } // ------ GENERATE CONE ------ private void GenerateCone(GraphicsDevice device, int coneTessellation) { int doubleTessellation = coneTessellation * 2; //Create a ring of triangels around the outside of the cones bottom for (int index = 0; index < coneTessellation; index++) { Vector3 normal = this.GetCircleVector(index, coneTessellation); //add the vertices for the top of the cone base.AddVertex(Vector3.Up * this.Height, normal); //add the bottom of the cone base.AddVertex(normal * this.ConeRadius + Vector3.Down * this.Height, normal); //add indices base.AddIndex(index * 2); base.AddIndex(index * 2 + 1); base.AddIndex((index * 2 + 2) % doubleTessellation); base.AddIndex(index * 2 + 1); base.AddIndex((index * 2 + 3) % doubleTessellation); base.AddIndex((index * 2 + 2) % doubleTessellation); } //create flate triangle to seal the bottom this.CreateCap(coneTessellation, this.Height, this.ConeRadius, Vector3.Down); base.InitializePrimitive(device); } // ------ GENERATE SPHERES ------ private void GenerateSpheres(GraphicsDevice device, int sphereTessellation, int numberOfSpheres, Vector3 lowering) { int verticalSegments = sphereTessellation; int horizontalSegments = sphereTessellation * 2; for (int childCount = 1; childCount < numberOfSpheres; childCount++) { //single vertex at the bottom of the sphere base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Down); for (int verticalSegmentsCount = 0; verticalSegmentsCount < verticalSegments; verticalSegmentsCount++) { float latitude = ((verticalSegmentsCount + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude); float dxz = 
(float)Math.Cos(latitude); //create a single ring of latitudes for (int horizontalSegmentsCount = 0; horizontalSegmentsCount < horizontalSegments; horizontalSegmentsCount++) { float longitude = horizontalSegmentsCount * MathHelper.TwoPi / horizontalSegments; float dx = (float)Math.Cos(longitude) * dxz; float dz = (float)Math.Sin(longitude) * dxz; Vector3 normal = new Vector3(dx, dy, dz); base.AddVertex((normal * this.SphereRadius) + lowering, normal); } } //finish with a single vertex at the top of the sphere base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Up); //create a fan connecting the bottom vertex to the bottom latitude ring for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(0); base.AddIndex(1 + (i + 1) % horizontalSegments); base.AddIndex(1 + i); } //Fill the sphere body with triangles joining each pair of latitude rings for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; base.AddIndex(1 + i * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); } } //create a fan connecting the top vertiex to the top latitude for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(this.CurrentVertex - 1); base.AddIndex(this.CurrentVertex - 2 - (i + 1) % horizontalSegments); base.AddIndex(this.CurrentVertex - 2 - i); } base.InitializePrimitive(device); } } Any ideas how I could fix this?

    Read the article

  • GLSL Shader Texture Performance

    - by Austin
    I currently have a project that renders OpenGL video using a vertex and fragment shader. The shaders work fine as-is, but in trying to add in texturing, I am running into performance issues and can't figure out why. Before adding texturing, my program ran just fine and loaded my CPU between 0%-4%. When adding texturing (specifically textures AND color -- noted by comment below), my CPU is 100% loaded. The only code I have added is the relevant texturing code to the shader, and the "glBindTexture()" calls to the rendering code. Here are my shaders and relevant rending code. Vertex Shader: #version 150 uniform mat4 mvMatrix; uniform mat4 mvpMatrix; uniform mat3 normalMatrix; uniform vec4 lightPosition; uniform float diffuseValue; layout(location = 0) in vec3 vertex; layout(location = 1) in vec3 color; layout(location = 2) in vec3 normal; layout(location = 3) in vec2 texCoord; smooth out VertData { vec3 color; vec3 normal; vec3 toLight; float diffuseValue; vec2 texCoord; } VertOut; void main(void) { gl_Position = mvpMatrix * vec4(vertex, 1.0); VertOut.normal = normalize(normalMatrix * normal); VertOut.toLight = normalize(vec3(mvMatrix * lightPosition - gl_Position)); VertOut.color = color; VertOut.diffuseValue = diffuseValue; VertOut.texCoord = texCoord; } Fragment Shader: #version 150 smooth in VertData { vec3 color; vec3 normal; vec3 toLight; float diffuseValue; vec2 texCoord; } VertIn; uniform sampler2D tex; layout(location = 0) out vec3 colorOut; void main(void) { float diffuseComp = max( dot(normalize(VertIn.normal), normalize(VertIn.toLight)) ), 0.0); vec4 color = texture2D(tex, VertIn.texCoord); colorOut = color.rgb * diffuseComp * VertIn.diffuseValue + color.rgb * (1 - VertIn.diffuseValue); // FOLLOWING LINE CAUSES PERFORMANCE ISSUES colorOut *= VertIn.color; } Relevant Rendering Code: // 3 textures have been successfully pre-loaded, and can be used // texture[0] is a 1x1 white texture to effectively turn off texturing glUseProgram(program); // Draw squares glBindTexture(GL_TEXTURE_2D, texture[1]); // Set attributes, uniforms, etc glDrawArrays(GL_QUADS, 0, 6*4); // Draw triangles glBindTexture(GL_TEXTURE_2D, texture[0]); // Set attributes, uniforms, etc glDrawArrays(GL_TRIANGLES, 0, 3*4); // Draw reference planes glBindTexture(GL_TEXTURE_2D, texture[0]); // Set attributes, uniforms, etc glDrawArrays(GL_LINES, 0, 4*81*2); // Draw terrain glBindTexture(GL_TEXTURE_2D, texture[2]); // Set attributes, uniforms, etc glDrawArrays(GL_TRIANGLES, 0, 501*501*6); // Release glBindTexture(GL_TEXTURE_2D, 0); glUseProgram(0); Any help is greatly appreciated!

    Read the article

  • Blur gets displaced compared to original image

    - by user1294203
    I have implemented SSAO and I'm using a blur step to smooth it out. The problem is that the blurred texture is slightly displaced compared to the original. I'm blurring using a 4x4 kernel, since that was my noise kernel in SSAO. The following is the blurring shader:

        float result = 0.0;
        for(int i = 0; i < 4; i++){
            for(int j = 0; j < 4; j++){
                vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
                result += texture(aoSampler, TexCoord + offset).r;
            }
        }
        out_AO = vec4(vec3(0.0), result * 0.0625);

    where TEXEL_SIZE is one over my window resolution. I was thinking that this was an error based on how OpenGL determines the texel center, so I tried displacing the texture coordinate I was using by 0.5 * TEXEL_SIZE, but there was still a slight displacement. The texture input to my blur shader has these wrap parameters:

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

    When I tell the blur shader to just output the value of the pixel, the result is not displaced, so it must have something to do with how neighboring pixels are sampled. Any thoughts?

    Read the article

  • Why does my sprite glitch when moving? [closed]

    - by rphello101
    Using Slick 2D/Java, I'm using the mouse to rotate a sprite and WASD to move it (A and D are used to strafe). I finally got the directional keys and rotation to work in sync, but I'm having problems with sporadic movement. It seems that the move speed is not always set to the value I have it at. Sometimes the sprite with just shoot across the screen. Furthermore, it seems that at 0 degrees, when the left key is pressed, the sprite moves backwards, not to the left. There also seems to be quite a bit of glitching when two keys are pressed, like left and up. Anyone see anything obvious? Here is the rotational code: int mX = Mouse.getX(); int mY = HEIGHT - Mouse.getY(); int pX = sprite.x+sprite.image.getWidth()/2; int pY = sprite.y+sprite.image.getHeight()/2; double mAng; if(mX!=pX){ mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX)); if(mAng==0 && mX<=pX) mAng=180; } else{ if(mY>pY) mAng=90; else mAng=270; } sprite.angle = mAng; sprite.image.setRotation((float) mAng); Movement code: Input input = gc.getInput(); Vector2f direction = new Vector2f(); Vector2f velocity = new Vector2f(); Vector2f left; Vector2f right; direction.x = (float) Math.cos(Math.toRadians(sprite.angle)); direction.y = (float) Math.sin(Math.toRadians(sprite.angle)); if(direction.length()>0) direction = direction.normalise(); left = new Vector2f(-direction.y, direction.x); right = new Vector2f(direction.y, -direction.x); velocity.x = (float) (direction.x * sprite.moveSpeed); velocity.y = (float) (direction.y * sprite.moveSpeed); if(input.isKeyDown(sprite.up)){ sprite.x += velocity.x*delta; sprite.y += velocity.y*delta; }if (input.isKeyDown(sprite.down)){ sprite.x -= velocity.x*delta; sprite.y -= velocity.y*delta; }if (input.isKeyDown(sprite.left)){ sprite.x += left.x * sprite.moveSpeed * delta; sprite.y += left.y * sprite.moveSpeed * delta; }if (input.isKeyDown(sprite.right)){ sprite.x += right.x * sprite.moveSpeed * delta; sprite.y += right.y * sprite.moveSpeed * delta; }

    Read the article

  • Camera closes in on the fixed point

    - by V1ncam
    I've been trying to create a camera that is controlled by the mouse and rotates around a fixed point (read: (0,0,0)), both vertically and horizontally. This is what I've come up with:

        camera.Eye = Vector3.Transform(camera.Eye, Matrix.CreateRotationY(camRotYFloat));
        Vector3 customAxis = new Vector3(-camera.Eye.Z, 0, camera.Eye.X);
        camera.Eye = Vector3.Transform(camera.Eye, Matrix.CreateFromAxisAngle(customAxis, camRotXFloat * 0.0001f));

    This works quite well, except for the fact that when I use the second transformation (go up and down with the mouse), the camera not only goes up and down, it also closes in on the point. It zooms in. How do I prevent this? Thanks in advance.
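
    One thing worth checking (an assumption, not a confirmed diagnosis): axis-angle helpers such as XNA's Matrix.CreateFromAxisAngle expect a unit-length axis, and customAxis above is not normalized; a non-unit axis yields a matrix that also scales the vector, which looks exactly like the camera slowly creeping toward the target. A sketch of the same orbit in C++ with GLM (an assumed stand-in here), with the axis normalized; in XNA the equivalent would be calling customAxis.Normalize() before CreateFromAxisAngle:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Orbit "eye" around the origin: yaw about world Y, pitch about the
        // camera's horizontal axis. The axis must be unit length, otherwise the
        // rotation matrix also scales eye and the camera zooms in.
        glm::vec3 orbit(glm::vec3 eye, float yaw, float pitch)
        {
            eye = glm::vec3(glm::rotate(glm::mat4(1.0f), yaw, glm::vec3(0, 1, 0)) * glm::vec4(eye, 1.0f));

            glm::vec3 axis = glm::normalize(glm::vec3(-eye.z, 0.0f, eye.x));
            eye = glm::vec3(glm::rotate(glm::mat4(1.0f), pitch, axis) * glm::vec4(eye, 1.0f));
            return eye;
        }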

    Read the article

  • Is it possible to construct a cube with fewer than 24 vertices

    - by Telanor
    I have a cube-based world like Minecraft and I'm wondering if there's a way to construct a cube with fewer than 24 vertices so I can reduce memory usage. It doesn't seem possible to me for 2 reasons: the normals wouldn't come out right and per-face textures wouldn't work. Is this the case or am I wrong? Maybe there's some fancy new DX11 tech that can help? Edit: just to clarify, I have 2 requirements: I need surface normals for each cube face in order to do proper lighting, and I need a way to address a different index in a texture array for each cube face.

    Read the article

  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
    Here is a picture of a lovely polygon: circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as such (not yet normalized, 0-indexed):

        Vertex 1 normal 0:  0.000000 0.000000 -0.250000
        Vertex 1 normal 1:  0.000000 0.000000 -0.250000
        Vertex 1 normal 2: -0.250000 0.000000  0.000000
        Vertex 1 normal 3: -0.250000 0.000000  0.000000
        Vertex 1 normal 4:  0.250000 0.000000  0.000000

    What I'm wondering is: given that I want this vertex to represent a hard edge, how can I determine whether its normal should be the normal of 1/2 or 3/4? My plan after I glanced at the sketch I used to put this together was "Ha! I'll just use whichever two faces have the same normal!", and now I see that there are two sets of two faces for which this is true. Is there a rule I can apply based on the face winding, angle of the adjacent edges, moon phase, or coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.

    Read the article

  • Seeking a C/C++ OBJ geometry read/write that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation, i.e. read geometry, immediately write it, and a diff of the source OBJ and the one just written will be identical. Every OBJ writing utility I've been able to find online fails this test. I am writing small command-line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nate Robins's GLUT library includes the GLM library, which both converts quads to triangles and reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! The redundant vertices are there on purpose, to stay cache-friendly on our weird little low-memory mobile devices.

    Read the article

  • SDL_image & OpenGL Problem

    - by Dylan
    i've been following tutorials online to load textures using SDL and display them on a opengl quad. but ive been getting weird results that no one else on the internet seems to be getting... so when i render the texture in opengl i get something like this. http://www.kiddiescissors.com/after.png when the original .bmp file is this: http://www.kiddiescissors.com/before.bmp ive tried other images too, so its not that this particular image is corrupt. it seems like my rgb channels are all jumbled or something. im pulling my hair out at this point. heres the relevant code from my init() function if ( SDL_Init(SDL_INIT_VIDEO) != 0 ) { return 1; } SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1); SDL_GL_SetAttribute( SDL_GL_RED_SIZE, 8 ); SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, 8 ); SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, 8 ); SDL_GL_SetAttribute( SDL_GL_ALPHA_SIZE, 8 ); SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1); SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4); SDL_SetVideoMode(WINDOW_WIDTH, WINDOW_HEIGHT, 32, SDL_HWSURFACE | SDL_GL_DOUBLEBUFFER | SDL_OPENGL); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(50, (GLfloat)WINDOW_WIDTH/WINDOW_HEIGHT, 1, 50); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glEnable(GL_DEPTH_TEST); glEnable(GL_MULTISAMPLE); glEnable(GL_TEXTURE_2D); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); glEnable(GL_BLEND); heres the code that is called when my main player object (the one with which this sprite is associated) is initialized texture = 0; SDL_Surface* surface = IMG_Load("i.bmp"); glGenTextures(1, &texture); glBindTexture(GL_TEXTURE_2D, texture); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, surface->w, surface->h, 0, GL_RGB, GL_UNSIGNED_BYTE, surface->pixels); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); SDL_FreeSurface(surface); and then heres the relevant code from my display function glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); glColor4f(1, 1, 1, 1); glPushMatrix(); glBindTexture(GL_TEXTURE_2D, texture); glTranslatef(getCenter().x, getCenter().y, 0); glRotatef(getAngle()*(180/M_PI), 0, 0, 1); glTranslatef(-getCenter().x, -getCenter().y, 0); glBegin(GL_QUADS); glTexCoord2f(0, 0); glVertex3f(getTopLeft().x, getTopLeft().y, 0); glTexCoord2f(0, 1); glVertex3f(getTopLeft().x, getTopLeft().y + size.y, 0); glTexCoord2f(1, 1); glVertex3f(getTopLeft().x + size.x, getTopLeft().y + size.y, 0); glTexCoord2f(1, 0); glVertex3f(getTopLeft().x + size.x, getTopLeft().y, 0); glEnd(); glPopMatrix(); let me know if i left out anything important... or if you need more info from me. thanks a ton, -Dylan
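
    One thing that commonly produces exactly this kind of channel scrambling (offered as an assumption, not a verified fix): BMPs loaded through SDL are usually stored as BGR, and SDL surface rows are not guaranteed to be 4-byte aligned, while the upload above assumes tightly packed RGB. A sketch that derives the upload format from the surface instead, for desktop OpenGL where GL_BGR/GL_BGRA are available:

        SDL_Surface* surface = IMG_Load("i.bmp");

        // Pick the pixel format that matches what SDL actually loaded.
        GLenum format;
        if (surface->format->BytesPerPixel == 4)
            format = (surface->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
        else
            format = (surface->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // SDL rows are not necessarily 4-byte aligned
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, surface->w, surface->h, 0,
                     format, GL_UNSIGNED_BYTE, surface->pixels);

        SDL_FreeSurface(surface);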

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword: this is a followup to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only. Problem: given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words:

    - If any of the end points are outside the polygon, they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight.

    But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected, and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge, but one of the end points is outside the polygon. One point of particular interest is the difference between 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How do I distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which it would fail. Point 7 was also pretty tricky and I had to treat it as a special case, which checks if two points are adjacent vertices of the polygon directly. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
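
    One building block that helps with the tolerance side of this (a sketch, not a complete solution): classify segment/edge crossings with an epsilon so that merely touching an edge or a vertex does not count as a crossing, and handle the touching cases (shared vertices, collinear overlaps) separately, for example with a "does the segment leave the vertex between its two incident edges" test, which is one way to distinguish cases like 1 and 9. The EPS value below is an assumed tolerance:

        struct Vec2 { double x, y; };
        const double EPS = 1e-6;   // assumed tolerance, tune to the coordinate scale

        // z-component of (a - o) x (b - o)
        double cross(Vec2 o, Vec2 a, Vec2 b)
        {
            return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
        }

        // -1 / 0 / +1 side classification with tolerance
        int side(Vec2 o, Vec2 a, Vec2 b)
        {
            double c = cross(o, a, b);
            if (c >  EPS) return  1;
            if (c < -EPS) return -1;
            return 0;
        }

        // True only if AB and CD genuinely cross each other; touching an endpoint
        // or running along the edge returns false and is handled separately.
        bool properCrossing(Vec2 a, Vec2 b, Vec2 c, Vec2 d)
        {
            int d1 = side(a, b, c), d2 = side(a, b, d);
            int d3 = side(c, d, a), d4 = side(c, d, b);
            return d1 * d2 < 0 && d3 * d4 < 0;
        }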

    Read the article

  • 2D SAT: How to find the collision center, point, or area?

    - by Felipe Cypriano
    I've just implemented collision detection using SAT, with this article as the reference for my implementation. The detection is working as expected, but I need to know where both rectangles are colliding. I need to find the center of the intersection, the black point on the image above. I've found some articles about this, but they all involve avoiding the overlap or some kind of velocity, and I don't need that. I just need to put an image on top of it, like two cars crashed, so I put an image on top of the collision. Any ideas? Update: the information I have about the rectangles is the four points that represent them: the upper-right, upper-left, lower-right and lower-left coordinates. I'm trying to find an algorithm that can give me the intersection of these points.
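
    Since both rectangles are convex, their overlap is itself a convex polygon, so one workable approach (sketched below, not taken from the referenced article) is to clip one rectangle against the other with Sutherland-Hodgman and average the resulting vertices; for placing a sprite, that average is usually close enough to the true centroid. Both polygons are assumed to be given in counter-clockwise order:

        #include <vector>

        struct Vec2 { float x, y; };

        static float crossZ(const Vec2& a, const Vec2& b, const Vec2& p)
        {
            return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
        }

        // Intersection of the infinite lines AB and PQ (no parallel handling here).
        static Vec2 intersect(const Vec2& a, const Vec2& b, const Vec2& p, const Vec2& q)
        {
            float a1 = b.y - a.y, b1 = a.x - b.x, c1 = a1 * a.x + b1 * a.y;
            float a2 = q.y - p.y, b2 = p.x - q.x, c2 = a2 * p.x + b2 * p.y;
            float det = a1 * b2 - a2 * b1;
            return { (b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det };
        }

        // Clip "subject" against "clip" (Sutherland-Hodgman), then average the result.
        Vec2 collisionCenter(std::vector<Vec2> subject, const std::vector<Vec2>& clip)
        {
            for (size_t i = 0; i < clip.size(); ++i)
            {
                const Vec2& a = clip[i];
                const Vec2& b = clip[(i + 1) % clip.size()];
                std::vector<Vec2> out;
                for (size_t j = 0; j < subject.size(); ++j)
                {
                    const Vec2& p = subject[j];
                    const Vec2& q = subject[(j + 1) % subject.size()];
                    bool pIn = crossZ(a, b, p) >= 0, qIn = crossZ(a, b, q) >= 0;
                    if (pIn) out.push_back(p);
                    if (pIn != qIn) out.push_back(intersect(a, b, p, q));
                }
                subject = out;
            }
            Vec2 c{0, 0};
            for (const Vec2& v : subject) { c.x += v.x; c.y += v.y; }
            if (!subject.empty()) { c.x /= subject.size(); c.y /= subject.size(); }
            return c;
        }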

    Read the article

  • Physics timestep questions

    - by SSL
    I've got a projectile working perfectly using the code below:

        // initialised in the loading screen; 60 is the FPS - projectilePosition and velocity are Vector3 types
        gravity = new Vector3(0, -(float)9.81 / 60, 0);

        // called every frame
        projectilePosition += projectileVelocity;

    This seems to work fine, but I've noticed in various projectile examples I've seen that the elapsed time per update is taken into account. What's the difference between the two, and how can I convert the above to take the elapsed time into account? (I'm using XNA - do I use ElapsedTime.TotalSeconds or TotalMilliseconds?) Edit: forgot to add my attempt at using the elapsed time, which seemed to break the physics:

        projectileVelocity.Y += -(float)((9.81 * gameTime.ElapsedGameTime.TotalSeconds * gameTime.ElapsedGameTime.TotalSeconds) * 0.5f);

    Thanks for the help.
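
    For reference, the usual variable-timestep pattern keeps gravity in units per second squared and multiplies by the frame's elapsed time twice per frame: once when integrating velocity and once when integrating position (semi-implicit Euler). Since 9.81 is metres per second squared, the XNA value to use is gameTime.ElapsedGameTime.TotalSeconds. The same idea expressed as a small C++ sketch:

        struct Vec3 { float x, y, z; };

        // Semi-implicit Euler with a variable timestep.
        // dt is the frame's elapsed time in seconds
        // (in XNA: (float)gameTime.ElapsedGameTime.TotalSeconds).
        void integrate(Vec3& position, Vec3& velocity, float dt)
        {
            const float gravity = -9.81f;   // metres per second squared, along Y

            velocity.y += gravity * dt;     // update velocity first...
            position.x += velocity.x * dt;  // ...then advance position with the new velocity
            position.y += velocity.y * dt;
            position.z += velocity.z * dt;
        }

    The fixed-step version above only stays correct while the game really runs at 60 updates per second, because the 1/60 is baked into the gravity constant.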

    Read the article

  • Implementing automatic navigation mesh generation for a 2D top-down map?

    - by J2V
    I am currently in the middle of implementing A* pathfinding for enemies. In order to implement the actual A* logic, I need a navigation mesh for my map. I am working on a 2D top-down RPG map. The world is static, meaning there is no requirement for dynamic runtime mesh generation. My world objects are pixel-based, not tile-based, and have associated data with them such as scale, rotation, origin etc. I will obviously need some vertex data generated from my world objects; maybe generate polygons from colour data? I could create a colormap with objects for my whole map, but I have no idea how to begin creating nav mesh polygons. What would actual navigation mesh generation look like with this kind of information available? Can anyone maybe point me to some good resources? I have looked into some 3D nav mesh tools, but they seem overly complex for my situation and also get a lot of their required data from models. Thanks a lot in advance! I have been trying to get my head around it for some time now.

    Read the article

  • Lighting-Reflectance Models & Licensing Issues

    - by codey
    Generally, or specifically, is there any licensing issue with using any of the well known lighting/reflectance models (i.e. the BRDFs or other distribution or approximation functions): Phong, Blinn–Phong, Cook–Torrance, Blinn-Torrance-Sparrow, Lambert, Minnaert, Oren–Nayar, Ward, Strauss, Ashikhmin-Shirley and common modifications where applicable, such as: Beckmann distribution, Blinn distribution, Schlick's approximation, etc. in your shader code utilised in a commercial product? Or is it a non-issue?

    Read the article

  • How can I specify interleaved vertex attributes and vertex indices?

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords etc), then links the shader components into a shader program, for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer: class CustomVertex { public: float m_Position[3]; // x, y, z // offset 0, size = 3*sizeof(float) float m_TexCoords[2]; // u, v // offset 3*sizeof(float), size = 2*sizeof(float) float m_Normal[3]; // nx, ny, nz; float colour[4]; // r, g, b, a float padding[20]; // padded for performance }; I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is said to be interleaved. It renders successfully with the following code: void VertexBufferObject::Draw() { if( ! m_bInitialized ) return; glBindBuffer( GL_ARRAY_BUFFER, m_nVboId ); glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex ); glEnableClientState( GL_VERTEX_ARRAY ); glEnableClientState( GL_TEXTURE_COORD_ARRAY ); glEnableClientState( GL_NORMAL_ARRAY ); glEnableClientState( GL_COLOR_ARRAY ); glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) ); glTexCoordPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12)); glNormalPointer(GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20)); glColorPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32)); glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) ); glDisableClientState( GL_VERTEX_ARRAY ); glDisableClientState( GL_TEXTURE_COORD_ARRAY ); glDisableClientState( GL_NORMAL_ARRAY ); glDisableClientState( GL_COLOR_ARRAY ); glBindBuffer( GL_ARRAY_BUFFER, 0 ); glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 ); } Back to the Vertex Array Object though. My code for creating the Vertex Array object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps. // Specify the shader arg locations (e.g. their order in the shader code) for( int n = 0; n < vShaderArgs.size(); n ++) glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() ); // Create and bind to a vertex array object, which stores the relationship between // the buffer and the input attributes glGenVertexArrays( 1, &m_nVaoHandle ); glBindVertexArray( m_nVaoHandle ); // Enable the vertex attribute array (we're using interleaved array, since its faster) glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId ); glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId ); // vertex data for( int n = 0; n < vShaderArgs.size(); n ++ ) { glEnableVertexAttribArray(n); glVertexAttribPointer( n, vShaderArgs[n].nFieldSize, GL_FLOAT, GL_FALSE, vShaderArgs[n].nStride, (GLubyte *) NULL + vShaderArgs[n].nFieldOffset ); AppLog::Ref().OutputGlErrors(); } This doesn't render correctly at all. I get a pattern of white specks onscreen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering: void ShaderProgram::Draw() { using namespace AntiMatter; if( ! m_nShaderProgramId || ! 
m_nVaoHandle ) { AppLog::Ref().LogMsg("ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete"); return; } glUseProgram( m_nShaderProgramId ); glBindVertexArray( m_nVaoHandle ); glDrawArrays( GL_TRIANGLES, 0, m_nNumTris ); glBindVertexArray(0); glUseProgram(0); } Can anyone see errors or omissions in either the VAO creation code or rendering code? thanks!
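
    Two things worth double-checking in a setup like this (assumptions based on the code shown, not a confirmed diagnosis): that each glVertexAttribPointer call gets the component count and byte offset that actually match CustomVertex, and that the draw call uses the bound element buffer (the fixed-function path above draws with glDrawElements, while the VAO path uses glDrawArrays with the triangle count). A sketch of an offsetof-based setup and an indexed draw, assuming attribute locations 0-3 are position, texcoord, normal and colour, with GLEW assumed for the GL function pointers:

        #include <GL/glew.h>
        #include <cstddef>

        struct CustomVertex
        {
            float position[3];
            float texCoords[2];
            float normal[3];
            float colour[4];
            float padding[20];
        };

        void setupVao(GLuint vao, GLuint vbo, GLuint ibo)
        {
            glBindVertexArray(vao);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);   // recorded in the VAO

            const GLsizei stride = sizeof(CustomVertex);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(CustomVertex, position));
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(CustomVertex, texCoords));
            glEnableVertexAttribArray(2);
            glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(CustomVertex, normal));
            glEnableVertexAttribArray(3);
            glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(CustomVertex, colour));

            glBindVertexArray(0);
        }

        void draw(GLuint program, GLuint vao, GLsizei indexCount)
        {
            glUseProgram(program);
            glBindVertexArray(vao);
            // The vertex data is indexed, so draw with the index buffer rather
            // than glDrawArrays over the triangle count.
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
            glBindVertexArray(0);
            glUseProgram(0);
        }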

    Read the article
