Search Results

Search found 32277 results on 1292 pages for 'module development'.

  • Render 2 images that use different shaders

    - by Code Vader
    Based on the giawa/nehe tutorials, how can I render 2 images with different shaders. I'm pretty new to OpenGl and shaders so I'm not completely sure whats happening in my code, but I think the shaders that is called last overwrites the first one. private static void OnRenderFrame() { // calculate how much time has elapsed since the last frame watch.Stop(); float deltaTime = (float)watch.ElapsedTicks / System.Diagnostics.Stopwatch.Frequency; watch.Restart(); // use the deltaTime to adjust the angle of the cube angle += deltaTime; // set up the OpenGL viewport and clear both the color and depth bits Gl.Viewport(0, 0, width, height); Gl.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); // use our shader program and bind the crate texture Gl.UseProgram(program); //<<<<<<<<<<<< TOP PYRAMID // set the transformation of the top_pyramid program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube)); program["enable_lighting"].SetValue(lighting); // bind the vertex positions, UV coordinates and element array Gl.BindBufferToShaderAttribute(top_pyramid, program, "vertexPosition"); Gl.BindBufferToShaderAttribute(top_pyramidNormals, program, "vertexNormal"); Gl.BindBufferToShaderAttribute(top_pyramidUV, program, "vertexUV"); Gl.BindBuffer(top_pyramidTrianlges); // draw the textured top_pyramid Gl.DrawElements(BeginMode.Triangles, top_pyramidTrianlges.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); //<<<<<<<<<< CUBE // set the transformation of the cube program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube)); program["enable_lighting"].SetValue(lighting); // bind the vertex positions, UV coordinates and element array Gl.BindBufferToShaderAttribute(cube, program, "vertexPosition"); Gl.BindBufferToShaderAttribute(cubeNormals, program, "vertexNormal"); Gl.BindBufferToShaderAttribute(cubeUV, program, "vertexUV"); Gl.BindBuffer(cubeQuads); // draw the textured cube Gl.DrawElements(BeginMode.Quads, cubeQuads.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); //<<<<<<<<<<<< BOTTOM PYRAMID // set the transformation of the bottom_pyramid program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube)); program["enable_lighting"].SetValue(lighting); // bind the vertex positions, UV coordinates and element array Gl.BindBufferToShaderAttribute(bottom_pyramid, program, "vertexPosition"); Gl.BindBufferToShaderAttribute(bottom_pyramidNormals, program, "vertexNormal"); Gl.BindBufferToShaderAttribute(bottom_pyramidUV, program, "vertexUV"); Gl.BindBuffer(bottom_pyramidTrianlges); // draw the textured bottom_pyramid Gl.DrawElements(BeginMode.Triangles, bottom_pyramidTrianlges.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); //<<<<<<<<<<<<< STAR Gl.Disable(EnableCap.DepthTest); Gl.Enable(EnableCap.Blend); Gl.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.One); Gl.BindTexture(starTexture); //calculate the camera position using some fancy polar co-ordinates Vector3 position = 20 * new Vector3(Math.Cos(phi) * Math.Sin(theta), Math.Cos(theta), Math.Sin(phi) * Math.Sin(theta)); Vector3 upVector = ((theta % (Math.PI * 2)) > Math.PI) ? 
Vector3.Up : Vector3.Down; program_2["view_matrix"].SetValue(Matrix4.LookAt(position, Vector3.Zero, upVector)); // make sure the shader program and texture are being used Gl.UseProgram(program_2); // loop through the stars, drawing each one for (int i = 0; i < stars.Count; i++) { // set the position and color of this star program_2["model_matrix"].SetValue(Matrix4.CreateTranslation(new Vector3(stars[i].dist, 0, 0)) * Matrix4.CreateRotationZ(stars[i].angle)); program_2["color"].SetValue(stars[i].color); Gl.BindBufferToShaderAttribute(star, program_2, "vertexPosition"); Gl.BindBufferToShaderAttribute(starUV, program_2, "vertexUV"); Gl.BindBuffer(starQuads); Gl.DrawElements(BeginMode.Quads, starQuads.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); // update the position of the star stars[i].angle += (float)i / stars.Count * deltaTime * 2 * rotate_stars; stars[i].dist -= 0.2f * deltaTime * rotate_stars; // if we've reached the center then move this star outwards and give it a new color if (stars[i].dist < 0f) { stars[i].dist += 5f; stars[i].color = new Vector3(generator.NextDouble(), generator.NextDouble(), generator.NextDouble()); } } Glut.glutSwapBuffers(); } The same goes for the textures, whichever one I mention last gets applied to both object?
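
    (A note on the likely fix: in OpenGL, glUniform* calls write to the program that is currently bound, so program_2's view_matrix above is set before Gl.UseProgram(program_2) runs and can end up on the wrong program. Below is a minimal, hedged C++ sketch of the intended per-program flow - plain OpenGL, not the giawa API, and every name is a placeholder.)

```cpp
#include <GL/glew.h>   // any GL loader will do; GLEW is assumed here

// Hedged sketch: glUniform* writes to the *currently bound* program, so
// bind a program first, then set its uniforms, then issue its draw calls.
void renderTwoShaders(GLuint programA, GLuint vaoA, GLsizei indexCountA, const float* modelA,
                      GLuint programB, GLuint vaoB, GLsizei indexCountB, const float* modelB)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // --- first object, first shader ---
    glUseProgram(programA);                                   // bind BEFORE setting uniforms
    glUniformMatrix4fv(glGetUniformLocation(programA, "model_matrix"), 1, GL_FALSE, modelA);
    glBindVertexArray(vaoA);
    glDrawElements(GL_TRIANGLES, indexCountA, GL_UNSIGNED_INT, nullptr);

    // --- second object, second shader ---
    glUseProgram(programB);                                   // switch, then set ITS uniforms
    glUniformMatrix4fv(glGetUniformLocation(programB, "model_matrix"), 1, GL_FALSE, modelB);
    glBindVertexArray(vaoB);
    glDrawElements(GL_TRIANGLES, indexCountB, GL_UNSIGNED_INT, nullptr);

    glUseProgram(0);
}
```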

    Read the article

  • Algorithm to find all tiles within a given radius on a staggered isometric map

    - by kasztelan
    Given a staggered isometric map and a start tile, what would be the best way to get all surrounding tiles within a given radius (middle to middle)? I can get all neighbours of a given tile and the distance between each of them without any problems, but I'm not sure what path to take after that. This feature will be used quite often (along with A*), so I'd like to avoid unnecessary calculations. If it makes any difference, I'm using XNA and each tile is 64x32 pixels.
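
    (One language-agnostic approach, since neighbour lookup and tile-to-tile distance already exist: a breadth-first flood fill from the start tile that stops expanding once the accumulated middle-to-middle distance exceeds the radius. A hedged C++ sketch follows; Tile, neighbours() and distance() stand in for the existing helpers.)

```cpp
#include <queue>
#include <unordered_map>
#include <vector>

struct Tile { int x, y; };

// Assumed helpers that the question says already exist.
std::vector<Tile> neighbours(const Tile& t);
float distance(const Tile& a, const Tile& b);   // centre to centre

// Every tile reachable from 'start' with an accumulated
// centre-to-centre distance <= radius.
std::vector<Tile> tilesWithinRadius(const Tile& start, float radius)
{
    auto key = [](const Tile& t) {
        return (static_cast<long long>(t.x) << 32) ^ static_cast<unsigned>(t.y);
    };

    std::unordered_map<long long, std::pair<Tile, float>> best; // tile + best known distance
    std::queue<Tile> open;

    best[key(start)] = { start, 0.0f };
    open.push(start);

    while (!open.empty()) {
        Tile current = open.front(); open.pop();
        float soFar = best[key(current)].second;

        for (const Tile& n : neighbours(current)) {
            float d = soFar + distance(current, n);
            if (d > radius) continue;                          // outside the radius
            auto it = best.find(key(n));
            if (it == best.end() || d < it->second.second) {   // new tile or shorter path
                best[key(n)] = { n, d };
                open.push(n);
            }
        }
    }

    std::vector<Tile> result;
    result.reserve(best.size());
    for (const auto& entry : best) result.push_back(entry.second.first);
    return result;
}
```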

    Read the article

  • Drawing a circle in OpenGL ES (Android), squiggly boundaries

    - by ladiesMan217
    I am new to OpenGL ES and facing a hard time drawing a circle on my GLSurfaceView. Here's what I have so far. the Circle Class public class MyGLBall { private int points=40; private float vertices[]={0.0f,0.0f,0.0f}; private FloatBuffer vertBuff; //centre of circle public MyGLBall(){ vertices=new float[(points+1)*3]; for(int i=3;i<(points+1)*3;i+=3){ double rad=(i*360/points*3)*(3.14/180); vertices[i]=(float)Math.cos(rad); vertices[i+1]=(float) Math.sin(rad); vertices[i+2]=0; } ByteBuffer bBuff=ByteBuffer.allocateDirect(vertices.length*4); bBuff.order(ByteOrder.nativeOrder()); vertBuff=bBuff.asFloatBuffer(); vertBuff.put(vertices); vertBuff.position(0); } public void draw(GL10 gl){ gl.glPushMatrix(); gl.glTranslatef(0, 0, 0); // gl.glScalef(size, size, 1.0f); gl.glColor4f(1.0f,1.0f,1.0f, 1.0f); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertBuff); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points/2); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glPopMatrix(); } } I couldn't retrieve the screenshot of my image but here's what it looks like As you can see the border has crests and troughs thereby renering it squiggly which I do not want. All I want is a simple curve
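
    (A hedged, language-neutral sketch of the usual triangle-fan construction, written in C++ here rather than Java: the angle is derived from the vertex index divided by the segment count, and the whole fan - centre plus segments + 1 rim vertices - is drawn, which avoids the gaps that make the outline look squiggly.)

```cpp
#include <cmath>
#include <vector>

// Build vertices for a unit-style circle rendered as GL_TRIANGLE_FAN:
// one centre vertex followed by (segments + 1) rim vertices, the last
// rim vertex repeating the first so the fan closes cleanly.
std::vector<float> buildCircle(int segments, float radius)
{
    std::vector<float> verts;
    verts.reserve((segments + 2) * 3);

    // centre of the fan
    verts.insert(verts.end(), { 0.0f, 0.0f, 0.0f });

    const float pi = 3.14159265358979f;
    for (int i = 0; i <= segments; ++i) {            // <= closes the circle
        float angle = 2.0f * pi * static_cast<float>(i) / segments;
        verts.push_back(radius * std::cos(angle));
        verts.push_back(radius * std::sin(angle));
        verts.push_back(0.0f);
    }
    return verts;                                    // draw (segments + 2) vertices, not points/2
}
```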

    Read the article

  • C++ problem with assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a seperate projection matrix. I have looked over my code over and over again, but I probably keep on missing the obvious reason why this is happening. Here is an image of my game: It's simply a 6 sided cube, but it's off big time! Here are my code snippets for rendering the cube to the screen: void C_MediaLoader::display(void) { float tmp; glTranslatef(0,0,0); // rotate it around the y axis glRotatef(angle,0.f,0.f,1.f); glColor4f(1,1,1,1); // scale the whole asset to fit into our view frustum tmp = scene_max.x-scene_min.x; tmp = aisgl_max(scene_max.y - scene_min.y,tmp); tmp = aisgl_max(scene_max.z - scene_min.z,tmp); tmp = (1.f / tmp); glScalef(tmp/5, tmp/5, tmp/5); // center the model //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z ); // if the display list has not been made yet, create a new one and // fill it with scene contents if(scene_list == 0) { scene_list = glGenLists(1); glNewList(scene_list, GL_COMPILE); // now begin at the root node of the imported data and traverse // the scenegraph by multiplying subsequent local transforms // together on GL's matrix stack. recursive_render(scene, scene->mRootNode); glEndList(); } glCallList(scene_list); } void C_MediaLoader::recursive_render (const struct aiScene *sc, const struct aiNode* nd) { unsigned int i; unsigned int n = 0, t; struct aiMatrix4x4 m = nd->mTransformation; // update transform aiTransposeMatrix4(&m); glPushMatrix(); glMultMatrixf((float*)&m); // draw all meshes assigned to this node for (; n < nd->mNumMeshes; ++n) { const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]]; apply_material(sc->mMaterials[mesh->mMaterialIndex]); if(mesh->mNormals == NULL) { glDisable(GL_LIGHTING); } else { glEnable(GL_LIGHTING); } for (t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; GLenum face_mode; switch(face->mNumIndices) { case 1: face_mode = GL_POINTS; break; case 2: face_mode = GL_LINES; break; case 3: face_mode = GL_TRIANGLES; break; default: face_mode = GL_POLYGON; break; } glBegin(face_mode); for(i = 0; i < face->mNumIndices; i++) { int index = face->mIndices[i]; if(mesh->mColors[0] != NULL) glColor4fv((GLfloat*)&mesh->mColors[0][index]); if(mesh->mNormals != NULL) glNormal3fv(&mesh->mNormals[index].x); glVertex3fv(&mesh->mVertices[index].x); } glEnd(); } } // draw all children for (n = 0; n < nd->mNumChildren; ++n) { recursive_render(sc, nd->mChildren[n]); } glPopMatrix(); } Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

    Read the article

  • Drawing particles with CPU instead of GPU (XNA)

    - by Helix
    I'm trying out modifications to the following particle system. http://create.msdn.com/en-US/education/catalog/sample/particle_3d I have a function such that when I press Space, all the particles have their positions and velocities set to 0. for (int i = 0; i < particles.GetLength(0); i++) { particles[i].Position = Vector3.Zero; particles[i].Velocity = Vector3.Zero; } However, when I press space, the particles are still moving. If I go to FireParticleSystem.cs I can turn settings.Gravity to 0 and the particles stop moving, but the particles are still not being shifted to (0,0,0). As I understand it, the problem lies in the fact that the GPU is processing all the particle positions, and it's calculating where the particles should be based on their initial position, their initial velocity and multiplying by their age. Therefore, all I've been able to do is change the initial position and velocity of particles, but I'm unable to do it on the fly since the GPU is handling everything. I want the CPU to calculate the positions of the particles individually. This is because I will be later implementing some sort of wind to push the particles around. How do I stop the GPU from taking over? I think it's something to do with VertexBuffers and the draw function, but I don't know how to modify it to make it work.
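
    (For reference, a hedged C++ sketch of what a CPU-driven particle update looks like: positions and velocities live in ordinary memory, so they can be zeroed or pushed by wind at any time and re-uploaded to a dynamic vertex buffer each frame. Names and the Vec3 type are placeholders, not the XNA sample's API.)

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Particle {
    Vec3  position;
    Vec3  velocity;
    float age;          // seconds alive
    float lifetime;     // seconds until respawn/removal
};

// CPU-side integration: because the positions live in ordinary memory,
// they can be modified at any time (e.g. zeroed out, or pushed by wind)
// and the result re-uploaded to a dynamic vertex buffer before drawing.
void updateParticles(std::vector<Particle>& particles, Vec3 wind, float dt)
{
    for (Particle& p : particles) {
        p.velocity.x += wind.x * dt;             // external forces accumulate here
        p.velocity.y += wind.y * dt;
        p.velocity.z += wind.z * dt;

        p.position.x += p.velocity.x * dt;       // explicit Euler step
        p.position.y += p.velocity.y * dt;
        p.position.z += p.velocity.z * dt;

        p.age += dt;
    }
    // After this loop the positions would be copied into a dynamic
    // vertex buffer (SetData in XNA terms) before the draw call.
}
```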

    Read the article

  • 2D OBB collision detection, resolving collisions?

    - by Milo
    I currently use OBBs and I have a vehicle that is a rigid body and some buildings. Here is my update() private void update() { camera.setPosition((vehicle.getPosition().x * camera.getScale()) - ((getWidth() ) / 2.0f), (vehicle.getPosition().y * camera.getScale()) - ((getHeight() ) / 2.0f)); //camera.move(input.getAnalogStick().getStickValueX() * 15.0f, input.getAnalogStick().getStickValueY() * 15.0f); if(input.isPressed(ControlButton.BUTTON_GAS)) { vehicle.setThrottle(1.0f, false); } if(input.isPressed(ControlButton.BUTTON_BRAKE)) { vehicle.setBrakes(1.0f); } vehicle.setSteering(input.getAnalogStick().getStickValueX()); vehicle.update(16.6666f / 1000.0f); ArrayList<Building> buildings = city.getBuildings(); for(Building b : buildings) { if(vehicle.getRect().overlaps(b.getRect())) { vehicle.update(-17.0f / 1000.0f); break; } } } The collision detection works well. What doesn't is how they are dealt with. My goal is simple. If the vehicle hits a building, it should stop, and never go into the building. When I apply negative torque to reverse the car should not feel buggy and move away from the building. I don't want this to look buggy. This is my rigid body class: class RigidBody extends Entity { //linear private Vector2D velocity = new Vector2D(); private Vector2D forces = new Vector2D(); private float mass; //angular private float angularVelocity; private float torque; private float inertia; //graphical private Vector2D halfSize = new Vector2D(); private Bitmap image; public RigidBody() { //set these defaults so we don't get divide by zeros mass = 1.0f; inertia = 1.0f; } //intialize out parameters public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //store physical parameters this.halfSize = halfSize; this.mass = mass; image = bitmap; inertia = (1.0f / 20.0f) * (halfSize.x * halfSize.x) * (halfSize.y * halfSize.y) * mass; RectF rect = new RectF(); float scalar = 10.0f; rect.left = (int)-halfSize.x * scalar; rect.top = (int)-halfSize.y * scalar; rect.right = rect.left + (int)(halfSize.x * 2.0f * scalar); rect.bottom = rect.top + (int)(halfSize.y * 2.0f * scalar); setRect(rect); } public void setLocation(Vector2D position, float angle) { getRect().set(position, getWidth(), getHeight(), angle); } public Vector2D getPosition() { return getRect().getCenter(); } @Override public void update(float timeStep) { //integrate physics //linear Vector2D acceleration = Vector2D.scalarDivide(forces, mass); velocity = Vector2D.add(velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); c = Vector2D.add(getRect().getCenter(), Vector2D.scalarMultiply(velocity , timeStep)); setCenter(c.x, c.y); forces = new Vector2D(0,0); //clear forces //angular float angAcc = torque / inertia; angularVelocity += angAcc * timeStep; setAngle(getAngle() + angularVelocity * timeStep); torque = 0; //clear torque } //take a relative Vector2D and make it a world Vector2D public Vector2D relativeToWorld(Vector2D relative) { Matrix mat = new Matrix(); float[] Vector2Ds = new float[2]; Vector2Ds[0] = relative.x; Vector2Ds[1] = relative.y; mat.postRotate(JMath.radToDeg(getAngle())); mat.mapVectors(Vector2Ds); return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //take a world Vector2D and make it a relative Vector2D public Vector2D worldToRelative(Vector2D world) { Matrix mat = new Matrix(); float[] Vectors = new float[2]; Vectors[0] = world.x; Vectors[1] = world.y; mat.postRotate(JMath.radToDeg(-getAngle())); mat.mapVectors(Vectors); return new Vector2D(Vectors[0], 
Vectors[1]); } //velocity of a point on body public Vector2D pointVelocity(Vector2D worldOffset) { Vector2D tangent = new Vector2D(-worldOffset.y, worldOffset.x); return Vector2D.add( Vector2D.scalarMultiply(tangent, angularVelocity) , velocity); } public void applyForce(Vector2D worldForce, Vector2D worldOffset) { //add linear force forces = Vector2D.add(forces ,worldForce); //add associated torque torque += Vector2D.cross(worldOffset, worldForce); } @Override public void draw( GraphicsContext c) { c.drawRotatedScaledBitmap(image, getPosition().x, getPosition().y, getWidth(), getHeight(), getAngle()); } } Essentially, when any rigid body hits a building it should exhibit the same behavior. How is collision solving usually done? Thanks

    Read the article

  • Detect collision between two specific objects with Bullet physics

    - by sebap123
    I have a problem with detecting collisions between specific objects in my game using Bullet physics. I know that the objects collide with each other on their own and I don't have to do anything more for that. However, I need to be notified when one particular object collides with any of the rest. That is awkwardly phrased, so let me describe what I want to achieve. I have a ball which hits a wall made of tubes. Everything sits on the floor. When the ball hits the wall, some fragments fall down indefinitely, so below the floor I have a btStaticPlaneShape. This is the place where most of the objects stop, and then I can start another action, but not all of them reach it. I've been trying to use the function checkCollideWith, but it isn't a good method, as the reference and the wiki say. I've also checked the method described in the wiki (http://bulletphysics.org/mediawiki-1.5.8/index.php/Collision_Callbacks_and_Triggers) called contact information. This isn't a good method either, because it is extremely hard to identify what is what when colliding. You also have to remember that the ball is colliding with something almost all the time - the floor, the wall or the earth level. So is there any other method to check what is colliding with what?
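
    (One common way to make the contact-manifold approach workable is to tag every collision object with a user pointer when it is created, then walk the dispatcher's manifolds after each simulation step and look only for the one pairing of interest. A hedged C++ sketch against the Bullet 2.x API follows; exact signatures can differ between Bullet versions, and the Tag enum is an assumption.)

```cpp
#include <btBulletDynamicsCommon.h>

// Tag each body when it is created so it can be identified later.
enum class Tag { Ball, Wall, Floor, KillPlane };

// Called once per created body, e.g. for the ball and the plane below the floor.
void tagBody(btCollisionObject* obj, Tag* tag) { obj->setUserPointer(tag); }

// Run after stepSimulation(): walk the contact manifolds and look for the
// one pairing we care about (ball vs. the btStaticPlaneShape below the floor).
bool ballTouchedKillPlane(btDynamicsWorld* world)
{
    btDispatcher* dispatcher = world->getDispatcher();
    int numManifolds = dispatcher->getNumManifolds();

    for (int i = 0; i < numManifolds; ++i) {
        btPersistentManifold* manifold = dispatcher->getManifoldByIndexInternal(i);
        if (manifold->getNumContacts() == 0)
            continue;                                   // pair only overlaps in the broadphase

        const btCollisionObject* a = manifold->getBody0();
        const btCollisionObject* b = manifold->getBody1();
        Tag* tagA = static_cast<Tag*>(a->getUserPointer());
        Tag* tagB = static_cast<Tag*>(b->getUserPointer());
        if (!tagA || !tagB)
            continue;

        bool ballVsPlane =
            (*tagA == Tag::Ball && *tagB == Tag::KillPlane) ||
            (*tagB == Tag::Ball && *tagA == Tag::KillPlane);
        if (ballVsPlane)
            return true;                                // this specific pair is in contact
    }
    return false;
}
```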

    Read the article

  • HTML5 game engine for a 2D or 2.5D RPG style "map walk"

    - by stargazer
    Please help me choose an HTML5 game engine or JavaScript libraries. I want to do the following in the game: when the game starts, a part of a huge map (full size of the map: about 7 screens) is shown. The map itself is completely designed in the editor mapeditor.org (or in some comparable editor - if you know a good alternative to mapeditor.org, let me know) and loaded at runtime or at design time. The game engine should support loading of isometric maps (in the worst case, orthogonal maps alone will be sufficient); both the "tile layer" and the "object layer" from mapeditor.org should be supported. Scrolling/performance of this map should be fast enough. The map and the game should be either in 2D (orthogonal map) or in 2.5D (isometric map). The game engine should support movement of sprites with animation. Let's say I have a sprite for a "human" with animation sequences showing "walking" in 8 directions - it should be importable into the game engine and should "walk" on the map without my writing a lot of JavaScript code. The map should scroll automatically when the "human" nears the screen border. Collision detection, "solid" objects: mapeditor.org supports properties on tiles. Let's say I assign a "solid" property to some tiles in the editor. It should be easy to check this "solid" property in the game engine and implement a kind of "solid" behavior, so the animated sprites do not walk through walls. Collision detection - it should be easy to implement some custom functionality like "when sprite A is close to sprite B, call this function". Showing "dialogs" or popup windows on top of the map should be easy to implement. Cross-browser audio support (it is implemented quite well in Construct 2 from Scirra, so I'm looking for comparable audio quality). The game itself is a kind of RPG, but without fighting scenes and without a huge "inventory". The main character just walks around the map, discovers some things, and there are dialogs and sounds. The functionality of this example from sprite.js, http://batiste.dosimple.ch/sprite.js/tests/mapeditor/map_reader.html, is very close to what I'm developing. But I'm not a JavaScript guru (and a very lazy guy) and would like to write even less JavaScript code than in the example...

    Read the article

  • How to break terrain (in Blender) into chunks for a game engine

    - by Red
    I've created an island in Blender, 2048x2048 Blender units. The engine developer wants me to split the terrain into 128x128 "chunks", so that would be 16x16 "chunks" from a top-down view. The engine isn't using height maps, in order to allow caves, 3D overhangs, etc. I'm not in charge of the engine, so suggestions about that aren't needed here :/ I just need a good, seamless way to split a terrain mesh into 16x16 pieces without leaving holes. I'm very new to Blender, so baby steps are very much welcomed :) Here's a quick render of what I've got so far. I added a plane to show sea level, but I've since removed it. http://i.imgur.com/qTsoC.png

    Read the article

  • How to set TextureFilter to Point to make example Bloom filter work?

    - by Mr Bell
    I have simple app that renders some particles and now I am trying to apply the bloom shader from the xna samplers ( http://create.msdn.com/en-US/education/catalog/sample/bloom ) to it, but I am running into this exception: "XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4." When the BloomComponent tries to end the sprite batch in the DrawFullscreenQuad method: spriteBatch.Begin(0, BlendState.Opaque, SamplerState.PointWrap, null, null, effect); spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White); spriteBatch.End(); //<------- Exception thrown here It seems to be related to the pixel shaders that I am using to animate the particle. In a nutshell, I have a texture2d in vector4 format that holds particle positions, and another one for velocities. Here is a snippet from that area: GraphicsDevice.SetRenderTarget(tempRenderTarget); animationEffect.CurrentTechnique = animationEffect.Techniques[technique]; spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointWrap, DepthStencilState.DepthRead, RasterizerState.CullNone, animationEffect); spriteBatch.Draw(randomValues, new Rectangle(0, 0, width, height), Color.White); spriteBatch.End(); What I comment out the code that calls the particle animation pixel shaders the bloom component runs fine. Is there some state that I need to reset to make the bloom work?

    Read the article

  • Calculate the Intersection of Two Volumes

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves will also be dynamic. My first thought was to use raycasting detection, though I've only seen that in use with circles, or even ellipses. I'm curious if there's a method of using raycasting with a more rectangular approach, or if there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking if each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function-template in place, and I could use that, though it only checks if sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine it would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle without checking every single pixel on-screen.
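
    (For the axis-aligned cases the overlap can be computed in closed form rather than per pixel: the intersection of two rectangles is itself a rectangle, and a circle-rectangle test only needs the circle's centre clamped to the rectangle. A hedged C++ sketch with home-made structs, not SDL2's own types.)

```cpp
#include <algorithm>

struct Rect   { float x, y, w, h; };                 // top-left corner + size
struct Circle { float cx, cy, r; };

// Overlap of two axis-aligned rectangles; returns false if they don't touch.
bool intersect(const Rect& a, const Rect& b, Rect& out)
{
    float left   = std::max(a.x, b.x);
    float top    = std::max(a.y, b.y);
    float right  = std::min(a.x + a.w, b.x + b.w);
    float bottom = std::min(a.y + a.h, b.y + b.h);
    if (right <= left || bottom <= top) return false;
    out = { left, top, right - left, bottom - top };
    return true;
}

// Circle vs. rectangle: clamp the centre to the rectangle and compare the
// distance to the radius. The circle's bounding box intersected with 'a'
// (via intersect above) bounds the affected region without testing pixels.
bool intersects(const Rect& a, const Circle& c)
{
    float nearestX = std::max(a.x, std::min(c.cx, a.x + a.w));
    float nearestY = std::max(a.y, std::min(c.cy, a.y + a.h));
    float dx = c.cx - nearestX;
    float dy = c.cy - nearestY;
    return dx * dx + dy * dy <= c.r * c.r;
}
```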

    Read the article

  • 2D tower defense - A bullet to an enemy

    - by Tashu
    I'm trying to find a good solution for a bullet to hit the enemy. The game is a 2D tower defense; the tower is supposed to shoot a bullet that is guaranteed to hit the enemy. I tried this solution - http://blog.wolfire.com/2009/07/linear-algebra-for-game-developers-part-1/ - which suggests subtracting the bullet's origin from the enemy's position (vector subtraction). I tried that, but the bullet just follows the enemy around. float diffX = enemy.position.x - position.x; float diffY = enemy.position.y - position.y; velocity.x = diffX; velocity.y = diffY; position.add(velocity.x * deltaTime, velocity.y * deltaTime); I'm familiar with vectors, but I'm not sure what vector math operations need to be done to get this working.
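
    (The missing step is usually normalisation: using the raw difference as the velocity makes the speed proportional to the distance, so the bullet chases but never settles. A hedged C++ sketch of constant-speed homing; Vec2 and the speed parameter are placeholders.)

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Move the bullet toward the enemy at a constant speed. The raw difference
// vector shrinks as the bullet closes in, so using it directly as the
// velocity never quite arrives; normalising it and scaling by a fixed speed
// gives straight, constant-rate motion toward the current enemy position.
void updateBullet(Vec2& bulletPos, const Vec2& enemyPos,
                  float bulletSpeed, float dt)
{
    float dx  = enemyPos.x - bulletPos.x;
    float dy  = enemyPos.y - bulletPos.y;
    float len = std::sqrt(dx * dx + dy * dy);

    float step = bulletSpeed * dt;
    if (len <= step || len == 0.0f) {        // close enough: snap to the target (a hit)
        bulletPos = enemyPos;
        return;
    }
    bulletPos.x += dx / len * step;          // unit direction * speed * dt
    bulletPos.y += dy / len * step;
}
```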

    Read the article

  • Why would GLCapabilities.setHardwareAccelerated(true/false) have no effect on performance?

    - by Luke
    I've got a JOGL application in which I am rendering 1 million textures (all the same texture) and 1 million lines between those textures. Basically it's a ball-and-stick graph. I am storing the vertices in a vertex array on the card and referencing them via index arrays, which are also stored on the card. Each pass through the draw loop I am basically doing this: gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0); gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glDrawElements(GL.GL_LINES, <size>, GL.GL_UNSIGNED_INT, 0); I noticed that the JOGL library is pegging one of my CPU cores. Every frame, the run method internal to the library is taking quite long. I'm not sure why this is happening since I have called setHardwareAccelerated(true) on the GLCapabilities used to create my canvas. What's more interesting is that I changed it to setHardwareAccelerated(false) and there was no impact on the performance at all. Is it possible that my code is not using hardware rendering even when it is set to true? Is there any way to check? EDIT: As suggested, I have tested breaking my calls up into smaller chunks. I have tried using glDrawRangeElements and respecting the limits that it requests. All of these simply resulted in the same pegged CPU usage and worse framerates. I have also narrowed the problem down to a simpler example where I just render 4 million textures (no lines). The draw loop then just doing this: gl.glEnableClientState(GL.GL_VERTEX_ARRAY); gl.glEnableClientState(GL.GL_INDEX_ARRAY); gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT); gl.glMatrixMode(GL.GL_MODELVIEW); gl.glLoadIdentity(); <... Camera and transform related code ...> gl.glEnableVertexAttribArray(0); gl.glEnable(GL.GL_TEXTURE_2D); gl.glAlphaFunc(GL.GL_GREATER, ALPHA_TEST_LIMIT); gl.glEnable(GL.GL_ALPHA_TEST); <... Bind texture ...> gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0); gl.glDisable(GL.GL_TEXTURE_2D); gl.glDisable(GL.GL_ALPHA_TEST); gl.glDisableVertexAttribArray(0); gl.glFlush(); Where the first buffer contains 12 million floats (the x,y,z coords of the 4 million textures) and the second (element) buffer contains 4 million integers. In this simple example it is simply the integers 0 through 3999999. I really want to know what is being done in software that is pegging my CPU, and how I can make it stop (if I can). 
My buffers are generated by the following code: gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>); gl.glBufferData(GL.GL_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_FLOAT, <buffer>, GL.GL_STATIC_DRAW); gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 0, 0); and: gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>); gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_INT, <buffer>, GL.GL_STATIC_DRAW); ADDITIONAL INFO: Here is my initialization code: gl.setSwapInterval(1); //Also tried 0 gl.glShadeModel(GL.GL_SMOOTH); gl.glClearDepth(1.0f); gl.glEnable(GL.GL_DEPTH_TEST); gl.glDepthFunc(GL.GL_LESS); gl.glHint(GL.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_FASTEST); gl.glPointParameterfv(GL.GL_POINT_DISTANCE_ATTENUATION, POINT_DISTANCE_ATTENUATION, 0); gl.glPointParameterfv(GL.GL_POINT_SIZE_MIN, MIN_POINT_SIZE, 0); gl.glPointParameterfv(GL.GL_POINT_SIZE_MAX, MAX_POINT_SIZE, 0); gl.glPointSize(POINT_SIZE); gl.glTexEnvf(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE); gl.glEnable(GL.GL_POINT_SPRITE); gl.glClearColor(clearColor.getX(), clearColor.getY(), clearColor.getZ(), 0.0f); Also, I'm not sure if this helps or not, but when I drag the entire graph off the screen, the FPS shoots back up and the CPU usage falls to 0%. This seems obvious and intuitive to me, but I thought that might give a hint to someone else.

    Read the article

  • Collision detection - Smooth wall sliding, no bounce effect

    - by Joey
    I'm working on a basic collision detection system that provides point - OBB collision detection. I have around 200 cubes in my environment and I check (for now) each of them in turn and see if it collides. If it does I return the colliding face's normal, save the old player position and do some trigonometry to return a new player position for my wall sliding. edit I'll define my meaning of wall sliding: If a player walks in a vertical slope and has a slight horizontal rotation to the left or the right and keeps walking forward in the wall the player should slide a little to the right/left while continually walking towards the wall till he left the wall. Thus, sliding along the wall. Everything works fine and with multiple objects as well but I still have one problem I can't seem to figure out: smooth wall sliding. In my current implementation sliding along the walls make my player bounce like a mad man (especially noticable with gravity on and moving forward). I have a velocity/direction vector, a normal vector from the collided plane and an old and new player position. First I negate the normal vector and get my new velocity vector by substracting the inverted normal from my direction vector (which is the vector to slide along the wall) and I add this vector to my new Player position and recalculate the direction vector (in case I have multiple collisions). I know I am missing some step but I can't seem to figure it out. Here is my code for the collision detection (run every frame): Vector direction; Vector newPos(camera.GetOriginX(), camera.GetOriginY(), camera.GetOriginZ()); direction = newPos - oldPos; // Direction vector // Check for collision with new position for(int i = 0; i < NUM_OBJECTS; i++) { Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z); if(normal != Vector::NullVector()) { // Get inverse normal (direction STRAIGHT INTO wall) Vector invNormal = normal.Negative(); Vector wallDir = direction - invNormal; // We know INTO wall, and DIRECTION to wall. Substract these and you got slide WALL direction newPos = oldPos + wallDir; direction = newPos - oldPos; } } Any help would be greatly appreciated! FIX I eventually got things up and running how they should thanks to Krazy, I'll post the updated code listing in case someone else comes upon this problem! for(int i = 0; i < NUM_OBJECTS; i++) { Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z); if(normal != Vector::NullVector()) { Vector invNormal = normal.Negative(); invNormal = invNormal * (direction * normal).Length(); // Change normal to direction's length and normal's axis Vector wallDir = direction - invNormal; newPos = oldPos + wallDir; direction = newPos - oldPos; } }
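
    (For reference, the standard "collide and slide" response in vector terms: remove only the component of the motion that points into the wall, motion - (motion . n) * n, and keep the tangential part, which is why no bounce gets added back. A hedged C++ sketch with a minimal Vec3; it mirrors the fix in the listing above, where the inverted normal is scaled by the projected length before subtracting.)

```cpp
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vec3 operator*(float s)       const { return { x * s, y * s, z * s }; }
};

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Classic slide response: strip the component of the motion that points
// into the wall, keep the tangential part. 'normal' must be unit length
// and point away from the wall surface.
Vec3 slideAlongWall(const Vec3& motion, const Vec3& normal)
{
    float intoWall = dot(motion, normal);     // negative when moving into the wall
    if (intoWall >= 0.0f) return motion;      // already moving away, nothing to do
    return motion - normal * intoWall;        // motion - (motion . n) * n
}
```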

    Read the article

  • Where do you search/look for game developers for an indie game startup?

    - by G.Campos
    Hey there I just recently saw stackoverflow had a game dev sister site so here I am, wondering if you experienced fellows know where one can search/look for game developers for an indie game startup? In other words: I have a game idea which I've written down with as much detail as possible (so anyone else can understand how it works) and now I'm looking for a heavy php programmer with whom to pair up in order to go from idea to reality. I'm a front-end/interface designer and an intermediate programmer. I recognize my project requires heavy programming skills which I do not have as of today =) So, what websites, communities or places do you recommend I go look into? Where do good programmers interested in indie games go look for projects if they don't have their own? Thanks in advance G.Campos

    Read the article

  • Physics engine that can handle multiple attractors?

    - by brice
    I'm putting together a game that will be played mostly with three dimensional gravity. By that I mean multiple planets/stars/moons behaving realistically, and path plotting and path prediction in the gravity field. I have looked at a variety of physics engines, such as Bullet, tokamak or Newton, but none of them seem to be suitable, as I'd essentially have to re-write the gravity engine in their framework. Do you know of a physics engine that is capable of dealing with multiple bodies all attracted to one another? I don't need scenegraph management, or rendering, just core physics. (collision detection would be a bonus, as would rigid body dynamics). My background is in physics, so I would be able to write an engine that uses Verlet integration or RK4 (or even Euler integration, if I had to) but I'd much rather adapt an off the shelf solution. [edit]: There are some great resources for physics simulation of n-body problems online, and on stackoverflow
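
    (If rolling your own core, the gravity part is compact: accumulate pairwise accelerations each step and advance with a symplectic integrator so orbits stay stable. A hedged C++ sketch using semi-implicit Euler; the Body struct, units and softening term are assumptions.)

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

struct Body {
    double mass;
    Vec3   pos;
    Vec3   vel;
    Vec3   acc;        // accumulated gravitational acceleration this step
};

const double G = 6.674e-11;   // gravitational constant (use game units in practice)

// Accumulate the pull of every body on every other body (O(n^2), fine for a
// handful of planets/moons), then advance with semi-implicit Euler, which
// stays stable for orbits where plain explicit Euler slowly spirals outward.
void step(std::vector<Body>& bodies, double dt)
{
    for (Body& b : bodies) b.acc = { 0.0, 0.0, 0.0 };

    for (size_t i = 0; i < bodies.size(); ++i) {
        for (size_t j = i + 1; j < bodies.size(); ++j) {
            Vec3 d = { bodies[j].pos.x - bodies[i].pos.x,
                       bodies[j].pos.y - bodies[i].pos.y,
                       bodies[j].pos.z - bodies[i].pos.z };
            double r2 = d.x * d.x + d.y * d.y + d.z * d.z + 1e-9;  // softening
            double r  = std::sqrt(r2);
            double f  = G / (r2 * r);               // G / r^3, masses applied below
            bodies[i].acc.x += f * bodies[j].mass * d.x;
            bodies[i].acc.y += f * bodies[j].mass * d.y;
            bodies[i].acc.z += f * bodies[j].mass * d.z;
            bodies[j].acc.x -= f * bodies[i].mass * d.x;
            bodies[j].acc.y -= f * bodies[i].mass * d.y;
            bodies[j].acc.z -= f * bodies[i].mass * d.z;
        }
    }

    for (Body& b : bodies) {                        // semi-implicit (symplectic) Euler
        b.vel.x += b.acc.x * dt;  b.vel.y += b.acc.y * dt;  b.vel.z += b.acc.z * dt;
        b.pos.x += b.vel.x * dt;  b.pos.y += b.vel.y * dt;  b.pos.z += b.vel.z * dt;
    }
}
```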

    Read the article

  • Custom Gesture in cocos2d

    - by Lewis
    I've found a little tutorial that would be useful for my game: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/ But I can't work out how to convert that gesture to work with cocos2d, I have found examples of pre made gestures in cocos2d, but no custom ones, is it possible? EDIT STILL HAVING PROBLEMS WITH THIS: I've added the code from Sentinel below (from SO), the Gesture and RotateGesture have both been added to my solution and are compiling. Although In the rotation class now I only see selectors, how do I set those up? As the custom gesture found in that project above looks like: header file for custom gesture: #import <Foundation/Foundation.h> #import <UIKit/UIGestureRecognizerSubclass.h> @protocol OneFingerRotationGestureRecognizerDelegate <NSObject> @optional - (void) rotation: (CGFloat) angle; - (void) finalAngle: (CGFloat) angle; @end @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer { CGPoint midPoint; CGFloat innerRadius; CGFloat outerRadius; CGFloat cumulatedAngle; id <OneFingerRotationGestureRecognizerDelegate> target; } - (id) initWithMidPoint: (CGPoint) midPoint innerRadius: (CGFloat) innerRadius outerRadius: (CGFloat) outerRadius target: (id) target; - (void)reset; - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event; - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event; @end .m for custom gesture file: #include <math.h> #import "OneFingerRotationGestureRecognizer.h" @implementation OneFingerRotationGestureRecognizer // private helper functions CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2); CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB); - (id) initWithMidPoint: (CGPoint) _midPoint innerRadius: (CGFloat) _innerRadius outerRadius: (CGFloat) _outerRadius target: (id <OneFingerRotationGestureRecognizerDelegate>) _target { if ((self = [super initWithTarget: _target action: nil])) { midPoint = _midPoint; innerRadius = _innerRadius; outerRadius = _outerRadius; target = _target; } return self; } /** Calculates the distance between point1 and point 2. 
*/ CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2) { CGFloat dx = point1.x - point2.x; CGFloat dy = point1.y - point2.y; return sqrt(dx*dx + dy*dy); } CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA, CGPoint beginLineB, CGPoint endLineB) { CGFloat a = endLineA.x - beginLineA.x; CGFloat b = endLineA.y - beginLineA.y; CGFloat c = endLineB.x - beginLineB.x; CGFloat d = endLineB.y - beginLineB.y; CGFloat atanA = atan2(a, b); CGFloat atanB = atan2(c, d); // convert radiants to degrees return (atanA - atanB) * 180 / M_PI; } #pragma mark - UIGestureRecognizer implementation - (void)reset { [super reset]; cumulatedAngle = 0; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesBegan:touches withEvent:event]; if ([touches count] != 1) { self.state = UIGestureRecognizerStateFailed; return; } } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; if (self.state == UIGestureRecognizerStateFailed) return; CGPoint nowPoint = [[touches anyObject] locationInView: self.view]; CGPoint prevPoint = [[touches anyObject] previousLocationInView: self.view]; // make sure the new point is within the area CGFloat distance = distanceBetweenPoints(midPoint, nowPoint); if ( innerRadius <= distance && distance <= outerRadius) { // calculate rotation angle between two points CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint); // fix value, if the 12 o'clock position is between prevPoint and nowPoint if (angle > 180) { angle -= 360; } else if (angle < -180) { angle += 360; } // sum up single steps cumulatedAngle += angle; // call delegate if ([target respondsToSelector: @selector(rotation:)]) { [target rotation:angle]; } } else { // finger moved outside the area self.state = UIGestureRecognizerStateFailed; } } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesEnded:touches withEvent:event]; if (self.state == UIGestureRecognizerStatePossible) { self.state = UIGestureRecognizerStateRecognized; if ([target respondsToSelector: @selector(finalAngle:)]) { [target finalAngle:cumulatedAngle]; } } else { self.state = UIGestureRecognizerStateFailed; } cumulatedAngle = 0; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesCancelled:touches withEvent:event]; self.state = UIGestureRecognizerStateFailed; cumulatedAngle = 0; } @end Then its initialised like this: // calculate center and radius of the control CGPoint midPoint = CGPointMake(image.frame.origin.x + image.frame.size.width / 2, image.frame.origin.y + image.frame.size.height / 2); CGFloat outRadius = image.frame.size.width / 2; // outRadius / 3 is arbitrary, just choose something >> 0 to avoid strange // effects when touching the control near of it's center gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc] initWithMidPoint: midPoint innerRadius: outRadius / 3 outerRadius: outRadius target: self]; [self.view addGestureRecognizer: gestureRecognizer]; The selector below is also in the same file where the initialisation of the gestureRecogonizer: - (void) rotation: (CGFloat) angle { // calculate rotation angle imageAngle += angle; if (imageAngle > 360) imageAngle -= 360; else if (imageAngle < -360) imageAngle += 360; // rotate image and update text field image.transform = CGAffineTransformMakeRotation(imageAngle * M_PI / 180); [self updateTextDisplay]; } I can't seem to get this working in the RotateGesture class can anyone help me please 
I've been stuck on this for days now. SECOND EDIT: Here is the users code from SO that was suggested to me: Here is projec on GitHub: SFGestureRecognizers It uses builded in iOS UIGestureRecognizer, and don't needs to be integrated into cocos2d sources. Using it, You can make any gestures, just like you could, if you whould work with UIGestureRecognizer. For example: I made a base class Gesture, and subclassed it for any new gesture: //Gesture.h @interface Gesture : NSObject <UIGestureRecognizerDelegate> { UIGestureRecognizer *gestureRecognizer; id delegate; SEL preSolveSelector; SEL possibleSelector; SEL beganSelector; SEL changedSelector; SEL endedSelector; SEL cancelledSelector; SEL failedSelector; BOOL preSolveAvailable; CCNode *owner; } - (id)init; - (void)addGestureRecognizerToNode:(CCNode*)node; - (void)removeGestureRecognizerFromNode:(CCNode*)node; -(void)recognizer:(UIGestureRecognizer*)recognizer; @end //Gesture.m #import "Gesture.h" @implementation Gesture - (id)init { if (!(self = [super init])) return self; preSolveAvailable = YES; return self; } - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer { return YES; } - (BOOL)gestureRecognizer:(UIGestureRecognizer *)recognizer shouldReceiveTouch:(UITouch *)touch { //! For swipe gesture recognizer we want it to be executed only if it occurs on the main layer, not any of the subnodes ( main layer is higher in hierarchy than children so it will be receiving touch by default ) if ([recognizer class] == [UISwipeGestureRecognizer class]) { CGPoint pt = [touch locationInView:touch.view]; pt = [[CCDirector sharedDirector] convertToGL:pt]; for (CCNode *child in owner.children) { if ([child isNodeInTreeTouched:pt]) { return NO; } } } return YES; } - (void)addGestureRecognizerToNode:(CCNode*)node { [node addGestureRecognizer:gestureRecognizer]; owner = node; } - (void)removeGestureRecognizerFromNode:(CCNode*)node { [node removeGestureRecognizer:gestureRecognizer]; } #pragma mark - Private methods -(void)recognizer:(UIGestureRecognizer*)recognizer { CCNode *node = recognizer.node; if (preSolveSelector && preSolveAvailable) { preSolveAvailable = NO; [delegate performSelector:preSolveSelector withObject:recognizer withObject:node]; } UIGestureRecognizerState state = [recognizer state]; if (state == UIGestureRecognizerStatePossible && possibleSelector) { [delegate performSelector:possibleSelector withObject:recognizer withObject:node]; } else if (state == UIGestureRecognizerStateBegan && beganSelector) [delegate performSelector:beganSelector withObject:recognizer withObject:node]; else if (state == UIGestureRecognizerStateChanged && changedSelector) [delegate performSelector:changedSelector withObject:recognizer withObject:node]; else if (state == UIGestureRecognizerStateEnded && endedSelector) { preSolveAvailable = YES; [delegate performSelector:endedSelector withObject:recognizer withObject:node]; } else if (state == UIGestureRecognizerStateCancelled && cancelledSelector) { preSolveAvailable = YES; [delegate performSelector:cancelledSelector withObject:recognizer withObject:node]; } else if (state == UIGestureRecognizerStateFailed && failedSelector) { preSolveAvailable = YES; [delegate performSelector:failedSelector withObject:recognizer withObject:node]; } } @end Subclass example: //RotateGesture.h #import "Gesture.h" @interface RotateGesture : Gesture - (id)initWithTarget:(id)target preSolveSelector:(SEL)preSolve 
possibleSelector:(SEL)possible beganSelector:(SEL)began changedSelector:(SEL)changed endedSelector:(SEL)ended cancelledSelector:(SEL)cancelled failedSelector:(SEL)failed; @end //RotateGesture.m #import "RotateGesture.h" @implementation RotateGesture - (id)initWithTarget:(id)target preSolveSelector:(SEL)preSolve possibleSelector:(SEL)possible beganSelector:(SEL)began changedSelector:(SEL)changed endedSelector:(SEL)ended cancelledSelector:(SEL)cancelled failedSelector:(SEL)failed { if (!(self = [super init])) return self; preSolveSelector = preSolve; delegate = target; possibleSelector = possible; beganSelector = began; changedSelector = changed; endedSelector = ended; cancelledSelector = cancelled; failedSelector = failed; gestureRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(recognizer:)]; gestureRecognizer.delegate = self; return self; } @end Use example: - (void)addRotateGesture { RotateGesture *rotateRecognizer = [[RotateGesture alloc] initWithTarget:self preSolveSelector:@selector(rotateGesturePreSolveWithRecognizer:node:) possibleSelector:nil beganSelector:@selector(rotateGestureStateBeganWithRecognizer:node:) changedSelector:@selector(rotateGestureStateChangedWithRecognizer:node:) endedSelector:@selector(rotateGestureStateEndedWithRecognizer:node:) cancelledSelector:@selector(rotateGestureStateCancelledWithRecognizer:node:) failedSelector:@selector(rotateGestureStateFailedWithRecognizer:node:)]; [rotateRecognizer addGestureRecognizerToNode:movableAreaSprite]; } I dont understand how to implement the custom gesture code at the start of this post into the rotateGesture class which is a subclass of the gesture class written by the SO user. Any ideas please? When I get 6 more rep I'll add a bounty to this.

    Read the article

  • Client-side anti-cheating in a 1v1 multiplayer game

    - by garnav
    I'm developing a simple card game where a matchmaking system puts you up against another human player. This will be the only game mode available: 1v1 against another human, no AI. I want to prevent cheating as much as possible. I have already read a lot of similar questions here, and I already know that I cannot trust the client and have to make all verifications server side. I intend to have a server (I need one for the matchmaking anyway) and to make some verifications server side, but if I want to check everything server side, my server would have to keep track of the state of all current games and check every action, and I don't have the money/infrastructure to support that server. My idea is to make clients check and verify some of the actions made by their opponent* and, if they find an illegal action, report the possible cheating to the server and have the server verify it. This will still require my server to keep track of all current games, but it will save resources by only checking some things that cannot be checked client side (like the card order in the deck) and only checking other things when they are actually reported as wrong. *(Only the things they can check without being able to cheat themselves! For example, they can't check whether the played card was actually in the opponent's hand, because that would require them to know all the cards in that hand.) Summing up, my questions are: is this a viable approach? Will I actually save resources doing this, or is the extra complexity in the server and client for exchanging these messages not worth it? Do you know any game that has successfully or unsuccessfully tried a similar approach? Thanks all for reading and answering.

    Read the article

  • Normal maps red in OpenGL?

    - by KaiserJohaan
    I am using Assimp to import 3d models, and FreeImage to parse textures. The problem I am having is that the normal maps are actually red rather than blue when I try to render them as normal diffuse textures. http://i42.tinypic.com/289ing3.png When I open the images in a image-viewing program they do indeed show up as blue. Heres when I create the texture; OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight, TextureType textureType, Logger& logger) : mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType) { glGenTextures(1, &mTexture); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, mTexture); CHECK_GL_ERROR(mLogger); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]); CHECK_GL_ERROR(mLogger); glGenerateMipmap(GL_TEXTURE_2D); CHECK_GL_ERROR(mLogger); glBindTexture(GL_TEXTURE_2D, 0); CHECK_GL_ERROR(mLogger); } Here is my fragment shader. You can see I just commented out the normal-map parsing and treated the normal map texture as the diffuse texture to display it and illustrate the problem. As for the rest of the code it interacts as expected with the diffuse textures so I dont see a obvious problem there. "#version 330 \n \ \n \ layout(std140) uniform; \n \ \n \ const int MAX_LIGHTS = 8; \n \ \n \ struct Light \n \ { \n \ vec4 mLightColor; \n \ vec4 mLightPosition; \n \ vec4 mLightDirection; \n \ \n \ int mLightType; \n \ float mLightIntensity; \n \ float mLightRadius; \n \ float mMaxDistance; \n \ }; \n \ \n \ uniform UnifLighting \n \ { \n \ vec4 mGamma; \n \ vec3 mViewDirection; \n \ int mNumLights; \n \ \n \ Light mLights[MAX_LIGHTS]; \n \ } Lighting; \n \ \n \ uniform UnifMaterial \n \ { \n \ vec4 mDiffuseColor; \n \ vec4 mAmbientColor; \n \ vec4 mSpecularColor; \n \ vec4 mEmissiveColor; \n \ \n \ bool mHasDiffuseTexture; \n \ bool mHasNormalTexture; \n \ bool mLightingEnabled; \n \ float mSpecularShininess; \n \ } Material; \n \ \n \ uniform sampler2D unifDiffuseTexture; \n \ uniform sampler2D unifNormalTexture; \n \ \n \ in vec3 frag_position; \n \ in vec3 frag_normal; \n \ in vec2 frag_texcoord; \n \ in vec3 frag_tangent; \n \ in vec3 frag_bitangent; \n \ \n \ out vec4 finalColor; " " \n \ \n \ void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \ { \n \ vec3 viewDirection = normalize(Lighting.mViewDirection); \n \ vec3 halfAngle = normalize(dirToLight + viewDirection); \n \ \n \ float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \ float exponent = angleNormalHalf / Material.mSpecularShininess; \n \ exponent = -(exponent * exponent); \n \ \n \ gaussianTerm = exp(exponent); \n \ } \n \ \n \ vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \ { \n \ if (light.mLightType == 1) // point light \n \ { \n \ vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \ float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \ \n \ float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \ attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \ \n \ vec3 dirToLight = normalize(positionDiff); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (attenuation * angleNormal * 
Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 2) // directional light \n \ { \n \ vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \ float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \ \n \ float gaussianTerm = 0.0; \n \ if (angleNormal > 0.0) \n \ CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \ \n \ return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \ (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \ } \n \ else if (light.mLightType == 4) // ambient light \n \ return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \ else \n \ return vec4(0.0); \n \ } \n \ \n \ void main() \n \ { \n \ vec4 diffuseTexture = vec4(1.0); \n \ if (Material.mHasDiffuseTexture) \n \ diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \ \n \ vec3 normal = frag_normal; \n \ if (Material.mHasNormalTexture) \n \ { \n \ diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \ // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \ //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \ \n \ // normal = tangentToWorldSpace * normalTangentSpace; \n \ } \n \ \n \ if (Material.mLightingEnabled) \n \ { \n \ vec4 accumLighting = vec4(0.0); \n \ \n \ for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \ accumLighting += Material.mEmissiveColor * diffuseTexture + \n \ CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \ \n \ finalColor = pow(accumLighting, Lighting.mGamma); \n \ } \n \ else { \n \ finalColor = pow(diffuseTexture, Lighting.mGamma); \n \ } \n \ } \n"; Why is this? does normal-map textures need some sort of special treatment in opengl?
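
    (One common cause when pairing FreeImage with OpenGL: on little-endian platforms FreeImage stores pixels in BGR(A) order, so uploading the raw bits as GL_RGBA swaps red and blue, and a normal map whose pixels are dominated by blue shows up red. A hedged C++ sketch of an upload with the source format spelled out; the helper and the GLEW include are assumptions, not the question's actual loader code.)

```cpp
#include <GL/glew.h>   // assumed GL loader; GL_BGRA needs GL 1.2+/glext
#include <cstdint>
#include <vector>

// Hypothetical upload helper: 'pixels' holds raw 32-bit data copied from a
// FreeImage bitmap. On little-endian platforms FreeImage lays channels out
// as BGRA, so tell OpenGL that instead of GL_RGBA; otherwise red and blue
// are swapped and a (128,128,255) normal map renders red.
GLuint uploadTexture(const std::vector<std::uint8_t>& pixels,
                     int width, int height, bool sourceIsBGRA)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    GLenum sourceFormat = sourceIsBGRA ? GL_BGRA : GL_RGBA;  // pixel layout in memory
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,                  // internal format stays RGBA
                 width, height, 0, sourceFormat, GL_UNSIGNED_BYTE, pixels.data());

    glBindTexture(GL_TEXTURE_2D, 0);
    return tex;
}
```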

    Read the article

  • Adding Vertices to a dynamic mesh via Method Call

    - by Raven Dreamer
    I have a C# Struct with a static method, "Get Shape" which populates a List with the vertices of a polyhedron. Method Signature: public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s) Adding directly to the vertices list (via vertices.Add(vector3) ), the code works as expected, and the new polyhedron appears when I trigger the method. However, I want to do some processing on the vertices I'm adding (a rotation), and the most sensible way I can think to do that is by creating a separate list of Vector3s, and then combining the lists when I'm done. However, vertices.AddRange(newVerts) does not add the shape to the mesh, nor does a foreach loop with verts.Add(vertices[i]). And this is before I've added in any of the processing! I have a feeling this might stem from passing the list of vertices in as a parameter, rather than returning a list and then adding to the vertices in the calling object, but since I'm filling 4 lists, I was trying to avoid having to create a data struct to return all four at once. Any ideas? The working version of the method is reprinted below, in full: public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s) { //List<Vector3> vertices = new List<Vector3>(); int l_blockShape = b.blockShape; int l_blockType = b.blockType; //CheckFace checks if the block is empty //if this block is empty, don't draw anything. int vertexIndex; //only y faces need to be hidden. //if((l_blockShape & BlockShape.NegZFace) == BlockShape.NegZFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.2f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //XY Z+1 face //if((l_blockShape & BlockShape.PosZFace) == BlockShape.PosZFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) 
uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZY face //if((l_blockShape & BlockShape.NegXFace) == BlockShape.NegXFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.2f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZY X+1 face // if((l_blockShape & BlockShape.PosXFace) == BlockShape.PosXFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZX face if((l_blockShape & BlockShape.NegYFace) == BlockShape.NegYFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex); // second triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZX + 1 face if((l_blockShape & BlockShape.PosYFace) == BlockShape.PosYFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y+1 , z+.2f)); vertices.Add(new Vector3(x+.8f, y+1 , z+.8f)); vertices.Add(new Vector3(x+.2f, y+1 , z+.8f)); vertices.Add(new Vector3(x+.2f, y+1 , z+.2f)); // first triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex); // second triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } }

    Read the article

  • Geometry shader for multiple primitives

    - by Byte56
    How can I create a geometry shader that can handle multiple primitives? For example when creating a geometry shader for triangles, I define a layout like so: layout(triangles) in; layout(triangle_strip, max_vertices=3) out; But if I use this shader then lines or points won't show up. So adding: layout(triangles) in; layout(triangle_strip, max_vertices=3) out; layout(lines) in; layout(line_strip, max_vertices=2) out; The shader will compile and run, but will only render lines (or whatever the last primitive defined is). So how do I define a single geometry shader that will handle multiple types of primitives? Or is that not possible and I need to create multiple shader programs and change shader programs before drawing each type?
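
    A geometry shader's input primitive type is fixed when the shader is compiled, so a single shader cannot accept triangles, lines and points at once; the usual route is the second option mentioned above: one program per primitive type, switched before each draw call. A minimal sketch of that idea, assuming OpenTK-style bindings (the program/VAO handles and vertex counts are hypothetical and would be created during initialization):

      // Minimal sketch: one shader program per input primitive type, switched before each draw.
      // Assumes OpenTK-style GL bindings; the handles and counts below are hypothetical.
      using OpenTK.Graphics.OpenGL4;

      class PrimitiveRenderer
      {
          int triangleProgram, lineProgram, pointProgram;   // each links its own geometry shader
          int triangleVao, lineVao, pointVao;
          int triangleVertexCount, lineVertexCount, pointVertexCount;

          public void Render()
          {
              GL.UseProgram(triangleProgram);               // geometry shader: layout(triangles) in
              GL.BindVertexArray(triangleVao);
              GL.DrawArrays(PrimitiveType.Triangles, 0, triangleVertexCount);

              GL.UseProgram(lineProgram);                   // geometry shader: layout(lines) in
              GL.BindVertexArray(lineVao);
              GL.DrawArrays(PrimitiveType.Lines, 0, lineVertexCount);

              GL.UseProgram(pointProgram);                  // geometry shader: layout(points) in
              GL.BindVertexArray(pointVao);
              GL.DrawArrays(PrimitiveType.Points, 0, pointVertexCount);
          }
      }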

    Read the article

  • How to store a shmup level?

    - by pek
    I am developing a 2D shmup (i.e. Aero Fighters) and I was wondering what are the various ways to store a level. Assuming that enemies are defined in their own xml file, how would you define when an enemy spawns in the level? Would it be based on time? Updates? Distance? Currently I do this based on "level time" (the amount of time the level is running - pausing doesn't update the time). Here is an example (the serialization was done by XNA): <?xml version="1.0" encoding="utf-8"?> <XnaContent xmlns:level="pekalicious.xanor.XanorContentShared.content.level"> <Asset Type="level:Level"> <Enemies> <Enemy> <EnemyType>data/enemies/smallenemy</EnemyType> <SpawnTime>PT0S</SpawnTime> <NumberOfSpawns>60</NumberOfSpawns> <SpawnOffset>PT0.2S</SpawnOffset> </Enemy> <Enemy> <EnemyType>data/enemies/secondenemy</EnemyType> <SpawnTime>PT0S</SpawnTime> <NumberOfSpawns>10</NumberOfSpawns> <SpawnOffset>PT0.5S</SpawnOffset> </Enemy> <Enemy> <EnemyType>data/enemies/secondenemy</EnemyType> <SpawnTime>PT20S</SpawnTime> <NumberOfSpawns>10</NumberOfSpawns> <SpawnOffset>PT0.5S</SpawnOffset> </Enemy> <Enemy> <EnemyType>data/enemies/boss1</EnemyType> <SpawnTime>PT30S</SpawnTime> <NumberOfSpawns>1</NumberOfSpawns> <SpawnOffset>PT0S</SpawnOffset> </Enemy> </Enemies> </Asset> </XnaContent> Each Enemy element is basically a wave of specific enemy types. The type is defined in EnemyType while SpawnTime is the "level time" this wave should appear. NumberOfSpawns and SpawnOffset is the number of enemies that will show up and the time it takes between each spawn respectively. This could be a good idea or there could be better ones out there. I'm not sure. I would like to see some opinions and ideas. I have two problems with this: spawning an enemy correctly and creating a level editor. The level editor thing is an entirely different problem (which I will probably post in the future :P). As for spawning correctly, the problem lies in the fact that I have a variable update time and so I need to make sure I don't miss an enemy spawn because the spawn offset is too small, or because the update took a little more time. I kinda fixed it for the most part, but it seems to me that the problem is with how I store the level. So, any ideas? Comments? Thank you in advance.
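
    One way to keep spawns from being skipped when the update step varies is to schedule each spawn from accumulated level time and loop until the wave has caught up, so a single long frame can emit several spawns at once. A minimal sketch under that assumption, with field names mirroring the XML above (the class itself is hypothetical, not the asker's code):

      // Minimal sketch: catch-up spawning driven by accumulated level time.
      // SpawnTime/SpawnOffset/NumberOfSpawns mirror the XML; the class is hypothetical.
      using System;

      class EnemyWave
      {
          public TimeSpan SpawnTime;       // level time at which the wave starts
          public TimeSpan SpawnOffset;     // delay between individual spawns
          public int NumberOfSpawns;       // total enemies in the wave
          int spawned;                     // how many have been emitted so far

          public void Update(TimeSpan levelTime, Action spawnEnemy)
          {
              // Emit every spawn whose scheduled time has already passed, even if
              // several of them fall inside one long update step.
              while (spawned < NumberOfSpawns &&
                     levelTime >= SpawnTime + TimeSpan.FromTicks(SpawnOffset.Ticks * spawned))
              {
                  spawnEnemy();
                  spawned++;
              }
          }
      }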

    Read the article

  • Something other than Vertex Welding with Texture Atlas?

    - by Tim Winter
    What options (in C# with XNA) would there be for texture usage in a procedural generated 3D world made of cubes to increase performance? Yes, it's like Minecraft. I've been doing a texture atlas and rendering faces individually (4 vertices per face), but I've also read in a couple places about using texture wrapping with two 1D atlases to merge adjacent faces with the same texture. If two or more adjacent faces share the same image, it'd be quite easy to wrap in this way reducing vertices by a large amount. My problem with this is having too many textures, swapping too often, and many image related things like non-power of 2 images. Is there a middle ground option between the 1D texture atlas trick and rendering 4 vertices per cube face? This is a picture of what I have currently (in wireframe). 4 vertices per face seems extremely inefficient to me.
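
    For the wrap trick mentioned above, a merged face simply gets UV coordinates larger than 1 along the merge axis and lets repeat addressing tile the texture across it; this only works when the atlas is a one-tile-wide strip, which is why two 1D atlases are needed to merge along both axes. A minimal sketch of the UV assignment, assuming XNA's Vector2 (the method name and parameters are hypothetical):

      // Minimal sketch: UVs for a face merged across `runLength` adjacent blocks that share
      // the same tile. Assumes a vertical strip atlas (one tile wide, tileCount tiles tall)
      // and wrap/repeat addressing on U. Names are hypothetical, not the asker's code.
      static Vector2[] MergedFaceUVs(int tileIndex, int tileCount, int runLength)
      {
          float v0 = (float)tileIndex / tileCount;         // bottom edge of this tile's row
          float v1 = (float)(tileIndex + 1) / tileCount;   // top edge of this tile's row

          return new[]
          {
              new Vector2(0f,        v1),   // top left
              new Vector2(runLength, v1),   // top right: U > 1 repeats the tile across the run
              new Vector2(runLength, v0),   // bottom right
              new Vector2(0f,        v0),   // bottom left
          };
      }

    With a wrapping sampler (for example SamplerState.LinearWrap in XNA), the tile then repeats runLength times across a single quad, replacing runLength individual faces.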

    Read the article

  • OpenGL: Attempt to allocate a texture to big for the current hardware

    - by AnonymousMan
    I'm getting the following error: java.io.IOException: Attempt to allocate a texture to big for the current hardware at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:320) at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:254) at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:200) at org.newdawn.slick.opengl.TextureLoader.getTexture(TextureLoader.java:64) at org.newdawn.slick.opengl.TextureLoader.getTexture(TextureLoader.java:24) The image I'm trying to use is 128x128. System.out.println(GL11.glGetInteger(GL11.GL_MAX_TEXTURE_SIZE)); I get: 32. 32??!! My graphics card is AMD Radeon HD 7970M with 2048 MB GDDR5 RAM, I can run all the latest games in 1080p and 60fps with no problem, and those textures sure as hell doesn't look like they are 32x32 pixels to me! How can I fix this? -- Edit: Here's the chaos code I use to init OpenGL: Display.setDisplayMode(new DisplayMode(500,500)); Display.create(); if (!GLContext.getCapabilities().OpenGL11) { throw new Exception("OpenGL 1.1 not supported."); } Display.setTitle("Game"); glMatrixMode(GL_PROJECTION); glLoadIdentity(); GLU.gluPerspective(45, 1, 0.1f, 5000); Mouse.setGrabbed(true); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glEnable(GL_TEXTURE_2D); glClearColor(0, 0, 0, 0); glEnable(GL_DEPTH_TEST); glDepthFunc(GL_LEQUAL); glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glEnable(GL_BLEND); glEnable(GL_POINT_SMOOTH); glEnable(GL_LINE_SMOOTH); glEnable(GL_POLYGON_SMOOTH); glEnable(GL_POLYGON_OFFSET_FILL); glShadeModel(GL_SMOOTH); Display is a LWJGL thing, it makes the OpenGL context and the window. Anyway, I don't think there's anything in the init code that can help me but you never know...

    Read the article

  • Camera won't stay behind model after pitch, then rotation


    - by ChocoMan
    I have a camera position behind a model. Currently, if I push the left thumbstick making my model move forward, backward, or strafe, the camera stays with the model. If I push the right thumbstick left or right, the model rotates in those directions fine along with the camera rotating while maintaining its position relatively behind the model. But when I pitch the model up or down, then rotate the model afterwards, the camera moves slightly rotates in a clock-like fashion behind the model. If I do a few rotations of the model and try to pitch the camera, the camera will eventually be looking at the side, then eventually the front of the model while also rotating in a clock-like fashion. My question is, how do I keep the camera to pitch up and down behind the model no matter how much the model has rotated? Here is what I got: // Rotates model and pitches camera on its own axis public void modelRotMovement(GamePadState pController) { // Rotates Camera with model Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(angularSpeed); // Pitches Camera around model Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(angularSpeed); AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0); ModelLoad.MRotation *= AddRotation; MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation); } // Orbit (yaw) Camera around with model (only seeing back of model) public void cameraYaw(Vector3 axisYaw, float yaw) { ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget, Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget; } // Raise camera above or below model's shoulders public void cameraPitch(Vector3 axisPitch, float pitch) { ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget, Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget; } // Call in update method public void updateCamera() { cameraYaw(Vector3.Up, Yaw); cameraPitch(Vector3.Right, Pitch); } NOTE: I tried to use addPitch just like addRotation but it didn't work...
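
    The clock-like drift usually comes from pitching around the fixed world axis (Vector3.Right) instead of the model's own right vector, so once the model has yawed, the pitch is no longer applied in its frame. A hedged sketch of one fix: keep a fixed camera offset in model space and rebuild the camera position each frame from the model's current orientation, pitching around the model's local right axis (cameraOffset and accumulatedPitch are hypothetical fields, not part of the original code):

      // Minimal sketch: position the camera from a fixed model-space offset rotated by the
      // model's orientation, and pitch around the model's *local* right axis.
      // cameraOffset and accumulatedPitch are hypothetical additions.
      Vector3 cameraOffset = new Vector3(0, 2, -10);   // behind and above the model, in model space
      float accumulatedPitch;                          // += Pitch each frame from the thumbstick

      public void updateCamera()
      {
          Matrix orientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);

          // The model's right axis in world space, so pitch follows the model's yaw.
          Matrix pitchRotation = Matrix.CreateFromAxisAngle(orientation.Right, accumulatedPitch);

          Vector3 offsetWorld = Vector3.Transform(cameraOffset, orientation * pitchRotation);
          ModelLoad.CameraPos = ModelLoad.camTarget + offsetWorld;
      }

    Because the offset is re-derived from MRotation every frame, yaw needs no separate cameraYaw step; only the pitch accumulator changes when the right thumbstick moves vertically.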

    Read the article
