Search Results

Search found 19182 results on 768 pages for 'game engine'.

  • Doing an SNES Mode 7 (affine transform) effect in pygame

    - by 2D_Guy
    Is there such a thing as a short answer on how to do a Mode 7 / Mario Kart-type effect in pygame? I have googled extensively; all the docs I can come up with are dozens of pages in other languages (asm, C) with lots of strange-looking equations and such. Ideally, I would like to find something explained more in English than in mathematical terms. I can use PIL or pygame to manipulate the image/texture, or whatever else is necessary. I would really like to achieve a Mode 7 effect in pygame, but I seem close to my wit's end. Help would be greatly appreciated. Any and all resources or explanations you can provide would be fantastic, even if they're not as simple as I'd like them to be. If I can figure it out, I'll write a definitive "Mode 7 for newbies" guide. Edit: Mode 7 doc: http://www.coranac.com/tonc/text/mode7.htm
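
    (For anyone landing here, the heart of the effect is a per-scanline affine lookup: every screen row below the horizon maps to one row of the ground plane, with nearer rows sampling closer to the camera. A deliberately slow, per-pixel pygame sketch of that idea; cam_height, horizon and focal are parameter names of my own invention:)

        import math
        import pygame

        def mode7(screen, texture, cam_x, cam_y, angle, cam_height=32.0, horizon=60):
            w, h = screen.get_size()
            tw, th = texture.get_size()
            focal = h / 2.0  # assumed focal length, in pixels
            sin_a, cos_a = math.sin(angle), math.cos(angle)
            for sy in range(horizon + 1, h):
                gz = cam_height * focal / (sy - horizon)  # ground depth of this row
                for sx in range(w):
                    gx = (sx - w / 2.0) * gz / focal      # lateral offset on the ground
                    # Rotate by the camera angle, translate by its position,
                    # and wrap so the ground texture tiles forever.
                    wx = int(cam_x + gx * cos_a - gz * sin_a) % tw
                    wy = int(cam_y + gx * sin_a + gz * cos_a) % th
                    screen.set_at((sx, sy), texture.get_at((wx, wy)))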

  • 2D map/plane with nodes overlaid that supports panning, scaling and clicking on nodes

    - by garlicman
    I'm trying my hand at Android development and seem to be running into an invisible ceiling in trying to get what I want accomplished. Basically I'm trying to create an app that renders a 2D surface map that I can (pinch) zoom and pan. I'll have to place nodes on the surface of the map that will scale/zoom and pan in relation to the surface. I started out with a 2D ImageView approach and got as far as pinch zoom, pan and laying nodes as relative ImageViews, but all the methods I tried to get X,Y,W,H for the 2D surface were always off for some reason. Additionally, I was never able to scale the node ImageViews correctly, and as a result never got far enough to try and work out their X,Y scaled offset. So I decided to get back to 3D rendering. Conceptually pan/zoom is camera manipulation, so I don't have to mess with how to scale the 2D map or the nodes. But I need a starting point or sample to get me going that's close to what I'm trying to achieve. A sample on a translucent spinning cube isn't helping as much as I need it to. Any tips? Links, insults and sympathy are all welcome!

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix and move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) when trying to perform directional lighting in my vertex shader. The problem is that the light direction, (0, 1, 0) for example, is relative to where the camera is looking and not the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudocode for my rendering loop:

        pMatrix = new Matrix();
        pMatrix = makePerspective(...);

        mvMatrix = new Matrix();
        camera.apply(mvMatrix);  // Calls gluLookAt

        // Move the object into position.
        mvMatrix.translatev(position);
        mvMatrix.rotatef(rotation.x, 1, 0, 0);
        mvMatrix.rotatef(rotation.y, 0, 1, 0);
        mvMatrix.rotatef(rotation.z, 0, 0, 1);

        var nMatrix = new Matrix();
        nMatrix.set(mvMatrix.get().getInverse().getTranspose());

        // Set vertex shader uniforms.
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false,
            new Float32Array(pMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false,
            new Float32Array(mvMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false,
            new Float32Array(nMatrix.getFlattened()));
        // ...
        gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

        // Attributes
        attribute vec3 aVertexPosition;
        attribute vec4 aVertexColor;
        attribute vec3 aVertexNormal;

        // Uniforms
        uniform mat4 uMVMatrix;
        uniform mat4 uNMatrix;
        uniform mat4 uPMatrix;

        // Varyings
        varying vec4 vColor;

        // Constants
        const vec3 LIGHT_DIRECTION = vec3(0, 1, 0);  // Opposite direction of photons.
        const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

        float ComputeLighting() {
            vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
            transformedNormal = uNMatrix * transformedNormal;
            float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
            return max(base, 0.0);
        }

        void main(void) {
            gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
            float lightWeight = ComputeLighting();
            vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
        }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL would be appreciated.
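
    (As a reading aid, not part of the question: the nMatrix above is the inverse transpose of the model-view's upper-left 3x3, and a world-fixed light direction has to be rotated into the same space as the transformed normals, typically with the view matrix's rotation. A numpy sketch of both pieces, with names of my own:)

        import numpy as np

        def normal_matrix(model_view):
            """Inverse transpose of the upper-left 3x3 of a 4x4 model-view matrix."""
            return np.linalg.inv(np.asarray(model_view)[:3, :3]).T

        def light_in_eye_space(view, light_world):
            """Rotate a world-space light direction into eye space, so it no
            longer follows the camera the way a shader constant does."""
            d = np.asarray(view)[:3, :3] @ np.asarray(light_world)
            return d / np.linalg.norm(d)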

  • Converting a mouse click to a ray

    - by Will
    I have a perspective projection. When the user clicks on the screen, I want to compute the ray between the near and far planes that projects from the mouse point, so I can do some ray-intersection code with my world. I am using my own matrix, vector, and ray classes, and they all work as expected. However, when I try to convert the ray to world coordinates, my far always ends up as 0,0,0, and so my ray goes from the mouse click to the centre of the object space rather than through it. (The x and y coordinates of near and far are identical; they differ only in the z coordinates, where they are negatives of each other.)

        GLint vp[4];
        glGetIntegerv(GL_VIEWPORT, vp);

        matrix_t mv, p;
        glGetFloatv(GL_MODELVIEW_MATRIX, mv.f);
        glGetFloatv(GL_PROJECTION_MATRIX, p.f);
        const matrix_t inv = (mv * p).inverse();

        const float unit_x = (2.0f * ((float)(x - vp[0]) / (vp[2] - vp[0]))) - 1.0f,
                    unit_y = 1.0f - (2.0f * ((float)(y - vp[1]) / (vp[3] - vp[1])));

        const vec_t near(vec_t(unit_x, unit_y, -1) * inv);
        const vec_t far(vec_t(unit_x, unit_y, 1) * inv);
        ray = ray_t(near, far - near);

    What have I got wrong? (How do you unproject the mouse point?)
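
    (A side note on the usual textbook unproject: after multiplying by the inverse matrix you still have to divide by the resulting w component, which the snippet above never does. A numpy sketch, assuming inv_pm is the inverse of projection * modelview for column vectors:)

        import numpy as np

        def unproject(win_x, win_y, ndc_z, inv_pm, viewport):
            """Map a window point (ndc_z = -1 near, +1 far) back to world space."""
            x0, y0, w, h = viewport
            ndc = np.array([2.0 * (win_x - x0) / w - 1.0,
                            1.0 - 2.0 * (win_y - y0) / h,
                            ndc_z,
                            1.0])
            p = inv_pm @ ndc
            return p[:3] / p[3]  # the perspective divide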

  • Physics like Asteroids

    - by user2933016
    I'm trying to make a ship that has physics like Asteroids. This is what I have so far (all in Java). The Ship class:

        public class Ship {
            public static final float sMaxHealth = 0.1F;
            public static final float sMaxMoveVelocity = 5.0F;
            public static final float sMaxAngleVelocity = 20.0F;
            public static final float sRadius = 1.0F;
            public static final float sMoveDeceleration = 10.0F;
            public static final float sMoveAcceleration = 2.0F;
            public static final float sAngleDeceleration = 15.0F;
            public static final float sAngleAcceleration = 20.0F;

            private float mHealth;
            private float mXVelocity;
            private float mYVelocity;
            private float mAngleVelocity;
            private float mX;
            private float mY;
            private float mAngle;
        }

    (I've left the getters and setters out for now.) Controller code:

        // Player input
        if (Gdx.input.isKeyPressed(Keys.UP)) {
            mPlayer.setXVelocity(mPlayer.getXVelocity()
                + (float) Math.cos(mPlayer.getAngle()) * Ship.sMoveAcceleration);
            mPlayer.setYVelocity(mPlayer.getYVelocity()
                + (float) Math.sin(mPlayer.getAngle()) * Ship.sMoveAcceleration);
        }
        if (Gdx.input.isKeyPressed(Keys.LEFT)) {
            mPlayer.setAngleVelocity(mPlayer.getAngleVelocity()
                + Ship.sAngleAcceleration * pDeltaTime);
        }
        if (Gdx.input.isKeyPressed(Keys.RIGHT)) {
            mPlayer.setAngleVelocity(mPlayer.getAngleVelocity()
                - Ship.sAngleAcceleration * pDeltaTime);
        }

        // X velocity
        if (mPlayer.getXVelocity() < 0) {
            if (-mPlayer.getXVelocity() > Ship.sMaxMoveVelocity) {
                mPlayer.setXVelocity(-Ship.sMaxMoveVelocity);
            }
            mPlayer.setXVelocity(mPlayer.getXVelocity() + Ship.sMoveDeceleration * pDeltaTime);
            if (mPlayer.getXVelocity() > 0) {
                mPlayer.setXVelocity(0);
            }
        } else if (mPlayer.getXVelocity() > 0) {
            if (mPlayer.getXVelocity() > Ship.sMaxMoveVelocity) {
                mPlayer.setXVelocity(Ship.sMaxMoveVelocity);
            }
            mPlayer.setXVelocity(mPlayer.getXVelocity() - Ship.sMoveDeceleration * pDeltaTime);
            if (mPlayer.getXVelocity() < 0) {
                mPlayer.setXVelocity(0);
            }
        }

        // Y velocity
        if (mPlayer.getYVelocity() < 0) {
            if (-mPlayer.getYVelocity() > Ship.sMaxMoveVelocity) {
                mPlayer.setYVelocity(-Ship.sMaxMoveVelocity);
            }
            mPlayer.setYVelocity(mPlayer.getYVelocity() + Ship.sMoveDeceleration * pDeltaTime);
            if (mPlayer.getYVelocity() > 0) {
                mPlayer.setYVelocity(0);
            }
        } else if (mPlayer.getYVelocity() > 0) {
            if (mPlayer.getYVelocity() > Ship.sMaxMoveVelocity) {
                mPlayer.setYVelocity(Ship.sMaxMoveVelocity);
            }
            mPlayer.setYVelocity(mPlayer.getYVelocity() - Ship.sMoveDeceleration * pDeltaTime);
            if (mPlayer.getYVelocity() < 0) {
                mPlayer.setYVelocity(0);
            }
        }

        // Angle velocity
        if (mPlayer.getAngleVelocity() < 0) {
            if (-mPlayer.getAngleVelocity() > Ship.sMaxAngleVelocity) {
                mPlayer.setAngleVelocity(-Ship.sMaxAngleVelocity);
            }
            mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() + Ship.sAngleDeceleration * pDeltaTime);
            if (mPlayer.getAngleVelocity() > 0) {
                mPlayer.setAngleVelocity(0);
            }
        } else if (mPlayer.getAngleVelocity() > 0) {
            if (mPlayer.getAngleVelocity() > Ship.sMaxAngleVelocity) {
                mPlayer.setAngleVelocity(Ship.sMaxAngleVelocity);
            }
            mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() - Ship.sAngleDeceleration * pDeltaTime);
            if (mPlayer.getAngleVelocity() < 0) {
                mPlayer.setAngleVelocity(0);
            }
        }

        mPlayer.setX(mPlayer.getX() + mPlayer.getXVelocity() * pDeltaTime);
        mPlayer.setY(mPlayer.getY() + mPlayer.getYVelocity() * pDeltaTime);
        mPlayer.setAngle(mPlayer.getAngle() + mPlayer.getAngleVelocity() * pDeltaTime);

    Why does the ship not behave like in Asteroids? What am I doing wrong?
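
    (For contrast, a minimal Asteroids-style update usually thrusts along the facing angle scaled by the frame time, applies one drag factor to the whole velocity vector, and clamps the vector's length rather than each axis separately. A Python sketch with field names of my own, not a correction of the code above:)

        import math

        def update_ship(ship, thrusting, dt, accel=2.0, drag=0.5, max_speed=5.0):
            if thrusting:
                ship.vx += math.cos(ship.angle) * accel * dt
                ship.vy += math.sin(ship.angle) * accel * dt
            # One drag factor decays the whole velocity vector.
            damp = max(0.0, 1.0 - drag * dt)
            ship.vx *= damp
            ship.vy *= damp
            # Clamp the speed (vector length), not each axis separately.
            speed = math.hypot(ship.vx, ship.vy)
            if speed > max_speed:
                ship.vx *= max_speed / speed
                ship.vy *= max_speed / speed
            ship.x += ship.vx * dt
            ship.y += ship.vy * dt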

  • How are these bullets done?

    - by Mike
    I really want to know how the bullets in Radiangames Inferno are done. The bullets seem like they are just billboard particles, but I am curious about how their tails are implemented. They can curve, so this means they are not just billboards. Also, they appear continuous, which implies that the tails are not made of a bunch of smaller particles (I think). Can anyone shed some light on this for me?

  • How to read a BC4 texture in GLSL?

    - by Question
    I'm supposed to receive a texture in BC4 format. In OpenGL, I guess this format is called GL_COMPRESSED_RED_RGTC1. The texture is not really a "texture", more like data to handle in the fragment shader. Usually, to get colors from a texture within a fragment shader, I do:

        uniform sampler2D TextureUnit;

        void main() {
            vec4 TexColor = texture2D(TextureUnit, vec2(gl_TexCoord[0]));
            (...)

    the result of which is obviously a vec4, for RGBA. But now I'm supposed to receive a single float from the read. I'm struggling to understand how this is achieved. Should I still use a texture sampler and expect the value to be in a specific position (for example, within TexColor.r?), or should I use something else?

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake 3 is handled using a key-binding relay, whereby each keypress is matched against a 'binding' which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing your input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command; this seems like a bit of a hack to bend the CLI into handling the game's input. Also, detecting the keypress only to add the command to a stream of text that gets parsed at a later date feels like a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe for journaling purposes? Thanks for any insights!

  • How do I get a rotated sprite to move left or right?

    - by rphello101
    Using Java/Slick 2D, I'm using the mouse to rotate a sprite on the screen and the directional keys (in this case, WASD) to move the sprite. Forwards and backwards is easy: just position += cos(ang)*speed or position -= cos(ang)*speed. But how do I get the sprite to move left or right? I'm thinking it has something to do with adding 90 degrees to the angle or something. Any ideas? Rotation code:

        int mX = Mouse.getX();
        int mY = HEIGHT - Mouse.getY();
        int pX = sprite.x + sprite.image.getWidth()/2;
        int pY = sprite.y + sprite.image.getHeight()/2;
        double mAng;
        if (mX != pX) {
            mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX));
            if (mAng == 0 && mX <= pX)
                mAng = 180;
        } else {
            if (mY > pY) mAng = 90;
            else mAng = 270;
        }
        sprite.angle = mAng;
        sprite.image.setRotation((float) mAng);

    And the movement code (delta is the change in time):

        Input input = gc.getInput();
        Vector2f direction = new Vector2f();
        Vector2f velocity = new Vector2f();
        direction.x = (float) Math.cos(Math.toRadians(sprite.angle));
        direction.y = (float) Math.sin(Math.toRadians(sprite.angle));
        if (direction.length() > 0)
            direction = direction.normalise(); // On a separate note, what does this line of code do?
        velocity.x = (float) (direction.x * sprite.moveSpeed);
        velocity.y = (float) (direction.y * sprite.moveSpeed);
        if (input.isKeyDown(sprite.up)) {
            sprite.x += velocity.x * delta;
            sprite.y += velocity.y * delta;
        }
        if (input.isKeyDown(sprite.down)) {
            sprite.x -= velocity.x * delta;
            sprite.y -= velocity.y * delta;
        }
        if (input.isKeyDown(sprite.left)) {
            // ???
        }
        if (input.isKeyDown(sprite.right)) {
            // ???
        }
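
    (For what it's worth, the strafe axis is just the facing vector rotated by 90 degrees; a tiny Python sketch of the arithmetic, names mine:)

        import math

        def move_vectors(angle_deg, speed):
            a = math.radians(angle_deg)
            forward = (math.cos(a) * speed, math.sin(a) * speed)
            # Rotating the facing angle by +90 degrees gives the strafe axis;
            # negate it (or use -90) for the opposite strafe direction.
            s = math.radians(angle_deg + 90)
            strafe = (math.cos(s) * speed, math.sin(s) * speed)
            return forward, strafe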

  • How can I locate empty space next to polygon regions?

    - by Stephen
    Let's say I have the following area in a top-down map: The circle is the player, the black square is an obstacle, and the grey polygons with red borders are walk-able areas that will be used as a navigation mesh for enemies. Obstacles and grey polygons are always convex. The grey regions were defined using an algorithm when the world was generated at runtime. Notice the little white column. I need to figure out where any empty space like this is, if at all, after the algorithm builds the grey regions, so that I can fill the space with another region. Basically what I'm hoping for is an algorithm that can detect empty space next to a polygon.
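
    (One hedged approach, assuming the regions and obstacles are available as polygon outlines: subtract their union from the overall walkable bounds, and whatever remains is the unfilled space. A sketch using the shapely library, offered as my own suggestion rather than anything from the thread:)

        from shapely.geometry import Polygon
        from shapely.ops import unary_union

        def find_gaps(bounds_coords, region_coord_lists, obstacle_coord_lists):
            """Return leftover polygons not covered by any region or obstacle."""
            bounds = Polygon(bounds_coords)
            covered = unary_union([Polygon(c) for c in region_coord_lists] +
                                  [Polygon(c) for c in obstacle_coord_lists])
            gaps = bounds.difference(covered)
            # difference() may return a single Polygon or a MultiPolygon.
            return list(gaps.geoms) if hasattr(gaps, "geoms") else [gaps]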

  • OpenGL-ES: clearing the alpha of the FrameBufferObject

    - by MrDatabase
    This question is a follow-up to "Texture artifacts on iPad". How does one "clear the alpha of the render texture frameBufferObject"? I've searched around here, StackOverflow, and various search engines, but no luck. I've tried a few things; for example, calling glClear(GL_COLOR_BUFFER_BIT) at the beginning of my render loop, but it doesn't seem to make a difference. Any help is appreciated since I'm still new to OpenGL. Cheers! P.S. I read on SO and in Apple's documentation that glClear should always be called at the beginning of the render loop. Agree? Disagree? Here's where I read this: http://stackoverflow.com/questions/2538662/how-does-glclear-improve-performance
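
    (One common technique for clearing only the alpha channel is to mask off the colour channels around a clear. A sketch using PyOpenGL function names, offered as an assumption about what "clear the alpha" means here; the same four GL calls exist in OpenGL ES:)

        from OpenGL.GL import (glColorMask, glClearColor, glClear,
                               GL_COLOR_BUFFER_BIT, GL_FALSE, GL_TRUE)

        def clear_alpha(alpha=0.0):
            # Write only to the alpha channel, clear, then restore the mask.
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE)
            glClearColor(0.0, 0.0, 0.0, alpha)
            glClear(GL_COLOR_BUFFER_BIT)
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)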

  • Is there an easy and automatic way of converting a Windows XNA project into a MonoTouch MonoGame project?

    - by Krumelur
    I have just started with XNA development on Windows, but as I'm a fan of iOS I had to try porting my test code over to MonoTouch on the Mac. I used these instructions: http://www.facepuncher.com/blogs/10parameters/?p=42. But this is so much (manual) work! And it really doesn't answer open questions like: why would I copy all the XNB files and, in addition, all the resources, like PNGs? Is there maybe a tool that automatically converts a Windows XNA project into a MonoTouch iOS project, or at least creates the correct folder structure?

  • How should I unbind and delete OpenAL buffers?

    - by Joe Wreschnig
    I'm using OpenAL to play sounds. I'm trying to implement a fire-and-forget play function that takes a buffer ID, assigns it to a source from a pool I have previously allocated, and plays it. However, there is a problem with object lifetimes. In OpenGL, delete functions either automatically unbind things (e.g. textures) or automatically delete the thing when it eventually is unbound (e.g. shaders), so it's usually easy to manage deletion. However, alDeleteBuffers instead simply fails with AL_INVALID_OPERATION if the buffer is still bound to a source. Is there an idiomatic way to "delete" OpenAL buffers that allows them to finish playing, and then automatically unbinds and really deletes them? Do I need to tie buffer management more deeply into the source pool (e.g. deleting a buffer requires checking all the allocated sources too)? Similarly, is there an idiomatic way to unbind (but not delete) buffers when they have finished playing? It would be nice if, when I was looking for a free source, I only needed to see whether a buffer was attached at all and not bother checking the source state. (I'm using C++, although approaches for C are also fine. Approaches assuming a GC'd language and using finalizers are probably not applicable.)

  • Texturing a quad with different parts of a texture

    - by PolGraphic
    I have a 2D quad. Let's say its position is (5,10) and its size is (7,11). I want to texture it with one texture, but using three different parts of it. I want to texture the part of the quad from x = 5 to x = 7 with the part of the texture from U = 0 to U = 0.5 (repeating it after reaching 0.5, so I will have four identical 0.5-length fragments). The second part should use some other part of the texture (also repeated), and the third in the same style. But how do I achieve this? I know that:

        float2 tc = fmod(input.TexCoord, textureCoordinates.zw - textureCoordinates.xy)
                    + textureCoordinates.xy;  // textureCoordinates.xy = fragment's offset

    will give me the repeating texture part.
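
    (A tiny Python illustration of what that fmod does, with made-up names: the running coordinate wraps inside the sub-range [x0, x1) of the atlas.)

        def sub_tile(u, x0, x1):
            """Wrap a running texture coordinate into the sub-range [x0, x1)."""
            return x0 + (u % (x1 - x0))

        # sub_tile(0.0, 0.0, 0.5) == 0.0; sub_tile(0.6, 0.0, 0.5) == 0.1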

  • What exactly is UV and UVW Mapping?

    - by Michael Stum
    Trying to understand some basic 3D concepts; at the moment I'm trying to figure out how textures actually work. I know that UV and UVW mapping are techniques that map 2D textures onto 3D objects. Wikipedia told me as much. I googled for explanations but only found tutorials that assumed I already know what it is. From my understanding, each 3D model is made out of points, and several points create a face? Does each point or face have a secondary coordinate that maps to an x/y position in the 2D texture? Or how does unwrapping manipulate the model? Also, what does the W in UVW really do? What does it offer over UV? As I understand it, W maps to the Z coordinate, but in what situation would I have different textures for the same X/Y and different Z? Wouldn't the Z part be invisible? Or am I completely misunderstanding this?
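
    (A concrete picture of the data involved, as a hypothetical sketch: each vertex stores a 2D texture coordinate alongside its 3D position, and the rasterizer interpolates both across the face.)

        # A textured unit quad: 3D position plus a UV into the 2D image,
        # where (0,0) and (1,1) are opposite corners of the texture.
        quad = [
            {"pos": (0, 0, 0), "uv": (0.0, 0.0)},
            {"pos": (1, 0, 0), "uv": (1.0, 0.0)},
            {"pos": (1, 1, 0), "uv": (1.0, 1.0)},
            {"pos": (0, 1, 0), "uv": (0.0, 1.0)},
        ]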

  • Quadtree collapsing

    - by Caius Eugene
    Okay, so I've spent a few days learning what a quadtree is and how to implement one. So far I have a quadtree that subdivides a leaf when I click inside it. I'm wondering how I get the previous subdivisions to collapse back up, so that only one area is subdivided at a time. This is what mine looks like: (1. initial mouse click) (2. another mouse click) The aim is to eventually track the position of my mouse and subdivide the area it is in dynamically. The overall aim is to use this to create a terrain mesh and subdivide based on the camera, but I've gone right back to basics to get an understanding of how this will work. Any advice would be grand! - Caius
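
    (A sketch of one way to do the collapsing, in Python with made-up names: re-evaluate the tree against the tracked point every frame, so any branch that no longer contains the point collapses automatically.)

        class QuadNode:
            def __init__(self, x, y, size, depth=0):
                self.x, self.y, self.size, self.depth = x, y, size, depth
                self.children = []

            def contains(self, px, py):
                return (self.x <= px < self.x + self.size and
                        self.y <= py < self.y + self.size)

            def update(self, px, py, max_depth):
                """Keep subdividing along the branch containing (px, py)."""
                if self.contains(px, py) and self.depth < max_depth:
                    if not self.children:
                        half = self.size / 2
                        d = self.depth + 1
                        self.children = [QuadNode(self.x,        self.y,        half, d),
                                         QuadNode(self.x + half, self.y,        half, d),
                                         QuadNode(self.x,        self.y + half, half, d),
                                         QuadNode(self.x + half, self.y + half, half, d)]
                    for child in self.children:
                        child.update(px, py, max_depth)
                else:
                    self.children = []  # collapse everything not under the point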

  • Ray Tracing Shadows in deferred rendering

    - by Grieverheart
    Recently I programmed a ray tracer for fun and found it beautifully simple how shadows are created compared to a rasterizer. Now, I couldn't help but think about whether it would be possible to implement something similar for ray-traced shadows in a deferred renderer. The way I thought this could work is, after drawing to the g-buffer, in a separate pass and for each pixel, to calculate rays to the lights and draw them as lines of unique color together with the geometry (with color 0). The lines will be cut off if there is occlusion, and this fact could be used in a fragment shader to calculate which rays are occluded. I guess there must be something I'm missing; for example, I'm not sure how the fragment shader could save the occlusion results for each ray so that they are available to the pixel at the ray's origin. Has this method been tried before? Is it possible to implement it as I described, and if so, what would be the performance drawbacks of calculating shadows this way?

  • Only draw objects visible to the camera in 2D

    - by Deukalion
    I have Map; each map has an array of Ground, and each Ground consists of an array of VertexPositionTexture plus a texture name reference, so it renders a texture at these points (as a shape, through triangulation). Now when I render my map I only want to get a list of the objects that are visible to the camera (so I won't loop through more than I have to). Structs:

        public struct Map {
            public Ground[] Ground { get; set; }
        }

        public struct Ground {
            public int[] Indexes { get; set; }
            public VertexPositionNormalTexture[] Points { get; set; }
            public Vector3 TopLeft { get; set; }
            public Vector3 TopRight { get; set; }
            public Vector3 BottomLeft { get; set; }
            public Vector3 BottomRight { get; set; }
        }

        public struct RenderBoundaries<T> {
            public BoundingBox Box;
            public T Items;
        }

    When I load a map:

        foreach (Ground ground in CurrentMap.Ground) {
            Boundaries.Add(new RenderBoundaries<Ground>() {
                Box = BoundingBox.CreateFromPoints(new Vector3[] {
                    ground.TopLeft, ground.TopRight, ground.BottomLeft, ground.BottomRight }),
                Items = ground
            });
        }

    TopLeft, TopRight, BottomLeft, BottomRight are simply the locations of each corner of the shape: a rectangle. When I try to loop through only the visible objects, I do this in my Draw method:

        public int Draw(GraphicsDevice device, ICamera camera) {
            BoundingFrustum frustum = new BoundingFrustum(camera.View * camera.Projection);
            // Visible count
            int count = 0;
            EffectTexture.World = camera.World;
            EffectTexture.View = camera.View;
            EffectTexture.Projection = camera.Projection;
            foreach (EffectPass pass in EffectTexture.CurrentTechnique.Passes) {
                pass.Apply();
                foreach (RenderBoundaries<Ground> render in
                         Boundaries.Where(m => frustum.Contains(m.Box) != ContainmentType.Disjoint)) {
                    // Draw ground
                    count++;
                }
            }
            return count;
        }

    When I try adding just one ground and then moving the camera so the ground is out of frame, it still returns 1, which means it still gets drawn even though it's not within the camera's view. Am I doing something wrong, or could it be because of my camera? Any ideas why it doesn't work?

  • Can't click on a button with startDrag() active on stage

    - by Pedro
    I need to know how I can enable mouse clicks on a button when I have a MouseEvent listener on the stage. I have a MovieClip associated with the mouse cursor:

        Mouse.hide();
        scope.startDrag(true);

    And a MouseEvent listener on the stage:

        stage.addEventListener(MouseEvent.CLICK, FunctionXYZ);

    When I try to click on any button, they don't trigger the functions I created for those buttons; for example, buttons for fullscreen, exit, help, etc. Thank you very much. BR, Pedro

  • How to move the object around the screen

    - by Abhishek
    I am trying to move an object around the screen. I tried this code:

        -(void) move {
            CGFloat upperLimit = mWinSize.height - (mGunda.contentSize.height / 2.0);
            CGFloat upperLimit1 = mWinSize.height;
            CGFloat lowerLimit = (mGunda.contentSize.height / 2.0);
            CGFloat RightLimit = mWinSize.width - (mGunda.contentSize.width / 2.0);
            CGFloat Right = (mGunda.contentSize.width / 2.0);

            if (mImageGoingUpward) {
                mGunda.position = ccp(mGunda.position.x, mGunda.position.y + 5);
                if (mGunda.position.y >= upperLimit) {
                    mImageGoingUpward = NO;
                    mHori = NO;
                }
            } else {
                mGunda.position = ccp(mGunda.position.x, mGunda.position.y - 5);
                if (mGunda.position.y <= lowerLimit) {
                    mGunda.position = ccp(mGunda.position.x + 5, lowerLimit);
                }
                if (mGunda.position.x >= RightLimit) {
                    mGunda.position = ccp(mGunda.position.x, mGunda.position.y + 10);
                    mHori = YES;
                }
                if (mHori) {
                    if (mGunda.position.y >= upperLimit) {
                        mGunda.position = ccp(mGunda.position.x - 5, mGunda.position.y);
                    }
                }
            }
        }

    It moves the object from bottom to top, top to bottom, bottom to right, and right up to the top right of the screen. The problem I have is that it does not move from the top right to the left side of the screen; that part of the circuit does not happen. How can I do this?
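
    (For comparison, a common way to get a full circuit without one flag per edge is to chase a list of corner waypoints. A small Python sketch with hypothetical names, assuming integer pixel coordinates and an integer speed so arrivals are exact:)

        def patrol_step(pos, waypoints, index, speed=5):
            """Move pos toward waypoints[index]; advance to the next corner on arrival."""
            tx, ty = waypoints[index]
            x, y = pos
            x += max(-speed, min(speed, tx - x))   # clamped step lands exactly on tx
            y += max(-speed, min(speed, ty - y))
            if (x, y) == (tx, ty):
                index = (index + 1) % len(waypoints)  # wrap around the circuit
            return (x, y), index

        # corners of the screen rectangle, visited in order, forever:
        # waypoints = [(0, 0), (960, 0), (960, 600), (0, 600)]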

  • OpenGL font rendering

    - by DEElekgolo
    I am trying to make an OpenGL text rendering class using FreeType. I was originally following this code, but it doesn't seem to work out for me. I get nothing regardless of what parameters I pass to Draw().

        class Font {
        public:
            Font() {
                if (FT_Init_FreeType(&ftLibrary)) {
                    printf("Could not initialize FreeType library\n");
                    return;
                }
                glGenBuffers(1, &iVerts);
            }

            bool Load(std::string sFont, unsigned int Size = 12.0f) {
                if (FT_New_Face(ftLibrary, sFont.c_str(), 0, &ftFace)) {
                    printf("Could not open font: %s\n", sFont.c_str());
                    return true;
                }
                iSize = Size;
                FT_Set_Pixel_Sizes(ftFace, 0, (int)iSize);
                FT_GlyphSlot gGlyph = ftFace->glyph;

                // Generating the texture atlas.
                // Rather than some amazing rectangular packing method, I'm just going
                // to have one long strip of letters with the height being that of the font size.
                int width = 0;
                int height = 0;
                for (int i = 32; i < 128; i++) {
                    if (FT_Load_Char(ftFace, i, FT_LOAD_RENDER)) {
                        printf("Error rendering letter %c for font %s.\n", i, sFont.c_str());
                    }
                    width += gGlyph->bitmap.width;
                    height += std::max(height, gGlyph->bitmap.rows);
                }

                // Generate the openGL texture.
                glActiveTexture(GL_TEXTURE0);
                // If a texture exists then delete it.
                iTexture ? glDeleteBuffers(1, &iTexture) : 0;
                glGenTextures(1, &iTexture);
                glBindTexture(GL_TEXTURE_2D, iTexture);
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
                             GL_ALPHA, GL_UNSIGNED_BYTE, 0);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

                // Load the glyphs and set the glyph data.
                int x = 0;
                for (int i = 32; i < 128; i++) {
                    if (FT_Load_Char(ftFace, i, FT_LOAD_RENDER)) {
                        // If it can't load the character.
                        continue;
                    }
                    // Load the glyph map into the texture.
                    glTexSubImage2D(GL_TEXTURE_2D, 0, x, 0,
                                    gGlyph->bitmap.width, gGlyph->bitmap.rows,
                                    GL_ALPHA, GL_UNSIGNED_BYTE, gGlyph->bitmap.buffer);
                    // Move the "pen" down the strip.
                    x += gGlyph->bitmap.width;
                    chars[i].ax = (float)(gGlyph->advance.x >> 6);
                    chars[i].ay = (float)(gGlyph->advance.y >> 6);
                    chars[i].bw = (float)gGlyph->bitmap.width;
                    chars[i].bh = (float)gGlyph->bitmap.rows;
                    chars[i].bl = (float)gGlyph->bitmap_left;
                    chars[i].bt = (float)gGlyph->bitmap_top;
                    chars[i].tx = (float)x / width;
                }
                printf("Loaded font: %s\n", sFont.c_str());
                return true;
            }

            void Draw(std::string sString, Vector2f vPos = Vector2f(0,0), Vector2f vScale = Vector2f(1,1)) {
                struct pPoint {
                    pPoint() { x = y = s = t = 0; }
                    pPoint(float a, float b, float c, float d) { x = a; y = b; s = c; t = d; }
                    float x, y;
                    float s, t;
                };
                pPoint* cCoordinates = new pPoint[6 * sString.length()];
                int n = 0;
                for (const char *p = sString.c_str(); *p; p++) {
                    float x2 =  vPos.x() + chars[*p].bl * vScale.x();
                    float y2 = -vPos.y() - chars[*p].bt * vScale.y();
                    float w = chars[*p].bw * vScale.x();
                    float h = chars[*p].bh * vScale.y();
                    float x = vPos.x() + chars[*p].ax * vScale.x();
                    float y = vPos.y() + chars[*p].ay * vScale.y();
                    // Skip characters with no pixels (still advances though).
                    if (!w || !h) {
                        continue;
                    }
                    // Triangle one.
                    cCoordinates[n++] = pPoint(x2,     -y2,     chars[*p].tx,                    0);
                    cCoordinates[n++] = pPoint(x2 + w, -y2,     chars[*p].tx + chars[*p].bw / w, 0);
                    cCoordinates[n++] = pPoint(x2,     -y2 - h, chars[*p].tx,                    chars[*p].bh / h);
                    cCoordinates[n++] = pPoint(x2 + w, -y2,     chars[*p].tx + chars[*p].bw / w, 0);
                    cCoordinates[n++] = pPoint(x2,     -y2 - h, chars[*p].tx,                    chars[*p].bh / h);
                    cCoordinates[n++] = pPoint(x2 + w, -y2 - h, chars[*p].tx + chars[*p].bw / w, chars[*p].bh / h);
                }
                glBindBuffer(GL_ARRAY_BUFFER, iVerts);
                glBindBuffer(GL_TEXTURE_2D, iTexture);
                // Vertices
                glEnableClientState(GL_VERTEX_ARRAY);
                glVertexPointer(2, GL_FLOAT, sizeof(pPoint), &cCoordinates[0].x);
                // TexCoord 0
                glClientActiveTexture(GL_TEXTURE0);
                glEnableClientState(GL_TEXTURE_COORD_ARRAY);
                glTexCoordPointer(2, GL_FLOAT, sizeof(pPoint), &cCoordinates[0].s);
                glCullFace(GL_NONE);
                glBufferData(GL_ARRAY_BUFFER, 6 * sString.length(), cCoordinates, GL_DYNAMIC_DRAW);
                glDrawArrays(GL_TRIANGLES, 0, n);
                glCullFace(GL_BACK);
                glBindBuffer(GL_ARRAY_BUFFER, 0);
                glBindBuffer(GL_TEXTURE_2D, 0);
                glDisableClientState(GL_VERTEX_ARRAY);
                glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            }

            ~Font() {
                glDeleteBuffers(1, &iVerts);
                glDeleteBuffers(1, &iTexture);
            }

        private:
            unsigned int iSize;
            // openGL texture atlas
            unsigned int iTexture;
            // openGL geometry buffer
            unsigned int iVerts;
            FT_Library ftLibrary;
            FT_Face ftFace;

            struct Character {
                float ax, ay;  // Advance
                float bw, bh;  // bitmap size
                float bl, bt;  // bitmap left and top
                float tx;
            } chars[128];
        };

  • What method replaces GL_SELECT for box selection?

    - by Jake
    Over the last two weeks I started working on a box selection that selects shapes using GL_SELECT, and I just got it working. When looking up resources online, a significant number of posts say that GL_SELECT is deprecated in OpenGL 3.0, but there is no mention of what replaced that functionality. I learned OpenGL 1.2 back in college two years ago, but checking Wikipedia now I realise we already have OpenGL 4.0, and I am unsure of what I need to do to keep myself up to date. So, in the meantime, what would be the preferred modern method for box selection? EDIT: I found http://www.khronos.org/files/opengl-quick-reference-card.pdf; on page 5, this card still lists glRenderMode(GL_SELECT) as part of the OpenGL 3.2 reference.
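
    (One widely used replacement is colour picking: render each pickable shape in a flat, unique "ID colour" in an offscreen pass, then read the selection rectangle back and collect the IDs that appear. A hedged PyOpenGL sketch; the exact return type of glReadPixels varies by binding, hence the bytearray conversion:)

        from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE

        def ids_in_box(x, y, w, h):
            """Collect unique object IDs inside a selection rectangle."""
            pixels = glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE)
            data = bytearray(pixels)
            ids = set()
            for i in range(0, len(data), 3):
                ids.add(data[i] | (data[i + 1] << 8) | (data[i + 2] << 16))
            ids.discard(0)  # assume 0 is the clear colour / background
            return ids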

  • Avoid if statements in DirectX 10 shaders?

    - by PolGraphic
    I have heard that if statements should be avoided in shaders, because both branches of the statement will be executed and then the wrong one will be dropped (which harms performance). Is this still a problem in DirectX 10? Somebody told me that in it, only the correct branch will be executed. To illustrate, I have this code:

        float y1 = 5;
        float y2 = 6;
        float b1 = 2;
        float b2 = 3;

        if (x > 0.5) {
            x = 10 * y1 + b1;
        } else {
            x = 10 * y2 + b2;
        }

    Is there another way to make it faster? If so, how do I do it? Both branches look similar; the only difference is the values of the "constants" (y1, y2, b1, b2 are the same for all pixels in the pixel shader).
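
    (For reference, one branchless rewrite computes both results and selects between them; sketched here in Python, where t plays the role of HLSL's step(0.5, x) and the last line is lerp(). Whether this actually beats the compiler's own handling is hardware- and compiler-dependent:)

        def branchless(x, y1=5.0, y2=6.0, b1=2.0, b2=3.0):
            t = 1.0 if x > 0.5 else 0.0   # step(0.5, x) in shader terms
            a = 10 * y1 + b1
            b = 10 * y2 + b2
            return b + t * (a - b)        # lerp(b, a, t): both sides computed, no branch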

  • Velocity control of the player, why doesn't this work?

    - by Dominic Grenier
    I have the following code inside a while True loop:

        if abs(playerx) < MAXSPEED:
            if moveLeft:
                playerx -= 1
            if moveRight:
                playerx += 1
        if abs(playery) < MAXSPEED:
            if moveDown:
                playery += 1
            if moveUp:
                playery -= 1
        if moveLeft == False and abs(playerx) > 0:
            playerx += 1
        if moveRight == False and abs(playerx) > 0:
            playerx -= 1
        if moveUp == False and abs(playery) > 0:
            playery += 1
        if moveDown == False and abs(playery) > 0:
            playery -= 1

        player.x += playerx
        player.y += playery

        if player.left < 0 or player.right > 1000:
            player.x -= playerx
        if player.top < 0 or player.bottom > 600:
            player.y -= playery

    The intended result is that while an arrow key is pressed, playerx or playery increments by one at every iteration until it reaches MAXSPEED and stays at MAXSPEED, and that when the player stops pressing that arrow key, the speed decreases until it reaches 0. To me, this code explicitly says that. But what actually happens is that playerx or playery keeps incrementing regardless of MAXSPEED and continues moving even after the player stops pressing the arrow key. I keep rereading, but I'm completely baffled by this weird behavior. Any insights? Thanks.
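
    (As a point of comparison rather than a diagnosis: a per-axis speed update is often written so that acceleration and friction can never both fire on the same tick, and so the clamp is re-applied after accelerating. A minimal Python sketch with my own names:)

        def step_axis(v, key_dir, max_speed):
            """key_dir is -1, 0 or +1; accelerate while held, else decay to zero."""
            if key_dir != 0:
                v += key_dir                              # accelerate
                v = max(-max_speed, min(max_speed, v))    # clamp after accelerating
            elif v > 0:
                v -= 1                                    # friction, only when idle
            elif v < 0:
                v += 1
            return v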

  • Converting an image to a spritesheet of tiles for an isometric map?

    - by Paul
    Is there a way to convert an isometric image (like the first image) to a spritesheet (like the second image), in order to place each image on the isometric map in code? The map looks like the first image, but some buildings are bigger than just one tile, so I need several squares (let's say the first image is a building, made of multiple tiles with different colors), and each square is placed with an offset of 64x32. The building is created in Blender and I save the image with the isometric perspective, but I have to split each square out of this image in order to have the spritesheet. Maybe there is a smarter way, or Java software that would make the conversion for me?
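
    (If slicing the render into a regular grid of cells is acceptable, a few lines of Python with Pillow will do it. A sketch with an assumed 64x32 cell size; note it ignores the overlap that true isometric diamond tiles have, so it suits grid-packed sheets like the second image:)

        from PIL import Image

        def slice_tiles(path, tile_w=64, tile_h=32):
            """Cut a sheet into a list of tile images, left-to-right, top-to-bottom."""
            sheet = Image.open(path)
            cols = sheet.width // tile_w
            rows = sheet.height // tile_h
            return [sheet.crop((c * tile_w, r * tile_h,
                                (c + 1) * tile_w, (r + 1) * tile_h))
                    for r in range(rows) for c in range(cols)]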
