Search Results

Search found 12894 results on 516 pages for 'game kit'.


  • What could cause a pixel shader to paint outside the lines of the vertex shader output?

    - by Rei Miyasaka
    From what I understand, the pixels that a pixel shader operates on are specified implicitly by the SV_POSITION output (in DirectX) of the vertex shader. What, then, could cause a pixel shader to render in the middle of nowhere? I used the new Visual Studio 2012 graphics debugger to visualize my vertex and pixel shader output. This is the output from a DrawIndexed() call that draws a cube. The pink part is the rendered output of the pixel shader, which takes the cube on its left as its input. The vertex shader code:

        cbuffer Buf
        {
            float4x4 final;
        };

        struct In
        {
            float4 pos:POSITION;
            float3 norm:NORMAL;
            float2 texuv:TEXCOORD;
        };

        struct Out
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        Out main(In input)
        {
            Out output;
            output.pos = mul(input.pos, final);
            output.col = float4(1.0f, 0.5f, 0.5f, 1.0f);
            output.tex = input.texuv;
            return output;
        }

    And the pixel shader:

        struct In
        {
            float4 col:COLOR;
            float2 tex:TEXCOORD;
            float4 pos:SV_POSITION;
        };

        float4 main(In input) : SV_TARGET
        {
            return input.col;
        }

    The raster stage is the only thing between the vertex shader and the pixel shader, so my suspicion is that it's some raster-stage setting. But the raster stage shouldn't change the shape of the vertex shader output so drastically, should it?

    Read the article

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently I've been trying to make sense of implementing a chunked level-of-detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.

    1. I can't understand at which point down the chunked LOD pipeline the mesh gets split into chunks. Does this happen during the initial mesh generation, or is there a separate algorithm that does it?
    2. I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level?
    3a. How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case, would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this (a raycast, maybe)?
    3b. Do the chunks calculate the camera distance themselves?
    4. Does each chunk have the same "resolution"? For example, at the top level the mesh will be 32x32; will each subdivided node also be 32x32? Example below:
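    A minimal sketch of how a chunked-LOD quadtree is often traversed, offered as an illustration rather than a canonical implementation. It assumes each node owns a pre-generated fixed-resolution mesh (which matches question 4: typically every level is the same 32x32 grid, just covering a smaller area) and a world-space AABB; ChunkNode, GeometricError and errorThreshold are made-up names:

        using System.Collections.Generic;
        using UnityEngine;

        // Hypothetical chunked-LOD quadtree node; not part of any Unity API.
        public class ChunkNode
        {
            public Bounds Bounds;          // world-space AABB of this chunk
            public float GeometricError;   // how far this LOD deviates from full detail
            public ChunkNode[] Children;   // null at the deepest subdivision
            public Mesh Mesh;              // same 32x32 grid at every level

            // Collect the nodes to render: stop refining once a node's geometric
            // error is small enough relative to its distance from the camera.
            public void Select(Vector3 cameraPos, float errorThreshold, List<ChunkNode> visible)
            {
                float distance = Mathf.Max(0.001f,
                    Vector3.Distance(cameraPos, Bounds.ClosestPoint(cameraPos)));

                if (Children == null || GeometricError / distance < errorThreshold)
                    visible.Add(this);                       // coarse enough from here
                else
                    foreach (ChunkNode child in Children)
                        child.Select(cameraPos, errorThreshold, visible);
            }
        }

    This also suggests an answer to 3a/3b: the "camera distance" is plain math against each node's AABB, computed during the traversal, so no colliders or raycasts are needed.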

    Read the article

  • What method replaces GL_SELECT for box selection?

    - by Jake
    For the last two weeks I've been working on a box selection that selects shapes using GL_SELECT, and I finally just got it working. When looking up resources online, a significant number of posts say that GL_SELECT is deprecated as of OpenGL 3.0, but there is no mention of what replaced it. I learned OpenGL 1.2 back in college two years ago, but checking Wikipedia now I realise we're already at OpenGL 4.0, and I'm not sure what I need to do to keep myself up to date. So, in the meantime, what is the preferred modern method for box selection?

    EDIT: I found http://www.khronos.org/files/opengl-quick-reference-card.pdf; on page 5 this card still lists glRenderMode(GL_SELECT) as part of the OpenGL 3.2 reference.
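    As a hedged sketch of one common replacement (not the only one): do the selection on the CPU by projecting each candidate's position, or its bounding-box corners, into window coordinates with the same view/projection matrices used for rendering, then test against the selection rectangle. The example below uses XNA's Viewport.Project purely because it packages that math; the Selectable type is an assumption:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Selectable { public Vector3 Position; }

        public static class BoxSelect
        {
            // Returns every object whose projected center lies inside the
            // screen-space selection rectangle.
            public static List<Selectable> Pick(IEnumerable<Selectable> objects,
                Rectangle selection, Viewport viewport, Matrix view, Matrix projection)
            {
                var hits = new List<Selectable>();
                foreach (var obj in objects)
                {
                    // The same transform the GPU applies, done on the CPU.
                    Vector3 screen = viewport.Project(obj.Position, projection, view, Matrix.Identity);
                    if (screen.Z >= 0f && screen.Z <= 1f &&           // in front of the camera
                        selection.Contains((int)screen.X, (int)screen.Y))
                        hits.Add(obj);
                }
                return hits;
            }
        }

    The other widely used approach is color picking: render each object in a unique flat color to an offscreen buffer and read back the pixels under the selection rectangle.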

    Read the article

  • Is there an easy and automatic way of converting a Windows XNA project into a Monotouch Monogame project?

    - by Krumelur
    I have just started with XNA development on Windows, but as I'm a fan of iOS I had to try porting my test code over to MonoTouch on the Mac. I used these instructions: http://www.facepuncher.com/blogs/10parameters/?p=42 But this is so much (manual) work! And it really doesn't answer open questions like: why would I copy all the XNB files and, in addition, all the resources, like PNGs? Is there maybe a tool that automatically converts a Windows XNA project into a MonoTouch iOS project, or at least creates the correct folder structure?

    Read the article

  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question, I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills, which should explain the "missing cubes" in the image below. If I render my density map in normal "minecraft mode" (1 block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this:

    I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image.

    Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values. I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable, containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that given a very low isovalue, the accuracy of the surface being rendered should increase, so in short "fewer 45-degree slopes, more rolling hills" type mesh output. However this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of a fix?

    EDIT: A little more detail on what I am seeing when I "marching cube" the data. Firstly, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too... square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet in the marching-cubes image there are sharp falling edges even in the most central areas, where the gradient in the first image looks the most smooth. The data is "regenerated" each time I run this, so no two islands come out the same, and it's purely random, not based on noise; but still, how can it look so smooth in one image and so unsmooth in the other?
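    For reference, the "move vertex along the edge" step is normally a single linear interpolation per cube edge; an implementation that effectively places the vertex at the fixed midpoint of the edge will look identical for boolean and float fields, which matches the symptom described. A sketch of the standard interpolation (after Paul Bourke's well-known formulation); names are illustrative:

        using Microsoft.Xna.Framework;

        public static class MarchingCubesInterp
        {
            // Place the vertex where the density field crosses the isolevel,
            // rather than at the midpoint of the edge. p1/p2 are the edge's
            // corner positions, v1/v2 the densities sampled at those corners.
            public static Vector3 VertexInterp(float isolevel, Vector3 p1, Vector3 p2,
                                               float v1, float v2)
            {
                if (System.Math.Abs(v2 - v1) < 1e-6f)
                    return p1;                          // degenerate edge: equal values
                float t = (isolevel - v1) / (v2 - v1);  // 0 at p1, 1 at p2
                return Vector3.Lerp(p1, p2, MathHelper.Clamp(t, 0f, 1f));
            }
        }

    With boolean input (v1 and v2 only ever 0 or 1), t is the same constant on every crossing edge, so every vertex sits at the same fraction along its edge and the mesh comes out blocky no matter how smooth the underlying field is.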

    Read the article

  • Cool examples of procedural pixel shader effects?

    - by Robert Fraser
    What are some good examples of procedural/screen-space pixel shader effects? No code necessary; just looking for inspiration. In particular, I'm looking for effects that are not dependent on geometry or the rest of the scene (would look okay rendered alone on a quad) and are not image processing (don't require a "base image", though they can incorporate textures). Multi-pass or single-pass is fine. Screenshots or videos would be ideal, but ideas work too. Here are a few examples of what I'm looking for (all from the RenderMonkey samples). PS - I'm aware of this question; I'm not asking for a source of actual shader implementations but instead for some inspirational ideas -- and the ones at the NVIDIA Shader Library mostly require a scene or are image-processing effects. EDIT: this is an open-ended question and I wish there were a good way to split the bounty. I'll award the rep to the best answer on the last day.

    Read the article

  • Calculate an AABB for bone animated model

    - by Byte56
    I have a model whose initial bounding box is calculated by finding the maximum and minimum on the x, y and z axes, producing a correct result like so: The vertices are then stored in a VBO and only altered with matrices for rotation and bone animation. Currently the bounds are not updated when the model is altered, so the animated and rotated model has bounds like so: (Maybe it's hard to tell, but the bounds are the same as before, and don't accurately represent the rotated/animated model.) So my question is: how can I calculate the bounding box using the armature matrices and rotation/translation matrices for each model? Keep in mind the modified vertex data is not available, because those calculations are performed on the GPU in the shader. The end result I want is an accurate AABB that represents the animated model for picking/basic collision checks.
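    One CPU-side approach that avoids reading back skinned vertices: precompute, once, a bind-pose AABB per bone around the vertices that bone influences; each frame, transform those boxes' corners by the bone's current matrix and merge the results. It overestimates slightly (the real skin is a blend of bones) but is cheap and conservative. A sketch with XNA types; the per-bone boxes are assumed to exist already:

        using Microsoft.Xna.Framework;

        public static class SkinnedBounds
        {
            // Approximate AABB of an animated model from per-bone bind-pose
            // boxes and the bones' current world matrices.
            public static BoundingBox Compute(BoundingBox[] bindPoseBoneBoxes,
                                              Matrix[] boneWorldMatrices)
            {
                BoundingBox result = new BoundingBox();
                for (int i = 0; i < bindPoseBoneBoxes.Length; i++)
                {
                    Vector3[] corners = bindPoseBoneBoxes[i].GetCorners();
                    for (int c = 0; c < corners.Length; c++)
                        corners[c] = Vector3.Transform(corners[c], boneWorldMatrices[i]);

                    BoundingBox boneBox = BoundingBox.CreateFromPoints(corners);
                    result = (i == 0) ? boneBox : BoundingBox.CreateMerged(result, boneBox);
                }
                return result;
            }
        }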

    Read the article

  • Why does multiplying texture coordinates scale the texture?

    - by manning18
    I'm having trouble visualizing this geometrically: why does multiplying the U,V coordinates of a texture coordinate have the effect of scaling that texture by that factor? E.g. if you scaled the texture coordinates by a factor of 3, then doesn't this mean that if you had texture coordinates 0.1 and 0.2, you'd be sampling 0.3 and 0.6 in the U,V texture space of 0..1? How does that make it bigger? E.g. in HLSL:

        tex2D(textureSampler, TexCoords * 3)

    Integers make it smaller, decimals make it bigger. I mean, I understand intuitively if you added to the U,V coordinates, as that is simply an offset into the sampling range, but what's the case with multiplication? I have a feeling that when someone explains this to me I'm going to be feeling mighty stupid.
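    A worked example of why the multiplier shrinks the texture, assuming the sampler wraps coordinates outside [0, 1] (the common default):

        u' = 3u,   u \in [0, 1]   \Rightarrow   u' \in [0, 3]

    Sweeping across the quad once sweeps the texture three times: frac(u') runs from 0 to 1 three times, so three full copies of the texture get packed into the quad and each copy is one third the size. Conversely, u' = u/3 covers only a third of the texture across the whole quad, which magnifies it. The multiplier scales the sampling window, not the image, which is why the effect runs "backwards" from intuition.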

    Read the article

  • Cropping a line (laser beam) in XNA

    - by electroflame
    I have a laser sprite that I wish to crop. When it collides with an item, I want to calculate the distance between the starting point and the ending point, and only draw that much of it. This eliminates the "overdraw" of a laser drawing past an item. Essentially, I'm trying to crop a line, but also keep that line "attached" to the nose of my ship: the line should not be drawn past the nose of my ship, which should be its starting point. There is no rotation to worry about. Currently, I thought that doing this through SpriteBatch would be best. This is my current SpriteBatch code:

        spriteBatch.Draw(
            Laser.sprite,
            new Rectangle((int)Laser.position.X, (int)Laser.position.Y,
                          Laser.sprite.Width, LaserHeight),
            new Rectangle(0, 0, (int)(Laser.sprite.Width), LaserHeight),
            new Color(255, 255, 255, (byte)MathHelper.Clamp(Laser.Alpha, 0, 255)),
            Laser.rotation,
            new Vector2(Laser.sprite.Width / 2, LaserHeight / 2),
            SpriteEffects.None,
            0);

    But this doesn't quite work. It does draw only part of the sprite, but when LaserHeight is incremented, it lengthens the line in both directions! I believe this is due to some stupid error on my part with the origin of the draw. Quick recap: I need to have my laser sprite drawn with its bottom at the nose of my ship, and then use LaserHeight to crop the image so only part of it is drawn. I have a feeling my explanation is a bit... lacking, so if you require more information, please say so and I will try to provide more. Thanks in advance!
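    For what it's worth, here is a hedged sketch of a draw call that anchors the beam at the ship's nose and grows only upward. The key detail: SpriteBatch's origin is specified in source-rectangle space, so making it the bottom-center of the cropped slice lets position stay fixed at the nose while LaserHeight changes. Names follow the snippet above:

        // Crop LaserHeight pixels from the bottom of the sprite and pin that
        // slice's bottom-center to the ship's nose (Laser.position).
        Rectangle source = new Rectangle(
            0, Laser.sprite.Height - LaserHeight,          // bottom slice of the sprite
            Laser.sprite.Width, LaserHeight);

        spriteBatch.Draw(
            Laser.sprite,
            Laser.position,                                // nose of the ship
            source,
            new Color(255, 255, 255, (byte)MathHelper.Clamp(Laser.Alpha, 0, 255)),
            Laser.rotation,
            new Vector2(Laser.sprite.Width / 2f, LaserHeight),  // bottom-center of the slice
            1f,                                            // scale
            SpriteEffects.None,
            0f);

    Because the origin sits at the bottom of the slice, incrementing LaserHeight extends the beam away from the nose only, instead of in both directions.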

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games -- something the player downloads and runs on their local computer. Many are the memory editors that allow you to detect and freeze values, like your player's health. How do you prevent cheating via memory modification? What strategies are effective to combat this kind of cheating? For reference, I know that players can:

    - Search for something by value or range
    - Search for something that changed value
    - Set memory values
    - Freeze memory values

    I'm looking for some good strategies. Two I use that are mediocre are:

    - Displaying values as a percentage instead of the number (e.g. 46/50 = 92% health)
    - A low-level class that holds values in an array and moves them with each change. (For example, instead of an int, I have a class that's an array of ints, and whenever the value changes, I use a different, randomly chosen array item to hold the value.)
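    A third cheap-but-imperfect strategy in the same family: never keep the plain value in memory at all; store it XOR'd against a random mask so a scan for the on-screen number finds nothing, and re-mask on every write so freezing the stored bits corrupts the value rather than pinning it. A minimal sketch (single-threaded use assumed; the type name is made up):

        using System;

        // Holds an int XOR'd with a random mask so the raw value never
        // appears in memory; a scan for "46" (current health) finds nothing.
        public struct ObfuscatedInt
        {
            private static readonly Random Rng = new Random();
            private int _mask;
            private int _masked;

            public int Value
            {
                get { return _masked ^ _mask; }
                set
                {
                    _mask = Rng.Next();       // fresh mask on every write, so the
                    _masked = value ^ _mask;  // stored bits change even when the
                }                             // logical value stays the same
            }
        }

    Usage: var hp = new ObfuscatedInt { Value = 46 }; reading hp.Value restores the real number. A determined attacker can still reverse this, but it defeats plain value searches and naive freezing.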

    Read the article

  • Render an image with separate layers for shadows/reflections in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on a ground in the center. Caused by lights and the ground material, there is some shadow and reflection on the ground surrounding the object. How can I render an image containing three separate layers for:

    - the object
    - the ground
    - the reflection / shadow on the ground

    Which format should I use for this (it should include all three layers, and I should be able to enable/disable them in Photoshop)? And how do I define or prepare those layers to be rendered as image layers?

    Read the article

  • How to change speed without changing path travelled?

    - by Ben Williams
    I have a ball which is being thrown from one side of a 2D space to the other. The formula I am using for calculating the ball's position at any one point in time is:

        x = x0 + vx0*t
        y = y0 + vy0*t - 0.5*g*t*t

    where g is gravity, t is time, x0 is the initial x position, and vx0 is the initial x velocity. What I would like to do is change the speed of this ball without changing how far it travels. Let's say the ball starts in the lower left corner, moves upwards and rightwards in an arc, finishes in the lower right corner, and this takes 5 s. What I would like to be able to do is change this so it takes 10 s or 20 s, but the ball still follows the same curve and finishes in the same position. How can I achieve this? All I can think of is manipulating t, but I don't think that's a good idea. I'm sure it's something simple, but my maths is pretty shaky.
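    For reference, a worked substitution (my own, not from the original post): the shape of the path depends only on the ratio vy0/vx0 and on g/vx0^2, so slowing the flight by a factor k (k = 2 turns 5 s into 10 s) amounts to dividing both initial velocity components by k and dividing gravity by k^2. The new trajectory passes through exactly the same points, just k times later:

        x'(t) = x0 + (vx0/k)*t                    =>  x'(k*t) = x0 + vx0*t             = x(t)
        y'(t) = y0 + (vy0/k)*t - 0.5*(g/k^2)*t^2  =>  y'(k*t) = y0 + vy0*t - 0.5*g*t^2 = y(t)

    So this is manipulating t, but in a disciplined way: keep the same equations, scale velocities by 1/k and gravity by 1/k^2, and nothing about the curve changes except how fast it is traced.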

    Read the article

  • Inverting matrix then decomposing gives different quaternion than decomposing then inverting the quat

    - by Fraser
    I'm getting different signs when I convert a matrix to a quaternion and invert that, versus when I invert the matrix and then get the quaternion from it:

        Quaternion a = Quaternion.Invert(getRotation(m));
        Quaternion b = getRotation(Matrix.Invert(m));

    I would expect a and b to be identical (or inverses of each other). However, it looks like a = (x, y, -z, -w) while b = (-x, -y, w, z). In other words, the Z and W components have been switched for some reason. Note: getRotation() decomposes the transform matrix and returns just the rotation part of it (I've tried normalizing the result; it does nothing). The matrix m is a complete transform matrix and contains a translation (and possibly a scale) as well as a rotation. I'm using D3DXMatrixDecompose to do the actual decomposition.
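    One caveat worth keeping in mind while debugging this (a general quaternion fact, not specific to D3DX): q and -q represent the same rotation, so two decomposition paths can disagree in every component's sign and still be identical as rotations. A small check, sketched with XNA's Quaternion type:

        using System;
        using Microsoft.Xna.Framework;

        public static class QuaternionCompare
        {
            // True if a and b encode the same rotation, allowing for the
            // sign ambiguity (q and -q rotate identically).
            public static bool SameRotation(Quaternion a, Quaternion b, float epsilon)
            {
                a.Normalize();
                b.Normalize();
                return Math.Abs(Quaternion.Dot(a, b)) > 1f - epsilon;
            }
        }

    The pair reported above, (x, y, -z, -w) versus (-x, -y, w, z), fails this test unless z and w happen to be equal, so the two code paths really are producing different rotations rather than sign-flipped twins.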

    Read the article

  • Rotate sphere in Javascript / three.js while moving on x/z axes

    - by kaipr
    I have a sphere/ball in three.js which I want to "roll" around on the x/z axes. For the z axis I could simply do this, no matter what the current x and y rotation is:

        sphere.roll_z = function(distance) {
            sphere.position.z += distance;
            sphere.rotation.x += distance > 0 ? 0.05 : -0.05;
        }

    But how can I roll it along the x axis? And how could I do roll_z properly? I've found a lot about quaternions and matrices, but I can't figure out how to use them properly to achieve my (rather simple) goal. I'm aware that I have to update multiple rotations and that I have to calculate how far to rotate the sphere to match the distance, but the "how" is the question. It's probably just a lack of mathematical skills which I should train, but a working example / short explanation would help a lot to start with.
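    The engine-independent recipe: a ball that moves a distance d without slipping rotates by the angle d / r (radians) about the axis perpendicular to both the up vector and the direction of travel, and the increment should be applied in world space rather than added to per-axis Euler angles. A sketch in C# with XNA types; the same structure maps directly onto three.js's Quaternion:

        using Microsoft.Xna.Framework;

        public static class Rolling
        {
            // Roll a sphere of the given radius along a ground-plane movement
            // vector, accumulating the orientation in world space.
            public static Quaternion Roll(Quaternion orientation, Vector3 movement, float radius)
            {
                float distance = movement.Length();
                if (distance < 1e-6f)
                    return orientation;

                // Axis is perpendicular to "up" and to the direction of travel.
                Vector3 axis = Vector3.Normalize(Vector3.Cross(Vector3.Up, movement / distance));
                float angle = distance / radius;    // no-slip rolling condition

                Quaternion increment = Quaternion.CreateFromAxisAngle(axis, angle);
                // Existing orientation first, then the world-space increment.
                return Quaternion.Concatenate(orientation, increment);
            }
        }

    This one function replaces both roll_z and roll_x: pass (0, 0, distance) or (distance, 0, 0) as the movement and the correct axis and angle fall out of the cross product.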

    Read the article

  • accessing c++ class members with luaplus

    - by cppanda
    I've implemented LuaPlus in my engine's event manager successfully and really like the flexibility I gained, but I'm still not exactly where I want to be, because I can't link my C++ classes to a Lua class. For example, I have an Actor class in C++, and I want to be able to create the same class in Lua and gain access to its members with LuaPlus, but I can't figure out how to achieve that. Is this actually built-in LuaPlus functionality, or do I have to write my own interface that exchanges data tables between C++ and Lua? My current approach would be to fire an event in Lua script that creates a new actor class in C++ code, and transfer its id and the data I need back to Lua. When I modify the data, I send the modifications back to the C++ code again; but I actually thought there's something in LuaPlus that exposes this functionality already.

    Read the article

  • Restrict Tile Map to its boundaries

    - by Farooq Arshed
    I have loaded a tmx file in cocos2d-x and now I am trying to implement panning. I have successfully implemented the first part, where the map moves. Now I want to restrict the map so it does not pan beyond its boundary, where it shows black screen. I am confused as to how to implement this. Below is my code; any help would be appreciated.

        bool HelloWorld::init()
        {
            if ( !CCLayer::init() )
            {
                return false;
            }
            const char* tmx = "isometric_grass_and_water.tmx";
            _tileMap = new CCTMXTiledMap();
            _tileMap->initWithTMXFile(tmx);
            this->addChild(_tileMap);
            this->setTouchEnabled(true);
            return true;
        }

        void HelloWorld::ccTouchesBegan(CCSet *touches, CCEvent *event)
        {
            CCSetIterator it;
            for (it = touches->begin(); it != touches->end(); ++it)
            {
                CCTouch* touch = (CCTouch*)it.operator*();
                CCLog("touches id: %d", touch->getID());
                oldLoc = touch->getLocationInView();
                oldLoc = CCDirector::sharedDirector()->convertToGL(oldLoc);
            }
        }

        void HelloWorld::ccTouchesMoved(CCSet *touches, CCEvent *event)
        {
            if (touches->count() == 1)
            {
                CCTouch* touch = (CCTouch*)( touches->anyObject() );
                this->moveScreen(touch);
            }
            else if (touches->count() == 2)
            {
                this->scaleScreen(touches);
            }
        }

        void HelloWorld::moveScreen(CCTouch* touch)
        {
            CCPoint currentLoc = touch->getLocationInView();
            currentLoc = CCDirector::sharedDirector()->convertToGL(currentLoc);
            CCPoint moveTo = ccpSub(oldLoc, currentLoc);
            moveTo = ccpMult(moveTo, -1);
            oldLoc = currentLoc;
            this->setPosition(ccpAdd(this->getPosition(), ccp(moveTo.x, moveTo.y)));
        }
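    The usual fix is to clamp the layer's position after every pan so the map's edges can never cross the screen's edges: with a lower-left anchor and a map at least as big as the screen, each coordinate must stay between (screenSize - mapSize) and 0. The clamp itself is language-neutral; a sketch in C# (the same two lines belong in moveScreen just before setPosition):

        using Microsoft.Xna.Framework;

        public static class PanClamp
        {
            // Keep a layer position inside the legal panning range so the
            // map always covers the whole screen. Assumes a lower-left
            // anchor and mapSize >= screenSize on both axes.
            public static Vector2 Clamp(Vector2 position, Vector2 mapSize, Vector2 screenSize)
            {
                float x = MathHelper.Clamp(position.X, screenSize.X - mapSize.X, 0f);
                float y = MathHelper.Clamp(position.Y, screenSize.Y - mapSize.Y, 0f);
                return new Vector2(x, y);
            }
        }

    For an isometric map the effective pixel size has to be derived from the tile size and map dimensions (and the diamond shape leaves unavoidable empty corners), but the clamping idea is the same.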

    Read the article

  • How to determine character's foot contact point on a uniform triangle mesh terrain?

    - by xenon
    For a terrain that is modelled by a heightmap with a uniform triangle mesh, what are some techniques I could use to determine the contact point of the foot of a character standing on the terrain? Since the terrain's Y values are altered by the heightmap, the surface is no longer flat. As the character moves on the terrain, it has to know what Y value its foot should rest at. Conceptually, what are some methods and techniques to determine the contact point of the character's foot on the terrain?
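    The standard technique is a direct lookup rather than collision detection: find the grid cell under the foot's (x, z), decide which of the cell's two triangles contains the point, and barycentrically interpolate the three corner heights. A sketch assuming a uniform grid with spacing cellSize, heights indexed [i, j], and the diagonal running between (i, j+1) and (i+1, j); bounds checking is omitted:

        using System;

        public static class Terrain
        {
            // Interpolated terrain height under world position (x, z).
            public static float GetHeight(float[,] heights, float cellSize, float x, float z)
            {
                int i = (int)Math.Floor(x / cellSize);
                int j = (int)Math.Floor(z / cellSize);
                float fx = x / cellSize - i;        // fractional cell position, 0..1
                float fz = z / cellSize - j;

                float h00 = heights[i, j];
                float h10 = heights[i + 1, j];
                float h01 = heights[i, j + 1];
                float h11 = heights[i + 1, j + 1];

                if (fx + fz <= 1f)                  // lower-left triangle
                    return h00 + (h10 - h00) * fx + (h01 - h00) * fz;
                else                                // upper-right triangle
                    return h11 + (h01 - h11) * (1f - fx) + (h10 - h11) * (1f - fz);
            }
        }

    Setting the character's foot Y to this value (plus any foot offset) keeps it glued to the surface; the result is the same as a downward ray-vs-terrain query, just without the ray.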

    Read the article

  • LWJGL glRotatef() without rotating axes?

    - by Brandon oubiub
    Okay so, I noticed that when you rotate around an axis, say you do this:

        glRotatef(90.0f, 1.0f, 0.0f, 0.0f);

    that will rotate things 90 degrees around the x-axis. However, it also sort of rotates the y and z axes as well, so now the y-axis points in and out of the screen instead of up and down. So when I try to do something like this:

        glRotatef(90.0f, 1.0f, 0.0f, 0.0f);
        glRotatef(whatever, 0.0f, 1.0f, 0.0f);
        glRotatef(whatever2, 0.0f, 0.0f, 1.0f);

    the rotations around the y and z axes end up not how I want them. I was wondering if there is any way I can sort of rotate just the axes back to their initial position after using glRotatef(), without rotating the object back. Or something like that, just so that when I rotate around the y-axis, it rotates around a vertical axis.
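    One way to see what is happening, assuming the usual fixed-function convention that each glRotatef post-multiplies the current modelview matrix: after the three calls above, vertices are transformed by

        M = Rx(90) * Ry(whatever) * Rz(whatever2),    v' = M * v

    Read left to right, each successive rotation acts in the frame produced by the calls before it (local axes, which is the behavior described above); read right to left, each acts about the fixed world axes. So issuing the glRotatef calls in the reverse order makes each rotation behave as a rotation about the original, unmoved axes; the alternative is to accumulate the orientation yourself (e.g. in a quaternion) and load it once per frame.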

    Read the article

  • How to move the object around the screen

    - by Abhishek
    I am trying to move an object around the screen. I tried this code:

        -(void) move
        {
            CGFloat upperLimit = mWinSize.height - (mGunda.contentSize.height / 2.0);
            CGFloat lowerLimit = (mGunda.contentSize.height / 2.0);
            CGFloat rightLimit = mWinSize.width - (mGunda.contentSize.width / 2.0);

            if ( mImageGoingUpward )
            {
                mGunda.position = ccp( mGunda.position.x, mGunda.position.y + 5);
                if ( mGunda.position.y >= upperLimit )
                {
                    mImageGoingUpward = NO;
                    mHori = NO;
                }
            }
            else
            {
                mGunda.position = ccp( mGunda.position.x, mGunda.position.y - 5);
                if ( mGunda.position.y <= lowerLimit )
                {
                    mGunda.position = ccp(mGunda.position.x + 5, lowerLimit);
                }
                if (mGunda.position.x >= rightLimit)
                {
                    mGunda.position = ccp(mGunda.position.x, mGunda.position.y + 10);
                    mHori = YES;
                }
                if (mHori)
                {
                    if (mGunda.position.y >= upperLimit)
                    {
                        mGunda.position = ccp(mGunda.position.x - 5, mGunda.position.y);
                    }
                }
            }
        }

    It moves the object from bottom to top, top to bottom, along the bottom to the right, and from the right edge up to the top-right of the screen. Here is the problem: it never moves from the top-right back across to the left side of the screen, so the full circuit does not happen. How can I do this?
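    A pattern that avoids the growing tangle of direction flags is to walk a list of corner waypoints: head toward the current target and, on arrival, advance to the next one, wrapping around. Each screen corner becomes data instead of another if-branch. A minimal sketch in C#; the names are illustrative:

        using Microsoft.Xna.Framework;

        public class ScreenPatrol
        {
            private readonly Vector2[] _waypoints;   // e.g. the four screen corners
            private int _current;
            public Vector2 Position;

            public ScreenPatrol(Vector2 start, Vector2[] waypoints)
            {
                Position = start;
                _waypoints = waypoints;
            }

            // Call once per frame; speed is in pixels per frame.
            public void Update(float speed)
            {
                Vector2 toTarget = _waypoints[_current] - Position;
                if (toTarget.Length() <= speed)
                {
                    Position = _waypoints[_current];                  // corner reached
                    _current = (_current + 1) % _waypoints.Length;    // aim at the next one
                }
                else
                {
                    toTarget.Normalize();
                    Position += toTarget * speed;
                }
            }
        }

    With the corners listed in order around the border, the object traces the full loop, including the top-right-to-left leg that the flag-based version never reaches.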

    Read the article

  • How can I draw an arrow at the edge of the screen pointing to an object that is off screen?

    - by Adam Henderson
    I am wishing to do what is described in this topic: http://www.allegro.cc/forums/print-thread/283220

    I have attempted a variety of the methods mentioned there. First I tried the method described by Carrus85:

        Just take the ratio of the two triangle hypotenuses (it doesn't matter which triangle you use for the other; I suggest point 1 and point 2 as the distance you calculate). This will give you the aspect ratio percentage of the triangle in the corner from the larger triangle. Then you simply multiply deltax by that value to get the x-coordinate offset, and deltay by that value to get the y-coordinate offset.

    But I could not find a way to calculate how far the object is away from the edge of the screen. I then tried using ray casting (which I have never done before), suggested by 23yrold3yrold:

        Fire a ray from the center of the screen to the offscreen object. Calculate where on the rectangle the ray intersects. There's your coordinates.

    I first calculated the hypotenuse of the triangle formed by the difference in x and y positions of the two points. I used this to create a unit vector along that line. I looped along that vector until either the x coordinate or the y coordinate was off the screen. The two current x and y values then form the x and y of the arrow. Here is the code for my ray-casting method (written in C++ and Allegro 5):

        void renderArrows(Object* i)
        {
            float x1 = i->getX() + (i->getWidth() / 2);
            float y1 = i->getY() + (i->getHeight() / 2);
            float x2 = screenCentreX;
            float y2 = screenCentreY;

            float dx = x2 - x1;
            float dy = y2 - y1;

            float hypotSquared = (dx * dx) + (dy * dy);
            float hypot = sqrt(hypotSquared);

            float unitX = dx / hypot;
            float unitY = dy / hypot;

            float rayX = x2 - view->getViewportX();
            float rayY = y2 - view->getViewportY();
            float arrowX = 0;
            float arrowY = 0;

            bool posFound = false;
            while (!posFound)
            {
                rayX += unitX;
                rayY += unitY;
                if (rayX <= 0 || rayX >= screenWidth ||
                    rayY <= 0 || rayY >= screenHeight)
                {
                    arrowX = rayX;
                    arrowY = rayY;
                    posFound = true;
                }
            }

            al_draw_bitmap(sprite, arrowX - spriteWidth, arrowY - spriteHeight, 0);
        }

    This was relatively successful. However, arrows are displayed in the bottom-right section of the screen when objects are located above and to the left of the screen, as if the locations where the arrows are drawn have been rotated 180 degrees around the center of the screen. I assumed this was because when I calculate the hypotenuse of the triangle, it is always positive regardless of whether the difference in x or the difference in y is negative.

    Thinking about it, ray casting does not seem like a good way of solving the problem (due to the fact that it involves using sqrt() and a large loop). Any help finding a suitable solution would be greatly appreciated. Thanks, Adam
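    For what it's worth, the border intersection can be computed directly, with no loop and no sqrt: take the direction from the screen center to the target and scale it by whichever factor first brings it to a screen edge. (The mirroring in the code above likely comes from stepping along center - object, which points away from the object.) A sketch in C#, with the screen origin at the top-left:

        using System;
        using Microsoft.Xna.Framework;

        public static class OffscreenArrow
        {
            // Point on the screen border along the line from the screen
            // center toward 'target' (both in screen space), or null when
            // the target is already visible.
            public static Vector2? EdgePoint(Vector2 target, float screenWidth, float screenHeight)
            {
                if (target.X >= 0 && target.X <= screenWidth &&
                    target.Y >= 0 && target.Y <= screenHeight)
                    return null;                                  // on screen, no arrow

                Vector2 center = new Vector2(screenWidth / 2f, screenHeight / 2f);
                Vector2 d = target - center;

                // Scale factors that land d exactly on the vertical and
                // horizontal edges; the smaller one hits the border first.
                float tx = d.X != 0 ? (screenWidth / 2f) / Math.Abs(d.X) : float.MaxValue;
                float ty = d.Y != 0 ? (screenHeight / 2f) / Math.Abs(d.Y) : float.MaxValue;

                return center + d * Math.Min(tx, ty);
            }
        }

    Because d keeps its sign throughout, the arrow lands on the same side as the object, which removes the 180-degree mirroring without any quadrant-by-quadrant special cases.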

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for books, articles or tutorials about graphics architecture and graphics pipeline optimization. They shouldn't be too old (2008 or newer); the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] (too old), [Real-Time Rendering, Akenine-Möller et al.], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient Multifragment Effects on Graphics Processing Units, Louis Frederic Bavoil] and some internet discussions, but there is not much information there and I want to read more. It should cover communication and data transfers between the application, the driver, memory and the shader units; vertices and attributes; and also the pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, frame buffers or rasterization. It can also be about OpenGL (not DirectX) and optimizing extensions (not old extensions like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • XNA 4: GetData from Texture2D and Set it into Texture3D with specific order

    - by cubrman
    I am trying to convert my color-grading 2D lookup texture into a 3D LUT. When I simply use:

        ColorAtlas.GetData(data);
        ColorAtlas3D.SetData(data);

    I get a messed-up result. I tried building my 2D atlas horizontally, but it did not help; the data was messed up in a different way. So my question is: how can I influence the order of the data I get from the 2D atlas, and how can I properly pass it into my 3D atlas?

    Update: I know that I can GetData from a specific rectangular area and put it into several arrays, but the result is still the same. This is what I tried:

        Color[] data2D = new Color[0];
        for (int i = 0; i < 32; i++)
        {
            Color[] data = new Color[32 * 32];
            GraphicsDevice.SetRenderTarget(null);
            ColorAtlas.GetData(0, new Rectangle(0, i * 32, 32, 32), data, 0, data.Length);

            int oldLength = data2D.Length;
            Array.Resize<Color>(ref data2D, oldLength + data.Length);
            Array.Copy(data, 0, data2D, oldLength, data.Length);
        }
        ColorAtlas3D.SetData(data2D);
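    In case it helps the debugging: Texture3D data is laid out x-fastest, then y, then z (index = x + y*width + z*width*height), so the safest route is an explicit remap where the slice order is spelled out and easy to permute. A sketch assuming the atlas stacks its 32 slices vertically, so slice z occupies rows z*32 through z*32 + 31:

        // Reorder a vertically stacked 32x32-slice atlas into the
        // x, then y, then z order that Texture3D.SetData expects.
        const int size = 32;
        Color[] atlas = new Color[size * size * size];
        ColorAtlas.GetData(atlas);                    // atlas texture is 32 x 1024

        Color[] volume = new Color[size * size * size];
        for (int z = 0; z < size; z++)
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++)
                {
                    int atlasRow = z * size + y;      // slice z starts at row z*32
                    volume[x + y * size + z * size * size] = atlas[x + atlasRow * size];
                }

        ColorAtlas3D.SetData(volume);

    If the output is still scrambled, swapping which of y and z selects the atlas row (or flipping one of them) is a two-character experiment here, whereas with per-rectangle GetData calls the ordering is buried in the loop structure.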

    Read the article

  • Why are my 3ds Max .fbx exports huge?

    - by abracadabra1980
    I've made an animation in 3ds Max and want to export it to .fbx and import it into Unity. I've done this once without problems, but this time my .max file is 2.8 MB and my .fbx file came out at a huge 630 MB! There's nothing wrong with my model: I exported it from a Blender model (to .fbx) and imported it into 3ds Max (converting it to an editable poly) to do my rigging and animation. As soon as I import some .bip animations, I get these huge files. Is there a safe way to get smaller file sizes? I don't mind redoing the rigging if I can solve this.

    Read the article

  • How to use UILongPressGestureRecognizer with sprite drag & wait?

    - by ganesh
    Maybe it's been asked before, but I couldn't find a good answer. Please tell me how this can be implemented with UILongPressGestureRecognizer: a user drags a sprite from location X to location Y, waits at Y (the touch has not ended yet) for 1 or 2 seconds, and then releases the touch, i.e. the touch ends. In this case, shouldn't the following states be triggered, in this order, for UILongPressGestureRecognizer?

    - UIGestureRecognizerStateBegan
    - UIGestureRecognizerStateChanged
    - UIGestureRecognizerStateEnded

    My problem is that if a UIPanGestureRecognizer is also implemented to handle drags, the long-press gesture is never triggered, even after long waits. Any thoughts?

    Read the article

  • LWJGL texture bleeding fix won't work

    - by user1990950
    I've tried a lot of things to fix texture bleeding, but nothing works. I don't want to add a transparent border around my textures, because I already have too many and it would take too much time, and I can't do it in code because I'm loading the textures with Slick. My textures are separate textures, and they seem to wrap around to the other side (texture bleeding). Here are the textures that are "bleeding": the head, body, arm and leg are separate textures. Here's the code I'm using to draw a texture:

        public static void drawTextureN(Texture texture, Vector2f position, Vector2f translation,
                Vector2f origin, Vector2f scale, float rotation, Color color, FlipState flipState) {
            texture.setTextureFilter(GL11.GL_NEAREST);
            color.bind();
            texture.bind();

            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);

            GL11.glTranslatef((int)position.x, (int)position.y, 0);
            GL11.glTranslatef(-(int)translation.x, -(int)translation.y, 0);
            GL11.glRotated(rotation, 0f, 0f, 1f);
            GL11.glScalef(scale.x, scale.y, 1);
            GL11.glTranslatef(-(int)origin.x, -(int)origin.y, 0);

            float pixelCorrection = 0f;
            GL11.glBegin(GL11.GL_QUADS);
            GL11.glTexCoord2f(0, 0);
            GL11.glVertex2f(0, 0);
            GL11.glTexCoord2f(1, 0);
            GL11.glVertex2f(texture.getTextureWidth(), 0);
            GL11.glTexCoord2f(1, 1);
            GL11.glVertex2f(texture.getTextureWidth(), texture.getTextureHeight());
            GL11.glTexCoord2f(0, 1);
            GL11.glVertex2f(0, texture.getTextureHeight());
            GL11.glEnd();
            GL11.glLoadIdentity();
        }

    I tried a half-pixel correction, but it didn't make any sense to me given GL12.GL_CLAMP_TO_EDGE. I set pixelCorrection to 0, but it still won't work.
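    One thing to check, offered as a guess rather than a confirmed diagnosis: some loaders pad each image up to a power-of-two texture, and if Slick has done that here, then sampling all the way to texture coordinate 1 reads into the padding, which looks exactly like the wrap-around bleed described above. In that case the quad's upper texture coordinates should be the image's fraction of the padded texture rather than 1:

        u_max = imageWidth / paddedTextureWidth
        v_max = imageHeight / paddedTextureHeight

    If memory serves, Slick's Texture exposes these normalized extents via texture.getWidth() and texture.getHeight(), which could be passed to glTexCoord2f in place of the hard-coded 1s.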

    Read the article
