Search Results

Search found 21563 results on 863 pages for 'game testing'.

  • Instead of the specified Texture, black circles on a green background are getting rendered. Why?

    - by vinzBad
    I'm trying to render a Texture via OpenGL. But instead of the texture black circles on a green background are rendered. (They scale, depending what the rotation of the texture is) Example: The texture I'm trying to render is the following: This is the code I use to render the texture, it's located in my Sprite-class. public void Render() { Matrix4 matrix = Matrix4.CreateTranslation(-OriginX, -OriginY, 0) * Matrix4.CreateRotationZ(Rotation) * Matrix4.CreateTranslation(X, Y, 0); Vector2[] corners = { new Vector2(0,0), //top left new Vector2(Width ,0),//top right new Vector2(Width,Height),//bottom rigth new Vector2(0,Height)//bottom left }; //copy the corners to the uv coordinates Vector2[] uv = corners.ToArray<Vector2>(); //transform the coordinates for (int i = 0; i < 4; i++) corners[i] = new Vector2(Vector3.Transform(new Vector3(corners[i]), matrix)); //GL.Color3(TintColor); GL.BindTexture(TextureTarget.Texture2D, _ID); GL.Begin(BeginMode.Quads); { for (int i = 0; i < 4; i++) { GL.TexCoord2(uv[i]); GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth); } } GL.End(); if (EnableDebugDraw) { GL.Color3(Color.Violet); GL.PointSize(3); GL.Begin(BeginMode.Points); { for (int i = 0; i < 4; i++) GL.Vertex2(corners[i]); } GL.End(); GL.Color3(Color.Green); GL.Begin(BeginMode.Points); GL.Vertex2(X, Y); GL.End(); } } This is how I setup OpenGL. public static void SetupGL() { GL.Enable(EnableCap.AlphaTest); GL.AlphaFunc(AlphaFunction.Greater, 0.1f); GL.Enable(EnableCap.Texture2D); GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest); } With this function I load the texture: public static uint LoadTexture(string path) { uint id; GL.GenTextures(1, out id); GL.BindTexture(TextureTarget.Texture2D, id); Bitmap bitmap = new Bitmap(path); BitmapData data = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb); GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0); bitmap.UnlockBits(data); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear); return id; } And here I call Sprite.Render() protected override void OnRenderFrame(FrameEventArgs e) { GL.ClearColor(Color.MidnightBlue); GL.Clear(ClearBufferMask.ColorBufferBit); _sprite.Render(); SwapBuffers(); base.OnRenderFrame(e); } As I stole this code from the Textures-Example from OpenTK, I don't understand why this doesn't work.
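
    One thing worth checking, offered as a guess rather than a confirmed diagnosis: texture coordinates passed to GL.TexCoord2 are normalized to [0, 1], while the code above copies the pixel-space corner positions (0..Width, 0..Height) straight into the uv array, which wraps/tiles the texture many times and can easily produce output like the circles described. The original is C#/OpenTK; a minimal sketch of normalized UVs in Java-style code:

        // Corners stay in pixel space; UVs are normalized to [0, 1].
        float[][] corners = {
            { 0,     0      },   // top left
            { width, 0      },   // top right
            { width, height },   // bottom right
            { 0,     height }    // bottom left
        };
        float[][] uv = {
            { 0f, 0f },          // top left
            { 1f, 0f },          // top right
            { 1f, 1f },          // bottom right
            { 0f, 1f }           // bottom left
        };
        // per vertex: glTexCoord2f(uv[i][0], uv[i][1]); glVertex3f(corner.x, corner.y, depth);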

    Read the article

  • Label not properly centered in TextButton

    - by Kees de Bruin
    I'm using LibGDX v1.1.0 and I see that the label of a TextButton is not properly centered. I have the following code: m_resumeButton = new TextButton("resume", skin); m_resumeButton.addListener(new ChangeListener() { public void changed(ChangeEvent event, Actor actor) { m_state = GameState.RUNNING; getGame().getWorld().pauseWorld(false); } }); The default TextButtonStyle is defined as: "com.badlogic.gdx.scenes.scene2d.ui.TextButton$TextButtonStyle": { "default": { "up": "menu-button", "down": "menu-button-down", "checked": "menu-button-down", "disabled": "menu-button-disabled", "font": "font24", "fontColor": "white" } } The menu button images are simple 240x48 bitmaps saved as 9-patch images. An image can be found here to illustrate the problem: https://www.dropbox.com/s/cwuhu5xb9ro5w6m/screenshot001.jpg Am I doing something wrong? Or is there a problem with the button images I'm using?
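
    Two common causes, offered as guesses rather than a confirmed diagnosis: the 9-patch content area adds padding that pushes the label off-center, and the label alignment can also be set explicitly. A sketch of the explicit route using the scene2d.ui API (verify the exact Align package for v1.1.0; skin is the poster's own skin object):

        TextButton resumeButton = new TextButton("resume", skin);
        // Center the label text inside the button; Align lives in
        // scenes.scene2d.utils in older libGDX releases and gdx.utils in newer ones.
        resumeButton.getLabel().setAlignment(Align.center);
        // The 9-patch content region acts as padding on the label cell;
        // zeroing it helps confirm whether the offset comes from the image.
        resumeButton.getLabelCell().pad(0);

    If zeroing the padding fixes the offset, the real fix belongs in the content area of the menu-button 9-patch rather than in code.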

    Read the article

  • HTML5 Canvas Tileset Animation

    - by Veyha
    How do I animate an image from a tileset on an HTML5 canvas? I have this code so far: http://jsfiddle.net/WnjB6/1/ In it I can register animations, for example Animation.add('stand', [0, 1, 2, 3, 4, 5]); but how do I actually play them? My drawing function is drawTile(canvasX, canvasY, tile, tileWidth, tileHeight), and Animation['stand'] returns the frame indices 0, 1, 2, 3, 4, 5. What I need is something like Animation.play('stand') that cycles through the frames in the 'stand' array. I have been trying to get this working for about a day but have run out of ideas. Thanks, and sorry for my bad English.
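
    The usual pattern is to keep a current-frame index and advance it by elapsed time each tick. The original is JavaScript; the same idea sketched in Java, with drawTile and the frame list standing in for the poster's own functions:

        class Animation {
            private final int[] frames;      // e.g. {0, 1, 2, 3, 4, 5} for 'stand'
            private final double frameTime;  // seconds each frame stays on screen
            private double elapsed = 0;

            Animation(int[] frames, double frameTime) {
                this.frames = frames;
                this.frameTime = frameTime;
            }

            void update(double dt) {         // call once per game tick
                elapsed += dt;
            }

            int currentTile() {              // feed this into drawTile(...)
                int index = (int) (elapsed / frameTime) % frames.length;
                return frames[index];
            }
        }

    Each tick you call update(dt) on the active animation and then drawTile(canvasX, canvasY, anim.currentTile(), tileWidth, tileHeight); play('stand') then just means switching which Animation object is currently active.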

    Read the article

  • Why Android for new (micro) consoles?

    - by Klaim
    There are a lot of new (micro) consoles built on Android either coming soon or already out (OUYA, for example). My question is: why use Android rather than another OS as the base for these consoles? I assume there are pragmatic answers, but I can't see any clear killer feature. For example, one could assume that any stable Linux distribution would work (as Valve seems to think). Android is primarily targeted at mobile platforms, which means it is built around the idea of being interrupted; that goes against a lot of what console developers need from new hardware, though it doesn't kill Android as a platform for game development either, since it's just a constraint. Why not another OS? What are Android's killer features that make micro-console builders use it instead of Linux or anything else?

    Read the article

  • Dirt compression from vehicle tires

    - by Mungoid
    So I kind of have this working, but it's not correct because it just averages, so I wanted to know if anyone here has any ideas. I'm trying to simulate loose dirt compressing under the tires of a vehicle, to reduce the potential bumpiness of 'chunky' terrain. Currently I have a bounding box around each tire, set a little low so it intersects the terrain. Each frame I average the heights of all terrain points within that tire's box and then set them all to that average. Clearly this won't work in most cases because, for example, on a hill the terrain deforms far too much. One idea was to cap how much the points can rise or fall, but that still doesn't work properly and sometimes looks more like steps than smooth dirt. There is probably more to this than what I'm currently doing, but I'm not sure where to look. Could anyone shed some light on this? Would I benefit from looking up a smoothing algorithm or something similar?
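
    Rather than averaging (which both lowers and raises points, hence the hill problem), one option is to only press points down toward the tire's underside, capped per frame, and never lift them. A rough sketch; the types and methods (Terrain, TireFootprint, bottomHeightAt, ...) are invented for illustration:

        // Press terrain down toward the tire's underside, never raise it,
        // and limit how far a point may move in a single frame.
        void compressUnderTire(Terrain terrain, TireFootprint tire, float maxDeltaPerFrame) {
            for (TerrainPoint p : terrain.pointsInside(tire.bounds())) {
                float target = tire.bottomHeightAt(p.x, p.z);  // tire underside at this point
                if (p.height > target) {
                    float delta = Math.min(p.height - target, maxDeltaPerFrame);
                    p.height -= delta;                         // compress, do not lift
                }
            }
        }

    Running a small neighbor-smoothing pass over the touched points afterwards is one way to soften the stepped look.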

    Read the article

  • Can Minecraft support an asymmetrical mesh?

    - by Qwaar
    In a bout of fancy I have decided I want to play as a Zaku II from Gundam, and was saddened that player skins must be symmetrical. Then I remembered my friend's mod that let him play as an MLP pony, and another one that lets you shapeshift into mobs. So I figured I could butcher a player model mesh, slap on the shoulder spike and shield, apply a Zaku skin I found, port the colors over onto extra texture for the shoulder portions, and call it a day once I added it to the shiftable list, before butchering a gun mod to turn a gun into a ZMP-78. Before I get started, though, I need to know whether Minecraft will support an asymmetrical mesh.

    Read the article

  • Restrict movement within a radius

    - by Phil
    I asked a similar question recently, but now I think I know more about what I really want to ask; I can answer my own question once I understand this bit. I have a situation where a sprite's center point needs to be constrained within a certain boundary in 2D space. The boundary is circular, so the sprite is constrained within a radius, defined as a distance from a certain center point. I know the position of that center point and I can track the center position of the sprite. This is the code that detects the distance: float distance = Vector2.Distance(centerPosition, spritePosition); if (distance > allowedDistance) { } The positions can be anywhere on the grid; they are not normalized to between -1 and 1. So basically the detection code works (it only fires when the sprite is outside its boundary); I just don't know what to do when it oversteps. Please explain any math used, as I really want to understand the reasoning well enough to elaborate on it myself.
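
    When the check fires, project the sprite back onto the circle: take the vector from the center to the sprite, normalize it (divide by its length) so it becomes a pure direction, then place the sprite allowedDistance along that direction from the center. The original is XNA/C#; the same math written out in Java:

        float dx = spriteX - centerX;
        float dy = spriteY - centerY;
        float distance = (float) Math.sqrt(dx * dx + dy * dy);

        if (distance > allowedDistance) {
            // (dx, dy) / distance is a unit-length direction from the center toward
            // the sprite; scaling it by allowedDistance lands exactly on the boundary.
            float scale = allowedDistance / distance;
            spriteX = centerX + dx * scale;
            spriteY = centerY + dy * scale;
        }

    In XNA terms this is Vector2.Normalize(spritePosition - centerPosition) scaled by allowedDistance and added back to centerPosition.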

    Read the article

  • How does HDR work?

    - by dotminic
    I'm trying to understand what HDR is and how it works. I understand the basic concepts and have a slight idea of how it is implemented with D3D/HLSL, but it's still pretty foggy. Say I'm rendering a sphere with a texture of the earth plus a small point list of vertices to act as stars: how would I render this in HDR? Here are a few things I'm confused about: I'm guessing I can't use just any basic image format for the texture, as the values would be limited to [0, 255] and clamped to [0, 1] in a shader. The same goes for the back buffer; I take it the format needs to be a floating-point format? What are the other steps involved? Surely there has to be more than just rendering to a floating-point render target and then applying some bloom as a post-process (considering the output will be 8 bits per channel anyway)? Basically, what are the steps for HDR, and how does it work? I can't seem to find any good papers or articles that describe the process, other than this one, but it seems to skim over the basics a little, so it's confusing.
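
    In outline: render the scene into a floating-point render target (e.g. a 16-bit float RGBA format) so values above 1.0 survive, optionally measure the average luminance to pick an exposure, extract and blur the bright areas for bloom, then tone map back into the displayable [0, 1] range for the 8-bit back buffer. A sketch of one common tone-mapping step (a Reinhard-style operator), written in Java only to show the arithmetic; real implementations do this per pixel in a shader:

        // Map an HDR color (components may exceed 1.0) to LDR.
        float[] toneMap(float[] hdrRgb, float exposure) {
            float[] ldr = new float[3];
            for (int i = 0; i < 3; i++) {
                float c = hdrRgb[i] * exposure;  // exposure scales the scene brightness
                ldr[i] = c / (1.0f + c);         // compresses [0, infinity) into [0, 1)
            }
            return ldr;
        }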

    Read the article

  • XNA Content.Load dependency

    - by Richard
    Quick question: the project I'm building for test purposes works fine, but I have dependencies flying around everywhere due to the XNA framework. In Update I pass GameTime everywhere... this is okay. In Draw I pass GameTime and SpriteBatch everywhere... this is also okay. My issue is with the Content.Load textures/sounds/fonts. I have them as public variables, e.g. Texture1 = Content.Load(Of Texture2D)("Texture1"), and I'm passing a 'Game1' reference into the constructor of every class I instantiate just to gain access to these variables. Am I missing an OOP trick that would save me from passing a reference to 'Game1' to every new class?
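
    One common alternative to handing out Game1 is to group the loaded content into its own class and give each object only that (or only the specific assets it uses) through its constructor. The original is C#/XNA; the shape of the idea in Java, with every name invented for illustration:

        // Built once after loading, then passed to whoever needs content.
        class GameAssets {
            final Texture texture1;
            final SoundEffect jumpSound;
            GameAssets(Texture texture1, SoundEffect jumpSound) {
                this.texture1 = texture1;
                this.jumpSound = jumpSound;
            }
        }

        class Player {
            private final Texture sprite;
            Player(GameAssets assets) {   // depends on the assets, not on the whole game
                this.sprite = assets.texture1;
            }
        }

    The classes then never see Game1 at all, which also makes them easier to test on their own.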

    Read the article

  • Increase moving speed of body

    - by Siddharth
    How to move ball speedily on the screen using box2d in libGDX? public class Box2DDemo implements ApplicationListener { private SpriteBatch batch; private TextureRegion texture; private World world; private Body groundDownBody, groundUpBody, groundLeftBody, groundRightBody, ballBody; private BodyDef groundBodyDef1, groundBodyDef2, groundBodyDef3, groundBodyDef4, ballBodyDef; private PolygonShape groundDownPoly, groundUpPoly, groundLeftPoly, groundRightPoly; private CircleShape ballPoly; private Sprite sprite; private FixtureDef fixtureDef; private Vector2 ballPosition; private Box2DDebugRenderer renderer; Vector2 vector2; @Override public void create() { texture = new TextureRegion(new Texture( Gdx.files.internal("img/red_ring.png"))); sprite = new Sprite(texture); sprite.setOrigin(sprite.getWidth() / 2, sprite.getHeight() / 2); batch = new SpriteBatch(); world = new World(new Vector2(0.0f, -10.0f), false); groundBodyDef1 = new BodyDef(); groundBodyDef1.type = BodyType.StaticBody; groundBodyDef1.position.x = 0.0f; groundBodyDef1.position.y = 0.0f; groundDownBody = world.createBody(groundBodyDef1); groundBodyDef2 = new BodyDef(); groundBodyDef2.type = BodyType.StaticBody; groundBodyDef2.position.x = 0f; groundBodyDef2.position.y = Gdx.graphics.getHeight(); groundUpBody = world.createBody(groundBodyDef2); groundBodyDef3 = new BodyDef(); groundBodyDef3.type = BodyType.StaticBody; groundBodyDef3.position.x = 0f; groundBodyDef3.position.y = 0f; groundLeftBody = world.createBody(groundBodyDef3); groundBodyDef4 = new BodyDef(); groundBodyDef4.type = BodyType.StaticBody; groundBodyDef4.position.x = Gdx.graphics.getWidth(); groundBodyDef4.position.y = 0f; groundRightBody = world.createBody(groundBodyDef4); groundDownPoly = new PolygonShape(); groundDownPoly.setAsBox(480.0f, 10f); fixtureDef = new FixtureDef(); fixtureDef.density = 0f; fixtureDef.restitution = 1f; fixtureDef.friction = 0f; fixtureDef.shape = groundDownPoly; fixtureDef.filter.groupIndex = 0; groundDownBody.createFixture(fixtureDef); groundUpPoly = new PolygonShape(); groundUpPoly.setAsBox(480.0f, 10f); fixtureDef = new FixtureDef(); fixtureDef.friction = 0f; fixtureDef.restitution = 0f; fixtureDef.density = 0f; fixtureDef.shape = groundUpPoly; fixtureDef.filter.groupIndex = 0; groundUpBody.createFixture(fixtureDef); groundLeftPoly = new PolygonShape(); groundLeftPoly.setAsBox(10f, 320f); fixtureDef = new FixtureDef(); fixtureDef.friction = 0f; fixtureDef.restitution = 0f; fixtureDef.density = 0f; fixtureDef.shape = groundLeftPoly; fixtureDef.filter.groupIndex = 0; groundLeftBody.createFixture(fixtureDef); groundRightPoly = new PolygonShape(); groundRightPoly.setAsBox(10f, 320f); fixtureDef = new FixtureDef(); fixtureDef.friction = 0f; fixtureDef.restitution = 0f; fixtureDef.density = 0f; fixtureDef.shape = groundRightPoly; fixtureDef.filter.groupIndex = 0; groundRightBody.createFixture(fixtureDef); ballPoly = new CircleShape(); ballPoly.setRadius(16f); fixtureDef = new FixtureDef(); fixtureDef.shape = ballPoly; fixtureDef.density = 1f; fixtureDef.friction = 1f; fixtureDef.restitution = 1f; ballBodyDef = new BodyDef(); ballBodyDef.type = BodyType.DynamicBody; ballBodyDef.position.x = (int) 200; ballBodyDef.position.y = (int) 200; ballBody = world.createBody(ballBodyDef); // ballBody.setLinearVelocity(200f, 200f); // ballBody.applyLinearImpulse(new Vector2(250f, 250f), // ballBody.getLocalCenter()); ballBody.createFixture(fixtureDef); renderer = new Box2DDebugRenderer(true, false, false); } @Override public void dispose() { 
ballPoly.dispose(); groundLeftPoly.dispose(); groundUpPoly.dispose(); groundDownPoly.dispose(); groundRightPoly.dispose(); world.destroyBody(ballBody); world.dispose(); } @Override public void pause() { } @Override public void render() { world.step(1f/30f, 3, 3); Gdx.gl.glClearColor(1f, 1f, 1f, 1f); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); batch.begin(); vector2 = ballBody.getLinearVelocity(); System.out.println("X=" + vector2.x + " Y=" + vector2.y); ballPosition = ballBody.getPosition(); renderer.render(world,batch.getProjectionMatrix()); // int preX = (int) (vector2.x / Math.abs(vector2.x)); // int preY = (int) (vector2.y / Math.abs(vector2.y)); // // if (Math.abs(vector2.x) == 0.0f) // ballBody1.setLinearVelocity(1.4142137f, vector2.y); // else if (Math.abs(vector2.x) < 1.4142137f) // ballBody1.setLinearVelocity(preX * 5, vector2.y); // // if (Math.abs(vector2.y) == 0.0f) // ballBody1.setLinearVelocity(vector2.x, 1.4142137f); // else if (Math.abs(vector2.y) < 1.4142137f) // ballBody1.setLinearVelocity(vector2.x, preY * 5); batch.draw(sprite, (ballPosition.x - (texture.getRegionWidth() / 2)), (ballPosition.y - (texture.getRegionHeight() / 2))); batch.end(); } @Override public void resize(int arg0, int arg1) { } @Override public void resume() { } } I implement above code but I can not achieve higher moving speed of the ball
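
    One likely limiter, stated as a probable cause rather than a certainty: Box2D clamps how far a body may travel in a single step (roughly 2 world units), and the code above uses screen pixels as world units, so the ball can never cover more than a couple of pixels per step no matter what velocity it is given. Working in meters and converting to pixels only when drawing avoids that cap. A sketch of the scale plus an explicit velocity, using the libGDX Box2D calls already present in the code:

        static final float PIXELS_PER_METER = 32f;   // a chosen scale, not a fixed rule

        // Build bodies in meters...
        ballBodyDef.position.set(200 / PIXELS_PER_METER, 200 / PIXELS_PER_METER);

        // ...and give the ball its speed directly, in meters per second.
        ballBody.setLinearVelocity(10f, 10f);

        // Convert back to pixels only when drawing.
        Vector2 p = ballBody.getPosition();
        batch.draw(sprite, p.x * PIXELS_PER_METER - texture.getRegionWidth() / 2f,
                           p.y * PIXELS_PER_METER - texture.getRegionHeight() / 2f);

    The static walls and the world size would need the same conversion so everything lives in one unit system.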

    Read the article

  • How to move a rectangle properly?

    - by bodycountPP
    I recently started to learn OpenGL. Right now I finished the first chapter of the "OpenGL SuperBible". There were two examples. The first had the complete code and showed how to draw a simple triangle. The second example is supposed to show how to move a rectangle using SpecialKeys. The only code provided for this example was the SpecialKeys method. I still tried to implement it but I had two problems. In the previous example I declared and instaciated vVerts in the SetupRC() method. Now as it is also used in the SpecialKeys() method, I moved the declaration and instantiation to the top of the code. Is this proper c++ practice? I copied the part where vertex positions are recalculated from the book, but I had to pick the vertices for the rectangle on my own. So now every time I press a key for the first time the rectangle's upper left vertex is moved to (-0,5:-0.5). This ok because of GLfloat blockX = vVerts[0]; //Upper left X GLfloat blockY = vVerts[7]; // Upper left Y But I also think that this is the reason why my rectangle is shifted in the beginning. After the first time a key was pressed everything works just fine. Here is my complete code I hope you can help me on those two points. GLBatch squareBatch; GLShaderManager shaderManager; //Load up a triangle GLfloat vVerts[] = {-0.5f,0.5f,0.0f, 0.5f,0.5f,0.0f, 0.5f,-0.5f,0.0f, -0.5f,-0.5f,0.0f}; //Window has changed size, or has just been created. //We need to use the window dimensions to set the viewport and the projection matrix. void ChangeSize(int w, int h) { glViewport(0,0,w,h); } //Called to draw the scene. void RenderScene(void) { //Clear the window with the current clearing color glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT); GLfloat vRed[] = {1.0f,0.0f,0.0f,1.0f}; shaderManager.UseStockShader(GLT_SHADER_IDENTITY,vRed); squareBatch.Draw(); //perform the buffer swap to display the back buffer glutSwapBuffers(); } //This function does any needed initialization on the rendering context. //This is the first opportunity to do any OpenGL related Tasks. void SetupRC() { //Blue Background glClearColor(0.0f,0.0f,1.0f,1.0f); shaderManager.InitializeStockShaders(); squareBatch.Begin(GL_QUADS,4); squareBatch.CopyVertexData3f(vVerts); squareBatch.End(); } //Respond to arrow keys by moving the camera frame of reference void SpecialKeys(int key,int x,int y) { GLfloat stepSize = 0.025f; GLfloat blockSize = 0.5f; GLfloat blockX = vVerts[0]; //Upper left X GLfloat blockY = vVerts[7]; // Upper left Y if(key == GLUT_KEY_UP) { blockY += stepSize; } if(key == GLUT_KEY_DOWN){blockY -= stepSize;} if(key == GLUT_KEY_LEFT){blockX -= stepSize;} if(key == GLUT_KEY_RIGHT){blockX += stepSize;} //Recalculate vertex positions vVerts[0] = blockX; vVerts[1] = blockY - blockSize*2; vVerts[3] = blockX + blockSize * 2; vVerts[4] = blockY - blockSize *2; vVerts[6] = blockX+blockSize*2; vVerts[7] = blockY; vVerts[9] = blockX; vVerts[10] = blockY; squareBatch.CopyVertexData3f(vVerts); glutPostRedisplay(); } //Main entry point for GLUT based programs int main(int argc, char** argv) { //Sets the working directory. Not really needed gltSetWorkingDirectory(argv[0]); //Passes along the command-line parameters and initializes the GLUT library. glutInit(&argc,argv); //Tells the GLUT library what type of display mode to use, when creating the window. 
//Double buffered window, RGBA-Color mode,depth-buffer as part of our display, stencil buffer also available glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA|GLUT_DEPTH|GLUT_STENCIL); //Window size glutInitWindowSize(800,600); glutCreateWindow("MoveRect"); glutReshapeFunc(ChangeSize); glutDisplayFunc(RenderScene); glutSpecialFunc(SpecialKeys); //initialize GLEW library GLenum err = glewInit(); //Check that nothing goes wrong with the driver initialization before we try and do any rendering. if(GLEW_OK != err) { fprintf(stderr,"Glew Error: %s\n",glewGetErrorString); return 1; } SetupRC(); glutMainLoop(); return 0; }

    Read the article

  • Windows API programming

    - by vs4vijay
    Hello, it's Vijay. I'm trying to draw a crosshair (a kind of cursor) on the screen while a game (Counter-Strike) is running, so I did this: ############################# #include<iostream.h> #include<windows.h> #include<conio.h> #include<dos.h> #include<stdlib.h> #include<process.h> #include <time.h> int main() { HANDLE hl = OpenProcess(PROCESS_ALL_ACCESS,TRUE,pid); // Here pid is the process ID of the Game... HDC hDC = GetDC(NULL); //Here i pass NULL for Entire Screen... HBRUSH hb=CreateSolidBrush(RGB(0,255,255)); SelectObject(hDC,hb); POINT p; while(!kbhit()) { int x=1360/2,y=768/2; MoveToEx(hDC,x-20,y,&p); LineTo(hDC,x+20,y); SetPixel(hDC,x,y,RGB(255,0,0)); SetPixel(hDC,x-1,y-1,RGB(255,0,0)); SetPixel(hDC,x-1,y+1,RGB(255,0,0)); SetPixel(hDC,x+1,y+1,RGB(255,0,0)); SetPixel(hDC,x+1,y-1,RGB(255,0,0)); MoveToEx(hDC,x,y-20,&p); LineTo(hDC,x,y+20); } cin.get(); return 0; } #################################### It works fine on the desktop; I can see the crosshair. But when I run the game, the crosshair disappears, so I think I'm not handling the game's process correctly. I tried passing the process handle to GetDC(hl), but GetDC only takes an HWND (a handle to a window), so I cast it like this: HWND hl = (HWND)OpenProcess(PROCESS_ALL_ACCESS,TRUE,pid); and passed hl to GetDC(hl), but that doesn't work either. What's wrong with the code? Please tell me how to draw a simple shape on the screen over another process or game. (My compiler is Dev-C++ and my OS is Windows XP SP3.)

    Read the article

  • AndEngine: put a bullet back into the pool when it leaves the screen

    - by Ashot
    i'm creating a bullet with physics body. Bullet class (extends Sprite class) has die() method, which unregister physics connector, hide sprite and put it in pool public void die() { Log.d("bulletDie", "See you in hell!"); if (this.isVisible()) { this.setVisible(false); mPhysicsWorld.unregisterPhysicsConnector(physicsConnector); physicsConnector.setUpdatePosition(false); body.setActive(false); this.setIgnoreUpdate(true); bulletsPool.recyclePoolItem(this); } } in onUpdate method of PhysicsConnector i executes die method, when sprite leaves screen physicsConnector = new PhysicsConnector(this,body,true,false) { @Override public void onUpdate(final float pSecondsElapsed) { super.onUpdate(pSecondsElapsed); if (!camera.isRectangularShapeVisible(_bullet)) { Log.d("bulletDie","Dead?"); _bullet.die(); } } }; it works as i expected, but _bullet.die() executes TWICE. what i`m doing wrong and is it right way to hide sprites? here is full code of Bullet class (it is inner class of class that represents player) private class Bullet extends Sprite implements PhysicsConstants { private final Body body; private final PhysicsConnector physicsConnector; private final Bullet _bullet; private int id; public Bullet(float x, float y, ITextureRegion texture, VertexBufferObjectManager vertexBufferObjectManager) { super(x,y,texture,vertexBufferObjectManager); _bullet = this; id = bulletId++; body = PhysicsFactory.createCircleBody(mPhysicsWorld, this, BodyDef.BodyType.DynamicBody, bulletFixture); physicsConnector = new PhysicsConnector(this,body,true,false) { @Override public void onUpdate(final float pSecondsElapsed) { super.onUpdate(pSecondsElapsed); if (!camera.isRectangularShapeVisible(_bullet)) { Log.d("bulletDie","Dead?"); Log.d("bulletDie",id+""); _bullet.die(); } } }; mPhysicsWorld.registerPhysicsConnector(physicsConnector); $this.getParent().attachChild(this); } public void reset() { final float angle = canon.getRotation(); final float x = (float) ((Math.cos(MathUtils.degToRad(angle))*radius) + centerX) / PIXEL_TO_METER_RATIO_DEFAULT; final float y = (float) ((Math.sin(MathUtils.degToRad(angle))*radius) + centerY) / PIXEL_TO_METER_RATIO_DEFAULT; this.setVisible(true); this.setIgnoreUpdate(false); body.setActive(true); mPhysicsWorld.registerPhysicsConnector(physicsConnector); body.setTransform(new Vector2(x,y),0); } public Body getBody() { return body; } public void setLinearVelocity(Vector2 velocity) { body.setLinearVelocity(velocity); } public void die() { Log.d("bulletDie", "See you in hell!"); if (this.isVisible()) { this.setVisible(false); mPhysicsWorld.unregisterPhysicsConnector(physicsConnector); physicsConnector.setUpdatePosition(false); body.setActive(false); this.setIgnoreUpdate(true); bulletsPool.recyclePoolItem(this); } } }
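
    One plausible explanation, judging only from the code shown: the connector's onUpdate fires once more before the unregistration takes effect (the physics world finishes its current update pass first), so die() is entered a second time. The existing isVisible() guard already stops the bullet from being recycled twice; the duplicate log line appears because the Log.d call sits before that guard. An explicit flag makes the intent clearer and keeps the method safe no matter who calls it:

        private boolean dead = false;

        public void die() {
            if (dead) {
                return;                // already recycled; ignore repeat calls
            }
            dead = true;
            setVisible(false);
            setIgnoreUpdate(true);
            mPhysicsWorld.unregisterPhysicsConnector(physicsConnector);
            body.setActive(false);
            bulletsPool.recyclePoolItem(this);
        }

        public void reset() {
            dead = false;              // re-arm when the pool hands the bullet back out
            // ...existing reset code...
        }

    Otherwise the pooling approach itself (hide, deactivate the body, recycle) is a perfectly reasonable way to reuse sprites.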

    Read the article

  • How can I extract a list of Minecraft items and recipes?

    - by Sean
    I'm designing a robust system for resolving item dependencies in Minecraft and to do so, I need to maintain a database of items and recipes. Right now, this database has to be hand-crafted (no pun intended); I would like to know if it is possible to somehow query the Minecraft jars (or perhaps more realistically, grep through them) to extract this data automatically. How can this be done? The project is currently in Python, but it can still be ported to Java without much fuss at this stage. (For the curious.)

    Read the article

  • Having trouble understanding the NIF model file format

    - by NoobScratcher
    I'm attempting to develop a third-party application to make it easy to import 3D model parts into my mod for Skyrim. The plan was to have a file viewer and a preview window for the NIF model, but since I don't know what the NIF file format actually is, where to get the vertex data from it, or the whole nine yards of parsing such a file in detail, I'm at a loss for what to do. I'm very good at C++ but not with these overly complicated file formats; I'd much prefer .obj over the NIF file format specification here: http://niftools.sourceforge.net/doc/nif/index.html. If someone could help me understand the file format in a natural, simple way, along with the parsing needed to build the 3D model in the view frustum and an explanation of how you figured that out, I'd be happy to know. I use Cygwin, Notepad++, and Windows 7 (Win32).

    Read the article

  • Why is my collision detection not accurate?

    - by optimisez
    After trying and trying, I still cannot understand why the leg of character exceeds the wall but no clipping issue when I hit the wall from below. How should I fix it to make him stand still on the wall? From collideWithBox() function below, it shows that playerDest.Y = boxDest.Y - boxDest.height; will get the position the character should standstill on the wall. Theoretically, the clipping effect won't be happen as the character hit the box from below works with the equation playerDest.Y = boxDest.Y + boxDest.height;. void collideWithBox() { if ( spriteCollide(playerDest, boxDest) && keyArr[VK_UP]) //playerDest.Y += 50; playerDest.Y = boxDest.Y + boxDest.height; else if ( spriteCollide(playerDest, boxDest) && !keyArr[VK_UP]) playerDest.Y = boxDest.Y - boxDest.height; } void initPlayer() { // Create texture. hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &player); playerRect.left = playerRect.top = 0; playerRect.right = 29; playerRect.bottom = 36; playerDest.X = 0; playerDest.Y = 564; playerDest.length = playerRect.right - playerRect.left; playerDest.height = playerRect.bottom - playerRect.top; } void initBox() { hr = D3DXCreateTextureFromFileEx(d3dDevice, "brock.png", 330, 132, D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &box); boxRect.left = 33; boxRect.top = 0; boxRect.right = 63; boxRect.bottom = 30; boxDest.X = boxDest.Y = 300; boxDest.length = boxRect.right - boxRect.left; boxDest.height = boxRect.bottom - boxRect.top; } bool spriteCollide(Entity player, Entity target) { float left1, left2; float right1, right2; float top1, top2; float bottom1, bottom2; left1 = player.X; left2 = target.X; right1 = player.X + player.length; right2 = target.X + target.length; top1 = player.Y; top2 = target.Y; bottom1 = player.Y + player.height; bottom2 = target.Y + target.height; if (bottom1 < top2) return false; if (top1 > bottom2) return false; if (right1 < left2) return false; if (left1 > right2) return false; return true; }
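
    A likely culprit, judging only from the code shown: when landing on top of the box, the snap uses the box's height instead of the player's, so the 36-pixel-tall player standing on the 30-pixel box sinks by the 6-pixel difference. Landing should place the player's bottom edge on the box's top edge. The original is C#; the corrected snap sketched in Java style with the same names:

        void collideWithBox() {
            if (spriteCollide(playerDest, boxDest) && keyArr[VK_UP]) {
                // Hitting the box from below: the player's top meets the box's bottom.
                playerDest.Y = boxDest.Y + boxDest.height;
            } else if (spriteCollide(playerDest, boxDest) && !keyArr[VK_UP]) {
                // Landing on the box: the player's bottom meets the box's top,
                // so subtract the player's height, not the box's.
                playerDest.Y = boxDest.Y - playerDest.height;
            }
        }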

    Read the article

  • Returning the correct multitouch pointer id

    - by Max
    I've spent countless hours on reading tutorials and looking at every question related to multiTouch from here and Stackoverflow. But I just cannot figure out how to do this correctly. I use a loop to get my pointerId, I dont see alot of people doing this but its the only way I've managed to get it somewhat working. I have two joysticks on my screen, one for moving and one for controlling my sprites rotation and the angle he shoots, like in Monster Shooter. Both these work fine. My problem is that when I Move my sprite at the same time as Im shooting, my touchingPoint for my movement is set to the touchingPoint of my shooting, since the x and y is higher on the touchingPoint of my shooting (moving-stick on left side of screen, shooting-stick on right side), my sprite speeds up, this creates an unwanted change in speed for my sprite. I will post my entire onTouch method here with some variable-changes to make it more understandable. Since I do not know where Im going wrong. public void update(MotionEvent event) { if (event == null && lastEvent == null) { return; } else if (event == null && lastEvent != null) { event = lastEvent; } else { lastEvent = event; } int pointerCount = event.getPointerCount(); for (int i = 0; i < pointerCount; i++) { int x = (int) event.getX(i); int y = (int) event.getY(i); int id = event.getPointerId(i); int action = event.getActionMasked(); int actionIndex = event.getActionIndex(); String actionString; switch (action) { case MotionEvent.ACTION_DOWN: actionString = "DOWN"; break; case MotionEvent.ACTION_UP: shooting=false; // when shooting is true, it shoots dragging=false; // when dragging is true, it moves actionString = "UP"; break; case MotionEvent.ACTION_POINTER_DOWN: actionString = "PNTR DOWN"; break; case MotionEvent.ACTION_POINTER_UP: shooting=false; dragging=false; actionString = "PNTR UP"; break; case MotionEvent.ACTION_CANCEL: shooting=false; dragging=false; actionString = "CANCEL"; break; case MotionEvent.ACTION_MOVE: try{ if((int) event.getX(id) > 0 && (int) event.getX(id) < touchingBox && (int) event.getY(id) > touchingBox && (int) event.getY(id) < view.getHeight()){ movingPoint.x = (int) event.getX(id); movingPoint.y = (int) event.getY(id); dragging = true; } else if((int) event.getX(id) > touchingBox && (int) event.getX(id) < view.getWidth() && (int) event.getY(id) > touchingBox && (int) event.getY(id) < view.getHeight()){ shootingPoint.x = (int) event.getX(id); shootingPoint.y = (int) event.getY(id); shooting=true; }else{ shooting=false; dragging=false; } }catch(Exception e){ } actionString = "MOVE"; break; default: actionString = ""; } Wouldnt post this much code if I wasnt at an absolute loss of what I'm doing wrong. I simply can not get a good understanding of how multiTouching works. basicly movingPoint changes for both my first and second finger. I bind it to a box, but aslong as I hold one finger within this box, it changes its value based on where my second finger touches. It moves in the right direction and nothing gives an error, the problem is the speed-change, its almost like it adds up the two touchingPoints.
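
    One way to keep the two sticks independent is to record which pointer id grabbed each stick on the down actions, and on ACTION_MOVE read only the coordinates belonging to that id (looked up through findPointerIndex). A trimmed sketch of that bookkeeping; the left/right screen split stands in for the poster's touchingBox test, and movingPoint/shootingPoint/screenWidth are assumed to be the poster's own fields:

        private int movePointerId = -1;    // -1 means no finger owns this stick
        private int shootPointerId = -1;

        public void update(MotionEvent event) {
            int action = event.getActionMasked();
            int index = event.getActionIndex();
            switch (action) {
                case MotionEvent.ACTION_DOWN:
                case MotionEvent.ACTION_POINTER_DOWN: {
                    int id = event.getPointerId(index);
                    if (event.getX(index) < screenWidth / 2f) movePointerId = id;
                    else                                      shootPointerId = id;
                    break;
                }
                case MotionEvent.ACTION_MOVE: {
                    int moveIndex = event.findPointerIndex(movePointerId);
                    if (moveIndex != -1) {
                        movingPoint.set((int) event.getX(moveIndex), (int) event.getY(moveIndex));
                    }
                    int shootIndex = event.findPointerIndex(shootPointerId);
                    if (shootIndex != -1) {
                        shootingPoint.set((int) event.getX(shootIndex), (int) event.getY(shootIndex));
                    }
                    break;
                }
                case MotionEvent.ACTION_UP:
                case MotionEvent.ACTION_POINTER_UP: {
                    int id = event.getPointerId(index);
                    if (id == movePointerId)  movePointerId = -1;
                    if (id == shootPointerId) shootPointerId = -1;
                    break;
                }
            }
        }

    The dragging/shooting flags then simply follow whether the corresponding pointer id is currently assigned, so a finger on the shooting stick can never feed coordinates into the movement stick.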

    Read the article

  • Is there a standard way to track 2d tile positions both locally and on screen?

    - by Magicked
    I'm building a 2D engine based on 32x32 tiles with OpenGL. OpenGL draws from the top left, so Y coordinates go down the screen as they increase. Obviously this is different than a standard graph where Y coordinates move up as they increase. I'm having trouble determining how I want to track positions for both sprites and tile objects (objects that are collections of tiles). My brain wants to set the world position as the bottom left of the object and track every object this way. The problem with this is I would have to translate it to an on screen position on rendering. The positive with this is I could easily visualize (especially in the case of objects made of multiple tiles) how something is structured and needs to be built. Are there standard ways for doing this? Should I just suck it up and get used to positions beginning in the top left? Here are the OpenGL calls to start rendering: // enable textures since we're going to use these for our sprites glEnable(GL_TEXTURE_2D); glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // enable alpha blending glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // disable the OpenGL depth test since we're rendering 2D graphics glDisable(GL_DEPTH_TEST); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0, WIDTH, HEIGHT, 0, 1, -1); glMatrixMode(GL_MODELVIEW); I assume I need to change: glOrtho(0, WIDTH, HEIGHT, 0, 1, -1); to: glOrtho(0, WIDTH, 0, HEIGHT, 1, -1);
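
    Either convention works as long as the conversion lives in exactly one place. If world coordinates stay y-up with the origin at the bottom-left, a single helper can flip into the y-down screen space at render time; a small sketch, using the 32-pixel tile size as the world-to-pixel scale:

        static final int TILE_SIZE = 32;

        // World tiles: origin bottom-left, y grows upward.
        // Screen pixels: origin top-left, y grows downward.
        int worldToScreenX(int tileX) {
            return tileX * TILE_SIZE;
        }

        int worldToScreenY(int tileY, int screenHeightPx) {
            // +1 because the tile's top edge is what gets handed to the renderer.
            return screenHeightPx - (tileY + 1) * TILE_SIZE;
        }

    The alternative guessed at the end (swapping the bottom/top arguments of glOrtho) flips the projection itself, but then mouse/input y coordinates, which the OS still reports top-down, need the equivalent flip instead.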

    Read the article

  • Prediction happening on (sending) client side

    - by Daniel
    This seems like a simple enough concept, but I haven't seen it implemented anywhere yet. Assume the server just forwards and verifies data. I'm using mouse-based movement, so it's not too difficult to predict the player's location 150 ms after the event is sent, and I'm thinking that is more accurate than having the receiving clients work from old and older data. My question is: why can't I find any examples of this? Is there something fundamentally wrong with the idea, given that I can't find anyone implementing it or even talking about implementing it?
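
    For what it's worth, variations of this do show up in practice; it trades a little accuracy for hiding latency, and the server can still reject implausible values. The core of it is just projecting the current state forward by the estimated one-way latency before sending; a minimal sketch, with the 150 ms figure taken from the question:

        // Project a mouse-driven position forward by the estimated one-way latency,
        // so receivers see roughly "now" instead of a position 150 ms old.
        double[] predictPosition(double x, double y, double velX, double velY, double latencySeconds) {
            return new double[] {
                x + velX * latencySeconds,   // assumes velocity is roughly constant over the window
                y + velY * latencySeconds
            };
        }

        // e.g. send(predictPosition(px, py, vx, vy, 0.150));

    The usual caveat is mispredictions on sharp direction changes, which is presumably why most write-ups put the extrapolation on the receiving side instead.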

    Read the article

  • How do I implement deceleration for the player character?

    - by tesselode
    Using delta time with addition and subtraction is easy. player.speed += 100 * dt However, multiplication and division complicate things a bit. For example, let's say I want the player to double his speed every second. player.speed = player.speed * 2 * dt I can't do this because it'll slow down the player (unless delta time is really high). Division is the same way, except it'll speed things way up. How can I handle multiplication and division with delta time? Edit: it looks like my question has confused everyone. I really just wanted to be able to implement deceleration without this horrible mass of code: else if speed > 0 then speed = speed - 20 * dt if speed < 0 then speed = 0 end end if speed < 0 then speed = speed + 20 * dt if speed > 0 then speed = 0 end end end Because that's way bigger than it needs to be. So far a better solution seems to be: speed = speed - speed * whatever_number * dt
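
    For proportional changes like "double the speed every second", the frame-rate-independent form raises the per-second factor to the power dt instead of multiplying by dt; and the deceleration branches collapse once the sign handling is factored out. A short sketch in Java (the original is Lua):

        double speed = 100, dt = 1 / 60.0;

        // "Double the speed every second", independent of frame rate:
        double factorPerSecond = 2.0;            // 0.5 would halve it each second
        speed *= Math.pow(factorPerSecond, dt);  // factor^dt applied per frame

        // Linear deceleration toward zero without the sign juggling:
        double decel = 20 * dt;
        if (Math.abs(speed) <= decel) {
            speed = 0;                               // close enough: stop exactly
        } else {
            speed -= Math.signum(speed) * decel;     // move toward zero from either side
        }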

    Read the article

  • Drawing application on OpenGL for iOS (iPad)

    - by Alesia
    Some help is needed. I'm developing a drawing application with OpenGL (deployment target 4.0) for iOS (iPad). We have three drawing tools: pen, marker (with alpha) and eraser. I draw with textures, using blending in an orthographic projection. I can't use z-ordering, because then I run into a lot of trouble with cutting and erasing. What I need is for the pen to always stay on top. When I use the marker first and then the pen, it's fine; but if I use the pen first and then draw the marker over it, the pen color isn't visible under the marker. I'd appreciate any help or advice. Thank you very much!

    Read the article

  • Specifying force and angle in ApplyImpulse in Box2D

    - by Deepak Mahalingam
    I need to apply an impulse to an object with a particular force and at a particular angle in Box2D. If I'm right, the syntax would be the following: body.GetBody().ApplyImpulse(new b2Vec2(direction, power), body.GetBody().GetWorldCenter()); The problem is that my direction is given as an angle in degrees. I found a discussion saying that an angle can be converted into a vector like this: new b2Vec2(Math.cos(angle*Math.PI/180), Math.sin(angle*Math.PI/180)); but I'm not sure how to combine the two. In other words, if I want to apply a force of 30 units at an angle of 30 degrees at the center of the object, how should I do it?
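
    Combining the two is just scaling the unit direction vector by the desired magnitude. The question's code is the ActionScript/JavaScript Box2D port; the math itself in Java, with the 30-units-at-30-degrees example:

        double power = 30.0;                    // impulse magnitude
        double angleDeg = 30.0;                 // direction in degrees
        double angleRad = Math.toRadians(angleDeg);

        // Unit vector for the angle, scaled by the magnitude.
        double impulseX = power * Math.cos(angleRad);
        double impulseY = power * Math.sin(angleRad);

        // Then, using the question's own API:
        // body.GetBody().ApplyImpulse(new b2Vec2(impulseX, impulseY),
        //                             body.GetBody().GetWorldCenter());

    Whether positive angles sweep clockwise or counter-clockwise depends on which way the y-axis points in the particular port and world setup.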

    Read the article

  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I'm getting some trouble using the texture() method inside beginShape()/endShape() clause. In the display()-method of my class TowerElement (a bar which is DYNAMIC), I draw the object like following: void display(){ Vec2 pos = level.getLevel().getBodyPixelCoord(body); float a = body.getAngle(); // needed for rotation pushMatrix(); translate(pos.x, pos.y); rotate(-a); fill(temp); // temp is a color defined in the constructor stroke(0); beginShape(); vertex(-w/2,-h/2); vertex(w/2,-h/2); vertex(w/2,h-h/2); vertex(-w/2,h-h/2); endShape(CLOSE); popMatrix(); } Now, according to the API, I can use the texture() method inside the shape definition. Now when I remove the fill(temp) and put texture(img) (img is a PImage defined in the constructor), the stroke gets drawn, but the bar isn't filled and I get the warning texture() is not available with this renderer What can I do in order to use textures anyway? I don't even understand the error message, since I do not know much about different renderers.
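
    The warning suggests the sketch is running under Processing's default JAVA2D renderer, which does not support texture(); the P2D and P3D renderers do. A small self-contained Processing sketch (Processing is Java-based) showing the pattern; the image file name is a placeholder:

        PImage img;

        void setup() {
          size(640, 480, P2D);          // texture() needs P2D or P3D, not the default renderer
          img = loadImage("bar.png");   // placeholder asset
          textureMode(NORMAL);          // UV coordinates in 0..1 instead of pixels
        }

        void draw() {
          background(40);
          translate(width / 2, height / 2);
          beginShape();
          texture(img);
          vertex(-50, -25, 0, 0);
          vertex( 50, -25, 1, 0);
          vertex( 50,  25, 1, 1);
          vertex(-50,  25, 0, 1);
          endShape(CLOSE);
        }

    In the TowerElement code that would mean keeping the push/translate/rotate calls as they are and replacing the fill(temp) path with texture(img) plus per-vertex UVs, once the sketch's size() call selects P2D or P3D.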

    Read the article

  • The input doesn't recognize that I release the key?

    - by joapet99
    I'm creating a window (a JOptionPane) in response to a collision. However, if the player is holding a key down when the window pops up, the input never registers the key release when the key is let go. I don't think I can just check it with an isReleased-style call, since the input state is effectively stale at that point. Can you help me? This is how I check whether the key is down: if (input.isKeyDown(Input.KEY_A) && TestLevel.isFighting == false) { if (owner.canMoveLeft) { position.x -= speed * delta; } } I'm not handling the key release myself; I assumed polling whether the key is down would be enough, but it isn't.
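
    A plausible explanation: the dialog steals keyboard focus, so the release event never reaches the game window and any "currently held" state goes stale until the key is pressed again. A generic way around it is to track held keys yourself and force-clear them whenever a dialog is shown or focus is lost; a sketch (clearHeldKeys is an invented helper, not a library call, and java.util.Set/HashSet are assumed imported):

        // Track held keys so they can be force-released when a dialog
        // (or any focus loss) swallows the real key-up event.
        private final Set<Integer> heldKeys = new HashSet<>();

        public void keyPressed(int key)  { heldKeys.add(key); }
        public void keyReleased(int key) { heldKeys.remove(key); }
        public boolean isHeld(int key)   { return heldKeys.contains(key); }

        // Call right before showing the JOptionPane (and/or when the game window
        // regains focus) so a stale "down" state does not keep the player moving.
        public void clearHeldKeys() {
            heldKeys.clear();
        }

    Movement code then checks isHeld(KEY_A) instead of polling the raw input, so the pause dialog cannot leave the character walking on its own.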

    Read the article

  • Making video from 3D graphics in OpenGL

    - by MVTC
    What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the following functionality: the user can adjust parameters, including the time step between captured frames, and then start the simulation; the user waits for the simulation to complete and can then watch the results; the user can increase or decrease the playback speed, where slow motion uses more frames (you see higher-resolution time steps) and higher speed uses lower-resolution time steps at a higher rate, while the frames per second shown on screen stays constant.
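
    A common offline approach: after rendering each simulation frame at a fixed simulated time step, read the framebuffer back with glReadPixels and either write the raw frames to disk or pipe them straight into an encoder such as ffmpeg; playback speed is then just a matter of the frame rate (or frame skipping) chosen when encoding or playing back. A sketch of the piping idea in Java; the glReadPixels call is commented out and written as it appears in LWJGL-style bindings, the ffmpeg arguments are indicative only, and exception handling is omitted:

        // Launch ffmpeg once, feeding it raw RGB frames on stdin.
        Process ffmpeg = new ProcessBuilder(
                "ffmpeg", "-y",
                "-f", "rawvideo", "-pix_fmt", "rgb24",
                "-s", width + "x" + height, "-r", "30",
                "-i", "-",                     // read frames from stdin
                "out.mp4")
                .redirectErrorStream(true)
                .start();
        OutputStream video = ffmpeg.getOutputStream();

        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 3);
        byte[] row = new byte[width * 3];

        // After rendering one frame of the simulation:
        // GL11.glReadPixels(0, 0, width, height, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixels);

        // OpenGL rows start at the bottom; most encoders expect top-down, so flip.
        for (int y = height - 1; y >= 0; y--) {
            pixels.position(y * width * 3);
            pixels.get(row);
            video.write(row);
        }
        pixels.clear();

    Closing the stream when the simulation finishes lets the encoder finalize the file; the slow-motion and fast-forward behaviour described in the edit then falls out of choosing how many captured frames map onto each second of output video.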

    Read the article
