Search Results

Search found 33291 results on 1332 pages for 'development environment'.

Page 600/1332

  • Really weird GL Behaviour, uniform not "hitting" proper mesh? LibGdx

    - by HaMMeReD
    OK, I have some code for selecting blocks on a grid. The selection works: I can raise the selected block, and the correct one shows as raised. I also set a color which I use in the shader. However, when I try to change the color before rendering the geometry, it is the last geometry rendered (in the sequence) that ends up with the lightened tint. To debug the logic I decided to move the block up and make it white, in which case one block moves up and another block becomes white. I checked all my logic: it knows which block is selected, shows it in the correct place, and renders it correctly. When there is only one entity it works properly. A video of the bug in action shows how the highlighted and the elevated blocks are not the same block. The code for the color and my renderer (for the items being drawn) is here:

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }

    Read the article

  • Hit Detection When rotating the camera

    - by SD1990
    This bug/feature has been plaguing me for a while and I want to know the best way to fix it. I'm testing simple hit detection against a wall, like this:

        if (/* forward button */) {
            if (Inv.w.z < -49 || Inv.w.z > 49)
                pos.z = 0.0f;
            else if (Inv.w.x < -49 || Inv.w.x > 49)
                pos.z = 0.0f;
            else
                pos.z = +1.0f;
        }

    where Inv.w holds the camera position. Now, obviously, once I hit that point I can no longer move away from the wall, or anywhere at all. How can I change this code so that when the camera is turned away from the wall I am allowed to move again? For example, the player hits the wall and can't move until they turn around or to the side. I know it's something to do with velocity, but I'm pretty new to this, so please bear with me if this is easy.
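
    One way to express "blocked only while pushing into the wall" is to compare the intended movement direction with the wall's normal. Below is a minimal, hypothetical C++ sketch; the wall normal, the camera forward vector, and the 49-unit limit are assumptions for illustration, not taken from the question's engine:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static float dot(const Vec3& a, const Vec3& b) {
            return a.x * b.x + a.y * b.y + a.z * b.z;
        }

        // Returns the forward step to apply this frame. The wall at z = +49 is
        // assumed to have an inward-facing normal of (0, 0, -1); movement is
        // cancelled only when the camera is against the wall AND the desired
        // direction pushes into it.
        float forwardStep(const Vec3& camPos, const Vec3& camForward, float speed) {
            const Vec3 wallNormal = {0.0f, 0.0f, -1.0f};           // assumed wall facing
            bool touchingWall = camPos.z > 49.0f;
            bool pushingIn = dot(camForward, wallNormal) < 0.0f;   // moving toward the wall
            if (touchingWall && pushingIn)
                return 0.0f;   // blocked: heading into the wall
            return speed;      // free to move, including after turning away from the wall
        }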

    Read the article

  • Enabling and Disabling components in Unity

    - by Blue
    I'm trying to enable and disable game objects in Unity. I used GameObject.SetActiveRecursively, but it only works one way. I'm using a collider: when an object enters the collider, the game objects become enabled, and when they leave (or reach a certain point) they become disabled. How would I make this a two-way system, so the objects are enabled while inside the collider and disabled when outside it? The collider is on the game object that is being enabled and disabled, and according to this information from Unity Answers, that object becomes disabled. So how would I make the object enabled again?

    Read the article

  • Question about design

    - by lukeluke
    Two quick questions about two design decisions: Suppose you are checking collisions between game elements. When you find a collision between object 1 and object 2, do you play a sound effect immediately, or do you insert it into a list and process all sound effects at a later stage? The same question applies to user input: when the user presses the 'keypad left' key, do you insert the event into a queue and process it later, or do you update the character position immediately? Thanks.
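
    For the "insert it in a list and process it later" option, here is a minimal C++ sketch of what such a deferred queue can look like; names such as SoundEvent and processSounds are made up for illustration:

        #include <queue>
        #include <string>
        #include <iostream>

        // Deferring effects: gameplay code only records what happened; a later
        // pass decides how (and whether) to react.
        struct SoundEvent { std::string name; };

        std::queue<SoundEvent> gSoundQueue;

        void onCollision(/* object a, object b */) {
            gSoundQueue.push({"impact"});      // record the request, don't play yet
        }

        void processSounds() {                 // called once per frame, after all updates
            while (!gSoundQueue.empty()) {
                std::cout << "play: " << gSoundQueue.front().name << "\n"; // play here
                gSoundQueue.pop();
            }
        }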

    Read the article

  • Nifty GUI hide/show image

    - by Mario
    I have a simple screen made with Nifty GUI. On this game screen I want to put a simple image control for sound: just on and off. So I have two images to switch between to start or stop the music. The problem is: how can I hide/display an image with Nifty GUI? Here is my Java code for when the player clicks on the image:

        Screen screen = nifty.getCurrentScreen();
        Element el = screen.findElementByName("iconOn");
        el.setVisible(false);
        el = screen.findElementByName("iconOff");
        el.setVisible(true);

    This code doesn't work :( Thanks to everyone who can help me.

    Read the article

  • C# Collision Math Help

    - by user36037
    I am making my own collision detection in MonoGame. I have a PolyLine class that has a property to return the normal of that PolyLine instance, a ConvexPolySprite class that has a List of line segments, and a CircleSprite class that has a Center property and a Radius property. I am using a static class for the collision detection method, and I am testing it on a single line segment from Vector2(200, 0) to Vector2(300, 200). The problem is that it detects a collision anywhere along the infinite path of the line, out into space, and I cannot figure out why. Thanks in advance.

        public class PolyLine
        {
            /// <summary>Upper left-hand corner of the owner of this instance.</summary>
            public Vector2 ParentPosition { get; set; }

            /// <summary>Relative start point of the line segment.</summary>
            public Vector2 RelativeStartPoint { get; set; }

            /// <summary>Relative end point of the line segment.</summary>
            public Vector2 RelativeEndPoint { get; set; }

            /// <summary>Absolute position of the starting point of the line segment.</summary>
            public Vector2 AbsoluteStartPoint
            {
                get { return ParentPosition + RelativeStartPoint; }
            }

            /// <summary>Absolute position of the end point of the line segment.</summary>
            public Vector2 AbsoluteEndPoint
            {
                get { return ParentPosition + RelativeEndPoint; }
            }

            public Vector2 NormalizedLeftNormal
            {
                get
                {
                    Vector2 P = AbsoluteEndPoint - AbsoluteStartPoint;
                    P.Normalize();
                    float x = P.X;
                    float y = P.Y;
                    return new Vector2(-y, x);
                }
            }

            /// <summary>Sole ctor.</summary>
            public PolyLine(Vector2 parentPosition, Vector2 relStart, Vector2 relEnd)
            {
                ParentPosition = parentPosition;
                RelativeEndPoint = relEnd;
                RelativeStartPoint = relStart;
            }
        }

        public static bool Collided(CircleSprite circle, ConvexPolygonSprite poly)
        {
            var distance = Vector2.Dot(circle.Position - poly.LineSegments[0].AbsoluteEndPoint,
                                       poly.LineSegments[0].NormalizedLeftNormal) + circle.Radius;
            if (distance <= 0)
            {
                return false;
            }
            else
            {
                return true;
            }
        }

    Read the article

  • Problem with gluOrtho2D()

    - by Shashwat
    I was trying to understand the gluOrtho2D function. I have drawn 4 lines originating from the center and reaching the 4 corners of the screen; you can follow the code below. osize is a variable used to set the parameters of gluOrtho2D, so it creates a view of size 2*osize. It works fine when osize is 1: the lines reach the corners. But as I increase the value of osize, the length of the lines decreases (the cross becomes smaller and does not cover the whole screen), even though I think it should still reach the corners.

        void display()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            //glViewport(0, 0, 100, 100);
            glMatrixMode(GL_PROJECTION);
            float osize = 1.2;
            //glOrtho(-osize*1.0, osize*1.0, osize*1.0, -osize*1.0, -1.0, 1.0);
            gluOrtho2D(-osize*1.0, osize*1.0, osize*1.0, -osize*1.0);
            glMatrixMode(GL_MODELVIEW);

            glBegin(GL_LINES);
            glColor3f(0.0, 0.0, 1.0);
            glVertex2f(0.0, 0.0);
            glVertex2f(-osize*1.0, -osize*1.0);
            glVertex2f(0.0, 0.0);
            glVertex2f(-osize*1.0, osize*1.0);
            glVertex2f(0.0, 0.0);
            glVertex2f(osize*1.0, -osize*1.0);
            glVertex2f(0.0, 0.0);
            glVertex2f(osize*1.0, osize*1.0);
            glEnd();

            glutSwapBuffers(); // includes glFlush()
        }

    What is the problem?
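
    For reference, gluOrtho2D (like glOrtho) multiplies onto the current projection matrix rather than replacing it, so if display() runs every frame without resetting that matrix, the scaling can accumulate. A minimal sketch of resetting the projection before applying it, assuming the usual GL/GLU headers are available:

        // Sketch: reset the projection matrix each time before applying the
        // ortho projection, so repeated calls don't compound the scaling.
        void setProjection(float osize)
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();                          // start from a clean matrix
            gluOrtho2D(-osize, osize, osize, -osize);  // then apply the 2D ortho view
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }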

    Read the article

  • Removing/adding a specific variable from an object inside a JavaScript array? [migrated]

    - by hustlerinc
    I have a map array filled with objects containing variables, looking like this:

        var map = [
            [{ground:0, object:1}, {ground:0, item:2}, {ground:0, object:1, item:2}],
            [{ground:0, object:1}, {ground:0, item:2}, {ground:0, object:1, item:2}]
        ];

    Now I would like to be able to delete and add one of the variables, such as item:2. 1) What would I use to delete specific variables? 2) What would I use to add specific variables? I just need two short lines of code; the rest, like detecting if and where to execute, I've figured out. I've tried delete map[i][j].item; with no results. Help appreciated.

    Read the article

  • Game Asset Size Over Time

    - by jterrace
    The size (in bytes) of games has been growing over time. There are probably many factors contributing to this: trailer/cut-scene videos bundled with the game, more and higher-quality audio, multiple levels of detail being used, etc. What I'd really like to know is how the size of the 3D models and textures that games ship with has changed over time. For example, if one were to look at the size of meshes and textures for Quake I (1996), Quake II (1997), Quake III: Arena (1999), Quake 4 (2005), and Enemy Territory: Quake Wars (2007), I'd imagine a steady increase in file size. Does anyone know of a data source for numbers like this?

    Read the article

  • Determining if something is on the right or left side of an object?

    - by meds
    I have a character in a 3D world facing an arbitrary direction on a flat plane. The player can click on the left or right side of the character, and depending on which side was clicked a different action happens. How can I determine which side the click occurred on? For a character facing straight ahead (0, 0, 1) I can simply use the x coordinate of the click point to determine whether it's the left or right hand side, but what about the other cases?
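
    The single-axis test described above generalizes to any facing direction by looking at the sign of the cross product between the facing vector and the vector to the click point, both projected onto the ground plane. A minimal C++ sketch with made-up vector and function names; the sign convention (right vs. left) depends on the handedness of the coordinate system, so it may need flipping for a particular engine:

        struct Vec3 { float x, y, z; };

        // Returns true if 'click' lies to the right of a character standing at
        // 'pos' and facing along 'forward', with both vectors taken on the XZ
        // ground plane.
        bool clickIsOnRightSide(const Vec3& pos, const Vec3& forward, const Vec3& click)
        {
            float dx = click.x - pos.x;          // vector from character to click,
            float dz = click.z - pos.z;          // projected onto the ground plane
            // 2D cross product (the Y component of forward x toClick)
            float cross = forward.z * dx - forward.x * dz;
            return cross > 0.0f;
        }

    For the special case forward = (0, 0, 1), the test reduces to checking the sign of the click's x offset, matching the straight-ahead case in the question.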

    Read the article

  • How do I create a fire sphere (fireball) in OpenGL? (OpenGL, Visual C++)

    - by gn66
    I'm making an evil Pacman in OpenGL and I need to make my spheres look like fireballs. Does anyone know how I do that, and what material colour I should use? Also, is there a colour palette for changing the colour of my materials? This is how I create a sphere:

        GLfloat mat_ambient[]  = { 0.25, 0.25, 0.25, 1.0 };
        GLfloat mat_diffuse[]  = { 0.4, 0.4, 0.4, 1.0 };
        GLfloat mat_specular[] = { 0.774597, 0.774597, 0.774597, 1.0 };
        GLfloat mat_shine = 0.6;

        glMaterialfv(GL_FRONT, GL_AMBIENT, mat_ambient);
        glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);
        glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
        glMaterialf(GL_FRONT, GL_SHININESS, mat_shine * 128);

        glPushMatrix();
        glTranslatef(x, y, 0);
        glutSolidSphere(size, 20, 10);
        glFlush();
        glPopMatrix();

    Read the article

  • Creating an interactive grid for a puzzle game

    - by Noupoi
    I am trying to make a Slitherlink game and am not too sure how to approach creating it, more specifically the grid structure the puzzle will be played on. Below is what empty and completed Slitherlink grids look like. The numbers in the squares are clues, and the areas between the dots need to be clickable.

        http://i.stack.imgur.com/U1kXn.gif
        http://i.stack.imgur.com/RMwiv.gif

    I would like to create the game in VB.NET. What data structures should I try to use, and would it be beneficial to use a framework such as XNA?
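
    One way to think about the data is to store the cell clues and the horizontal/vertical edge states separately, since the clickable areas sit between the dots. A rough sketch of such a structure, written in C++ rather than VB.NET purely to illustrate the layout; the names and exact bookkeeping are made up:

        #include <vector>

        // A Slitherlink board of R x C cells has (R+1) x C horizontal edges and
        // R x (C+1) vertical edges between the dots; each edge can be toggled.
        struct SlitherlinkBoard {
            int rows = 0, cols = 0;
            std::vector<int>  clues;     // rows*cols entries, -1 meaning "no clue"
            std::vector<bool> hEdges;    // (rows+1)*cols entries
            std::vector<bool> vEdges;    // rows*(cols+1) entries

            SlitherlinkBoard(int r, int c)
                : rows(r), cols(c),
                  clues(r * c, -1),
                  hEdges((r + 1) * c, false),
                  vEdges(r * (c + 1), false) {}

            int& clueAt(int row, int col) { return clues[row * cols + col]; }

            // Toggle the horizontal edge above cell (row, col); row == rows
            // addresses the bottom border.
            void toggleHEdge(int row, int col) {
                hEdges[row * cols + col] = !hEdges[row * cols + col];
            }

            // Toggle the vertical edge left of cell (row, col); col == cols
            // addresses the right border.
            void toggleVEdge(int row, int col) {
                vEdges[row * (cols + 1) + col] = !vEdges[row * (cols + 1) + col];
            }
        };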

    Read the article

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating-point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations.

    As this is the first time I'm making a threaded application, I'm not sure if the following method for thread synchronization is correct. First, I use two global variables to control the two threads: if a global variable is set to 1, that means the thread can start working; otherwise it must not work. Each thread checks this in an infinite loop, and when it detects that the global variable has changed its value, it does its job and then sets the variable back to 0. The main program also uses an empty while loop to check when both variables have become 0 again after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1.

    My dilemma is that, while this process works under some conditions, it slows the program down considerably, and it also fails to run properly when compiled for Release in Visual Studio or with any sort of -O optimization in gcc (i.e. nothing on screen, even segfaults). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of the fscanf_s calls, but you can replace those with the usual fscanf if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it.

    My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiling for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
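
    For reference, flag-and-busy-wait schemes like the one described tend to misbehave under optimization when the flags are plain ints, because the compiler has no reason to re-read them inside the polling loops. A minimal C++ sketch of the same idea using std::atomic, with one worker shown and illustrative names:

        #include <atomic>
        #include <thread>

        std::atomic<int> evenReady{0};   // 1 = "go", 0 = "done"
        // (the shared triangle structure would be filled in before raising the flag)

        void evenWorker() {
            for (;;) {
                while (evenReady.load() == 0) { /* spin until told to start */ }
                // ... rasterize the even scanlines of the current triangle ...
                evenReady.store(0);      // signal completion back to the main thread
            }
        }

        int main() {
            std::thread t(evenWorker);   // the odd-scanline worker would be symmetric
            // per triangle: fill the shared triangle structure, then:
            evenReady.store(1);
            while (evenReady.load() != 0) { /* wait for the worker to finish */ }
            t.detach();                  // sketch only; a real program would shut down cleanly
            return 0;
        }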

    Read the article

  • OpenGL: sluggish performance in extracting a texture from the GPU

    - by Cyan
    I'm currently working on an algorithm which creates a texture within a render buffer. The operations are pretty complex, but for the GPU this is a simple task, done very quickly. The problem is that, after creating the texture, I would like to save it, which requires extracting it from GPU memory. For this operation I'm using glGetTexImage(). It works, but the performance is sluggish. No, I mean even slower than that: for example, an 8 MB texture (uncompressed) requires 3 seconds (yes, seconds) to be extracted. That's mind-puzzling. I'm almost wondering if my graphics card is connected by a serial link... Well, anyway, I've looked around and found some people complaining about the same thing, but no working solution so far. The most promising advice was to "extract data in the native format of the GPU", which I've tried and tried, but failed at so far.

    Edit: by moving the call to glGetTexImage() to a different place, the speed has improved a bit for the most dramatic samples: looking again at the 8 MB texture, it now requires 500 ms instead of 3 s. That's better, but still much too slow. Smaller texture sizes were not affected by the change (typical timings remained in the 60-80 ms range). Using glFinish() didn't help either. Note that if I call glFinish() (without glGetTexImage) I get a fixed 16 ms result, whatever the texture size or complexity; it really looks like the timing of one frame at 60 fps. The timing is measured for the full rendering + saving sequence. The call to glGetTexImage() alone does not really matter; that being said, it is this call which changes the performance. And yes, of course, as stated at the beginning, the texture is created on the GPU, hence the need to read it back.
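
    One approach often suggested for slow texture readback is to route glGetTexImage() through a pixel buffer object, so the call can return before the copy completes and the transfer can overlap with other work. A rough sketch, assuming a GL loader exposing the buffer-object entry points is available; GL_BGRA/GL_UNSIGNED_BYTE is only a guess at a driver-friendly format, and error handling is omitted:

        // Sketch: read a texture back through a pixel pack buffer (PBO).
        void readBackTexture(GLuint textureID, int width, int height)
        {
            GLuint pbo;
            glGenBuffers(1, &pbo);
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
            glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);

            glBindTexture(GL_TEXTURE_2D, textureID);
            // With a PBO bound to GL_PIXEL_PACK_BUFFER, the last argument is an
            // offset into that buffer, not a client pointer, so the call can return early.
            glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, (void*)0);

            // Ideally do other work here, then map the buffer once the data is needed.
            void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
            if (pixels) {
                // ... write 'pixels' to disk ...
                glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
            }
            glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
            glDeleteBuffers(1, &pbo);
        }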

    Read the article

  • How to display consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. In order to avoid drawing the background every frame, you could draw it once and then do nothing. However, if something is drawn on top of the surface and keeps moving, you will need to redraw the background in order to "erase" the pixels left behind by the moving object; otherwise you will have "traces" of the moving object. I have a moving object in my PyGame game, but I do not want to clear the colour buffer by redrawing the whole background image, because that is slow. My solution: I will clear only the required portions of the buffer (where the traces of the moving object are left) by redrawing just those portions of the background. Is there any other, better way to keep a consistent background?

    Read the article

  • Why does OpenGL seem to ignore my glBindTexture call?

    - by Killrazor
    I'm having problems making simple sprite rendering work. I load 2 different textures, then bind these textures and draw 2 squares, one with each texture. But only the texture of the first rendered object is drawn on both squares. It's as if I'd only used one texture, or as if glBindTexture didn't work properly. I know that GL is a state machine, but I think you only need to change the active texture with glBindTexture. I load the texture with this method:

        bool CTexture::generate(utils::CImageBuff* img)
        {
            assert(img);
            m_image = img;
            CHECKGL(glGenTextures(1, &m_textureID));
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
            CHECKGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
            //CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, img->getBpp(), img->getWitdh(), img->getHeight(), 0, img->getFormat(), GL_UNSIGNED_BYTE, img->getImgData()));
            CHECKGL(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->getWitdh(), img->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, img->getImgData()));
            return true;
        }

    And I bind textures with this function:

        void CTexture::bind()
        {
            CHECKGL(glBindTexture(GL_TEXTURE_2D, m_textureID));
        }

    Also, I draw sprites with this method:

        void CSprite2D::render()
        {
            CHECKGL(glLoadIdentity());
            CHECKGL(glEnable(GL_TEXTURE_2D));
            CHECKGL(glEnable(GL_BLEND));
            CHECKGL(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
            m_texture->bind();
            CHECKGL(glPushMatrix());
            CHECKGL(glBegin(GL_QUADS));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t));   // 0,0 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaStart.t));     // 1,0 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaEnd.t));       // 1,1 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t));     // 0,1 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y + m_dimensions.y, 0));
            CHECKGL(glPopMatrix());
            CHECKGL(glDisable(GL_BLEND));
        }

    Edit: I'm also including the error-checking code:

        int CheckGLError(const char *GLcall, const char *file, int line)
        {
            GLenum errCode;
            // avoids infinite loop
            int errorCount = 0;
            while ((errCode = glGetError()) != GL_NO_ERROR && ++errorCount < 3000)
            {
                utils::globalLogPtr log = utils::CGLogFactory::getLogInstance();
                const GLubyte *errString;
                errString = gluErrorString(errCode);
                std::stringstream ss;
                ss << "In " << __FILE__ << "(" << __LINE__ << ") " << "GL error with code: " << errCode
                   << " at file " << file << ", line " << line << " with message: " << errString << "\n";
                log->addMessage(ss.str(), ZEL_APPENDER_GL, utils::LOGLEVEL_ERROR);
            }
            return 0;
        }

    Read the article

  • Android real time multiplayer over LAN

    - by Heigo
    I've developed several games for the Android platform and am now planning to create my first multiplayer game. What I have in mind is basically just a 2-player game which you can play with 2 phones over a local area connection/WiFi. Both phones need to be able to pass 3 integer values to the other phone in real time. So far I have considered using sockets, but before I dig into it too deep I wanted to ask if there might be a better approach. Thanks!

    Read the article

  • How to configure Bullet for LookAt?

    - by AllCoder
    I'm having problems positioning Bullet objects. I am doing:

        ToolVec3 origin = ToolVec3(obj_posx, obj_posy, obj_posz);
        ToolVec3 vmod = ToolVec3(object_sizex / 2.0f, object_sizey / 2.0f, object_sizez / 2.0f);

        btTransform shapeTransform = btTransform::getIdentity();
        shapeTransform.setOrigin(btVector3(origin.x + vmod.x, origin.y + vmod.y, origin.z + vmod.z));

        btDefaultMotionState* myMotionState = new btDefaultMotionState(shapeTransform);
        btRigidBody::btRigidBodyConstructionInfo rbInfo(mass, myMotionState, m_collisionShapes[2], localInertia);
        btRigidBody* body = new btRigidBody(rbInfo);

    I then do:

        btCollisionObject* colObj = m_dynamicsWorld->getCollisionObjectArray()[i];
        btRigidBody* body = btRigidBody::upcast(colObj);
        if (body && body->getMotionState())
        {
            btDefaultMotionState* myMotionState = (btDefaultMotionState*)body->getMotionState();
            myMotionState->m_graphicsWorldTrans.getOpenGLMatrix(m);
        }
        else
        {
            colObj->getWorldTransform().getOpenGLMatrix(m);
        }

    After obtaining the matrix m, I use it as the model matrix. I am observing a few things: I must add a weird "size / 2" offset to the object's position to have it drawn normally, and I have the following "up" look-at vector defined: 0.0f, -1.0f, 0.0f. Basically, Y grows up and Z grows forward (into the monitor), BUT X grows LEFT. I think there is some conflict with the X direction, and I cannot obtain consistent positioning with the world set up like this. How do I configure this in Bullet? Why the weird + size/2 requirement?

    Read the article

  • How much can one optimize isometric sprites by mirroring and the like?

    - by Tom
    I am working on a basic isometric game and am struggling to find the correct mirrors. I have managed to get SE out of SW by scaling the sprite by -1 on the X axis, and the same applies for the NE angle. Something is bugging me, though: I feel I should also be able to mirror N to S, but I cannot manage to pull this one off. Am I just too sleepy and trying to do the impossible, or is a basic -1 scale on the Y axis not enough? What is the commonly used mirror table for optimizing 8-angle (N, NE, E, SE, S, SW, W, NW) isometric sprites?

    Read the article

  • Data-driven animations

    - by saadtaame
    Say you are using C/SDL for a 2D game project. It's often the case that people use a structure to represent a frame in an animation; the struct consists of an image and how long the frame is supposed to be visible. Is this data sufficient to represent somewhat complex animations? Is it a good idea to separate animation management code from animation data? Can somebody provide a link to animation tutorials that store animations in a file and retrieve them when needed? I read about this in a book (AI Game Programming Wisdom) but would like to see a real implementation.
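
    For reference, the frame structure described above usually ends up looking something like the following minimal sketch. It uses plain C-style structs (valid C or C++), with SDL_Texture standing in for whatever image handle the project uses; the field and function names are illustrative:

        #include <SDL.h>   // for SDL_Texture

        // One frame: which image to show and for how long (in milliseconds).
        typedef struct {
            SDL_Texture* image;
            int          duration_ms;
        } AnimFrame;

        // An animation is just an ordered list of frames plus playback state,
        // which keeps the data separate from the code that manages it.
        typedef struct {
            AnimFrame* frames;
            int        frame_count;
            int        current;        // index of the frame being shown
            int        elapsed_ms;     // time spent on the current frame
            int        looping;        // nonzero to wrap around at the end
        } Animation;

        // Advance the animation by dt milliseconds.
        void anim_update(Animation* a, int dt) {
            a->elapsed_ms += dt;
            while (a->elapsed_ms >= a->frames[a->current].duration_ms) {
                a->elapsed_ms -= a->frames[a->current].duration_ms;
                if (a->current + 1 < a->frame_count)
                    a->current++;
                else if (a->looping)
                    a->current = 0;
                else
                    break;   // hold on the last frame
            }
        }

    A file format for this can be as simple as one line per frame (image path plus duration), loaded into the frames array at startup.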

    Read the article

  • How to decompose a rectangular shape in a Voronoi diagram, only generating convex shapes?

    - by DevilWithin
    I think this is a very straightforward question. Let's say I have a building in 2D, a rectangular shape. Now I want to decompose that area into a lot of convex shapes, as seen in a Voronoi diagram or something close to it, just so I can add those shapes to the physics engine and get realistic destruction. Bonus: any suggestions on how to make the effect more dynamic and interesting. Please keep in mind we're talking about real-time calculations.

    Read the article

  • XNA stopped compiling my model x files

    - by HuseyinUslu
    So I have a 3D game project I'm working on, and I'm using 2 model files (SkyBlock.x and AimedBlock.x). Until now everything was fine: my model files compiled and I was able to use them within my game. With the latest changes (and I don't know what caused it), XNA stopped compiling my model files and instead only outputs these files:

        AimedBlock.xnb - 1 kb
        SkyDome.xnb - 1 kb
        SkyDomeTexture.xnb - 1389 kb
        SkyDomeTexture_0.xnb - 419 kb

    So I created a test XNA game project, moved all my assets to the new solution's content project, and tried compiling them, and saw that they're all good there:

        AimedBlock.xnb - 2 kb
        SkyDome.xnb - 13 kb
        SkyDomeTexture.xnb - 4097 kb
        SkyDomeTexture_0.xnb - 683 kb

    So I guess my main project is at fault, but I couldn't come up with a solution. I even tried overwriting my game's content project with the new game's content project (which was all okay), but it didn't work. Has anybody had similar issues?

    Read the article

  • (Android) How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer with OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];

        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer

        gl.glGenTextures(1, renderTex, 0); // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);

        texBuffer = ByteBuffer.allocateDirect(buf.length * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0,
                        GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);

        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES,
                                        GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);

        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
                                         GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES,
                                         GL10.GL_TEXTURE_2D, renderTex[0], 0);

        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
                                            GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES,
                                            GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512. When rendering this texture full-screen, with a light mask (size 200x200) placed at (0, 0) and at (texW/2, texH/2), you can see that the coordinate system doesn't seem to start at (0,0), as that light overlaps the screen, and the images are not drawn as squares (my light-cone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks.

    Read the article

  • XNA 2D Rotated Rectangle Collision Response

    - by Kyle Uithoven
    I am using rotated rectangles which collide using the separating axis theorem, and they work perfectly fine for collision detection with Intersects and Contains. However, I am starting to use faster objects in my game now, and there is the issue of the two objects overlapping during a collision due to their higher velocities. I would like to do a collision response where I find out how much they overlap in X and Y and position them outside of each other, along the lines of http://go.colorize.net/xna/2d_collision_response_xna/index.html. But I am having some issues trying to adapt this to handle the rotation of the bounds. Is this possible? Are there any resources out there that I can look at?
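
    For context, the usual way to get "how much they overlap" out of a separating-axis test is to record the overlap of the two projections on every axis tested and keep the smallest one; that axis and depth form the minimum translation vector used to push the shapes apart. A rough C++ sketch, assuming each rotated rectangle is supplied as its four corner points; the direction of the final push still has to be chosen from the relative positions of the two shapes:

        #include <vector>
        #include <cmath>
        #include <cfloat>

        struct Vec2 { float x, y; };

        static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Project all corners of a shape onto an axis and return [min, max].
        static void project(const std::vector<Vec2>& pts, Vec2 axis, float& mn, float& mx) {
            mn = mx = dot(pts[0], axis);
            for (const Vec2& p : pts) {
                float d = dot(p, axis);
                if (d < mn) mn = d;
                if (d > mx) mx = d;
            }
        }

        // SAT with minimum translation vector: returns false if the shapes are
        // separated, otherwise fills 'mtv' with the smallest separating push.
        bool satMTV(const std::vector<Vec2>& a, const std::vector<Vec2>& b, Vec2& mtv) {
            float bestDepth = FLT_MAX;
            Vec2 bestAxis = {0, 0};
            const std::vector<Vec2>* shapes[2] = { &a, &b };
            for (const auto* s : shapes) {
                for (size_t i = 0; i < s->size(); ++i) {
                    Vec2 p0 = (*s)[i], p1 = (*s)[(i + 1) % s->size()];
                    Vec2 edge = { p1.x - p0.x, p1.y - p0.y };
                    float len = std::sqrt(edge.x * edge.x + edge.y * edge.y);
                    Vec2 axis = { -edge.y / len, edge.x / len };          // edge normal
                    float minA, maxA, minB, maxB;
                    project(a, axis, minA, maxA);
                    project(b, axis, minB, maxB);
                    float overlap = std::fmin(maxA, maxB) - std::fmax(minA, minB);
                    if (overlap <= 0.0f) return false;                    // separating axis found
                    if (overlap < bestDepth) { bestDepth = overlap; bestAxis = axis; }
                }
            }
            mtv = { bestAxis.x * bestDepth, bestAxis.y * bestDepth };     // push A by +/- mtv
            return true;
        }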

    Read the article

  • Artificial Intelligence ... how to make an object roam freely/avoid other objects, and model consciousness? [on hold]

    - by help bonafide pigeons
    Take a simple free-roam battle scene in which a player runs around freely and engages in battle with other enemies/objects. The dragon/dinosaur (or whatever that thing I drew appears to be) in my sketch will, by some measure, try to avoid attacks, so it is modelled to appear to have a conscious desire to avoid pain. My question is: since this is very complex, with many possible strategies, algorithms, etc. for solving it, what is the basic idea behind how this would be accomplished in any form? We can assume the enemy in the picture is not just going to hop around aimlessly and dodge, but is modelled to behave as if it were really exploring/fighting. For the best example I can give, watch the behaviour of the enemies in Final Fantasy 12 in this video: http://www.youtube.com/watch?v=mO0TkmhiQ6w. How do the pros solve and implement this, or how would anyone attempt it? PS: I have tried several times to give an image the "illusion" that it has a consciousness, but short of emulating a real animal's consciousness in full, I fall short and get choppy moving images that follow predictable patterns, error-prone movements, and the worst imaginable scenario of a battle engagement.
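
    One family of techniques commonly brought up for this kind of "roams and reacts on its own" movement is steering behaviours (wander, seek, flee) combined with a simple state machine that decides which behaviour is active at a given moment. A tiny illustrative C++ sketch, not tied to any engine and with made-up names:

        #include <cmath>
        #include <cstdlib>

        struct V2 { float x, y; };

        // Flee: steer directly away from a threat (e.g. an incoming attack).
        V2 flee(V2 pos, V2 threat, float maxSpeed) {
            V2 away = { pos.x - threat.x, pos.y - threat.y };
            float len = std::sqrt(away.x * away.x + away.y * away.y);
            if (len < 1e-4f) return {maxSpeed, 0.0f};   // arbitrary direction if on top of the threat
            return { away.x / len * maxSpeed, away.y / len * maxSpeed };
        }

        // Wander: keep the current heading but nudge it by a small random angle
        // each step, which reads as "roaming" rather than as a scripted path.
        V2 wander(V2 heading, float maxTurn) {
            float jitter = ((std::rand() / (float)RAND_MAX) * 2.0f - 1.0f) * maxTurn;
            float c = std::cos(jitter), s = std::sin(jitter);
            return { heading.x * c - heading.y * s, heading.x * s + heading.y * c };
        }

    A controlling state machine then switches between behaviours, for example wandering by default and fleeing while an attack is close, which is often enough to give the impression of intent without modelling anything like real consciousness.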

    Read the article
