Search Results

Search found 33291 results on 1332 pages for 'development environment'.


  • Unity3D: How To Smoothly Switch From One Camera To Another

    - by www.Sillitoy.com
    The question is basically self-explanatory. I have a scene with many cameras and I'd like to smoothly switch from one to another. I am not looking for a cross-fade effect, but rather for the camera moving and rotating the view in order to reach the next camera's point of view, and so on. To this end I have tried the following code:

        firstCamera.transform.position.x = Mathf.Lerp(firstCamera.transform.position.x, nextCamera.transform.position.x, Time.deltaTime*smooth);
        firstCamera.transform.position.y = Mathf.Lerp(firstCamera.transform.position.y, nextCamera.transform.position.y, Time.deltaTime*smooth);
        firstCamera.transform.position.z = Mathf.Lerp(firstCamera.transform.position.z, nextCamera.transform.position.z, Time.deltaTime*smooth);
        firstCamera.transform.rotation.x = Mathf.Lerp(firstCamera.transform.rotation.x, nextCamera.transform.rotation.x, Time.deltaTime*smooth);
        firstCamera.transform.rotation.z = Mathf.Lerp(firstCamera.transform.rotation.z, nextCamera.transform.rotation.z, Time.deltaTime*smooth);
        firstCamera.transform.rotation.y = Mathf.Lerp(firstCamera.transform.rotation.y, nextCamera.transform.rotation.y, Time.deltaTime*smooth);

    But the result is actually not that good.
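    For what it's worth, here is a minimal C# sketch of the interpolation being described, assuming a MonoBehaviour with hypothetical target and smooth fields: the position is lerped as one Vector3 and the rotation is slerped as a Quaternion, since lerping quaternion components one by one (as above) produces invalid rotations.

        // Hedged sketch: move a transition camera toward another viewpoint.
        using UnityEngine;

        public class CameraBlend : MonoBehaviour
        {
            public Transform target;  // the point of view to reach (hypothetical field)
            public float smooth = 2f; // interpolation speed (hypothetical field)

            void Update()
            {
                if (target == null) return;
                // Interpolate the position as a whole vector, not per component.
                transform.position = Vector3.Lerp(transform.position, target.position, Time.deltaTime * smooth);
                // Rotations are quaternions: use Slerp, never per-component Lerp.
                transform.rotation = Quaternion.Slerp(transform.rotation, target.rotation, Time.deltaTime * smooth);
            }
        }

    Once the transition transform is close enough to the target, one would switch the active camera and reset the rig.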

    Read the article

  • Tile-based maps in AS3

    - by Ashley
    I want to make a tile-based platformer in AS3. I want my game to read an external map file (in XML or JSON or something similar) to draw a tile-based map. I've seen loads of tutorials for this in AS2 and other languages, and the few I've found in AS3 are either incomplete or filled with extra unnecessary features. I just want to be able to draw a basic map from sprites in Flash. Any links or information to point me in the right direction would be appreciated.
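    The core loop is the same in any language; below is a self-contained C# sketch of the idea, with a hardcoded map standing in for the parsed XML/JSON and a stub standing in for sprite placement (all names hypothetical):

        using System;

        class TileMapDemo
        {
            const int TileSize = 32;

            static void Main()
            {
                // In AS3 this 2D array of tile ids would come from the map file.
                int[,] map =
                {
                    { 1, 1, 1, 1 },
                    { 0, -1, -1, 0 },
                };

                for (int y = 0; y < map.GetLength(0); y++)
                    for (int x = 0; x < map.GetLength(1); x++)
                        if (map[y, x] >= 0) // -1 marks an empty cell
                            PlaceTile(map[y, x], x * TileSize, y * TileSize);
            }

            // Stand-in for: create a Sprite for the tile id, set x/y, addChild().
            static void PlaceTile(int id, int px, int py)
                => Console.WriteLine($"tile {id} at ({px},{py})");
        }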

    Read the article

  • Simultaneous Animation for a GameObject - Unity3D

    - by Fahim Ali Zain
    It's my second week with Unity. I am doing a 2D game and I have a small GameObject which should change its sprite while following a definite path defined in Animation Curves. I did both of them in separate .anim files: since the transform animation had many keyframes, I thought it wouldn't be good to repeat the two sprite keyframes alongside the transform keyframes. But the problem is, I can't get both of them working together at the same time. I don't want any blending, because the animation is timed well already. I also tried deleting the sprite-change animation and changing the SpriteRenderer.sprite property from a script in Update(), but that works only when the Animator component is disabled on the GameObject. Any solutions? :)
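    One workaround worth trying, as a hedged C# sketch rather than a confirmed fix: the Animator writes its animated values after Update() but before LateUpdate(), so a script that swaps the sprite in LateUpdate() runs last and wins, while the Animator keeps driving the transform path (field names here are hypothetical):

        using UnityEngine;

        public class SpriteSwap : MonoBehaviour
        {
            public Sprite[] frames;            // the sprites to cycle through
            public float framesPerSecond = 8f;
            SpriteRenderer sr;

            void Awake() { sr = GetComponent<SpriteRenderer>(); }

            // LateUpdate runs after the Animator has written its values this frame.
            void LateUpdate()
            {
                int i = (int)(Time.time * framesPerSecond) % frames.Length;
                sr.sprite = frames[i];
            }
        }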

    Read the article

  • Drawing multiple Textures as tilemap

    - by DocJones
    I am trying to draw a 2D game map and the objects on the map in a single pass. Here is my OpenGL initialization code:

        // Turn off unnecessary operations
        glDisable(GL_DEPTH_TEST);
        glDisable(GL_LIGHTING);
        glDisable(GL_CULL_FACE);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_DITHER);
        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);
        // activate pointers to vertex & texture arrays
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    My drawing code is called by an NSTimer every 1/60 s. Here is the drawing code of my world object:

        - (void) draw:(NSRect)rect withTimedDelta:(double)d {
            GLint *t;
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
            for (int x=0; x<[_map getWidth]; x++) {
                for (int y=0; y<[_map getHeight]; y++) {
                    GLint v[] = {
                        16*x,    16*y,
                        16*x+16, 16*y,
                        16*x+16, 16*y+16,
                        16*x,    16*y+16
                    };
                    t = [_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]];
                    glVertexPointer(2, GL_INT, 0, v);
                    glTexCoordPointer(2, GL_INT, 0, t);
                    glDrawArrays(GL_QUADS, 0, 4);
                }
            }
        }

    (_textureManager is a singleton, loading each texture only once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls:

        - (void) drawWithTimedDelta:(double)d {
            GLint *t;
            GLint v[] = {
                16*xpos,    16*ypos,
                16*xpos+16, 16*ypos,
                16*xpos+16, 16*ypos+16,
                16*xpos,    16*ypos+16
            };
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]);
            t = [_textureManager getBlockWithNumber:12];
            glVertexPointer(2, GL_INT, 0, v);
            glTexCoordPointer(2, GL_INT, 0, t);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me that the first call is performed correctly (the world is being drawn), but the following call to all the objects ONLY draws the objects; the rest of the scene turns black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks. PS: Here is the github link to the project. It may not be in sync with my post here, but it may help for some more in-depth analysis.
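    One thing to check, hedged as a guess rather than a diagnosis: glEnable(GL_BLEND) alone does nothing useful, because the default blend function is GL_ONE/GL_ZERO, which simply replaces whatever is already in the framebuffer. The usual alpha-blending setup looks like the following sketch (written in C# with OpenTK bindings; the calls map one-to-one onto the C functions above, and the enum names are an assumption about the OpenTK version):

        using OpenTK.Graphics.OpenGL;

        static class BlendSetup
        {
            public static void Apply()
            {
                GL.Enable(EnableCap.Blend);
                // result = src * srcAlpha + dst * (1 - srcAlpha).
                // Without this call, GL's default of ONE/ZERO overwrites the map.
                GL.BlendFunc(BlendingFactor.SrcAlpha, BlendingFactor.OneMinusSrcAlpha);
            }
        }

    This only helps if the object textures actually carry an alpha channel, and if the framebuffer is cleared once per frame rather than between the two draw calls.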

    Read the article

  • Can one draw a cube using a different method/drawing mode?

    - by den-javamaniac
    Hi. I've just started learning gamedev (in particular Android EGL based) and have run across code from Pro Android Games 2 that looks as follows:

        /*
         * Copyright (C) 2007 Google Inc.
         *
         * Licensed under the Apache License, Version 2.0 (the "License");
         * you may not use this file except in compliance with the License.
         * You may obtain a copy of the License at
         *
         *      http://www.apache.org/licenses/LICENSE-2.0
         *
         * Unless required by applicable law or agreed to in writing, software
         * distributed under the License is distributed on an "AS IS" BASIS,
         * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         * See the License for the specific language governing permissions and
         * limitations under the License.
         */
        package opengl.scenes.cubes;

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.IntBuffer;

        import javax.microedition.khronos.opengles.GL10;

        public class Cube {

            public Cube() {
                int one = 0x10000;
                int vertices[] = {
                    -one, -one, -one,
                     one, -one, -one,
                     one,  one, -one,
                    -one,  one, -one,
                    -one, -one,  one,
                     one, -one,  one,
                     one,  one,  one,
                    -one,  one,  one,
                };

                int colors[] = {
                      0,   0,   0, one,
                    one,   0,   0, one,
                    one, one,   0, one,
                      0, one,   0, one,
                      0,   0, one, one,
                    one,   0, one, one,
                    one, one, one, one,
                      0, one, one, one,
                };

                byte indices[] = {
                    0, 4, 5,    0, 5, 1,
                    1, 5, 6,    1, 6, 2,
                    2, 6, 7,    2, 7, 3,
                    3, 7, 4,    3, 4, 0,
                    4, 7, 6,    4, 6, 5,
                    3, 0, 1,    3, 1, 2
                };

                // Buffers to be passed to gl*Pointer() functions must be
                // direct, i.e., they must be placed on the native heap where
                // the garbage collector cannot move them.
                //
                // Buffers with multi-byte datatypes (e.g., short, int, float)
                // must have their byte order set to native order.
                ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length*4);
                vbb.order(ByteOrder.nativeOrder());
                mVertexBuffer = vbb.asIntBuffer();
                mVertexBuffer.put(vertices);
                mVertexBuffer.position(0);

                ByteBuffer cbb = ByteBuffer.allocateDirect(colors.length*4);
                cbb.order(ByteOrder.nativeOrder());
                mColorBuffer = cbb.asIntBuffer();
                mColorBuffer.put(colors);
                mColorBuffer.position(0);

                mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
                mIndexBuffer.put(indices);
                mIndexBuffer.position(0);
            }

            public void draw(GL10 gl) {
                gl.glFrontFace(GL10.GL_CW);
                gl.glVertexPointer(3, GL10.GL_FIXED, 0, mVertexBuffer);
                gl.glColorPointer(4, GL10.GL_FIXED, 0, mColorBuffer);
                gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
            }

            private IntBuffer mVertexBuffer;
            private IntBuffer mColorBuffer;
            private ByteBuffer mIndexBuffer;
        }

    So it suggests drawing a cube using triangles. My question is: can I draw the same cube using GL_POLYGON? If so, isn't that an easier/more understandable way of doing things?

    Read the article

  • Literature for Inverse Kinematics: Joint Limits and beyond

    - by Jeff
    Recently I've been playing around with inverse kinematics and have been pretty impressed with the results. Naturally I want to take it further, but have no clue where to start. In particular, I would like to introduce joint limits (i.e. how far a prismatic joint can move, which angles a hinge joint has to stay between, etc.). Currently I understand how to produce the Jacobian matrix for the various joint types. I am particularly looking for literature (preferably free, and preferably easy to understand) on various ways to implement joint limits. I would also like to find out about different ways inverse kinematics can be used.
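    Not literature, but for orientation: the simplest baseline used with Jacobian-based solvers is to clamp each joint parameter back into its legal range after every iteration; a hedged C# sketch:

        using System;

        static class JointLimits
        {
            // theta holds the joint parameters (angles for hinge joints,
            // offsets for prismatic joints); min/max hold the per-joint limits.
            public static void Clamp(double[] theta, double[] min, double[] max)
            {
                for (int i = 0; i < theta.Length; i++)
                    theta[i] = Math.Clamp(theta[i], min[i], max[i]);
            }
        }

    More principled schemes fold the limits into the solve itself (e.g. weighting or nullspace projection), which is exactly where a literature search pays off.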

    Read the article

  • Physics/Graphics Components

    - by Brett Powell
    I have spent the last 48 hours reading up on object-component systems, and feel I am ready to start implementing them. I got the base Object and Component classes created, but now that I need to start creating the actual components I am a bit confused. When I think of them in terms of a HealthComponent or something that would basically just be a property, it makes perfect sense. When it is something more general like a Physics/Graphics component, I get a bit confused. My Object class looks like this so far (if you notice any changes I should make, please let me know; I'm still new to this):

        typedef unsigned int ID;

        class GameObject {
        public:
            GameObject(ID id, Ogre::String name = "");
            ~GameObject();

            ID &getID();
            Ogre::String &getName();
            virtual void update() = 0;

            // Component functions
            void addComponent(Component *component);
            void removeComponent(Ogre::String familyName);

            template<typename T>
            T* getComponent(Ogre::String familyName) {
                return dynamic_cast<T*>(m_components[familyName]);
            }

        protected:
            // Properties
            ID m_ID;
            Ogre::String m_Name;
            float m_flVelocity;
            Ogre::Vector3 m_vecPosition;

            // Components
            std::map<std::string, Component*> m_components;
            std::map<std::string, Component*>::iterator m_componentItr;
        };

    Now the problem I am running into is: what would the general population put into components such as Physics/Graphics? For Ogre (my rendering engine) a visible object will consist of possibly multiple Ogre::SceneNodes to attach it to the scene, possibly multiple Ogre::Entities to show the visible meshes, and so on. Would it be best to just add multiple GraphicComponents to the object and let each GraphicComponent handle one SceneNode/Entity, or is the idea to have one of each component type? For physics I am even more confused. I suppose creating a RigidBody and keeping track of mass/inertia/etc. would make sense, but I am having trouble thinking of how to actually put specifics into a component. Once I get a couple of these "required" components done, I think it will make a lot more sense. As of right now, though, I am still a bit stumped.

    Read the article

  • Advice for programming a lobby for a network multiplayer game?

    - by Milo
    I'm learning network programming by working on a simple card game. The basic idea is: players enter the lobby; players see tables; players sit at an empty seat. Once they sit, they do not need any information from the lobby; they see the card table, the data about the other players, and so forth. I've programmed the server portion for the game itself. The clients connect to my server object and the server then receives and sends messages; quite simple. The tricky concepts for me are: What's a good way to run many tables at the same time? What's a good way to keep the lobby consistently updated for each person in the lobby (e.g. MSG_TABLE_FILLED, 22)? Ideally I'd like to have one server exe for all of this and to deal with multithreading as little as possible. I'm going to use the enet library. I was thinking that each time a game session starts, I push a new Game and map the client IPs to that table; then I just route messages from those clients to that Game. Since enet supports channels, I was thinking of using two channels per table: one for the game messages and one for in-game chat. Would something like this work? Does anyone have any advice / design ideas for a game with a lobby and many tables? Is there a usual way this is done that I'm overlooking? Any conceptual ideas or even C/C++ code examples would be very helpful. Thanks.
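    The "push a Game and route by client" plan can stay single-threaded as long as all packets arrive through one service loop; a hedged C# sketch of the routing (the enet calls themselves are omitted, and all names here are illustrative):

        using System.Collections.Generic;

        class Table
        {
            public readonly List<int> Seats = new List<int>();       // peer ids
            public void HandleGameMessage(int fromPeer, byte[] msg) { /* game rules */ }
        }

        class LobbyServer
        {
            readonly List<Table> tables = new List<Table>();
            readonly Dictionary<int, Table> tableOfPeer = new Dictionary<int, Table>();

            // Called for every packet the single network loop receives.
            public void OnPacket(int peerId, byte[] msg)
            {
                if (tableOfPeer.TryGetValue(peerId, out var table))
                    table.HandleGameMessage(peerId, msg);  // in-game traffic
                else
                    HandleLobbyMessage(peerId, msg);       // list tables, sit down, ...
            }

            void HandleLobbyMessage(int peerId, byte[] msg) { /* broadcast MSG_TABLE_FILLED etc. */ }
        }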

    Read the article

  • Discovering path through unknown territory

    - by TravisG
    Let's say all the AI knows about its surroundings is a pixel map which clearly shows walkable terrain and obstacles. I want the AI to be able to traverse this terrain until it finds an exit point. There are some restrictions: There is always a way to the exit somewhere in the map the AI walks around in, but there may be dead ends. The path to the exit is always pretty random, meaning that if you stand at a crossroads, nothing indicates which direction is the right one to go. It doesn't matter if the AI reaches a dead end, but it has to be able to walk back out of it to a previously uninspected location and continue its search there. Initially, the AI starts out knowing only the starting area of the whole map. As it walks around, new points are added to the pixel map corresponding to the AI's range of sight (think of it as the AI clearing the fog of war). The problem is in 2D space, and all I have is the pixel map. There are no paths in the pixel map which are "too narrow"; the AI fits through everything. It shouldn't be a brute-force solution. E.g. it would be possible to simply find a path (with A*, for example) to each pixel in the pixel map that is yet undiscovered, which would lead to the AI discovering new pixels; this could be repeated until the end is reached. The path doesn't have to be the shortest path (this is impossible without knowing the entire map beforehand), but when movements within the visible area are calculated, the shortest and, from a human standpoint, most logical path should be taken (e.g. if you can see a way out of your room into a hallway, you would obviously go there instead of exploring the corner of your current room). What kinds of approaches to solving this problem are there?
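    What's being described is usually called frontier-based exploration, and it avoids the brute-force pathing to every pixel: a single BFS from the AI finds the nearest known-walkable cell that borders unexplored space, the AI walks there, reveals more map, and repeats. A hedged C# sketch of the frontier search (the BFS parent links, not shown, also give the path to walk):

        using System.Collections.Generic;

        static class Explorer
        {
            // known[x,y]: has the AI seen this cell? walkable[x,y]: is it free?
            public static (int x, int y)? NearestFrontier(
                bool[,] known, bool[,] walkable, (int x, int y) start)
            {
                int w = known.GetLength(0), h = known.GetLength(1);
                int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
                var queue = new Queue<(int x, int y)>();
                var seen = new HashSet<(int, int)> { start };
                queue.Enqueue(start);
                while (queue.Count > 0)
                {
                    var c = queue.Dequeue();
                    for (int i = 0; i < 4; i++)
                    {
                        int nx = c.x + dx[i], ny = c.y + dy[i];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (!seen.Add((nx, ny))) continue;    // already visited
                        if (!known[nx, ny]) return c;          // c borders unexplored space
                        if (walkable[nx, ny]) queue.Enqueue((nx, ny));
                    }
                }
                return null; // everything reachable has been explored
            }
        }

    Dead ends take care of themselves: once a dead end is fully revealed, the nearest frontier lies back the way the AI came.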

    Read the article

  • LWJGL Text Rendering

    - by Trixmix
    Currently in my project I am using LWJGL and the Slick2D library to render text onto the screen. Here is a more specific example:

        Font f = new Font("Times New Roman", Font.BOLD, 18);
        font = new UnicodeFont(f);
        font.getEffects().add(new ColorEffect(Color.white));
        font.addAsciiGlyphs();
        try {
            font.loadGlyphs();
        } catch (SlickException e) {
            e.printStackTrace();
        }

    Then I use font.drawString to write onto the screen. This is a quick, easy way, but it has a lot of disadvantages. For example, font.loadGlyphs takes a very long time, 1-3 seconds, so when I want to change a color or font type I have to wait 1-3 seconds, which means I cannot do it while rendering (i.e. I can't have differently colored text on the same screen). My question is: what is a better way of drawing multicolored text onto the screen? I use Slick2D only for the text rendering, so maybe I can get rid of the library entirely and draw text some other way... If you have an answer, please leave a quick, short example. Thanks!

    Read the article

  • How are bullets simulated in video games?

    - by mahen23
    I have been playing games like MW2 recently and, as a programmer, I tend to ask myself how they make the game so immersive. For example, how do they simulate bullet speed? When an NPC fires a bullet from his gun, does the bullet really travel from his gun to the given target, or do they completely ignore this part and just put a bullet hole on the target? If the bullet is really travelling from the gun to the target, at what speed is it actually travelling?
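    Both approaches exist in shipped games: "hitscan" weapons resolve the shot instantly with a single raycast on the frame of firing, while simulated projectiles are spawned with a muzzle velocity (on the order of several hundred m/s for a rifle) and stepped every frame. A hedged C# sketch of the projectile variant:

        using System.Numerics;

        static class Ballistics
        {
            const float Gravity = 9.81f;

            // Advance one bullet by one frame; the segment from the old to the
            // new position is what gets tested against the world for hits.
            public static void Step(ref Vector3 pos, ref Vector3 vel, float dt)
            {
                vel.Y -= Gravity * dt;   // optional bullet drop
                pos += vel * dt;
            }
        }

    A hitscan shot is the degenerate case: one ray from the muzzle along the aim direction, with the hole decal placed at the first intersection.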

    Read the article

  • ScreenManagement better practices?! TextBox not focusing

    - by xykudyax
    I saw a question here about using DataTemplates with WPF for screen management. I was curious and I gave it a try; I think the idea is amazing and very clean. Though I'm new to WPF, I have read many times that almost everything should be done in XAML and very little should be "coded behind". My question revolves around the DataTemplate idea: WHERE should the code that calls the transitions be? Where should I define which commands are available on which screens? For example:

        [ScreenA] Commands:
            Pressing B     - goes to state B
            Pressing ESC   - exits

        [ScreenB] Commands:
            Pressing A     - goes to state A
            Pressing SPACE - exits

    Where do I define the key event handlers, and where do I call the next screen? I'm doing this as a hobby for learning, and "if you are learning, better learn it right" :) Thank you for your time. Yes, the Q/A I was talking about is: What's a good way to handle game screen management in WPF? What I've done so far was to create a Screen class (derived from UserControl) and create some virtual methods: one for initializing stuff (like focusing a given component by default); another for input handling (I handle it with a switch statement, listening to the PreviewKeyDown event of the parent container (MainWindow) - I'm not able to do it another way! Help?!); and finally one that removes the key event handler when the screen is terminated (Parent.PreviewKeyDown -= OnKeyDown;). Am I doing okay? I face a problem: when I add a new screen (UserControl) containing a TextBox, I'm not able to give it autofocus :/ The caret is there but it is not blinking, and I have to hit TAB before being able to input anything at all :/
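    On the autofocus problem, a hedged C# sketch of the usual fix: in WPF, logical focus and keyboard focus are different things, and a control inside a freshly added UserControl typically has only the former. Explicitly requesting keyboard focus once the control has loaded makes the caret blink (the control name inputBox is hypothetical):

        using System.Windows.Controls;
        using System.Windows.Input;

        public partial class ScreenB : UserControl
        {
            public ScreenB()
            {
                InitializeComponent();
                // Loaded fires once the control is in the visual tree, which is
                // the earliest point at which keyboard focus can stick.
                Loaded += (s, e) => Keyboard.Focus(inputBox);
            }
        }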

    Read the article

  • Game Physics: Implementing Normal Reaction from ground correctly

    - by viraj
    I am implementing a simple side-scrolling platform game. I am using the following strategy while coding the physics: gravity constantly acts on the character; when the character is touching the floor, a normal reaction is exerted by the floor. I face the following problem: if the character starts at a height, he acquires velocity in the -Y direction. Thus, when he hits the floor, he falls through even though the normal force is being exerted. I could fix this by setting the Y velocity to 0 and placing him above the floor if he has collided with it, but this often leads to the character getting stuck in the floor or bouncing around on it. Is there a better approach?
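    A common alternative is to stop modelling the floor as a force at all and treat it as a position constraint: integrate first, then resolve any penetration by snapping to the surface and zeroing the velocity in the same step. A hedged C# sketch:

        static class VerticalPhysics
        {
            const float Gravity = 30f; // units/s^2, tuned per game

            public static void Integrate(ref float y, ref float vy, float dt, float floorY)
            {
                vy -= Gravity * dt;
                y += vy * dt;
                if (y < floorY)   // the frame ended inside or below the floor
                {
                    y = floorY;   // snap exactly onto the surface
                    vy = 0f;      // kill the downward velocity in the same frame
                }
            }
        }

    Because the snap and the velocity reset happen atomically after integration, there is no frame in which gravity and the "normal force" fight each other, which is what causes the sticking and bouncing.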

    Read the article

  • Bloom shader makes it impossible to render black?

    - by Mathias Lykkegaard Lorenzen
    I am playing around with the bloom shader from the XNA sample page to do some glow shading. I am rendering primitive vector-ish squares of line lists/line strips on a background. However, I am facing a problem: with a black background and white squares, I can actually see the squares; with a white background and black squares, I can't see them at all. Why is this happening, and is there any way of fixing it? Can I modify my bloom shader to also "glow" dark elements, if that's what is causing it?
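    This is inherent to how bloom works: the bright-pass keeps only pixels above a luminance threshold, and the additive composite can only make pixels brighter, so pure black contributes nothing and can never glow against white. One hedged workaround is to key the extraction on the difference from the background instead of on raw brightness; the per-pixel math, sketched in C#:

        using System;

        static class Bloom
        {
            // Standard bright-pass: black (lum = 0) always returns 0.
            public static float BrightPass(float lum, float threshold = 0.25f)
                => Math.Max(0f, (lum - threshold) / (1f - threshold));

            // Difference-keyed variant: dark shapes on a bright background
            // still produce a bloom mask.
            public static float DifferencePass(float lum, float backgroundLum, float threshold = 0.25f)
                => BrightPass(Math.Abs(lum - backgroundLum), threshold);
        }

    The composite step would then have to darken rather than add where the source is darker than the background, so this is a real shader change, not a parameter tweak.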

    Read the article

  • 3D picking lwjgl

    - by Wirde
    I have written some code to perform 3D picking that for some reason doesn't work entirely correctly! (I'm using LWJGL, just so you know.) I posted this on Stack Overflow at first, but after researching my problem some more I found this neat site and thought that you guys might be more qualified to answer this question. This is what the code looks like:

        if(Mouse.getEventButton() == 1) {
            if (!Mouse.getEventButtonState()) {
                Camera.get().generateViewMatrix();
                float screenSpaceX = ((Mouse.getX()/800f/2f)-1.0f)*Camera.get().getAspectRatio();
                float screenSpaceY = 1.0f-(2*((600-Mouse.getY())/600f));
                float displacementRate = (float)Math.tan(Camera.get().getFovy()/2);
                screenSpaceX *= displacementRate;
                screenSpaceY *= displacementRate;
                Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * Camera.get().getNear()), (float) (screenSpaceY * Camera.get().getNear()), (float) (-Camera.get().getNear()), 1);
                Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * Camera.get().getFar()), (float) (screenSpaceY * Camera.get().getFar()), (float) (-Camera.get().getFar()), 1);
                Matrix4f tmpView = new Matrix4f();
                Camera.get().getViewMatrix().transpose(tmpView);
                Matrix4f invertedViewMatrix = (Matrix4f)tmpView.invert();
                Vector4f worldSpaceNear = new Vector4f();
                Matrix4f.transform(invertedViewMatrix, cameraSpaceNear, worldSpaceNear);
                Vector4f worldSpaceFar = new Vector4f();
                Matrix4f.transform(invertedViewMatrix, cameraSpaceFar, worldSpaceFar);
                Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
                Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
                rayDirection.normalise();
                Ray clickRay = new Ray(rayPosition, rayDirection);
                Vector tMin = new Vector(), tMax = new Vector(), tempPoint;
                float largestEnteringValue, smallestExitingValue, temp, closestEnteringValue = Camera.get().getFar()+0.1f;
                Drawable closestDrawableHit = null;
                for(Drawable d : this.worldModel.getDrawableThings()) {
                    // Calculate AABB for each object... needs to be moved later...
                    firstVertex = true;
                    for(Surface surface : d.getSurfaces()) {
                        for(Vertex v : surface.getVertices()) {
                            worldPosition.x = (v.x+d.getPosition().x)*d.getScale().x;
                            worldPosition.y = (v.y+d.getPosition().y)*d.getScale().y;
                            worldPosition.z = (v.z+d.getPosition().z)*d.getScale().z;
                            worldPosition = worldPosition.rotate(d.getRotation());
                            if (firstVertex) {
                                maxX = worldPosition.x; maxY = worldPosition.y; maxZ = worldPosition.z;
                                minX = worldPosition.x; minY = worldPosition.y; minZ = worldPosition.z;
                                firstVertex = false;
                            } else {
                                if (worldPosition.x > maxX) { maxX = worldPosition.x; }
                                if (worldPosition.x < minX) { minX = worldPosition.x; }
                                if (worldPosition.y > maxY) { maxY = worldPosition.y; }
                                if (worldPosition.y < minY) { minY = worldPosition.y; }
                                if (worldPosition.z > maxZ) { maxZ = worldPosition.z; }
                                if (worldPosition.z < minZ) { minZ = worldPosition.z; }
                            }
                        }
                    }
                    // ray/slabs intersection test...
                    // clickRay.getOrigin().x + clickRay.getDirection().x * f = minX
                    // clickRay.getOrigin().x - minX = -clickRay.getDirection().x * f
                    // clickRay.getOrigin().x/-clickRay.getDirection().x - minX/-clickRay.getDirection().x = f
                    // -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x = f
                    largestEnteringValue = -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x;
                    temp = -clickRay.getOrigin().y/clickRay.getDirection().y + minY/clickRay.getDirection().y;
                    if(largestEnteringValue < temp) {
                        largestEnteringValue = temp;
                    }
                    temp = -clickRay.getOrigin().z/clickRay.getDirection().z + minZ/clickRay.getDirection().z;
                    if(largestEnteringValue < temp) {
                        largestEnteringValue = temp;
                    }
                    smallestExitingValue = -clickRay.getOrigin().x/clickRay.getDirection().x + maxX/clickRay.getDirection().x;
                    temp = -clickRay.getOrigin().y/clickRay.getDirection().y + maxY/clickRay.getDirection().y;
                    if(smallestExitingValue > temp) {
                        smallestExitingValue = temp;
                    }
                    temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z;
                    if(smallestExitingValue < temp) {
                        smallestExitingValue = temp;
                    }
                    if(largestEnteringValue > smallestExitingValue) {
                        //System.out.println("Miss!");
                    } else {
                        if (largestEnteringValue < closestEnteringValue) {
                            closestEnteringValue = largestEnteringValue;
                            closestDrawableHit = d;
                        }
                    }
                }
                if(closestDrawableHit != null) {
                    System.out.println("Hit at: (" + clickRay.setDistance(closestEnteringValue).x + ", " + clickRay.getCurrentPosition().y + ", " + clickRay.getCurrentPosition().z);
                    this.worldModel.removeDrawableThing(closestDrawableHit);
                }
            }
        }

    I just don't understand what's wrong. The rays are shooting and I do hit stuff that gets removed, but the result of the ray is very strange: it sometimes removes the thing I'm clicking on, sometimes it removes things that aren't even close to what I'm clicking on, and sometimes it removes nothing at all.

    Edit: Okay, so I have continued searching for errors, and by debugging the ray (by painting small dots where it travels) I can now see that there is something obviously wrong with the ray that I'm sending out... It has its origin near the world center (nearer or further away depending on where on the screen I'm clicking) and always shoots to the same position no matter where I direct my camera... My initial thought is that there might be some error in the way I calculate my view matrix (since it's not possible to get the view matrix from the gluLookAt method in LWJGL, I have to build it myself, and I guess that's where the problem is)...

    Edit 2: This is how I calculate it currently:

        private double[][] viewMatrixDouble = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,1}};

        public Vector getCameraDirectionVector() {
            Vector actualEye = this.getActualEyePosition();
            return new Vector(lookAt.x-actualEye.x, lookAt.y-actualEye.y, lookAt.z-actualEye.z);
        }

        public Vector getActualEyePosition() {
            return eye.rotate(this.getRotation());
        }

        public void generateViewMatrix() {
            Vector cameraDirectionVector = getCameraDirectionVector().normalize();
            Vector side = Vector.cross(cameraDirectionVector, this.upVector).normalize();
            Vector up = Vector.cross(side, cameraDirectionVector);
            viewMatrixDouble[0][0] = side.x; viewMatrixDouble[0][1] = up.x; viewMatrixDouble[0][2] = -cameraDirectionVector.x;
            viewMatrixDouble[1][0] = side.y; viewMatrixDouble[1][1] = up.y; viewMatrixDouble[1][2] = -cameraDirectionVector.y;
            viewMatrixDouble[2][0] = side.z; viewMatrixDouble[2][1] = up.z; viewMatrixDouble[2][2] = -cameraDirectionVector.z;
            /*
            Vector actualEyePosition = this.getActualEyePosition();
            Vector zaxis = new Vector(this.lookAt.x - actualEyePosition.x, this.lookAt.y - actualEyePosition.y, this.lookAt.z - actualEyePosition.z).normalize();
            Vector xaxis = Vector.cross(upVector, zaxis).normalize();
            Vector yaxis = Vector.cross(zaxis, xaxis);
            viewMatrixDouble[0][0] = xaxis.x; viewMatrixDouble[0][1] = yaxis.x; viewMatrixDouble[0][2] = zaxis.x;
            viewMatrixDouble[1][0] = xaxis.y; viewMatrixDouble[1][1] = yaxis.y; viewMatrixDouble[1][2] = zaxis.y;
            viewMatrixDouble[2][0] = xaxis.z; viewMatrixDouble[2][1] = yaxis.z; viewMatrixDouble[2][2] = zaxis.z;
            viewMatrixDouble[3][0] = -Vector.dot(xaxis, actualEyePosition);
            viewMatrixDouble[3][1] = -Vector.dot(yaxis, actualEyePosition);
            viewMatrixDouble[3][2] = -Vector.dot(zaxis, actualEyePosition);
            */
            viewMatrix = new Matrix4f();
            viewMatrix.load(getViewMatrixAsFloatBuffer());
        }

    I would be very grateful if anyone could verify whether this is wrong or right, and if it's wrong, supply me with the right way of doing it... I have read a lot of threads and documentation about this, but I can't seem to wrap my head around it...

    Edit 3: Okay, with the help of Byte56 (thanks a lot for the help) I have now concluded that it's not the view matrix that is the problem... I still get the same messed-up result. Anyone who thinks they can find the error in my code - I certainly can't; I have been working on this for 3 days now :(
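    For comparison, here is a hedged C# sketch (System.Numerics) of the standard "unproject" recipe for building a pick ray, which sidesteps the manual FOV math above: push two NDC points through the inverse of view * projection and divide by w. The conventions are assumptions to check against the LWJGL setup: row-vector matrices, a top-left mouse origin, and a DirectX-style 0..1 NDC depth range (use -1..1 for a GL-style projection).

        using System.Numerics;

        static class Picking
        {
            public static (Vector3 origin, Vector3 dir) PickRay(
                float mouseX, float mouseY, float width, float height,
                Matrix4x4 view, Matrix4x4 proj)
            {
                float ndcX = 2f * mouseX / width - 1f;
                float ndcY = 1f - 2f * mouseY / height; // flip if origin is bottom-left
                Matrix4x4.Invert(view * proj, out Matrix4x4 inv);
                Vector4 near = Vector4.Transform(new Vector4(ndcX, ndcY, 0f, 1f), inv);
                Vector4 far  = Vector4.Transform(new Vector4(ndcX, ndcY, 1f, 1f), inv);
                Vector3 n = new Vector3(near.X, near.Y, near.Z) / near.W; // perspective
                Vector3 f = new Vector3(far.X, far.Y, far.Z) / far.W;     // divide!
                return (n, Vector3.Normalize(f - n));
            }
        }

    A forgotten divide by w is a common cause of rays that always shoot toward the same position, so it is worth ruling out first.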

    Read the article

  • android game performance regarding timers

    - by iQue
    I'm new to the game-dev world and I have a tendency to over-simplify my code, and sometimes this costs me a lot of memory. I'm using a custom TimerTask that looks like this:

        public class Task extends TimerTask {
            private MainGamePanel panel;

            public Task(MainGamePanel panel) {
                this.panel = panel;
            }

            /**
             * When the timer executes, this code is run.
             */
            public void run() {
                panel.createEnemies();
            }
        }

    This task calls this method from my view:

        public void createEnemies() {
            Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.female);
            if (enemyCounter < 24) {
                enemies.add(new Enemy(bmp, this));
            }
            enemyCounter++;
        }

    I call this in the onCreate method instead of in my view's constructor (because my enemies need to get the width and height of the view). I'm wondering if this will work when I have multiple levels in the game (starting a new intent), and if this kind of timer really is the best way to add a delay between the spawning times of my enemies, performance-wise. Adding the code for my timer in case anyone came here because they don't understand timers:

        private Timer timer1 = new Timer();
        private long delay1 = 5*1000; // 5 sec delay

        public void surfaceCreated(SurfaceHolder holder) {
            timer1.schedule(new Task(this), 0, delay1); // I call my timer and add the delay
            thread.setRunning(true);
            thread.start();
        }
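    One commonly suggested alternative, sketched here in C# (the pattern ports directly to Java): drive spawning from the game loop itself rather than from a separate Timer thread, by accumulating elapsed time. This avoids cross-thread access to the view, survives level changes for free, and makes it natural to decode the enemy bitmap once and reuse it:

        class Spawner
        {
            readonly float interval;  // seconds between spawns
            float accumulator;

            public Spawner(float intervalSeconds) { interval = intervalSeconds; }

            // Call once per frame from the update loop with the frame's delta time.
            // Returns true when it is time to spawn one enemy.
            public bool Update(float dt)
            {
                accumulator += dt;
                if (accumulator < interval) return false;
                accumulator -= interval;
                return true;
            }
        }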

    Read the article

  • Facebook Game Rejected: "Your app icon must not overlap with content in your cover image"

    - by peterwilli
    Sorry if this isn't the right Stack Exchange site to ask this; it was really hard to determine. My FB game recently got rejected for two reasons. The first I fixed nicely and is irrelevant, but with the second I just can't figure out what they mean, and I was hoping someone else got the same issue and knew what they meant. These are the errors: You can ignore the error under "Banners". The web preview of my game looks like this now: All I know is that the rejection has something to do with the cover image, not the icons or the screenshots. Please let me know what to do to get approved. Thanks a lot!

    Read the article

  • Project collision shapes to plane for 2.5D collision detection

    - by Jkh2
    I am working on a top-down 2.5D game. In the game, anything that overlaps on the screen should be 'colliding' with each other, regardless of whether they are on the same plane in the 3D world. This is illustrated below from a sideways view: the orange and green circles are spheres floating in the 3D world. They are projected onto a plane parallel to the viewport plane (y = 0 in the image), and if they overlap there is a collision event between them. These spheres are attached to other meshes to represent the sphere bounding boxes for collisions. The way I plan to implement this at the moment is the following: get the 3D world position at the center of the sphere; use Camera.WorldToViewportPoint to project the point to the viewport plane; move a sphere collider with the radius of the sphere to that point; test for collisions using Unity colliders. My question is how to extend this to work for rotated cuboids. For instance, if I have two rotated cuboids and I follow the logic above, it would not work as intended, as the cuboids may not collide in 3D but could still intersect on the view plane. An example is below. Is there a way to project a cuboid that would be aligned with the plane? Would it be a valid cuboid for all rotations if I did this?
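    A projected rotated cuboid is generally not a cuboid (its silhouette is a hexagon for most orientations), so rather than hunting for an equivalent collider, one option is to project all eight corners into camera space, drop the depth component, and test the two resulting 2D convex hulls for overlap (e.g. with a 2D SAT test). A hedged C# sketch of the projection step:

        using System.Numerics;

        static class ViewPlane
        {
            // Flatten a world-space point into the view plane: transform it
            // into camera space, then simply discard the depth component.
            public static Vector2 Project(Vector3 worldPoint, Matrix4x4 worldToCamera)
            {
                Vector3 cam = Vector3.Transform(worldPoint, worldToCamera);
                return new Vector2(cam.X, cam.Y);
            }
        }

    In Unity terms the matrix would be camera.worldToCameraMatrix; the result stays valid for every rotation precisely because no attempt is made to keep the projected shape a rectangle.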

    Read the article

  • cocos2d-x - object creation and management in game design

    - by Jason
    How do others keep track of everything going on in their games? I am working on a new game and I am quickly realizing everything that I need to keep track of. For example: maybe a layerManager that keeps track of all the layers and what is happening in a particular scene; maybe a sceneManager for sharing objects among scenes. But then, getting to gameplay itself: what if you have 100 objects on the screen, each with its own state and happenings? There needs to be a way to keep track of all of that. Drawing everything out is really helping me. Can anyone share with me how they go about object tracking/management? I am seeing a few different managers, and then maybe even a parent object that manages the managers... is my thinking way off? Any design patterns that may be useful for me to read about? Update: doing some reading; maybe a Factory pattern might apply.

    Read the article

  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say that in a deferred renderer, when building your G-Buffer, you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want to have a gamma-correct rendering pipeline and you use regular sRGB textures as well as render targets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and render targets, but this might very well introduce bad precision and banding issues. Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the sRGB-to-linear conversion for you. Writing to an sRGB render target is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically. But how does this work with multiple render targets? I only want to apply these corrections to the color textures and render target, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. OK, so I just don't apply SRGBTexture = true; to my non-color textures, but when using SRGBWriteEnable = true; a gamma correction will be applied to all the values I write out to my render targets, no matter what I actually store there. I found some info on gamma over at Microsoft: http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx "For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written." If I understand correctly, SRGBWriteEnable should only be applied to the first render target, but according to my tests it isn't, and is used for all render targets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure that this won't have any negative impact on color correctness, e.g. if the GPU does any blending or filtering or multisampling after the linear-to-sRGB conversion... Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my render target? As far as I know I DO need it, because of the texture filtering and mip sampling happening in sRGB space if I don't correct for it. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
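    If the manual route is taken, the conversion applied to the color output (and only to it) would ideally be the exact sRGB transfer function rather than a plain pow(x, 1/2.2); the reference math, sketched in C#:

        using System;

        static class Srgb
        {
            public static float LinearToSrgb(float c)
                => c <= 0.0031308f ? 12.92f * c
                                   : 1.055f * MathF.Pow(c, 1f / 2.4f) - 0.055f;

            public static float SrgbToLinear(float s)
                => s <= 0.04045f ? s / 12.92f
                                 : MathF.Pow((s + 0.055f) / 1.055f, 2.4f);
        }

    The caveat in the question is real, though: anything the fixed-function hardware does after the shader (blending, MSAA resolve) would then operate on sRGB-encoded values, which is exactly what SRGBWriteEnable exists to avoid.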

    Read the article

  • PyGame QIX clone, filling areas

    - by astropanic
    I'm playing around with PyGame. Now I'm trying to implement a QIX clone. I have my game loop, and I can move the player (cursor) on the screen. In QIX, the movement of the player leaves a trace (tail) on the screen, creating a polyline. If the polyline together with the screen boundaries creates a polygon, the area is filled. How can I accomplish this behaviour? How should I store the tail in memory? How do I detect when it builds a closed shape that should be filled? I don't need an exact working solution; some pointers or algorithm names would be cool.
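    The classic approach is to keep the playfield as a grid: store the tail as the list of grid cells the player has drawn over, and when the tail reconnects with claimed territory, flood-fill from the QIX's position; every cell the fill could NOT reach lies on the enemy-free side and gets claimed. A hedged C# sketch of the flood fill (names illustrative; in Python this is the same loop over a 2D list):

        using System.Collections.Generic;

        static class Claim
        {
            // blocked = borders + tail + already-claimed cells.
            public static void FloodFill(bool[,] blocked, bool[,] reached, int startX, int startY)
            {
                int w = blocked.GetLength(0), h = blocked.GetLength(1);
                var stack = new Stack<(int x, int y)>();
                stack.Push((startX, startY));
                while (stack.Count > 0)
                {
                    var (x, y) = stack.Pop();
                    if (x < 0 || y < 0 || x >= w || y >= h) continue;
                    if (blocked[x, y] || reached[x, y]) continue;
                    reached[x, y] = true;
                    stack.Push((x + 1, y)); stack.Push((x - 1, y));
                    stack.Push((x, y + 1)); stack.Push((x, y - 1));
                }
            }
        }

    Search terms that pay off here: "flood fill", "scanline fill", and "point-in-polygon" for the vector variant.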

    Read the article

  • What datastructure would you use for a collision-detection in a tilemap?

    - by Solom
    Currently I save those blocks in my map that could be colliding with the player in a HashMap (Vector2, Block), where the Vector2 represents the coordinates of the block. Whenever the player moves, I then iterate over all these blocks (those within a specific range around the player) and check if a collision has happened. This was my first rough idea of how to implement collision detection. Currently, as the player moves, I put more and more blocks into the HashMap until a specific "upper bound", then I clear it and start over. I was fully aware that it was not the brightest solution to the problem, but as I said, it was a rough first implementation (I'm still learning a lot about game design and data structures). What data structure would you use to save the blocks? I thought about a Queue or even a Stack, but I'm not sure, hence I ask.
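    In a tile map the candidate set usually doesn't need to be stored at all: the grid itself is the data structure, because the tile coordinates of anything the player could touch can be computed directly from the player's bounding box. A hedged C# sketch:

        using System;
        using System.Collections.Generic;

        static class TileCollision
        {
            // Yields the tile coordinates overlapped by an axis-aligned box.
            public static IEnumerable<(int x, int y)> OverlappedCells(
                float px, float py, float w, float h, int tileSize)
            {
                int x0 = (int)MathF.Floor(px / tileSize);
                int x1 = (int)MathF.Floor((px + w) / tileSize);
                int y0 = (int)MathF.Floor(py / tileSize);
                int y1 = (int)MathF.Floor((py + h) / tileSize);
                for (int y = y0; y <= y1; y++)
                    for (int x = x0; x <= x1; x++)
                        yield return (x, y); // look up map[x, y] and test solidity
            }
        }

    A HashMap keyed by coordinates only becomes attractive when the world is sparse or unbounded; even then it is queried per frame, not accumulated and periodically cleared.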

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games - something the player downloads and runs on their local computer. Many are the memory editors that allow you to detect and freeze values, like your player's health. How do you prevent cheating via memory modification? What strategies are effective in combating this kind of cheating? For reference, I know that players can: search for something by value or range; search for something that changed value; set memory values; and freeze memory values. I'm looking for some good ones. Two I use that are mediocre are: displaying values as a percentage instead of the number (e.g. 46/50 = 92% health), and a low-level class that holds values in an array and moves them with each change (for example, instead of an int, I have a class that's an array of ints, and whenever the value changes, I use a different, randomly chosen array item to hold the value).
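    A third common trick in the same spirit, as a hedged C# sketch: never keep the plain value in memory at all, but store it XOR-ed with a random key that changes on every write. A scan for "46" then finds nothing, and freezing the address freezes a meaningless number:

        using System;

        class ObscuredInt
        {
            static readonly Random Rng = new Random();
            int key, stored;

            public ObscuredInt(int value) { Set(value); }

            public int Get() => stored ^ key;

            public void Set(int value)
            {
                key = Rng.Next();      // new key per write: the raw bytes change
                stored = value ^ key;  // even when the logical value doesn't
            }
        }

    None of these survive a determined reverse engineer; like the two techniques above, they only raise the cost of casual memory editing.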

    Read the article

  • Geometry Shader input vertices order

    - by NPS
    MSDN specifies (link) that when using the triangleadj type of input to the GS, it should provide me with 6 vertices in a specific order: the 1st vertex of the triangle being processed, a vertex of an adjacent triangle, the 2nd vertex of the triangle being processed, another vertex of an adjacent triangle, and so on... So if I wanted to create a pass-through shader (i.e. output the same triangle I got as input and nothing else), I should return vertices 0, 2 and 4. Is that correct? Well, apparently it isn't, because when I did just that and ran my app, the vertices were flickering (changing positions/disappearing/showing again, or something like that). But when I instead output vertices 0, 1 and 2, the app rendered the mesh correctly. I could provide some code, but it seems like the problem is in the input vertex order, not the code itself. So in what order do input vertices arrive at the GS?

    Read the article

  • Open source clone for Starcraft

    - by sinekonata
    Two questions about an SC:Brood War clone: is there one yet, and how likely is legal pursuit? Since almost all the games I usually play now have a FOSS alternative, from alpha-quality to very polished, I was wondering why I can't find one for SC, one of the biggest titans of the gaming community. So my first question is: is there a game that was made with the intention of emulating SC? Is it that I didn't look hard enough? Could it really be that no one has tackled what seems like a small effort compared to the creation of a game engine like Spring, or games like Rigs of Rods or Minetest? And since SC is not being maintained at all, shouldn't the incentive to see a bug-free, moddable, balanced version be huge? What am I not getting here? In the event that there is none, is it a legal problem? Could it be that people expect Blizzard to release the sources themselves? Or that developers don't see the point in having SC mechanics without the patented lore and aesthetics? And the trickier question: if I were to make SC an open-source game, a total clone of it for the purposes of maintenance, moddability, etc., would Blizzard really sue a team of developer fans that are just doing them a favour, knowing they don't lose any money from Korea broadcasts? Or would they do it so as not to set a precedent? Thanks for reading all that; I hope I'm not the only one who thinks it's weird that no one talks about it. See you.

    Read the article
