Search Results

Search found 25518 results on 1021 pages for 'iterative development'.


  • Player Movement DirectX

    - by SullY
    I'm reading a book about game development with C++ and DirectX 9. There is something that interests me: it says that player movement speeds up with the power of the CPU, because a faster CPU renders more frames and moves the player once per frame (better CPU = higher FPS = faster player). To bypass this, it says you just have to multiply time * movementFactor. I'd like to know: is there another way to bypass it?
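
    The time * movementFactor trick the book describes is usually called frame-rate-independent (delta-time) movement; the other common approach is a fixed-timestep loop, where elapsed time is accumulated and the simulation is advanced in constant-size steps regardless of the frame rate. A minimal sketch of the delta-time idea, written in Java rather than the book's C++ since the concept is language-agnostic (the names and the speed value are invented for the example):

      // Sketch only: frame-rate-independent movement.
      public class Movement {
          static double position = 0.0;
          static final double SPEED = 120.0; // units per second (an assumed value)

          // Scale the movement by how long the last frame took, so a faster CPU
          // (more frames, smaller dt) covers the same distance per real second
          // as a slower one.
          static void update(double dtSeconds) {
              position += SPEED * dtSeconds;
          }

          public static void main(String[] args) throws InterruptedException {
              long last = System.nanoTime();
              for (int frame = 0; frame < 10; frame++) {
                  long now = System.nanoTime();
                  double dt = (now - last) / 1_000_000_000.0;
                  last = now;
                  update(dt);
                  Thread.sleep(16); // stand-in for rendering one frame
              }
              System.out.println("position after ~160 ms: " + position);
          }
      }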

    Read the article

  • Mesh with Alpha Texture doesn't blend properly

    - by faulty
    I've followed examples from various places on setting the OutputMerger's BlendState to enable alpha/transparent textures on a mesh. The setup is as follows: var transParentOp = new BlendStateDescription { SourceBlend = BlendOption.SourceAlpha, DestinationBlend = BlendOption.InverseDestinationAlpha, BlendOperation = BlendOperation.Add, SourceAlphaBlend = BlendOption.Zero, DestinationAlphaBlend = BlendOption.Zero, AlphaBlendOperation = BlendOperation.Add, }; I've made a sample that displays 3 meshes A, B and C, where each overlaps the next. They are drawn sequentially, A to C, with A nearest the camera and C furthest. The expected output is that A is see-through and shows part of B and C, and B is see-through and shows part of C. But what I get is that none of them are see-through in that order; if I move C closer to the camera, then it is semi-transparent and shows A and B through it, and B, if moved closer to the camera, shows A but not C. Sort of the reverse. So it seems that I need to draw them in reverse order, where the mesh furthest from the camera is drawn first and the nearest is drawn last. Is it supposed to be done this way, or can I actually configure the blend state so it works no matter in which order I draw them? Thanks
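
    For what it's worth, with ordinary alpha blending the answer is generally yes: transparent meshes are drawn after opaque geometry, sorted far-to-near, because each blend operation reads whatever is already in the framebuffer behind the mesh; order-independent transparency exists but needs dedicated techniques (depth peeling, per-pixel linked lists). As an aside, the more usual destination factor for "standard" alpha blending is InverseSourceAlpha rather than InverseDestinationAlpha, which may be worth double-checking. A small Java sketch of the sorting step (the Renderable type and its fields are invented for the example, not part of any DirectX wrapper):

      import java.util.ArrayList;
      import java.util.Comparator;
      import java.util.List;

      // Sort transparent objects far-to-near before issuing their draw calls.
      class Renderable {
          final String name;
          final float x, y, z;
          Renderable(String name, float x, float y, float z) {
              this.name = name; this.x = x; this.y = y; this.z = z;
          }
          float distanceSquaredTo(float cx, float cy, float cz) {
              float dx = x - cx, dy = y - cy, dz = z - cz;
              return dx * dx + dy * dy + dz * dz;
          }
      }

      public class TransparencySort {
          public static void main(String[] args) {
              float camX = 0, camY = 0, camZ = 0;
              List<Renderable> meshes = new ArrayList<>(List.of(
                      new Renderable("A", 0, 0, 2),   // nearest
                      new Renderable("B", 0, 0, 5),
                      new Renderable("C", 0, 0, 9))); // furthest
              // Furthest first, so each mesh blends over everything behind it.
              meshes.sort(Comparator.comparingDouble(
                      (Renderable r) -> r.distanceSquaredTo(camX, camY, camZ)).reversed());
              meshes.forEach(r -> System.out.println("draw " + r.name));
          }
      }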

    Read the article

  • Maya is lagging in a specific way...?

    - by Aerovistae
    My Maya installation worked perfectly. It is not my computer. Something caused it to stop working overnight, somehow. When I try to drag a vertex or something like that, it moves the vertex, but then I have to click like 3 times somewhere outside the mesh before the actual mesh will catch up and follow the vertex. Until I do that, it just stays as it was, with a floating vertex somewhere inside it or outside it. It makes modeling borderline impossible and completely infuriating. What ought to be happening is what we're all used to-- as I move the vertex, the mesh follows it actively, so I can see what it looks like at every given moment until I release the vertex in its new position. Other weird thing: this only applies to complex meshes, like a couple thousand faces. A simple cube works fine. What gives?? Anybody?

    Read the article

  • Unity, changing gravity game & stopping the character when he hits a wall

    - by Sylario
    I am currently working on a 2D puzzle game in the Unity engine. One aspect of this game is the ability to rotate the level by 90°, which also rotates the gravity. The main character is not directly controlled by the player, but instead falls when the level is rotated. When the main character hits a wall, he should stop moving. If I do not stop him, he kind of blinks and shakes against the wall. To stop him I detect collisions and, depending on the current rotation state, the player stops on "vertical" or "horizontal" tags when an OnCollisionEnter occurs. I must do that because when the player falls onto his relative ground, he must not stop as if he had touched a wall. My problem is the 'side' of a platform, or the 'top' of a wall: they use the same tag and thus do not give the correct tag to my character. I tried to put a very small invisible box on the top/side of elements, but the collision occurs nevertheless. It seems that when the player falls and hits something he goes through it a bit before being put back at the correct position by Unity. Is there a way: 1) to not stop my character but make him appear immobile on screen, or 2) to detect an "I cannot move anymore" collision other than by using collisions?

    Read the article

  • How to sync client and server at the first frame

    - by wheelinlight
    I'm making a game where an authoritative server sends information to all clients about states and positions of objects in a 3D world. The player controls his character by clicking on the screen to set a destination for it, much like in the Diablo series. I've read most of the information I can find online about interpolation, reconciliation, and general networking architecture (Valve's, for instance). I think I understand everything, but one thing seems to be missing from every article I read. Let's say we have an interpolation delay of 100ms, server tick rate = 50ms, latency = 200ms. How do I know when 100ms has passed on the client? If the server sends the first update at t=0, can I assume it arrives at t=200, thereby assuming that all packets take the same amount of time to reach the client? What if the first packet arrives a little quickly, for instance at t=150? I would then be starting the client at t=150, and at t=250 it would think 100ms had passed since it connected to the server, when in fact only 50ms had passed. Hopefully the above paragraph is understandable. The summarized question would be: how do I know at what tick to start simulating the client? EDIT: This is how I ended up doing it: the client keeps a clock (approximately) in sync with the server. The client then simulates the world at simulationTime = syncedTime - avg(RTT)/2 - interpolationTime. The round-trip time can fluctuate, so I average it out over time. By only keeping the most recent values when calculating the average, I hope to adapt to more permanent changes in latency. It's still too early to draw any conclusions. I'm currently simulating bad network connections, but it's looking good so far. Does anyone see any possible problems?
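
    The EDIT boils down to two pieces of bookkeeping: a rolling average of recent round-trip times and a fixed interpolation delay subtracted from the synced clock. A minimal Java sketch of that calculation (class and field names are mine, not from any particular engine):

      import java.util.ArrayDeque;
      import java.util.Deque;

      // Keep a rolling RTT average and simulate the world slightly in the past.
      public class ClientClock {
          private static final int RTT_SAMPLES = 20;               // window size (assumption)
          private static final double INTERPOLATION_DELAY = 0.100; // 100 ms, as in the question

          private final Deque<Double> rtts = new ArrayDeque<>();

          // Call whenever a ping/ack pair yields a fresh round-trip measurement.
          public void addRttSample(double rttSeconds) {
              rtts.addLast(rttSeconds);
              if (rtts.size() > RTT_SAMPLES) rtts.removeFirst();
          }

          private double averageRtt() {
              return rtts.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
          }

          // syncedServerTime is the client's best estimate of the server clock.
          public double simulationTime(double syncedServerTime) {
              return syncedServerTime - averageRtt() / 2.0 - INTERPOLATION_DELAY;
          }
      }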

    Read the article

  • Finding graphical help? [duplicate]

    - by Jim Hurley
    This question already has an answer here: Where can I go to find a game graphic artist? [on hold] 4 answers If I am making an amateur video game team to design and produce a project, where would I go to find someone to make 3D models? So far, I have my own story design and programming, someone doing engine programming, someone doing the sound, and someone possibly doing texture design. However, there is no one creating meshes. What would be the best course of action to find help?

    Read the article

  • libgdx loading textures fails [duplicate]

    - by Chris
    This question already has an answer here: Why do I get this file loading exception when trying to draw sprites with libgdx? 4 answers I'm trying to load my texture with playerTex = new Texture(Gdx.files.internal("player.jpg")); player.jpg is located under my-gdx-game-android/assets/data/player.jpg I get an exception like this: Full Code: @Override public void create() { camera = new OrthographicCamera(); camera.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); batch = new SpriteBatch(); FileHandle file = Gdx.files.internal("player.jpg"); playerTex = new Texture(file); player = new Rectangle(); player.x = 800-20; player.y = 250; player.width = 20; player.height = 80; } @Override public void dispose() { // dispose of all the native resources playerTex.dispose(); batch.dispose(); } @Override public void render() { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); camera.update(); batch.setProjectionMatrix(camera.combined); batch.begin(); batch.draw(playerTex, player.x, player.y); batch.end(); if(Gdx.input.isKeyPressed(Keys.DOWN)) player.y -= 50 * Gdx.graphics.getDeltaTime(); if(Gdx.input.isKeyPressed(Keys.UP)) player.y += 50 * Gdx.graphics.getDeltaTime(); }
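
    One thing worth checking (a guess based only on the paths quoted above, since the exception text isn't shown): Gdx.files.internal() resolves paths relative to the assets folder, so a file that lives at my-gdx-game-android/assets/data/player.jpg would need the data/ prefix:

      // Inside create(): if the image really is under assets/data/, the internal
      // path must include that sub-folder.
      playerTex = new Texture(Gdx.files.internal("data/player.jpg"));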

    Read the article

  • Box2dWeb positioning relative to HTML5 Canvas

    - by Joe
    I'm new to HTML5 canvas and Box2DWeb and I'm trying to make an Asteroids game. So far I think I'm doing okay, but one thing I'm struggling to comprehend is how positioning works in relation to the canvas. I understand that Box2DWeb only deals with the physical simulation, but I don't know how to handle positioning on the canvas. The canvas is 100% of the viewport and can therefore vary in size. I want to fill the screen with some asteroids, but if I hardcode certain values such as bodyDef.position.x = Math.random() * 50; the asteroid may appear off-canvas for someone with a smaller screen. Can anybody help me understand how to deal with relative positioning on the canvas?
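
    The usual approach is to pick a pixels-per-metre scale for drawing and then derive the visible world size from the canvas dimensions, so spawn positions are expressed as fractions of the visible world rather than hard-coded metre values. The question is about Box2DWeb (JavaScript), so the sketch below is just the arithmetic expressed in Java; the SCALE value and names are assumptions:

      // Derive Box2D world coordinates from the canvas size instead of hard-coding them.
      public class WorldSizing {
          static final double SCALE = 30.0; // pixels per metre used when drawing (assumed)

          static double[] randomSpawn(double canvasWidthPx, double canvasHeightPx) {
              double worldWidth  = canvasWidthPx  / SCALE; // metres visible horizontally
              double worldHeight = canvasHeightPx / SCALE; // metres visible vertically
              // A fraction of the visible world stays on screen for any canvas size.
              return new double[] { Math.random() * worldWidth,
                                    Math.random() * worldHeight };
          }

          public static void main(String[] args) {
              double[] p = randomSpawn(1280, 720);
              System.out.println(p[0] + ", " + p[1]);
          }
      }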

    Read the article

  • Zooming to point of interest

    - by user1010005
    I have the following variables: the point of interest, which is the position (x,y) in pixels of the place to focus on; the screen width and height, which are the dimensions of the window; and the zoom level, which sets the zoom level of the camera. This is the code I have so far: void Zoom(int pointOfInterestX,int pointOfInterestY,int screenWidth, int screenHeight,int zoomLevel) { glTranslatef( (pointOfInterestX/2 - screenWidth/2), (pointOfInterestY/2 - screenHeight/2),0); glScalef(zoomLevel,zoomLevel,zoomLevel); } I want to zoom in/out but keep the point of interest in the middle of the screen. So far all of my attempts have failed, and I would like to ask for some help.
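
    The usual fix is to apply the zoom to the point of interest as well: after scaling by the zoom factor, translate so the scaled point of interest lands on the screen centre (in call order, a translate by that offset followed by the scale). A sketch of the arithmetic in Java, since it is API-neutral (values are placeholders):

      // "Zoom about a point": every world point is drawn at world * zoom + offset,
      // where offset is chosen so the point of interest maps to the screen centre.
      public class ZoomMath {
          public static void main(String[] args) {
              double poiX = 300, poiY = 200;       // point of interest, pixels
              double screenW = 800, screenH = 600;
              double zoom = 2.0;

              double offsetX = screenW / 2.0 - poiX * zoom;
              double offsetY = screenH / 2.0 - poiY * zoom;

              double drawnX = poiX * zoom + offsetX;
              double drawnY = poiY * zoom + offsetY;
              System.out.println(drawnX + ", " + drawnY); // 400.0, 300.0 = screen centre
          }
      }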

    Read the article

  • AndEngine GLES2 - getting a black screen on emulator 4.1

    - by dizworld.com
    I'm new to AndEngine. I created the following code: public class MainActivity extends BaseGameActivity { static final int CAMERA_WIDTH = 800; static final int CAMERA_HEIGHT = 480; public Font mFont; public Camera mCamera; //A reference to the current scene public Scene mCurrentScene; public static BaseActivity instance; public EngineOptions onCreateEngineOptions() { instance = this; mCamera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT); return new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR, new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), mCamera); } @Override public void onCreateResources(OnCreateResourcesCallback arg0) throws Exception { mFont = FontFactory.create(this.getFontManager(),this.getTextureManager(), 256, 256,Typeface.create(Typeface.DEFAULT, Typeface.BOLD), 32); mFont.load(); } @Override public void onCreateScene(OnCreateSceneCallback arg0) throws Exception { mEngine.registerUpdateHandler(new FPSLogger()); mCurrentScene = new Scene(); Log.v("Scene","enter"); mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f)); // return mCurrentScene; } @Override public void onPopulateScene(Scene arg0, OnPopulateSceneCallback arg1) throws Exception { // TODO Auto-generated method stub } } The code I found on other sites returns the scene, but in AndEngine GLES2 the onCreateScene() method doesn't return a scene... so my first run is BLACK. Any suggestions? :)
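
    One likely culprit (an assumption based on how the AndEngine GLES2 branch's callback API is meant to be used, so worth checking against your exact AndEngine version): the onCreate* methods no longer return anything, but each of them has to notify its callback when it is done, otherwise the engine never receives the scene and the screen stays black. A sketch of the three methods from the code above with the callbacks invoked:

      @Override
      public void onCreateResources(OnCreateResourcesCallback pCallback) throws Exception {
          mFont = FontFactory.create(this.getFontManager(), this.getTextureManager(),
                  256, 256, Typeface.create(Typeface.DEFAULT, Typeface.BOLD), 32);
          mFont.load();
          pCallback.onCreateResourcesFinished();          // tell the engine this step is done
      }

      @Override
      public void onCreateScene(OnCreateSceneCallback pCallback) throws Exception {
          mEngine.registerUpdateHandler(new FPSLogger());
          mCurrentScene = new Scene();
          mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f));
          pCallback.onCreateSceneFinished(mCurrentScene); // replaces the old "return scene"
      }

      @Override
      public void onPopulateScene(Scene pScene, OnPopulateSceneCallback pCallback) throws Exception {
          pCallback.onPopulateSceneFinished();
      }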

    Read the article

  • How are larger games organized?

    - by Matthew G.
    I'm using Java, but the language is probably irrelevant here. I'd like to create an economy based on an ancient civilization, and I'm not sure how to design it. If I were working on a smaller game, like a copy of "Space Invaders", I'd have no problem structuring it like this: Game - Main Control Class -- Graphics Class -- Player Class -- Enemy Class. I'd pass the graphics class to both the player and enemy classes so they could call graphics functions. I don't understand how I'd do this for larger projects. Do I create a country class that contains a bunch of towns? Do the towns contain a lot of building classes, which in turn contain classes of people? Do I make a pathfinding class that the player can access to get around? How exactly do I structure this and pass all these references around? Thanks.
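
    One common way to structure this (a sketch of plain composition, not the only valid design): the world owns countries, countries own towns, towns own buildings, and cross-cutting services such as pathfinding are shared by reference rather than owned by any one entity. All class names below are invented for the example:

      import java.util.ArrayList;
      import java.util.List;

      class Person { String name; }

      class Building { final List<Person> occupants = new ArrayList<>(); }

      class Town { final List<Building> buildings = new ArrayList<>(); }

      class Country { final List<Town> towns = new ArrayList<>(); }

      // A shared service: entities ask it for routes instead of owning it.
      class Pathfinder {
          List<int[]> findPath(int[] from, int[] to) {
              List<int[]> path = new ArrayList<>();
              path.add(from);
              path.add(to);            // placeholder "path"
              return path;
          }
      }

      public class World {
          final List<Country> countries = new ArrayList<>();
          final Pathfinder pathfinder = new Pathfinder(); // one instance, passed around by reference

          public static void main(String[] args) {
              World world = new World();
              Country c = new Country();
              c.towns.add(new Town());
              world.countries.add(c);
              System.out.println("towns in first country: " + c.towns.size());
          }
      }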

    Read the article

  • How does opengl-es 2 assemble primitives?

    - by stephelton
    Two things I'm quite confused about. 1) OpenGL ES 2.0 creates primitives before the vertex shader is invoked. Why, then, does it not automatically provide the vertex shader the position of the vertex? 2) OpenGL ES 2.0 supports glDrawElements(), but it does not support glEnableClientState() or GL_VERTEX_ARRAY, so how can this call possibly be used to construct primitives? NOTE: this is OpenGL ES 2.0, NOT normal OpenGL! Thanks!
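
    For question 2, the ES 2.0 replacement for glEnableClientState()/GL_VERTEX_ARRAY is generic vertex attributes: positions are bound to a vertex-shader attribute with glVertexAttribPointer, the attribute is enabled, and glDrawElements then assembles primitives by walking the index buffer and running the vertex shader on the vertices it pulls in. A rough Android-flavoured Java sketch (program and buffer setup omitted; "a_Position" is whatever the attribute is called in your shader):

      import android.opengl.GLES20;
      import java.nio.FloatBuffer;
      import java.nio.ShortBuffer;

      public final class DrawMesh {
          // Feed positions to a shader attribute and draw indexed triangles.
          public static void draw(int program, FloatBuffer positions,
                                  ShortBuffer indices, int indexCount) {
              int posLoc = GLES20.glGetAttribLocation(program, "a_Position");
              GLES20.glEnableVertexAttribArray(posLoc);
              // 3 floats per vertex, tightly packed.
              GLES20.glVertexAttribPointer(posLoc, 3, GLES20.GL_FLOAT, false, 0, positions);
              // Primitive assembly happens here, driven by the index buffer.
              GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexCount,
                      GLES20.GL_UNSIGNED_SHORT, indices);
              GLES20.glDisableVertexAttribArray(posLoc);
          }
      }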

    Read the article

  • What knowledge would I need to make a good simulation game?

    - by Skeith
    I have an idea for a game like Theme Park, but I don't know how simulation games are made. I am not some noob on his first game, so I'd appreciate constructive answers instead of "it's hard, don't do it". What I want to know is how simulation game mechanics are put together. I figure it would be heavier on the AI than normal games, and since I don't know much about AI, I'd like to know some programming techniques I should look into for this style of game. Specific techniques please, not just a book on AI. What sort of architecture would be used? I guess it would have some sort of probability engine with pre-designed events that are triggered based on the AI state. Would it use an FSM or be purely event driven? Any information on how a sim game functions would be cool.

    Read the article

  • Using Instance Nodes, worth it?

    - by Twitch
    I am making a 2D game with various environments containing lots and lots of objects. There is a forest scene with around 1200 objects in total (trees mainly), of which around 100 are visible to the camera at any given time as you move through the level. These comprise around 20 different kinds of trees and other props. Each object is usually 2-6 triangles with a transparent texture. My developer asked me to replace each object in the scene with a node and keep only a minimal number of actual objects, which would be 300+ or so(?), since there are a few modified unique meshes. That way he can instance the actual objects to keep the game light. Is this actually effective? And if so, by how much? I've read about draw calls and such, and I suppose that if I combine each texture (10 kinds of trees) into 1 mesh it will have the same effect?

    Read the article

  • Dynamic content reloading

    - by Kikaimaru
    Is there a relatively simple way to dynamically reload content files (i.e. effect files)? I know I can do the following: detect a change to the file, run the content pipeline to rebuild that specific file, unload ALL content that was loaded, load all content again, and use double references to reference content files. The problem is with step 3 (and step 2 isn't that nice either). But I need to unload everything, because if I have a model Hero.x which references the Model.fx effect, and I change the Model.fx file, I need to reload the Hero.x file, which will then call LoadExternalReference on Model.fx. So I guess the question is: did someone manage to make this work without rewriting the whole ContentManager (and every ContentReader) and tracking calls to LoadExternalReference?

    Read the article

  • What are the steps taken by this GLSL code?

    - by user827992
    void main(void) { vec2 pos = mod(gl_FragCoord.xy, vec2(50.0)) - vec2(25.0); float dist_squared = dot(pos, pos); gl_FragColor = (dist_squared < 400.0) ? vec4(.90, .90, .90, 1.0) : vec4(.20, .20, .40, 1.0); } taken from http://people.freedesktop.org/~idr/OpenGL_tutorials/03-fragment-intro.html Now, this looks really trivial and simple, but my problem is with the mod function. This function is taking two vec2s as inputs, but according to the official documentation it is supposed to take just two scalar arguments; it also makes implicit use of the floor function, which again only accepts one scalar argument. Can someone explain this to me step by step and point out what I'm not getting here? Is it some kind of OpenGL trick? An OpenGL math trick? In the GLSL docs I always find an explicit reference to the types accepted by a function, and vec2 isn't listed there.
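
    For reference, GLSL's built-in math functions are declared for the generic type genType, which covers float, vec2, vec3 and vec4, and they operate component by component; mod(x, y) is defined as x - y * floor(x / y) applied to each component. A small Java sketch of what mod(pos, vec2(50.0)) - vec2(25.0) works out to (names are mine):

      // The same scalar formula applied to each component independently.
      public class Vec2Mod {
          static double modScalar(double x, double y) {
              return x - y * Math.floor(x / y);
          }

          static double[] mod(double[] v, double m) {
              return new double[] { modScalar(v[0], m), modScalar(v[1], m) };
          }

          public static void main(String[] args) {
              double[] fragCoord = { 137.5, 62.0 };
              double[] pos = mod(fragCoord, 50.0);        // component-wise mod
              pos[0] -= 25.0;                             // the "- vec2(25.0)" part
              pos[1] -= 25.0;
              System.out.println(pos[0] + ", " + pos[1]); // 12.5, -13.0
          }
      }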

    Read the article

  • Long running calculation on background thread

    - by SundayMonday
    In my Cocos2D game for iOS I have a relatively long running calculation that happens at a fairly regular interval (every 1-2 seconds). I'd like to run the calculation on a background thread so the main thread can keep the animation smooth. The calculation is done on a grid. Average grid size is about 100x100 where each cell stores an integer. Should I copy this grid when I pass it to the background thread? Or can I pass a reference and just make sure I don't write to the grid from the main thread before the background thread is done? Copying seems a bit wasteful but passing a reference seems risky. So I thought I'd ask.
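
    For scale, a 100x100 grid of ints is only about 40 KB, so taking a snapshot every 1-2 seconds is cheap and removes any need for locking: copy on the main thread, hand the copy to the worker, and keep mutating the original freely. The question is about Cocos2D on iOS, so the sketch below is just the language-agnostic idea expressed in Java (names invented):

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      // Snapshot-then-compute: the worker only ever sees its private copy.
      public class GridWorker {
          private final ExecutorService worker = Executors.newSingleThreadExecutor();

          public void startCalculation(int[][] grid) {
              final int[][] snapshot = new int[grid.length][];
              for (int i = 0; i < grid.length; i++) {
                  snapshot[i] = grid[i].clone();   // ~40 KB total for a 100x100 int grid
              }
              worker.submit(() -> {
                  long sum = 0;                    // stand-in for the real calculation
                  for (int[] row : snapshot)
                      for (int v : row) sum += v;
                  System.out.println("background result: " + sum);
              });
          }
      }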

    Read the article

  • What tools should I consider if my aim is to make a game available to as many platforms as possible?

    - by Kensai
    We're planning on developing a 2D, grid-based puzzle game, and although it's still very early in the planning stages, we'd like to make our decisions well from the beginning. Our strategy will be to make the game available on as many platforms as possible, for example PCs (Windows, Mac and/or Linux), mobile phones (iPhone and/or Android-based phones), and game consoles (XBLA and/or PSN). PC will have an emphasis, but I believe that's the most flexible platform, so that shouldn't be a problem. So, what programming language, game engine, frameworks and all-around tools would be best suited to our goal? P.S.: I'm betting a single set of tools won't cover ALL of them, and that there will still be some kind of "translating" effort for some platforms, but we'd like to know which are the most far-reaching.

    Read the article

  • Which isometric angles can be mirrored (and otherwise transformed) for optimization?

    - by Tom
    I am working on a basic isometric game and am struggling to find the correct mirrors. A mirror can be any form of transform. I have managed to get SE out of SW by scaling the sprite by -1 on the X axis, and the same applies to the NE angle. Something is bugging me: I feel I should also be able to mirror N to S, but I cannot manage to pull this one off. Am I just too sleepy and trying to do the impossible, or is a basic -1 scale on the Y axis not enough? What is the commonly used mirror table for optimizing 8-angle (N, NE, E, SE, S, SW, W, NW) isometric sprites?

    Read the article

  • Need a good quality bitmap rotation algorithm for Android

    - by Lumis
    I am creating a kaleidoscopic effect on an Android tablet. I am using the code below to rotate a slice of an image, but as you can see in the image, rotating a bitmap 60 degrees distorts it quite a lot (red rectangles) – it smudges the image! I have set the dither and anti-alias flags but they do not help much. I think it is just not a very sophisticated bitmap rotation algorithm. canvas.save(); canvas.rotate(angle, screenW/2, screenH/2); canvas.drawBitmap(picSlice, screenW/2, screenH/2, pOutput); canvas.restore(); So I wonder if you can help me find a better way to rotate a bitmap. It does not have to be fast, because I intend to use the high-quality rotation only when I want to save the screen to the SD card - I would redraw the screen in memory before saving. Do you know any comprehensible or reproducible algorithm for bitmap rotation that I could program, or a library I could use? Or any other suggestion? EDIT: The answers below made me wonder whether Android had a bilinear or bicubic interpolation option, and after some searching I found that it does have its own version of it, called FilterBitmap. After applying it to my paint (pOutput.setFilterBitmap(true);) I get a much better result.
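
    Since the EDIT already found setFilterBitmap, here is roughly how the same filtering fits into producing the rotated copy for saving (a sketch; the final boolean argument of Bitmap.createBitmap asks for filtered resampling, and the angle/pivot values are placeholders):

      import android.graphics.Bitmap;
      import android.graphics.Matrix;

      public final class RotateBitmap {
          // Produce a filtered (bilinear) rotated copy of a bitmap, e.g. before saving.
          public static Bitmap rotated(Bitmap source, float degrees) {
              Matrix m = new Matrix();
              m.postRotate(degrees, source.getWidth() / 2f, source.getHeight() / 2f);
              // 'true' enables filtering while resampling, which smooths the
              // blockiness seen with a plain canvas.rotate() draw.
              return Bitmap.createBitmap(source, 0, 0,
                      source.getWidth(), source.getHeight(), m, true);
          }
      }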

    Read the article

  • Building dynamic bounding box hierarchies.

    - by adivasile
    I've been reading about collision detection, and I saw that the first part is a coarse phase which generates possible contacts using bounding box hierarchies. I understand the concept of splitting your objects into groups to speed up the detection phase, but I'm a little confused about how you actually build the hierarchy, and more so about what criteria are used to group objects together. Do I iterate through all the objects in the scene and check the distance between them to see where they should be inserted in the tree? Do you know of some resources that might shed some light on this topic for me?
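
    A common way to build such a tree (one of several; this is a top-down sketch, not the method of any particular book) is recursive: compute the bounding box of the whole group, split the objects into two halves along the box's longest axis at the median, and recurse until a leaf holds only a few objects; pairwise distance checks are not needed for the build itself. For example, over 2D points standing in for object centres (all names invented):

      import java.util.ArrayList;
      import java.util.Comparator;
      import java.util.List;

      class Node {
          List<double[]> objects;   // leaf payload (null for internal nodes)
          Node left, right;         // children (null for leaves)
      }

      public class BvhBuilder {
          static final int MAX_LEAF_SIZE = 2;

          static Node build(List<double[]> centres) {
              Node node = new Node();
              if (centres.size() <= MAX_LEAF_SIZE) {
                  node.objects = centres;
                  return node;
              }
              // Extent of the group on each axis; split along the longer one.
              double minX = Double.MAX_VALUE, maxX = -Double.MAX_VALUE;
              double minY = Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
              for (double[] c : centres) {
                  minX = Math.min(minX, c[0]); maxX = Math.max(maxX, c[0]);
                  minY = Math.min(minY, c[1]); maxY = Math.max(maxY, c[1]);
              }
              int axis = (maxX - minX >= maxY - minY) ? 0 : 1;
              centres.sort(Comparator.comparingDouble((double[] c) -> c[axis]));
              int mid = centres.size() / 2;   // median split
              node.left = build(new ArrayList<>(centres.subList(0, mid)));
              node.right = build(new ArrayList<>(centres.subList(mid, centres.size())));
              return node;
          }

          public static void main(String[] args) {
              List<double[]> centres = new ArrayList<>(List.of(
                      new double[]{0, 0}, new double[]{1, 5}, new double[]{8, 1},
                      new double[]{9, 7}, new double[]{4, 4}));
              Node root = build(centres);
              System.out.println("root is a leaf: " + (root.objects != null));
          }
      }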

    Read the article

  • Blending effect on textures

    - by joecks
    Hi, I am trying to build screen animations like flickering, interlacing, and color separation, similar to old-style malfunctioning Amiga screens. The intended effects are shown in this video. I am using libgdx and I have already discovered the Universal Tween Engine, which helps a lot with building transitional animations, but how should I approach those blending effects? Any suggestions? I will make my question more specific once I've learned more about libgdx, but maybe you could give me some hints already. Thanks!
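
    One low-level building block that may help (only a piece of the puzzle; the flicker/interlace timing would still come from tween logic or a shader): SpriteBatch lets you change the blend function per draw, and additive blending of slightly offset, tinted copies already gives a rough colour-separation look. A sketch, assuming an existing SpriteBatch named batch and a Texture named frame:

      // Inside render():
      batch.begin();
      batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE);              // additive
      batch.setColor(1f, 0f, 0f, 0.5f);                                    // red ghost, offset right
      batch.draw(frame, 2f, 0f);
      batch.setColor(0f, 1f, 1f, 0.5f);                                    // cyan ghost, offset left
      batch.draw(frame, -2f, 0f);
      batch.setColor(1f, 1f, 1f, 1f);
      batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA); // restore default
      batch.end();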

    Read the article

  • How are trajectories calculated and transmitted to other players in multiplayer?

    - by giulio
    I play a lot of COD4 and can see tracers for gunfire, missiles, care packages falling from helicopters, etc. There is a lot of activity. I am curious to know the algorithm (at a high level) that manages all this action when you have 20 people on a map shooting each other to death. This question touches on the subject but doesn't ask for a more in-depth answer as to how developers go about calculating and transmitting movement and collision detection for projectiles, be they missiles/bullets or any other objects flying through the air in real time.
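
    A common high-level answer (described generically; COD4's actual netcode isn't public) is that fast projectiles aren't streamed position-by-position at all: the server validates and broadcasts a single spawn event containing origin, velocity and a timestamp, and every client integrates the same trajectory locally, so each tracer costs one small message rather than a stream of updates, while the server keeps doing the authoritative hit detection. A toy Java sketch of that idea:

      // A projectile is fully described by its spawn message; any client can
      // reconstruct its position at any later time without more network traffic.
      public class Projectile {
          final double spawnTime;              // seconds on the shared/synced clock
          final double x0, y0, z0;             // origin
          final double vx, vy, vz;             // velocity, units per second
          static final double GRAVITY = -9.8;  // only matters for arcing objects

          Projectile(double spawnTime, double x0, double y0, double z0,
                     double vx, double vy, double vz) {
              this.spawnTime = spawnTime;
              this.x0 = x0; this.y0 = y0; this.z0 = z0;
              this.vx = vx; this.vy = vy; this.vz = vz;
          }

          double[] positionAt(double now) {
              double t = now - spawnTime;
              return new double[] { x0 + vx * t,
                                    y0 + vy * t + 0.5 * GRAVITY * t * t,
                                    z0 + vz * t };
          }

          public static void main(String[] args) {
              Projectile p = new Projectile(10.0, 0, 2, 0, 40, 5, 0);
              double[] pos = p.positionAt(10.5);
              System.out.printf("x=%.1f y=%.2f z=%.1f%n", pos[0], pos[1], pos[2]);
          }
      }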

    Read the article

  • Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when I was creating my rivers. I would set the River's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here's 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used: This code here creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and it's 4 vertices public DTileMap (int size_x, int size_y) { this.size_x = size_x; this.size_y = size_y; //Initialize Map_Data Array of Tile Objects map_data = new Tile[size_x, size_y]; for (int j = 0; j < size_y; j++) { for (int i = 0; i < size_x; i++) { map_data [i, j] = new Tile (); map_data[i,j].coordinate.x = (int)i; map_data[i,j].coordinate.y = (int)j; map_data[i,j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -j * GTileMap.TileMap.tileSize); map_data[i,j].vertices[1] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[3] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); } } This code sets the river tiles to height 0 foreach (Tile t in map_data) { if (t.realType == "Water") { t.vertices[0].y = 0f; t.vertices[1].y = 0f; t.vertices[2].y = 0f; t.vertices[3].y = 0f; } } And below is the code to generate the actual graphics from the data: public void BuildMesh () { DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z); int numTiles = size_x * size_z; int numTris = numTiles * 2; int vsize_x = size_x + 1; int vsize_z = size_z + 1; int numVerts = vsize_x * vsize_z; // Generate the mesh data Vector3[] vertices = new Vector3[ numVerts ]; Vector3[] normals = new Vector3[numVerts]; Vector2[] uv = new Vector2[numVerts]; int[] triangles = new int[ numTris * 3 ]; int x, z; for (z=0; z < vsize_z; z++) { for (x=0; x < vsize_x; x++) { normals [z * vsize_x + x] = Vector3.up; uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z); } } for (z=0; z < vsize_z; z+=1) { for (x=0; x < vsize_x; x+=1) { if (x == vsize_x - 1 && z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3]; } else if (z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2]; } else if (x == vsize_x - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1]; } else { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0]; vertices [z * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [1]; vertices [(z+1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2]; vertices [(z+1) * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [3]; } } } } for (z=0; 
z < size_z; z++) { for (x=0; x < size_x; x++) { int squareIndex = z * size_x + x; int triOffset = squareIndex * 6; triangles [triOffset + 0] = z * vsize_x + x + 0; triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0; triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 3] = z * vsize_x + x + 0; triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 4] = z * vsize_x + x + 1; } } // Create a new Mesh and populate with the data Mesh mesh = new Mesh (); mesh.vertices = vertices; mesh.triangles = triangles; mesh.normals = normals; mesh.uv = uv; // Assign our mesh to our filter/renderer/collider MeshFilter mesh_filter = GetComponent<MeshFilter> (); MeshCollider mesh_collider = GetComponent<MeshCollider> (); mesh_filter.mesh = mesh; mesh_collider.sharedMesh = mesh; calculateMeshTangents (mesh); BuildTexture (map); } If this looks familiar to you, its because i got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.

    Read the article

  • When dealing with a static game board, what are some methods to make it more interesting?

    - by Ólafur Waage
    Let's say you have a game board that you look at. It does not move, but there is some action going on - for example Chess, Checkers, or Solitaire. The game I'm working on is not one of these, but they're a good reference. What are some methods you can apply to the game or its design that increase its appeal to the user? Of course you can make it prettier, but what are some other methods you can use? For example: visual cues, game design changes, user interface arrangement, etc.

    Read the article
