Search Results

Search found 25496 results on 1020 pages for 'development fabric'.

Page 443/1020 | < Previous Page | 439 440 441 442 443 444 445 446 447 448 449 450  | Next Page >

  • Is it good practice to optimize FPS even when it's above the lower limit that gives the illusion of movement?

    - by rraallvv
    I started out over 50 FPS on the iPhone, but now I'm below 30 FPS. I've seen most iPhone games clamped to either 60 or 30 FPS, even when 24 or less would give the illusion of movement. I've considered my limit to be a little bit over 15 FPS; in fact, my physics simulation is updated at that rate (15.84 steps/s), as that is the lowest that still gives fluid movement, while anything lower gives jerky motion. Is there a practical reason to clamp FPS way above the lower limit? Update: The following image may help clarify: I can independently set the physics simulation step, the frame rate, and the simulation update interval. My concern is why I should clamp any of those to values greater than the minimum. For instance, to conserve battery life I could just choose the lower limits, but it seems that 60 and 30 FPS are the most-used values.
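
    For illustration (this sketch is not from the question; the function names and the 60 FPS cap are assumptions), a common pattern is to step the physics on a fixed accumulator while capping the render rate separately, in C++:

        #include <chrono>
        #include <thread>

        void stepPhysics(double dt) { /* advance the simulation by dt (stub) */ }
        void renderFrame()          { /* draw the scene (stub) */ }

        int main() {
            using clock = std::chrono::steady_clock;
            const double physicsDt = 1.0 / 15.84; // the question's physics step rate
            const double frameDt   = 1.0 / 60.0;  // assumed render cap
            double accumulator = 0.0;
            auto previous = clock::now();

            while (true) {
                auto frameStart = clock::now();
                accumulator += std::chrono::duration<double>(frameStart - previous).count();
                previous = frameStart;

                // Step physics zero or more times to catch up with real time.
                while (accumulator >= physicsDt) {
                    stepPhysics(physicsDt);
                    accumulator -= physicsDt;
                }
                renderFrame();

                // Sleep away the rest of the frame budget to cap the render rate;
                // a lower frameDt here trades smoothness for battery life.
                double spent = std::chrono::duration<double>(clock::now() - frameStart).count();
                if (spent < frameDt)
                    std::this_thread::sleep_for(std::chrono::duration<double>(frameDt - spent));
            }
        }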

    Read the article

  • In a browser, is it best to use one huge spritesheet or many (10000) different PNGs?

    - by Nick
    I'm creating a game in jQuery, where I use about 10000 32x32 tiles. Until now, I have been using them all separately (no sprite sheet). An average map uses about 2000 tiles (sometimes re-used PNGs, but all separate divs), and the performance ranges from stable (Chrome) to a bit laggy (Firefox). Each of these divs is positioned absolutely using CSS. They do not need to be updated every tick, just when a new map is loaded. Would it be better for performance to use sprite-sheet methods for the divs, using CSS background positioning like gameQuery does? Thank you in advance!

    Read the article

  • Annoying flickering of vertices and edges (possible z-fighting)

    - by Belgin
    I'm trying to make a software z-buffer implementation; however, after I generate the z-buffer and proceed with the vertex culling, I get pretty severe discrepancies between the vertex depth and the depth of the buffer at their projected coordinates on the screen (i.e. zbuffer[v.xp][v.yp] != v.z, where xp and yp are the projected x and y coordinates of the vertex v), sometimes by a small fraction of a unit and sometimes by 2 or 3 units. Here's what I think is happening: each triangle's data structure holds the coefficients (a, b, c, d) of the plane defined by the triangle, computed from its three vertices via their normal:

        void computeNormal(Vertex *v1, Vertex *v2, Vertex *v3, double *a, double *b, double *c)
        {
            double a1 = v1->x - v2->x;
            double a2 = v1->y - v2->y;
            double a3 = v1->z - v2->z;
            double b1 = v3->x - v2->x;
            double b2 = v3->y - v2->y;
            double b3 = v3->z - v2->z;
            *a = a2*b3 - a3*b2;
            *b = -(a1*b3 - a3*b1);
            *c = a1*b2 - a2*b1;
        }

        void computePlane(Poly *p)
        {
            double x = p->verts[0]->x;
            double y = p->verts[0]->y;
            double z = p->verts[0]->z;
            computeNormal(p->verts[0], p->verts[1], p->verts[2], &p->a, &p->b, &p->c);
            p->d = p->a * x + p->b * y + p->c * z;
        }

    The z-buffer just holds the smallest depth at the respective xy coordinate, found by somewhat casting rays to the polygon (I haven't quite got interpolation right yet, so I'm using this slower method until I do) and determining the z coordinate from the reversed perspective projection formulas, which I got from here:

        double z = -(b*Ez*y + a*Ez*x - d*Ez) / (b*y + a*x + c*Ez - b*Ey - a*Ex);

    where x and y are the pixel's coordinates on the screen; a, b, c, and d are the plane's coefficients; and Ex, Ey, and Ez are the eye's (camera's) coordinates. This last formula does not accurately give the exact vertices' z coordinates at their projected x and y coordinates on the screen, probably because of some floating-point inaccuracy (i.e. I've seen it return something like 3.001 when the vertex's z coordinate was actually 2.998). Here is the portion of code that hides the vertices that shouldn't be visible:

        for(i = 0; i < shape.nverts; ++i)
        {
            double dist = shape.verts[i].z;
            if(z_buffer[shape.verts[i].yp][shape.verts[i].xp].z < dist)
                shape.verts[i].visible = 0;
            else
                shape.verts[i].visible = 1;
        }

    How do I solve this issue? EDIT: I've implemented the near and far planes of the frustum, with 24-bit accuracy, and now I have some questions: Is this what I have to do in order to resolve the flickering? When I compare the z value of the vertex with the z value in the buffer, do I have to convert the z value of the vertex to z' using the formula, or do I convert the value in the buffer back to the original z, and how do I do that? What are some decent values for near and far? Thanks in advance.
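
    For illustration only (not from the question): one common mitigation for precision-driven mismatches like the 2.998 vs. 3.001 case above is to compare depths with a small tolerance instead of exactly. A minimal C++ sketch, with the epsilon value an assumption to be tuned to the scene's depth range:

        // Treat a vertex as visible unless it is clearly behind the stored
        // depth, absorbing small floating-point error in the ray-cast z.
        bool isVisible(double bufferZ, double vertexZ)
        {
            const double epsilon = 1e-2; // tolerance; tune to the scene's scale
            return vertexZ <= bufferZ + epsilon;
        }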

    Read the article

  • Game state management (Game, Menu, Titlescreen, etc)

    - by munchor
    Basically, in every single game I've made so far, I always have a variable like current_state, which can be "game", "titlescreen", "gameoverscreen", etc. Then in my Update function I have a huge:

        if current_state == "game"
            game stuff...
        else if current_state == "titlescreen"
            ...

    However, I don't feel like this is a professional/clean way of handling states. Any ideas on how to do this in a better way? Or is this the standard way?
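
    One common alternative is the State pattern, sketched below in C++ (the class names are invented, not from the question): each state owns its own update/draw logic and the main loop just delegates, so adding a state means adding a class rather than another else-if branch.

        #include <memory>

        // Each screen/state implements its own logic behind a common interface.
        struct GameState {
            virtual ~GameState() = default;
            virtual void update(float dt) = 0;
            virtual void draw() = 0;
        };

        struct TitleScreenState : GameState {
            void update(float dt) override { /* title screen logic */ }
            void draw() override { /* draw the title */ }
        };

        struct PlayState : GameState {
            void update(float dt) override { /* gameplay logic */ }
            void draw() override { /* draw the world */ }
        };

        struct Game {
            std::unique_ptr<GameState> current = std::make_unique<TitleScreenState>();

            // Changing state swaps one object instead of growing an if/else chain.
            void setState(std::unique_ptr<GameState> next) { current = std::move(next); }

            void update(float dt) { current->update(dt); }
            void draw() { current->draw(); }
        };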

    Read the article

  • 2D wave-like sprite movement XNA

    - by TheBroodian
    I'm trying to create a particle that will 'circle' my character. When the particle is created, it's given a random position in relation to my character, and a box to provide boundaries for how far left or right this particle should circle. When I use the word 'circle', I'm referring to a simulated circling; i.e., when moving to the right, the particle will appear in front of my character, and when passing back to the left, the particle will appear behind my character. That may have been too much context, so let me cut to the chase: in essence, the path I would like my particle to follow would be akin to a sine wave, with the left and right sides of the provided rectangle being the apexes of the wave. The trouble I'm having is that the position of the particle will be random, so it will never be produced at the same place within the wave twice, and I have no idea how to create this sort of behavior procedurally.
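
    A sketch of one way to do this (not from the question; the names and the depth test are assumptions): recover the wave phase from the random spawn position with an arcsine, so the particle joins the sine path mid-wave instead of snapping to it.

        #include <cmath>

        struct OrbitParticle {
            float centerX;   // character's x position (the wave's midline)
            float amplitude; // half the width of the bounding box
            float frequency; // radians per second
            float phase;     // where along the wave this particle starts

            OrbitParticle(float spawnX, float cx, float amp, float freq)
                : centerX(cx), amplitude(amp), frequency(freq) {
                // Clamp in case the random spawn lands slightly outside the box.
                float t = std::fmax(-1.0f, std::fmin(1.0f, (spawnX - cx) / amp));
                phase = std::asin(t);
            }

            // x follows a sine wave whose apexes are the box's left/right edges.
            float xAt(float elapsed) const {
                return centerX + amplitude * std::sin(phase + frequency * elapsed);
            }

            // Depth cue for the simulated circling: draw in front while moving
            // right, behind while moving left (the sign of the derivative).
            bool inFrontOfCharacter(float elapsed) const {
                return std::cos(phase + frequency * elapsed) > 0.0f;
            }
        };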

    Read the article

  • Frame timing for GLFW versus GLUT

    - by linello
    I need a library which ensures that the timing between frames is as constant as possible during an experiment in visual psychophysics. This is usually done by synchronizing the main loop with the refresh rate of the screen. For example, if my monitor runs at 60 Hz, I would like to specify that frequency to my framework. For example, if my game loop is the following:

        void gameloop()
        {
            // do some computation
            printDeltaT();
            // flip buffers
        }

    I would like to have a constant time interval printed. Is this possible with GLFW?
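
    GLFW can at least request this via vsync: glfwSwapInterval(1) makes each buffer swap wait for the monitor's vertical refresh, which is the usual way to lock the loop to 60 Hz. A minimal GLFW 3 sketch (error handling trimmed; note that drivers can override the swap interval, so measure rather than assume):

        #include <GLFW/glfw3.h>
        #include <cstdio>

        int main() {
            if (!glfwInit()) return 1;
            GLFWwindow* window = glfwCreateWindow(640, 480, "Timing", nullptr, nullptr);
            if (!window) { glfwTerminate(); return 1; }
            glfwMakeContextCurrent(window);

            // 1 = wait for one vertical refresh per swap (~16.67 ms at 60 Hz).
            glfwSwapInterval(1);

            double previous = glfwGetTime();
            while (!glfwWindowShouldClose(window)) {
                // ... do some computation and draw ...

                glfwSwapBuffers(window); // blocks until the next refresh with vsync on
                glfwPollEvents();

                double now = glfwGetTime();
                std::printf("deltaT: %f\n", now - previous); // printDeltaT() equivalent
                previous = now;
            }
            glfwTerminate();
        }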

    Read the article

  • How is constant buffer allocation handled in DX11?

    - by Marek
    I'm starting with DX11 and I'm not sure if I'm doing things right. I want to have both the pixel and vertex shader programs in one file. Both use some shared and some different constant buffers. So it looks like this:

        // Shader.fx
        cbuffer ForVS : register(b0)
        {
            float4x4 wvp;
        };

        cbuffer ForVSandPS : register(b1)
        {
            float4 stuff;
            float4 stuff2;
        };

        cbuffer ForVS2 : register(b2)
        {
            float4 stuff;
            float4 stuff2;
        };

        cbuffer ForPS : register(b3)
        {
            float4 stuff;
            float4 stuff2;
        };
        ....

    And in code I use:

        mContext->VSSetConstantBuffers(0, 1, bufferVS);
        mContext->VSSetConstantBuffers(1, 1, bufferVS_PS);
        mContext->VSSetConstantBuffers(2, 1, bufferVS2);
        mContext->PSSetConstantBuffers(1, 1, bufferVS_PS);
        mContext->PSSetConstantBuffers(3, 1, bufferPS);

    The numbering of the buffers in the PS is what bugs me: is it all right to bind arbitrary slots to shaders (in this example, 1 and 3)? Does that mean the pixel shader still uses just two buffers, or does it initialize the slot 0 and 2 buffer pointers to empty? Thank you.

    Read the article

  • What causes player box/world geometry glitches in old games?

    - by Alexander
    I'm looking to understand, and find the terminology for, what causes (or allows) players to interfere with geometry in old games. Famously, id Software's Quake 3 gave birth to a whole community of people breaking the physics by jumping, sliding, getting stuck, and launching themselves off points in geometry. Some months ago (though I'd be darned if I can find it again!) I saw a conference talk by Bungie's Vic DeLeon and a colleague in which Vic briefly discussed the issues he ran into while attempting to wrap 'collision' objects (please correct my terminology) around environment objects, so that players could appear as though they were walking on organic surfaces while not clipping through them or appearing to walk on air at certain points, due to complexities in the modeling. My aim is to compose a case study essay for university in which I can tackle this issue in games, drawing on early exploits and how techniques have changed both to address such exploits and to aid the gameplay itself. I have 3 current-day examples of where exploits still exist, but specifically targeting id Software clearly shows they've massively improved their techniques between Q3 and Q4. So in summary, with your help please, I'd like to gain a slightly better understanding of this issue as a whole (its terminology, mainly) so I can use terms and ask the right questions within the right contexts. In practical application, I know what it is and I know how to do it, but I don't have the benefit of level design knowledge yet and its technical widgety knick-knack terms =) Many thanks in advance, AJ

    Read the article

  • Exporting .jar files with Jarsplice

    - by SystemNetworks
    Help! I'm using Mac OS X 10.8 Mountain Lion and Eclipse, with the libraries Slick and LWJGL. When I first exported the project, it produced a .jar file. I followed some YouTube tutorials (different ones; they don't use Slick), and it worked for them. I don't know why it doesn't work for me. Should I include slick-util too? I didn't even use LWJGL, by the way. Please help!!!

    Jars I used (libraries):
      - Slick
      - LWJGL (I didn't use it)

    Tutorials I followed:
      - TheCodingUniverse (exporting)
      - TheNewBoston (the code and set-up)

    Programs I used:
      - Eclipse IDE
      - Java
      - Jarsplice

    No warnings or errors were found; it all looks perfect. But nothing shows up on the screen whenever I run the jar (after Jarsplice). Help!!!

    Read the article

  • Generating grammatically correct MUD-style attack descriptions

    - by Extrakun
    I am currently working on a text-based game, where the outcome of a combat round goes something like this:

        %attacker% inflicts a serious wound (12 points damage) on %defender%

    Right now, I just swap %attacker% with the name of the attacker, and %defender% with the name of the defender. However, while the substitution works, the descriptions don't read correctly. Since the game is all text, I don't want to resort to generic descriptions (such as "You use Attack on Goblin for 5 damage", which would arguably solve the problem). How do I generate correct descriptions for cases where %attacker% refers to "You", the player ("You inflicts..." is wrong), or to "Bees" or some other plural? I also somehow need to know when to prefix the name with "The": if %attacker% is a generic noun such as "Goblin", it reads weird compared to %attacker% being a proper name. Compare "Goblin inflicts..." vs. "Aldraic Swordbringer inflicts...". How do text-based games usually resolve such issues?
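
    A sketch of one common approach (the flags and helper names below are invented, not from the question): store grammatical metadata on each combatant and build the subject and verb from it.

        #include <string>

        struct Combatant {
            std::string name;          // "Goblin", "Bees", "Aldraic Swordbringer"
            bool isPlayer = false;     // render in second person: "You"
            bool isProperNoun = false; // proper names take no article
            bool isPlural = false;     // plural subjects take the plural verb form
        };

        std::string subjectOf(const Combatant& c) {
            if (c.isPlayer) return "You";
            if (c.isProperNoun) return c.name;
            return "The " + c.name; // generic nouns get an article
        }

        // Naive conjugation, enough for regular verbs like "inflict".
        std::string conjugate(const Combatant& c, const std::string& verbBase) {
            bool thirdSingular = !c.isPlayer && !c.isPlural;
            return thirdSingular ? verbBase + "s" : verbBase;
        }

        // subjectOf(player) + " " + conjugate(player, "inflict") -> "You inflict"
        // subjectOf(goblin) + " " + conjugate(goblin, "inflict") -> "The Goblin inflicts"
        // subjectOf(bees)   + " " + conjugate(bees, "inflict")   -> "The Bees inflict"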

    Read the article

  • HLSL How to flip geometry horizontally

    - by cubrman
    I want to flip my asymmetric 3D model horizontally in the vertex shader, about an arbitrary plane parallel to the YZ plane. This should switch everything on the model from the left-hand side to the right-hand side (like flipping it in Photoshop). Doing it in the pixel shader would have a huge computational cost (an extra RT, more fullscreen samples...), so it must be done in the vertex shader. Once more: this is NOT reflection; I need to flip THE WHOLE MODEL. I thought I could simply turn off culling and run the following code in the vertex shader:

        input.Position = mul(input.Position, World);
        // World[3][0] holds the x value of the model's pivot in the world.
        if (input.Position.x <= World[3][0])
            input.Position.x += World[3][0] - input.Position.x;
        else
            input.Position.x -= input.Position.x - World[3][0];
        ...

    The model is never drawn. Where am I wrong? I presume this messes up the index buffer. Can something be done about it? P.S. It's INSANELY HARD to format code here. Thanks to Panda, I found my problem. SOLUTION:

        // Do this before anything else in the vertex shader.
        Position.x *= -1; // to invert about the object's YZ plane

    Read the article

  • Designing a spawning system

    - by Vlad
    I played this game recently, http://www.kongregate.com/games/JuicyBeast/knightmare-tower, and I am amazed by the way different monsters are spawned. I have developed my own shooter game, and I added a time-based but also count-based spawning system. By count-based I mean: when there are 5 enemies on stage, stop spawning. But this is just one example. My question is: how are these spawning mechanisms built? Is there some pattern or theory behind how they are built? Are there some online materials/pages where I can improve my knowledge? To summarize, let's just say we have 6 types of monsters. I start the game and kill monsters of types 1, 2, and 3 all the time. Once I pass the first ceiling, like in the game above, monster type 4 appears. And so on. As I progress through the game, the same system of 6 monster types stays, but they become more and more resilient and dangerous, so I must also improve to be able to destroy the same monsters, now stronger. My question is simple: are there some theories built or written for developing this type of intelligent system? Note: This is a general question, not tied to some specific game or how exactly the game should work. I am capable of programming my own mechanisms, but I think I need some help. Thanks.
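
    One widely used pattern is a data-driven spawn table: each progression band unlocks a pool of monster types plus a difficulty multiplier, so designers tune data instead of code. A C++ sketch (every name and number below is invented for illustration):

        #include <cstdlib>
        #include <vector>

        // One row per progression band (each "ceiling" in the game above).
        struct SpawnBand {
            float minHeight;        // progress threshold that activates this band
            std::vector<int> types; // monster types allowed in this band
            float statMultiplier;   // same monsters, scaled tougher
        };

        const std::vector<SpawnBand> bands = {
            { 0.0f,   {1, 2, 3},       1.0f },
            { 500.0f, {1, 2, 3, 4},    1.3f },
            { 900.0f, {2, 3, 4, 5, 6}, 1.7f },
        };

        // Pick the deepest band reached so far...
        const SpawnBand& bandFor(float height) {
            const SpawnBand* best = &bands.front();
            for (const auto& b : bands)
                if (height >= b.minHeight) best = &b;
            return *best;
        }

        // ...and roll a random monster type from its pool.
        int rollMonsterType(float height) {
            const SpawnBand& b = bandFor(height);
            return b.types[std::rand() % b.types.size()];
        }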

    Read the article

  • The practical cost of swapping effects

    - by sebf
    Hello, I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. I wondered if someone could explain exactly what is costly about this process, and put, if possible, 'relatively' into context? For example, say I wanted to use a short shader to help with picking. I would:

      - Change the effect on every object, calculating a unique color to identify it and providing it to the shader.
      - Draw all the objects to a render target in memory.
      - Get the color from the target and use it to look up the selected object.

    What portion of the total time taken to complete that process would be spent swapping the shaders? My instinct says that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
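
    For context on the picking pass described above, the usual ID-to-color packing looks like this (a generic C++ sketch assuming 8 bits per channel, not tied to XNA specifically):

        #include <cstdint>

        struct PickColor { uint8_t r, g, b; };

        // Pack an object index into an RGB color for the picking render...
        PickColor encodeId(uint32_t id) {
            return { uint8_t(id & 0xFF),
                     uint8_t((id >> 8) & 0xFF),
                     uint8_t((id >> 16) & 0xFF) };
        }

        // ...and recover it from the pixel read back from the render target.
        uint32_t decodeId(PickColor c) {
            return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
        }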

    Read the article

  • how do I set quad buffering with jogl 2.0

    - by tony danza
    I'm trying to create a 3D renderer for stereo vision with quad buffering in Processing/Java. The hardware I'm using is ready for this, so that's not the problem. I had a stereo.jar library for JOGL 1.0 working with Processing 1.5, but now I have to use Processing 2.0 and JOGL 2.0, and therefore I have to adapt the library. Some things have changed in the source code of JOGL and Processing, and I'm having a hard time trying to figure out how to tell Processing I want to use quad buffering. Here's the previous code:

        public class Theatre extends PGraphicsOpenGL
        {
            protected void allocate()
            {
                if (context == null)
                {
                    // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them
                    GLCapabilities capabilities = new GLCapabilities();

                    // Starting in release 0158, OpenGL smoothing is always enabled
                    if (!hints[DISABLE_OPENGL_2X_SMOOTH])
                    {
                        capabilities.setSampleBuffers(true);
                        capabilities.setNumSamples(2);
                    }
                    else if (hints[ENABLE_OPENGL_4X_SMOOTH])
                    {
                        capabilities.setSampleBuffers(true);
                        capabilities.setNumSamples(4);
                    }
                    capabilities.setStereo(true);

                    // get a rendering surface and a context for this canvas
                    GLDrawableFactory factory = GLDrawableFactory.getFactory();
                    drawable = factory.getGLDrawable(parent, capabilities, null);
                    context = drawable.createContext(null);

                    // need to get proper opengl context since will be needed below
                    gl = context.getGL();

                    // Flag defaults to be reset on the next trip into beginDraw().
                    settingsInited = false;
                }
                else
                {
                    // The following three lines are a fix for Bug #1176
                    // http://dev.processing.org/bugs/show_bug.cgi?id=1176
                    context.destroy();
                    context = drawable.createContext(null);
                    gl = context.getGL();
                    reapplySettings();
                }
            }
        }

    This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying:

        PGraphicsOpenGL pg = ((PGraphicsOpenGL) g);
        pgl = pg.beginPGL();
        gl = pgl.gl;
        glu = pg.pgl.glu;
        gl2 = pgl.gl.getGL2();

        GLProfile profile = GLProfile.get(GLProfile.GL2);
        GLCapabilities capabilities = new GLCapabilities(profile);
        capabilities.setSampleBuffers(true);
        capabilities.setNumSamples(4);
        capabilities.setStereo(true);
        GLDrawableFactory factory = GLDrawableFactory.getFactory(profile);

    If I go on, I should do something like this:

        drawable = factory.getGLDrawable(parent, capabilities, null);

    but drawable isn't a field anymore, and I can't find a way to do it. How do I set quad buffering? If I try this:

        gl2.glDrawBuffer(GL.GL_BACK_RIGHT);

    it obviously doesn't work :/ Thanks.

    Read the article

  • How to emulate Mode 13h in a modern 3D renderer?

    - by David Gouveia
    I was indulging in nostalgia and remembered the first game I created, which used Mode 13h. This mode was really simple to work with, since it was essentially just an array of bytes with an element for each pixel on the screen (using an indexed color scheme). So I thought it might be fun to create something nowadays under these restrictions, but on modern hardware. The API could be as simple as:

        public class Mode13h
        {
            public byte[] VideoMemory = new byte[320 * 200];
            public Color[] Palette = new Color[256];
        }

    Now I'm wondering what would be the best way to get this data on the screen, using something like XNA / DirectX / OpenGL. The only solution I could think of was to create a texture with the same size as the VideoMemory array, write the contents of VideoMemory to it every frame, then render that texture in a fullscreen quad with the correct aspect ratio, using point texture filtering for that retro look. Is there a better way?
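
    The texture approach described above is workable; in OpenGL terms it might look like the following sketch (texture creation and the per-frame upload only; the fullscreen quad and context setup are omitted, and the member layout is an assumption):

        #include <GL/gl.h>
        #include <cstdint>

        struct Mode13h {
            uint8_t  videoMemory[320 * 200]; // one palette index per pixel
            uint32_t palette[256];           // RGBA palette entries
            uint32_t rgba[320 * 200];        // expanded upload buffer
            GLuint   texture = 0;

            void create() {
                glGenTextures(1, &texture);
                glBindTexture(GL_TEXTURE_2D, texture);
                // Point filtering keeps the chunky retro look when scaled up.
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 320, 200, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            }

            void upload() {
                // Expand the indexed bytes through the palette, then re-upload.
                for (int i = 0; i < 320 * 200; ++i)
                    rgba[i] = palette[videoMemory[i]];
                glBindTexture(GL_TEXTURE_2D, texture);
                glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 200,
                                GL_RGBA, GL_UNSIGNED_BYTE, rgba);
                // ...then draw the texture on a 4:3 fullscreen quad.
            }
        };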

    Read the article

  • Multi-Threaded Pipelined Game Engine Data Synchronization Questions

    - by Douglas
    Let's say I'm setting up a worker-pool-based game engine with pipelining, with 4 stages in the pipeline, as such:

      - Stage 1: Physics
      - Stage 2: AI/Input
      - Stage 3: Game Logic
      - Stage 4: Rendering

    Now let's say that the physics detects a collision between a bullet and a character in stage 1. Two frames later, the game logic may choose to remove that bullet from the simulation; however, none of the other pipeline stages' copies of the data will get this information. How are things like this typically handled? Do you generally make changes like this to every pipeline stage's data at the end of a frame?
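
    A sketch of one common answer (a deferred command queue; everything here is illustrative, not from the question): stages record changes as commands during the frame, and a single end-of-frame sync point applies them to every stage's copy so all snapshots agree before the next frame begins.

        #include <functional>
        #include <mutex>
        #include <vector>

        struct WorldState { /* entities, bullets, ... */ };

        class CommandQueue {
            std::mutex mutex;
            std::vector<std::function<void(WorldState&)>> commands;
        public:
            // Any stage may record a change mid-frame (e.g. "remove bullet 42").
            void push(std::function<void(WorldState&)> cmd) {
                std::lock_guard<std::mutex> lock(mutex);
                commands.push_back(std::move(cmd));
            }

            // At the end-of-frame sync point, replay the same commands against
            // each stage's copy of the data, then discard them.
            void applyTo(const std::vector<WorldState*>& stageCopies) {
                std::lock_guard<std::mutex> lock(mutex);
                for (WorldState* state : stageCopies)
                    for (auto& cmd : commands)
                        cmd(*state);
                commands.clear();
            }
        };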

    Read the article

  • Everything "invisible" when launching map from launcher

    - by Predanoob
    Excuse my noobiness, but I downloaded the SDK and tried the map Forest from within the editor, and it worked fine. However, if I launch it from the launcher using the console, it looks like this: http://i.stack.imgur.com/U7rPU.jpg. I can use the weapons (although they are invisible) and interact with objects despite not seeing them. I also tried my own map; same problem. What am I doing wrong?

    Read the article

  • Love2D engine for Lua; What about 3D?

    - by shadowprotocol
    Lua has been really awesome to learn; it's so simple. I really enjoy scripting languages, and I had an equally enjoyable time learning Python. The Love engine, http://love2d.org/, is really awesome, but I'm looking for something that can handle 3D as well. Is there anything that accommodates 3D in Lua? I'm still intrigued by the particle system of LOVE anyway, and may just turn my idea into a 2D project with particle lighting :) EDIT: I removed comments about Python; I want this to be a Lua topic. Thanks

    Read the article

  • SDL2 with OpenGL -- weird results, what's wrong?

    - by ber4444
    I'm porting an app to iOS and therefore need to upgrade it from SDL 1.2 to SDL2 (so far I'm testing it as an OS X desktop app only). However, when running the code with SDL2, I get weird results, as shown in the second image below (the first image is how it looks, correctly, with SDL 1.2). The single changeset that causes this is this one; do you see something obviously wrong there, or does SDL2 have some OpenGL nuances I'm unaware of? My SDL is based on changeset dd7e57847ea9 from HG (since then there is one "Allow specifying of OpenGL 3.2 Core Profile on Mac OS X" commit, not sure if that would help).

    Read the article

  • Handling window resize with arbitrary aspect ratios

    - by DormoTheNord
    I'm currently making a 2D game using SFML. I want the aspect ratio to be maintained when the user resizes the window. I also want the game to work with any arbitrary aspect ratio (like any media player would). Here is the code I have so far:

        void os::GameEngine::setCameraViewport()
        {
            sf::FloatRect tempViewport;
            float viewAspectRatio = (float)aspectRatio.x / aspectRatio.y;
            float screenAspectRatio = (float)gameWindow.getSize().x / gameWindow.getSize().y;

            if (viewAspectRatio > screenAspectRatio)
            {
                // Viewport is wider than screen, fit on X
            }
            else if (viewAspectRatio < screenAspectRatio)
            {
                // Screen is wider than viewport, fit on Y
            }
            else // window aspect ratio matches view aspect ratio
            {
                tempViewport.height = 1;
                tempViewport.width = 1;
                tempViewport.left = 0;
                tempViewport.top = 0;
            }

            viewport = tempViewport;
            camera.setViewport(viewport);
            gameWindow.setView(camera);
        }

    The problem is I'm having trouble with the logic to determine the properties of the viewport.
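
    For the two empty branches, the usual letterbox/pillarbox math looks like this (a sketch following the question's variable names; SFML viewports are expressed as 0..1 fractions of the window):

        if (viewAspectRatio > screenAspectRatio)
        {
            // View is wider than the window: fill X, letterbox top and bottom.
            tempViewport.width  = 1.f;
            tempViewport.height = screenAspectRatio / viewAspectRatio;
            tempViewport.left   = 0.f;
            tempViewport.top    = (1.f - tempViewport.height) / 2.f;
        }
        else if (viewAspectRatio < screenAspectRatio)
        {
            // Window is wider than the view: fill Y, pillarbox left and right.
            tempViewport.width  = viewAspectRatio / screenAspectRatio;
            tempViewport.height = 1.f;
            tempViewport.left   = (1.f - tempViewport.width) / 2.f;
            tempViewport.top    = 0.f;
        }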

    Read the article

  • What tools should I consider if my strategy is to make a game available to as many platforms as possible?

    - by Kenji Kina
    We're planning on developing a 2D, grid-based puzzle game, and although it's still very early in the planning stages, we'd like to make our decisions well from the beginning. Our strategy will be to make the game available on as many platforms as possible, for example PCs (Windows, Mac and/or Linux), mobile phones (iPhone and/or Android-based phones), and game consoles (XBLA and/or PSN). PC will have an emphasis, but I believe that's the most flexible platform, so that shouldn't be a problem. So, what programming language, game engine, frameworks, and all-around tools would be best suited to our goal? P.S.: I'm betting a single set of tools won't cover ALL of them, and that there will still be some kind of "translating" effort for some platforms, but we'd like to know what the most far-reaching options are.

    Read the article

  • How do I fix these compiler errors in Apple Crunch?

    - by BluFire
    I've been looking around and I finally got the full source code for a game called Apple Crunch from Google Code. But when I put it into my project, the source code included many errors in the class files, such as "cannot be resolved into a type", "the constructor is undefined", and "the method method() is undefined for the type Sprite" (in class.java). I downloaded the source directly from the command line and noticed errors popping up in my project. Since I couldn't figure out how to import the actual folder into my workspace (it wouldn't show up under existing projects), I decided to copy the folders into the project, overwriting what was there. The errors were still there, so I looked at the class files and noticed that the classes with errors extended RokonActivity. I then proceeded to add the Rokon library to the libs folder in hopes of fixing the errors. Sadly it didn't work, and now I don't know what to do to fix the errors. How do I fix the errors without having to manually change the code? The source code should be fully functional, so why are there errors?

    Read the article

  • Is there any way to enable the HiDef graphics profile property on a Silverlight 5 3d Web App?

    - by Daniel
    I have an XNA Windows game that uses the HiDef profile to load complex .fbx and .obj files. When I try to move it over to a Silverlight 3D web app, Silverlight seems to want to use only the Reach profile, and I get an error that the Reach profile does not support a sufficient number of primitive draws per call. Is there any way to change to HiDef in Silverlight 5? It is not in the project properties, and attempting to change it in mainpage.xaml.cs only gives me the option of setting it to Reach.

    Read the article

  • libgdx arrays onTouch() method and delays for objects

    - by johnny-b
    I am trying to create random bullets, but it is not working for some reason. Also, how can I add a delay so that the bullets come every 30 seconds or 1 minute? The onTouch() method does not work either; it is not taking the bullet away. Should I put the array in the GameRender class? Thanks.

        public class GameWorld
        {
            public static Ball ball;
            private Bullet bullet1;
            private ScrollHandler scroller;
            private Array<Bullet> bullets = new Array<Bullet>();

            public GameWorld()
            {
                ball = new Ball(280, 273, 32, 32);
                bullet = new Bullet(-300, 200);
                scroller = new ScrollHandler(0);
                bullets.add(new Bullet(bullet.getX(), bullet.getY()));
                bullets = new Array<Bullet>();

                Bullet bullet = null;
                float bulletX = 0.0f;
                float bulletY = 0.0f;
                for (int i = 0; i < 10; i++)
                {
                    bulletX = MathUtils.random(-10, 10);
                    bulletY = MathUtils.random(-10, 10);
                    bullet = new Bullet(bulletX, bulletY);
                    bullets.add(bullet);
                }
            }

            public void update(float delta)
            {
                ball.update(delta);
                bullet.update(delta);
                scroller.update(delta);
            }

            public static Ball getBall()
            {
                return ball;
            }

            public ScrollHandler getScroller()
            {
                return scroller;
            }

            public Bullet getBullet1()
            {
                return bullet1;
            }
        }

    I also tried this (used in the GameRender class), and it is not working:

        Array<Bullet> enemies = new Array<Bullet>();

        // in the constructor of the class
        enemies.add(new Bullet(bullet.getX(), bullet.getY())); // this throws an exception for some reason

    This is in the render method:

        for(int i = 0; i < bullet.size; i++)
            bullet.get(i).draw(batcher);

    This I am using in any method that will let me, from the constructor to update to render:

        for(int i = 0; i < bullet.size; i++)
            bullet.get(i).update(delta);

    This is not taking the bullet out:

        @Override
        public boolean touchDown(int screenX, int screenY, int pointer, int button)
        {
            for(int i = 0; i < bullet.size; i++)
                if(bullet.get(i).getBounds().contains(screenX, screenY))
                    bullet.removeIndex(i--);
            return false;
        }

    Thanks for the help, anyone.

    Read the article

  • Using Behavior Trees and Events together

    - by weichsem
    I am beginning to work with behavior trees and am unsure how events should be handled within the tree. Let's say we have a space game where the player is dogfighting with a handful of other ships, some friendly, some not. The player destroys a ship, and the rest of the hostile ships should then start to retreat. How should the shipWasDestroyed event affect the other ships' behavior trees so that they start running the retreat behavior? One way I could think of doing this is to have all the conditions I care about be high-level nodes that effectively state-change the ship. This would mean I'd have to check all of these state-change conditions on every frame the behavior tree runs, even if they are very rare occurrences. I'd prefer not to do this, for performance and complexity reasons. From looking at the Halo papers on behavior trees, it seems they handled this by dynamically placing nodes into the tree when the event occurred. It seems like calculating where the new node should go could be problematic depending on the current state of the running behavior. How is this normally handled?
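
    A sketch of the simpler alternative hinted at above (an event sets a blackboard flag, and one cheap high-priority condition node reads it; all names here are invented): the event handler only touches data, never the tree's shape, and the per-frame cost is a single boolean check rather than re-evaluating world state.

        struct Blackboard {
            bool alliesRetreating = false;
        };

        enum class Status { Success, Failure, Running };

        struct Node {
            virtual ~Node() = default;
            virtual Status tick(Blackboard& bb) = 0;
        };

        // Placed under the root selector, ahead of the normal combat branch.
        struct RetreatCondition : Node {
            Status tick(Blackboard& bb) override {
                return bb.alliesRetreating ? Status::Success : Status::Failure;
            }
        };

        // The event handler flips the flag; the retreat branch wins next tick.
        void onShipWasDestroyed(Blackboard& bb, bool victimWasFriendlyToUs) {
            if (victimWasFriendlyToUs)
                bb.alliesRetreating = true;
        }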

    Read the article
