Search Results


  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:

    1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars separated by 31-px gaps. However, it appears blank.

    2) Next, I write 0x000000FF. By my apparently flawed understanding of bitmap mode, this should produce 8-px-wide bars with 24-px gaps. Instead, it produces a single white 1-px-wide bar.

    3) 0x55555555 is 01010101...01 in binary, so writing this value ought to create 1-px-wide vertical stripes with 1-px spacing. Instead, it creates a solid gray color.

    4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.

    I have reached the conclusion that, even in GL_BITMAP mode, the texturing unit is still interpreting each 8-bit byte as one element, despite what the spec seems to suggest. The fact that I can produce gray (when I expected to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, both support this conclusion. Questions:

    1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturing unit to treat each byte as eight elements, as the spec suggests?

    2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am, however, forcing a 3.0 context.)

    3) Am I better off unpacking the bitmap into a pixmap using a shader? That is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
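
    If a prerequisite call exists, it is the pixel-store unpack state that the legacy GL_BITMAP path depends on, together with the index-to-RGBA pixel maps through which GL_COLOR_INDEX data is translated. A minimal sketch, assuming a compatibility-profile driver that still implements this path at all:

        // Sketch only: legacy unpack state for 1-bit-per-pixel uploads.
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        // rows start on byte boundaries
        glPixelStorei(GL_UNPACK_LSB_FIRST, GL_FALSE); // bit 7 of each byte = leftmost pixel

        // GL_COLOR_INDEX data is looked up through the I-to-RGBA pixel maps,
        // so map index 0 to black and index 1 to white.
        const GLfloat bw[2]  = { 0.0f, 1.0f };
        const GLfloat one[2] = { 1.0f, 1.0f };
        glPixelMapfv(GL_PIXEL_MAP_I_TO_R, 2, bw);
        glPixelMapfv(GL_PIXEL_MAP_I_TO_G, 2, bw);
        glPixelMapfv(GL_PIXEL_MAP_I_TO_B, 2, bw);
        glPixelMapfv(GL_PIXEL_MAP_I_TO_A, 2, one);

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    If the upload still treats each byte as one texel after this, that points toward question 2): many modern drivers simply no longer implement bitmap unpacking, and the shader-based unpack from question 3) becomes the practical route.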


  • Purchasing a TV show adaptation rights, how does it work?

    - by Mikalichov
    Basically, I was thinking about a game based on a TV show, just for fun, and ended up thinking "well, it's not like it can be made anyway". Or can it? In the present situation, developing a game by myself/ourselves in my/our free time and then using crowdfunding to purchase the rights is not that crazy, if the show is really popular... and the rights not too expensive. Purchasing the rights to the whole show is obviously a sh!tload of money, but what about adaptation rights? What price range might they fall in? Is it a percentage of the full rights? Does it depend on the kind of adaptation (novel vs. toy vs. game)? PS: if it helps answer, I was thinking about an MLPFIM retro RPG. Please don't laugh at me.


  • Sprite batching in OpenGL

    - by Roy T.
    I've got a Java-based game with an OpenGL rendering front-end that draws a large number of sprites every frame (during testing it peaked at 700). The game is completely unoptimized: there is no spatial partitioning (so a sprite is drawn even if it isn't on screen), and every sprite is drawn separately, like this:

        graphics.glPushMatrix();
        {
            graphics.glTranslated(x, y, 0.0);
            graphics.glRotated(degrees, 0, 0, 1);
            graphics.glBegin(GL2.GL_QUADS);
            graphics.glTexCoord2f(1.0f, 0.0f);
            graphics.glVertex2d(half_size, half_size); // upper right
            // same for upper left, lower left, lower right
            graphics.glEnd();
        }
        graphics.glPopMatrix();

    Currently the game runs at around 25 FPS and is CPU-bound. I would like to improve performance by adding spatial partitioning (which I know how to do) and sprite batching. Not drawing off-screen sprites will help a lot, but since players can zoom out it won't help enough, hence the need for batching. However, sprite batching in OpenGL is a bit of a mystery to me. I usually work with XNA, where a few classes that do this are built in, but in OpenGL I don't know what to do. As for further optimization, the game I'm working on has a few interesting characteristics: a lot of sprites share the same texture, and all the sprites are square. Maybe these characteristics will help determine an efficient batching technique?
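
    The usual batching idea is to stop pushing a matrix per sprite: do the rotate/translate on the CPU, append all four corners of every sprite to one big vertex array, and issue a single draw call per texture. A minimal sketch in C++ with legacy GL client-side arrays (the same GL entry points exist under JOGL/LWJGL; every name here is illustrative):

        #include <cmath>
        #include <vector>
        #include <GL/gl.h>

        struct Vertex { float x, y, u, v; };
        static std::vector<Vertex> batch;

        // Append one rotated sprite to the batch (transform done on the CPU).
        void pushSprite(float x, float y, float halfSize, float degrees) {
            const float r = degrees * 3.14159265f / 180.0f;
            const float c = std::cos(r), s = std::sin(r);
            const float corners[4][2] = { {-halfSize, -halfSize}, { halfSize, -halfSize},
                                          { halfSize,  halfSize}, {-halfSize,  halfSize} };
            const float uvs[4][2]     = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
            for (int i = 0; i < 4; ++i) {
                Vertex v;
                v.x = x + corners[i][0] * c - corners[i][1] * s;
                v.y = y + corners[i][0] * s + corners[i][1] * c;
                v.u = uvs[i][0];
                v.v = uvs[i][1];
                batch.push_back(v);
            }
        }

        // Draw everything accumulated so far in one call, then reset.
        void flushBatch() {
            if (batch.empty()) return;
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &batch[0].x);
            glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &batch[0].u);
            glDrawArrays(GL_QUADS, 0, (GLsizei)batch.size());
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
            batch.clear();
        }

    Since many sprites share a texture, bucket the sprites by texture and call flushBatch() once per bucket; 700 sprites then cost a handful of draw calls instead of 700 matrix pushes. The "all squares" property helps too: the corner offsets can be precomputed per sprite size.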


  • Working Qt controls in a 3D environment

    - by Jay
    I need some advice from a Qt expert. The background: I have a 3D engine (Ogre3D) working in concert with Qt. The 3D content is displayed in a widget (using a custom OS window in the client area). I'm able to overlay arbitrary Qt widgets onto the 3D world using the widget render() method and a shared bitmap. This makes a great heads-up display, and I can use standard Qt style sheets and animation with this technique.

    My goal: I'd like to go a step further and allow the user to move these rendered widgets using the mouse. I'd like some advice on the best way to implement this. Possible solutions:

    1) The widgets in the HUD are not part of the widget parent/child chain; I render them manually, so they don't get events. I could add them to the chain so they get events in the usual way, but then I would need to change them to render into my shared bitmap instead of to the operating system. I looked at this once but couldn't find enough information to implement it.

    2) Capture mouse events in the 3D display widget and emit them to the child controls, basically creating my own event-handling chain.

    Any suggestions on how to implement this? I'm also considering switching to Qt 5, and I'm not sure how that might affect this decision.
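
    For option 2, Qt can deliver synthetic events to the manually rendered widgets with QApplication::sendEvent. A rough sketch (Qt 4 style, needing QApplication and QMouseEvent includes; overlayAt() is a hypothetical hit test against the HUD widgets, not a real Qt call):

        void DisplayWidget::mousePressEvent(QMouseEvent *event)
        {
            if (QWidget *overlay = overlayAt(event->pos())) {
                // Assumes overlay->geometry() is kept in this widget's coordinates.
                const QPoint local = event->pos() - overlay->geometry().topLeft();
                QMouseEvent forwarded(QEvent::MouseButtonPress, local,
                                      event->button(), event->buttons(),
                                      event->modifiers());
                QApplication::sendEvent(overlay, &forwarded); // synchronous delivery
                update();  // re-render the shared bitmap so the widget's reaction shows
                return;
            }
            QWidget::mousePressEvent(event);  // otherwise let the 3D view handle it
        }

    The same pattern repeats for move and release events, which is enough to implement dragging, and it survives a Qt 5 port largely unchanged. Qt's QGraphicsScene with QGraphicsProxyWidget does this kind of event routing for you, at the cost of adopting its rendering model.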


  • Building an instance system

    - by Kyle C
    I am looking into how to design an instance system for the game I am working on. I have always wondered how these are created in games like World of Warcraft (where instances == dungeons/raids/etc.): areas that are separated from players other than those in your group, but that have specific logic to them. Specifically, how can you reuse your existing code base and not have a bunch of checks everywhere?

        if (isInstance) do x; else do y;

    I don't know if this will make much of a difference to any answers, but we're using a pretty classic "object as pure aggregation" component system for our entities.
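
    One way to keep the shared code base free of those checks is to hang the differing behavior off the area itself, which fits an aggregation-style component system: shared systems query a rules component, and an instance simply carries a different implementation. A sketch with invented names:

        #include <cstdint>

        using GroupId = std::uint32_t;
        struct Player { GroupId group; GroupId groupId() const { return group; } };

        // Default rules for the open world.
        struct AreaRules {
            virtual ~AreaRules() = default;
            virtual bool canSee(const Player &a, const Player &b) const {
                (void)a; (void)b;
                return true;                 // everyone is visible in the open world
            }
        };

        // An instance overrides only the behavior that differs.
        struct InstanceRules : AreaRules {
            explicit InstanceRules(GroupId g) : group(g) {}
            bool canSee(const Player &a, const Player &b) const override {
                return a.groupId() == group && b.groupId() == group;
            }
            GroupId group;
        };

        // Shared systems ask the area's rules object and never branch on isInstance:
        //   if (area.rules().canSee(p1, p2)) { /* ... */ }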


  • How do I add a Rigidbody and a BoxCollider component to a Texture2D?

    - by gamenewdev
    I am making a snake game, basing it on a basic tutorial game which does no collision detection, wall checking, or different levels. The snake head, body pieces, food, and even the background are all made of Texture2Ds. I want the head of the snake to detect 2D collisions with them, but Rect.Contains isn't working. I'd prefer to detect collisions via OnTriggerEnter(), for which I need to add a BoxCollider to my snakeHead.


  • Why do I get a blinking screen when running LWJGL?

    - by SystemNetworks
    I didn't have any errors, but when I run my LWJGL game, it gives me a blinking screen. Here is the code:

        package L1F3;

        import org.lwjgl.opengl.Display;
        import org.lwjgl.opengl.DisplayMode;
        import org.lwjgl.LWJGLException;
        import static org.lwjgl.opengl.GL11.*;

        public class Main {
            public static void main(String[] args) {
                try {
                    Display.setDisplayMode(new DisplayMode(640, 480));
                    Display.setTitle("A fresh display!");
                    Display.create();
                } catch (LWJGLException e) {
                    e.printStackTrace();
                    Display.destroy();
                    System.exit(1);
                }
                while (!Display.isCloseRequested()) {
                    Display.update();
                }
                Display.destroy();
                System.exit(0);
            }
        }

    How do I stop the blinking screen? I was thinking it's my frame rate. I deleted Display.sync(), but it still gives me all white and black. Last time it didn't give me a blinking screen.

    EDIT: When I remove Display.update(), it gives me a perfect screen, with no blinking and no white. Will my game work without it? I can also close it perfectly.
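
    For what it's worth, the flicker is consistent with nothing ever being drawn: Display.update() swaps the front and back buffers, so two uninitialized buffers alternate on screen. A hedged guess at a fix is to clear the back buffer every frame before the swap, reusing the GL11 static import already in the code above:

        while (!Display.isCloseRequested()) {
            glClear(GL_COLOR_BUFFER_BIT); // start each frame from a known color
            // ... render here ...
            Display.update();             // swap buffers and process window events
            Display.sync(60);             // optional: cap the loop at 60 FPS
        }

    Removing Display.update() only hides the symptom: without it the window never swaps buffers or processes input events, so a real game does need it.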


  • Storing large array of tiles, but allowing easy access to data

    - by Cyral
    I've been thinking about this for a while. I have a 2D tile-based platformer in XNA with a large array of tile data, and I've been running into memory problems with large maps. (I will add chunks soon!) Currently, each Tile contains an Item, along with other properties like how it's rotated and whether it's in the foreground or background. An Item is static and has properties like the name, tooltip, type of item, how much light it emits, the collision it does to the player, etc. For example:

        public class Item
        {
            public static List<Item> Items;
            public Collision blockCollisionType;
            public string nameOfItem;
            public bool someOtherVariable; // etc.

            public static Item Air;
            public static Item Stone;
            public static Item Dirt;

            static Item()
            {
                Items = new List<Item>()
                {
                    (Stone = new Item()
                    {
                        nameOfItem = "Stone",
                        blockCollisionType = Collision.Solid,
                    }),
                    (Air = new Item()
                    {
                        nameOfItem = "Air",
                        blockCollisionType = Collision.Passable,
                    }),
                };
            }
        }

    The array of tiles contains a Tile for each point:

        public class Tile
        {
            public Item item; // what type it is
            public bool onBackground;
            public int someOtherVariables; // etc.
        }

    Now, most people would probably use an enum, or some form of ID, to identify blocks. My system is really nice for finding out about an item: I can simply do tiles[x,y].item.nameOfItem to get the name, for example. But I realized the Item data behind each tile is over 1000 bytes! Wow! What I'm looking for is a way to use an ID (an int or byte, depending on how many items there are) instead of an Item, but still have a method for retrieving data about the type of item a tile contains.
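
    What the question describes is essentially the flyweight pattern: each tile stores a small ID, and the heavyweight per-type data lives exactly once in a static table indexed by that ID. A sketch of the shape of it (written in C++ for brevity; the structure maps directly onto the C# classes above, and all names are illustrative):

        #include <cstdint>
        #include <string>
        #include <vector>

        struct Item {
            std::string name;
            bool solid;
            // ...lighting, tooltip, etc.: stored once per item *type*
        };

        // One entry per item type; a tile's id indexes into this.
        static const std::vector<Item> kItems = {
            { "Air",   false },
            { "Stone", true  },
            { "Dirt",  true  },
        };

        struct Tile {
            std::uint8_t itemId = 0;   // one byte instead of a full Item
            bool onBackground = false;
            const Item &item() const { return kItems[itemId]; }
        };

    Access stays almost as convenient (tile.item().name instead of tile.item.nameOfItem), while each tile shrinks to a couple of bytes.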


  • CUDA 4.1 Update

    - by N0xus
    I'm currently working on porting a particle system to update on the GPU via CUDA. I've already passed the required data over to the GPU, allocating and copying it from the host. When I build the project it all runs fine, but when I run it, the project says I need to allocate my h_position pointer. This pointer is my host pointer and is meant to hold the data. I know I need to pass the current particle positions to the cudaMemcpy call; they are currently stored in a list, with a for loop iterating over each particle and calling the following line of code:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    My current host-side CUDA code looks like this:

        float* h_position; // Your host pointer. This holds the data (I assume it's already filled with the data.)
        float* d_position; // Your device pointer, we will allocate and fill this
        float* d_velocity;
        float* d_time;

        int threads_per_block = 128; // You should play with this value
        int blocks = m_maxParticles/threads_per_block + ( (m_maxParticles%threads_per_block)?1:0 );

        const int N = 10;
        size_t size = N * sizeof(float);

        cudaMalloc( (void**)&d_position, m_maxParticles * sizeof(float) );
        cudaMemcpy( d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);

    Both of these are found inside my UpdateParticle() method. I had originally thought it would be a simple case of changing the h_position variable in the cudaMemcpy to m_particleList[i], but then I get the following error:

        no suitable conversion function from "ParticleSystemClass::ParticleType" to "const void *" exists

    I've probably messed up somewhere, but could someone please help fix the issues I'm facing? Everything else seems to run fine; it's just when I try to run the program that certain things hit the fan.
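
    The conversion error happens because m_particleList is an array of structs while cudaMemcpy wants a pointer to a contiguous block of plain floats. One hedged sketch of a fix is to stage the Y positions through a temporary host array (this keeps the poster's names and assumes only positionY is updated on the GPU):

        #include <vector>

        // Stage the Y positions into a contiguous float array.
        std::vector<float> h_positionY(m_maxParticles);
        for (int i = 0; i < m_maxParticles; ++i)
            h_positionY[i] = m_particleList[i].positionY;

        cudaMemcpy(d_position, h_positionY.data(),
                   m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);

        // ...kernel launch goes here...

        // Copy the updated values back and unpack them into the particle list.
        cudaMemcpy(h_positionY.data(), d_position,
                   m_maxParticles * sizeof(float), cudaMemcpyDeviceToHost);
        for (int i = 0; i < m_maxParticles; ++i)
            m_particleList[i].positionY = h_positionY[i];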


  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to handle the update of the particle system I've made with DirectX 10. So far, my update method for the particle system is one line of code within a for loop that makes each particle fall down the y-axis to simulate a waterfall:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    In my .cu file I've created a struct, copied from my particle class, which is as follows:

        struct ParticleType
        {
            float positionX, positionY, positionZ;
            float red, green, blue;
            float velocity;
            bool active;
        };

    I also have an UpdateParticle method in the .cu file, which takes the three main parameters my particles need to update themselves, based on the original line of code:

        __global__ void UpdateParticle(float* position, float* velocity, float frameTime)
        {
        }

    This is my first CUDA program and I'm at a loss as to what to do next. I've tried simply putting the particle-list line in the UpdateParticle method, but then the particles don't fall as they should. I believe this is because I am not calling something I need to in the class where the particle-fall code used to be. Could someone please tell me what I am missing to get it working as it should? If I am doing this completely wrong in general, please let me know as well.
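
    For what it might be worth, here is a sketch of how the kernel body and its launch could look. It adds a particle-count parameter to guard the final, partially filled block, and assumes d_position and d_velocity are device arrays holding one float per particle, filled as in the previous question:

        __global__ void UpdateParticle(float* positionY, const float* velocity,
                                       float frameTime, int count)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < count)   // guard the partial last block
                positionY[i] -= velocity[i] * frameTime * 0.001f;
        }

        // Host side, each frame: launch, then copy the results back so the
        // rendering code sees the updated positions.
        UpdateParticle<<<blocks, threads_per_block>>>(d_position, d_velocity,
                                                      frameTime, m_maxParticles);
        cudaMemcpy(h_position, d_position, m_maxParticles * sizeof(float),
                   cudaMemcpyDeviceToHost);

    The step the question seems to be missing is exactly this launch-plus-copy-back: running the line inside the kernel does nothing visible until the device results are copied back into the particle list that DirectX renders from.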


  • Why does the player fall down when in between platforms? Tile-based platformer

    - by inzombiak
    I've been working on a 2D platformer and have gotten the collision working, except for one tiny problem. My game is a tile-based platformer, and whenever the player is in between two tiles, he falls down. Here is my code; it's fired off using an ENTER_FRAME event, and it only handles collision from the bottom for now:

        var i:int;
        var j:int;
        var platform:Platform;
        var playerX:int = player.x/20;
        var playerY:int = player.y/20;
        var xLoopStart:int = (player.x - player.width)/20;
        var yLoopStart:int = (player.y - player.height)/20;
        var xLoopEnd:int = (player.x + player.width)/20;
        var yLoopEnd:int = (player.y + player.height)/20;
        var vy:Number = player.vy/20;
        var hitDirection:String;

        for(i = yLoopStart; i <= yLoopEnd; i++)
        {
            // Note: this inner loop compares j to xLoopStart rather than
            // xLoopEnd, so it only ever checks a single column of tiles.
            for(j = xLoopStart; j <= xLoopStart; j++)
            {
                if(platforms[i*36 + j] != null && platforms[i*36 + j] != 0)
                {
                    platform = platforms[i*36 + j];
                    if(player.hitTestObject(platform) && i >= playerY)
                    {
                        hitDirection = "bottom";
                    }
                }
            }
        }

    This isn't the final version; I'm going to replace hitTest with something more reliable. But this is an interesting problem and I'd like to know what's happening. Is my code just slow? Would firing the code off with a TIMER event fix it? Any information would be great.


  • Client Side Prediction

    - by user13842
    I have a question regarding client-side prediction. I've tried to search for topics about my specific problem but couldn't find anything that really answered it. Most tutorials and explanations assume that the client sends messages like "move my player up by 1 position", but what if I send messages like "set my player's velocity to x"? Since it's hard to explain in text, I made a graphic explaining my problem. The main problem is that, due to client-side prediction, the player sets his own velocity earlier than the server does, so if two different velocities overlap, the server gets out of sync. How can I tackle that problem? Thanks a lot. Graphic: http://img27.imageshack.us/img27/6083/clientpred.png (Ignore the 5.5cm)
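
    The standard answer (sketched below; this is the usual prediction-plus-reconciliation scheme rather than anything from the thread) is to treat "set my velocity to x" as just another sequence-numbered input command. The client predicts immediately, the server applies commands in arrival order and stays authoritative, and the client replays its unacknowledged commands on top of each authoritative snapshot. The plumbing functions here are placeholders:

        #include <cstdint>
        #include <deque>

        struct InputCmd    { std::uint32_t seq; float newVelocityX; float dt; };
        struct PlayerState { float x; float vx; };

        // Placeholders for the game's own plumbing:
        void applyToLocalPlayer(const InputCmd &cmd);
        void sendToServer(const InputCmd &cmd);
        void setLocalPlayer(const PlayerState &state);

        std::deque<InputCmd> pending;   // inputs the server has not confirmed yet
        std::uint32_t nextSeq = 0;

        void onLocalInput(float vx, float dt) {
            InputCmd cmd{nextSeq++, vx, dt};
            pending.push_back(cmd);
            applyToLocalPlayer(cmd);    // predict immediately on the client
            sendToServer(cmd);
        }

        // The server sends its state plus the last input sequence it processed.
        void onServerState(const PlayerState &s, std::uint32_t lastProcessedSeq) {
            setLocalPlayer(s);           // snap to the authoritative state...
            while (!pending.empty() && pending.front().seq <= lastProcessedSeq)
                pending.pop_front();     // ...drop inputs the server already applied...
            for (const InputCmd &cmd : pending)
                applyToLocalPlayer(cmd); // ...and replay the rest on top
        }

    Overlapping velocity changes then sort themselves out: whichever ordering the server chooses becomes the truth, and the replay step reconciles the client to it within one round trip.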


  • Can anyone point me to some open source DirectX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and principles of operation of game engines and rendering engines. That said, I want to run some experiments rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles, but I do need enough control to implement my own scene-graph algorithms and control the rendering pipeline very specifically. My ideal candidate would be a rendering engine or game engine with a modular design: ready to go out of the box, but simple enough that I could rip out some of the guts of the rendering management and implement my own. It's a tough call, because I'm right at the level where it's almost better to start from scratch, but there's no sense in having to build every basic thing, such as hierarchical transforms. I just want to work on rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I asked for DirectX in the title because I figured it would likely be better supported and less likely to run into some obscure, less-documented problem, but OpenGL would be acceptable if the recommended framework were definitely better than my other options.

    EDIT: I should add that I really want GPU tessellation support (to add to the density of geometry detail).


  • I am looking to make a spaceship tilt as it corners, but I can't get it to return

    - by bobthemac
    I am using the TL game engine. I am not allowed to use a physics engine, but I need to make the spaceship lean as it corners. I can make it lean, but I cannot make it return to its starting position. I have looked at implementing some kind of spring physics, but I don't understand it. Here is my code so far:

        if (myEngine->KeyHeld(Key_A))
        {
            car->RotateY(carSteer * frameTime);
            if (carSteer >= -carMaxSteer)
            {
                carSteer -= carSteerIncrement;
                car->RotateLocalZ(-(carSteer * frameTime));
            }
        }
        if (!myEngine->KeyHeld(Key_A))
        {
            if (carSteer < 0)
            {
                carSteer = 0;
            }
        }
        if (myEngine->KeyHeld(Key_D))
        {
            car->RotateY(carSteer * frameTime);
            if (carSteer <= carMaxSteer)
            {
                carSteer += carSteerIncrement;
                car->RotateLocalZ(-(carSteer * frameTime));
            }
        }
        if (!myEngine->KeyHeld(Key_D))
        {
            if (carSteer > 0)
            {
                carSteer = 0;
            }
        }

    All the functions I am calling are built into the engine; I did not write them. Any help would be appreciated. Thanks.
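
    Since carSteer already tracks the lean, one hedged sketch of the spring idea: when neither key is held, instead of snapping carSteer to 0, accelerate it back toward 0 with a damped spring, and feed only each frame's change into RotateLocalZ. Here rollVelocity is a new float starting at 0, and the constants are made-up tuning values:

        // Damped spring pulling the lean angle back to upright.
        const float springStrength = 8.0f;  // pull toward 0
        const float damping        = 4.0f;  // prevents endless wobbling

        if (!myEngine->KeyHeld(Key_A) && !myEngine->KeyHeld(Key_D))
        {
            float accel = -springStrength * carSteer - damping * rollVelocity;
            rollVelocity += accel * frameTime;
            float delta = rollVelocity * frameTime;
            carSteer += delta;          // angle eases toward 0
            car->RotateLocalZ(-delta);  // apply only this frame's change
        }

    Larger springStrength returns the ship faster; larger damping reduces the wobble at the end. Setting damping near 2*sqrt(springStrength) gives a return with no overshoot (critical damping).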


  • Game planning and software design? I feel that UML is not convenient

    - by user1542
    At my university they always emphasize and hype UML design and such, but I feel it does not work well for game structure design. Now I just want some professional advice on how I should begin designing my game. The story: I have some programming skill and have done many minor games, such as getting a 2D platformer working to some extent. The problem I find with my programs is poor design quality. After coding for a while, things start to break down due to poor planning (when I add a new feature, it tends to make me recode the whole program). However, planning everything out without a single design flaw is a bit too ideal. So, any advice on how I should plan my game? How should I put it into visible pictures, so that my friends and I can get an overview of the designs? I plan to start coding a game with a friend; this is going to be my first teamwork, so any professional advice would be a pleasure. Are there any alternatives to UML? Another question: what does "prototyping" normally look like?


  • Is there a good reason I shouldn't use a Java applet for a game?

    - by ryeguy
    I want to make a multiplayer browser-based game. The nice thing about using an applet is that I can write the client and the server in the same language (Java/Clojure/Scala/etc.). I know there's HTML5 and JavaScript, but server-side JavaScript isn't as mature as the JVM platform, and browser support is still kind of flaky. Applets don't seem to be widely used (except for RuneScape), but is there a reason they're unsuitable, or is it just because of the bad reputation they developed in their infancy?


  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer that does per-polygon rasterization using a floating-point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations.

    As this is the first time I'm making a threaded application, I'm not sure if the following method of thread synchronization is correct. First of all, I use two global variables to control the two threads: if a global variable is set to 1, the thread can start working; otherwise it must not work. Each thread checks this by running an infinite loop, and if it detects that the global variable has changed its value, it does its job and then sets the variable back to 0 again. The main program also uses an empty while loop to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure that contains information about the triangle about to be rasterized. The structures are filled in by the main program before setting the global variables to 1.

    My dilemma is that, while this process works under some conditions, it slows the program down considerably, and it also fails to run properly when compiled for Release in Visual Studio or with any sort of -O optimization in gcc (i.e. nothing on screen, even segfaults). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of the fscanf_s calls, but you can replace those with the usual fscanf if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it.

    My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiling for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
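
    Not an answer from the thread, but the symptoms match a textbook data race: plain global flags give the optimizer license to cache them in registers, so under -O the spin loops may never observe the other thread's write (hence the blank screen and crashes), and the busy-waiting itself burns the CPU time the rasterizer needs. A minimal sketch of the same hand-off using C++11 atomics, with the poster's work behind a placeholder function:

        #include <atomic>

        // Placeholder for the poster's per-thread work:
        void rasterizeEvenScanlines();

        std::atomic<bool> evenReady{false};
        std::atomic<bool> evenDone{false};

        void evenWorker() {
            for (;;) {
                while (!evenReady.load(std::memory_order_acquire)) { /* spin */ }
                evenReady.store(false, std::memory_order_relaxed);
                rasterizeEvenScanlines();
                evenDone.store(true, std::memory_order_release);
            }
        }

        // Main thread, per triangle (the odd-scanline thread mirrors this):
        //   fill in the shared triangle structure;
        //   evenDone = false; evenReady = true;
        //   while (!evenDone) { /* spin */ }

    Even with atomics, the spinning still occupies a core on each side; replacing the spin loops with std::condition_variable waits (or a persistent thread pool) removes the busy-waiting and, most likely, the slowdown.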


  • Can I Base My Game on Another Game and Earn Money? [closed]

    - by Neb
    Possible duplicate: How closely can a game resemble another game without legal problems

    I want to make a game similar to Pocket Tanks, but for Android, and then sell it. Since I am not directly copying anything from Pocket Tanks, but simply using it to give me ideas, I should be allowed to make it. I don't want to finish making my game and then get into legal trouble, so I wanted to ask here whether it's allowed. If this is the wrong place to ask, can you tell me where I could ask this question?


  • Using PhysX, how can I predict where I will need to generate procedural terrain collision shapes?

    - by Sion Sheevok
    In this situation, I have terrain height values I generate procedurally. For rendering, I use the camera's position to generate an appropriately sized height map. For collision, however, I need to have height fields generated in areas where objects may intersect. My current potential solution, which may be naive, is to iterate over all "awake" physics actors, use their bounds/extents and velocities to generate spheres in which they may reside after a physics update, then generate height values for ranges encompassing clustered groups of actors. Much of that data is likely already calculated by PhysX, however. Is there some API, maybe a set of queries, or even callbacks from the spatial system, that I could use to predict where terrain height values will be needed?
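
    Absent a ready-made query, the plan in the question can be sketched generically like this (no real PhysX calls; the actor accessors and the heightfield cache are stand-ins for whatever the engine exposes):

        // Each frame: ensure collision heightfields exist wherever an awake
        // actor could plausibly be within the next few physics steps.
        const float lookaheadSeconds = 0.25f; // tune to the update frequency
        const float safetyMargin     = 1.0f;  // world units of padding

        for (Actor* actor : awakeActors()) {
            AABB box = actor->worldBounds();
            // Stretch the box along the current velocity.
            box.include(box.translated(actor->linearVelocity() * lookaheadSeconds));
            box.inflate(safetyMargin);
            heightfieldCache.ensureCoverage(box); // generate samples if missing
        }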


  • Write depth buffer to texture

    - by innochenti
    I need to read the depth buffer on the GPU and write it to a texture. How can this be done? Here is how the texture for the depth buffer is created:

        depthBufferDesc.Width = screenWidth;
        depthBufferDesc.Height = screenHeight;
        depthBufferDesc.MipLevels = 1;
        depthBufferDesc.ArraySize = 1;
        depthBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        depthBufferDesc.SampleDesc.Count = 1;
        depthBufferDesc.SampleDesc.Quality = 0;
        depthBufferDesc.Usage = D3D10_USAGE_DEFAULT;
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL;
        depthBufferDesc.CPUAccessFlags = 0;
        depthBufferDesc.MiscFlags = 0;

        m_device->CreateTexture2D(&depthBufferDesc, NULL, &m_depthStencilBuffer);

    Also, another question: is it possible to bind the depth buffer texture as a sampler in the pixel shader?
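
    The usual D3D10 approach, as far as I know, is to create the depth resource with a typeless format and give it two views: a depth-stencil view for writing and a shader-resource view for sampling. A sketch, assuming m_dsv and m_srv are new member pointers and with error handling omitted:

        // Changes to the resource itself: typeless format, extra bind flag.
        depthBufferDesc.Format    = DXGI_FORMAT_R24G8_TYPELESS;
        depthBufferDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;

        // Depth-stencil view for rendering (resolves the typeless format to depth).
        D3D10_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
        dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
        dsvDesc.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
        m_device->CreateDepthStencilView(m_depthStencilBuffer, &dsvDesc, &m_dsv);

        // Shader-resource view for sampling the depth portion in a pixel shader.
        D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
        srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
        srvDesc.ViewDimension       = D3D10_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        m_device->CreateShaderResourceView(m_depthStencilBuffer, &srvDesc, &m_srv);

    That also answers the second question: yes. Bind m_srv as a pixel-shader resource, after unbinding the depth-stencil view (OMSetRenderTargets with a null DSV), and read the depth from the red channel.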


  • Assigning a colour to imported .obj files that are using the default material

    - by Salino
    I am having a problem with assigning a colour to the different meshes that I have on one object. The technique I have used is the first approach in this question: Is it possible to export a simulation (animation) from Blender to Unity? What I would like to do is the following. I have about 107 meshes that are different frames from the shape-key animation of my Blender model. I would like the first mesh to be bright green, with the colour turning white/greyish by about the 40th mesh. The best would be if I could assign every mesh a colour by hand; however, they all use the default material, and if I assign the object a colour, the whole "animation" ends up in that colour.


  • 2D, top-down map with different levels

    - by Ktash
    So, I'm creating a 2D, top-down, sprite-based (tiled) game, and right now I'm working on maps (well, a map editor at the moment, but it will be creating my maps, so basically the same thing).

    The scenario: I'm thinking about efficiency and creating a map in pieces. In each piece, I plan on having 'layers': basically, I plan on rendering it down to a 'below hero' layer and an 'above hero' layer, with the hero rendered in between. There will likely also be an 'on level with hero' layer, but I'm not quite there yet; I'm not even worrying about events or interaction yet, just looking to get a hero on the screen. Now, for movement, I obviously need to know which tiles can be moved through and in what direction. My plan at the moment is for each tile to get 8 bits (4 'can enter in direction' bits, 4 'can leave in direction' bits). This lets me limit movement and even allow one-way movement.

    The dilemma: this works great for a lot of scenarios, but I can't create maps that themselves have layers. A good example is a bridge, where the user can go under or over the bridge without invalid moves being allowed; I can't create a platform and allow movement underneath. These are things I would like to include in my game.

    My idea: in theory, I could allow multiple hero layers, and then multiple sets of 'below' and 'above' layers (sandwich layers). But this complicates my system, and it makes movement between maps potentially tricky (if the hero is on the third layer at the edge of a map, but that corresponds to the second layer on the other map, how can I allow or disallow movement?).

    My question: is there a better way to manage multiple maps with multiple levels like this, where a user's level may be 'connected' at different levels on different maps? Or even: am I doing this the hard way? Is there a more standard way to handle top-down 2D tiled maps that I am just not aware of?

    Things to note, or that might be helpful: this will be done in JavaScript (transferred around in JSON); state will need to be transferred quickly, so a map ID and x/y/direction should be enough to get a boolean 'can move' value; maps will not be a standard size (though they will be a fixed number of tiles); I am making an editor tool so that I can have others help; 'teleportation' locations will likely need to exist to get into building maps and to move between map sets (which will not necessarily be connected), but these have not been created yet (I'm lumping them in with events at the moment).
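
    To match the JavaScript/JSON stack, here is a TypeScript-flavored sketch (all names invented) of one way to model the bridge case: a tile owns an array of 'floors', each with its own movement mask and render layer, so 'under the bridge' and 'on the bridge' are two entries in the same tile rather than two maps:

        // Movement-bit constants for the 8-bit mask described above.
        const ENTER_N = 1 << 0, ENTER_E = 1 << 1, ENTER_S = 1 << 2, ENTER_W = 1 << 3;
        const LEAVE_N = 1 << 4, LEAVE_E = 1 << 5, LEAVE_S = 1 << 6, LEAVE_W = 1 << 7;

        // A tile is a stack of "floors"; a bridge tile has two entries.
        interface Floor {
          walkMask: number;     // the 8 enter/leave bits
          renderLayer: number;  // which below/above layer this floor draws into
        }
        type Tile = Floor[];

        // A move is legal if the current floor allows leaving in that direction
        // and the destination floor allows entering (bit conventions are up to you).
        function canMove(from: Floor, leaveBit: number,
                         to: Floor, enterBit: number): boolean {
          return (from.walkMask & leaveBit) !== 0 && (to.walkMask & enterBit) !== 0;
        }

        // Example: standing under a bridge (floor 0) and stepping east.
        const under: Floor = { walkMask: ENTER_W | LEAVE_E, renderLayer: 0 };
        const next: Tile = [{ walkMask: ENTER_E | LEAVE_W, renderLayer: 0 }];
        console.log(canMove(under, LEAVE_E, next[0], ENTER_E)); // true

    Map edges can then connect floors explicitly (an index pair or a small link table in the JSON) instead of assuming that layer numbers line up between maps.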


  • How do you deal with transitions in animating walking?

    - by Aerovistae
    I'm pretty new to this whole animating-models thing, just learning the ropes. I have a nice walking animation going, which I can loop while a character is walking, but what about when they stop walking? They could be at any point in the animation when the player stops. How do I get them to smoothly return to a standing-still position without having them snap into it? The same goes for starting to walk from a standing-still position. Do you need a separate animation? How is this dealt with?
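
    The usual answer is cross-fading rather than authoring a separate stopping animation: keep sampling the walk cycle from wherever it happens to be, and blend its output toward the idle pose over a fraction of a second. A sketch in engine-agnostic C++, where Pose, samplePose(), lerpPose(), and applyToSkeleton() are placeholders for whatever the engine provides:

        #include <algorithm>

        // Placeholders for the engine's animation API:
        struct Pose {};
        Pose samplePose(const char *clip, float time);
        Pose lerpPose(const Pose &a, const Pose &b, float t);
        void applyToSkeleton(const Pose &p);

        float fade = 0.0f;                     // 0 = full walk, 1 = full idle
        float walkTime = 0.0f;
        const float fadeSpeed = 1.0f / 0.25f;  // quarter-second transition

        void updateAnimation(float dt, bool walking) {
            fade += (walking ? -fadeSpeed : fadeSpeed) * dt;
            fade = std::clamp(fade, 0.0f, 1.0f);
            walkTime += dt;                              // the cycle keeps advancing
            Pose walk = samplePose("walk", walkTime);    // wherever the loop happens to be
            Pose idle = samplePose("idle", 0.0f);
            applyToSkeleton(lerpPose(walk, idle, fade)); // blend, no snapping
        }

    Starting to walk is the same blend run in reverse. Engines and middleware usually expose this directly (often called cross-fade or transition blending), so it is worth checking for a built-in before writing your own.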


  • What is the minimum set of shaders I need to run basic calculations on the GPU?

    - by Jinxi
    I have read that the hull shader, domain shader, geometry shader, and pixel shader are optional. So, is the vertex shader optional too? If not: what does a basic vertex shader look like? Just a simple pass-through? Is the vertex shader where you declare what kind of primitives (fans, strips, or meshes) are used? What can I do with just the vertex shader? Do the fixed functions work without any help from a programmable stage?
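
    As far as I know, the vertex shader is the one programmable stage Direct3D 10/11 will not let you skip for ordinary draw calls, and the primitive topology (strips, lists, and so on) is declared on the device context rather than in any shader. A minimal pass-through vertex shader, shown in HLSL since the stage names in the question are Direct3D's:

        // Minimal pass-through: forwards the input position unchanged.
        float4 main(float4 pos : POSITION) : SV_POSITION
        {
            return pos;
        }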


  • How to calculate new velocities between resting objects (AABB) after accelerations?

    - by Tiedye
    Lately I have been trying to create a 2D platformer engine in C++ with Direct2D. The problem I am currently having is getting objects that are resting against each other to interact correctly after accelerations like gravity have been applied to them. Right now I can detect collisions and respond to them correctly (I think), and when objects collide they remember which other objects they're resting against, so objects can be pushed by other objects (note that there is no bounce in any collision, so when objects collide they are guaranteed to come to rest until something else happens). Every time the simulation advances, each object's acceleration is applied to its velocity (for example, vx += ax * t, where t is the time elapsed since the last advancement). After these accelerations are applied, I want to check whether any objects that are resting against each other are now moving at different speeds than their counterparts (as different objects can have different accelerations) and, depending on that difference, either unlink the two objects so they are no longer resting, or even out their velocities so they are moving at the same speed once again. I am having trouble creating an algorithm that can do this across many resting objects. Here's a diagram to help explain my problem.
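
    One common scheme (a sketch of standard zero-restitution "sequential impulses", not something from the thread): after integrating accelerations, sweep the list of resting contacts several times, cancelling any approaching velocity along each contact normal and unlinking pairs that are separating. Repeated passes let a change at the bottom of a stack propagate upward. Body and Contact here are stand-ins for the engine's own types:

        #include <vector>

        struct Body    { float vx, vy, invMass; };           // invMass = 0 for walls
        struct Contact { Body* a; Body* b; float nx, ny; };  // n points from a to b

        void resolveResting(std::vector<Contact>& contacts, int passes = 4)
        {
            for (int p = 0; p < passes; ++p) {               // let impulses propagate
                for (Contact& c : contacts) {
                    float rel = (c.b->vx - c.a->vx) * c.nx
                              + (c.b->vy - c.a->vy) * c.ny;  // relative normal speed
                    if (rel >= 0.0f)
                        continue;                            // separating: unlink this pair
                    float total = c.a->invMass + c.b->invMass;
                    if (total == 0.0f)
                        continue;                            // two immovable objects
                    float j = -rel / total;                  // zero-restitution impulse
                    c.a->vx -= j * c.a->invMass * c.nx;
                    c.a->vy -= j * c.a->invMass * c.ny;
                    c.b->vx += j * c.b->invMass * c.nx;
                    c.b->vy += j * c.b->invMass * c.ny;
                }
            }
        }

    Weighting by invMass makes heavy objects push light ones around rather than the reverse, and invMass = 0 gives immovable geometry for free.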

