Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 382/1039 | < Previous Page | 378 379 380 381 382 383 384 385 386 387 388 389  | Next Page >

  • Algorithmically generating neon layers on pixel grid

    - by user190929
    In an attempt at a screensaver I am making, I am a fan of neon-like graphics, which, of course, look great against a black background. As I understand it, neon, graphically speaking, is essentially a gradient of a color: brightest in the center, getting darker proceeding outward. More accurate is something similar, but separated into tubes and glow. The tubes are mostly white, while the glow is where most of the color is seen. Well... the tubes could also be a light variant of the color, you could say. The glow is darker. Anyhow, my question is: how could you generate such things given an initial pattern of pixels that would be the tubes? For example, let's say I want to make a neon 'H'. I, via the libraries, can obtain the rectangles of pixels which represent it, but I want to make it look neonized. How could I algorithmically achieve such an effect given a base tube shape and base color? EDIT: OK, I misstated that. Got a bit distracted. My purpose for this was similar to a neon effect, but not quite. Sorry about that. What I am looking for is something like this. Start with a pattern of pixels:

        [!][!][!][!][!][!][!][!]
        [!][!][O][!][!][!][!][!]
        [!][!][O][O][!][!][!][!]
        [!][!][!][!][O][!][!][!]
        [!][!][!][!][!][!][!][!]

    How do I find the E pixels?

        [!][E][E][E][!][!][!][!]
        [!][E][O][E][E][!][!][!]
        [!][E][O][O][E][E][!][!]
        [!][E][E][E][O][E][!][!]
        [!][!][!][E][E][E][!][!]

    Sorry if that looks bad.
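
    A minimal sketch of one way to get the E ring (an editorial illustration; the grid encoding is an assumption, not from the post): treat it as morphological dilation, where every empty cell with an 8-connected non-empty neighbour becomes a glow cell:

        #include <vector>

        // Cells: 0 = empty ('!'), 1 = tube ('O'), 2 = glow ('E').
        // One pass: every empty cell with an 8-connected non-empty
        // neighbour becomes a glow cell.
        std::vector<std::vector<int>> Dilate(const std::vector<std::vector<int>>& grid)
        {
            int h = (int)grid.size(), w = (int)grid[0].size();
            std::vector<std::vector<int>> out = grid;
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                {
                    if (grid[y][x] != 0) continue;      // only grow into empty cells
                    for (int dy = -1; dy <= 1 && out[y][x] == 0; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                        {
                            int ny = y + dy, nx = x + dx;
                            if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                            if (grid[ny][nx] != 0) { out[y][x] = 2; break; }
                        }
                }
            return out;
        }

    Calling Dilate repeatedly on its own output, and drawing each successive ring in a dimmer shade of the base color, gives the soft outward gradient; for large grids a distance transform computes the same rings in one pass.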

    Read the article

  • Suitability of ground fog using layered alpha quads?

    - by Nick Wiggill
    A layered approach would use a series of massive alpha-textured quads arranged parallel to the ground, intersecting all intervening terrain geometry, to provide the illusion of ground fog quite effectively from high up, looking down, and somewhat less effectively when inside the fog and looking toward the horizon (see image below). Alternatively, a shader-heavy approach would instead calculate density as a function of view distance into the ground fog substrate, and output the fragment value based on that. Without having to performance-test each approach myself, I would like first to hear others' experiences (not speculation!) of what sort of performance impact the layered alpha-texture approach is likely to have. I ask specifically because of the oft-cited impact of overdraw (I'm not sure how fill-rate bound your average desktop system is). A list of games using this approach, particularly older games, would be immensely useful: if this was viable on pre-DX9/OpenGL2 hardware, it is likely to work fine for me. One big question concerns this sort of effect: (image credit goes to Lume of lume.com) notice how the vertical fog gradation is continuous/smooth. OTOH, using textured quad layers, I can only assume that the layers would be mighty obvious when walking through them -- the more sparse they were, the more obvious this would be. This is in contrast to fog planes that are aligned to face the player every frame, where this coarseness would be much less obvious.
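
    For reference, a minimal sketch of the shader-heavy alternative described above (an illustration under assumptions, not from the post): treat the fog as a uniform slab from the ground up to some height, clip the eye-to-fragment segment against the slab, and convert the in-fog distance to a blend factor with an exponential falloff. All names and constants are invented:

        #include <algorithm>
        #include <cmath>

        // Fog amount for a view ray from 'eyeY' to a fragment at 'fragY',
        // 'dist' units away, through a uniform fog slab occupying y in [0, fogTop].
        // Returns 0 (no fog) .. 1 (fully fogged).
        float GroundFogFactor(float eyeY, float fragY, float dist,
                              float fogTop, float density)
        {
            // Fraction of the segment inside the slab (clip it to y <= fogTop).
            float y0 = std::min(eyeY, fragY);
            float y1 = std::max(eyeY, fragY);
            float inside;
            if (y1 <= fogTop)      inside = 1.0f;               // whole ray in fog
            else if (y0 >= fogTop) inside = 0.0f;               // whole ray above fog
            else                   inside = (fogTop - y0) / (y1 - y0);

            float depth = dist * inside * density;              // optical depth
            return 1.0f - std::exp(-depth);                     // Beer-Lambert falloff
        }

    Because the density is integrated analytically per fragment, the vertical gradation stays continuous, which is exactly the quality the layered quads struggle to match up close.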

    Read the article

  • Crafty.js multiplayer platform game, keeping players in sync

    - by johnwards
    I'm using crafty.js to create a very simple platform game. It doesn't need to prevent cheating; it's really just about seeing other players move around, and it doesn't need collision detection between players. They are "shadows". How I've gone about it so far is to use http://pubnub.com to send messages between clients. These messages are simple: the first is a new player arrival, the second is a key down, and the third is a key up. The code is here: https://github.com/whiteoctober/craftyconcept However, I've hit the old chestnut of keeping everything in sync. At the moment I'm letting each of the clients decide where to place the other players based on the received key events, and I also only move "you" once I get a key press event back from PubNub. My thinking here is to try and keep things in sync! However, it isn't perfect (http://www.whiteoctober.co.uk/john/gametest/): things can get out of sync very easily, key presses arrive in the wrong order, etc. Are there any simple solutions to this? I would like to keep it all client side (with PubNub) and not have a central server with positions etc. if possible.
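
    One common client-side mitigation for the ordering problem (an editorial sketch, not from the thread): stamp every message with a per-sender sequence number and buffer anything that arrives early, so each peer applies every other player's inputs in the order they were sent. The sketch below is C++-style for concreteness, with invented names; the idea translates directly to JavaScript. Even with ordering fixed, clients drift, so senders usually also include their absolute position in each message, letting receivers snap the "shadow" player back into place:

        #include <cstdint>
        #include <map>

        // One input event, stamped by the sender. The sequence number lets a
        // receiver apply each player's events in send order even if the
        // transport delivers them shuffled; stale duplicates are dropped
        // instead of corrupting state.
        struct InputEvent {
            uint32_t seq;    // per-sender, monotonically increasing
            uint8_t  key;    // key code
            bool     down;   // key down or key up
        };

        struct RemotePlayer {
            uint32_t nextSeq = 0;                    // next event we expect
            std::map<uint32_t, InputEvent> pending;  // buffered early arrivals

            // Call for every received event; applies all events now in order.
            template <typename ApplyFn>
            void Receive(const InputEvent& e, ApplyFn apply) {
                if (e.seq < nextSeq) return;         // stale duplicate, ignore
                pending[e.seq] = e;
                while (!pending.empty() && pending.begin()->first == nextSeq) {
                    apply(pending.begin()->second);
                    pending.erase(pending.begin());
                    ++nextSeq;
                }
            }
        };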

    Read the article

  • Getting a sphere to roll down a .FBX object Unity3D/C#

    - by Timothy Williams
    I'm working on a little ramp-and-ball game in Unity. I modeled the ramp outside Unity and exported it to a .FBX file, then imported the ramp into Unity. I set up the ball and ramp; both have Rigidbodies, and the ramp's is set to isKinematic = true. Yet when I play the game, the ball just falls right through the ramp and hits the floor below it without trouble, so it's something wrong with the ramp. Am I doing something wrong? Are .FBX files unable to have physics applied to them? Thanks, Tim.

    Read the article

  • How closely can a game resemble another game without legal problems

    - by Fuu
    The majority of games build on the successes of other games, and many are downright clones. So where is the limit of emulation before legal issues come into play? Is it only literary or graphic work (characters, storyline) that causes legal problems, or can someone actually claim to own gameplay mechanics? There are so many similar clone games out there that the rules are probably very slack or nonexistent, but I'd like to hear the views of more experienced developers/designers.

    Read the article

  • Converting OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer to this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that alter how you must implement the code a bit. This is his code example:

        public Ray GetPickRay() {
            int mouseX = Mouse.getX();
            int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();
            float windowWidth = WORLD.Byte56Game.getWidth();
            float windowHeight = WORLD.Byte56Game.getHeight();
            //get the mouse position in screenSpace coords
            double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
            double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2));
            double viewRatio = Math.tan(((float) Math.PI / (180.f / ViewAngle) / 2.00f)) * zoomFactor;
            screenSpaceX = screenSpaceX * viewRatio;
            screenSpaceY = screenSpaceY * viewRatio;
            //Find the far and near camera spaces
            Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
            Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);
            //Unproject the 2D window into 3D to see where in 3D we're actually clicking
            Matrix4f tmpView = Matrix4f(view);
            Matrix4f invView = (Matrix4f) tmpView.invert();
            Vector4f worldSpaceNear = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
            Vector4f worldSpaceFar = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);
            //calculate the ray position and direction
            Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
            Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
            rayDirection.normalise();
            return new Ray(rayPosition, rayDirection);
        }

    All rights reserved to him, of course. This is my DirectX 11 code:

        void GraphicEngine::pickRayVector(float mouseX, float mouseY, XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir)
        {
            float PRVecX, PRVecY;
            float nearPlane = 0.1f;
            float farPlane = 200.0f;
            float viewAngle = 0.4 * 3.14;
            PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan((viewAngle) / 2);
            PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan((viewAngle) / 2);
            XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
            XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane, PRVecY * farPlane, -farPlane, 1.0f);
            // Transform 3D ray from view space to 3D ray in world space
            XMMATRIX invMat;
            XMVECTOR matInvDeter;
            invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); //Inverse of view space matrix is world space matrix
            XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
            XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);
            pickRayInWorldSpacePos = worldSpaceNear;
            pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
            pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
        }

    A couple of notes: the mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have). I hadn't used any far or near plane before, so I just made up some arbitrary numbers for them; to my understanding it shouldn't matter, as long as the object you are trying to pick is within the range of those numbers. The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; I just hadn't made it a member variable of my Camera class yet. I removed the variables aspectRatio and zoomFactor because I assumed they were related to some specific function of his game. Now, I'm not sure, but I think the problem lies either within the mouse-to-viewspace conversion (maybe we use different coordinate systems), or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance. Edit: one more note, my code is in C++.

    Read the article

  • Powder games: how do they work?

    - by Marc Müller
    Hey guys, I recently found these two gems: http://powdertoy.co.uk/ http://dan-ball.jp/en/javagame/dust/ My question is: how are the physics of so many elements handled efficiently? Am I just severely underestimating modern computing power, or is it possible to 'just' have a two-dimensional array, each cell of which describes what is placed at the corresponding position, and simulate each cell in every step? Or are there more complex things being done, like summarising large areas of the same kind into a single data set and separating that set as needed? Are there any open-source games like this I could look at?
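
    For what it's worth, the classic falling-sand loop really is 'just' a flat 2D array updated cell by cell, scanned from the bottom row upward so a grain never falls more than one cell per step. A minimal sketch (an illustration only; the element set and rules are invented):

        #include <cstdlib>
        #include <utility>
        #include <vector>

        enum Cell { Empty, Wall, Sand };

        // One simulation step over a w x h grid (index = y * w + x, y down).
        // Scanning bottom-up means a grain falls at most one cell per step;
        // the random left/right tiebreak makes piles spread naturally.
        void Step(std::vector<Cell>& g, int w, int h)
        {
            for (int y = h - 2; y >= 0; --y)              // bottom row can't fall
                for (int x = 0; x < w; ++x)
                {
                    if (g[y * w + x] != Sand) continue;
                    int below = (y + 1) * w + x;
                    if (g[below] == Empty) { std::swap(g[y * w + x], g[below]); continue; }
                    int dir = (std::rand() & 1) ? 1 : -1; // try one diagonal, then the other
                    for (int d : { dir, -dir })
                    {
                        int nx = x + d;
                        if (nx < 0 || nx >= w) continue;
                        int diag = (y + 1) * w + nx;
                        if (g[diag] == Empty) { std::swap(g[y * w + x], g[diag]); break; }
                    }
                }
        }

    (The Powder Toy linked in the question is itself open source, so its repository is a natural place to look; it layers pressure, heat and velocity fields on top, but the core update is still a per-cell rule like this.)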

    Read the article

  • Box Collider isn't rotating with Game Object

    - by pek
    I have a method that creates a room by instantiating a prefab, placing it in a grid and then re-sizing the collider based on a room definition (location in grid, rotation, width and height). Here is the method:

        public void CreateRoom(RoomAction action)
        {
            GameObject roomGameObject = Instantiate(this.roomPrefab, Vector3.zero, action.RoomPrefab.transform.rotation) as GameObject;
            roomGameObject.transform.parent = this.transform;
            roomGameObject.transform.localPosition = new Vector3(action.MansionOffsetX, 0, -action.MansionOffsetY) * this.blockSize;
            roomGameObject.transform.localPosition += new Vector3((action.Room.Width * this.blockSize) / 2, 0, -((action.Room.Height * this.blockSize) / 2));
            BoxCollider roomCollider = roomGameObject.GetComponent<BoxCollider>();
            roomCollider.isTrigger = true;
            roomCollider.center = new Vector3(0, this.height / 2, 0);
            roomCollider.size = new Vector3(action.Room.Width * this.blockSize, this.height, action.Room.Height * this.blockSize);
            roomGameObject.transform.RotateAroundLocal(roomGameObject.transform.up, Mathf.Deg2Rad * -90 * action.Rotation);
        }

    The problem I'm having is that, while the room rotates correctly, for some reason the collider isn't rotating with the game object. Here is a screenshot. Any idea what I am doing wrong?

    Read the article

  • Drawing multiple triangles at once isn't working

    - by Deukalion
    I'm trying to draw multiple triangles at once to make up a "shape". I have a class that has an array of VertexPositionColor and an array of indices (generated by this triangulation class: http://www.xnawiki.com/index.php/Polygon_Triangulation). So, my "shape" has multiple points of VertexPositionColor, but I can't render each triangle in the shape to "fill" the shape. It only draws the first triangle.

        struct ShapeColor
        {
            // Properties (not all properties)
            VertexPositionColor[] Points;
            int[] Indexes;
        }

    The first method I've tried; this should work, since I iterate through the index array, whose entries always come in groups of three, so they always contain at least one triangle:

        //render = ShapeColor
        for (int i = 0; i < render.Indexes.Length; i += 3)
        {
            device.DrawUserIndexedPrimitives<VertexPositionColor>
            (
                PrimitiveType.TriangleList,
                new VertexPositionColor[]
                {
                    render.Points[render.Indexes[i]],
                    render.Points[render.Indexes[i + 1]],
                    render.Points[render.Indexes[i + 2]]
                },
                0, 3, new int[] { 0, 1, 2 }, 0, 1
            );
        }

    or the method that should work:

        device.DrawUserIndexedPrimitives<VertexPositionColor>
        (
            PrimitiveType.TriangleList,
            render.Points,
            0,
            render.Points.Length,
            render.Indexes,
            0,
            render.Indexes.Length / 3,
            VertexPositionColor.VertexDeclaration
        );

    No matter which method I use, this is the "typical" result from my editor (in Windows Forms with XNA): it should show a filled shape, because the indices are right (I've checked a dozen times). I simply click the screen (which gets the world coordinates and adds a point with a color); when there are 3 points or more it should start filling out the shape, but it only draws the lines (a different method) and only 1 triangle. The grid isn't rendered with this shape. Any ideas?

    Read the article

  • Multiple Sprites using foreach Collision Detection in XNA (C#)

    - by Bradley Kreuger
    Back again from my last question. Now I was curious: I use a foreach statement to reuse the same shot class, so how would I go about doing collision detection? I used the tutorial here on how to shoot a fireball: http://www.xnadevelopment.com/tutorials.shtml. I tried, in several places, using a foreach to look at all of the shots and check whether they have reached the borders of my sprite hero, but it doesn't seem to do anything. Again, if someone knows of a good site with tutorials that explain collision detection a little better, that would be appreciated.
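
    The usual starting point in 2D is an axis-aligned bounding-box overlap test between each shot's rectangle and the hero's rectangle, run inside the same foreach that updates the shots; in XNA specifically, Rectangle.Intersects performs this exact test. A minimal sketch of the test itself (framework-independent, shown in C++):

        struct Rect { float x, y, w, h; };   // top-left corner plus size

        // Axis-aligned bounding-box overlap: two rectangles intersect unless
        // one is entirely to the left of, right of, above, or below the other.
        bool Intersects(const Rect& a, const Rect& b)
        {
            return a.x < b.x + b.w && b.x < a.x + a.w &&
                   a.y < b.y + b.h && b.y < a.y + a.h;
        }

        // Inside the per-frame update, after moving each shot:
        //   for each shot s:
        //       if (Intersects(s.bounds, hero.bounds)) { /* hit: damage, remove shot */ }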

    Read the article

  • What is the simplest 3D software for Unity?

    - by kdavis8
    I've heard a lot about DAZ Studio, Poser, Maya, K-3D, Anim8or, Blender, and all the rest. My question is: which one is the best choice in terms of simplicity and quality? Price is not really an issue. I'm programming games in Java for Android mobile devices at the moment, but I will eventually move on to larger platforms. I would like to use Unity3D for the game programming itself, and a 3D modeling package just to create the game objects. I just need to know the best one to get started with from scratch, or should I use a combination of multiple ones? Any insight on this would be great, thanks!

    Read the article

  • XNA: Networking gone totally out of sync

    - by MesserChups
    I'm creating a multiplayer interface for a 2D game some of my friends made, and I'm stuck with a huge latency/sync problem. I started by adapting my game to the MSDN XNA network tutorial, and right now when I join a SystemLink network session (1 host on PC and 1 client on Xbox) I can move two players and everything is OK, but a few minutes later the two machines start being totally out of synchronization. When I move one player, it takes 10 or 20 seconds (increasing with time) to take effect on the second machine. I've tried to: create a thread which calls NetworkSession.Update() continuously, as suggested on this forum (didn't work); call the Send() method one frame in 10, and the Receive() method every frame (didn't work either). I've cleaned my code, flushed all buffers at each call and switched the host and client, but the problem still remains... I hope you have a solution, because I'm running out of ideas... Thanks. SendPackets() code:

        protected override void SendPackets()
        {
            if ((NetworkSessionState)m_networkSession.SessionState == NetworkSessionState.Playing) //Only while playing
            {
                //Write in the packet manager
                m_packetWriter.Write(m_packetManager.PacketToSend.ToArray(), 0, (int)m_packetManager.PacketToSend.Position);
                m_packetManager.ResetPacket(); //flush
                //Sends the packets to all remote gamers
                foreach (NetworkGamer l_netGamer in m_networkSession.RemoteGamers)
                {
                    if (m_packetWriter.Length != 0)
                    {
                        FirstLocalNetGamer.SendData(m_packetWriter, SendDataOptions.None, l_netGamer);
                    }
                }
                m_packetWriter.Flush();
                m_packetWriter.Seek(0, 0);
            }
        }

    ReceivePackets() code:

        public override void ReceivePackets()
        {
            base.ReceivePackets();
            if ((NetworkSessionState)m_networkSession.SessionState == NetworkSessionState.Playing) //Only while playing
            {
                if (m_networkSession.LocalGamers.Count > 0) //Verify that there's at least one local gamer
                {
                    foreach (LocalNetworkGamer l_localGamer in m_networkSession.LocalGamers)
                    {
                        //every LocalNetworkGamer must read to flush their stream
                        // Keep reading while packets are available.
                        NetworkGamer l_oldSender = null;
                        while (l_localGamer.IsDataAvailable)
                        {
                            // Read a single packet; even if we are the host, we must read to clear the queue
                            NetworkGamer l_newSender;
                            l_localGamer.ReceiveData(m_packetReader, out l_newSender);
                            if (l_newSender != l_oldSender)
                            {
                                if ((!l_newSender.IsLocal) && (l_localGamer == FirstLocalNetGamer))
                                {
                                    //Parsing PacketReader to MemoryStream
                                    m_packetManager.Receive(new MemoryStream(m_packetReader.ReadBytes(m_packetReader.Length)));
                                }
                            }
                            l_oldSender = l_newSender;
                            m_packetReader.BaseStream.Flush();
                            m_packetReader.BaseStream.Seek(0, SeekOrigin.Begin);
                        }
                    }
                    m_packetManager.ParsePackets();
                }
            }
        }

    Read the article

  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use Frame Buffer Objects. For this purpose, I chose to render a triangle to a texture and then map that texture to a quad. When I render the triangle, I clear the color to something blue. So, when I render the texture on the quad from the FBO, everything renders blue, but the triangle doesn't show up. I can't seem to figure out why this is happening. Can someone please help me out with this? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I set up the FBO. I've pasted the setup code at the end. Here is my rendering code:

        void FrameBufferObject::Render(unsigned int elapsedGameTime)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);
            glClearColor(0.0, 0.6, 0.5, 1);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // adjust viewport and projection matrices to texture dimensions
            glPushAttrib(GL_VIEWPORT_BIT);
            glViewport(0, 0, m_FBOWidth, m_FBOHeight);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            DrawTriangle();

            glPopAttrib();
            // setting FrameBuffer back to window-specified Framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0); //unbind

            // back to normal viewport and projection matrix
            //glViewport(0, 0, 1280, 768);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, 1.33, 1.0, 1000.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0, 0, 0, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            render(elapsedGameTime);
        }

        void FrameBufferObject::DrawTriangle()
        {
            glPushMatrix();
            glBegin(GL_TRIANGLES);
            glColor3f(1, 0, 0);
            glVertex2d(0, 0);
            glVertex2d(m_FBOWidth, 0);
            glVertex2d(m_FBOWidth, m_FBOHeight);
            glEnd();
            glPopMatrix();
        }

        void FrameBufferObject::render(unsigned int elapsedTime)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);
            glPushMatrix();
            glTranslated(0, 0, -20);
            glBegin(GL_QUADS);
            glColor4f(1, 1, 1, 1);
            glTexCoord2f(1, 1); glVertex3f(1, 1, 1);
            glTexCoord2f(0, 1); glVertex3f(-1, 1, 1);
            glTexCoord2f(0, 0); glVertex3f(-1, -1, 1);
            glTexCoord2f(1, 0); glVertex3f(1, -1, 1);
            glEnd();
            glPopMatrix();
            glBindTexture(GL_TEXTURE_2D, 0);
            glDisable(GL_TEXTURE_2D);
        }

        void FrameBufferObject::Initialize()
        {
            // Generate FBO
            glGenFramebuffers(1, &m_FBO);
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);

            // Add depth buffer as a renderbuffer to the FBO
            glGenRenderbuffers(1, &m_DepthBuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer);
            // allocate space for the depth buffer
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight);
            // attach depth buffer to the FBO at the depth attachment point
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer);

            // Add a texture to the FBO
            glGenTextures(1, &m_TextureID);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // only allocating space
            glBindTexture(GL_TEXTURE_2D, 0);

            // attach texture to the FBO
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0);

            // Check FBO status
            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
                std::cout << "\n Error:: FrameBufferObject::Initialize() :: FBO loading not complete \n";

            // switch back to window-system framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

    Thanks!

    Read the article

  • LWJGL texture bleeding fix won't work

    - by user1990950
    I tried a lot of things to fix texture bleeding, but nothing works. I don't want to add a transparent border around my textures, because I already have too many and it would take too much time, and I can't do it in code because I'm loading textures with Slick. My textures are separate textures, and they seem to wrap around to the other side (texture bleeding). Here are the textures that are "bleeding": the head, body, arm and leg are separate textures. Here's the code I'm using to draw a texture:

        public static void drawTextureN(Texture texture, Vector2f position, Vector2f translation, Vector2f origin, Vector2f scale, float rotation, Color color, FlipState flipState) {
            texture.setTextureFilter(GL11.GL_NEAREST);
            color.bind();
            texture.bind();
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
            GL11.glTranslatef((int)position.x, (int)position.y, 0);
            GL11.glTranslatef(-(int)translation.x, -(int)translation.y, 0);
            GL11.glRotated(rotation, 0f, 0f, 1f);
            GL11.glScalef(scale.x, scale.y, 1);
            GL11.glTranslatef(-(int)origin.x, -(int)origin.y, 0);
            float pixelCorrection = 0f;
            GL11.glBegin(GL11.GL_QUADS);
            GL11.glTexCoord2f(0, 0); GL11.glVertex2f(0, 0);
            GL11.glTexCoord2f(1, 0); GL11.glVertex2f(texture.getTextureWidth(), 0);
            GL11.glTexCoord2f(1, 1); GL11.glVertex2f(texture.getTextureWidth(), texture.getTextureHeight());
            GL11.glTexCoord2f(0, 1); GL11.glVertex2f(0, texture.getTextureHeight());
            GL11.glEnd();
            GL11.glLoadIdentity();
        }

    I tried a half-pixel correction, but it didn't make any sense because of GL12.GL_CLAMP_TO_EDGE. I set pixelCorrection to 0, but it still won't work.

    Read the article

  • Game actions that take multiple frames to complete

    - by CantTetris
    I've never really done much game programming before; pretty straightforward question. Imagine I'm building a Tetris game, with the main loop looking something like this:

        for every frame
            handle input
            if it's time to make the current block move down a row
                if we can move the block
                    move the block
                else
                    remove all complete rows
                    move rows down so there are no gaps
                    if we can spawn a new block
                        spawn a new current block
                    else
                        game over

    Everything in the game so far happens instantly: things are spawned instantly, rows are removed instantly, etc. But what if I don't want things to happen instantly (i.e. I want to animate things)?

        for every frame
            handle input
            if it's time to make the current block move down a row
                if we can move the block
                    move the block
                else
                    ?? animate complete rows disappearing (somehow, wait over multiple frames until the animation is done)
                    ?? animate rows moving downwards (and again, wait over multiple frames)
                    if we can spawn a new block
                        spawn a new current block
                    else
                        game over

    In my Pong clone this wasn't an issue, as every frame I was just moving the ball and checking for collisions. How can I wrap my head around this issue? Surely most games involve some action that takes more than a frame, with other things halting until the action is done.
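
    The standard answer is to promote those 'instant' steps into explicit states of a small state machine; each state owns a timer, animates while it runs, and only mutates the board when it finishes. A hedged sketch (names invented):

        enum class Phase { Falling, Clearing, Collapsing };

        struct TetrisGame {
            Phase phase = Phase::Falling;
            float phaseTimer = 0.0f;     // seconds left in the current phase

            void Update(float dt) {
                switch (phase) {
                case Phase::Falling:
                    // normal gravity/input; when a block locks and rows complete:
                    //   phase = Phase::Clearing; phaseTimer = 0.4f; (start flash)
                    break;
                case Phase::Clearing:
                    phaseTimer -= dt;    // rows flash out while this counts down
                    if (phaseTimer <= 0.0f) {
                        // actually delete the rows now, then animate the drop
                        phase = Phase::Collapsing;
                        phaseTimer = 0.3f;
                    }
                    break;
                case Phase::Collapsing:
                    phaseTimer -= dt;    // rows slide down while this counts down
                    if (phaseTimer <= 0.0f) {
                        // snap rows into place, spawn the next block (or game over)
                        phase = Phase::Falling;
                    }
                    break;
                }
            }
        };

    Input handling can keep running in every phase; only the board mutations wait for the phase to end, which is exactly the "halt until the action is done" behaviour described above.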

    Read the article

  • How much info can I store in a cookie?

    - by Artemix
    Hi guys, I'm developing a Flash game and I'd like to know how much info I can store in a browser cookie. The game is simple, but it needs to store several variables (around 200, say) in order to save all the details of your current progress. The game is only one .swf file: no server, no nothing. I need to know how I should use cookies to achieve this, and whether they even offer the possibility of doing it, of course.

    Read the article

  • DirectX10: How to use Constant Buffers

    - by schnozzinkobenstein
    I'm trying to access some variables in my shader, but I think I'm doing this wrong. Say I have a constant buffer that looks like this:

        cbuffer perFrame
        {
            float foo;
            float bar;
        };

    I got an ID3D10EffectConstantBuffer reference to it, and I can get a specific member by calling GetMemberByIndex, but how can I figure out how many members perFrame has, so that I can get each member without going out of bounds?
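
    One approach, sketched under the assumption that the D3D10 effects reflection API behaves as documented: the variable's type description (D3D10_EFFECT_TYPE_DESC) carries a Members count; alternatively, the effect interfaces return harmless 'invalid' dummy objects rather than null pointers, so you can probe indices until IsValid() fails:

        #include <d3d10.h>

        // Count the members of a cbuffer owned by an ID3D10Effect.
        // Sketch only: error handling trimmed.
        UINT CountMembers(ID3D10EffectConstantBuffer* cbuffer)
        {
            D3D10_EFFECT_TYPE_DESC typeDesc;
            cbuffer->GetType()->GetDesc(&typeDesc);
            return typeDesc.Members;    // members declared in the cbuffer
        }

        // Alternative, without the desc: Get*By* calls never return null,
        // they return dummies, so probe until IsValid() reports FALSE.
        UINT CountMembersByProbing(ID3D10EffectConstantBuffer* cbuffer)
        {
            UINT n = 0;
            while (cbuffer->GetMemberByIndex(n)->IsValid())
                ++n;
            return n;
        }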

    Read the article

  • Executing Components in an Entity Component System

    - by John
    OK, so I am just starting to grasp the whole ECS paradigm, and I need clarification on a few things. For the record, I am trying to develop a game using C++ and OpenGL, and I'm relatively new to game programming. First of all, let's say I have an Entity class which may have several components such as a MeshRenderer, Collider, etc. From what I have read, I understand that each "system" carries out a specific task, such as calculating physics or rendering, and may use more than one component if needed. So, for example, I would have a MeshRendererSystem act on all entities with a MeshRenderer component. Looking at Unity, I see that each GameObject has, by default, components such as a renderer, camera, collider and rigidbody. From what I understand, an entity should start out as an empty "container" and should be filled with components to create a certain type of game object. So what I don't understand is how the "system" works in an entity component system. http://docs.unity3d.com/ScriptReference/GameObject.html So I have a GameObject (the Entity) class like:

        class GameObject
        {
        public:
            GameObject(std::string objectName);
            ~GameObject(void);
            Component AddComponent(std::string name);
            Component AddComponent(Component componentType);
        };

    So if I had a GameObject to model a warship and I wanted to add a MeshRenderer component, I would do the following:

        warship->AddComponent(new MeshRenderer());

    In the MeshRenderer's constructor, should I call the MeshRendererSystem and "subscribe" the warship object to this system? In that case, the MeshRendererSystem should probably be a singleton ("shudder"). From looking at Unity's GameObject, if each object potentially has a renderer or any of the components in the default GameObject class, then Unity would iterate over all objects available. To me, this seems unnecessary, since some objects might not need to be rendered, for example. How, in practice, should these systems be implemented?
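
    One common shape for this (a hedged illustration, not the canonical answer; every name is invented): the system owns a list of the components it cares about, and components register themselves on creation and unregister on destruction, so each frame the system iterates only over entities that actually have that component, and entities without a renderer cost nothing:

        #include <vector>
        #include <algorithm>

        class MeshRenderer;    // a component

        // The system ticks only registered components each frame.
        class MeshRendererSystem {
        public:
            static MeshRendererSystem& Get() {   // singleton for brevity only
                static MeshRendererSystem s;
                return s;
            }
            void Register(MeshRenderer* c)   { m_components.push_back(c); }
            void Unregister(MeshRenderer* c) {
                m_components.erase(
                    std::remove(m_components.begin(), m_components.end(), c),
                    m_components.end());
            }
            void RenderAll() {
                for (MeshRenderer* c : m_components) { /* draw c's mesh */ }
            }
        private:
            std::vector<MeshRenderer*> m_components;
        };

        class MeshRenderer {
        public:
            MeshRenderer()  { MeshRendererSystem::Get().Register(this); }
            ~MeshRenderer() { MeshRendererSystem::Get().Unregister(this); }
        };

    The singleton is only there to keep the sketch short; passing a world/context object into AddComponent and letting it route each component to its system avoids the global.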

    Read the article

  • Warp GameObject Size When Entering/Leaving Area

    - by Julian
    Below I have an image describing the desired functionality I am going for. Let's say you control a square: when you move this square into a given area, any part of your rigidbody/model inside the area is magnified upon entering and shrunk upon leaving. So you are now more or less made up of two rectangles, one small and one large. What would be an elegant approach to achieving this effect?
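
    One way to frame it (an editorial sketch under assumptions; the thread offers no implementation): split each vertex position against the boundary of the area and scale only the portion inside, about the boundary itself:

        struct Vec2 { float x, y; };

        // Warp a point against a vertical boundary at x = edgeX: geometry past
        // the boundary is scaled by 'factor' about the boundary itself; geometry
        // before it is untouched. Scale y the same way about the zone's floor
        // if the area should magnify both axes.
        Vec2 Warp(Vec2 p, float edgeX, float factor)
        {
            if (p.x > edgeX)
                p.x = edgeX + (p.x - edgeX) * factor;   // magnify the inside part
            return p;
        }

    Applied per vertex (e.g. in a vertex shader), the mesh stretches smoothly as it crosses the line, producing exactly the small/large rectangle pair described above.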

    Read the article

  • What is a simple deformer in which vertices deform linearly with control points?

    - by sebf
    In my project I want to deform a complex mesh using a simpler "proxy" mesh. In effect, each vertex of the proxy/collision mesh will be a control point/bone, which should deform the vertices of the main mesh attached to it depending on weight, but where the weight is not dependent on the absolute distance from the control point, rather on distance relative to the other affecting control points. The point of this is to preserve complex three-dimensional features of the main mesh while using physics implementations which expect something far simpler: low resolution, a single surface, etc. Therefore, the vertices must deform linearly with their respective weighted control points (i.e. no falloff fields, or all the mesh features will end up collapsed), as if each vertex were linked to a point on the plane created by the attached control points and deformed with it. I have tried implementing the weight-computation algorithm in this paper (page 4), but it is not working as expected, and I am wondering if it is really the best way to do what I want. What is the simplest way to "skin"* an arbitrary mesh to another arbitrary mesh? *By skin I mean I need an algorithm to determine the best control points for a vertex, and their weights.
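
    For reference, "deform linearly with the control points" is exactly linear blend skinning: each deformed vertex is the weighted sum of its control points' motions applied to its rest position, with the weights summing to 1. A minimal translation-only sketch (an illustration; choosing the weights well is the hard part the question is actually about):

        #include <vector>

        struct Vec3 { float x, y, z; };

        Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        Vec3 operator*(float s, Vec3 v) { return { s * v.x, s * v.y, s * v.z }; }

        // Linear blend: v' = sum_i w_i * (v + d_i), where d_i is how far control
        // point i has moved from its rest position. With sum(w_i) == 1, a vertex
        // whose controls all translate together translates rigidly with them.
        Vec3 DeformVertex(Vec3 restPos,
                          const std::vector<int>& controls,   // control indices
                          const std::vector<float>& weights,  // same length, sums to 1
                          const std::vector<Vec3>& restCtrl,  // control rest positions
                          const std::vector<Vec3>& curCtrl)   // control current positions
        {
            Vec3 out { 0, 0, 0 };
            for (size_t i = 0; i < controls.size(); ++i) {
                int c = controls[i];
                Vec3 delta { curCtrl[c].x - restCtrl[c].x,
                             curCtrl[c].y - restCtrl[c].y,
                             curCtrl[c].z - restCtrl[c].z };
                out = out + weights[i] * (restPos + delta);
            }
            return out;
        }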

    Read the article

  • Open source clone for Starcraft

    - by sinekonata
    Two questions about an SC: Brood War clone. Is there one yet? How likely is legal pursuit? Since almost all the games I usually play now have a FOSS alternative, from alpha to way polished, I was wondering why I can't find one for SC, one of the biggest titans of the gaming community. So my first question is: is there a game that was made with the intention of emulating SC? Is it that I didn't look hard enough? Could it really be that no one has tackled what seems like a small effort compared to the creation of a game engine like Spring, or games like Rigs of Rods or Minetest? And since SC is not being maintained at all, shouldn't the incentive to see a bug-free, moddable, balanced version be huge? What am I not getting here? In the event that there is none, is it a legal problem? Could it be that people expect Blizzard to release the sources themselves? Or that developers don't see the point in having SC mechanics without the trademarked lore and aesthetics? And the trickier question: if I were to make SC an open-source game, a total clone of it, for the purposes of maintenance, moddability, etc., would Blizzard really sue a team of developer fans who just do them a favour, knowing they don't lose any money from Korea broadcasts? Or would they do it so as not to set a precedent? Thanks for reading all that; hope I'm not the only one who thinks it's weird that no one talks about it. See you.

    Read the article

  • Will having many timers affect my game performance?

    - by iQue
    I'm making a game for Android, and earlier today I was trying to add some cool stuff to it. The problem is that this thing needs something like 5 timers. I build my timers like this:

        timer += deltaTime;
        if (timer >= 2.0f) {
            doStuff;
            timer -= 2.0f;
        }
        // this timer gets stuff done every 2 secs

    Will having too many timers like this, checked every frame, hurt my game's performance? The effect I wanted to add was a crosshair every 2 seconds, then remove it after 2 seconds and play a timed animation; so an array of crosshairs dependent on a bunch of timers, to be exact. This caused my game to shut down when used, so that's why I'm wondering whether using that many timers causes my game to flip out.
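
    For scale: one accumulator timer costs a float add and a compare per frame, i.e. nanoseconds, so five (or five hundred) of them are harmless, and the shutdown is almost certainly a bug elsewhere (the array handling, say). A hedged sketch of keeping many such timers in one place so they are easy to audit (names invented, written in C++ though the same shape works in Java):

        #include <functional>
        #include <vector>

        // A pool of accumulator timers, all ticked once per frame.
        // Each timer fires its callback every 'interval' seconds.
        struct TimerPool {
            struct Timer {
                float interval;
                float elapsed;
                std::function<void()> onFire;
            };
            std::vector<Timer> timers;

            void Add(float interval, std::function<void()> fn) {
                timers.push_back({ interval, 0.0f, std::move(fn) });
            }

            void Update(float deltaTime) {
                for (Timer& t : timers) {
                    t.elapsed += deltaTime;
                    while (t.elapsed >= t.interval) {  // catch up after a long frame
                        t.onFire();
                        t.elapsed -= t.interval;
                    }
                }
            }
        };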

    Read the article

  • Flip rotation matrix

    - by azer89
    Right now I'm doing character control with Kinect. Basically, I need to mirror the joint orientation because the character faces the player. Somehow, by Googling around, I've done it, and everything works very well, but I have little idea of how the math works. Here's my code:

        //-------------------------------------------------------------------------------------
        Ogre::Quaternion JointOrientationCalculator::buildQuaternion(Ogre::Vector3 xAxis, Ogre::Vector3 yAxis, Ogre::Vector3 zAxis)
        {
            Ogre::Matrix3 mat;
            if (isMirror)
            {
                mat = Ogre::Matrix3(xAxis.x, yAxis.x, zAxis.x,
                                    xAxis.y, yAxis.y, zAxis.y,
                                    xAxis.z, yAxis.z, zAxis.z);
                Ogre::Matrix3 flipMat(1, 0, 0,
                                      0, 1, 0,
                                      0, 0, -1);
                mat = flipMat * mat * flipMat;
            }
            else
            {
                mat = Ogre::Matrix3(xAxis.x, -yAxis.x, zAxis.x,
                                    -xAxis.y, yAxis.y, -zAxis.y,
                                    xAxis.z, -yAxis.z, zAxis.z);
            }
            Ogre::Quaternion q;
            q.FromRotationMatrix(mat);
            return q;
        }

    When I need to mirror/flip it along the z axis I calculate mat = flipMat * mat * flipMat; but I don't understand how this equation works.
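
    For what it's worth, the usual reading of that line (an editorial note, not from the original thread): flipMat is a reflection F across the z = 0 plane, and a reflection is its own inverse (F * F = I, so F = F^-1). That makes the line a change of basis, a conjugation:

        M' = F * M * F = F * M * F^-1

    Read right to left: map a vector from mirrored space into the original space, rotate it with M, then map the result back into mirrored space. Conjugating by F negates exactly the matrix entries that couple z with x and y (the z row and z column, except the diagonal), which turns the rotation into its mirror image while leaving it a proper rotation matrix.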

    Read the article

  • Problem with frustum AABB culling in DirectX

    - by Matthew Poole
    Hi, I am currently working on a project with a few friends, and I am trying to get frustum culling working. Every single tutorial or article I go to shows that my math is correct and that this should be working. I thought that by posting here, maybe somebody would catch something I could not. Thank you. Here are the important code snippets:

        //create the projection matrix
        void CD3DCamera::SetLens(float fov, float aspect, float nearZ, float farZ)
        {
            D3DXMatrixPerspectiveFovLH(&projMat, D3DXToRadian(fov), aspect, nearZ, farZ);
        }

        //build the view matrix after changes have been made to camera
        void CD3DCamera::BuildView()
        {
            //keep axes orthogonal
            D3DXVec3Normalize(&look, &look);
            //up
            D3DXVec3Cross(&up, &look, &right);
            D3DXVec3Normalize(&up, &up);
            //right
            D3DXVec3Cross(&right, &up, &look);
            D3DXVec3Normalize(&right, &right);
            //fill view matrix
            float x = -D3DXVec3Dot(&position, &right);
            float y = -D3DXVec3Dot(&position, &up);
            float z = -D3DXVec3Dot(&position, &look);
            viewMat(0,0) = right.x; viewMat(1,0) = right.y; viewMat(2,0) = right.z; viewMat(3,0) = x;
            viewMat(0,1) = up.x;    viewMat(1,1) = up.y;    viewMat(2,1) = up.z;    viewMat(3,1) = y;
            viewMat(0,2) = look.x;  viewMat(1,2) = look.y;  viewMat(2,2) = look.z;  viewMat(3,2) = z;
            viewMat(0,3) = 0.0f;    viewMat(1,3) = 0.0f;    viewMat(2,3) = 0.0f;    viewMat(3,3) = 1.0f;
        }

        void CD3DCamera::BuildFrustum()
        {
            D3DXMATRIX VP;
            D3DXMatrixMultiply(&VP, &viewMat, &projMat);
            D3DXVECTOR4 col0(VP(0,0), VP(1,0), VP(2,0), VP(3,0));
            D3DXVECTOR4 col1(VP(0,1), VP(1,1), VP(2,1), VP(3,1));
            D3DXVECTOR4 col2(VP(0,2), VP(1,2), VP(2,2), VP(3,2));
            D3DXVECTOR4 col3(VP(0,3), VP(1,3), VP(2,3), VP(3,3));
            // Planes face inward
            frustum[0] = (D3DXPLANE)(col2);        // near
            frustum[1] = (D3DXPLANE)(col3 - col2); // far
            frustum[2] = (D3DXPLANE)(col3 + col0); // left
            frustum[3] = (D3DXPLANE)(col3 - col0); // right
            frustum[4] = (D3DXPLANE)(col3 - col1); // top
            frustum[5] = (D3DXPLANE)(col3 + col1); // bottom
            // Normalize the frustum
            for (int i = 0; i < 6; ++i)
                D3DXPlaneNormalize(&frustum[i], &frustum[i]);
        }

        bool FrustumCheck(D3DXVECTOR3 max, D3DXVECTOR3 min, const D3DXPLANE* frustum)
        {
            // Test assumes frustum planes face inward.
            D3DXVECTOR3 P;
            D3DXVECTOR3 Q;
            bool ret = false;
            for (int i = 0; i < 6; ++i)
            {
                // For each coordinate axis x, y, z...
                for (int j = 0; j < 3; ++j)
                {
                    // Make PQ point in the same direction as the plane normal on this axis.
                    if (frustum[i][j] > 0.0f)
                    {
                        P[j] = min[j];
                        Q[j] = max[j];
                    }
                    else
                    {
                        P[j] = max[j];
                        Q[j] = min[j];
                    }
                }
                if (D3DXPlaneDotCoord(&frustum[i], &Q) < 0.0f)
                    ret = false;
            }
            return true;
        }

    Read the article

  • How do I get FEATURE_LEVEL_9_3 to work with shaders in Direct3D11?

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11-compatible computer) by setting the D3D_FEATURE_LEVEL_ value to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine, as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which, naively, I thought I could do by changing the feature level setting to 9_3 or so and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, D3DX11CompileFromFile always fails to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the vertex/pixel shader files themselves being incompatible with the lower shader versions, but I'm not expert enough to say. My shader files are listed below. Vertex shader:

        cbuffer MatrixBuffer
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        struct VertexInputType
        {
            float4 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        PixelInputType LightVertexShader(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;

            // Calculate the position of the vertex against the world, view, and projection matrices.
            output.position = mul(input.position, worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);

            // Store the texture coordinates for the pixel shader.
            output.tex = input.tex;

            // Calculate the normal vector against the world matrix only.
            output.normal = mul(input.normal, (float3x3)worldMatrix);

            // Normalize the normal vector.
            output.normal = normalize(output.normal);

            return output;
        }

    Pixel shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        cbuffer LightBuffer
        {
            float4 ambientColor;
            float4 diffuseColor;
            float3 lightDirection;
            float padding;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        float4 LightPixelShader(PixelInputType input) : SV_TARGET
        {
            float4 textureColor;
            float3 lightDir;
            float lightIntensity;
            float4 color;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor = shaderTexture.Sample(SampleType, input.tex);

            // Set the default output color to the ambient light value for all pixels.
            color = ambientColor;

            // Invert the light direction for calculations.
            lightDir = -lightDirection;

            // Calculate the amount of light on this pixel.
            lightIntensity = saturate(dot(input.normal, lightDir));

            if (lightIntensity > 0.0f)
            {
                // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                color += (diffuseColor * lightIntensity);
            }

            // Saturate the final light color.
            color = saturate(color);

            // Multiply the texture pixel and the final diffuse color to get the final pixel color result.
            color = color * textureColor;

            return color;
        }
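
    An editorial aside that may be the whole story (verify against the Direct3D 11 documentation rather than trusting this sketch): the 9_x feature levels do not take "vs_3_0"/"ps_3_0"-style targets through D3DX11CompileFromFile; they expect the level-9 profiles of the 4.0 compiler, such as vs_4_0_level_9_3 and ps_4_0_level_9_3. A hypothetical call, with invented file and blob names:

        #include <d3dx11.h>

        // Sketch: compile the same 4.0-style HLSL for downlevel (DX9-class)
        // hardware by using the level-9 profiles.
        HRESULT CompileForLevel9(ID3D10Blob** vsBlob, ID3D10Blob** psBlob)
        {
            ID3D10Blob* errors = NULL;
            HRESULT hr = D3DX11CompileFromFile(L"light.vs", NULL, NULL, "LightVertexShader",
                                               "vs_4_0_level_9_3", 0, 0, NULL,
                                               vsBlob, &errors, NULL);
            if (FAILED(hr)) return hr;
            return D3DX11CompileFromFile(L"light.ps", NULL, NULL, "LightPixelShader",
                                         "ps_4_0_level_9_3", 0, 0, NULL,
                                         psBlob, &errors, NULL);
        }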

    Read the article
