Search Results

Search found 28230 results on 1130 pages for 'embedded development'.


  • Game Code Design for Rendering

    - by kuroutadori
    I first created a game on the iPhone and I'm now porting it to Android. I wrote most of the code in C++, but the port hasn't been easy. The Android way is to have two threads, one for rendering and one for updating; this is due to some devices blocking when updating the hardware. My problem is that I am coming from the iPhone: when I transition, say from the Menu to the Game, I would stop the Animation (Rendering) and load up the next Manager (the Menu has a Manager and so has the Game). I could implement the same thing on Android, but I have noticed that game ports like Quake don't do this, as far as I can tell. I have learnt that I cannot just dynamically add another Renderer class to the tree, because I will probably get a dequeuing buffer error, which I believe to be a problem on the OpenGL ES side. So how is it done?
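    One pattern worth sketching (hedged; all names below are hypothetical, not taken from the question) is to let the render thread own the active manager and apply any transition only at a frame boundary, so GL resources are created and destroyed on the thread that owns the context:

        // Minimal sketch: the update thread requests a scene change; the
        // render (GL) thread performs the swap at a safe frame boundary.
        #include <atomic>

        struct Manager {
            virtual void render() = 0;
            virtual ~Manager() {}
        };

        std::atomic<Manager*> g_pendingManager{nullptr};
        Manager* g_activeManager = nullptr; // touched only by the GL thread

        // Called from the update/game-logic thread:
        void requestTransition(Manager* next) {
            g_pendingManager.store(next, std::memory_order_release);
        }

        // Called at the top of every frame on the GL thread:
        void renderFrame() {
            if (Manager* next = g_pendingManager.exchange(nullptr)) {
                delete g_activeManager; // old GL resources freed on the GL thread
                g_activeManager = next; // new manager allocates its GL resources here
            }
            if (g_activeManager) g_activeManager->render();
        }

    This avoids adding or removing renderers mid-frame, which is the usual source of the kind of buffer error described above.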

    Read the article

  • Collision detection code style

    - by Marian Ivanov
    Not only are there two useful broad-phase algorithms and a lot of useful narrow-phase algorithms, there are also multiple code styles.

    Arrays vs. calling:

    A. Make an array of broadphase checks, then filter them with narrowphase checks, then resolve them:

        function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
            possibleCollisions = getPossibleCollisions(b, a->get(index));
            for(i = 0; i < possibleCollisionsNumber; i++){
                if(narrowphase(possibleCollisions[i], a[index])) {
                    collisions->push(possibleCollisions[i]);
                };
            };
            for(i = 0; i < collisionsNumber; i++){
                //CODE FOR RESOLUTION
            };
        };

    B. Make the broadphase call the narrowphase, and the narrowphase call the resolution:

        function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
            broadphase(b, a->get(index));
        };

        function broadphase(thingy * with, thingy * what){
            while(blah){
                //blahcode
                narrowphase(what, collidingThing);
            };
        };

    Events vs. in-the-loop:

    1. Fire an event. This abstracts the check away, but it's trickier to make an equal interaction:

        a[index] -> collisionEvent(eventdata);
        //much later
        int collisionEvent(eventdata){
            //resolution gets here
        }

    2. Resolve the collision inside the loop. This glues narrowphase and resolution into one layer:

        if(narrowphase(possibleCollisions[i], a[index])) {
            //CODE GOES HERE
        };

    The questions are: which of the first two (A or B) is better, and how am I supposed to make a zero-sum Newtonian interaction under B1?

    Read the article

  • How do I cap rendering of tiles in a 2D game with SDL?

    - by farmdve
    I have some boilerplate code working: I basically have a tile-based map composed of just 3 colors and some walls, rendered with SDL. The tiles are in a bmp file, but each tile inside it corresponds to an internal number for the type of tile (color, or wall). I have pretty basic collision detection and it works; I can also detect continuous presses, which allows me to move pretty much anywhere I want. I also have a moving camera, which follows the object. The problem is that the tile-based map is bigger than the resolution, so not all of the map can be displayed on the screen, but it's still rendered. I would like to cap it, but since this is new to me, I pretty much have no idea how. Although I cannot post all the code (even though I am a newbie and the code is pretty basic, it's already quite a few lines), I can post what I tried to do:

        void set_camera()
        {
            //Center the camera over the dot
            camera.x = ( player.box.x + DOT_WIDTH / 2 ) - SCREEN_WIDTH / 2;
            camera.y = ( player.box.y + DOT_HEIGHT / 2 ) - SCREEN_HEIGHT / 2;

            //Keep the camera in bounds.
            if( camera.x < 0 ) { camera.x = 0; }
            if( camera.y < 0 ) { camera.y = 0; }
            if( camera.x > LEVEL_WIDTH - camera.w ) { camera.x = LEVEL_WIDTH - camera.w; }
            if( camera.y > LEVEL_HEIGHT - camera.h ) { camera.y = LEVEL_HEIGHT - camera.h; }
        }

    set_camera() is the function which calculates the camera position based on the player's position. I won't pretend I know much about it.

        Rectangle box = {0,0,0,0};
        for(int t = 0; t < TOTAL_TILES; t++)
        {
            if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT))
                apply_surface(box.x - camera.x, box.y - camera.y, surface, screen, &clips[tiles[t]]);

            box.x += TILE_WIDTH;

            //If we've gone too far
            if(box.x >= LEVEL_WIDTH)
            {
                //Move back
                box.x = 0;
                //Move to the next row
                box.y += TILE_HEIGHT;
            }
        }

    This is basically my render code. The for loop loops over 192 tiles stored in an int array, each with its own unique value describing the tile type (wall or one of three possible colored tiles). box is an SDL_Rect containing the current position of the tile, which is calculated on render. TILE_HEIGHT and TILE_WIDTH have the value 80. So the cap is determined by:

        if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT))

    However, this is just me playing with the values to see what doesn't break it. I pretty much have no idea how to calculate it. My screen resolution is 1024x768, and the tile map is of size 1280x960.
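    A common fix (a hedged sketch; it reuses the question's names, and TILES_PER_ROW/TILES_PER_COL are hypothetical map dimensions in tiles) is to compute the range of tile rows and columns the camera rectangle can see, and loop over only that range:

        // Sketch: derive the visible tile range from the camera rectangle.
        int firstCol = camera.x / TILE_WIDTH;
        int firstRow = camera.y / TILE_HEIGHT;
        int lastCol  = (camera.x + camera.w) / TILE_WIDTH;  // inclusive
        int lastRow  = (camera.y + camera.h) / TILE_HEIGHT; // inclusive

        for(int row = firstRow; row <= lastRow && row < TILES_PER_COL; row++)
        {
            for(int col = firstCol; col <= lastCol && col < TILES_PER_ROW; col++)
            {
                int t = row * TILES_PER_ROW + col; // index into the tiles[] array
                apply_surface(col * TILE_WIDTH - camera.x,
                              row * TILE_HEIGHT - camera.y,
                              surface, screen, &clips[tiles[t]]);
            }
        }

    With a 1024x768 view and 80-pixel tiles this draws only the on-screen tiles (about 14x11 at worst) instead of all 192.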

    Read the article

  • How to loop a section of a song correctly?

    - by Teflo
    I'm programming a little music engine for my game in C# and XNA, and one aspect of it is the possibility to loop a section of a song. For example, my song has an intro part, and when the song reaches the end (or any other specific point), it jumps back to where the intro part has just finished (A - B - B - B ...). I'm using IrrKlang, which is working perfectly, without any gaps, but I have a problem: the point where it jumps back is a bit inaccurate. Here's some example code:

        public bool Passed(float time)
        {
            if ( PlayPosition >= time )
                return true;
            return false;
        }

        //somewhere else
        if( song.Passed( 10.0f ) )
            song.JumpTo( 5.0f );

    Now the problem is that the song passes the 10 seconds but plays a few milliseconds more, until 10.1f or so, and then jumps to 5 seconds. It's not that dramatic, but it's too imprecise for my needs. I tried to fix it like this:

        public bool Passed( float time )
        {
            if( PlayPosition + 3 * dt >= time && PlayPosition <= time )
                return true;
            return false;
        }

    (dt is the delta time, the elapsed time since the last frame.) But I don't think that's a good solution. I hope you can understand my problem (and my English, yay /o/) and help me :)
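    One common trick, sketched here in C++ rather than the question's C# (playPosition/jumpTo are hypothetical stand-ins for the engine's API), is to carry the overshoot past the loop end into the jump target, so the loop length stays exact no matter how late the poll fires:

        // Sketch: compensate for polling overshoot when looping a section.
        float playPosition();        // hypothetical engine query (seconds)
        void  jumpTo(float seconds); // hypothetical engine seek

        void updateLoop(float loopStart, float loopEnd)
        {
            float pos = playPosition();
            if (pos >= loopEnd) {
                float overshoot = pos - loopEnd; // how far we played past the end
                jumpTo(loopStart + overshoot);   // carry it into the new position
            }
        }

    The few extra milliseconds still play, but they play the correct part of the loop, so the timing error no longer accumulates.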

    Read the article

  • How can I transform a Point2f with a matrix on Android?

    - by Vivendi
    I'm developing for Android and I'm using the android.renderscript.Matrix3f class to do some calculations. What I need to do now is something like:

        mat.transform(pointIn, pointOut);

    So I need to transform a point by a given matrix. In AWT I would simply do this:

        AffineTransform t = new AffineTransform();
        Point2D.Float p = new Point2D.Float();
        t.transform( p, p );

    But in Android I now have this:

        Matrix3f t = new Matrix3f();
        PointF p = new PointF();
        // Now I need to transform it somehow..

    But the Matrix3f class in Android doesn't have a Matrix.transform(Point2D ptSrc, Point2D ptDst) method. So I guess I have to do the transformation manually, but I'm not really sure how that works. From what I've seen it's something like a translate and then a rotate? Could anyone please tell me how to do this in code?
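    For reference, the manual version is a 3x3 matrix times a homogeneous 2D point (w = 1). A C++ sketch, with the column-major layout stated as an assumption (check your matrix class's storage order before copying the indices):

        // Sketch: apply a column-major 3x3 affine matrix to a 2D point (w = 1).
        // Layout assumption: m[column][row], translation in the third column.
        struct Point2 { float x, y; };

        Point2 transform(const float m[3][3], Point2 p)
        {
            Point2 out;
            out.x = m[0][0] * p.x + m[1][0] * p.y + m[2][0]; // + tx
            out.y = m[0][1] * p.x + m[1][1] * p.y + m[2][1]; // + ty
            return out;
        }

    The same two dot products plus translation can be written inline against Matrix3f's element accessors on Android.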

    Read the article

  • Mac mini 2012 graphics upgrade for UE4, Unity3D, Blender

    - by DaCrAn
    I have a Mac mini (late 2012), i7, 16 GB of Vengeance RAM, with the integrated Intel HD 4000 graphics. I recently bought a Thunderbolt PCIe expansion chassis that supports a PCIe 2.0 x16 graphics card, with space for a full-length card. I have doubts about which graphics card is going to give me the best results for using Unreal Engine 4 (UE4), Unity3D, and Blender. My budget covers an Nvidia Quadro K4000 3 GB or an ATI FirePro W7000 4 GB. Any recommendation? Which professional graphics card would be better for designing 3D games? Thanks. DaCrAn

    Read the article

  • Fixed timestep with interpolation in AS3

    - by Jim Sreven
    I'm trying to implement Glenn Fiedler's popular fixed timestep system, as documented at http://gafferongames.com/game-physics/fix-your-timestep/, in Flash. I'm fairly sure that I've got it set up correctly, along with state interpolation. The result is that if my character is supposed to move at 6 pixels per frame, 35 frames per second = 210 pixels a second, it does exactly that, even if the framerate climbs or falls. The problem is it looks awful. The movement is very stuttery and just doesn't look good. I find that the amount of time between ENTER_FRAME events, which I'm adding onto my accumulator, averages out to 28.5ms (1000/35), just as it should, but individual frame times vary wildly: sometimes an ENTER_FRAME event will come 16ms after the last, sometimes 42ms. This means that at each graphical redraw the character graphic moves by a different amount, because a different amount of time has passed since the last draw. In theory it should look smooth, but it doesn't at all. In contrast, if I just use the ultra-simple system of moving the character 6px every frame, it looks completely smooth, even with these large variances in frame times. How can this be possible? I'm using getTimer() to measure these time differences; is it even reliable?
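    For reference, the render-side interpolation from the article (a minimal C++ sketch; State and its fields are placeholders) blends the previous and current physics states by alpha = accumulator / dt, so the drawn position never jumps by a full physics step:

        // Sketch of the "Fix Your Timestep" render interpolation.
        struct State { float x, y; };

        State interpolate(const State& previous, const State& current, float alpha)
        {
            // alpha in [0, 1): how far we are into the next physics step.
            State s;
            s.x = current.x * alpha + previous.x * (1.0f - alpha);
            s.y = current.y * alpha + previous.y * (1.0f - alpha);
            return s;
        }

    If the drawn state is the raw current state rather than this blend, the step-size variance described above shows up as stutter even though the average speed is exactly right.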

    Read the article

  • Camera Collision inside the room model

    - by sanddy
    I am having a problem calculating the camera collision for my room model, which consists of a sofa, tables, and other models. The user can move the camera forward and back and rotate it, so I need to make sure that the camera does not collide with any of the models within the room. I have wrapped all the models inside the room in BoundingBox[] and the camera in a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html, which was great. But I guess the problem lies in the transformation part. I debugged and found some points at Vector(-XXX,-XXX,-XXX) where X is a digit. I also found the radius of some models to be too large (in the thousands; I just looked at the radius value before converting to a BoundingBox). Do I need to scale the model for collision? Below is my code.

    In my LoadContent():

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);
        int index = 0;
        box = new List<BoundingBox>();
        BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);
        foreach (ModelMesh mesh in myModel.Meshes)
        {
            Vector3[] obb = new Vector3[8];
            worldModel.GetCorners(obb);
            Vector3[] asdf = (Vector3[])obb.Clone();
            Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
            BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
            box.Add(worldBox);
            index++;
        }

    In the camera position update:

        BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
        if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
        {
            // Do Something
        }

    Please help.

    Read the article

  • Mapping dynamic buffers in Direct3D11 in Windows Store apps

    - by Donnie
    I'm trying to make instanced geometry in Direct3D11, and the ID3D11DeviceContext1->Map() call is failing with the very helpful error of "Invalid Parameter" when I attempt to update the instance buffer. The buffer is declared as a member variable:

        Microsoft::WRL::ComPtr<ID3D11Buffer> m_instanceBuffer;

    Then I create it (which succeeds):

        D3D11_BUFFER_DESC instanceDesc;
        ZeroMemory(&instanceDesc, sizeof(D3D11_BUFFER_DESC));
        instanceDesc.Usage = D3D11_USAGE_DYNAMIC;
        instanceDesc.ByteWidth = sizeof(InstanceData) * MAX_INSTANCE_COUNT;
        instanceDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        instanceDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        instanceDesc.MiscFlags = 0;
        instanceDesc.StructureByteStride = 0;
        DX::ThrowIfFailed(d3dDevice->CreateBuffer(&instanceDesc, NULL, &m_instanceBuffer));

    However, when I try to map it:

        D3D11_MAPPED_SUBRESOURCE inst;
        DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE, 0, &inst));

    The map call fails with E_INVALIDARG. Nothing is NULL incorrectly, and as this is one of my first D3D apps I'm currently stumped on what to do next to track it down. I'm thinking I must be creating the buffer incorrectly, but I can't see how. Any input would be appreciated.
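    A likely culprit, for what it's worth: D3D11_USAGE_DYNAMIC resources may only be mapped with D3D11_MAP_WRITE_DISCARD or D3D11_MAP_WRITE_NO_OVERWRITE; plain D3D11_MAP_WRITE is rejected with E_INVALIDARG. A corrected call would look like this (instanceData and instanceCount are hypothetical, standing in for the app's own data):

        // Dynamic buffers must be mapped with WRITE_DISCARD (or WRITE_NO_OVERWRITE).
        D3D11_MAPPED_SUBRESOURCE inst;
        DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0,
                                          D3D11_MAP_WRITE_DISCARD, 0, &inst));
        memcpy(inst.pData, instanceData, sizeof(InstanceData) * instanceCount);
        d3dContext->Unmap(m_instanceBuffer.Get(), 0);

    WRITE_DISCARD hands back a fresh region of memory each time, which is what the dynamic-usage contract expects for per-frame updates.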

    Read the article

  • Which Open Source Licenses can address concerns for an Open Source Game Engine?

    - by Chris
    I am on a team that is looking to open-source an engine we are building. It's intended as an engine for online RPG-style games. We're writing it to work on both desktop and Android platforms. I've been over to the OSI (http://opensource.org/licenses/category) to check out the most common licenses. However, this will be my first time going into an open source project, and I wanted to know if the community had some insight into which licenses might be best suited. Key licensing concerns:

    - Removing or limiting our liability (most licenses already seem to cover this, but I'm stating it for completeness).
    - We want other developers to be able to take part or all of our project and use it in their own projects, with proper accreditation to our project.
    - Licensing should not hinder someone's ability to quickly use the engine. They should be able to download a release and start using it without needing to wait on licensing issues.
    - Game content (gfx, sound, etc.) that is not part of the engine should be allowed to be licensed separately. If someone is using our engine, they can retain full copyright of their content, including engine-generated data.
    - Our primary goal is exposure, which is why we're going open source to start with, both for the project and for the individuals developing it.

    Are there any licenses that can require accreditation visible to players? (While I'd put our primary goal as exposure, for licensing the accreditation is less of a concern.) From what I've read through (and have been able to understand), it doesn't seem like any of the licenses cover anything that is produced by the licensed software. Are there any that state this specifically, or does simply not mentioning it leave it open for other licensing? Are there any other concerns that we should consider? Has anyone had any issues using any of these licenses?

    Read the article

  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin.Android. The sample application produces a rotating colored cube. I would simply like to edit it so that the rotating cube is translated along the Z axis and disappears into the distance. I modified the code by:

    - adding a cumulative variable to store my Z distance,
    - adding GL.Enable(All.DepthBufferBit) (unsure if I put it in the right place),
    - adding GL.Translate(0.0f, 0.0f, Depth) before the rotate functions.

    Result: the cube rotates a couple of times and then disappears; it seems to be getting clipped out of the frustum. So my question is: what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls but am unsure of what they are and where to put them. I apologise in advance as this is very basic stuff, but I am still learning :P. I would appreciate it if anyone could show me the best way to get the cube to still rotate but also move along the Z axis. I have commented all my modifications in the code:

        // This gets called when the drawing surface is ready
        protected override void OnLoad (EventArgs e)
        {
            // this call is optional, and meant to raise delegates
            // in case any are registered
            base.OnLoad (e);

            // UpdateFrame and RenderFrame are called
            // by the render loop. This takes effect
            // when we use 'Run ()', like below
            UpdateFrame += delegate (object sender, FrameEventArgs args) {
                // Rotate at a constant speed
                for (int i = 0; i < 3; i ++)
                    rot [i] += (float) (rateOfRotationPS [i] * args.Time);
            };

            RenderFrame += delegate {
                RenderCube ();
            };

            GL.Enable(All.DepthBufferBit); //Added by Noob
            GL.Enable(All.CullFace);
            GL.ShadeModel(All.Smooth);
            GL.Hint(All.PerspectiveCorrectionHint, All.Nicest);

            // Run the render loop
            Run (30);
        }

        void RenderCube ()
        {
            GL.Viewport(0, 0, viewportWidth, viewportHeight);

            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();

            if ( viewportWidth > viewportHeight ) {
                GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f);
            } else {
                GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
            }

            GL.MatrixMode (All.Modelview);
            GL.LoadIdentity ();
            Depth -= 0.02f; //Added by Noob
            GL.Translate(0.0f, 0.0f, Depth); //Added by Noob
            GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f);
            GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f);
            GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f);

            GL.ClearColor (0, 0, 0, 1.0f);
            GL.Clear (ClearBufferMask.ColorBufferBit);

            GL.VertexPointer(3, All.Float, 0, cube);
            GL.EnableClientState (All.VertexArray);
            GL.ColorPointer (4, All.Float, 0, cubeColors);
            GL.EnableClientState (All.ColorArray);

            GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles);

            SwapBuffers ();
        }
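    One hedged observation: with GL.Ortho(..., -1.0f, 1.0f) the visible depth range is only two units deep, so a cumulative translation pushes the cube past the far plane within a couple of seconds. A perspective projection with a deeper far plane keeps it in view while it recedes. Sketched below in fixed-function C/C++ OpenGL, the desktop analogue of the calls above; the near/far values and the depth variable are illustrative:

        // Sketch: a perspective frustum with room along -Z, plus a proper depth clear.
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0); // near = 1, far = 100

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glEnable(GL_DEPTH_TEST);                      // enable the test itself
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glTranslatef(0.0f, 0.0f, -2.0f + depth);      // start past the near plane

    Widening the Ortho near/far arguments would also stop the clipping, but only a perspective projection makes a receding object actually appear to shrink into the distance.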

    Read the article

  • Deferred Rendering With Diffuse,Specular, and Normal maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model (which can be found here) as my sandbox. Note I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp. I extract all geometry information, including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file:

    - Ambient/Diffuse/Specular/Emissive reflectivity coefficients (Ka, Kd, Ks, Ke)
    - Shininess
    - Diffuse map
    - Specular map
    - Normal map

    I understand that I must render vertex attributes such as position, normals, and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a G-buffer in the initial render pass, but don't you need the diffuse, specular, and normal maps (and therefore lights) to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents, and normals into G-buffers and then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.

    Read the article

  • Is there any advantage in using DX10/11 for a 2D game?

    - by David Gouveia
    I'm not entirely familiar with the feature set introduced by DX10/11-class hardware. I'm vaguely familiar with the new stages added to the programmable graphics pipeline, such as the geometry shader, the compute shader, and the new tessellation stages. I don't see how any of these make much of a difference for a 2D game, though. Is there any compelling reason to make the switch to DX10/11 (or the OpenGL equivalents) for a 2D game, or would it be wiser to stick with DX9, considering that a significant share of the market still runs on older technologies (e.g. the February 2012 Steam survey lists around 17% of users as still using Windows XP)?

    Read the article

  • Question about component-based design: handling object interaction

    - by Milo
    I'm not sure how exactly objects do things to other objects in a component-based design. Say I have an Obj class. I do:

        Obj obj;
        obj.add(new Position());
        obj.add(new Physics());

    How could I then have another object not only move the ball but have those physics applied? I'm not looking for implementation details but rather, abstractly, how objects communicate. In an entity-based design, you might just have:

        obj1.emitForceOn(obj2, 5.0, 0.0, 0.0);

    Any article or explanation that would give me a better grasp of component-driven design and how to do basic things would be really helpful.
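    One common answer, sketched under assumptions (the component names mirror the question; the messaging scheme itself is illustrative, not the only option): objects never reach into each other's internals. They post messages, and whichever components care about a message react to it:

        // Sketch: communication via messages dispatched to components.
        #include <vector>

        struct Message { float fx, fy, fz; };   // e.g. "apply this force"

        struct Component {
            virtual void receive(const Message& m) = 0;
            virtual ~Component() {}
        };

        struct Physics : Component {
            float vx = 0, vy = 0, vz = 0;
            void receive(const Message& m) override {
                vx += m.fx; vy += m.fy; vz += m.fz; // fold force into velocity
            }
        };

        struct Obj {
            std::vector<Component*> components;
            void add(Component* c) { components.push_back(c); }
            void send(const Message& m) {           // broadcast to all components
                for (Component* c : components) c->receive(m);
            }
        };

        // Usage: another object emits a force without knowing what obj2 contains.
        // obj2.send(Message{5.0f, 0.0f, 0.0f});

    The sender stays decoupled: if obj2 has no Physics component, the message is simply ignored.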

    Read the article

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use 3 G-buffers for storing position (R32F), normal (G16R16F), and albedo (ARGB8). I use the sphere map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value.

    - First, I want to avoid a per-pixel matrix multiplication for calculating the position. Is there another way to store and calculate position in the G-buffer without the need for a matrix multiplication?
    - Second, I want to store the normal in view space. All lighting in my engine is currently in world space, and I want to do the lighting in view space to speed up my lighting pass.

    I want an optimized lighting pass for my deferred engine.

    Read the article

  • Absorption 2D image effect

    - by Ed.
    I want to create a specific 2D image effect. It consists of modifying a sprite so it looks like it is being zoomed into a point or "absorbed" by that point. I'm not really sure what the technical name of this effect is, so I cannot explain it correctly. Here you can see a video of what I'm talking about; it is the effect when the character absorbs the three glyphs: http://www.youtube.com/watch?v=PIo-GddsMcU&t=4m45s What is the name of this effect? How can I implement it with XNA for 2D textures/sprites?
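    Whatever the proper name, the usual building block (a minimal C++ sketch; the easing curve and names are illustrative) is to interpolate the sprite's position toward the absorption point while shrinking its scale over a normalized time t:

        // Sketch: "absorb" a sprite into a target point over a normalized time t.
        struct Vec2 { float x, y; };

        void absorb(Vec2 start, Vec2 target, float t, // t goes 0 -> 1
                    Vec2& outPos, float& outScale)
        {
            float ease = t * t;                        // accelerate toward the point
            outPos.x = start.x + (target.x - start.x) * ease;
            outPos.y = start.y + (target.y - start.y) * ease;
            outScale = 1.0f - ease;                    // shrink to nothing at t = 1
        }

    In XNA terms, outPos and outScale would feed the position and scale arguments of SpriteBatch.Draw each frame; adding a slight rotation over t sells the "sucked in" look.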

    Read the article

  • Why don't Normal maps in tangent space have a single blue color?

    - by seahorse
    Normal maps are predominantly blue because the Z component maps to blue, and since normals point out of the surface in the Z direction, we see blue as the dominant component. If the above is true, then why aren't normal maps just one flat colour, with no other shades (not even shades of blue)? Since by definition tangent space is perpendicular to the normal at any point, the stored normal should always point in the Z (blue) direction, with no X (red) component and no Y (green) component. Thus the normal map (since it is a "normal map") should hold only the pure blue normal (Z = blue component = 1, R = 0, G = 0), and the whole map should be a single blue with no shades in between. But normal maps are not like that; they have gradients of shades in them. Why is this so?

    Read the article

  • Detecting collision between ball (circle) and brick (rectangle)?

    - by James Harrison
    OK, so this is for a small uni project. My lecturer provided me with a framework for a simple brick-breaker game. I am currently trying to overcome the problem of detecting a collision between two game objects. One object is always the ball, and the other object can be either a brick or the bat.

        public Collision hitBy( GameObject obj )
        {
            //obj is the bat or the bricks
            //the current object is the ball

            // if ball hits top of object
            if(topX + width >= obj.topX && topX <= obj.topX + obj.width
                && topY + height >= obj.topY - 2 && topY + height <= obj.topY){
                return Collision.HITY;
            }
            //if ball hits left hand side
            else if(topY + height >= obj.topY && topY <= obj.topY + obj.height
                && topX + width >= obj.topX - 2 && topX + width <= obj.topX){
                return Collision.HITX;
            }
            else
                return Collision.NO_HIT;
        }

    So far I have a method that is used to detect this collision. The current object is the ball, and the obj passed into the method is the bricks. At the moment I have only added statements to check for top and left collisions, but I do not want to continue, as I have a few problems. The ball reacts perfectly if it hits the top of the bricks or bat, but when it hits the side, the ball often does not change direction. It seems to be happening toward the top of the left-hand edge, but I cannot figure out why. I would like to know if there is another way of approaching this, or if people know where I'm going wrong. Lastly, Collision.HITX calls another method later on that changes the x direction, and likewise with y.
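    A more robust alternative, sketched in C++ (the struct fields are placeholders, not the framework's names), is the closest-point test: clamp the circle's centre to the rectangle and compare the distance to the radius. The penetration vector it produces also tells you which axis to reflect, which removes the edge-overlap ambiguity described above:

        // Sketch: circle vs. axis-aligned rectangle via the closest-point test.
        #include <algorithm>
        #include <cmath>

        struct Circle { float cx, cy, r; };
        struct Rect   { float x, y, w, h; };

        bool circleRectHit(const Circle& c, const Rect& b, float& nx, float& ny)
        {
            // Closest point on the rectangle to the circle centre.
            float px = std::max(b.x, std::min(c.cx, b.x + b.w));
            float py = std::max(b.y, std::min(c.cy, b.y + b.h));

            float dx = c.cx - px, dy = c.cy - py;
            if (dx * dx + dy * dy > c.r * c.r) return false; // no overlap

            // Penetration direction (both zero only if the centre is inside).
            nx = dx; ny = dy;
            return true;
        }

        // Caller: if (std::fabs(nx) > std::fabs(ny)) reflect X, else reflect Y,
        // i.e. the HITX / HITY decision from the question.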

    Read the article

  • Creating a curved mesh on inside of sphere based on texture image coordinates

    - by user5025
    In Blender, I have created a sphere with a panoramic texture on the inside. I have also manually created a plane mesh (curved to match the curvature of the sphere) that sits on the inside wall, where I can draw a different texture. This is great, but I really want to reduce the manual labour and do some of this work in a script, with a variable for the panoramic image and the coordinates of the area in the photograph that I want to replace with a new mesh. The hardest part of doing this is going to be creating a curved mesh in code that can sit on the inside wall of a sphere. Can anyone point me in the right direction?

    Read the article

  • Isometric algorithm producing tiles in wrong draw order

    - by David
    I've been toying with isometric, and I just can't get the tiles to be in the right order. I'm probably missing something obvious and I just can't see it. Even at the risk of looking stupid, here's my code:

        for (int i = 0; i < Tile.MapSize; i++)
        {
            for (int j = 0; j < Tile.MapSize; j++)
            {
                spriteBatch.Draw(
                    Tile.TileSetTexture,
                    new Rectangle(
                        (-j * Tile.TileWidth / 2) + (i * Tile.TileWidth / 2),
                        (i * (Tile.TileHeight - 9) / 2) - (-j * (Tile.TileHeight - 9) / 2),
                        Tile.TileWidth,
                        Tile.TileHeight),
                    Tile.GetSourceRectangle(tileID),
                    Color.White,
                    0.0f,
                    new Vector2(-350, -60),
                    SpriteEffects.None,
                    1.0f);
            }
        }

    And here's what I end up with: messed up map. Yep, bit of an issue. If anyone could help, I'd appreciate it.
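    One thing worth checking (hedged, since the screenshot isn't visible here): with these loops, the screen Y of a drawn tile is proportional to i + j, so later iterations can draw tiles above earlier ones and break the painter's order. Iterating by diagonals keeps screen Y non-decreasing. A C++-style sketch of just the loop structure (drawTile is a hypothetical wrapper around the Draw call above):

        // Sketch: draw tiles in increasing (i + j) so nearer tiles paint last.
        for (int sum = 0; sum <= 2 * (MapSize - 1); sum++)
        {
            for (int i = 0; i < MapSize; i++)
            {
                int j = sum - i;
                if (j < 0 || j >= MapSize) continue; // outside the map
                drawTile(i, j);                      // same screen math as before
            }
        }

    Tiles on the same diagonal share a screen Y, so their relative order within one diagonal doesn't matter.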

    Read the article

  • What is UVIndex and how do I use it on OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer, and a texture coordinate buffer, but this model now has a "UVIndex", and I'm not sure where I'm supposed to put it. My code looks like this:

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);

        GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
        GL.activeTexture(GL.TEXTURE0);

        GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

    But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file. Since I've never drawn a model that uses it, I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz
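    For context, hedged: FBX-style exports often index positions and UVs independently (one index list per attribute), while GL's drawElements uses a single index for all attributes of a vertex. The usual fix is to expand the data into one combined vertex per (position index, UV index) pair. A C++ sketch of that expansion, with hypothetical input names:

        // Sketch: flatten separately-indexed attributes into GL-style vertices.
        #include <vector>

        struct Vertex { float px, py, pz, u, v; };

        std::vector<Vertex> expand(const std::vector<float>& positions, // xyz triples
                                   const std::vector<float>& uvs,       // uv pairs
                                   const std::vector<int>& posIndex,    // per corner
                                   const std::vector<int>& uvIndex)     // per corner
        {
            std::vector<Vertex> out;
            for (size_t k = 0; k < posIndex.size(); ++k) {
                int p = posIndex[k] * 3, t = uvIndex[k] * 2;
                out.push_back({positions[p], positions[p + 1], positions[p + 2],
                               uvs[t], uvs[t + 1]});
            }
            return out; // draw with drawArrays, or re-index unique vertices
        }

    The same applies to normals if they carry their own index list; after expansion, a single index buffer (or none at all) drives all attributes together.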

    Read the article

  • How to implement character/item state operations in binary with concrete rules?

    - by Piperoman
    I have the following problem. An item can have a lot of states:

        NORMAL   = 0000000
        DRY      = 0000001
        HOT      = 0000010
        BURNING  = 0000100
        WET      = 0001000
        COLD     = 0010000
        FROZEN   = 0100000
        POISONED = 1000000

    An item can have several states at the same time, but not all combinations are possible:

    - It is impossible to be DRY and WET at the same time.
    - If you apply COLD to a WET item, it turns FROZEN.
    - If you apply HOT to a WET item, it turns NORMAL.
    - An item can be BURNING and POISONED.
    - Etc.

    I have tried setting binary flags for the states and using AND to combine different states, checking beforehand whether the combination is possible, or whether it changes to another status. Does there exist a concrete approach to solve this problem efficiently, without an interminable switch that checks every state against every new state? It is relatively easy to check 2 different states, but once a third state exists it is not trivial to do.
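    One approach worth sketching (C++; the rule set below is illustrative, matching only the examples above): keep the bit flags, but drive transitions from a small data table, where each rule says "when this state meets this applied effect, clear these bits and set those". A new interaction is then one table row instead of another branch:

        // Sketch: data-driven state transitions over bit flags.
        #include <cstdint>

        enum : uint8_t {
            DRY = 1 << 0, HOT = 1 << 1, BURNING = 1 << 2, WET = 1 << 3,
            COLD = 1 << 4, FROZEN = 1 << 5, POISONED = 1 << 6 // NORMAL = 0
        };

        struct Rule { uint8_t ifSet, applied, clear, set; };

        static const Rule rules[] = {
            { WET, COLD, WET, FROZEN }, // cold on wet -> frozen
            { WET, HOT,  WET, 0      }, // hot on wet  -> normal (bit cleared)
            { DRY, WET,  DRY, WET    }, // wet replaces dry (can't be both)
            { WET, DRY,  WET, DRY    },
        };

        uint8_t apply(uint8_t state, uint8_t effect)
        {
            for (const Rule& r : rules)
                if ((state & r.ifSet) && (effect & r.applied))
                    return (state & ~r.clear) | r.set;
            return state | effect; // no special rule: just add the flag
        }

    Checking a third state then costs nothing extra; it is just another row whose ifSet mask names more than one bit.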

    Read the article

  • OpenGL position from depth is wrong

    - by CoffeeandCode
    My engine is currently implemented using a deferred rendering technique, and today I decided to change it up a bit. First I was storing 5 textures:

    - DEPTH24_STENCIL8 - depth and stencil
    - RGBA32F - position
    - RGBA10_A2 - normals
    - RGBA8 x 2 - specular & diffuse

    I decided to minimize it and reconstruct positions from the depth buffer. Trying to figure out what is wrong with my method has not been fun :/ Currently I get this, which changes whenever I move the camera... weird.

    Vertex shader (really simple):

        #version 150

        layout(location = 0) in vec3 position;
        layout(location = 1) in vec2 uv;

        out vec2 uv_f;

        void main(){
            uv_f = uv;
            gl_Position = vec4(position, 1.0);
        }

    Fragment shader (where the fun, and not so fun, stuff happens):

        #version 150

        uniform sampler2D depth_tex;
        uniform sampler2D normal_tex;
        uniform sampler2D diffuse_tex;
        uniform sampler2D specular_tex;

        uniform mat4 inv_proj_mat;
        uniform vec2 nearz_farz;

        in vec2 uv_f;

        ... other uniforms and such ...

        layout(location = 3) out vec4 PostProcess;

        vec3 reconstruct_pos(){
            float z = texture(depth_tex, uv_f).x;
            vec4 sPos = vec4(uv_f * 2.0 - 1.0, z, 1.0);
            sPos = inv_proj_mat * sPos;
            return (sPos.xyz / sPos.w);
        }

        void main(){
            vec3 pos = reconstruct_pos();
            vec3 normal = texture(normal_tex, uv_f).rgb;
            vec3 diffuse = texture(diffuse_tex, uv_f).rgb;
            vec4 specular = texture(specular_tex, uv_f);

            ... do lighting ...

            PostProcess = vec4(pos, 1.0); // Just for testing
        }

    Rendering code (probably nothing wrong here, seeing as it always worked before):

        this->gbuffer->bind();
        gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
        gl::Enable(gl::DEPTH_TEST);
        gl::Enable(gl::CULL_FACE);

        ... bind geometry shader and draw models and shiz ...

        gl::Disable(gl::DEPTH_TEST);
        gl::Disable(gl::CULL_FACE);
        gl::Enable(gl::BLEND);

        ... bind textures and lighting shaders shown above then draw each light ...

        gl::BindFramebuffer(gl::FRAMEBUFFER, 0);
        gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
        gl::Disable(gl::BLEND);

        ... bind screen shaders and draw quad with PostProcess texture ...

        Rinse_and_repeat(); // not actually a function ;)

    Why are my positions being output like they are?
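    One suspect, offered with hedging: a depth texture samples in [0, 1], but the unprojection above builds NDC coordinates, where z is expected in [-1, 1] (with the default glDepthRange). A CPU-side C++ reference of the reconstruction, with that remap included, can help verify the shader against known points (GLM is assumed here purely for illustration):

        // Sketch: reference unprojection matching the shader, z remapped to NDC.
        #include <glm/glm.hpp>

        glm::vec3 reconstructPos(const glm::mat4& invProj,
                                 float u, float v, float depthSample)
        {
            glm::vec4 ndc(u * 2.0f - 1.0f,
                          v * 2.0f - 1.0f,
                          depthSample * 2.0f - 1.0f, // [0,1] -> [-1,1] remap
                          1.0f);
            glm::vec4 p = invProj * ndc;             // back to view space
            return glm::vec3(p) / p.w;
        }

    In the shader that would mean z = texture(depth_tex, uv_f).x * 2.0 - 1.0 before building sPos; also note inv_proj_mat must invert exactly the projection used in the geometry pass (view * projection, if positions should come out in world space).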

    Read the article

  • SFML - Moving a sprite on mouseclick

    - by Mike
    I want to be able to move a sprite from its current location to another based upon where the user clicks in the window. This is the code that I have:

        #include <SFML/Graphics.hpp>

        int main()
        {
            // Create the main window
            sf::RenderWindow App(sf::VideoMode(800, 600), "SFML window");

            // Load a sprite to display
            sf::Texture Image;
            if (!Image.LoadFromFile("cb.bmp"))
                return EXIT_FAILURE;
            sf::Sprite Sprite(Image);

            // Define the speed of the sprite
            float spriteSpeed = 200.f;

            // Start the game loop
            while (App.IsOpened())
            {
                if (sf::Keyboard::IsKeyPressed(sf::Keyboard::Escape))
                    App.Close();

                if (sf::Mouse::IsButtonPressed(sf::Mouse::Right))
                {
                    Sprite.SetPosition(sf::Mouse::GetPosition(App).x, sf::Mouse::GetPosition(App).y);
                }

                // Clear screen
                App.Clear();

                // Draw the sprite
                App.Draw(Sprite);

                // Update the window
                App.Display();
            }

            return EXIT_SUCCESS;
        }

    But instead of just setting the position, I want to use Sprite.Move() and gradually move the sprite from one position to another. The question is how? Later I plan on adding a node system to each map so I can use Dijkstra's algorithm, but I'll still need this for moving between nodes.
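    The usual pattern, as a hedged sketch: on click, store a target point instead of teleporting; then each frame, step toward it by spriteSpeed * elapsed time, clamped so the sprite never overshoots. The helper below is plain C++ and feeds straight into Sprite.Move(dx, dy):

        // Sketch: per-frame movement toward a stored target point.
        #include <cmath>

        void stepToward(float x, float y,    // current sprite position
                        float tx, float ty,  // target set on mouse click
                        float speed,         // pixels per second
                        float dt,            // seconds elapsed this frame
                        float& dx, float& dy)
        {
            float vx = tx - x, vy = ty - y;
            float dist = std::sqrt(vx * vx + vy * vy);
            if (dist < 1.0f) { dx = dy = 0.0f; return; } // arrived

            float step = speed * dt;
            if (step > dist) step = dist;                // don't overshoot
            dx = vx / dist * step;
            dy = vy / dist * step;
        }

    Each frame, measure dt (e.g. with an sf::Clock), call stepToward with the sprite's position and the stored target, then Sprite.Move(dx, dy); the click handler only updates (tx, ty). The same helper later moves the sprite between Dijkstra nodes.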

    Read the article

  • Running multiple Box2D world objects on a server

    - by CharbelAbdo
    I'm creating a multiplayer game using LibGdx (with Box2D) and Kryonet. Since this is the first time I've worked on a multiplayer game, I read a bit about server-client implementations, and it turns out that the server should handle important tasks like collision detection, hits, characters dying, etc. Based on some articles (like the excellent Gabriel Gambetta "Fast-Paced Multiplayer" series), I also know that the client should work in parallel to avoid lag while the server responds to commands. Physics-wise, each game will have 2 players and any projectiles fired. What I'm thinking of doing is the following:

    - Create a physics world on the client.
    - When the game is signaled to start, create the same physics world on the server (without any rendering, obviously).
    - Whenever the player issues a command (move or fire), send the command to the server and immediately start processing it on the client.
    - When the server receives the command, apply it to the server's world (set velocity, etc.).
    - Every 100 ms, the server sends the new state to the client, which corrects what was calculated locally.
    - Any critical action (hit, death, level up) is calculated only on the server and sent to the client.

    Essentially, I would have a Box2D World object running on the server for each game in progress, in sync with the worlds running on the clients. The alternative would be to do my own calculations on the server instead of relying on Box2D to do them for me, but I'm trying to avoid that. My question is: is it wise to have, for example, 1000 instances of the World object running and executing steps on the server? Tomcat used around 750 MB of memory when I tried it without any objects added to the worlds. Has anybody tried this before? If not, is there any alternative? Google did not help me; are there any guidelines to follow when you want physics on both the client and the server? Thanks for any help.

    Read the article
