Search Results

Search found 41789 results on 1672 pages for 'software development'.

Page 550/1672

  • Level selection view - similar to Angry Birds'

    - by Piotr
    I am making a game and need to prepare a view for level selection. Could you recommend an open-source library I could use? I need the icons to vibrate after a long press on one of them, callbacks after choosing one, the ability to provide a custom icon view, a page control, and horizontal scrolling. I tried OpenSpringBoard but, oddly, couldn't get the scroll view and page control working in that project - it seems only one page can be used. On the other hand, myLauncher (https://github.com/dlinsin/myLauncher) isn't easy to include in a project, as I need a separate view with some delegate methods. It needs to be compatible with iOS 4.2.

    Read the article

  • UDK game Prisoners/Guards

    - by RR_1990
    For school I need to make a little game with UDK. The concept of the game: the player is the head guard, and he has some other guards (bots) who follow him. Between the guards and the player are some prisoners who need to evade the guards. It needs to look like this. My idea was to let the guard bots follow the player at a certain distance and let the prisoner bots in the middle try to evade the guard bots. The problem is that I'm new to UnrealScript and the school doesn't support me that well. Until now I have only been able to make the guard bots follow me. I hope you guys can help me or show me something that will make this game work. Here is the class I'm using to let the bots follow me:

        class ChaseControllerAI extends AIController;

        var Pawn player;
        var float minimalDistance;
        var float speed;
        var float distanceToPlayer;
        var vector selfToPlayer;

        auto state Idle
        {
            function BeginState(Name PreviousStateName)
            {
                Super.BeginState(PreviousStateName);
            }

            event SeePlayer(Pawn p)
            {
                player = p;
                GotoState('Chase');
            }

        Begin:
            player = none;
            self.Pawn.Velocity.X = 0.0;
            self.Pawn.Velocity.Y = 0.0;
            self.Pawn.Velocity.Z = 0.0;
        }

        state Chase
        {
            function BeginState(Name PreviousStateName)
            {
                Super.BeginState(PreviousStateName);
            }

            event PlayerOutOfReach()
            {
                `Log("ChaseControllerAI CHASE Player out of reach.");
                GotoState('Idle');
            }

            event Tick(float deltaTime)
            {
                `Log("ChaseControllerAI in Event Tick.");
                selfToPlayer = self.player.Location - self.Pawn.Location;
                distanceToPlayer = Abs(VSize(selfToPlayer));
                if (distanceToPlayer > minimalDistance)
                {
                    PlayerOutOfReach();
                }
                else
                {
                    self.Pawn.Velocity = Normal(selfToPlayer) * speed;
                    //self.Pawn.Acceleration = Normal(selfToPlayer) * speed;
                    self.Pawn.SetRotation(rotator(selfToPlayer));
                    self.Pawn.Move(self.Pawn.Velocity * 0.001); // or * deltaTime
                }
            }

        Begin:
            `Log("Current state Chase:Begin: " @ GetStateName() @ "");
        }

        defaultproperties
        {
            bAdjustFromWalls=true
            bIsPlayer=true
            minimalDistance=1024 // org 1024
            speed=500
        }

    Read the article

  • Which algorithm is used in Advance Wars-type turn-based games

    - by Jan de Lange
    Has anyone tried to develop, or does anyone know of, an algorithm like the ones used in a typical turn-based game like Advance Wars, where the number of objects and the number of moves per object may be too large to search through to a reasonable depth, as one would in a game with a smaller search space like chess? There is some pathfinding needed to engage in combat, harvest, or move to an object, so that in the next move such actions are possible. With this you can build a search tree for each item, resulting in a large tree for all items. With a cost function one can determine the best moves. Then the board flips over to the player role (min/max) and the computer searches for the best player move, and flips back, etc., up to a number of cycles deep. Finally it has found the best move and now it's the player's turn. But he may be asleep by now... So how is this done in practice? I have found several good sources on A*, DFS, BFS, evaluation/cost functions, etc., but as of yet I do not see how I can put it all together.
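
    For what it's worth, a common practical compromise is to skip the full minimax over the joint move space and instead score each unit's reachable options with the cost function, committing units one at a time (greedy, one ply). Below is a minimal C# sketch of that idea; the Tile type, the reachable set, and the cost function are all invented for illustration and are not from the question.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Greedy one-ply planning: pick, per unit, the reachable tile that
        // minimises a cost function, instead of searching all combinations.
        record Tile(int X, int Y);

        static class GreedyPlanner
        {
            public static Tile BestTile(IEnumerable<Tile> reachable, Func<Tile, double> cost)
                => reachable.OrderBy(cost).First();

            public static void Main()
            {
                var reachable = new List<Tile> { new(0, 0), new(2, 1), new(3, 3) };
                Tile enemy = new(4, 4);
                // Toy cost: squared distance to the enemy.
                Tile best = BestTile(reachable,
                    t => Math.Pow(t.X - enemy.X, 2) + Math.Pow(t.Y - enemy.Y, 2));
                Console.WriteLine(best); // prints: Tile { X = 3, Y = 3 }
            }
        }

    Once every unit has committed a move, the same loop can be run for the opponent's units to approximate the min/max alternation at a fraction of the cost of a full joint search.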

    Read the article

  • Resources for game networking in Java

    - by pudelhund
    I am currently working on a Java multiplayer game. The game itself (single player) already works perfectly fine, and so does the chat. The only thing that is really missing is the multiplayer part. Sadly I am absolutely clueless about where to start with that. I roughly know that I will have to work with packets, and I also know many things about streams, etc. (the chat is already working). Oh, and it should - according to this article - be a UDP server. My problem is that I can't find any resources on how to do this. A tutorial (book or website) would be perfect; alternatively, a good example of an open-source client/server (in Java, of course) would be fine as well. If you feel like doing something helpful, I'd also really appreciate someone "privately" teaching me via email or some chat program :) Thank you!

    Read the article

  • Adding Vertices to a dynamic mesh via Method Call

    - by Raven Dreamer
    I have a C# struct with a static method, GetShape, which populates a List with the vertices of a polyhedron. Method signature:

        public static void GetShape(Block b, int x, int y, int z,
                                    List<Vector3> vertices, List<int> triangles,
                                    List<Vector2> uvs, List<Vector2> uv2s)

    Adding directly to the vertices list (via vertices.Add(vector3)), the code works as expected, and the new polyhedron appears when I trigger the method. However, I want to do some processing on the vertices I'm adding (a rotation), and the most sensible way I can think to do that is by creating a separate list of Vector3s and then combining the lists when I'm done. However, vertices.AddRange(newVerts) does not add the shape to the mesh, nor does a foreach loop with verts.Add(vertices[i]). And this is before I've added in any of the processing! I have a feeling this might stem from passing the list of vertices in as a parameter, rather than returning a list and then adding to the vertices in the calling object, but since I'm filling 4 lists, I was trying to avoid having to create a data struct to return all four at once. Any ideas? The working version of the method is reprinted below, in full:

        public static void GetShape(Block b, int x, int y, int z,
                                    List<Vector3> vertices, List<int> triangles,
                                    List<Vector2> uvs, List<Vector2> uv2s)
        {
            //List<Vector3> vertices = new List<Vector3>();
            int l_blockShape = b.blockShape;
            int l_blockType = b.blockType;

            //CheckFace checks if the block is empty
            //if this block is empty, don't draw anything.
            int vertexIndex;

            //only y faces need to be hidden.
            //XY face
            //if((l_blockShape & BlockShape.NegZFace) == BlockShape.NegZFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x+.2f, y+1, z+.2f));
                vertices.Add(new Vector3(x+.8f, y+1, z+.2f));
                vertices.Add(new Vector3(x+.8f, y,   z+.2f));
                vertices.Add(new Vector3(x+.2f, y,   z+.2f));
                // first triangle for the face
                triangles.Add(vertexIndex);   triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3);
                // second triangle for the face
                triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3);
                //UVs for the face
                uvs.Add(new Vector2(0,1));  uvs.Add(new Vector2(1,1));  uvs.Add(new Vector2(1,0));  uvs.Add(new Vector2(0,0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0,1)); uv2s.Add(new Vector2(1,1)); uv2s.Add(new Vector2(1,0)); uv2s.Add(new Vector2(0,0));
            }

            //XY Z+1 face
            //if((l_blockShape & BlockShape.PosZFace) == BlockShape.PosZFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x+.8f, y+1, z+.8f));
                vertices.Add(new Vector3(x+.2f, y+1, z+.8f));
                vertices.Add(new Vector3(x+.2f, y,   z+.8f));
                vertices.Add(new Vector3(x+.8f, y,   z+.8f));
                // first triangle for the face
                triangles.Add(vertexIndex);   triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3);
                // second triangle for the face
                triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3);
                //UVs for the face
                uvs.Add(new Vector2(0,1));  uvs.Add(new Vector2(1,1));  uvs.Add(new Vector2(1,0));  uvs.Add(new Vector2(0,0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0,1)); uv2s.Add(new Vector2(1,1)); uv2s.Add(new Vector2(1,0)); uv2s.Add(new Vector2(0,0));
            }

            //ZY face
            //if((l_blockShape & BlockShape.NegXFace) == BlockShape.NegXFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x+.2f, y+1, z+.8f));
                vertices.Add(new Vector3(x+.2f, y+1, z+.2f));
                vertices.Add(new Vector3(x+.2f, y,   z+.2f));
                vertices.Add(new Vector3(x+.2f, y,   z+.8f));
                // first triangle for the face
                triangles.Add(vertexIndex);   triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3);
                // second triangle for the face
                triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3);
                //UVs for the face
                uvs.Add(new Vector2(0,1));  uvs.Add(new Vector2(1,1));  uvs.Add(new Vector2(1,0));  uvs.Add(new Vector2(0,0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0,1)); uv2s.Add(new Vector2(1,1)); uv2s.Add(new Vector2(1,0)); uv2s.Add(new Vector2(0,0));
            }

            //ZY X+1 face
            //if((l_blockShape & BlockShape.PosXFace) == BlockShape.PosXFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x+.8f, y+1, z+.2f));
                vertices.Add(new Vector3(x+.8f, y+1, z+.8f));
                vertices.Add(new Vector3(x+.8f, y,   z+.8f));
                vertices.Add(new Vector3(x+.8f, y,   z+.2f));
                // first triangle for the face
                triangles.Add(vertexIndex);   triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3);
                // second triangle for the face
                triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3);
                //UVs for the face
                uvs.Add(new Vector2(0,1));  uvs.Add(new Vector2(1,1));  uvs.Add(new Vector2(1,0));  uvs.Add(new Vector2(0,0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0,1)); uv2s.Add(new Vector2(1,1)); uv2s.Add(new Vector2(1,0)); uv2s.Add(new Vector2(0,0));
            }

            //ZX face
            if((l_blockShape & BlockShape.NegYFace) == BlockShape.NegYFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x+.8f, y, z+.8f));
                vertices.Add(new Vector3(x+.8f, y, z+.2f));
                vertices.Add(new Vector3(x+.2f, y, z+.2f));
                vertices.Add(new Vector3(x+.2f, y, z+.8f));
                // first triangle for the face
                triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex);
                // second triangle for the face
                triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1);
                //UVs for the face
                uvs.Add(new Vector2(0,1));  uvs.Add(new Vector2(1,1));  uvs.Add(new Vector2(1,0));  uvs.Add(new Vector2(0,0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0,1)); uv2s.Add(new Vector2(1,1)); uv2s.Add(new Vector2(1,0)); uv2s.Add(new Vector2(0,0));
            }

            //ZX + 1 face
            if((l_blockShape & BlockShape.PosYFace) == BlockShape.PosYFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x+.8f, y+1, z+.2f));
                vertices.Add(new Vector3(x+.8f, y+1, z+.8f));
                vertices.Add(new Vector3(x+.2f, y+1, z+.8f));
                vertices.Add(new Vector3(x+.2f, y+1, z+.2f));
                // first triangle for the face
                triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex);
                // second triangle for the face
                triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1);
                //UVs for the face
                uvs.Add(new Vector2(0,1));  uvs.Add(new Vector2(1,1));  uvs.Add(new Vector2(1,0));  uvs.Add(new Vector2(0,0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0,1)); uv2s.Add(new Vector2(1,1)); uv2s.Add(new Vector2(1,0)); uv2s.Add(new Vector2(0,0));
            }
        }
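
    For reference, here is a minimal sketch of the "build a temporary list, rotate it, then merge" approach described above, written as a fragment inside GetShape and assuming Unity's Vector3/Quaternion API (the pivot and the rotation angle are illustrative values, not from the original code):

        // Collect one face's corners into a temporary list first.
        List<Vector3> newVerts = new List<Vector3>();
        newVerts.Add(new Vector3(x + .2f, y + 1, z + .2f));
        newVerts.Add(new Vector3(x + .8f, y + 1, z + .2f));
        newVerts.Add(new Vector3(x + .8f, y,     z + .2f));
        newVerts.Add(new Vector3(x + .2f, y,     z + .2f));

        // Rotate the corners around the block's centre (illustrative values).
        Vector3 pivot = new Vector3(x + .5f, y + .5f, z + .5f);
        Quaternion rot = Quaternion.Euler(0f, 90f, 0f);
        for (int i = 0; i < newVerts.Count; i++)
            newVerts[i] = pivot + rot * (newVerts[i] - pivot);

        // Merge into the caller's list; AddRange mutates the same List object
        // that was passed in, so the caller sees the new vertices.
        vertices.AddRange(newVerts);

    Since List<T> is a reference type, AddRange on the parameter should be visible to the caller; if the shape still does not appear, it is worth checking that vertexIndex was captured from vertices.Count before the temporary list was merged, so the triangle indices point at the right entries.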

    Read the article

  • Interpolating Matrices

    - by sebf
    Hello. Apologies if I am missing something very obvious (likely!), but is there anything wrong with interpolating between two matrices by:

        float d = (float)(targetTime.Ticks - keyframe_start.ticks) /
                  (float)(keyframe_end.ticks - keyframe_start.ticks);
        return ((keyframe_start.Transform * (1 - d)) + (keyframe_end.Transform * d));

    As in my app, when I try and use this to interpolate between two keyframes, the model begins to 'shrink' - the severity is based on how far between the two keyframes the target time is; it's worst when the transform split is ~50/50.
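
    That symptom is what element-wise interpolation of rotation matrices produces: averaging two rotated basis vectors takes the chord between them rather than the arc, so the result is shorter than unit length, worst at d = 0.5. A hedged sketch of the usual fix, assuming an XNA-style Matrix/Quaternion API: decompose each keyframe, slerp the rotation, lerp the rest, and recompose.

        // Interpolate the decomposed parts so the rotation stays orthonormal.
        Matrix InterpolateTransform(Matrix a, Matrix b, float d)
        {
            Vector3 scaleA, scaleB, transA, transB;
            Quaternion rotA, rotB;
            a.Decompose(out scaleA, out rotA, out transA);
            b.Decompose(out scaleB, out rotB, out transB);

            return Matrix.CreateScale(Vector3.Lerp(scaleA, scaleB, d))
                 * Matrix.CreateFromQuaternion(Quaternion.Slerp(rotA, rotB, d))
                 * Matrix.CreateTranslation(Vector3.Lerp(transA, transB, d));
        }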

    Read the article

  • Creating a bounding box list

    - by Christian Frantz
    I'm trying to create a list of bounding boxes, one for each cube drawn, so I can intersect the boxes with the ray my mouse position is casting, but I have no idea how. I've created a list that stores the boxes, but how do I get the values for each box?

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                cubes.Add(new Vector3(x, map[x, z], z), Matrix.Identity, grass);
                boxList.Add(something here);
            }
        }

        public Cube(GraphicsDevice graphicsDevice)
        {
            device = graphicsDevice;
            var vertices = new List<VertexPositionTexture>();
            BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1));
            BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1));
            BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0));
            BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0));
            BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1));
            BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0));
            cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration,
                                                vertices.Count, BufferUsage.WriteOnly);
            cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        }

    There aren't any clearly defined variables for the bounds of each cube created, so what do I create the bounding box from?
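
    A hedged sketch: the BuildFace calls above span exactly one unit on each axis, so each cube occupies the unit cell whose minimum corner is its world position. Assuming XNA's BoundingBox type, that is enough to fill boxList inside the existing loop:

        for (int x = 0; x < mapHeight; x++)
        {
            for (int z = 0; z < mapWidth; z++)
            {
                Vector3 min = new Vector3(x, map[x, z], z); // cube's grid corner
                Vector3 max = min + Vector3.One;            // face offsets run 0..1 per axis
                cubes.Add(min, Matrix.Identity, grass);
                boxList.Add(new BoundingBox(min, max));
            }
        }

    A Ray built from the unprojected mouse position can then be tested with ray.Intersects(boxList[i]), which returns the hit distance or null.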

    Read the article

  • Rotate sprite to face 3D camera

    - by omikun
    I am trying to rotate a sprite so it is always facing a 3D camera.

        shaders->setUniform("camera", gCamera.matrix());
        glm::mat4 scale = glm::scale(glm::mat4(), glm::vec3(5e5, 5e5, 5e5));
        glm::vec3 look = gCamera.position();
        glm::vec3 right = glm::cross(gCamera.up(), look);
        glm::vec3 up = glm::cross(look, right);
        glm::mat4 newTransform = glm::lookAt(glm::vec3(0), gCamera.position(), up) * scale;
        shaders->setUniform("model", newTransform);

    In the vertex shader:

        gl_Position = camera * model * vec4(vert, 1);

    The object will track the camera if I move the camera up or down, but if I rotate the camera around it, it rotates in the opposite direction, so I end up seeing its front twice and its back twice as I rotate 360 degrees around it. What am I doing wrong?

    Read the article

  • How do I identify mouse clicks versus mouse down in games?

    - by Tristan
    What is the most common way of handling mouse clicks in games, given that all you have for detecting input from the mouse is whether a button is up or down? I currently just treat mouse down as a click, but this simple approach limits me greatly in what I want to achieve. For example, I have some code that should only be run once per mouse click, but using mouse down as a click can cause the code to run more than once, depending on how long the button is held down. So I need to do it on a click! But what is the best way to handle a click? Is a click when the mouse goes from up to down, or from down to up? Or is it a click if the button was down for less than x frames/milliseconds - and if so, is it considered mouse down and a click if it's down for x frames/milliseconds, or a click then mouse down? I can see that each of the approaches has its uses, but which is the most common in games? And maybe I'll ask more specifically: which is the most common in RTS games?
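
    A hedged sketch of the usual pattern, using XNA-style input as an example: keep last frame's button state and fire the click on the edge where the button goes from down to up. Acting on release is also what lets RTS games distinguish a click from the start of a drag-selection box.

        MouseState previous;

        void Update()
        {
            MouseState current = Mouse.GetState();

            // A "click" is the transition from pressed to released.
            bool clicked = previous.LeftButton == ButtonState.Pressed
                        && current.LeftButton == ButtonState.Released;

            if (clicked)
            {
                // Runs exactly once per click, however long the button was held.
            }

            previous = current; // remember this frame's state for the next one
        }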

    Read the article

  • Low-level GPU code and Shader Compilation

    - by ktodisco
    Bear with me, because I will raise several questions at once. I still feel, though, that overall this can be treated as one question that may be answered succinctly. I recently dove into solidifying my understanding of the assembly language, low-level memory operations, CPU structure, and program optimizations. This also sparked my interest in how higher-level shading languages, GLSL and HLSL in particular, are compiled and optimized, as well as what formats they are reduced to before machine code is generated (assuming they are not converted directly into machine code). After a bit of research into this, the best resource I've found is this presentation from ATI about the compilation of and optimizations for HLSL. I also found sample ARB assembly code. This sort of addressed my original curiosity, but it raised several other questions. The assembler code in the ATI presentation seems like it contains instructions specifically targeted for the GPU, but is this merely a hypothetical example created for the purpose of conceptual understanding, or is this code really generated during shader compilation? If so, is it possible to inspect it, or even write it in place of the higher-level syntax? My initial searches for an answer to the last question tell me that this may be disallowed, but I have not dug too deep yet. Also, along the same lines, are GLSL shader programs compiled into ARB assembly code before machine code is generated, and is it possible to write direct ARB assembly? Lastly, and perhaps what I am most interested in finding out: are there comprehensive resources on shader compilation and low-level GPU code? I have been unable to find any thus far. I ask simply because I am curious :)

    Read the article

  • GLSL vertex shaders with movement vs. vertices off the screen

    - by user827992
    If I have a vertex shader that manages some movement and variation in the positions of some vertices in my OpenGL context, is OpenGL smart enough to run this shader only on the vertices visible on the screen? This part of the OpenGL programmable pipeline is not clear to me, because the sources are not really clear about it: they talk about fragments and pixels, and I get that, but what about vertex shaders? If you need a reference, I'm reading from this right now, and this online book has a couple of examples about this.

    Read the article

  • Car engine sound simulation

    - by Petteri Hietavirta
    I have been thinking about how to create realistic sound for a car. The main sound is the engine, then all kinds of wind, road and suspension sounds. Are there any open-source projects for engine sound simulation? Simply pitching up the sample does not sound too great. The ideal would be something that allows me to pick the type of engine (e.g. inline-4 vs. V8), add extras like turbo/supercharger whine, and finally set the load and RPM. Edit: Something like http://www.sonory.org/examples.html

    Read the article

  • How to fix issue with my 3D first person camera?

    - by dxCUDA
    My camera moves and rotates, but relative to the world's origin instead of the player's. I am having difficulty rotating the camera and then translating it in the direction relative to the camera's facing angle. I have been able to translate the camera and rotate relative to the player's origin, but not then rotate and translate in the direction the camera is facing. My goal is to have a standard FPS-style camera.

        float yaw, pitch, roll;
        D3DXMATRIX rotationMatrix;
        D3DXVECTOR3 Direction;
        D3DXMATRIX matRotAxis, matRotZ;
        D3DXVECTOR3 RotAxis;

        // Set the yaw (Y axis), pitch (X axis), and roll (Z axis) rotations in radians.
        pitch = m_rotationX * 0.0174532925f;
        yaw = m_rotationY * 0.0174532925f;
        roll = m_rotationZ * 0.0174532925f;

        up = D3DXVECTOR3(0.0f, 1.0f, 0.0f); // Create the up vector

        // Build eye, lookat and rotation vectors from player input data
        eye = D3DXVECTOR3(m_fCameraX, m_fCameraY, m_fCameraZ);
        lookat = D3DXVECTOR3(m_fLookatX, m_fLookatY, m_fLookatZ);
        rotation = D3DXVECTOR3(m_rotationX, m_rotationY, m_rotationZ);

        D3DXVECTOR3 camera[3] = { eye,      // Eye
                                  lookat,   // LookAt
                                  up };     // Up

        RotAxis.x = pitch;
        RotAxis.y = yaw;
        RotAxis.z = roll;

        D3DXVec3Normalize(&Direction, &(camera[1] - camera[0])); // Direction vector
        D3DXVec3Cross(&RotAxis, &Direction, &camera[2]);         // Strafe vector
        D3DXVec3Normalize(&RotAxis, &RotAxis);

        // Create the rotation matrix from the yaw, pitch, and roll values.
        D3DXMatrixRotationYawPitchRoll(&matRotAxis, pitch, yaw, roll);

        // Rotate direction
        D3DXVec3TransformCoord(&Direction, &Direction, &matRotAxis);
        // Transform the up vector
        D3DXVec3TransformCoord(&camera[2], &camera[2], &matRotAxis);
        // Translate in the direction of player rotation
        D3DXVec3TransformCoord(&camera[0], &camera[0], &matRotAxis);

        camera[1] = Direction + camera[0]; // Avoid gimbal locking

        D3DXMatrixLookAtLH(&in_viewMatrix, &camera[0], &camera[1], &camera[2]);

    Read the article

  • Best way to go about sorting 2D sprites in an "RPG Maker"-style RPG

    - by Aaron Stewart
    I am trying to come up with the best way to draw overlapping sprites without any issues. I was thinking of having a SortedDictionary and setting each entity's key to its Y position relative to the max bound of the simulation, i.e. a Z value. I'd update the "Z" value in the update method each frame, if the entity's position has changed at all. For those who don't know what I mean: I want a character standing closer in front of another character to be drawn on top, and if they are behind that character, drawn behind. I'm leery of using SpriteBatch's back-to-front or front-to-back sorting; I've been doing some searching and have been under the impression that they can be a bad idea, and I want to know exactly how other people deal with their depth sorting. Ultimately, I'm trying to settle on a good sorting method before I get so far in that refactoring the system becomes painful.
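
    A hedged sketch of the layerDepth route, assuming XNA's SpriteBatch (Entity and maxY are placeholder names, not from the question): map each entity's Y coordinate into [0, 1] and let the sprite batch sort.

        spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
        foreach (Entity e in entities)
        {
            // Larger Y (lower on screen) -> smaller depth -> drawn in front.
            float depth = 1f - (e.Position.Y / maxY);
            spriteBatch.Draw(e.Texture, e.Position, null, Color.White,
                             0f, Vector2.Zero, 1f, SpriteEffects.None, depth);
        }
        spriteBatch.End();

    If the per-draw sorting is the worry, the alternative is to keep SpriteSortMode.Deferred and sort the entity list by Y yourself each frame before issuing the draws - same result, with the sort cost explicit in your own code.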

    Read the article

  • How do I load tmx files with Slick2d?

    - by mbreen
    I just started using Slick2D and learned how simple it is to load in a tilemap and display it. I tried at least a dozen different tmx files from numerous examples to see whether the actual file was corrupted. Every time I get this error:

        Exception in thread "main" java.lang.RuntimeException: Resource not found: data/maps/desert.tmx
            at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69)
            at org.newdawn.slick.tiled.TiledMap.<init>(TiledMap.java:101)
            at game.Game.init(Game.java:17)
            at game.Tunneler.initStatesList(Tunneler.java:37)
            at org.newdawn.slick.state.StateBasedGame.init(StateBasedGame.java:164)
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:390)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at game.Tunneler.main(Tunneler.java:29)

    Here is my Game class:

        package game;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;
        import org.newdawn.slick.tiled.TiledMap;

        public class Game extends BasicGameState {
            private int stateID = -1;
            private TiledMap map = null;

            public Game(int stateID) {
                this.stateID = stateID;
            }

            public void init(GameContainer container, StateBasedGame game) throws SlickException {
                map = new TiledMap("data/maps/desert.tmx", "maps"); // ERROR
            }

            public void render(GameContainer container, StateBasedGame game, Graphics g) throws SlickException {
                //map.render(0,0);
            }

            public void update(GameContainer container, StateBasedGame game, int delta) throws SlickException {
            }

            public int getID() { return stateID; }
        }

    I've tried to see if anyone else has had similar problems, but haven't turned up anything. I am able to load other files, so I don't believe it's a compiler issue. My menu class can load images and display them just fine. Also, the file path is correct. Please let me know if you have any pointers that might help me sort this out.

    Read the article

  • I am looking for a graduation project idea for a bachelor's in computer engineering [closed]

    - by project idea
    I am interested in computer graphics and I have developed many hobby projects, mostly 2D and 3D games/scenes in DirectX and OpenGL, but for a grad project professors won't allow games. I browsed many similar questions here and I am convinced the project should be something I am really interested in, as I will give considerable time to it. But apart from games I am not able to decide on a topic. I am also open to ideas for social apps and Android.

    Read the article

  • Trouble with UV Mapping Blender => Unity 3

    - by Lea Hayes
    For some reason I am getting nasty grey edges around the edges of rendered 3D models that are not present in Blender. I seem to be able to solve the problem by reducing the size of the UV coordinates within the part of the texture that is to be mapped. But this means that:

    - I am wasting valuable texture space
    - I lose accuracy in drawn UV maps

    Could I be doing something wrong, perhaps a setting in Unity that needs changing? I have watched countless tutorials which demonstrate Blender's default generated UV coordinates with "Texture Paint", perfectly aligned in Unity. Here is an illustration of the problem:

    Left: approximately 15 pixels of margin on each side of the UV coordinates. Right: approximately 3 pixels of margin on each side of the UV coordinates. Note: the texture image resolution is 1024x1024.

    Read the article

  • Axis-Aligned Bounding Boxes vs Bounding Ellipse

    - by Griffin
    Why is it that most, if not all, collision detection algorithms today require each body to have an AABB, for use in the broad phase only? It seems to me that simply placing a circle at the body's centroid and extending the radius until the circle encompasses the entire body would be optimal. This would not need to be updated after the body rotates, and broad-phase overlap calculation would be faster too. Correct? Bonus: would a bounding ellipse be practical for broad-phase calculations as well, since it would better represent long, skinny shapes? Or would it require extensive calculations, defeating the purpose of the broad phase?

    Read the article

  • What kind of steering behaviour or logic can I use to get mobiles to surround another?

    - by Vaughan Hilts
    I'm using pathfinding in my game to lead a mob to another player (to pursue them). This works to get them on top of the player, but I want them to stop slightly before their destination (so picking the penultimate node works fine). However, when multiple mobs are pursuing the same target, they sometimes "stack on top of each other". What's the best way to avoid this? I don't want to treat the mobs as opaque and blocking (because they're not; you can walk through them), but I want the mobs to have some sense of structure. Example: imagine that each snake guided itself to me and should surround "Setsuna". Notice how both snakes have chosen to prong me? This is not a strict requirement; even being slightly offset is okay. But they should "surround" Setsuna.
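
    One common answer - offered here as a hedged sketch, not from the question - is a separation steering force: each pursuer seeks its target but also steers away from nearby pursuers, which spreads the group into a loose ring without making anyone solid. The Mob type, its fields, and the constants are all illustrative.

        // Seek the target while pushing away from nearby fellow pursuers.
        Vector2 Steer(Mob self, Vector2 target, IEnumerable<Mob> mobs)
        {
            Vector2 seek = Vector2.Normalize(target - self.Position);

            Vector2 separation = Vector2.Zero;
            foreach (Mob other in mobs)
            {
                if (other == self) continue;
                Vector2 away = self.Position - other.Position;
                float dist = away.Length();
                if (dist > 0f && dist < 48f)            // 48 px radius: a tuning knob
                    separation += away / (dist * dist); // stronger when closer
            }

            return seek + 2f * separation; // the weight is another tuning knob
        }

    An alternative with more explicit "structure" is to hand out slots: compute N evenly spaced points on a circle around the target and path each mob to its own slot.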

    Read the article

  • Pygame surfaces and their Rects

    - by Jaka Novak
    I am trying to understand how pygame surfaces work. I am confused about the Rect position of a Surface object. If I blit a surface onto the screen at some position, the surface is drawn at the right position, but the Rect of the surface is still at position (0, 0)... I tried writing my own surface class with a new rect, but I am not sure that is the right solution. My goal is to be able to move a surface like an image, with rect.move() or something like that. If there is a solution, I would be happy to read it. Thanks for your answer and for taking the time to read this awful English. In case it helps, here is some code for better understanding of my problem (run it first, and then uncomment the two commented lines and run again to see the difference):

        import pygame
        from pygame.locals import *

        class SurfaceR(pygame.Surface):
            def __init__(self, size, position):
                pygame.Surface.__init__(self, size)
                self.rect = pygame.Rect(position, size)
                self.position = position
                self.size = size

            def get_rect(self):
                return self.rect

        def main():
            pygame.init()
            screen = pygame.display.set_mode((640, 480))
            pygame.display.set_caption("Screen!?")
            clock = pygame.time.Clock()
            fps = 30

            white = (255, 255, 255)
            red = (255, 0, 0)
            green = (0, 255, 0)
            blue = (0, 0, 255)

            surface = pygame.Surface((70, 200))
            surface.fill(red)
            surface_re = SurfaceR((300, 50), (100, 300))
            surface_re.fill(blue)

            while True:
                for event in pygame.event.get():
                    if event.type == QUIT:
                        return 0

                screen.blit(surface, (100, 50))
                screen.blit(surface_re, surface_re.position)
                #pygame.draw.rect(screen, white, surface.get_rect())
                #pygame.draw.rect(screen, white, surface_re.get_rect())
                pygame.display.update()
                clock.tick(fps)

        if __name__ == "__main__":
            main()

    Read the article

  • Load different levels in XML

    - by Anearion
    My question is more of a theoretical one than a practical one. I'm about to start developing a game for Android, text-based, so I won't need sprites or animation, nor a game engine. Let's say it is similar to a sudoku game, where each level is a harder version of sudoku and each level has some questions to be answered over the sudoku itself. I was wondering whether the better way is to have only one XML file containing all the different levels, each one with its meta-tags, or whether the different approach of making n XML files, where each one is a level, is preferred. At the moment a level would have these tags:

        <level>
            <question>Question_1</question>
            <hint1>what does it do?</hint1>
            <hint2>where...</hint2>
            ...
            <hintN>how...</hintN>
        </level>

    So each level could have some items to read, and that's what made me think that maybe different files are better, because if I have to load level 10 I can read only the 10.xml file. I hope my question isn't too stupid. Thanks in advance.

    Read the article

  • Creating blur with an alpha channel, incorrect inclusion of black

    - by edA-qa mort-ora-y
    I'm trying to do a blur on a texture with an alpha channel. Using a typical approach (two-pass, Gaussian weighting) I end up with a very dark blur. The reason is that the blurring does not properly account for the alpha channel. It happily blurs in the invisible part of the image, which happens to be black, and thus results in a very dark blur. Is there a technique to blur that properly accounts for the alpha channel?
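
    The standard remedy - described here as a hedged sketch, not taken from the question - is to blur premultiplied colour: weight each tap's colour by its alpha as well as its kernel weight, then divide the accumulated colour by the accumulated alpha-weight, so fully transparent (black) texels contribute nothing. One output pixel, written out in C# for clarity:

        // Each sample is one kernel tap: straight (non-premultiplied) colour,
        // alpha, and a Gaussian kernel weight W (weights summing to 1).
        (float R, float G, float B, float A) BlurPixel(
            IReadOnlyList<(float R, float G, float B, float A, float W)> samples)
        {
            float r = 0, g = 0, b = 0, a = 0;
            foreach (var s in samples)
            {
                r += s.R * s.A * s.W; // colour weighted by alpha AND kernel weight
                g += s.G * s.A * s.W;
                b += s.B * s.A * s.W;
                a += s.A * s.W;       // alpha weighted by kernel weight only
            }
            // Un-premultiply; transparent texels never touched the colour sums.
            return a > 0 ? (r / a, g / a, b / a, a) : (0f, 0f, 0f, 0f);
        }

    The same idea should carry over to a two-pass shader: multiply rgb by a when sampling, and divide by the blurred alpha at the end.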

    Read the article

  • Is there a grey area with copyright infringement?

    - by Z.O
    Currently a student, I'm trying to put together a game for iOS. From everywhere I've read, it seems a game's sound and art are a part of its IP and covered under its copyright. That being said, say I wanted to use the coin sound effect from the original Mario (less than 1s long and used sparingly)... would anyone really care? Having no experience with this, I'm just wondering if cases like this are treated like "you're driving slightly over the speed limit, but nobody cares" or as "you stole that car". Thanks for any insight anyone may be able to provide.

    Read the article

  • 2.5D action RPG game

    - by Phorden
    I want to make a 2.5D action RPG in the next, say, five years. I need to learn a language first, and I have started with C#. I haven't gotten too far into learning it, and I would like advice on the best way to approach making a game like this in the long run. Work with XNA Game Studio, or stop and learn C++ and the UDK? Or maybe there is another good way to approach this. I want to learn programming, so just using a visual editor without learning to code is not the way I want to go. I also don't want to write my game engine from scratch. I'm all ears for advice.

    Read the article

  • Separate shaders from HTML file in WebGL

    - by Chris Smith
    I'm ramping up on WebGL and was wondering what the best way is to specify my vertex and fragment shaders. Looking at some tutorials, the shaders are embedded directly in the HTML (and referenced via an ID). For example:

        <script id="shader_1-fs" type="x-shader/x-fragment">
            precision highp float;
            void main(void) {
                // ...
            }
        </script>

        <script id="shader_1-vs" type="x-shader/x-vertex">
            attribute vec3 aVertexPosition;
            uniform mat4 uMVMatrix;
            // ...

    My question is: is it possible to have my shaders in a separate file? (Ideally as plain text.) I presume this is straightforward in JavaScript. Is there essentially a way to do this:

        var shaderText = LoadRemoteFileOnServer('/shaders/shader_1.txt');

    Read the article
