Search Results

Search found 25952 results on 1039 pages for 'development lifecycle'.

Page 465 of 1039

  • CCUserDefault, iOS/Android and game updates

    - by Luke
    My game uses cocos2d-x and will be published on iOS first, later on Android. I save a lot of things with CCUserDefault (scores, which levels were completed, number of coins taken, etc.). But now I have a big doubt: what will happen when the game receives its first update? CCUserDefault uses an XML file stored somewhere in the app's storage space. This file is created and retained until the app is uninstalled. I am wondering what happens when the app is updated. Will the old XML file be kept? And if not, how should I handle app updates (updates in the sense that 2, 3 or more new level packages will be added, while the information about the old ones, like scores, which levels were finished and which were not, and the number of coins, absolutely must not be lost)?

    Read the article

  • Applying prerecorded animations to models with the same skeleton

    - by Jeremias Pflaumbaum
    Well, my question sounds a bit like "how do I apply mo-cap animations to my model", but that's not really it, I guess. The animations and the models share the same skeleton, but the models vary in size and proportion, and I still want to be able to apply any animation to any model. I think this should be possible since the models have the same skeleton bone structure and the bones are always in the same area; only their position varies from model to model. In particular I'm trying to apply this to 2D characters that have 2 arms, 2 legs, a head and a body, but if you have anything related to this topic, even if it's 3D related (keywords, articles, books, whatever), I'm grateful for everything, because I'm a bit stuck at the moment. Cheers, Jery

    Read the article

  • Control convention for circular movement?

    - by Christian
    I'm currently doing a kind of training project in Unity (still a beginner). It's supposed to be somewhat like Breakout, but instead of just going left and right I want the paddle to circle around the center point. This is all fine and dandy, but the problem I have is: how do you control this with a keyboard or gamepad? For touch and mouse control I could work around the problem by letting the paddle follow the cursor/finger, but with the other control methods I'm a bit stumped. With a keyboard, for example, I could either make it so that the Left arrow always moves the paddle clockwise (it starts at the bottom of the circle), or I could link it to the actual direction, meaning that if the paddle is at the bottom, it goes left and up along the circle or, if it's in the upper hemisphere, it moves left and down, both times toward the outer left point of the circle. Both feel kind of weird. With the first one, it can be counterintuitive to press Left to move the paddle right when it's in the upper area, while with the second method you'd need to constantly switch buttons to keep moving. So, long story short: is there any kind of existing standard, convention or accepted example for this type of movement and the corresponding controls? I didn't really know what to google for ("control conventions for circular movement" was one of the searches I tried, but it didn't give me much), and I also didn't really find anything about this on here. If there is a question that I simply didn't see, please excuse the duplicate.
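
    Whichever convention wins out, the implementation boils down to driving a single angle from the input and deriving the paddle position from it. A minimal sketch (Python purely for illustration, arbitrary speed constant) using the "Left always rotates the same way" mapping, with a comment on where the direction-relative variant would differ:

        import math

        class Paddle:
            def __init__(self, radius, center=(0.0, 0.0)):
                self.angle = -math.pi / 2        # start at the bottom of the circle
                self.radius = radius
                self.center = center

            def update(self, input_dir, dt, speed=2.5):
                """input_dir: -1 for Left, +1 for Right (y-up math convention).

                With this mapping, Right always advances the angle the same way
                around the ring. For the direction-relative variant, flip
                input_dir whenever math.sin(self.angle) > 0 (upper half), so
                'Right' keeps moving the paddle toward the screen's right.
                """
                self.angle += input_dir * speed * dt
                x = self.center[0] + self.radius * math.cos(self.angle)
                y = self.center[1] + self.radius * math.sin(self.angle)
                return x, y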

    Read the article

  • Multithreading for a mixed-genre game in Python?

    - by arrogantc
    So here's the situation. I'm making a game that mixes two genres: arcade shooter and puzzler. They don't intertwine TOO much; all the interaction that really goes on is that every time an enemy is destroyed, a block is created. The blocks aren't even part of the main collision detection system; they have their own, more suited to their needs. What I want to ask is this: might it be a good idea to have the arcade shooter portion run on one thread, and the puzzle game portion run on another?
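
    Worth noting before reaching for threads: in CPython the GIL means two threads won't actually run the two update loops on separate cores, so the main benefit would be code isolation rather than speed, and the usual advice is to pass events through a queue instead of sharing state between the loops. A minimal, runnable sketch of that shape (the two loops are made-up stand-ins, not the game's real code):

        import threading, queue, time

        events = queue.Queue()   # shooter thread -> puzzle thread

        def shooter_loop():
            """Stand-in for the arcade-shooter update; posts an event per kill."""
            for column in range(3):
                time.sleep(0.1)                     # pretend an enemy just died
                events.put(("spawn_block", column))

        def puzzle_loop():
            """Stand-in for the puzzle update; consumes events, never touches shooter state."""
            for _ in range(3):
                kind, column = events.get()         # blocks until an event arrives
                print("puzzle: create block in column", column)

        threads = [threading.Thread(target=shooter_loop), threading.Thread(target=puzzle_loop)]
        for t in threads: t.start()
        for t in threads: t.join()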

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector View and Projection matrix:

        Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale,
                                                               -halfHeight * Scale, halfHeight * Scale,
                                                               1, 100000);
        Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader:

        float4x4 InvViewProjection;
        texture2D DepthTexture;
        texture2D NormalTexture;
        texture2D ProjectorTexture;
        float4x4 ProjectorViewProjection;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D projectorSampler = sampler_state
        {
            texture = <ProjectorTexture>;
            AddressU = Clamp;
            AddressV = Clamp;
        };

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = input.Position;
            output.PositionCopy = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);
            //return float4(depth.r, 0, 0, 1);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Compute projection
            float3 projection = tex2D(projectorSampler,
                postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
            return float4(projection, 1);
        }

    In the first part of the pixel shader the position is recovered from the G-buffer (I am using this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: the green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for advice and sorry for my English. EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • Trouble using Ray.Intersect method on bounding boxes in a 2D XNA game

    - by getsauce
    I am trying to use a ray and a bounding box to determine if a box is between the player and the mouse pointer in 2D space. When I test the code, the collision returns true when pointed at the box, but it also returns true under other circumstances where it shouldn't. For instance, if I have the player on the left and a box directly to the right, I can put the mouse pointer a few hundred pixels above the box or a few hundred below and it will still return true. Also, I can put my mouse pointer to the left of the player and in a certain area it will still return true. Does anyone have any idea what might cause this? I have left out definitions for some of my members and properties just to make this code sample easier to read. The Position property is just a Vector2 for where each object is located.

        ray = new Ray(new Vector3(player.Position, 0), new Vector3(mouse.Position, 0));
        box = new BoundingBox(new Vector3(box.Position, 0),
                              new Vector3(new Vector2(box.Position + box.Width, box.Position + box.Height), 0));

        if (ray.Intersects(box) != null)
            collision = true;
        else
            collision = false;
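
    One likely culprit, offered as a guess rather than a confirmed diagnosis: XNA's Ray takes a position and a direction, so passing new Vector3(mouse.Position, 0) as the second argument makes the direction the mouse's absolute coordinates rather than mouse.Position - player.Position. A sketch of the intended test (Python purely for illustration, plain tuples for vectors, treating the shot as a segment between player and mouse):

        def ray_hits_box(origin, target, box_min, box_max):
            """2D slab test over the segment origin..target.

            The ray DIRECTION is target - origin; building it from the raw
            target coordinates is the bug being guessed at above.
            """
            dx, dy = target[0] - origin[0], target[1] - origin[1]
            tmin, tmax = 0.0, 1.0          # restrict hits to the segment itself
            for o, d, lo, hi in ((origin[0], dx, box_min[0], box_max[0]),
                                 (origin[1], dy, box_min[1], box_max[1])):
                if abs(d) < 1e-9:
                    if o < lo or o > hi:
                        return False       # parallel to this slab and outside it
                    continue
                t1, t2 = (lo - o) / d, (hi - o) / d
                if t1 > t2:
                    t1, t2 = t2, t1
                tmin, tmax = max(tmin, t1), min(tmax, t2)
                if tmin > tmax:
                    return False
            return True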

    Read the article

  • Sanity checks vs file sizes

    - by Richard Fabian
    In your game assets, do you make room for explicit sanity checks, or do you have some generally expected bounds which you assert? I've been thinking about how we compress data, and my thought is that it's much better to have the former, and less of the latter. If your data can exceed its normal valid ranges, but any value that does so is an error, then surely that implies you're not compressing the data well enough? What do you do to find out if your data is compressed as far as it can be, and what do you use to ensure your data isn't corrupted and to ensure it's an official release? EDIT: I'm not interested in sanity-checking the file size. Instead, I'm asking how you manage your sanity checks, and whether the extra size they cost comes from explicit extra data, or from allowing data members enough room (data member size) to fall outside their valid range, so that corruption can be detected merely by looking at the asset in memory after loading.

    Read the article

  • What causes player box/world geometry glitches in old games?

    - by Alexander
    I'm looking to understand, and find the terminology for, what causes (or allows) players to interfere with geometry in old games. Famously, id Software's Quake 3 gave birth to a whole community of people breaking the physics by jumping, sliding, getting stuck and launching themselves off points in geometry. Some months ago (though I'd be darned if I can find it again!) I saw a conference talk by Bungie's Vic DeLeon and a colleague in which Vic briefly discussed the issues he ran into while attempting to wrap 'collision' objects (please correct my terminology) around environment objects, so that players could appear as though they were walking on organic surfaces while not clipping through them or appearing to walk on air at certain points, due to complexities in the modeling. My aim is to compose a case study essay for university in which I can tackle this issue in games, drawing on early exploits and how techniques have changed, both to address such exploits and to aid the gameplay itself. I have 3 current-day examples of where exploits still exist; however, specifically targeting id Software clearly shows they've massively improved their techniques between Quake 3 and Quake 4. So in summary, with your help please, I'd like to gain a slightly better understanding of this issue as a whole (its terminology mainly) so I can use terms and ask the right questions within the right contexts. In practical application, I know what it is, I know how to do it, but I don't have the benefit of level design knowledge yet and its technical widgety knick-knack terms =) Many thanks in advance, AJ

    Read the article

  • Implementing a wheeled character controller

    - by Lazlo
    I'm trying to implement Boxycraft's character controller in XNA (with Farseer), as Bryan Dysmas did (minus the jumping part, yet). My current implementation seems to sometimes glitch in between two parallel planes, and fails to climb 45 degree slopes. (YouTube videos in links, plane glitch is subtle). How can I fix it? From the textual description, I seem to be doing it right. Here is my implementation (it seems like a huge wall of text, but it's easy to read. I wish I could simplify and isolate the problem more, but I can't):

        public Body TorsoBody { get; private set; }
        public PolygonShape TorsoShape { get; private set; }
        public Body LegsBody { get; private set; }
        public Shape LegsShape { get; private set; }
        public RevoluteJoint Hips { get; private set; }
        public FixedAngleJoint FixedAngleJoint { get; private set; }
        public AngleJoint AngleJoint { get; private set; }

        ...

        this.TorsoBody = BodyFactory.CreateRectangle(this.World, 1, 1.5f, 1);
        this.TorsoShape = new PolygonShape(1);
        this.TorsoShape.SetAsBox(0.5f, 0.75f);
        this.TorsoBody.CreateFixture(this.TorsoShape);
        this.TorsoBody.IsStatic = false;

        this.LegsBody = BodyFactory.CreateCircle(this.World, 0.5f, 1);
        this.LegsShape = new CircleShape(0.5f, 1);
        this.LegsBody.CreateFixture(this.LegsShape);
        this.LegsBody.Position -= 0.75f * Vector2.UnitY;
        this.LegsBody.IsStatic = false;

        this.Hips = JointFactory.CreateRevoluteJoint(this.TorsoBody, this.LegsBody, Vector2.Zero);
        this.Hips.MotorEnabled = true;
        this.AngleJoint = new AngleJoint(this.TorsoBody, this.LegsBody);
        this.FixedAngleJoint = new FixedAngleJoint(this.TorsoBody);
        this.Hips.MaxMotorTorque = float.PositiveInfinity;
        this.World.AddJoint(this.Hips);
        this.World.AddJoint(this.AngleJoint);
        this.World.AddJoint(this.FixedAngleJoint);

        ...

        public void Move(float m) // -1, 0, +1
        {
            this.Hips.MotorSpeed = 0.5f * m;
        }

    Read the article

  • Optimal sprite size for rotations

    - by Panda Pajama
    I am making a sprite-based game, and I have a bunch of images that I get in a ridiculously large resolution. I scale them to the desired sprite size (for example 64x64 pixels) before converting them to a game resource, so when I draw a sprite inside the game, I don't have to scale it. However, if I rotate this small sprite inside the game (engine-agnostically), some destination pixels will get interpolated and the sprite will look smudged. This is of course dependent on the rotation angle as well as the interpolation algorithm, but regardless, there is not enough data to correctly sample a specific destination pixel. So there are two solutions I can think of. The first is to use the original huge image, rotate it to the desired angles, then downscale all the resulting variations and put them in an atlas. This has the advantage of being quite simple to implement, but it naively consumes twice as much sprite space for each rotation (each rotation must be inscribed in a circle whose diameter is the diagonal of the original sprite's rectangle, so the square cell that holds it has twice the area of the original rectangle, supposing square sprites). It also has the disadvantage of only having a predefined set of rotations available, which may or may not be okay depending on the game. So the other choice would be to store a larger image, and rotate and downscale while rendering, which leads to my question. What is the optimal size for this sprite? Optimal meaning that a larger image would have no effect on the resulting image. This definitely depends on the image size and the number of desired rotations, with no data loss down to 1/256, which is the minimum representable color difference. I am looking for a theoretical, general answer to this problem, because trying a bunch of sizes may be okay, but is far from optimal.
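
    For the first option (pre-rotating from the huge source), the order of operations matters: rotate while the detail is still there, then downscale into a cell of side ceil(64 * sqrt(2)) so any angle fits. A minimal offline sketch, assuming Pillow is available; the file name and the 15-degree step are made up. Pasting the source onto a diagonal-sized square canvas first keeps the sprite's scale identical across every angle:

        import math
        from PIL import Image   # assumes Pillow is installed

        SPRITE = 64                                   # in-game sprite size
        PADDED = math.ceil(SPRITE * math.sqrt(2))     # atlas cell that fits any rotation

        def prerotate(path, angles):
            """Rotate the high-resolution source first, then downscale,
            so all filtering happens while the detail is still present."""
            src = Image.open(path).convert("RGBA")
            side = math.ceil(max(src.size) * math.sqrt(2))
            canvas = Image.new("RGBA", (side, side), (0, 0, 0, 0))
            canvas.paste(src, ((side - src.width) // 2, (side - src.height) // 2))
            frames = []
            for a in angles:
                # expand=False keeps the canvas size, and therefore the scale, constant
                rotated = canvas.rotate(a, resample=Image.BICUBIC)
                frames.append(rotated.resize((PADDED, PADDED), Image.LANCZOS))
            return frames

        atlas_frames = prerotate("ship_highres.png", range(0, 360, 15))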

    Read the article

  • Generated 3d tree meshes

    - by Jari Komppa
    I did not find a question along these lines yet; correct me if I'm wrong. Trees (and flora in general) are common in games. Due to their nature, they are a good candidate for procedural generation. There's SpeedTree, of course, if you can afford it; as far as I can tell, it doesn't provide the possibility of generating your tree meshes at runtime. Then there's SnappyTree, an online WebGL-based tree generator built on proctree.js, which is some ~500 lines of JavaScript. One could use either of the above (or some other tree generator I haven't stumbled upon) to create a few dozen tree meshes beforehand, or model them from scratch in a 3D modeller, and then randomly mirror/scale them for a few more variants. But I'd rather have a free, linkable tree mesh generator. Possible solutions: Port proctree.js to C++ and deal with the open source license (it doesn't seem to be GPL, so this could be doable; the author may also be willing to co-operate to make the license even more free). Roll my own based on L-systems. Don't bother, just use offline-generated trees. Use some other method I haven't found yet.
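
    If the "roll my own based on L-systems" route is tempting, the core really is just string rewriting plus a turtle-style interpretation of the result. A tiny sketch (Python for illustration; the rule set is an arbitrary bush-like example, not taken from any of the tools mentioned):

        def expand(axiom, rules, depth):
            """Classic string-rewriting L-system: 'F' = grow a segment,
            '[' / ']' = push/pop a branch, '+' / '-' = turn."""
            s = axiom
            for _ in range(depth):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        # Feed the expanded string to a turtle that emits branch segments
        # (cylinders or camera-facing quads) instead of drawn lines.
        rules = {"F": "FF+[+F-F-F]-[-F+F+F]"}
        print(expand("F", rules, 3)[:80], "...")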

    Read the article

  • Dynamically load images inside jar

    - by Rahat Ahmed
    I'm using Slick2D for a game, and while it runs fine in Eclipse, I'm trying to figure out how to make it work when exported to a runnable .jar. I have it set up so that I load every image located in the res/ directory. Here's the code:

        /**
         * Loads all .png images located in source folders.
         * @throws SlickException
         */
        public static void init() throws SlickException {
            loadedImages = new HashMap<>();
            try {
                URI uri = new URI(ResourceLoader.getResource("res").toString());
                File[] files = new File(uri).listFiles(new FilenameFilter() {
                    @Override
                    public boolean accept(File dir, String name) {
                        if (name.endsWith(".png"))
                            return true;
                        return false;
                    }
                });
                System.out.println("Naming filenames now.");
                for (File f : files) {
                    System.out.println(f.getName());
                    FileInputStream fis = new FileInputStream(f);
                    Image image = new Image(fis, f.getName(), false);
                    loadedImages.put(f.getName(), image);
                }
            } catch (URISyntaxException | FileNotFoundException e) {
                System.err.println("UNABLE TO LOAD IMAGES FROM RES FOLDER!");
                e.printStackTrace();
            }
            font = new AngelCodeFont("res/bitmapfont.fnt", Art.get("bitmapfont.png"));
        }

    Now the obvious problem is the line URI uri = new URI(ResourceLoader.getResource("res").toString()); if I pack the res folder into the .jar, there will not be a res folder on the filesystem. How can I iterate through all the images in the compiled .jar itself, or what would be a better system to automatically load all images?

    Read the article

  • Best way to handle realtime melee AI in authoritative network environment

    - by PrimeDerektive
    So I've been working on a multiplayer game for a bit; it's a co-op action RPG with real-time combat. If you've seen or played TERA, I'd say it's comparable to that, but not an MMO, heh. I'm currently handling the AI units authoritatively: the server calculates their pathing, movement, and pursue/attack logic, syncs the movement to the clients 15 times per second, and sends state changes when they happen. When I emulate 200 ms ping, though, the client can perceive being out of range of an AI's attack but still take the hit, because on the server he hadn't moved that far yet. This also plays hell with my real-time blocking. I don't really want to allow the clients to be the ones who say "that was out of range" or "I blocked that", but I'm not really sure how else to handle it.
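
    One technique worth naming here, since the question stops at "not sure how else to handle it": server-side lag compensation, where the server keeps a short position history per unit and evaluates range and blocking checks at the time the client actually acted rather than at packet arrival time. A rough sketch of the bookkeeping only (made-up names, naive linear scan):

        from collections import deque

        class PositionHistory:
            """Ring buffer of (timestamp, position) samples for one AI unit,
            so the server can re-check range/blocking 'in the past'."""
            def __init__(self, keep_seconds=1.0):
                self.samples = deque()
                self.keep = keep_seconds

            def record(self, t, pos):
                self.samples.append((t, pos))
                while self.samples and t - self.samples[0][0] > self.keep:
                    self.samples.popleft()

            def position_at(self, t):
                # Newest sample that is not newer than t.
                for ts, pos in reversed(self.samples):
                    if ts <= t:
                        return pos
                return self.samples[0][1] if self.samples else None

        # On receiving an attack or block from a client:
        #   pos_then = history.position_at(server_time - client_rtt / 2)
        #   ...then run the range / facing / blocking check against pos_then.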

    Read the article

  • Generating triangles from a square grid

    - by vivi
    I have a 2D square grid of values representing terrain elevations, and I want to generate triangles from that grid to make a 3D view of the terrain. My first thought was to split each square diagonally into 2 triangles; however, the split diagonal can clearly be seen, especially from the top. [Sorry, as a new user I can't post images, please see here: imgur] Is there a recommended way to generate triangles to remove/reduce this effect?
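
    A common fix, offered as a suggestion rather than the one recommended way: don't use the same diagonal in every cell. Either alternate it in a checkerboard pattern, or pick, per cell, the diagonal whose two endpoints differ least in elevation, which tends to follow ridges and valleys instead of cutting straight across them. A small sketch (Python; vertices emitted as (x, elevation, z)):

        def triangulate(height):
            """height[z][x] -> list of triangles, choosing per cell the diagonal
            whose two corners are closest in elevation."""
            tris = []
            rows, cols = len(height), len(height[0])
            for z in range(rows - 1):
                for x in range(cols - 1):
                    a, b = height[z][x],     height[z][x + 1]
                    c, d = height[z + 1][x], height[z + 1][x + 1]
                    if abs(a - d) <= abs(b - c):     # split along the a-d diagonal
                        tris += [((x, a, z), (x + 1, b, z), (x + 1, d, z + 1)),
                                 ((x, a, z), (x + 1, d, z + 1), (x, c, z + 1))]
                    else:                            # split along the b-c diagonal
                        tris += [((x, a, z), (x + 1, b, z), (x, c, z + 1)),
                                 ((x + 1, b, z), (x + 1, d, z + 1), (x, c, z + 1))]
            return tris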

    Read the article

  • XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision

    - by JiminyCricket
    I've been fooling around with moving on sloped tiles in XNA and it is semi-working but not completely satisfactory. I also have been thinking that having sets of predetermined slopes might not give me terrain that looks "organic" enough. There is also the problem of having to construct several different types of tile for each slope when they're chained together (only 45 degree tiles will chain perfectly as I understand it). I had thought of somehow scanning for connected chains of sloped tiles and treating it as a new large triangle, as I was having trouble with glitching at the edges where sloped tiles connect. But, this leads back to the problem of limiting the curvature of the terrain. So...what I'd like to do now is create a simple image or texture of the terrain of a level (or section of the level) and generate a simple heightmap (of the Y's for each X) for the terrain. The player's Y position would then just be updated based on their X position. Is there a simple way of doing this (or a better way of solving this problem)? The main problem I can see with this method is the case where there are areas above the ground that can be walked on. Maybe there is a way to just map all walkable ground areas? I've been looking at this helpful bit of code: http://thirdpartyninjas.com/blog/2010/07/28/sloped-platform-collision/ but need a way to generate the actual points/vectors.
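
    For the "scan an image into a height lookup" idea, a minimal offline sketch, assuming Pillow and a level image where ground pixels are one known solid color (both of those are assumptions, not from the question). It records only the topmost ground pixel per column, so it inherits exactly the "walkable areas above the ground" limitation raised above:

        from PIL import Image   # assumes Pillow; any per-pixel access would do

        def build_heightmap(path, ground_color=(0, 0, 0)):
            """For every x, record the y of the highest ground pixel."""
            img = Image.open(path).convert("RGB")
            w, h = img.size
            px = img.load()
            heights = []
            for x in range(w):
                top = h                      # default: no ground in this column
                for y in range(h):
                    if px[x, y] == ground_color:
                        top = y
                        break
                heights.append(top)
            return heights

        # In the update loop (hypothetical player object):
        #   player.y = heights[int(player.x)] - player.height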

    Read the article

  • cocos2d event handler not fired when reentering scene

    - by Adam Freund
    I am encountering a very strange problem with my cocos2d app. I add a sprite to the page and have an event handler linked to it which replaces the scene with another scene. On that page I have another button to take me back to the original scene. When I am back on the original scene, the event handler doesn't get fired when I click on the sprite. Below is the relevant code. Thanks for any help!

        CCMenuItemImage *backBtnImg =
            [CCMenuItemImage itemWithNormalImage:@"btn_back.png"
                                   selectedImage:@"btn_back_pressed.png"
                                          target:self
                                        selector:@selector(backButtonTapped:)];
        backBtnImg.position = ccp(45, 286);
        CCMenu *backBtn = [CCMenu menuWithItems:backBtnImg, nil];
        backBtn.position = CGPointZero;
        [self addChild:backBtn];

    Event handler method (doesn't get called when the scene is re-entered):

        - (void)backButtonTapped:(id)sender {
            NSLog(@"backButtonTapped\n");
            CCMenuItemImage *backButton = (CCMenuItemImage *)sender;
            [backButton setNormalImage:[CCSprite spriteWithFile:@"btn_back_pressed.png"]];
            [[CCDirector sharedDirector] replaceScene:
                [CCTransitionFade transitionWithDuration:.25 scene:[MenuView scene] withColor:ccBLACK]];
        }

    Read the article

  • Too much delay while sending object over UDP to server

    - by RomZes
    I'm getting a 4-second delay when sending objects over UDP. I'm working on a small game and trying to implement multiplayer; for now I'm just trying to synchronize the movements of 2 balls on the screen. StartingPoint.java is my server (first player), which receives serialized objects (coordinates). SecondPlayer.java is the client that sends serialized objects to the server. When I move my first object, it appears 4 seconds later on the other screen.

    StartingPoint.java:

        @Override
        public void run() {
            byte[] receiveData = new byte[256];
            byte[] sendData = new byte[256];
            // DatagramSocket socketS;
            try {
                socket = new DatagramSocket(5000);
                System.out.println("Socket created on " + port + " port");
            } catch (SocketException e1) {
                // TODO Auto-generated catch block
                e1.printStackTrace();
            }
            while (true) {
                b1.update(this);
                b3.update();
                System.out.println("Starting server...");

                //// Receiving and deserializing object
                try {
                    //socket.setSoTimeout(1000);
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    byte[] data = packet.getData();
                    ByteArrayInputStream in = new ByteArrayInputStream(data);
                    ObjectInputStream is = new ObjectInputStream(in);
                    // socket.setSoTimeout(300);
                    b1 = (Ball) is.readObject();
                } catch (IOException | ClassNotFoundException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                repaint();
                try {
                    Thread.sleep(17);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

    SecondPlayer.java:

        @Override
        public void run() {
            while (true) {
                b.update();
                networkSend();
                repaint();
                try {
                    Thread.sleep(17);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

        public void networkSend() {
            // Serialize to a byte array
            try {
                ByteArrayOutputStream bStream = new ByteArrayOutputStream();
                ObjectOutputStream oo;
                oo = new ObjectOutputStream(bStream);
                oo.writeObject(b);
                oo.flush();
                oo.close();
                byte[] bufCar = bStream.toByteArray();
                //socket = new DatagramSocket();
                //socket.setSoTimeout(1000);
                InetAddress address = InetAddress.getByName("localhost");
                DatagramPacket packet = new DatagramPacket(bufCar, bufCar.length, address, port);
                socket.send(packet);
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }

    Read the article

  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference in rendering two scenes with the same setup:

    Scene 1: Example object: a dirt road (nothing else). Geometry: a detailed road, with all the bumps, cracks and so forth done in the mesh.

    Scene 2: Example object: a dirt road (nothing else). Geometry: a simple mesh in the form of a road, but in this case maps and textures are simulating the cracks, bumps, etc.

    So of these two, which one is likely to tax the hardware more? Or is it not a like-for-like comparison? What would be the best way of doing something like this? Go heavy on the textures? Or have a blend of both?

    Read the article

  • projected textures not appear on the "back" of the mesh as well?

    - by user975135
    I want to create blood wounds on my characters' bodies by using projected textures. I've watched some commentaries on games like Left 4 Dead, and they say they use projected textures for the blood. But the way projected textures work, if you project a texture onto a rigged character, say his chest, it will also appear on his back. So what's the trick? How do you get projected textures to appear only on one "side" of the mesh? I use the Panda3D game engine, if that helps.
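
    The usual answer, stated here as a general technique rather than anything Panda3D-specific: weight the projected color by how much the surface faces the projector, using the surface normal, so anything pointing away from the projector receives nothing. In a shader this is roughly saturate(dot(N, -projectorDirection)) multiplied into the decal; the same test in plain code:

        def projected_weight(surface_normal, projector_forward):
            """Returns 0..1; 0 means the point faces away from the projector
            and should not receive the decal (e.g. the character's back)."""
            # dot(N, -D): positive only when the surface faces the projector
            facing = -(surface_normal[0] * projector_forward[0]
                       + surface_normal[1] * projector_forward[1]
                       + surface_normal[2] * projector_forward[2])
            return max(0.0, facing)   # optionally raise to a power for a softer falloff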

    Read the article

  • Permanently Sync a wiimote with a computer

    - by Adam Geisweit
    I have tried many ways to sync my Wiimotes with my computer so that I can program games with them, but every time it only syncs them temporarily, or if it claims it can permanently sync them, it doesn't actually do it. It gets tiresome when I have to keep reconnecting them every time I want to save battery life. How would I be able to sync my Wiimote with my computer so that if I turn the Wiimote off, I can just hit any button and it will automatically sync up again?

    Read the article

  • Why was my Facebook game rejected with the note that "your app icon must not overlap with content in your cover image?"

    - by peterwilli
    My FB game just recently got rejected for two reasons. The first I fixed, but I just can't seem to figure out what they mean by the second, and I was hoping someone else had hit the same issue and knew what they meant. The remaining error is: "Cover Image: Your app icon must not overlap with content in your cover image. Click on 'Web Preview' in the 'App Details' section to check for overlap prior to submitting your app. See more here." All I know is that the rejection has something to do with the cover image, not the icons or the screenshots. The web preview of my game looks like this now: Please let me know what to do to get approved.

    Read the article

  • What would be a good filter to create 'magnetic deformers' from a depth map?

    - by sebf
    In my project, I am creating a system for deforming a highly detailed mesh (clothing) so that it 'fits' a convex mesh. To do this I use depth maps of the item and the 'hull' to determine at what point in world space the deviation occurs and the extent. Simply transforming all occluded vertices to the depths as defined by the 'hull' is fairly effective, and has good performance, but it suffers the problem of not preserving the features of the mesh and requires extensive culling to avoid false-positives. I would like instead to generate from the depth deviation map a set of simple 'deformers' which will 'push'* all vertices of the deformed mesh outwards (in world space). This way, all features of the mesh are preserved and there is no need to have complex heuristics to cull inappropriate vertices. I am not sure how to go about generating this deformer set however. I am imagining something like an algorithm that attempts to match a spherical surface to each patch of contiguous deviations within a certain range, but do not know where to start doing this. Can anyone suggest a suitable filter or algorithm for generating deformers? Or to put it another way 'compressing' a depth map? (*Push because its fitting to a convex 'bulgy' humanoid so transforms are likely to be 'spherical' from the POV of the surface.)

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
    I'm designing a networked, turn-based, 3D 6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to frame the question. What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which would give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the side would let the ship spin around that plane easily, while bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving, and it will only move fast in that direction). I plan to balance the whole game around this feature.

    The game would revolve around two phases: an orders phase and a combat phase. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real time for some time; then the action pauses and there's a new orders phase.

    The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at full power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring. The player doesn't want to fiddle with motors or anything; you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI would calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to be the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they move, adjusting the power of the propellers for their needs dynamically, but that may be too hard to implement and it's not really needed for the game to work. In both cases, how would that AI decide which propellers to activate for the best (or at least not the worst) trajectory to be achieved? I thought about some approaches:

    - Learning AI: The ship types would learn about their movement by trial and error, adjusting their behaviour with more uses, and finally becoming "smart". I don't want to get involved THAT far in AI coding, and I think it can be frustrating for the player (even if you can let it learn without playing).

    - Pre-calculated timestep movement: Upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.

    - Pre-calculated trajectories: The same as above, but for the whole trajectory rather than each delta-time, which would then be fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase and is still memory intensive, ugly and bad.

    - Continuous brute forcing: The AI continuously checks ALL possible propeller configurations throughout the entire combat phase, precalculates a few time steps and decides which is the best one based on that. Con: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.

    - Single brute forcing: Same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.

    - Continuous angle check: This is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given the current propeller's normal vector and the final one, you can approximate the power needed for the propeller based on the angle. You must do this continuously throughout the whole combat phase. I figured this one out recently, so I didn't put too much thought into it. A priori, it has the "what's good now might not be that good later" drawback too, and it doesn't care about the other propellers, which may act together to make a better propelling configuration.

    I'm really stuck here. Any ideas?
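
    One direction the list above doesn't mention, offered only as a sketch: treat each order as a desired net force and torque and solve a small thrust-allocation problem over the propellers every tick, least-squares style, which is a standard approach for over-actuated vehicles. The 2D version below assumes numpy, fixed propeller positions/directions in ship space, and clamps the result to [0, 1] as a crude stand-in for properly constrained optimization:

        import numpy as np

        def allocate_thrust(thrusters, desired_force, desired_torque):
            """thrusters: list of (position, direction) pairs in ship space.
            Returns a power level in [0, 1] per thruster."""
            cols = []
            for pos, direction in thrusters:
                fx, fy = direction
                torque = pos[0] * fy - pos[1] * fx    # 2D cross product r x F
                cols.append([fx, fy, torque])
            A = np.array(cols).T                      # 3 x N: force/torque per unit power
            b = np.array([desired_force[0], desired_force[1], desired_torque])
            power, *_ = np.linalg.lstsq(A, b, rcond=None)
            return np.clip(power, 0.0, 1.0)           # thrusters can only push, not pull

        # Example: two rear thrusters pushing forward (+y), one side thruster.
        ship = [((-1.0, -2.0), (0.0, 1.0)),
                (( 1.0, -2.0), (0.0, 1.0)),
                (( 0.0,  2.0), (1.0, 0.0))]
        print(allocate_thrust(ship, desired_force=(0.0, 1.0), desired_torque=0.2))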

    Read the article

  • Projectiles in tile mapped turn-based tactics game?

    - by Petteri Hietavirta
    I am planning to make a Laser Squad clone and I think I have most of the aspects covered. But the major headache is the projectiles being shot or thrown. The easy way would be to figure out the probability of a hit and just mark miss/hit, but I want the projectile to be able to hit something eventually (collateral damage!). Currently everything is a flat 2D tile map, and there would be full-height (wall, door) and half-height (desk, chair, window) obstacles. My idea is to draw an imaginary line from the shooter to the target and add some horizontal and vertical error based on the player's skills. Then I would trace the modified path until it hits something. This is basically what the original Laser Squad seems to do. Can you recommend any algorithms or other approaches for this?
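
    A minimal sketch of that "perturb the line, then march it across the grid" idea (Python for illustration; the error scale, step size, and range cap are made-up tuning values, and a real implementation would more likely use Bresenham or a DDA grid walk than fixed sub-steps):

        import math, random

        def trace_shot(grid, start, target, skill):
            """Walk from the shooter toward an error-perturbed aim point and
            return the first blocking tile hit, or None. skill is in [0, 1]."""
            max_error = (1.0 - skill) * 0.15          # radians of spread, tune to taste
            angle = math.atan2(target[1] - start[1], target[0] - start[0])
            angle += random.uniform(-max_error, max_error)
            x, y = float(start[0]), float(start[1])
            dx, dy = math.cos(angle), math.sin(angle)
            for _ in range(200):                      # hard range cap
                x += dx * 0.25                        # quarter-tile sub-steps
                y += dy * 0.25
                tx, ty = int(round(x)), int(round(y))
                if not (0 <= tx < len(grid[0]) and 0 <= ty < len(grid)):
                    return None                       # left the map
                if grid[ty][tx] == '#':               # full-height obstacle
                    return (tx, ty)
                # Half-height obstacles could block only with some probability here.
            return None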

    Read the article

  • How to move an object using X and Y coordinates in JavaScript

    - by Geroy290
    I am making a 2D game with JavaScript and HTML5, and am trying to move an image that I have drawn with JavaScript like so:

        // canvas
        var c = document.getElementById("gameCanvas");
        var ctx = c.getContext("2d");

        // baseball
        var baseball = new Image();
        baseball.onload = function() {
            ctx.drawImage(baseball, 400, 425);
        };
        baseball.src = "baseball2.png";

    I'm not sure how I would move it, though. I have seen many people just type something like ballX and ballY, but I don't understand where the actual x and y definition comes from. Here is my code so far: http://jsfiddle.net/xRfua/ I have a different image source, but it is a local source so I couldn't include it. Thanks in advance for any help!
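
    The ballX and ballY that other people type are just ordinary variables they declared themselves: position state that lives outside the draw call, gets updated every frame, and is then passed to drawImage from a loop (typically driven by requestAnimationFrame). A tiny language-agnostic sketch of that shape (Python here; the velocity numbers are made up):

        # Position state the loop owns; this is what "ballX/ballY" amounts to.
        ball_x, ball_y = 400.0, 425.0
        vel_x, vel_y = 120.0, -60.0          # pixels per second

        def update(dt):
            global ball_x, ball_y
            ball_x += vel_x * dt
            ball_y += vel_y * dt

        def draw():
            # In the canvas version, this is where you clear the canvas and call
            # ctx.drawImage(baseball, ball_x, ball_y).
            print(f"draw ball at ({ball_x:.1f}, {ball_y:.1f})")

        for frame in range(3):               # stand-in for requestAnimationFrame
            update(1 / 60)
            draw()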

    Read the article
