Search Results



  • How to follow object on CatmullRomSplines at constant speed (e.g. train and train carriage)?

    - by Simon
    I have a CatmullRomSpline, and using the very good example at https://github.com/libgdx/libgdx/wiki/Path-interface-%26-Splines I have my object moving at an even pace over the spline. Using a simple train-and-carriage example, I now want the carriage to follow the train at the same speed as the train (not jolting along as it does with my code below). This leads into my main questions: How can I make the carriage move at the same constant speed as the train, without the jerkiness? (It has something to do with the derivative, I think; I don't understand how that part works.) And why do I need to divide by the line length to convert to metres per second, and is that correct? It wasn't done in the linked example. I have used the example I linked to above, modified for my specific case:

        private void process(CatmullRomSpline catmullRomSpline) {
            // Render path with precision of 1000 points
            renderPath(catmullRomSpline, 1000);
            float length = catmullRomSpline.approxLength(catmullRomSpline.spanCount * 1000);

            // Render the "train"
            Vector2 trainDerivative = new Vector2();
            Vector2 trainLocation = new Vector2();
            catmullRomSpline.derivativeAt(trainDerivative, current);
            // For some reason I need to divide by length to convert from pixel speed to
            // metres per second, but I don't really understand why; it wasn't done in the examples
            current += (Gdx.graphics.getDeltaTime() * speed / length) / trainDerivative.len();
            catmullRomSpline.valueAt(trainLocation, current);
            renderCircleAtLocation(trainLocation);
            if (current >= 1) {
                current -= 1;
            }

            // Render the "carriage"
            Vector2 carriageLocation = new Vector2();
            // I would like it to follow 1 metre behind
            float carriagePercentageCovered = ((current * length) - 1f) / length;
            carriagePercentageCovered = Math.max(carriagePercentageCovered, 0);
            catmullRomSpline.valueAt(carriageLocation, carriagePercentageCovered);
            renderCircleAtLocation(carriageLocation);
        }

        private void renderPath(CatmullRomSpline catmullRomSpline, int k) {
            // catMulPoints would normally be cached when initialising,
            // but for the sake of the example...
            Vector2[] catMulPoints = new Vector2[k];
            for (int i = 0; i < k; ++i) {
                catMulPoints[i] = new Vector2();
                catmullRomSpline.valueAt(catMulPoints[i], ((float) i) / ((float) k - 1));
            }
            SHAPE_RENDERER.begin(ShapeRenderer.ShapeType.Line);
            SHAPE_RENDERER.setColor(Color.NAVY);
            for (int i = 0; i < k - 1; ++i) {
                SHAPE_RENDERER.line(catMulPoints[i], catMulPoints[i + 1]);
            }
            SHAPE_RENDERER.end();
        }

        private void renderCircleAtLocation(Vector2 location) {
            SHAPE_RENDERER.begin(ShapeRenderer.ShapeType.Filled);
            SHAPE_RENDERER.setColor(Color.YELLOW);
            SHAPE_RENDERER.circle(location.x, location.y, .5f);
            SHAPE_RENDERER.end();
        }

    To create a decent-sized CatmullRomSpline for testing this out:

        Vector2[] controlPoints = makeControlPointsArray();
        CatmullRomSpline myCatmull = new CatmullRomSpline(controlPoints, false);
        ....
        private Vector2[] makeControlPointsArray() {
            // The 78 control points as x/y pairs.
            float[] c = {
                1.681817f, 10.379999f, 2.045455f, 10.379999f, 2.663636f, 10.479999f, 3.027272f, 10.700000f,
                3.663636f, 10.939999f, 4.245455f, 10.899999f, 4.736363f, 10.720000f, 4.754545f, 10.339999f,
                4.518181f, 9.860000f, 3.790908f, 9.340000f, 3.172727f, 8.739999f, 3.300000f, 8.340000f,
                3.700000f, 8.159999f, 4.227272f, 8.520000f, 4.681818f, 8.819999f, 5.081817f, 9.200000f,
                5.463636f, 9.460000f, 5.972727f, 9.300000f, 6.063636f, 8.780000f, 6.027272f, 8.259999f,
                5.700000f, 7.739999f, 5.300000f, 7.440000f, 4.645454f, 7.179999f, 4.136363f, 6.940000f,
                3.427272f, 6.720000f, 2.572727f, 6.559999f, 1.900000f, 7.100000f, 2.336362f, 7.440000f,
                2.590908f, 7.940000f, 2.318181f, 8.500000f, 1.663636f, 8.599999f, 1.209090f, 8.299999f,
                1.118181f, 7.700000f, 1.045455f, 6.880000f, 1.154545f, 6.100000f, 1.281817f, 5.580000f,
                1.700000f, 5.320000f, 2.190908f, 5.199999f, 2.900000f, 5.100000f, 3.700000f, 5.100000f,
                4.372727f, 5.220000f, 4.827272f, 5.220000f, 5.463636f, 5.160000f, 5.554545f, 4.700000f,
                5.245453f, 4.340000f, 4.445455f, 4.280000f, 3.609091f, 4.260000f, 2.718181f, 4.160000f,
                1.990908f, 4.140000f, 1.427272f, 3.980000f, 1.609090f, 3.580000f, 2.136363f, 3.440000f,
                3.227272f, 3.280000f, 3.972727f, 3.340000f, 5.027272f, 3.360000f, 5.718181f, 3.460000f,
                6.100000f, 4.240000f, 6.209091f, 4.500000f, 6.118181f, 5.320000f, 5.772727f, 5.920000f,
                4.881817f, 6.140000f, 5.318181f, 6.580000f, 6.263636f, 7.020000f, 6.645453f, 7.420000f,
                6.681817f, 8.179999f, 6.627272f, 9.080000f, 6.572727f, 9.699999f, 6.263636f, 10.820000f,
                5.754546f, 11.479999f, 4.536363f, 11.599998f, 3.572727f, 11.700000f, 2.809090f, 11.660000f,
                1.445455f, 11.559999f, 0.936363f, 11.280000f, 0.754545f, 10.879999f, 0.700000f, 9.939999f,
                0.918181f, 9.620000f, 1.463636f, 9.600000f
            };
            Vector2[] pointsArray = new Vector2[c.length / 2];
            for (int i = 0; i < pointsArray.length; i++) {
                pointsArray[i] = new Vector2(c[2 * i], c[2 * i + 1]);
            }
            return pointsArray;
        }

    Disclaimer: My math is very rusty, so please explain in layman's terms....
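
    A hedged note on the second question: derivativeAt(out, t) returns dP/dt in world units per unit of t, so dividing the step by its length already converts a world-space speed into a parameter step; the extra division by length should only be needed if speed is expressed in path lengths per second rather than metres per second. For the carriage, one approach is to walk a fixed arc length backwards from the train with the same conversion. Below is an untested sketch against the libGDX Path API; the class and field names (TrainFollower, CARRIAGE_GAP, trainT) are mine, not from the post:

        import com.badlogic.gdx.math.CatmullRomSpline;
        import com.badlogic.gdx.math.Vector2;

        public class TrainFollower {
            private static final float CARRIAGE_GAP = 1f; // metres behind the train
            private final Vector2 tmp = new Vector2();
            private float trainT; // spline parameter of the train, in [0, 1)

            // Convert a distance in metres into a parameter delta at t:
            // |dP/dt| is "metres per unit of t", so dt = metres / |dP/dt|.
            private float advance(CatmullRomSpline<Vector2> spline, float t, float metres) {
                spline.derivativeAt(tmp, t);
                return t + metres / tmp.len();
            }

            public void update(CatmullRomSpline<Vector2> spline, float delta, float speed,
                               Vector2 trainOut, Vector2 carriageOut) {
                trainT = advance(spline, trainT, speed * delta);
                if (trainT >= 1f) trainT -= 1f;

                // Walk backwards in small steps so the metres-to-parameter conversion
                // stays accurate where the curve bends sharply.
                float t = trainT;
                final int steps = 8;
                for (int i = 0; i < steps; i++) {
                    t = advance(spline, t, -CARRIAGE_GAP / steps);
                    if (t < 0f) t += 1f;
                }

                spline.valueAt(trainOut, trainT);
                spline.valueAt(carriageOut, t);
            }
        }

    Because both objects convert metres to parameter space using their own local derivative, the carriage holds the same world speed as the train rather than the same parameter speed, which is what causes the jolting.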


  • Is there any way to enable the HiDef graphics profile property on a Silverlight 5 3d Web App?

    - by Daniel
    I have an XNA Windows game that uses the HiDef profile to load complex .fbx and .obj files. When I try to move it over to a Silverlight 5 3D web app, Silverlight seems to want to use only the Reach profile, and I get an error that Reach does not support a sufficient number of primitive draws per call. Is there any way to change to HiDef in Silverlight 5? It is not in the project properties, and attempting to change it in MainPage.xaml.cs only gives me the option of setting it to Reach.


  • Convex Hull for Concave Objects

    - by Lighthink
    I want to implement GJK and I want it to handle concave shapes too (almost all my shapes are concave). I've thought of decomposing the concave shape into convex shapes and then building a hierarchical tree out of them, but I do not know how to do it. Nothing I could find on the Internet satisfied my needs, so maybe someone can point me in the right direction or give a full explanation.


  • How do game engines stop pixel seams appearing at adjacent mesh boundaries due to FP imprecision?

    - by ufomorace
    Graphics cards are mathematically imprecise, so when meshes are joined at their borders the rasterizer often decides that some pixels along the seam represent neither object, and unwanted gap pixels appear. It's a natural behaviour on all graphics cards. How are such worries avoided in pro games? Batching? Shaders? Different tangent vectors? Merging? Overlapping seams? Dark backgrounds? Extra vertices at borders? Z precision? Camera distance tweaks? I also have a screencap of a fix that ended up not working.
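
    The usual answer is that adjacent meshes are built to share bit-for-bit identical vertex positions along their borders (welding), and T-junctions are removed by inserting the neighbour's border vertices, so both sides rasterize the exact same edge and no not-quite-covered pixels can appear. A minimal, untested sketch of welding by quantised snapping (all names mine; this is the general technique, not any engine's API):

        import java.util.HashMap;
        import java.util.Map;

        // Weld vertices that lie within a tolerance of each other to one
        // canonical position, so adjacent meshes produce identical edges.
        final class VertexWelder {
            private static final float CELL = 1e-4f; // weld tolerance in world units
            private final Map<String, float[]> canonical = new HashMap<>();

            float[] weld(float x, float y, float z) {
                String key = Math.round(x / CELL) + ":"
                           + Math.round(y / CELL) + ":"
                           + Math.round(z / CELL);
                return canonical.computeIfAbsent(key, k -> new float[] { x, y, z });
            }
        }

    The welded vertices also have to go through the same transform chain (same matrices, in the same order) on both meshes, or the weld is undone again in the vertex shader.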


  • How to update an off-screen bitmap in a SurfaceView thread

    - by DKDiveDude
    I have a SurfaceView thread and an off-screen texture bitmap. Every frame its first row (line) is regenerated (changed), then the bitmap is copied one position (line) down onto the regular SurfaceView bitmap to make a scrolling effect, and I then draw other things on top of that. Well, that is what I really want; I can't get it to work even though I am creating a separate canvas for the off-screen bitmap. It is just not scrolling at all. In other words, I have a memory bitmap, the same size as the SurfaceView canvas, which I need to scroll (shift) down one line every frame, then replace the top line with a new random texture, and then draw onto the regular SurfaceView canvas. Here is what I thought would work.

    My surfaceCreated, where I set up the bitmap and canvases and start the thread:

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            intSurfaceWidth = mSurfaceView.getWidth();
            intSurfaceHeight = mSurfaceView.getHeight();
            memBitmap = Bitmap.createBitmap(intSurfaceWidth, intSurfaceHeight,
                    Bitmap.Config.ARGB_8888);
            memCanvas = new Canvas(memBitmap);
            myThread = new MyThread(holder, this);
            myThread.setRunning(true);
            blnPause = false;
            myThread.start();
        }

    My thread, showing only the essential running part:

        @Override
        public void run() {
            while (running) {
                c = null;
                try {
                    // Lock canvas for drawing
                    c = myHolder.lockCanvas(null);
                    synchronized (mSurfaceHolder) {
                        // First draw the off-screen bitmap to the off-screen canvas, one line down
                        memCanvas.drawBitmap(memBitmap, 0, 1, null);
                        // Create a random one-line (row) texture bitmap
                        memTexture = Bitmap.createBitmap(imgTexture, 0,
                                rnd.nextInt(intTextureImageHeight), intSurfaceWidth, 1);
                        // Now add this texture bitmap to the top of the off-screen canvas,
                        // and hopefully the bitmap
                        memCanvas.drawBitmap(memTexture, intSurfaceWidth, 0, null);
                        // Draw the updated off-screen bitmap to the regular canvas. At least I
                        // thought it would update (save the changes), shifting down and adding
                        // the texture line to the off-screen bitmap the off-screen canvas points to.
                        c.drawBitmap(memBitmap, 0, 0, null);
                        // Other drawing to the canvas comes here
                    }
                } finally {
                    // Do this in a finally so that if an exception is thrown during
                    // the above, we don't leave the Surface in an inconsistent state
                    if (c != null) {
                        myHolder.unlockCanvasAndPost(c);
                    }
                }
            }
        }

    This is for my game Tunnel Run. Right now I have a working solution where I instead keep an array of bitmaps, the size of the surface height, that I populate with my random texture and shift down in a loop each frame. I get 50 frames per second, but I think I can do better by scrolling a single bitmap instead.
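
    Two likely culprits, hedged since this can't be verified from a snippet: drawing memBitmap into the canvas that is backed by memBitmap itself is an overlapping self-copy, which Android does not guarantee to handle correctly, and the texture row is drawn at x = intSurfaceWidth, which places it just past the right edge, where it presumably should land at (0, 0). A double-buffered sketch that avoids the self-copy entirely (untested, names mine):

        import android.graphics.Bitmap;
        import android.graphics.Canvas;

        // Scroll by drawing the previous frame's bitmap into a second bitmap
        // shifted down one pixel, then swap the two buffers.
        final class Scroller {
            private Bitmap front, back;
            private Canvas backCanvas;

            Scroller(int w, int h) {
                front = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
                back  = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
                backCanvas = new Canvas(back);
            }

            void step(Bitmap newTopRow, Canvas screen) {
                backCanvas.drawBitmap(front, 0, 1, null);     // previous frame, one line down
                backCanvas.drawBitmap(newTopRow, 0, 0, null); // fresh texture row on top
                screen.drawBitmap(back, 0, 0, null);          // present

                Bitmap t = front; front = back; back = t;     // swap buffers
                backCanvas = new Canvas(back);
            }
        }

    If allocating a new Canvas every frame bothers you, keep two Canvas objects and swap them together with the bitmaps.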


  • Annoying flickering of vertices and edges (possible z-fighting)

    - by Belgin
    I'm trying to make a software z-buffer implementation. However, after I generate the z-buffer and proceed with the vertex culling, I get pretty severe discrepancies between the vertex depth and the depth of the buffer at their projected coordinates on the screen (i.e. zbuffer[v.xp][v.yp] != v.z, where xp and yp are the projected x and y coordinates of the vertex v), sometimes by a small fraction of a unit and sometimes by 2 or 3 units. Here's what I think is happening: each triangle's data structure holds the coefficients (a, b, c, d) of the plane defined by the triangle, computed from its three vertices via their normal:

        void computeNormal(Vertex *v1, Vertex *v2, Vertex *v3,
                           double *a, double *b, double *c)
        {
            double a1 = v1->x - v2->x;
            double a2 = v1->y - v2->y;
            double a3 = v1->z - v2->z;
            double b1 = v3->x - v2->x;
            double b2 = v3->y - v2->y;
            double b3 = v3->z - v2->z;
            *a = a2*b3 - a3*b2;
            *b = -(a1*b3 - a3*b1);
            *c = a1*b2 - a2*b1;
        }

        void computePlane(Poly *p)
        {
            double x = p->verts[0]->x;
            double y = p->verts[0]->y;
            double z = p->verts[0]->z;
            computeNormal(p->verts[0], p->verts[1], p->verts[2], &p->a, &p->b, &p->c);
            p->d = p->a * x + p->b * y + p->c * z;
        }

    The z-buffer just holds the smallest depth at each xy coordinate, found by somewhat casting rays to the polygon (I haven't quite got interpolation right yet, so I'm using this slower method until I do) and determining the z coordinate from the reversed perspective projection formulas (which I got from here):

        double z = -(b*Ez*y + a*Ez*x - d*Ez) / (b*y + a*x + c*Ez - b*Ey - a*Ex);

    where x and y are the pixel's coordinates on the screen; a, b, c and d are the plane's coefficients; and Ex, Ey and Ez are the eye's (camera's) coordinates. This last formula does not accurately give the exact z coordinate of a vertex at its projected x and y screen coordinates, probably because of some floating-point inaccuracy (i.e. I've seen it return something like 3.001 when the vertex's z coordinate was actually 2.998). Here is the portion of code that hides the vertices that shouldn't be visible:

        for (i = 0; i < shape.nverts; ++i) {
            double dist = shape.verts[i].z;
            if (z_buffer[shape.verts[i].yp][shape.verts[i].xp].z < dist)
                shape.verts[i].visible = 0;
            else
                shape.verts[i].visible = 1;
        }

    How do I solve this issue?

    EDIT: I've implemented the near and far planes of the frustum, with 24-bit accuracy, and now I have some questions: Is this what I have to do in order to resolve the flickering? When I compare the z value of the vertex with the z value in the buffer, do I have to convert the z value of the vertex to z' using the formula, or do I convert the value in the buffer back to the original z, and how do I do that? What are some decent values for near and far? Thanks in advance.
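
    On the EDIT questions, a hedged note: the usual rule is to compare like with like, i.e. push both the vertex's eye-space z and the buffered value through the same near/far mapping and quantisation, and allow a small bias for the residual floating-point error; whether you convert the vertex's z forward or the buffer's value back matters less than applying the identical mapping on both sides. A sketch with made-up near/far values (all names mine):

        // Map eye-space z in [near, far] to a 24-bit integer depth and compare
        // vertex depth against the buffer in that quantised space.
        final class DepthUtil {
            static final double NEAR = 1.0, FAR = 100.0; // illustrative values
            static final int MAX_24BIT = (1 << 24) - 1;

            // Hyperbolic mapping, matching how a perspective z-buffer spends its
            // precision (proportional to 1/z): 0 at the near plane, 1 at the far.
            static int toDepth24(double z) {
                double zn = (1.0 / NEAR - 1.0 / z) / (1.0 / NEAR - 1.0 / FAR);
                return (int) Math.round(Math.max(0.0, Math.min(1.0, zn)) * MAX_24BIT);
            }

            static boolean occluded(double vertexZ, int bufferDepth24) {
                int bias = 2; // a couple of quantisation steps of tolerance
                return toDepth24(vertexZ) > bufferDepth24 + bias;
            }
        }

    As for decent values: push near out as far as the scene allows; because the mapping is hyperbolic, most of the precision sits just beyond the near plane, and a tiny near value wastes almost all of it.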


  • Question about BoundingSpheres and Ray intersections

    - by NDraskovic
    I'm working on an XNA project (not really a game) and I'm having some trouble with my picking algorithm. I have a few types of 3D models that I draw to the screen, and one of them is a switch, so I'm trying to make a picking algorithm that lets the user click on the switch to trigger some other function. The problem is that the BoundingSphere.Intersects() method always returns null as the result. This is the code I'm using.

    In the declaration section:

        // Basic matrices
        private Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));
        private Matrix view = Matrix.CreateLookAt(new Vector3(10, 10, 10),
            new Vector3(0, 0, 0), Vector3.UnitY);
        private Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(45), 800f / 600f, 0.01f, 100f);

        // Collision detection variables
        Viewport mainViewport;
        List<BoundingSphere> spheres = new List<BoundingSphere>();
        Ray ControlRay;
        Vector3 nearPoint, farPoint, nearPlane, farPlane, direction;

    And then in the Update method:

        nearPlane = new Vector3((float)Mouse.GetState().X, (float)Mouse.GetState().Y, 0.0f);
        farPlane = new Vector3((float)Mouse.GetState().X, (float)Mouse.GetState().Y, 10.0f);
        nearPoint = GraphicsDevice.Viewport.Unproject(nearPlane, projection, view, world);
        farPoint = GraphicsDevice.Viewport.Unproject(farPlane, projection, view, world);

        direction = farPoint - nearPoint;
        direction.Normalize();
        ControlRay = new Ray(nearPoint, direction);

        if (spheres.Count != 0)
        {
            for (int i = 0; i < spheres.Count; i++)
            {
                if (spheres[i].Intersects(ControlRay) != null)
                {
                    Window.Title = spheres[i].Center.ToString();
                }
                else
                {
                    Window.Title = "Empty";
                }
            }
        }

    The spheres list gets filled when the 3D object data is loaded (I read it from a .txt file). For every object marked as a switch (I use simple numbers to determine which object is to be drawn), a BoundingSphere is created (its center is at the coordinates of the 3D object, and the diameter is always the same) and added to the list. The objects are drawn normally (and spheres.Count is not 0), I can see them on the screen, but the window title always says "Empty" (of course this is just for testing purposes; I will add the real function when I get positive results), meaning that there is no intersection between the ControlRay and any of the bounding spheres. I think my basic matrices (world, view and projection) are causing the problem, but I can't figure out what's wrong. Please help.
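
    Two things worth checking, hedged since this can't be verified from a snippet alone: XNA's Viewport.Unproject expects the source vector's z in the 0..1 depth range (0 = near plane, 1 = far plane), so farPlane should presumably use 1.0f rather than 10.0f; and the world matrix passed to Unproject is normally Matrix.Identity unless the picked geometry itself is transformed. For reference, the test Intersects performs is a standard ray/sphere intersection, sketched here in plain Java (names mine):

        // Ray/sphere test: project the centre onto the ray, then compare the
        // perpendicular distance against the radius. Returns the hit distance
        // along the ray, or null for a miss (simplified: a ray starting inside
        // the sphere is treated as a miss).
        final class RaySphere {
            static Double intersect(double[] origin, double[] dir, // dir normalised
                                    double[] center, double radius) {
                double ox = center[0] - origin[0];
                double oy = center[1] - origin[1];
                double oz = center[2] - origin[2];
                double proj = ox * dir[0] + oy * dir[1] + oz * dir[2];
                double d2 = (ox * ox + oy * oy + oz * oz) - proj * proj;
                double r2 = radius * radius;
                if (d2 > r2) return null;            // ray passes beside the sphere
                double t = proj - Math.sqrt(r2 - d2);
                return (t >= 0) ? t : null;          // sphere is behind the ray origin
            }
        }

    If the unprojected ray itself is wrong, every such test misses, which matches the always-"Empty" behaviour described above.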


  • What tools should I consider if my strategy is to make a game available to as many platforms as possible?

    - by Kenji Kina
    We're planning to develop a 2D, grid-based puzzle game, and although it's still very early in the planning stages, we'd like to make our decisions well from the beginning. Our strategy is to make the game available on as many platforms as possible, for example PCs (Windows, Mac and/or Linux), mobile phones (iPhone and/or Android), and game consoles (XBLA and/or PSN). PC will have an emphasis, but I believe that's the most flexible platform anyway, so that shouldn't be a problem. So, what programming language, game engine, frameworks and all-around tools would be best suited to our goal? P.S.: I'm betting no single set of tools will cover ALL of them, and that there will still be some kind of "translating" effort for some platforms, but we'd like to know which ones reach the farthest.


  • Best practices of texture size

    - by psal
    I wanted to know: how should I determine a good texture size? Currently I always create UV textures that are 1024x1024 px, but if I texture a big house, for example, with a 1024 px texture, it looks pretty bad. So, should I create different texture sizes (512, 1024, ...) for different mesh sizes? Or is it better to always make high-resolution textures and then reduce them in the software (i.e. increase the LODBias setting in UDK to reduce the size of the texture)? Thanks for your answer. P.S.: sorry for my English!


  • Making more complicated systems (entity-component-system model question)

    - by winch
    I'm using a model where entities are collections of components and components are just data. All the logic goes into systems that operate on components. Making the basic systems (for rendering and collision handling) was easy, but how do I do more complicated ones? For example, in the CollisionSystem I can check whether entity A collides with entity B, and I have this code in the CollisionSystem for checking whether B damages A:

        if (collides(a, b)) {
            HealthComponent* hc = a->get<HealthComponent>();
            hc->reduceHealth(b->get<DamageComponent>()->getDamage());
        }

    But I feel this code shouldn't belong to the CollisionSystem. Where should code like this live, and which additional systems should I create to make it generic?
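
    One common answer, sketched below in Java with invented names: keep the CollisionSystem ignorant of health entirely, so that it only emits collision events, and let a separate DamageSystem react to those events by pairing a DamageComponent on one entity with a HealthComponent on the other. New reactions (knockback, sounds) then become new systems consuming the same event queue. An untested sketch of that split:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Queue;

        // Minimal entity: a bag of components keyed by class.
        final class Entity {
            private final Map<Class<?>, Object> components = new HashMap<>();
            <T> void add(T c) { components.put(c.getClass(), c); }
            @SuppressWarnings("unchecked")
            <T> T get(Class<T> type) { return (T) components.get(type); }
        }

        final class HealthComponent { int hp = 100; void reduceHealth(int d) { hp -= d; } }
        final class DamageComponent { int damage = 10; int getDamage() { return damage; } }

        // The collision system only reports contacts; it never touches health.
        record CollisionEvent(Entity a, Entity b) {}

        final class DamageSystem {
            void update(Queue<CollisionEvent> events) {
                CollisionEvent e;
                while ((e = events.poll()) != null) {
                    apply(e.a(), e.b()); // B damages A, and vice versa, if the components exist
                    apply(e.b(), e.a());
                }
            }
            private void apply(Entity victim, Entity source) {
                HealthComponent hp = victim.get(HealthComponent.class);
                DamageComponent dmg = source.get(DamageComponent.class);
                if (hp != null && dmg != null) hp.reduceHealth(dmg.getDamage());
            }
        }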


  • Surface normal to screen angle

    - by Tannz0rz
    I've been struggling to get this working. I simply wish to take a surface normal and convert it to a screen angle. As an example, assuming we're working with the highlighted surface on the sphere below, where the arrow is the normal, the 2D angle would obviously be PI/4 radians. Here's one of the many things I've tried, to no avail:

        float4 A = v.vertex;
        float4 B = v.vertex + float4(v.normal, 0.0);
        A = mul(VP, A);
        B = mul(VP, B);
        A.xy = (0.5 * (A.xy / A.w)) + 0.5;
        B.xy = (0.5 * (B.xy / B.w)) + 0.5;
        o.theta = atan2(B.y - A.y, B.x - A.x);

    I'm finally at my wit's end. Thanks for any and all help.


  • How to emulate Mode 13h in a modern 3D renderer?

    - by David Gouveia
    I was indulging in nostalgia and remembered the first game I created, which used Mode 13h. This mode was really simple to work with, since it was essentially just an array of bytes with an element for each pixel on the screen (using an indexed color scheme). So I thought it might be fun to create something nowadays under these restrictions, but on modern hardware. The API could be as simple as:

        public class Mode13h
        {
            public byte[] VideoMemory = new byte[320 * 200];
            public Color[] Palette = new Color[256];
        }

    Now I'm wondering what would be the best way to get this data on the screen, using something like XNA / DirectX / OpenGL. The only solution I could think of was to create a texture with the same size as the VideoMemory array, write the contents of VideoMemory to it every frame, then render that texture as a full-screen quad with the correct aspect ratio, using point texture filtering for that retro look. Is there a better way?
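
    The texture-per-frame approach described is the standard one; the palette lookup can even move onto the GPU (upload VideoMemory as a single-channel 320x200 texture plus a 256x1 palette texture and do the indexing in the pixel shader), which skips the CPU conversion entirely. The CPU path, sketched in plain Java against a BufferedImage (only the Mode13h name comes from the post; packing the palette as ARGB ints is my choice):

        import java.awt.image.BufferedImage;

        // Expand the 320x200 indexed buffer through the palette into packed ARGB,
        // once per frame, ready to blit or upload as a texture.
        final class Mode13h {
            final byte[] videoMemory = new byte[320 * 200];
            final int[] palette = new int[256]; // packed ARGB entries

            void present(BufferedImage target) { // expects TYPE_INT_ARGB, 320x200
                int[] out = new int[320 * 200];
                for (int i = 0; i < out.length; i++) {
                    out[i] = palette[videoMemory[i] & 0xFF]; // & 0xFF: Java bytes are signed
                }
                target.setRGB(0, 0, 320, 200, out, 0, 320);
            }
        }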


  • How would you code an AI engine to allow communication in any programming language?

    - by Tokyo Dan
    I developed a two-player iPhone board game. Computer players (AI) can either be local (in the game code) or remote, running on a server. In the second case, both client and server code are written in Lua. On the server, the actual AI code is separate from the TCP socket code and the coroutine code (which spawns a separate instance of the AI for each connecting client). I want to further isolate the AI code so that that part can be a module coded by anyone in their language of choice. How can I do this? What techniques/technology would enable communication between the Lua TCP socket/coroutine code and the AI module?
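
    One well-worn technique is to pin the boundary down to a small language-neutral wire protocol, e.g. one line of text (or one JSON object) per message over the TCP connection the server already manages; the AI module then becomes a socket client that can be written in any language. A hedged Java sketch of the module side; the STATE/MOVE message format and the port are invented for illustration:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;

        // An AI module as a socket client speaking a line-based protocol:
        // the server sends "STATE <board>", the module replies "MOVE <move>".
        public final class AiModule {
            public static void main(String[] args) throws Exception {
                try (Socket s = new Socket("localhost", 4000);
                     BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream(), StandardCharsets.UTF_8));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        if (line.startsWith("STATE ")) {
                            out.println("MOVE " + chooseMove(line.substring(6)));
                        }
                    }
                }
            }

            private static String chooseMove(String boardState) {
                return "pass"; // the actual AI goes here
            }
        }

    The Lua side then only needs to serialise the board to one line and parse one line back, which the socket/coroutine code described above can already do.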


  • DOT implementation

    - by Denis Ermolin
    I have some DOT (damage over time) implementation problems. My game runs at 30 FPS. The current implementation: let's say the hero casts a spell which does 1 damage per second. So on every frame I do (pseudocode):

        damage_done = getRandomDamage() * delta_time;

    I accumulate the damage, and when the accumulated value reaches a whole point I subtract the rounded damage from the current health, and so on. With 30 FPS and 1 DPS, each frame contributes 1/30 ≈ 0.033... We know that floats are not precise enough to sum thirty of those repeating decimals and get exactly 1 at the end. But HP is a discrete value, and that's why 1 DPS will not deal exactly 1 damage after 1 second: the accumulated value will be 0.9999... It's not a big deal when you have 100000 DPS; +/- 1 damage will not be noticeable. But what if I have 1.5 DPS? How do modern RPGs implement DOTs?
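
    A common implementation, sketched below with invented names: accumulate fractional damage in a float, apply only whole points each tick, and carry the remainder forward, so rounding error stays bounded inside the accumulator instead of compounding into lost health:

        // Accumulate fractional damage; apply only whole hit points and keep the
        // remainder, so 1 DPS at 30 FPS averages exactly 1 damage per second.
        final class DamageOverTime {
            private float accumulator;

            int tick(float dps, float deltaSeconds) {
                accumulator += dps * deltaSeconds;
                int whole = (int) accumulator; // whole points ready to apply now
                accumulator -= whole;          // carry the fraction to the next frame
                return whole;
            }
        }

    With 1.5 DPS this naturally alternates between 1 and 2 points per second; the float imprecision only ever shifts a point of damage by a frame, it never loses one.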


  • Load Texture from Image Content at Runtime

    - by Austin Brunkhorst
    Basically, I wrote a world editor for a game I'm working on. Looking ahead, I was brainstorming ways to save the created world, including the tile-sets (this game relies on a tile engine). I was hoping to save the image data of each tile-set in the same file that contains the tile positions, etc., and load that image data into a Texture with XNA. Is it possible? Something like this is what I'm going for:

        Texture2D tileset = Content.LoadFromString<Texture2D>("png tileset data");
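
    In XNA 4.0, Texture2D.FromStream(GraphicsDevice, stream) does exactly this: wrap the stored PNG bytes in a MemoryStream and decode them at runtime, with no content pipeline involved. The same idea sketched in plain Java (the class name and the choice of PNG are illustrative):

        import java.awt.image.BufferedImage;
        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        // The tile-set's image data travels inside the world file as a byte
        // array and is decoded into a usable image at load time.
        final class TilesetLoader {
            static BufferedImage load(byte[] pngBytes) throws IOException {
                return ImageIO.read(new ByteArrayInputStream(pngBytes));
            }
        }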


  • XNA, how to draw two cubes standing in line parallel to each other?

    - by user3535716
    I've got a problem drawing two 3D cubes standing in line. In my code I made a Cube class, and in the Game1 class I build two cubes: A (red) on the right side and B (blue) on the left side. I also set up an FPS camera in the 3D world. The problem: if I draw cube B first and then move the camera around to B's side, A is still drawn standing in front of B, which is apparently wrong (the attached screenshots showed each view). From the second view, the red cube A should be behind the blue one, B. Could somebody help me, please?

    This is the draw code in the Cube class:

        Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
        Matrix scale = Matrix.CreateScale(0.5f);
        Matrix translate = Matrix.CreateTranslation(location);
        effect.World = center * scale * translate;
        effect.View = camera.View;
        effect.Projection = camera.Projection;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.SetVertexBuffer(cubeBuffer);

            RasterizerState rs = new RasterizerState();
            rs.CullMode = CullMode.None;
            rs.FillMode = FillMode.Solid;
            device.RasterizerState = rs;

            device.DrawPrimitives(PrimitiveType.TriangleList, 0, cubeBuffer.VertexCount / 3);
        }

    This is the Draw method in Game1:

        A.Draw(camera, effect);
        B.Draw(camera, effect);


  • Everything "invisible" when launching map from launcher

    - by Predanoob
    Excuse my noobiness, but I downloaded the SDK and tried the map Forest from within the editor, and it worked fine. However, if I launch it from the launcher using the console, it looks like this: http://i.stack.imgur.com/U7rPU.jpg I can use the weapons (although they are invisible) and interact with objects despite not seeing them. I also tried my own map: same problem. What am I doing wrong?


  • Starting with text based MUD/MUCK game

    - by Scott Ivie
    I've had this idea for a video game in my head for a long time, but I've never had the knowledge or time to get it done. I still don't, really, but I am willing to dedicate a chunk of my time to this before it's too late. Recently I started studying Lua script for a program called "MUSH Client", which works with MU* telnet-style text games. I want to use the GUI capabilities of MUSH Client with a MU* server to create a basic game, but here is my dilemma: I figured this could be a suitable starting place for me, BUT, because I'm not very programmer-savvy yet, I don't know how to download, install and use the MU* server software. I was originally considering ProtoMUCK, because a few of the MU*s that most impressed me began there: http://www.protomuck.org/ I downloaded it, but I guess I'm too used to GUI-style programs, so I'm having great difficulty figuring out what to do next. Does anyone have any suggestions? Does anyone even know what I'm talking about? heh..


  • Unity3D Android: Game Over/Retry

    - by user3666251
    I'm making a simple 2D game for Android using the Unity3D game engine. I created all the levels and everything, but I'm stuck on making the game-over/retry menu. So far I've been using new scenes as a game-over menu. I used this simple script:

        #pragma strict

        var level = Application.LoadLevel;

        function OnCollisionEnter(collision : Collision) {
            if (collision.collider.tag == "Player") {
                Application.LoadLevel("GameOver");
            }
        }

    And this as a 'menu':

        #pragma strict

        var myGUISkin : GUISkin;
        var btnTexture : Texture;

        function OnGUI() {
            GUI.skin = myGUISkin;
            if (GUI.Button(Rect(Screen.width/2 - 60, Screen.height/2 + 30, 100, 40), "Retry"))
                Application.LoadLevel("Easy1");
            if (GUI.Button(Rect(Screen.width/2 - 90, Screen.height/2 + 100, 170, 40), "Main Menu"))
                Application.LoadLevel("MainMenu");
        }

    The problem is that this way I would have to create over 200 game-over scenes, obstacles (the objects that kill the player), and copies of the same script, one for each level. Is there any other way to make this faster and less painful? I've been searching the web but didn't find anything useful for my issue. Thank you.


  • Collision Detection with SAT: False Collision for Diagonal Movement Towards Vertical Tile-Walls?

    - by Macks
    Edit: Problem solved! Big thanks to Jonathan, who pointed me in the right direction. Sean describes the method I used in a different thread; big thanks to him too! :) Here is how I solved my problem: if a collision is registered by my SAT method, only fire the collision event on my character if there are no neighbouring solid tiles in the direction of the returned minimum translation vector.

    I'm developing my first tile-based 2D game with JavaScript. To learn the basics, I decided to write my own "game engine". I have successfully implemented collision detection using the separating axis theorem, but I've run into a problem that I can't quite wrap my head around. If I press the [up] and [left] arrow keys simultaneously, my character moves diagonally towards the upper left. If he hits a horizontal wall, he'll just keep moving in the x-direction. The same goes for [up] and [right], as well as downward-diagonal movements; it works as intended: http://i.stack.imgur.com/aiZjI.png (diagonal movement works fine for horizontal walls, for both left and right movement).

    However, this does not work for vertical walls. Instead of keeping up the movement in the y-direction, the character just stops as soon as he "enters" a new tile on the y-axis. So for some reason SAT thinks my character is colliding vertically with tiles from vertical walls: http://i.stack.imgur.com/XBEKR.png (my character stops because he thinks he is colliding vertically with tiles from the wall on the right). This only occurs when: moving in the top-right direction towards the right wall, or moving in the top-left direction towards the left wall. Bottom-right and bottom-left movement work: the character keeps moving in the y-direction as intended.

    Is this inherent to the way SAT works, or is there a problem with my implementation? What can I do to solve my problem? Oh yeah, my character is displayed as a circle, but he's actually a rectangular polygon for the collision detection. Thank you very much for your help.
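
    For anyone reading along, the fix described above looks roughly like this in Java (the tile-map representation and all names are mine):

        // Ignore a SAT contact when the minimum translation vector points into a
        // neighbouring solid tile: the edge SAT found is interior to the wall,
        // not a real surface the character could be pushed out of.
        final class SatContactFilter {
            static boolean isRealCollision(boolean[][] solid, int tileX, int tileY,
                                           float mtvX, float mtvY) {
                int nx = tileX + (mtvX > 0 ? 1 : mtvX < 0 ? -1 : 0);
                int ny = tileY + (mtvY > 0 ? 1 : mtvY < 0 ? -1 : 0);
                if (nx < 0 || ny < 0 || nx >= solid.length || ny >= solid[0].length)
                    return true;        // the map edge counts as a real surface
                return !solid[nx][ny];  // respond only if the neighbour is empty
            }
        }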


  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images: the background surface and the foreground display object's bitmap. The problem is that at runtime the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender Toolkit.

    This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I should mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updating, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendering. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient; that is why I am trying to do it through a blendShader, so Flash determines when it needs rendering automatically.

    Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendering? The limitation is mentioned, with no explanation, in the last note on this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts. In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."


  • 2D wave-like sprite movement XNA

    - by TheBroodian
    I'm trying to create a particle that will 'circle' my character. When the particle is created, it's given a random position in relation to my character, and a box to provide boundaries for how far left or right this particle should circle. When I say 'circle', I'm referring to a simulated circling: when moving to the right, the particle appears in front of my character; when passing back to the left, it appears behind him. That may have been too much context, so let me cut to the chase: in essence, the path I would like my particle to follow is akin to a sine wave, with the left and right sides of the provided rectangle being the apexes of the wave. The trouble I'm having is that the position of the particle is random, so it will never be produced at the same place within the wave twice, and I have no idea how to create this sort of behavior procedurally.
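
    One procedural way to get this, sketched below with invented names: store the particle's state as a wave phase rather than a raw position, and recover the starting phase from the random spawn point by inverting x = centerX + halfWidth * cos(phase). A particle spawned anywhere then joins the wave at the matching point, and the front/back flip falls out of the sign of sin(phase):

        // x follows cos(phase) between the box's left/right apexes; the
        // front/back offset follows sin(phase), tracing the sine-like path.
        final class OrbitingParticle {
            final float centerX, halfWidth, bobHeight;
            float phase;          // radians
            float speed = 2.0f;   // radians per second

            OrbitingParticle(float centerX, float halfWidth, float bobHeight, float spawnX) {
                this.centerX = centerX;
                this.halfWidth = halfWidth;
                this.bobHeight = bobHeight;
                // Recover the phase matching the random spawn position.
                float c = Math.max(-1f, Math.min(1f, (spawnX - centerX) / halfWidth));
                this.phase = (float) Math.acos(c);
            }

            void update(float dt) { phase += speed * dt; }

            float x()         { return centerX + halfWidth * (float) Math.cos(phase); }
            float yOffset()   { return bobHeight * (float) Math.sin(phase); }
            boolean inFront() { return Math.sin(phase) > 0; } // draw over or behind the character
        }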


  • What different ways are there to model restitution in a physics engine?

    - by Mikael Högström
    In my physics engine I give a body a restitution value between 0 and 1. When two bodies collide, there seem to be different views on how the restitution of the collision should be calculated. To me the most intuitive approach is to take the average of the two, but some engines seem to take only the larger one. Are there other ways to do it? Also, could the closing velocity or some other parameter come into effect?
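
    For reference, engines genuinely disagree on this: Box2D takes the larger of the two restitutions by default, while engines such as PhysX make the combine mode configurable. A sketch of the usual options:

        // Common restitution combine rules; which is "right" is a design choice,
        // which is why some engines expose it as a setting.
        enum RestitutionMix {
            AVERAGE, MIN, MAX, MULTIPLY;

            float combine(float a, float b) {
                switch (this) {
                    case AVERAGE:  return 0.5f * (a + b);
                    case MIN:      return Math.min(a, b);
                    case MAX:      return Math.max(a, b);
                    case MULTIPLY: return a * b;
                    default: throw new AssertionError();
                }
            }
        }

    On the closing velocity: yes, it can reasonably enter the model. Box2D, for instance, applies no restitution below a small closing-velocity threshold (b2_velocityThreshold) so stacks can come to rest, and real materials are mildly velocity-dependent, so a restitution-versus-impact-speed curve is another defensible choice.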


  • Artist looking to learn how to program games, where do I start? [on hold]

    - by Christopher Hindson
    I have been an artist for many years now, and I am very comfortable using Photoshop and Flash, but I want to learn how to put together my own games. I have been doing my own research into this, and at the moment I am a little bit stuck on which direction to go in. So my question is: which programming language should I learn? From what I understand, GameMaker, with its built-in language (GML), is one of the friendliest options for people who are new to the game-making world. I have played around with this program and was pretty happy with it, but I want more. You can also use Unity with languages such as C# and JavaScript, and games built with Java look interesting too. One more thing before you send your answer in: at this moment in time I would only be able to make 2D games, but 3D isn't out of the picture.


  • Problems with moving 2D circle/box collision detection

    - by dario3004
    This is my first game ever and I'm a newbie at computer physics. I've got this code for collision detection, and it works fine for BOTTOM and TOP collisions. It misses collisions with the paddle's edges and corners, though, so I've (roughly) tried to implement those.

    The main method that is called for bouncing; it checks whether the ball bounces off a wall, off the top (+ right/left side) or off the bottom (+ right/left side):

        protected void handleBounces(float px, float py) {
            handleWallBounce(px, py);
            if (mBall.y < getHeight() / 4) {
                if (handleRedFastBounce(mRed, px, py)) return;
                if (handleRightSideBounce(mRed, px, py)) return;
                if (handleLeftSideBounce(mRed, px, py)) return;
            }
            if (mBall.y > getHeight() / 4 * 3) {
                if (handleBlueFastBounce(mBlue, px, py)) return;
                if (handleRightSideBounce(mBlue, px, py)) return;
                if (handleLeftSideBounce(mBlue, px, py)) return;
            }
        }

    This is the code for the BOTTOM bounce:

        protected boolean handleRedFastBounce(Paddle paddle, float px, float py) {
            if (mBall.goingUp() == false)
                return false;

            // next position
            tx = mBall.x;
            ty = mBall.y - mBall.getRadius();
            // actual position
            ptx = px;
            pty = py - mBall.getRadius();

            dyp = ty - paddle.getBottom();
            xc = tx + (tx - ptx) * dyp / (ty - pty);

            if (ty < paddle.getBottom() && pty > paddle.getBottom()
                    && xc > paddle.getLeft() && xc < paddle.getRight()) {
                mBall.x = xc;
                mBall.y = paddle.getBottom() + mBall.getRadius();
                mBall.bouncePaddle(paddle);
                playSound(mPaddleSFX);
                increaseDifficulty();
                return true;
            } else {
                return false;
            }
        }

    As far as I understand, it should work something like this (there was a diagram here). So I tried to write the "left side" and "right side" bounce methods:

        protected boolean handleLeftSideBounce(Paddle paddle, float px, float py) {
            // next position
            tx = mBall.x + mBall.getRadius();
            ty = mBall.y;
            // actual position
            ptx = px + mBall.getRadius();
            pty = py;

            dyp = tx - paddle.getLeft();
            yc = ty + (pty - ty) * dyp / (ptx - tx);

            if (ptx < paddle.getLeft() && tx > paddle.getLeft()) {
                System.out.println("left side bounce1");
                System.out.println("yc: " + yc + " top: " + paddle.getTop()
                        + " bottom: " + paddle.getBottom());
                if (yc > paddle.getTop() && yc < paddle.getBottom()) {
                    System.out.println("left side bounce2");
                    mBall.y = yc;
                    mBall.x = paddle.getLeft() - mBall.getRadius();
                    mBall.bouncePaddle(paddle);
                    playSound(mPaddleSFX);
                    increaseDifficulty();
                    return true;
                }
            }
            return false;
        }

    I think I'm quite near the solution, but I'm having big trouble with the new yc formula. I've tried so many versions of it, but since I don't know the theory behind it I can't adjust it for the y-axis. Since the y-axis is inverted, I even tried this:

        yc = ty - (pty - ty) * dyp / (ptx - tx);

    I tried Googling it but can't seem to find a solution. This method also fails when the ball touches a corner, and I don't think it's a nice approach anyway, because it tests only one point of the ball, so there will be many cases in which the ball won't bounce.
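
    A circle-versus-rectangle test that handles the sides and the corners uniformly, instead of ray-casting each face separately, is to clamp the circle's centre to the box and compare that distance to the radius; the same computation yields the contact normal, and reflecting the velocity about it (v' = v - 2(v·n)n) gives the bounce. An untested Java sketch (names mine):

        // Closest-point test: clamp the circle centre to the paddle's box.
        // Top/bottom, the sides and the corners all fall out of the same lines.
        final class CircleBox {
            /** Returns {nx, ny}, the contact normal, or null if no overlap. */
            static float[] hit(float cx, float cy, float r,
                               float left, float top, float right, float bottom) {
                float px = Math.max(left, Math.min(cx, right)); // closest point on box
                float py = Math.max(top,  Math.min(cy, bottom));
                float dx = cx - px, dy = cy - py;
                float d2 = dx * dx + dy * dy;
                if (d2 > r * r) return null;                    // no contact
                float d = (float) Math.sqrt(d2);
                if (d == 0) return new float[] { 0, -1 };       // centre inside: pick an up normal
                return new float[] { dx / d, dy / d };          // corners get diagonal normals
            }
        }

    Run this once per frame against each paddle with the ball's current centre; because it measures the whole circle rather than a single point, the corner cases that break the per-side method are covered automatically.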

