Search Results

Search found 28230 results on 1130 pages for 'embedded development'.


  • XNA C# Rectangle Intersect Ball on a Square

    - by user2436057
    I made a game like Peggle Deluxe using C# and XNA for learning. I have two rectangles, a ball and a square field. The ball gets shot out with a cannon, and if the ball hits the square, the square disappears and the ball flies away. But the ball doesn't bounce off realistically; it sometimes flies away in a different direction or gets stuck on the edge. This is my code at the moment:

        public void Update(Ball b, Deadline dl)
        {
            ArrayList listToDelete = new ArrayList();
            foreach (Field aField in allFields)
            {
                if (aField.square.Intersects(b.ballhere))
                {
                    listToDelete.Add(aField);
                    Punkte = Punkte + 100;
                    float distanceX = Math.Abs(b.ballhere.X - aField.square.X);
                    float distanceY = Math.Abs(b.ballhere.Y - aField.square.Y);
                    if (distanceX < distanceY)
                    {
                        b.myMovement.X = -b.myMovement.X;
                    }
                    else
                    {
                        b.myMovement.Y = -b.myMovement.Y;
                    }
                }
            }
        }

    It changes the X or Y axis depending on how the ball hits the square, but not every time. What could cause the problem? Thanks for your answer. Greetings from Switzerland.
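    The comparison above uses the distance between the rectangles' positions (their top-left corners), which can pick the wrong reflection axis when the ball and square differ in size or overlap deeply. A rough sketch of an alternative, assuming the Microsoft.Xna.Framework types and reusing the names from the question:

        // Reflect along the axis of least penetration, using the overlap
        // rectangle instead of corner-to-corner distances.
        static void BounceOff(Rectangle square, Rectangle ball, ref Vector2 movement)
        {
            Rectangle overlap = Rectangle.Intersect(ball, square);
            if (overlap.Width < overlap.Height)
                movement.X = -movement.X;   // the ball hit a left/right side
            else
                movement.Y = -movement.Y;   // the ball hit the top/bottom
            // Also consider pushing the ball out of the square by the overlap,
            // so the same collision is not detected again on the next frame --
            // that is one common way balls end up stuck on edges.
        }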

    Read the article

  • DX11 - Weird shader behavior with and without branching

    - by Martin Perry
    I have found a problem in my shader code which I don't know how to solve. I want to rewrite this code without "ifs":

        tmp = evaluate; the result is 0 or 1 (nothing else)
        if (tmp == 1) val = X1;
        if (tmp == 0) val = X2;

    I rewrote it this way, but this piece of code doesn't work correctly:

        tmp = evaluate; the result is 0 or 1 (nothing else)
        val = tmp * X1;
        val = !tmp * X2;

    However, if I change it to:

        tmp = evaluate; the result is 0 or 1 (nothing else)
        val = tmp * X1;
        if (!tmp) val = !tmp * X2;

    it works fine... but it is useless because of the "if", which needs to be eliminated. I honestly don't understand it. I tried compiling with no optimization and with full optimization; the result is the same.
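    For what it's worth, the two plain assignments overwrite each other: after the second assignment the first result is gone. A minimal branchless select sketch, written as ordinary C# arithmetic here rather than HLSL (in HLSL the same thing can be expressed with lerp):

        // tmp is assumed to be exactly 0 or 1.
        float Select(float tmp, float x1, float x2)
        {
            // Combine both terms in one expression instead of assigning twice.
            return tmp * x1 + (1.0f - tmp) * x2;   // behaves like: tmp ? x1 : x2
        }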

    Read the article

  • Texturize a shape of multiple triangles in 2D

    - by Deukalion
    This is an example of a shape consisting of multiple points and triangles that together form a shape: red dots = Vector3 (X, Y, Z) or Vector2 (X, Y). If I have a texture of a certain size, how do I texture this area in the best way so that the texture inside the shape matches the shape and does not overlap anywhere? Perhaps also with a chance to scale the texture in case it's too small or too big for the shape, but still so that it gets rendered correctly. Do I treat the shape as a rectangle and figure out its four corners? Or do I calculate the distance between Center - (Texture Width / 2) and each point, to see how many times the texture can fit along that axis and estimate what texture coordinates each point should get? I've looked at texture mapping but haven't found any concrete examples that explain it well, and the 0.0-1.0 values for texture coordinates are also confusing.
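    One common starting point, sketched below (not the only way): treat the shape's axis-aligned bounding box as the 0.0-1.0 texture space and derive each vertex's coordinate from its position inside that box; scaling or tiling the texture is then just a matter of multiplying the result. XNA-style Vector2 is assumed purely for notation.

        static Vector2 PlanarUV(Vector2 point, Vector2 min, Vector2 max)
        {
            // min/max are the bounding box of all the shape's points.
            return new Vector2(
                (point.X - min.X) / (max.X - min.X),
                (point.Y - min.Y) / (max.Y - min.Y));
        }

    Because every triangle samples the same planar mapping, the texture lines up across shared edges and cannot overlap itself inside the shape.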

    Read the article

  • Euler angles to Cartesian Coordinates for use with gluLookAt

    - by notrodash
    I have searched all over the internet but just couldn't find the answer. I am using LibGDX and this is part of my code that loops over and over:

        public void render() {
            GL11 gl = Gdx.gl11;
            float centerX = (float)Math.cos(yaw) * (float)Math.cos(pitch);
            float centerY = (float)Math.sin(yaw) * (float)Math.cos(pitch);
            float centerZ = (float)Math.sin(pitch);
            System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z);
            Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z,
                              centerX, centerY, centerZ, 0, 1, 0);
            if (Gdx.input.isKeyPressed(Keys.A)) { yaw--; }
            if (Gdx.input.isKeyPressed(Keys.D)) { yaw++; }
        }

    I might just be bad at the math, but I don't get it. Does someone have a good explanation and an idea about how to deal with this? I am trying to make a first-person camera. By the way, the camera is translated by +10 on the Z axis. Currently when I run the application, this is what I get: Watch video in browser | Download video (for those who can't watch the video, everything shakes in a clockwise/anticlockwise motion, depending on whether I increase or decrease the yaw value). Thank you.

    [edit] And with this code:

        public void render() {
            GL11 gl = Gdx.gl11;
            float centerX = (float)(MathUtils.cosDeg(yaw)*4);
            float centerY = 0;
            float centerZ = (float)(MathUtils.sinDeg(yaw)*4);
            System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z);
            Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z,
                              centerX, centerY, centerZ, 0, 1, 0);
            if (Gdx.input.isKeyPressed(Keys.A)) { yaw--; }
            if (Gdx.input.isKeyPressed(Keys.D)) { yaw++; }
        }

    it slowly swings from left to right. This approach worked for turning left and right in 2D games, though. What am I doing wrong?
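    Two things worth checking, sketched here in C# style for consistency with the rest of this page even though the post is Java/LibGDX: gluLookAt expects the center to be a point the camera looks at, so the direction vector has to be added to the eye position rather than passed as an absolute point near the origin; and Math.cos/Math.sin take radians, while yaw-- / yaw++ look like degree-sized steps. The helper name and axis convention below are assumptions, not LibGDX API.

        // Hypothetical helper: the look-at target for a given eye position, yaw and pitch.
        static Vector3 LookTarget(Vector3 eye, float yawRad, float pitchRad)
        {
            // One common Y-up convention; conventions vary, so treat this as a guide.
            var dir = new Vector3(
                (float)(Math.Cos(pitchRad) * Math.Sin(yawRad)),
                (float)Math.Sin(pitchRad),
                (float)(Math.Cos(pitchRad) * Math.Cos(yawRad)));
            return eye + dir;   // pass eye as the eye point and this as the gluLookAt center
        }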

    Read the article

  • Boat passing under a bridge in a 2D tile based RTS

    - by aleguna
    I'm writing a 2D tile-based RTS, and I want to add a 'pseudo 3D' feature to it: bridges over the rivers. I haven't started any coding yet; I'm just trying to think about how it fits the collision detection model. A boat passing under the bridge and a unit moving over the bridge will eventually occupy the same cell on the map. How do I prevent them from colliding? Is there a common approach to solving such a problem, or do I need to implement a 3D world to do this?
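    A sketch of one common approach (all names here are illustrative, not from any particular engine): give every unit and every tile a movement layer, and only run collision between things that share a layer, so bridge traffic and the water below it never test against each other:

        [Flags]
        enum Layer { Ground = 1, Water = 2, Bridge = 4 }   // [Flags] needs using System;

        static bool CanCollide(Layer a, Layer b)
        {
            return (a & b) != 0;   // no shared layer, no collision check
        }

    A bridge cell would then offer the Bridge layer to land units while the river cell underneath keeps offering Water to boats, with no 3D world required.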

    Read the article

  • Deferred Rendering with Diffuse, Specular, and Normal Maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model, which can be found here, as my sandbox. Note that I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp, and I extract all geometry information, including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file: ambient/diffuse/specular/emissive reflectivity coefficients (Ka, Kd, Ks, Ke), shininess, the diffuse map, the specular map and the normal map. I understand that I must render vertex attributes such as position, normals and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but don't you require the diffuse, specular and normal maps, and therefore the lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents and normals into g-buffers, then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (GoogleLog, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need: performance; ease of use (allow streaming or formatting or something like that); reliability (no leaks or crashes!); cross-platform (at least Windows, Mac OS X, Linux/Ubuntu). Which logging library would you recommend? Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it doesn't have good performance. Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't handle multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared those libs (and more), your advice might be of good use. Games are often performance-demanding while complex to debug, so it would be good to know logging libraries that, in our specific case, have clear advantages.

    Read the article

  • How is the gimbal lock problem solved using accumulated matrix transformations

    - by Luke San Antonio
    I am reading the online "Learning Modern 3D Graphics Programming" book by Jason L. McKesson. As of now, I am up to the gimbal lock problem and how to solve it using quaternions. However, right here, at the Quaternions page:

        Part of the problem is that we are trying to store an orientation as a series of 3 accumulated axial rotations. Orientations are orientations, not rotations. And orientations are certainly not a series of rotations. So we need to treat the orientation of the ship as an orientation, as a specific quantity.

    I guess this is the first spot where I start to get confused, because I don't see the dramatic difference between orientations and rotations. I also don't understand why an orientation cannot be represented by a series of rotations... Also:

        The first thought towards this end would be to keep the orientation as a matrix. When the time comes to modify the orientation, we simply apply a transformation to this matrix, storing the result as the new current orientation. This means that every yaw, pitch, and roll applied to the current orientation will be relative to that current orientation. Which is precisely what we need. If the user applies a positive yaw, you want that yaw to rotate them relative to where they are currently pointing, not relative to some fixed coordinate system.

    The concept I understand; what I don't understand is, if accumulating matrix transformations is a solution to this problem, how the code given on the previous page isn't just that. Here's the code:

        void display()
        {
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            glClearDepth(1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glutil::MatrixStack currMatrix;
            currMatrix.Translate(glm::vec3(0.0f, 0.0f, -200.0f));
            currMatrix.RotateX(g_angles.fAngleX);
            DrawGimbal(currMatrix, GIMBAL_X_AXIS, glm::vec4(0.4f, 0.4f, 1.0f, 1.0f));
            currMatrix.RotateY(g_angles.fAngleY);
            DrawGimbal(currMatrix, GIMBAL_Y_AXIS, glm::vec4(0.0f, 1.0f, 0.0f, 1.0f));
            currMatrix.RotateZ(g_angles.fAngleZ);
            DrawGimbal(currMatrix, GIMBAL_Z_AXIS, glm::vec4(1.0f, 0.3f, 0.3f, 1.0f));

            glUseProgram(theProgram);
            currMatrix.Scale(3.0, 3.0, 3.0);
            currMatrix.RotateX(-90);
            //Set the base color for this object.
            glUniform4f(baseColorUnif, 1.0, 1.0, 1.0, 1.0);
            glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE, glm::value_ptr(currMatrix.Top()));
            g_pObject->Render("tint");

            glUseProgram(0);
            glutSwapBuffers();
        }

    To my understanding, isn't what he is doing (modifying a matrix on a stack) accumulating matrices, since the author combined all the individual rotation transformations into one matrix which is stored on the top of the stack? My understanding of a matrix is that it is used to take a point which is relative to one origin (let's say... the model) and make it relative to another origin (the camera). I'm pretty sure this is a safe definition; however, I feel like there is something missing which is blocking me from understanding this gimbal lock problem. One thing that doesn't make sense to me is: if a matrix determines the relative difference between two "spaces", how come a rotation around the Y axis for, let's say, roll doesn't put the point in "roll space", which can then be transformed once again in relation to this roll... In other words, shouldn't any further transformations to this point be in relation to this new "roll space", and therefore not have the rotation be relative to the previous "model space", which is causing the gimbal lock? That's why gimbal lock occurs, right? It's because we are rotating the object around set X, Y, and Z axes rather than rotating the object around its own, relative axes. Or am I wrong? Since apparently the code I linked isn't an accumulation of matrix transformations, can you please give an example of a solution using this method. So in summary:

        What is the difference between a rotation and an orientation?
        Why is the code linked not an example of accumulation of matrix transformations?
        What is the real, specific purpose of a matrix, if I had it wrong?
        How could a solution to the gimbal lock problem be implemented using accumulation of matrix transformations?
        As a bonus: why are the transformations after the rotation still relative to "model space"?
        Another bonus: am I wrong in the assumption that after a transformation, further transformations will occur relative to the current one?

    Also, if it wasn't implied, I am using OpenGL, GLSL, C++, and GLM, so examples and explanations in terms of these are greatly appreciated, if not necessary. The more detail the better! Thanks in advance...
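    For the "accumulation" question specifically, the usual pattern is to keep one persistent orientation matrix between frames and multiply each frame's small rotation into it, instead of rebuilding the matrix from three absolute angles every frame, which is what the display() above does. A sketch in XNA/C# terms, for consistency with the rest of this page (the book itself uses GLM, where glm::rotate applied to a stored glm::mat4 plays the same role):

        // Persistent orientation, kept between frames rather than rebuilt from angles.
        Matrix orientation = Matrix.Identity;

        void ApplyYaw(float radians)
        {
            // Pre-multiplying applies the new rotation about the ship's *local* Y axis
            // (XNA's row-vector convention), then folds it into the accumulated result.
            // The angles themselves are never stored or re-applied.
            orientation = Matrix.CreateRotationY(radians) * orientation;
            // (Renormalize occasionally, or accumulate in a Quaternion instead, to keep
            // floating-point drift from skewing the matrix.)
        }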

    Read the article

  • box2d resize bodies around a point

    - by philipp
    I have a compound object consisting of a b2Body, vector graphics and a list of polygons which describe the b2Body's shapes. This object has its own transformation matrix to centralize the storage of transformations. So far everything is working quite fine; even scaling works, but not if I scale around a point. In the initialization phase of the object it is scaled around a point. This happens in this order: transform the main matrix, transform the vector graphics and the polygons, recreate the b2Body. After this function has run, the shapes and all the graphics are exactly where they should be, BUT after the first steps of the b2World the graphical stuff moves away from the body. When I ran the debugger I found out that the position of the body is 0/0. The red dot shows the center of scaling; the first image shows the basic setup and the second the final position of the graphics. This distance stays constant for the rest of the simulation. If I set the position via myBody.SetPosition(sx, sy); the whole scenario just plays out a bit further from the origin. Any idea how to fix this?

    EDIT: I dug deeper into the problem, and it lies in the fact that I must not scale the transform matrix for the b2Body shapes around the center, but instead set the b2Body's position back to the right point after scaling. But how can I calculate that point?

    EDIT 2: I dug even deeper into it and even solved it, but this is a slow solution and I hope somebody understands what formula I need. Assuming I have a set of polygons relative to an origin as basis shapes for a b2Body, scaling the whole object around a certain point is done in the following steps: I scale everything around the center except the polygons; I create a clone of the polygons' matrix; I scale this clone around the point; I calculate dx, dy as the differences clone.tx - original.tx and clone.ty - original.ty; I scale the original polygon matrix NOT around the point; I recreate the body; I create the fixture; I set the position of the body to dx and dy; done! So what I am interested in is a formula for dx and dy without cloning matrices, scaling the clone around a point, getting dx and dy, and finally scaling the vertex matrix.
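    For what it's worth, a uniform scale s around a point P moves a matrix translation t to P + s * (t - P), so the offset that the cloned matrix measures can be computed directly. A sketch, using an XNA-style Vector2 purely for notation:

        static Vector2 ScaleOffset(Vector2 t, Vector2 p, float s)
        {
            // dx, dy to move the body by after scaling the polygon matrix in place
            return (s - 1f) * (t - p);
        }

    Here t is the translation part of the polygons' matrix before scaling and s is the uniform scale factor; a non-uniform scale would need the x and y factors applied separately.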

    Read the article

  • Tips for communication between JS browser game and node.js server?

    - by Petteri Hietavirta
    I am tinkering around with a simple Canvas-based cave flyer game and I would like to make it multiplayer eventually. The plan is to use Node.js on the server side. The data sent over would consist of the position of each player, direction, velocity and such. The player movements are simple force physics, so I should be able to extrapolate movements before the next update from the server. Any tips or best practices on the communications side? I guess WebSockets are the way to go. Should I send information on every pass of the game loop or at specified intervals? Also, I don't mind if it doesn't work with older browsers.

    Read the article

  • Multiplayer game communication framework for mac/ios

    - by ishaq
    (Cross-post from Stack Overflow.) I am creating a multiplayer 2D game for Mac and iOS devices. I'll be using cocos2d as the graphics/game engine; however, I am largely blank on what to use for multiplayer communication. Please note that I cannot use central servers, e.g. SmartFox, RedDwarf, etc., since I want the players to "host" games for others and be able to play them on their LAN, a VPN or my own servers. Any pointers? I checked Lidgren, but it's for .NET only and hence not an option for me. EDIT: just in case it wasn't clear, the messaging has to be real-time, hence it's probably going to be over UDP.

    Read the article

  • Continuous Movement of gun bullet

    - by Siddharth
    I am using Box2D for the movement of the body. When I apply gravity (0,0) the bullet moves continuously, but when I change gravity to earth gravity the behavior changes. I also tried applying a continuous force to the bullet body, but the behavior was not good. So please provide any suggestion for continuously moving the bullet body under earth gravity.

        currentVelocity = bulletBody.getLinearVelocity();
        if (currentVelocity.len() < speed || currentVelocity.len() > speed + 0.25f)
        {
            velocityChange = Math.abs(speed - currentVelocity.len());
            currentVelocity.set(currentVelocity.x * velocityChange,
                                currentVelocity.y * velocityChange);
            bulletBody.applyLinearImpulse(currentVelocity, bulletBody.getWorldCenter());
        }

    I apply the above code for the continuous velocity of the body. Also, I was not able to find any setGravityScale method in the library.
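    One thing to note: multiplying the velocity by the speed difference does not bring its magnitude back to the target speed. A sketch of the usual alternative, keeping the question's method names (setLinearVelocity is assumed to be available in the Box2D port in use):

        // Renormalize: keep the current direction, force the magnitude back to `speed`.
        Vector2 v = bulletBody.getLinearVelocity();
        float len = v.len();
        if (len > 0.0001f && Math.abs(len - speed) > 0.25f)
        {
            v.set(v.x * speed / len, v.y * speed / len);
            bulletBody.setLinearVelocity(v);   // assumed to exist in this Box2D port
        }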

    Read the article

  • Detecting extremely fast joystick button presses?

    - by DBRalir
    Is it usually possible for the player to press and release a button within a single frame, so that the game engine doesn't have time to detect it? How do programmers usually handle this situation? Is it even necessary to handle it? Specifically, I am asking about GLFW's joystick input capabilities. I am currently using GLFW to make a game, and I've noticed that keyboard and mouse have callback functions, while joysticks do not. Also, it does not appear to be possible to enable "sticky keys" for a joystick. (I have only recently started using GLFW, so please correct me if I am wrong, as having either of those would solve the problem.)
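    A generic per-frame edge-detection sketch, for reference (not GLFW-specific; ButtonCount, OnButtonPressed and OnButtonReleased are made-up names). It catches any press that lasts at least one frame; a press and release that both happen between two polls is invisible to polling, which is exactly the gap that callbacks or "sticky" input are meant to close:

        bool[] previous = new bool[ButtonCount];

        void PollButtons(bool[] current)
        {
            for (int i = 0; i < ButtonCount; i++)
            {
                if (current[i] && !previous[i]) OnButtonPressed(i);    // rising edge
                if (!current[i] && previous[i]) OnButtonReleased(i);   // falling edge
                previous[i] = current[i];
            }
        }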

    Read the article

  • Hardware instancing for voxel engine

    - by Menno Gouw
    I just did the tutorial on hardware instancing from this source: http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/. Somewhere between 900,000 and 1,000,000 cubes drawn I get this error: "XNA Framework HiDef profile supports a maximum VertexBuffer size of 67108863.", while it still runs smoothly at 900k. That is slightly less than 100x100x100, which is exactly a million. Now, I have seen voxel engines with very "tiny" voxels; you easily get to 1,000,000 cubes in view with rough terrain and a decent far plane. Obviously I can optimize a lot in the geometry buffer method, like rendering only the visible faces of a cube or using larger faces covering multiple cubes if the area is flat. But is a vertex buffer of roughly 67 MB the maximum I can work with, or can I create multiple?

    Read the article

  • Get Specific depth values in Kinect (XNA)

    - by N0xus
    I'm currently trying to do hand/finger tracking with a Kinect in XNA. For this, I need to be able to specify the depth range I want my program to render. I've looked around, and I cannot see how this is done. As far as I can tell, the Kinect's depth values only work with the preset ranges found in the depthStream. What I would like to do is make it modular so that I can change the depth range my Kinect renders. I know this has been done before, but I can't find anything online that shows me how to do it. Could someone please help me out? I have made it possible to render the standard depth view with the Kinect, and the method that I have made for converting the depth frame is as follows (I have a feeling it's something in here I need to set):

        private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream, int depthFrame32Length)
        {
            int tooNearDepth = depthStream.TooNearDepth;
            int tooFarDepth = depthStream.TooFarDepth;
            int unknownDepth = depthStream.UnknownDepth;
            byte[] depthFrame32 = new byte[depthFrame32Length];

            for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
            {
                int player = depthFrame[i16] & DepthImageFrame.PlayerIndexBitmask;
                int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;

                // transform 13-bit depth information into an 8-bit intensity appropriate
                // for display (we disregard information in the most significant bit)
                byte intensity = (byte)(~(realDepth >> 8));

                if (player == 0 && realDepth == 0)
                {
                    // white
                    depthFrame32[i32 + RedIndex] = 255;
                    depthFrame32[i32 + GreenIndex] = 255;
                    depthFrame32[i32 + BlueIndex] = 255;
                }
                // omitted other if statements; they simply change the color of the pixels
                // if they fall outside the preset depth values
                else
                {
                    // tint the intensity by dividing by per-player values
                    depthFrame32[i32 + RedIndex] = (byte)(intensity >> IntensityShiftByPlayerR[player]);
                    depthFrame32[i32 + GreenIndex] = (byte)(intensity >> IntensityShiftByPlayerG[player]);
                    depthFrame32[i32 + BlueIndex] = (byte)(intensity >> IntensityShiftByPlayerB[player]);
                }
            }
            return depthFrame32;
        }

    I have a strong hunch it's something I need to change in the int player and int realDepth values, but I can't be sure.
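    Since intensity here comes from a fixed 8-bit shift of realDepth, one way to make the range modular is to map a caller-chosen depth window onto the 0-255 intensity instead. A sketch; minDepth and maxDepth (in millimetres) are values you would expose yourself, not Kinect SDK constants:

        static byte DepthToIntensity(int realDepth, int minDepth, int maxDepth)
        {
            if (realDepth < minDepth || realDepth > maxDepth)
                return 0;                                    // outside the window: draw black
            float t = (realDepth - minDepth) / (float)(maxDepth - minDepth);
            return (byte)(255 - (int)(t * 255));             // near = bright, far = dark
        }

    Calling this in place of the fixed shift, with minDepth/maxDepth as fields you can change at runtime, gives the adjustable range the question asks about.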

    Read the article

  • Deferred Shading - Toolkit

    - by AliveDevil
    I recently managed to get some lights rendered in a scene by using a buffer and a for-loop. The problem with this method is the performance drop when more lights are used. I tried to convert Deferred Rendering in XNA4.0 | ROY-T.NL, but it is not working because I am not using any models. I know I have to render color, normals and lights separately, but I don't know how to get it working. To understand my structure better: I'm using a world class which holds some chunks. These chunks load all vertices from their items; these items have a property which returns their vertices as VertexPositionNormalTexture[]. The chunk loads these vertices and combines them into one large array of VertexPositionNormalTexture via someList.AsParallel().SelectMany(m => m).ToArray(), where m is a VertexPositionNormalTexture and someList is a List<VertexPositionNormalTexture>. I have my own shader to draw these vertices the way I want them to be drawn. The first thing I would try is setting up two RenderTarget2Ds for rendering the color and normal parts, with two different shaders. Then I would have to render the lights, and there's the problem: I don't know how. I set up a structure to simplify working with lights, but it didn't really help:

        public struct Light
        {
            public Vector3 Position;
            public Color4 Color;
            public float Range;
            public float Intensity;

            public Light(Vector3 position, Color color, float range, float intensity)
                : this()
            {
                this.Position = position;
                this.Color = color;
                this.Range = range;
                this.Intensity = intensity;
            }

            public float[] Definition
            {
                get
                {
                    return new[]
                    {
                        Position.X, Position.Y, Position.Z,
                        Color.Red, Color.Green, Color.Blue,
                        Intensity, Range
                    };
                }
            }
        }

    The next part is equally difficult, because I don't know how to combine the colorMap, normalMap and textureMap into one finalMap. Some information about the system: I'm using SharpDX (a nightly build from some months ago) and SharpDX.Toolkit (I don't want to mess with Direct3DDevice and similar things). Can someone help me with this problem? If things are missing or I provided insufficient information, tell me; I need to get deferred shading working. Things I'm not able to do: create a render target which holds all the lights, merge the colorMap, normalMap and lightMap into one finalMap, and present this to the user.

    Read the article

  • How do I implement movement in a WPF Adventure game?

    - by ZeroPhase
    I'm working on a short WPF adventure game. The only major hurdle I have right now is how to animate objects on the screen correctly. I've experimented with DoubleAnimation and ThicknessAnimation; both enable movement of the character, but the speed is a bit erratic. The objects I'm trying to move around are labels in a grid, and I'm checking the mouse's position relative to the canvas the grid is in. Does anyone have any suggestions for coding the movement, while still allowing mouse clicks to pick up items when needed? It would be nice if I could continue using the Visual Studio GUI editor. By the way, I'm fine with scrapping labels in a grid for a more suitable object to manipulate. Here's my movement code:

        ThicknessAnimation ta = new ThicknessAnimation();

    The event handling the movement:

        private void Hansel_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
        {
            ta.FillBehavior = FillBehavior.HoldEnd;
            ta.From = Hansel.Margin;
            double newX = Mouse.GetPosition(PlayArea).X;
            double newY = Mouse.GetPosition(PlayArea).Y;
            if (newX < Convert.ToDouble(Hansel.Margin.Left))
            {
                //newX = -1 * newX;
                ta.To = new Thickness(0, newY, newX, 0);
            }
            else if (newY < Convert.ToDouble(Hansel.Margin.Top))
            {
                newY = -1 * newY;
            }
            else
            {
                ta.To = new Thickness(newX, newY, 0, 0);
            }
            ta.Duration = new Duration(TimeSpan.FromSeconds(2));
            Hansel.BeginAnimation(Grid.MarginProperty, ta);
        }

    Screenshot with annotations: http://i1118.photobucket.com/albums/k608/sealclubberr/clickToMove_zps9d4a33cc.png
    Screenshot with example movement: http://i1118.photobucket.com/albums/k608/sealclubberr/clickToMove_zps51f2359f.jpg
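    One likely cause of the erratic speed is that every move takes a fixed two seconds regardless of distance, so short hops look fast and long ones look slow. A sketch of deriving the duration from the distance instead (PixelsPerSecond is a made-up tuning constant; newX and newY are the values already computed in the handler above):

        const double PixelsPerSecond = 200;   // assumption: tune to taste

        double dx = newX - Hansel.Margin.Left;
        double dy = newY - Hansel.Margin.Top;
        double distance = Math.Sqrt(dx * dx + dy * dy);
        ta.Duration = new Duration(TimeSpan.FromSeconds(distance / PixelsPerSecond));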

    Read the article

  • Manipulating Perlin Noise

    - by Numeri
    I've been learning about Procedurally Generated Content lately (in particular, Perlin noise). Perlin noise works great for making things like landscapes, height maps, and stuff like that. But now I am trying to generate structures more like mountain ranges (in 2D, as 3D would be way over my head right now) or underground veins of ores. I can't manage to manipulate Perlin Noise to do this. Making a cut off point (i.e. using only the tops of the 'mountains' of a heightmap) wouldn't work, because I would get lumps of mountains/veins. Any suggestions? Thanks, Numeri
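    One common trick for ranges and veins, for what it's worth, is "ridged" noise: fold the noise around zero and invert it, so values near zero become sharp crests that form long connected ridges rather than round blobs. A sketch; Noise2D is a stand-in for whatever Perlin implementation is already in use, assumed to return values in [-1, 1]:

        static float Ridged(float x, float y)
        {
            float n = Noise2D(x, y);      // hypothetical noise call
            return 1f - Math.Abs(n);      // 1 on the ridge line, falling off to either side
        }
        // A mountain range or ore vein could then be "wherever Ridged(x, y) > threshold",
        // which produces connected features instead of lumps.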

    Read the article

  • Collision detection code style

    - by Marian Ivanov
    Not only are there two useful broad-phase algorithms and a lot of useful narrow-phase algorithms, there are also multiple code styles.

    Arrays vs. calling. Option A1: make an array of broadphase checks, then filter them with narrowphase checks, then resolve them.

        function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
            possibleCollisions = getPossibleCollisions(b, a->get(index));
            for(i = 0; i < possibleCollisionsNumber; i++){
                if(narrowphase(possibleCollisions[i], a[index])) {
                    collisions->push(possibleCollisions[i]);
                };
            };
            for(i = 0; i < collisionsNumber; i++){
                //CODE FOR RESOLUTION
            };
        };

    Option A2: make the broadphase call the narrowphase, and the narrowphase call the resolution.

        function resolveCollisions(thingyStructure * a, thingyStructure * b, int index){
            broadphase(b, a->get(index));
        };

        function broadphase(thingy * with, thingy * what){
            while(blah){
                //blahcode
                narrowphase(what, collidingThing);
            };
        };

    Events vs. in-the-loop. Option B1: fire an event. This abstracts the check away, but it's trickier to make an equal interaction.

        a[index] -> collisionEvent(eventdata);
        //much later
        int collisionEvent(eventdata){
            //resolution gets here
        }

    Option B2: resolve the collision inside the loop. This glues narrowphase and resolution into one layer.

        if(narrowphase(possibleCollisions[i], a[index])) {
            //CODE GOES HERE
        };

    The questions are: which of the first two is better, and how am I supposed to make a zero-sum Newtonian interaction under B1?

    Read the article

  • Calculate gears rotation for a realtime simulation

    - by nkint
    Hi, I'm trying to make a game with real-time simulation of gears. There is a big gear with a smaller gear inside it. I managed to draw gears with different diameters but equal-size teeth, but if I try to move the smaller one inside the bigger one the movement is odd (see the animated GIF). The biggest gear is centered at C1 and the small one at C2. I calculate the C2 position this way:

        C2.x = C1.x + (C1_RADIUS - C2_RADIUS) * cos(t);
        C2.y = C1.y - (C1_RADIUS - C2_RADIUS) * sin(t);

    for t going from 0 to TWO_PI in n steps. I apply the angle t as the rotation, but maybe that is wrong and I have to calculate another rotation to get a perfect joint.
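    Applying t as the small gear's own rotation is indeed the likely problem: when a gear of radius r2 rolls without slipping inside a fixed gear of radius r1, its spin about its own centre is scaled by the radius ratio and runs opposite to the revolution of its centre. A sketch; the sign depends on the coordinate convention, so treat it as a guide:

        float centerAngle  = t;                          // revolution of C2 around C1, as above
        float selfRotation = -((r1 - r2) / r2) * t;      // spin of the small gear about its own centre
        // here r1 = C1_RADIUS and r2 = C2_RADIUS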

    Read the article

  • How do I pass an object location into a vertex shader?

    - by Greg Kassapidis
    I am using Blender Game Engine. I want to create a large flat plane, and deform it locally near a moving object. So far (despite being a beginner at shaders) I've written a vertex shader for the plane which moves the vertices to their correct positions (constant positions, for now). I cannot find a way to swap that constant location with an object's location updated every frame, while the shader is running. I am not even sure if it's possible. I only want to access a specific object's center from the shader.

    Read the article

  • Automatically zoom out the camera to show all players (XNA)

    - by user36159
    I am building a game in XNA that takes place in a rectangular arena. The game is multiplayer, and each player may go wherever they like within the arena. The camera is a perspective camera that looks directly downwards. The camera should be automatically repositioned based on the game state. Currently, the xy position is a weighted sum of the xy positions of important entities. I would like the camera's z position to be calculated from the xy coordinates so that it zooms out to the point where all important entities are visible. My current approach is:

        hw = the greatest x distance from the camera to an important entity
        hh = the greatest y distance from the camera to an important entity
        z  = max(hw / tan(FoVx), hh / tan(FoVy))

    My code seems to almost work as it should, but the resulting z values are always too low by a factor of about 4. Any ideas?
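    A sketch of one likely culprit (camera.FieldOfView and camera.AspectRatio are assumptions about the camera object, not known names from the project): the distance formula needs the half field of view, and in XNA the stored angle is normally the vertical one, so the horizontal half-angle has to be derived from the aspect ratio:

        float halfFovY = camera.FieldOfView * 0.5f;
        float halfFovX = (float)Math.Atan(Math.Tan(halfFovY) * camera.AspectRatio);

        float z = Math.Max(
            hw / (float)Math.Tan(halfFovX),
            hh / (float)Math.Tan(halfFovY));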

    Read the article

  • How do I display a "mucus spreading" effect in a 2D environment?

    - by nathan
    Here is an example of such mucus spreading. The substance spreads around the source (in this example, the source would be the main alien building). The game is StarCraft; the purple substance is called creep. How would this kind of substance spreading be achieved in a top-down 2D environment? By recalculating the substance's progression and regenerating the effect on the fly each frame, by using a large collection of tiles, or something else?
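    A sketch of one simple tile-based approach (all names here are hypothetical; Point is XNA's integer point type): keep a boolean creep grid and, on a timer rather than every frame, let each covered tile claim its four neighbours out to a maximum radius from the source building. Rendering then only has to draw the covered tiles, with a softer texture at the border if desired.

        void SpreadCreep(bool[,] creep, Point source, int maxRadius)
        {
            int w = creep.GetLength(0), h = creep.GetLength(1);
            var next = (bool[,])creep.Clone();
            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    if (!creep[x, y]) continue;
                    foreach (var n in new[] { new Point(x + 1, y), new Point(x - 1, y),
                                              new Point(x, y + 1), new Point(x, y - 1) })
                    {
                        if (n.X < 0 || n.Y < 0 || n.X >= w || n.Y >= h) continue;
                        if (Math.Abs(n.X - source.X) + Math.Abs(n.Y - source.Y) > maxRadius) continue;
                        next[n.X, n.Y] = true;
                    }
                }
            Array.Copy(next, creep, next.Length);   // commit this tick's growth
        }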

    Read the article

  • XNA CustomModelAnimationSample problem

    - by Mentoliptus
    I downloaded the official tutorial from CustomModelAnimationSample. It works fine, but when I try to replicate it in my project, it fails to load the Tag property of my model. I found that the problem is in the line: skinnedModel = Content.Load<Model>("DudeWalk"); This line loads the model from the DudeWalk.fbx file with the custom SkinnedModelProcessor and loads the animation data into the model. After the line, the Tag property is populated. I stepped into the method and it went to the custom ModelData class. I copied everything from the CustomModelAnimationWindows and CustomModelAnimationPipeline projects into my solution and set all the references. I tried the same line of code and couldn't step into the method. It called the default method or model constructor, and after the line the model's Tag property was null. I have to load the model through my custom SkinnedModelProcessor class, but how do I tell the game to use this class? In the CustomModelClass tutorial the line is changed to: model = Content.Load<CustomModel>("tank"); So I assumed that I have to set the generic type to a custom model class, but the first example works without it. If anyone has some useful advice or another helpful link, I'll be happy to try it.

    Read the article

  • Can anyone recommend an AI sandbox?

    - by user19433
    I'm a passionate person who has been around AI for a long time [1] but never gone deep enough into it. Now it's time! I've been looking for some way to concentrate on AI coding, but I haven't managed to find an AI environment I can focus on. I just want to use an AI sandbox environment which would give me tools like:

        visibility information
        a character controller
        a way to easily define a level, with obstacles of course
        physics/collider management
        trigger management

    It doesn't need to be a shiny, eye-candy graphical renderer: this is about pathfinding, tactical reasoning, etc. I have tried:

        Unreal Dev Kit: while the new release announcement is about C++ coding, this is about external tools and will be released in 2013.
        CryEngine: really interesting, as AI is present here, but coding with it appears to be hell. Did I get it wrong?
        Half-Life source, C4, Torque, DX Studio: either quite old, not very useful, or costly; these imply digging into documentation (when provided) to code everything, graphics included.
        Unity 3D: the most promising platform. While you also need to create your own environment, there are lots of examples. The disadvantage, in addition to spending time getting this environment working, is the language choice: C#, JavaScript or Boo. C# is not that hard, but it implies you'll always have to convert papers (I love those from Lars Linden), book code, or anything you can find on Aigamedev, which are most often in C++. This is extra work. I've looked at "Simple Path", the very good Arong Greenberg work (but no source provided), and AngryAnt's work.
        AI Sandbox: this seems to be exactly what, as an AI coder, I want to use. I saw some previews, but from 2009; we still don't know precisely what it will be, whether it will be open source or free (I strongly doubt it), whether I will be able to buy it, and whether it will really provide the tools I need to focus on AI.

    That being said, what is the best environment for being able to focus on AI coding only? Is it even possible?

    Read the article
