Search Results

Search found 25377 results on 1016 pages for 'development'.

Page 497 of 1016.

  • Using components in the XNA Game State Management example?

    - by Zolomon
    In the Game State Management example at the App Hub, they say that if you want to use components you can extend GameScreen to host other components inside itself. I'm having a very hard time tying this together. I tried extending the GameScreen class by adding a public property, public List<DrawableGameComponent> components { get; set; }, then adding my components to that list when I initialize the current screen, and looping over the components in the LoadContent, Update and Draw methods. However, this doesn't feel like the correct way to go, mainly because it doesn't work when I get to the implementation of my GameplayScreen. Any thoughts?
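
    For reference, a minimal sketch of that forwarding approach, with simplified signatures (the real GSM GameScreen.Update also takes the focus and covered flags) and XNA's stock DrawableGameComponent assumed:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public abstract class GameScreen
        {
            // Child components hosted by this screen.
            protected readonly List<DrawableGameComponent> Components =
                new List<DrawableGameComponent>();

            public virtual void LoadContent()
            {
                foreach (DrawableGameComponent component in Components)
                    component.Initialize();  // Initialize() triggers the component's own LoadContent()
            }

            public virtual void Update(GameTime gameTime)
            {
                foreach (DrawableGameComponent component in Components)
                    if (component.Enabled)
                        component.Update(gameTime);
            }

            public virtual void Draw(GameTime gameTime)
            {
                foreach (DrawableGameComponent component in Components)
                    if (component.Visible)
                        component.Draw(gameTime);
            }
        }

    A GameplayScreen would then add its components in its constructor or LoadContent override and call base.Update/base.Draw so the forwarding actually runs; forgetting the base calls in the override is a common reason this approach "doesn't work" in the concrete screen.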

    Read the article

  • Setting density for Android game

    - by Asghar
    I am developing an Android game in which a ball (bitmap) translates (is in motion), so I have provided motion equations for the ball. I have checked my app on a Samsung Galaxy S2, whose actual density is roughly 252 dpi, and it works fine. My question: does this motion of bitmaps in a SurfaceView depend on the actual density of the phone (i.e. 252 dpi for the S2) or on the generalized density (i.e. 240 dpi)? I am confused about whether, if I run this app on a 235 dpi smartphone, it will have the same motion performance as on the Galaxy S2 (with 252 dpi), or whether it will be a little slower. Any help will be appreciated.
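
    For what it's worth, Android reports the bucketed density to drawing code: DisplayMetrics.density is 1.5 (the 240 dpi hdpi class) on devices like the S2, not the exact hardware dpi. More importantly, motion stays uniform across devices only if it is driven by elapsed time rather than by frames drawn. A sketch of both ideas, written in C# style for consistency with this page's other snippets (on Android the same lines go in the SurfaceView update loop):

        static class Motion
        {
            // Speed expressed in dp (density-independent pixels) per second.
            const float SpeedDp = 160f;

            // densityScale is DisplayMetrics.density on Android: pixels per dp (dpi / 160).
            static float DpToPixels(float dp, float densityScale)
            {
                return dp * densityScale;
            }

            // Advance a position by wall-clock time so a 235 dpi and a 252 dpi device
            // cover the same physical distance per second, regardless of frame rate.
            public static float Step(float positionPx, float deltaSeconds, float densityScale)
            {
                return positionPx + DpToPixels(SpeedDp, densityScale) * deltaSeconds;
            }
        }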

    Read the article

  • AABB vs OBB Collision Resolution jitter on corners

    - by patt4179
    I've implemented a collision library for a character, who is an AABB, and am resolving collisions between AABB vs. AABB and AABB vs. OBB. I wanted slopes for certain sections, so I've toyed around with using several OBBs to make one, and it's working great except for one glaring issue: the collision resolution on the corner of an OBB makes the player's AABB jitter up and down constantly. I've tried a few things I've thought of, but I just can't wrap my head around what's going on exactly. Here's a video of what's happening, as well as my code. This is the function that gets the collision resolution (I'm likely not doing this the right way, so this may be where the issue lies):

        public Vector2 GetCollisionResolveAmount(RectangleCollisionObject resolvedObject, OrientedRectangleCollisionObject b)
        {
            Vector2 overlap = Vector2.Zero;
            LineSegment edge = GetOrientedRectangleEdge(b, 0);

            if (!SeparatingAxisForRectangle(edge, resolvedObject))
            {
                LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment();
                Range axisRange = new Range(), rEdgeARange = new Range(),
                      rEdgeBRange = new Range(), rProjection = new Range();
                Vector2 n = edge.PointA - edge.PointB;

                rEdgeA.PointA = RectangleCorner(resolvedObject, 0);
                rEdgeA.PointB = RectangleCorner(resolvedObject, 1);
                rEdgeB.PointA = RectangleCorner(resolvedObject, 2);
                rEdgeB.PointB = RectangleCorner(resolvedObject, 3);
                rEdgeARange = ProjectLineSegment(rEdgeA, n);
                rEdgeBRange = ProjectLineSegment(rEdgeB, n);
                rProjection = GetRangeHull(rEdgeARange, rEdgeBRange);
                axisRange = ProjectLineSegment(edge, n);

                float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2;
                float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2;

                if (projectionMid > axisMid)
                {
                    overlap.X = axisRange.Maximum - rProjection.Minimum;
                }
                else
                {
                    overlap.X = rProjection.Maximum - axisRange.Minimum;
                    overlap.X = -overlap.X;
                }
            }

            edge = GetOrientedRectangleEdge(b, 1);
            if (!SeparatingAxisForRectangle(edge, resolvedObject))
            {
                LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment();
                Range axisRange = new Range(), rEdgeARange = new Range(),
                      rEdgeBRange = new Range(), rProjection = new Range();
                Vector2 n = edge.PointA - edge.PointB;

                rEdgeA.PointA = RectangleCorner(resolvedObject, 0);
                rEdgeA.PointB = RectangleCorner(resolvedObject, 1);
                rEdgeB.PointA = RectangleCorner(resolvedObject, 2);
                rEdgeB.PointB = RectangleCorner(resolvedObject, 3);
                rEdgeARange = ProjectLineSegment(rEdgeA, n);
                rEdgeBRange = ProjectLineSegment(rEdgeB, n);
                rProjection = GetRangeHull(rEdgeARange, rEdgeBRange);
                axisRange = ProjectLineSegment(edge, n);

                float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2;
                float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2;

                if (projectionMid > axisMid)
                {
                    overlap.Y = axisRange.Maximum - rProjection.Minimum;
                    overlap.Y = -overlap.Y;
                }
                else
                {
                    overlap.Y = rProjection.Maximum - axisRange.Minimum;
                }
            }

            return overlap;
        }

    And here is what I'm doing to resolve the collision right now:

        if (collisionDetection.OrientedRectangleAndRectangleCollide(obb, player.PlayerCollision))
        {
            var resolveAmount = collisionDetection.GetCollisionResolveAmount(player.PlayerCollision, obb);
            if (Math.Abs(resolveAmount.Y) < Math.Abs(resolveAmount.X))
            {
                var roundedAmount = (float)Math.Floor(resolveAmount.Y);
                player.PlayerCollision._position.Y -= roundedAmount;
            }
            else if (Math.Abs(resolveAmount.Y) <= 30.0f) // catch cases where the player should be able to step over the top of something
            {
                var roundedAmount = (float)Math.Floor(resolveAmount.Y);
                player.PlayerCollision._position.Y -= roundedAmount;
            }
            else
            {
                var roundedAmount = (float)Math.Floor(resolveAmount.X);
                player.PlayerCollision._position.X -= roundedAmount;
            }
        }

    Can anyone see what might be the issue here, or has anyone experienced this before and knows a possible solution? I've tried for a few days to figure this out on my own, but I'm just stumped.
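
    A hedged observation rather than a confirmed diagnosis: Math.Floor truncates the correction, so a 0.4-unit overlap corrects by 0 while a -0.4 amount corrects by a full -1, leaving residual or over-corrected penetration at a corner that can flip between the X and Y branches on alternating frames, which looks exactly like vertical jitter. One experiment is to apply the raw amounts with a small tolerance, reusing the variables from the snippet above:

        const float Epsilon = 0.01f;  // ignore sub-pixel overlaps to stop oscillation

        if (Math.Abs(resolveAmount.Y) <= Math.Abs(resolveAmount.X))
        {
            if (Math.Abs(resolveAmount.Y) > Epsilon)
                player.PlayerCollision._position.Y -= resolveAmount.Y;  // no Math.Floor
        }
        else
        {
            if (Math.Abs(resolveAmount.X) > Epsilon)
                player.PlayerCollision._position.X -= resolveAmount.X;
        }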

    Read the article

  • HTML5 clicking objects in canvas

    - by Dave
    I have a function in my JS that gets the user's mouse click on the canvas. Now let's say I have a random shape on my canvas (really it's a PNG image, which is rectangular), but I don't want to include any alpha space. My issue: say I click somewhere and the click lands on a pixel of one of the images. The first question is how you work out whether that pixel belongs to an object on the map (and not the grass tiles behind it). Secondly, if I did click said image, and each image carries its own unique information, how do you process the click to load the correct data? Note that I don't use libraries; I personally prefer the raw method, since I find that relying on libraries doesn't teach me much.
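
    For the first part, the standard trick is per-pixel alpha testing: convert the click into the image's local coordinates and inspect the alpha byte of that pixel, which in canvas terms means drawing the image once to a hidden canvas and reading it back with getImageData. The test itself is index arithmetic; here is a sketch in C#, this page's common language (the math is identical in JavaScript), assuming you already have the image's RGBA bytes:

        // rgba: 4 bytes per pixel, row-major, exactly the layout getImageData returns.
        static bool HitsOpaquePixel(byte[] rgba, int imageWidth, int imageHeight,
                                    int clickX, int clickY, int objX, int objY)
        {
            int localX = clickX - objX;  // click position in the image's own space
            int localY = clickY - objY;
            if (localX < 0 || localY < 0 || localX >= imageWidth || localY >= imageHeight)
                return false;            // outside the bounding box entirely
            byte alpha = rgba[(localY * imageWidth + localX) * 4 + 3];
            return alpha > 0;            // transparent padding does not count as a hit
        }

    For the second part, keep the objects in a list that carries each one's data, iterate topmost-first, and the first object whose alpha test passes is the one whose data you load.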

    Read the article

  • Units in a tile world

    - by Vilzow
    I've started making a 2D sidescroller. The camera and world rendering work as I expect, but now comes the physics part of the world. What I need is for one tile in the x direction (or y direction) to correspond to 1 meter. Since I have a variable time step (it's an Android mobile game), I can't figure this out: the timing and velocity will always be dependent on the device. So, is there a good way to make one tile correspond to 1 meter? Otherwise the physics implementation would get weird later.
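
    A common approach, sketched below under the assumption of a simple Euler update, is to define every speed in meters per second with one tile = one meter, advance positions by the frame's elapsed seconds, and convert to pixels only when drawing:

        static class TileUnits
        {
            public const float PixelsPerMeter = 32f;  // one tile drawn as 32 px; pick your tile size

            // velocity is in meters (tiles) per second; dt is the frame's elapsed seconds.
            public static float Integrate(float positionMeters, float velocityMetersPerSec, float dt)
            {
                return positionMeters + velocityMetersPerSec * dt;
            }

            public static float ToPixels(float meters)
            {
                return meters * PixelsPerMeter;
            }
        }

    A body moving at 3 m/s then crosses three tiles per second on every device, regardless of frame rate; running the physics on a fixed timestep on top of this (see the collision-detection entry further down this page) additionally makes it deterministic.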

    Read the article

  • How do I access PhysicalMaterial from the Actor class?

    - by EmAdpres
    I use Projectile for my weapon system, and UDKProjectile has two main functions for handling hits by projectiles (the bullets of my weapon):

        simulated function ProcessTouch(Actor Other, Vector HitLocation, Vector HitNormal) // for Actors
        simulated event HitWall(vector HitNormal, actor Wall, PrimitiveComponent WallComp) // everything except Actors (I guess)

    In the first function I just get the actor that was hit, and my question is how I can get that actor's physical material from the first parameter (Other), in order to react appropriately to it (for example, with the proper collision sound). A tricky (but hateful) way that I know works is to make a Trace from a little behind that actor to the actor itself and use the HitInfo parameter, which includes the physical material. But there should be a more standard way!

    Read the article

  • Writing a basic shader for large input files

    - by Zoltan Varadi
    I started writing a shader for my iOS app, and instead of starting from scratch I used this tutorial: http://www.raywenderlich.com/3664/opengl-es-2-0-for-iphone-tutorial I wrote an import function, first to import Wavefront .obj models. My problem is that I can't handle larger inputs (with a simple cube it was working). I realized that the indices array is an array of GLubyte values, which is unsigned char, so as a result I can't have more than 256 indices. I modified it to GLuint, but then I only get a blank screen. What else needs to be modified? P.S.: the source can be downloaded from here: http://d1xzuxjlafny7l.cloudfront.net/downloads/HelloOpenGL.zip
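
    Two things worth checking, offered as assumptions since the draw call isn't shown: the type argument of glDrawElements must change together with the array's element type, and core OpenGL ES 2.0 only guarantees GL_UNSIGNED_BYTE and GL_UNSIGNED_SHORT indices (GL_UNSIGNED_INT needs the OES_element_index_uint extension), so GLushort, good for 65535 indices, is the safe upgrade on iOS. Sketched via the OpenTK C# binding for consistency with this page's other snippets; the C calls map one-to-one:

        // using OpenTK.Graphics.OpenGL; BuildIndices() is a hypothetical helper.
        ushort[] indices = BuildIndices();

        GL.BufferData(BufferTarget.ElementArrayBuffer,
                      (IntPtr)(indices.Length * sizeof(ushort)),
                      indices, BufferUsageHint.StaticDraw);

        // The element type here must match the uploaded array, or the GPU walks
        // the buffer with the wrong stride and you typically get nothing on screen.
        GL.DrawElements(PrimitiveType.Triangles, indices.Length,
                        DrawElementsType.UnsignedShort, IntPtr.Zero);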

    Read the article

  • Why is C++ used for game engines? How about its future in game engines?

    - by kasperov
    C++, as I have seen, is heavily used in 3D video game engines. Is it because of performance, legacy code, or libraries such as DirectX? If performance, libraries, and code infrastructure are the reasons, doesn't that make C++ indispensable, at least for game engines (i.e., we have no other option even in the very distant future)? I ask because I'd like to know the upcoming trends in game engines.

    Read the article

  • Where to start in creating a massively multiplayer 3D Java game [on hold]

    - by user1373771
    I am planning on creating a massively multiplayer world, and I am wondering where to start. I am quite inexperienced in the field of Java, but I have researched it and learned that it is perhaps my best bet for this project, because it has a much easier learning curve for beginners than C++ while still being capable of handling massive numbers of players at a time. My question is simple: should I start by creating a single-player prototype and introduce multiplayer later as I become more experienced, or start with multiplayer before I am completely experienced in the field? Thanks for your help!

    Read the article

  • List<T>.AddRange is causing a brief Update/Draw delay

    - by Justin Skiles
    I have a list of entities which implement an ICollidable interface. This interface is used to resolve collisions between entities. My entities are: players, enemies, projectiles, items, and tiles. On each game update (about 60 t/s), I am clearing the list and adding the current entities based on the game state. I am accomplishing this via:

        collidableEntities.Clear();
        collidableEntities.AddRange(players);
        collidableEntities.AddRange(enemies);
        collidableEntities.AddRange(projectiles);
        collidableEntities.AddRange(items);
        collidableEntities.AddRange(camera.VisibleTiles);

    Everything works fine until I add the visible tiles to the list. The first ~1-2 seconds of running the game loop cause a visible hiccup that delays drawing (so I can see a jitter in the rendering). I can literally remove or add the line that adds the tiles and see the jitter occur and not occur, so I have narrowed it down to that line. My question is: why? The list of VisibleTiles is about 450-500 tiles, so it's really not that much data. Each tile contains a Texture2D (image) and a Vector2 (position) to determine what is rendered and where. I'm going to keep looking, but off the top of my head I can't understand why only the first 1-2 seconds hiccup and it is then smooth from there on out. Any advice is appreciated.
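
    A plausible, unconfirmed explanation: the first seconds of a loop are when the JIT compiles the hot path and when List<T> repeatedly grows its backing array, and those growth copies plus the resulting garbage collections appear as stutter; once the list reaches its high-water capacity and everything is compiled, the hiccup stops, which matches the 1-2 second pattern. Pre-sizing the list keeps the steady state allocation-free:

        // Reserve once, sized for the worst case, so AddRange never reallocates mid-frame.
        private readonly List<ICollidable> collidableEntities = new List<ICollidable>(1024);

        public void RebuildCollisionList()
        {
            collidableEntities.Clear();  // resets Count but keeps the backing array
            collidableEntities.AddRange(players);
            collidableEntities.AddRange(enemies);
            collidableEntities.AddRange(projectiles);
            collidableEntities.AddRange(items);
            collidableEntities.AddRange(camera.VisibleTiles);
        }

    The 1024 is a placeholder for your real worst case. If VisibleTiles is itself rebuilt into a fresh collection every frame, the same treatment applies there; zero per-frame allocation is the usual XNA target, so the GC never fires mid-draw.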

    Read the article

  • My game seems to be incompatible with recording software. What could be causing this?

    - by Lewis Wakeford
    I've just finished a little game-dev project for university, and I need to record a video to accompany my submission (just in case they can't get my source to work). Basically, my game doesn't work at all when FRAPS or Bandicam attempts to attach to it: I get a black screen and a stream of GL_INVALID_OPERATION messages from my error-reporting code. Dxtory can't seem to hook into it correctly at all; it doesn't display its FPS counter or anything. My game logic appears to be running correctly judging from the debug traces; it just seems like all the GL library calls break. I don't know a huge amount about how these programs operate, so I don't really know what I could be doing to cause this. I've heard they read from the OpenGL framebuffers, so maybe I'm doing something wrong there? I'm letting GLFW and GLEW do all the low-level initialization, but I have successfully recorded projects with the same setup and recording software. Essentially, has anyone ever run into something like this before, or do you know anything about how these programs work that could give a clue as to the cause of the issue?

    Read the article

  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being transferred to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish. We logically split our terrain mesh into smaller grid cells of 32x32 world units. Each cell has metadata that dictates the four 256x256 textures used for splatting, along with the alpha blend data, shadow, and light mappings. Each cell contains 81 vertices in a 9x9 grid. Presently, we examine each cell and determine the four textures being used to splat it. We combine that geometry with any other cell that uses the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order is kept the same as in other like cells and blending happens in the right order too. But even with this batching approach, it isn't uncommon, when looking out across an area of open terrain, to have a batch count between 1200 and 1700, depending on how frequently the textures or texture blends differ between cells. We are only doing frustum culling presently. So, using texture splatting, are there other alternatives that can reduce the batch count and keep rendering extremely performance-friendly even under DirectX 9.0c? We considered texture atlases, since we're targeting DirectX 9.0c and older OpenGL platforms, but trying to repeat textures using atlases and shaders results in seam artifacts, which we haven't been able to eliminate except by disabling mipmapping, and disabling mipmapping results in poor-quality textures from a distance. How have others batched together terrain geometry so that one can splat terrain using various textures while minimizing batch count and texture state switches, so that rendering performance isn't negatively impacted?
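
    On the atlas seams specifically, the standard mitigation (a sketch, not a guaranteed fix for this engine) is to bake a gutter of duplicated edge pixels around each tile and inset the UVs by half a texel at the coarsest mip level you allow, so bilinear and mip filtering never sample a neighbouring tile. A hypothetical helper:

        // UV rectangle for tile (tileX, tileY) in a square atlas, inset so filtering stays
        // inside the tile. Assumes each tile was baked with a duplicated-pixel gutter at
        // least as wide as the inset computed here.
        static void AtlasUVs(int tileX, int tileY, int tileSizeTexels, int atlasSizeTexels,
                             int coarsestMipUsed,
                             out float u0, out float v0, out float u1, out float v1)
        {
            float inset = 0.5f * (1 << coarsestMipUsed);  // half a texel at that mip, in base texels
            u0 = (tileX * tileSizeTexels + inset) / atlasSizeTexels;
            v0 = (tileY * tileSizeTexels + inset) / atlasSizeTexels;
            u1 = ((tileX + 1) * tileSizeTexels - inset) / atlasSizeTexels;
            v1 = ((tileY + 1) * tileSizeTexels - inset) / atlasSizeTexels;
        }

    If the tiles must genuinely repeat within one draw call, the seam usually comes from the derivative spike where the shader wraps the UV with frac(); sampling with explicit gradients computed from the unwrapped coordinates (tex2Dgrad under ps_3_0) is the usual workaround.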

    Read the article

  • How to keep the display tick rate steady when using continuous collision detection?

    - by nas Ns
    (I've just found out about this forum; I hope it is OK to repost my question here. I originally posted it at Stack Overflow, but it looks like I might get better help here.) Here is the question: I've implemented a basic particle-motion simulation with continuous collision detection, but there is a small issue with the display.

    Assume the simple case of circles moving inside a square. All collisions are elastic, there is no friction, and all motion is at constant speed. No forces are involved, no gravity. So when a particle is moving, it is always moving at constant speed (between collisions).

    What I do now is this. Let the simulation time step be 1 second (for example); this is the time step by which the simulation is advanced before displaying the new state (unless there is a collision sooner than this). At the start of each time step, the time of the next collision between any particles, or between a particle and a wall, is determined; call this the TOC, and let's say the TOC was 0.5 seconds in this case. Since the TOC is smaller than the standard time step, the system is moved forward by the TOC and the new state is displayed, so that the display shows any collisions just taking place (say, two circles just touched each other, or a circle just touched a wall). Next, the collision(s) are resolved (i.e., speeds updated, directions changed, etc.) and a new step is started. The same thing happens again. Now assume no collision is detected within the next 1 second (those two circles above will not be in collision any more, even though they are still touching, since their velocities show they are now moving apart). Hence, the simulation time is advanced by the full one second, the standard time step; the particles are moved using 1 second of simulation time, and the new state is displayed.

    You see what has just happened: one frame ran for 0.5 seconds, but the next frame runs for 1 second; maybe the third frame is displayed after 2 seconds, maybe the fourth frame is displayed after 2.8 seconds (because the TOC was 0.8 seconds then), and so on. The result is that the motion of a particle on the screen appears to speed up or slow down, even though it is moving at constant speed and was not even involved in a collision. Looking at one particle on its own, I see it suddenly speeding up or slowing down because another particle hit a wall. This is because the display tick is not uniform: the frame rate keeps changing, giving the false illusion that a particle is moving at non-constant speed while in fact its speed is constant. The motion on the screen is not smooth, since the screen is not updating at a constant rate.

    I am not able to figure out how to fix this. If I want to show two particles at the moment of a collision, I must draw the screen at varying times; drawing the screen at a fixed tick interval results in seeing the two particles before the collision and then after it, never just as they collide, which looked bad when I tried it. So, how do real games handle this issue? How do you display things so as to show collisions when they happen, yet keep the display tick constant? These two requirements seem to contradict each other.
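
    What most real-time games do is decouple simulation time from display time: the physics advances in fixed increments (your TOC sub-stepping happens inside each increment), the screen redraws at its own steady rate, and each frame draws positions interpolated between the last two completed physics states, so a collision resolved mid-step still looks smooth on screen. A minimal sketch of that loop, with WorldState, Simulate, Render and Lerp as placeholders for your own types and routines:

        const float Dt = 1f / 60f;  // fixed simulation step, in seconds
        float accumulator;
        WorldState previousState, currentState;

        void Frame(float frameSeconds)
        {
            accumulator += frameSeconds;
            while (accumulator >= Dt)
            {
                previousState = currentState;
                currentState = Simulate(currentState, Dt);  // TOC sub-steps live in here
                accumulator -= Dt;
            }
            // Draw a blend of the two bracketing states rather than raw TOC states;
            // alpha says how far display time has advanced into the unfinished step.
            float alpha = accumulator / Dt;
            Render(Lerp(previousState, currentState, alpha));
        }

    The display effectively runs a fixed fraction of a step behind the simulation, which is what lets every frame land between two known states; the collision instant itself is never drawn exactly, but at 60 steps a second the interpolated motion through it reads as smooth and constant-speed.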

    Read the article

  • Scan-Line Z-Buffering Dilemma

    - by Belgin
    I have a set of vertices in 3D space, and for each I retain the following information: its 3D coordinates (x, y, z), and a list of pointers to some of the other vertices with which it's connected by edges. Right now, I'm doing a perspective projection with the projection plane being XY and the eye placed at (0, 0, d), with d < 0. For Z-buffering, I need to find the depth of the point of a polygon (they're all planar) that corresponds to a certain pixel on the screen, so I can hide the surfaces that are not visible. My questions are the following: How do I determine which polygon a pixel belongs to, so I can use the equation of the plane containing the polygon to find the Z coordinate? And are my data structures correct, or do I need to store something else entirely for this to work? I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.
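
    For the first question: once the scan-line pass knows a pixel is covered by a given polygon, the depth comes directly from that polygon's plane equation Ax + By + Cz + D = 0, solved for z at the pixel's (x, y); the coefficients can be cached per polygon from the cross product of two edges. A small sketch:

        // Depth of the plane Ax + By + Cz + D = 0 at point (x, y); requires c != 0,
        // i.e. the polygon is not viewed exactly edge-on.
        static float DepthAt(float a, float b, float c, float d, float x, float y)
        {
            return -(a * x + b * y + d) / c;
        }

    That also hints at the data-structure question: per-vertex edge pointers alone don't record which closed faces exist, so scan-line Z-buffering normally keeps an explicit polygon record (vertex indices plus cached plane coefficients) alongside the vertex list.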

    Read the article

  • How are larger games organized?

    - by Matthew G.
    I'm using Java, but the language I'm using here is probably irrelevant. I'd like to create an economy based on an ancient civilization, and I'm not sure how to design it. If I were working on a smaller game, like a copy of Space Invaders, I'd have no problem structuring it like this:

    Game
    - Main Control Class
    -- Graphics Class
    -- Player Class
    -- Enemy Class

    I'd pass the graphics class to both the player and enemy classes so they could call graphics functions. I don't understand how I'd do this for larger projects. Do I create a country class that contains a bunch of towns? Do the towns contain a lot of building classes, most of which contain classes of people? Do I make a pathfinding class that the player can access to get around? How exactly do I structure this and pass all these references around? Thanks.

    Read the article

  • Avatar creation / dressing feature

    - by milesmeow
    What is the effort required to use a game engine such as Unreal or Unity and create an avatar customization feature, complete with clothes? The user should be able to customize the body's features, and the clothes then need to fit onto the customized body. What is needed? Can you create one set of 3D models for clothes and somehow have the clothes adapt programmatically to the body shape? I.e., the same shirt model would fit a skinny person as well as someone with a big beer belly. How difficult is this, and what are the steps needed to implement this avatar creation/dressing feature? I'm basically talking about something like in Rock Band 3.

    Read the article

  • Dynamic Jump spot

    - by Pasquale Sada
    I have an initial velocity V(Vx, Vy, Vz) and a spot S(Sx, Sy, Sz) where the character stands still. What I'm trying to achieve is a jump to a spot E(Ex, Ey, Ez) that you have clicked on (only a lower or higher spot, because I have a simple steering behavior in place for even terrain). There are no obstacles around. I've implemented a formula that can make him jump precisely onto a spot, but you need to declare an angle; the problem arises when the selected spot is straight above his head. It's pretty lame that the character just hangs there and can't reach a thing that is 1 cm above his head. I'll share the code I'm using:

        Vector3 dir = target - transform.position;  // get target direction
        float h = dir.y;                            // get height difference
        dir.y = 0;                                  // retain only the horizontal direction
        float dist = dir.magnitude;                 // get horizontal distance
        float a = angle * Mathf.Deg2Rad;            // convert angle to radians
        dir.y = dist * Mathf.Tan(a);                // set dir to the elevation angle
        dist += h / Mathf.Tan(a);                   // correct for small height differences

        // calculate the velocity magnitude
        float vel = Mathf.Sqrt(dist * Physics.gravity.magnitude / Mathf.Sin(2 * a));
        return vel * dir.normalized;
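
    For the degenerate case described above (the target directly overhead, where no launch angle makes sense), one option, offered as a sketch rather than a drop-in fix, is to branch before the angle formula and use the purely vertical kinematic solution v = sqrt(2gh):

        Vector3 dir = target - transform.position;
        float h = dir.y;
        dir.y = 0;
        if (dir.magnitude < 0.01f && h > 0)  // target is effectively straight overhead
        {
            // From v^2 = 2gh: the exact launch speed whose apex is at height h.
            float vUp = Mathf.Sqrt(2f * Physics.gravity.magnitude * h);
            return vUp * Vector3.up;
        }
        // ...otherwise fall through to the angle-based formula above.

    The 0.01 threshold is a placeholder; anything below the steering behavior's tolerance works, since a tiny horizontal offset can be absorbed after landing.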

    Read the article

  • What tools should I consider if my aim is to make a game available to as many platforms as possible?

    - by Kensai
    We're planning to develop a 2D, grid-based puzzle game, and although it's still very early in the planning stages, we'd like to make our decisions well from the beginning. Our strategy will be to make the game available on as many platforms as possible, for example PCs (Windows, Mac and/or Linux), mobile phones (iPhone and/or Android-based phones), and game consoles (XBLA and/or PSN). PC will have an emphasis, but I believe that's the most flexible platform, so that shouldn't be a problem. So, what programming language, game engine, frameworks, and all-around tools would be best suited to our goal? P.S.: I'm betting a single set of tools won't cover ALL of them, and that there will still be some kind of "translating" effort for some platforms, but we'd like to know which are the most far-reaching.

    Read the article

  • Octrees as data structure

    - by Christian Frantz
    In my cube world, I want to use octrees to represent my chunks of 20x20x20 cubes for frustum and occlusion culling. I understand how octrees work; I just don't know if I'm going about this the right way. My base octree class is taken from here: http://www.xnawiki.com/index.php/Octree What I'm wondering is how to apply occlusion culling using this class. Does it make sense to have one octree for each cube chunk, or should I make the octree bigger? Since I'm using cubes, each cube should fit into a node without overlap, so that won't be an issue.

    Read the article

  • How can I create a sprite sheet from a 3D model (3D Studio Max)?

    - by OopsUser
    I built a simple 3D model of a car, with a simple animation in which its wheels turn. Now I want to create a sprite sheet, and the only way I know how is to manually render 20 frames from the front, combine them into a strip manually, then rotate the car by 10 degrees, render 20 frames of the animation again, combine them into a strip, and so on. Is there a way to do it automatically, without rotating the scene manually, rendering, and combining? It's a lot of work and takes more time than the modelling itself... Thanks

    Read the article

  • Efficient path-finding in free space

    - by DeadMG
    I've got a game situated in space, and I'd like to issue movement orders, which requires pathfinding. Now, it's my understanding that A* and such mostly apply to trees, and not to empty space which has no pathfinding nodes. I have some obstacles, which are currently expressed as fixed AABBs; that is, there is no unbounded "terrain" obstacle. In addition, I expect most obstacles to be reasonably approximable as cubes or spheres. So I've been thinking of applying a much simpler pathfinding algorithm: simply cast a ray from the current position to the target position, and then get a list of obstacles using spatial partitioning relatively quickly. What I'm not so sure about is how to make the ordered unit manoeuvre around the obstacles. What I've been thinking so far is that I will simply use potential fields; that is, all units will feel a strong repulsive force away from each other and a moderate force towards the desired point. This also has the advantage that to issue group orders, I can simply apply the moderate force towards another entity. But this obviously won't achieve the optimal solution. Will potential fields achieve a reasonable approximation given my parameters, or do I need another solution?
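
    A minimal sketch of the potential-field update described above, assuming inverse-square repulsion inside a cutoff radius and a constant-strength pull toward the goal (all names and constants are placeholders to tune):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class Steering
        {
            const float AttractStrength = 1f;   // moderate pull toward the ordered point
            const float RepelStrength = 25f;    // strong short-range push between units
            const float RepelRadius = 10f;

            public static Vector3 PotentialForce(Vector3 self, Vector3 goal, IEnumerable<Vector3> others)
            {
                Vector3 force = Vector3.Normalize(goal - self) * AttractStrength;
                foreach (Vector3 other in others)
                {
                    Vector3 away = self - other;
                    float dist = away.Length();
                    if (dist > 0f && dist < RepelRadius)
                        force += away * (RepelStrength / (dist * dist * dist));  // unit direction x 1/d^2
                }
                return force;
            }
        }

    Potential fields do get trapped in local minima between nearby obstacles, so a common hybrid is to use the ray test to pick a few waypoints around the offending AABBs and let the field handle only unit-to-unit separation along the way; that combination tends to give a reasonable approximation without full graph search.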

    Read the article

  • 3D Model Scaling With Camera

    - by spasarto
    I have a very simple 3D maze program that uses a first-person camera to navigate the maze. I'm trying to scale the blocks that make up the maze walls and floor so the corridors seem roomier to the camera. Every time I scale the model, the camera seems to scale with it, and the corridors always stay the same width. I've tried applying the scale to the model in the content pipeline (setting the Scale property of the model in the Properties window in VS). I've also tried applying the scale using Matrix.CreateScale(float) with the Scale-Rotate-Translate order, with the same result. If I leave the camera speed the same, the camera moves more slowly, so I know it's traversing a larger distance, but the world doesn't look larger; the camera just seems slower. I'm not sure which part of the code to include, since I don't know whether the issue is with my model, my camera, or something else. Any hints at what I'm doing wrong?

    Camera:

        Projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.PiOver4,
            _device.Viewport.AspectRatio, 1.0f, 1000.0f );
        Matrix camRotMatrix = Matrix.CreateRotationX( _cameraPitch ) * Matrix.CreateRotationY( _cameraYaw );
        Vector3 transCamRef = Vector3.Transform( _cameraForward, camRotMatrix );
        _cameraTarget = transCamRef + CameraPosition;
        Vector3 camRotUpVector = Vector3.Transform( _cameraUpVector, camRotMatrix );
        View = Matrix.CreateLookAt( CameraPosition, _cameraTarget, camRotUpVector );

    Model:

        World = Matrix.CreateTranslation( Position );

    Read the article

  • AndEngine GLES2 - getting a black screen on emulator 4.1

    - by dizworld.com
    I'm new to AndEngine. I created the following code:

        public class MainActivity extends BaseGameActivity {
            static final int CAMERA_WIDTH = 800;
            static final int CAMERA_HEIGHT = 480;

            public Font mFont;
            public Camera mCamera;
            // A reference to the current scene
            public Scene mCurrentScene;
            public static BaseActivity instance;

            public EngineOptions onCreateEngineOptions() {
                instance = this;
                mCamera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
                return new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR,
                        new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), mCamera);
            }

            @Override
            public void onCreateResources(OnCreateResourcesCallback arg0) throws Exception {
                mFont = FontFactory.create(this.getFontManager(), this.getTextureManager(), 256, 256,
                        Typeface.create(Typeface.DEFAULT, Typeface.BOLD), 32);
                mFont.load();
            }

            @Override
            public void onCreateScene(OnCreateSceneCallback arg0) throws Exception {
                mEngine.registerUpdateHandler(new FPSLogger());
                mCurrentScene = new Scene();
                Log.v("Scene", "enter");
                mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f));
                // return mCurrentScene;
                // In GLES2 there is nothing to return; the scene has to be handed to the
                // engine through the callback, otherwise the screen stays black:
                arg0.onCreateSceneFinished(mCurrentScene);
            }

            @Override
            public void onPopulateScene(Scene arg0, OnPopulateSceneCallback arg1) throws Exception {
                // The engine also waits for this signal before it starts rendering.
                arg1.onPopulateSceneFinished();
            }
        }

    The code I found on other sites returns the scene, but in AndEngine GLES2 the onCreateScene() method has no scene to return... so my first run is black. Any suggestions? :)

    Read the article

  • I have an "amoeba" game mechanic. Any idea on how to implement it?

    - by Jason
    Outside of a Tetris clone, a crappy 2D top-down shooter, and some messing around with stuff like Unity and Flixel, I realize that I have yet to complete a single polished, bells-and-whistles game. I want to change this, and I have an idea for my next project. The idea is that you're an amoeba. Amoebas have these eye-like cores (or something like that, I don't know biology), and you have two of 'em. You control one with WASD and the other with IJKL. There has to be a constant radius of stuff around each of the cores, and the area of the amoeba has to stay constant. So if you move a core in one direction, you increase the amoeba's area, but that increase is compensated by a decrease somewhere else. I'd also like to implement an invagination mechanic: you absorb things by engulfing them, like a boss. Maybe even an extra core, or a needle that pops you and causes all your inner stuff to start gushing out. But here's the problem: I don't know how to make this, and I would like some ideas on how to implement it. Should I explore physics libraries like Box2D? Or maybe something involving fluid physics? Any help would be much appreciated. P.S. Feel free to steal this idea. I have plenty of ideas. If you do, please tell me how you made it so I can try it myself.

    Read the article

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning, I'm working through creating the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture. I keep getting either a completely black or a completely white texture. Here are my questions:

    1. I assume the position that's automatically sent to the vertex shader from Ogre is in object space?
    2. The GPU interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation, or do I need to move that calculation to the fragment shader?
    3. Is there a way to debug shaders? I have no errors, but I'm not sure I'm getting my parameters passed into the shaders correctly.

    Here's my shader code:

        void DepthVertexShader( float4 position : POSITION,
                                uniform float4x4 worldViewProjMatrix,
                                uniform float3 eyePosition,
                                out float4 outPosition : POSITION,
                                out float Depth )
        {
            // position is in object space
            // outPosition is in camera space
            outPosition = mul( worldViewProjMatrix, position );

            // calculate distance from camera to vertex
            // NB: eyePosition is presumably supplied in world space, while 'position' is
            // still in object space here; transforming 'position' by the world matrix
            // before the subtraction would make the two operands consistent.
            Depth = length( eyePosition - position );
        }

        void DepthFragmentShader( float Depth : TEXCOORD0,
                                  uniform float fNear,
                                  uniform float fFar,
                                  out float4 outColor : COLOR )
        {
            // clamp output using clip planes
            float fColor = 1.0 - smoothstep( fNear, fFar, Depth );
            outColor = float4( fColor, fColor, fColor, 1.0 );
        }

    fNear is the near clip plane for the scene, and fFar is the far clip plane for the scene.

    Read the article
