Search Results

Search found 288 results on 12 pages for 'bounding spheres'.

Page 4/12

  • Sphere-Sphere intersection and Circle-Sphere intersection

    - by cagirici
    I have code for circle-circle intersection, but I need to extend it to 3D. How do I calculate: the radius and center of the intersection circle of two spheres, and the points of intersection of a sphere and a circle? Given two spheres (sc0, sr0) and (sc1, sr1), I need to calculate the circle of intersection, with center ci and radius ri. Moreover, given a sphere (sc0, sr0) and a circle (cc0, cr0), I need to calculate the two intersection points (pi0, pi1). I have checked this link and this link, but I could not understand the logic behind them or how to code them. I tried the ProGAL library for sphere-sphere-sphere intersection, but the resulting coordinates are rounded, and I need precise results.
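
    For reference, the sphere-sphere case reduces to finding the plane of the intersection circle, and the sphere-circle case reduces to slicing the sphere with the circle's plane and then intersecting two coplanar circles. A minimal sketch of that math (C# with System.Numerics; the method names and out-parameter layout are mine, not from any library):

        using System;
        using System.Numerics;

        static class Intersections
        {
            // Circle of intersection of two spheres; false if they don't intersect.
            public static bool SphereSphere(Vector3 sc0, float sr0, Vector3 sc1, float sr1,
                                            out Vector3 ci, out float ri, out Vector3 planeNormal)
            {
                ci = Vector3.Zero; ri = 0f; planeNormal = Vector3.Zero;
                float d = Vector3.Distance(sc0, sc1);
                if (d < 1e-6f || d > sr0 + sr1 || d < MathF.Abs(sr0 - sr1))
                    return false;                      // separate, contained, or concentric

                // Signed distance from sc0 to the plane of the intersection circle.
                float h = (d * d + sr0 * sr0 - sr1 * sr1) / (2f * d);
                planeNormal = (sc1 - sc0) / d;         // the circle lies in the plane with this normal
                ci = sc0 + planeNormal * h;
                ri = MathF.Sqrt(MathF.Max(0f, sr0 * sr0 - h * h));
                return true;
            }

            // Points where a sphere meets a circle (circle given with its plane normal cn).
            public static bool SphereCircle(Vector3 sc, float sr, Vector3 cc, float cr, Vector3 cn,
                                            out Vector3 pi0, out Vector3 pi1)
            {
                pi0 = pi1 = Vector3.Zero;
                // Slice the sphere with the circle's plane: a disc of radius re centered at p.
                float t = Vector3.Dot(sc - cc, cn);
                if (MathF.Abs(t) > sr) return false;   // plane misses the sphere
                Vector3 p = sc - cn * t;
                float re = MathF.Sqrt(sr * sr - t * t);

                // Now intersect two coplanar circles: (cc, cr) and (p, re).
                float d = Vector3.Distance(cc, p);
                if (d < 1e-6f || d > cr + re || d < MathF.Abs(cr - re)) return false;
                float a = (d * d + cr * cr - re * re) / (2f * d);
                float h = MathF.Sqrt(MathF.Max(0f, cr * cr - a * a));
                Vector3 toP = (p - cc) / d;
                Vector3 u = Vector3.Normalize(Vector3.Cross(cn, toP)); // in-plane perpendicular
                Vector3 m = cc + toP * a;
                pi0 = m + u * h;
                pi1 = m - u * h;
                return true;
            }
        }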


  • [JOGL] My program is too slow; how can I profile it with Eclipse?

    - by nkint
    Hi guys, my simple OpenGL program is really too slow and not fluid. I'm rendering 30 spheres with simple illumination and simple materials. The only hard(?) computation I do is collision detection between the mouse ray and the spheres (that works OK, and I only do it in mouseMoved). I have no threads, only an Animator to move the spheres. How can I profile my JOGL project? Or maybe (most probably...) I am using some OpenGL instruction I don't understand that makes the rendering particularly accurate (or back-face rendering that I don't need, or whatever; I don't know exactly, I've just entered the OpenGL world).


  • Get coordinates of beads covering the surface of a protein

    - by Hefeweizen
    Given a protein structure from the PDB, I would like to generate NS spheres of radius RS which completely cover the protein surface. Given RS, NS is the maximum number of spheres such that they do not overlap. I would need the coordinates of the center of each sphere. Does anybody know if this has been implemented in some method or program? Or how to do it with scripting? Thanks
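
    For the scripting route, one hedged sketch: generate candidate bead centers just outside the atoms (atom radius + RS along the outward direction), then greedily keep candidates whose centers stay at least 2·RS apart. Greedy selection yields a maximal set, not a provably maximum one, and every name below is hypothetical rather than from any PDB toolkit:

        using System.Collections.Generic;
        using System.Numerics;

        static class SurfaceBeads
        {
            // Greedily keep candidate centers so beads of radius rs never overlap,
            // i.e. centers stay at least 2*rs apart.
            public static List<Vector3> PlaceBeads(IEnumerable<Vector3> candidates, float rs)
            {
                var accepted = new List<Vector3>();
                float minDistSq = 4f * rs * rs;        // (2*rs)^2
                foreach (var c in candidates)
                {
                    bool overlaps = false;
                    foreach (var a in accepted)
                        if (Vector3.DistanceSquared(c, a) < minDistSq) { overlaps = true; break; }
                    if (!overlaps)
                        accepted.Add(c);
                }
                return accepted;
            }
        }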


  • How to structure a class to support an imported 3D model?

    - by brainydexter
    Hello, I've written a C++ library that reads in a 3D model file (COLLADA DAE). Until now, I would output a list of triangles and handle each one at the rendering stage. But now I need to attach some bounding sphere information to the imported model, and I need some advice on how I should organize this in code. Here are some specs of the 3D file format:
    - the 3D model is represented as a tree of nodes
    - each node can contain other nodes, geometry information, transformations, etc.
    My requirements:
    - a bounding sphere associated with each node, thereby yielding a bounding sphere hierarchy for the model itself
    - the actual vertex information
    What would be the recommended way to deal with this situation? Thanks
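
    A minimal sketch of one possible node layout, shown in C# with XNA types for brevity (the library in question is C++, but the shape translates directly): each node owns its children, a local transform, optional geometry, and a bounding sphere computed bottom-up by merging transformed child spheres:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        class ModelNode
        {
            public List<ModelNode> Children = new List<ModelNode>();
            public Matrix LocalTransform = Matrix.Identity;
            public List<Vector3> Vertices = new List<Vector3>(); // empty for pure grouping nodes
            public BoundingSphere Bounds;                        // in this node's local space

            // Compute the sphere hierarchy after import: own geometry first,
            // then merge in each child's sphere, transformed into this node's space.
            public void ComputeBounds()
            {
                bool hasBounds = Vertices.Count > 0;
                Bounds = hasBounds ? BoundingSphere.CreateFromPoints(Vertices)
                                   : new BoundingSphere();

                foreach (var child in Children)
                {
                    child.ComputeBounds();
                    var childSphere = child.Bounds.Transform(child.LocalTransform);
                    Bounds = hasBounds ? BoundingSphere.CreateMerged(Bounds, childSphere)
                                       : childSphere;
                    hasBounds = true;
                }
            }
        }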


  • Opinions on platform game actor/background collision resolving

    - by Kawa
    Imagine the following scenario: I have a level whose physical structure is built up from a collection of bounding rectangles, combined with prerendered bitmap backgrounds. My actors, including the player character, all have their own bounding rectangle. If an actor manages to get stuck inside a level block, partially or otherwise, it needs to be shifted out again so that it is flush against the block. The untested technique I thought up during a bio break is as follows: if an actor's box is found to intersect a level box, determine where the center points of each rect are. If the actor's center is higher than the level box's, move the actor so that the bottom of the actor's rect is flush with the top of the level's rect, and vice versa if it's lower. Then do a similar thing horizontally. Opinions on that? Suggestions for better methods? (Actually, the bounding rects are XNA BoundingBoxes with their Z spanning from -1 to 1, but it's still 2D gameplay.)
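
    One way to restate the centerpoint technique so the vertical/horizontal choice falls out automatically: push out along the axis of least penetration. A hedged sketch with a hypothetical float rect (screen-space, Top < Bottom):

        struct Rect { public float Left, Right, Top, Bottom; }

        static class Resolver
        {
            // Returns the offset to add to the actor's position to separate it
            // from the block: compute the overlap on each axis, push out along
            // the smaller one, toward the actor's side of the block.
            public static (float dx, float dy) PushOut(Rect actor, Rect block)
            {
                float overlapX = System.Math.Min(actor.Right, block.Right) - System.Math.Max(actor.Left, block.Left);
                float overlapY = System.Math.Min(actor.Bottom, block.Bottom) - System.Math.Max(actor.Top, block.Top);
                if (overlapX <= 0 || overlapY <= 0) return (0, 0);  // not actually intersecting

                if (overlapX < overlapY)   // shallower on X: resolve horizontally
                    return (actor.Left + actor.Right < block.Left + block.Right ? -overlapX : overlapX, 0);
                else                       // shallower on Y: resolve vertically
                    return (0, actor.Top + actor.Bottom < block.Top + block.Bottom ? -overlapY : overlapY);
            }
        }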


  • Determine coordinates of rotated rectangle

    - by MathieuK
    I'm creating a utility application that should detect and report the coordinates of the corners of a transparent rectangle (alpha = 0) within an image. So far, I've set up a system with JavaScript + Canvas that displays the image and starts a floodfill-like operation when I click inside the transparent rectangle in the image. It correctly determines the bounding box of the floodfill operation and as such can provide me the correct coordinates. Here's my implementation so far: http://www.scriptorama.nl/image/ (works in recent Firefox / Safari). However, the bounding-box approach breaks down when the transparent rectangle is rotated (CW or CCW), as the resulting bounding box no longer properly represents the proper width and height. I've tried to come up with a few alternatives for detecting the corners, but have not been able to think up a proper solution. So, does anyone have any suggestions on how I might approach this so I can properly detect the coordinates of the 4 corners of the rotated rectangle?
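
    One property worth exploiting: for a rectangle rotated away from axis-aligned, each of its four corners is the extreme pixel in one axis direction (min x, max x, min y, max y) of the flood-filled set. A hedged C# sketch with a hypothetical Pixel type (for a perfectly axis-aligned rectangle this degenerates, but there the plain bounding box already works):

        using System.Collections.Generic;

        struct Pixel { public int X, Y; public Pixel(int x, int y) { X = x; Y = y; } }

        static class CornerFinder
        {
            // Corners of a rotated rectangle = extreme pixels along each axis.
            public static Pixel[] FindCorners(IEnumerable<Pixel> filled)
            {
                Pixel minX = default, maxX = default, minY = default, maxY = default;
                bool first = true;
                foreach (var p in filled)
                {
                    if (first) { minX = maxX = minY = maxY = p; first = false; continue; }
                    if (p.X < minX.X) minX = p;
                    if (p.X > maxX.X) maxX = p;
                    if (p.Y < minY.Y) minY = p;
                    if (p.Y > maxY.Y) maxY = p;
                }
                return new[] { minX, minY, maxX, maxY };  // roughly clockwise order
            }
        }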


  • Calculate an AABB for a bone-animated model

    - by Byte56
    I have a model whose initial bounding box is calculated by finding the maximum and minimum on the x, y and z axes, producing a correct result. The vertices are then stored in a VBO and only altered with matrices for rotation and bone animation. Currently the bounds are not updated when the model is altered, so the animated and rotated model keeps the same bounds as before, which no longer accurately represent the rotated/animated model. So my question is: how can I calculate the bounding box using the armature matrices and rotation/translation matrices for each model? Keep in mind that the modified vertex data is not available, because those calculations are performed on the GPU in the shader. The end result I want is an accurate AABB that represents the animated model for picking/basic collision checks.
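
    A common CPU-side compromise, given that the skinned vertices exist only on the GPU: precompute one local bounding sphere per bone from the vertices that bone influences, then each frame transform those spheres by the same bone matrices you upload to the skinning shader and merge the results into an AABB. A sketch with XNA types (the names are mine):

        using Microsoft.Xna.Framework;

        static class AnimatedBounds
        {
            // boneSpheres[i] = bind-pose sphere around the vertices weighted to bone i
            // (precomputed once); boneMatrices[i] = the matrices sent to the skinning
            // shader, already multiplied by the model's world matrix.
            public static BoundingBox Compute(BoundingSphere[] boneSpheres, Matrix[] boneMatrices)
            {
                BoundingBox box = new BoundingBox();
                bool first = true;
                for (int i = 0; i < boneSpheres.Length; i++)
                {
                    BoundingSphere s = boneSpheres[i].Transform(boneMatrices[i]);
                    BoundingBox b = BoundingBox.CreateFromSphere(s);
                    box = first ? b : BoundingBox.CreateMerged(box, b);
                    first = false;
                }
                return box;  // conservative AABB that follows the animation
            }
        }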


  • Collisions between moving ball and polygons

    - by miguelSantirso
    I know this is a very typical problem and that there are a lot of similar questions, but I have been looking for a while and I have not found anything that fits what I want. I am developing a 2D game in which I need to perform collisions between a ball and simple polygons. The polygons are defined as arrays of vertices. I have implemented collisions with the bounding boxes of the polygons (that was easy), and I need to refine the test in the cases where the ball collides with the bounding box. The ball can move quite fast and the polygons are not too big, so I need to perform continuous collision detection. I am looking for a method that lets me detect whether the ball collides with a polygon and, at the same time, calculate the new direction for the ball after bouncing off the polygon. (I am using XNA, in case that helps.)
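
    For reference, the per-edge core of such a test usually needs just two pieces: the closest point on a segment (which yields the contact normal) and a reflection of the velocity about that normal. A hedged sketch, run against every edge of each nearby polygon (discrete rather than truly continuous; for very fast balls you would substep the motion or solve the swept quadratic along it):

        using Microsoft.Xna.Framework;

        static class BallPolygon
        {
            // Closest point to p on segment ab (assumes a != b).
            static Vector2 ClosestOnSegment(Vector2 a, Vector2 b, Vector2 p)
            {
                Vector2 ab = b - a;
                float t = Vector2.Dot(p - a, ab) / ab.LengthSquared();
                t = MathHelper.Clamp(t, 0f, 1f);
                return a + ab * t;
            }

            // Tests the ball against one polygon edge; on hit, pushes the ball
            // out and reflects its velocity (restitution = 1 keeps full bounce).
            public static bool Collide(ref Vector2 center, ref Vector2 velocity, float radius,
                                       Vector2 a, Vector2 b, float restitution = 1f)
            {
                Vector2 q = ClosestOnSegment(a, b, center);
                Vector2 d = center - q;
                float dist = d.Length();
                if (dist >= radius || dist < 1e-6f) return false;

                Vector2 n = d / dist;                        // contact normal
                center += n * (radius - dist);               // resolve penetration
                float vn = Vector2.Dot(velocity, n);
                if (vn < 0f)                                 // only bounce if moving inward
                    velocity -= (1f + restitution) * vn * n; // v' = v - (1+e)(v·n)n
                return true;
            }
        }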


  • Linear search vs Octree (Frustum cull)

    - by Dave
    I am wondering whether I should look into implementing an octree of some kind. I have a very simple game which consists of a 3D plane for the floor. There are multiple objects scattered around on the ground, and each one has an AABB in world space. Currently I just loop through the list of all these objects and check if each one's bounding box intersects the frustum. It works great, but I am wondering whether an octree would be a good investment. I only have at most 512 of these objects on the map, and they all have bounding boxes. I am not sure an octree would make things faster, since I have so few objects in the scene.


  • QuadTree: store only points, or regions?

    - by alekop
    I am developing a quadtree to keep track of moving objects for collision detection. Each object has a bounding shape; let's say they are all circles. (It's a 2D top-down game.) I am unsure whether to store only the position of each object, or the whole bounding shape. If working with points, insertion and subdivision are easy, because objects will never span multiple nodes. On the other hand, a proximity query for an object may miss collisions, because it won't take the objects' dimensions into account; how do you calculate the query region when you only have points? If working with regions, how do you handle an object that spans multiple nodes? Should it be inserted into the nearest parent node that completely contains it, even if this exceeds the node's capacity? Thanks.
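
    If points are stored, the standard fix for missed collisions is to inflate every query region by the querying object's radius plus the largest radius stored in the tree; correctness is preserved at the cost of slightly wider walks. (For the region variant, the usual answer is to keep an object in the smallest node that fully contains it and let such objects overflow the capacity, since capacity is a splitting heuristic, not an invariant.) A small sketch of the query-side bookkeeping, XNA types, names mine:

        using Microsoft.Xna.Framework;

        // Point-based quadtree lookups stay correct if every query rectangle is
        // inflated by (query object's radius + the largest radius in the tree).
        class PointQuadtreeQuery
        {
            public float MaxStoredRadius;  // update on insert: Max(MaxStoredRadius, r)

            public Rectangle QueryRegion(Vector2 center, float radius)
            {
                float inflate = radius + MaxStoredRadius;
                return new Rectangle(
                    (int)(center.X - inflate), (int)(center.Y - inflate),
                    (int)(2 * inflate), (int)(2 * inflate));
            }
        }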


  • Project collision shapes to plane for 2.5D collision detection

    - by Jkh2
    I am working on a top-down 2.5D game. In the game, anything that overlaps on the screen should be 'colliding' with each other, regardless of whether they are on the same plane in the 3D world. This is illustrated below from a sideways view: the orange and green circles are spheres floating in the 3D world. They are projected onto a plane parallel to the viewport plane (y = 0 in the image), and if they overlap there is a collision event between them. These spheres are attached to other meshes to act as their bounding spheres for collisions. The way I plan to implement this at the moment is the following:
    1. Get the 3D world position at the center of the sphere.
    2. Use Camera.WorldToViewportPoint to project the point onto the viewport plane.
    3. Move a Sphere Collider with the radius of the sphere to that point.
    4. Test for collisions using Unity colliders.
    My question is how to extend this to work for rotated cuboids. For instance, if I have two rotated cuboids and follow the logic above, it would not work as intended: the cuboids may not collide, yet they could still intersect on the view plane. An example is below. Is there a way to project a cuboid so that it is aligned with the plane? Would it be a valid cuboid for all rotations if I did this?
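
    For the cuboid case, one hedged approach: project all eight corners with Camera.WorldToViewportPoint, take their 2D convex hull (the silhouette of a box is convex), and run SAT on the two hulls; that sidesteps trying to build a plane-aligned cuboid at all. A sketch (only the projection call is Unity API; the rest is plain geometry):

        using System.Collections.Generic;
        using System.Linq;
        using UnityEngine;

        static class SilhouetteOverlap
        {
            public static Vector2[] ProjectCorners(Camera cam, Vector3[] corners8)
            {
                return corners8.Select(c => (Vector2)cam.WorldToViewportPoint(c)).ToArray();
            }

            // Monotone-chain convex hull, counter-clockwise.
            public static List<Vector2> Hull(Vector2[] pts)
            {
                var p = pts.Distinct().OrderBy(v => v.x).ThenBy(v => v.y).ToList();
                if (p.Count < 3) return p;
                var h = new List<Vector2>();
                for (int pass = 0; pass < 2; pass++, p.Reverse())  // lower hull, then upper
                {
                    int start = h.Count;
                    foreach (var v in p)
                    {
                        while (h.Count - start >= 2 && Cross(h[h.Count - 2], h[h.Count - 1], v) <= 0)
                            h.RemoveAt(h.Count - 1);
                        h.Add(v);
                    }
                    h.RemoveAt(h.Count - 1);  // last point repeats as the next pass's first
                }
                return h;
            }

            static float Cross(Vector2 o, Vector2 a, Vector2 b)
                => (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);

            // SAT: two convex polygons are separated iff some edge normal separates them.
            public static bool Overlap(List<Vector2> A, List<Vector2> B)
            {
                foreach (var poly in new[] { A, B })
                    for (int i = 0; i < poly.Count; i++)
                    {
                        Vector2 e = poly[(i + 1) % poly.Count] - poly[i];
                        Vector2 n = new Vector2(-e.y, e.x);
                        if (A.Max(v => Vector2.Dot(v, n)) < B.Min(v => Vector2.Dot(v, n)) ||
                            B.Max(v => Vector2.Dot(v, n)) < A.Min(v => Vector2.Dot(v, n)))
                            return false;
                    }
                return true;
            }
        }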


  • How AlphaBlend BlendState works in XNA 4 when accumulating light into a RenderTarget?

    - by cubrman
    I am using a deferred rendering engine from Catalin Zima's tutorial. His lighting shader returns the color of the light in the RGB channels and the specular component in the alpha channel. Here is how light gets accumulated:

        Game.GraphicsDevice.SetRenderTarget(LightRT);
        Game.GraphicsDevice.Clear(Color.Transparent);
        Game.GraphicsDevice.BlendState = BlendState.AlphaBlend;
        // Continuously draw 3D spheres with the lighting pixel shader.
        ...
        Game.GraphicsDevice.BlendState = BlendState.Opaque;

    MSDN states that the AlphaBlend field of the BlendState class uses the following formula for alpha blending: (source × Blend.SourceAlpha) + (destination × Blend.InvSourceAlpha), where "source" is the color of the pixel returned by the shader and "destination" is the color of the pixel in the render target. My question is: why are my colors accumulated correctly in the light render target even when the new pixels' alphas equal zero? As a quick sanity check I ran the following code in the light's pixel shader:

        float specularLight = 0;
        float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
        if (light4.a == 0)
            light4 = 0;
        return light4;

    This prevents lighting from getting accumulated and, subsequently, drawn on the screen. But when I do the following:

        float specularLight = 0;
        float4 light4 = attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
        return light4;

    the light is accumulated and drawn exactly where it needs to be. What am I missing? According to the formula above, (source × 0) + (destination × 1) should equal destination, so the LightRT render target must not change when I draw light spheres into it! It feels like the GPU is using additive blending instead: (source × Blend.One) + (destination × Blend.One).
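
    A detail that may resolve the mystery: in XNA 4 the built-in BlendState.AlphaBlend assumes premultiplied alpha, so its color blend is (source × One) + (destination × InvSourceAlpha), not the straight-alpha formula quoted above; with source alpha = 0 that degenerates to source + destination, i.e. exactly the additive behavior observed. The MSDN formula corresponds to BlendState.NonPremultiplied. A sketch making both variants explicit:

        using Microsoft.Xna.Framework.Graphics;

        static class LightBlend
        {
            // What XNA 4's built-in AlphaBlend actually does (premultiplied alpha):
            // result = source * 1 + destination * (1 - source.a).
            // With source.a == 0 this is source + destination, i.e. additive.
            public static readonly BlendState PremultipliedAlphaBlend = new BlendState
            {
                ColorSourceBlend = Blend.One,
                ColorDestinationBlend = Blend.InverseSourceAlpha,
                AlphaSourceBlend = Blend.One,
                AlphaDestinationBlend = Blend.InverseSourceAlpha,
            };

            // The formula the MSDN quote describes (straight alpha):
            // result = source * source.a + destination * (1 - source.a).
            public static readonly BlendState StraightAlphaBlend = new BlendState
            {
                ColorSourceBlend = Blend.SourceAlpha,
                ColorDestinationBlend = Blend.InverseSourceAlpha,
                AlphaSourceBlend = Blend.SourceAlpha,
                AlphaDestinationBlend = Blend.InverseSourceAlpha,
            };
        }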


  • Binding BoundingSpheres to a world matrix in XNA

    - by NDraskovic
    I made a program that loads the locations of items in the scene from a file, like this:

        using (StreamReader sr = new StreamReader(OpenFileDialog1.FileName))
        {
            string line;
            while ((line = sr.ReadLine()) != null)
            {
                string[] row = line.Split(',');
                model = row[0];
                x = row[1];
                y = row[2];
                z = row[3];
                elements.Add(Convert.ToInt32(model));
                data.Add(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)));
                spheres.Add(new BoundingSphere(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)), 1f));
            }
        }

    I also have a list of BoundingSpheres (called spheres) that gets a new bounding sphere for each line of the file. In this program I have one item (a simple box) that moves (it has its own world matrix, called matrixBox), while the other items are static the entire time (there is a world matrix that holds those elements, called simply world). The problem is that when I move the box, the bounding spheres move with it. So how can I bind all the BoundingSpheres (except the one corresponding to the box) to the static world matrix, so that they stay in place when the box moves?
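
    One hedged reading of the problem: a BoundingSphere stores no reference to any matrix, so the spheres built from the file are already fixed in world space and need no binding at all; only the box's sphere should be recomputed, by transforming a base sphere with matrixBox each frame. A sketch:

        using Microsoft.Xna.Framework;

        // Keep the spheres loaded from the file untouched (they are already in
        // world space) and derive the moving box's sphere from its own matrix.
        class SceneBounds
        {
            public System.Collections.Generic.List<BoundingSphere> spheres; // static items
            public BoundingSphere boxBaseSphere;  // the box's sphere in its local space

            public BoundingSphere BoxSphere(Matrix matrixBox)
                => boxBaseSphere.Transform(matrixBox);  // recompute each frame

            public bool BoxHitsAnything(Matrix matrixBox)
            {
                BoundingSphere moving = BoxSphere(matrixBox);
                foreach (var s in spheres)
                    if (moving.Intersects(s)) return true;
                return false;
            }
        }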


  • Python regex for dates in some text, enclosed by two keywords

    - by Horace Ho
    This is Part 2 of this question, and thanks very much for David's answer. What if I need to extract dates which are bounded by two keywords?
    Example: text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End"
    Case 1, bounding keywords "One" and "Three"; expected result: ['09 Jun 2011', '10 Dec 2012']
    Case 2, bounding keywords "Two" and "End"; expected result: ['10 Dec 2012', '15 Jan 2015']
    Thanks!
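
    The usual two-step idea, sketched here in C# for concreteness (Python's re supports the same patterns via re.search and re.findall): lazily capture the slice between the two keywords, then pull date-shaped tokens out of that slice. The date pattern assumes the "dd Mon yyyy" shape of the examples:

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class Program
        {
            static string[] DatesBetween(string text, string start, string end)
            {
                // Lazy (.*?) stops at the first occurrence of the end keyword.
                var slice = Regex.Match(text,
                    Regex.Escape(start) + @"(.*?)" + Regex.Escape(end)).Groups[1].Value;
                return Regex.Matches(slice, @"\d{1,2} [A-Z][a-z]{2} \d{4}")
                            .Cast<Match>().Select(m => m.Value).ToArray();
            }

            static void Main()
            {
                string text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End";
                Console.WriteLine(string.Join(", ", DatesBetween(text, "One", "Three")));
                // 09 Jun 2011, 10 Dec 2012
                Console.WriteLine(string.Join(", ", DatesBetween(text, "Two", "End")));
                // 10 Dec 2012, 15 Jan 2015
            }
        }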


  • Which is faster: creating a detailed mesh before execution or tessellating?

    - by Nick Udell
    For simplicity of the problem, let's consider spheres. Say I have a sphere, and before execution I know the radius, the position and the triangle count. Let's also say the triangle count is sufficiently large (e.g. ~50k triangles). Would it generally be faster to create this sphere mesh beforehand and stream all 50k triangles to the graphics card, or would it be faster to send a single point (representing the centre of the sphere) and use tessellation and geometry shaders to build the sphere on the GPU? Would it still be faster if I had 100 of these spheres in different positions? Can I use hull/geometry shaders to create something which I can then combine with instancing?


  • How to detect circles accurately

    - by user1767798
    Is there any way to accurately detect circles in OpenCV? I was using the Hough transform, which gives me good results most of the time, but shadows of the object and its surroundings, lighting, etc. give bad results, so I am looking for options other than Hough circles; accurate detection is very important for my project. My basic approach so far is to find some spheres in an image taken in real time. I am using HoughCircles to find the spheres and base later calculations on the radius I get from that. If the background is plain and nothing else is in the frame, the spheres are detected without problem; however, if I take the image in my room, where the background contains other objects, it is often difficult to detect them. So I am looking for some other approach.


  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently I've been trying to make sense of implementing a chunked level of detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.
    1: I can't understand at which point down the chunked LOD pipeline the mesh gets split into chunks. Is this during the initial mesh generation, or is there a separate algorithm which does this?
    2: I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level?
    3a: How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case, would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this? (raycast maybe?)
    3b: Do the chunks calculate the camera distance themselves?
    4: Does each chunk have the same "resolution"? For example, at the top level the mesh will be 32x32; will each subdivided node also be 32x32?
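
    A tentative sketch of how the selection step usually looks (after Ulrich's chunked LOD scheme): the split into chunks happens in a separate preprocessing pass that recursively quarters the terrain; every chunk keeps the same grid resolution (question 4: yes, e.g. always 32x32, so deeper levels are denser per world unit); and the camera distance is an ordinary point-to-AABB distance computed while walking the tree, with no colliders or raycasts involved (questions 3a/3b):

        using Microsoft.Xna.Framework;

        // Each frame, walk the tree and draw a chunk when it is "good enough"
        // for its distance to the camera; otherwise descend into its children.
        class Chunk
        {
            public BoundingBox Bounds;      // world-space AABB of this chunk
            public Chunk[] Children;        // null at the deepest level
            public float GeometricError;    // roughly halves at each level down

            public void Select(Vector3 cameraPos, float threshold, System.Action<Chunk> draw)
            {
                // Point-to-AABB distance: clamp the camera into the box.
                Vector3 clamped = Vector3.Clamp(cameraPos, Bounds.Min, Bounds.Max);
                float distance = Vector3.Distance(cameraPos, clamped);

                // Screen-space error ~ GeometricError / distance: refine while too coarse.
                if (Children != null && GeometricError > threshold * distance)
                    foreach (var c in Children) c.Select(cameraPos, threshold, draw);
                else
                    draw(this);
            }
        }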


  • How do I handle specific tile/object collisions?

    - by Thomas William Cannady
    What do I do after the bounding box test against a tile to determine whether there is a real collision against the contents of that tile? And if there is, how should I move the object in response to that collision? I have a small object, and I test for collisions against the tiles that each of its corners is on. Here's my current code, which I run for each of those (up to) four tiles:

        // get the bounding box of the object, in world space
        objectBounds = object->bounds + object->position;
        if ((objectBounds.right >= tileBounds.left) &&
            (objectBounds.left <= tileBounds.right) &&
            (objectBounds.top >= tileBounds.bottom) &&
            (objectBounds.bottom <= tileBounds.top))
        {
            // perform a specific test to see if it's a left, top, bottom
            // or right collision. If so, check the nature of it and where
            // the object needs to be placed to respond to that collision...
            // [THIS IS THE PART THAT NEEDS WORK]
            if (lastkey == keydown[right] &&
                (objectBounds.right >= tileBounds.left) &&
                (objectBounds.right <= tileBounds.right) &&
                (objectBounds.bottom >= tileBounds.bottom) &&
                (objectBounds.bottom <= tileBounds.top))
            {
                object->position.x = tileBounds.left - objectBounds.width;
            }
            // etc.
        }
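
    One restructuring that makes the "which side did I hit" question largely disappear: move and resolve one axis at a time, so a horizontal move can only ever produce a horizontal correction. A hedged C# sketch (the solid() query and the fixed tile size are hypothetical stand-ins, not the asker's API):

        static class TileMove
        {
            const float TileSize = 32f;  // assumed tile size

            // solid(x, y, w, h) is a hypothetical "does this rect overlap any
            // solid tile?" query. Each axis is moved and snapped independently.
            public static void Move(ref float x, ref float y, float w, float h,
                                    float dx, float dy,
                                    System.Func<float, float, float, float, bool> solid)
            {
                x += dx;  // X pass
                if (solid(x, y, w, h))
                    x = dx > 0
                        ? (float)System.Math.Floor((x + w) / TileSize) * TileSize - w  // flush to tile's left edge
                        : (float)System.Math.Floor(x / TileSize + 1) * TileSize;       // flush to tile's right edge

                y += dy;  // Y pass
                if (solid(x, y, w, h))
                    y = dy > 0
                        ? (float)System.Math.Floor((y + h) / TileSize) * TileSize - h
                        : (float)System.Math.Floor(y / TileSize + 1) * TileSize;
            }
        }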


  • Frustum culling with third person camera

    - by Christian Frantz
    I have a third-person camera that contains two matrices, view and projection, and two Vector3s, camPosition and camTarget. I've read up on frustum culling and it seems easy enough for a first-person camera, but how would I implement it for a third-person camera? I need to take into account the objects I can see behind me too. How would I implement this in my camera class so it runs at the same time as my update method?

        public void CameraUpdate(Matrix objectToFollow)
        {
            camPosition = objectToFollow.Translation + (objectToFollow.Backward * backward) + (objectToFollow.Up * up);
            camTarget = objectToFollow.Translation;
            view = Matrix.CreateLookAt(camPosition, camTarget, Vector3.Up);
        }

    Can I just create another method within the class which creates a bounding sphere from some value of my camera and then culls based on that? And if so, which value do I use to create the bounding sphere? After this is implemented, I'm planning on using occlusion culling for the faces of my objects adjacent to other objects. Will using just one or the other make a difference, or will both together be better? I'm trying to keep my framerate as high as possible.
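
    Worth noting: frustum culling does not actually care whether the camera is first- or third-person. A BoundingFrustum built from view * projection contains exactly what the camera sees, and since a third-person camera sits behind the player, everything "behind me" that is on screen is still inside it; no hand-made bounding sphere is needed. A sketch:

        using Microsoft.Xna.Framework;

        // Rebuild the frustum whenever the camera moves, then test each
        // object's world-space BoundingSphere against it.
        class CullingCamera
        {
            public Matrix View, Projection;
            BoundingFrustum frustum = new BoundingFrustum(Matrix.Identity);

            public void UpdateFrustum()
                => frustum.Matrix = View * Projection;  // call right after CameraUpdate

            public bool IsVisible(BoundingSphere worldSphere)
                => frustum.Intersects(worldSphere);
        }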


  • In a 2D platform game, how to ensure the player moves smoothly over sloping ground?

    - by Kovsa
    See image: http://i41.tinypic.com/huis13.jpg I'm developing a physics engine for a 2D platform game. I'm using the separating axis theorem for collision detection. The ground surface is constructed from oriented bounding boxes, with the player as an axis-aligned bounding box. (Specifically, I'm using the algorithm from the book "Real-Time Collision Detection", which performs swept collision detection for OBBs using SAT.) I'm using a fairly small (close to zero) restitution coefficient in the collision response, to ensure that the dynamic objects don't penetrate the environment. The engine mostly works fine; it's just that I'm concerned about some edge cases that could possibly occur. For example, in the diagram, A, B and C are the ground surface. The player is heading left along B towards A. It seems to me that due to inaccuracy, the player box could be slightly below box B as it continues up and left. When it reaches A, therefore, the bottom-left corner of the player might then collide with the right side of A, which would be undesirable (as the intention is for the player to move smoothly over the top of A). It seems like a similar problem could happen when the player is on top of box C, moving left towards B: the most extreme point of B could collide with the left side of the player, instead of the player's bottom-left corner sliding up and left above B. Box2D seems to handle this problem by storing connectivity information for its edge shapes, but I'm not really sure how it uses this information to solve the problem, and after looking at the code I don't really grasp what it's doing. Any suggestions would be greatly appreciated.


  • Can anyone explain step-by-step how the as3isolib depth-sorts isometric objects?

    - by Rob Evans
    The library manages to depth-sort correctly, even when using items of non-1x1 sizes. I took a look through the code, but it's a big project to go through line by line! There are some questions about the process, such as: How are the x, y, z values of each object defined? Are they the center points of the objects, or something else? I noticed that IBounds defines the bounds of the object. If you were to visualise a cuboid of 40, 40, 90 in size, where would each of the IBounds metrics be? I would like to know how as3isolib achieves this, although I would also be happy with a generalised pseudo-code version. At present I have a system that works 90% of the time, but in cases of objects that are along the same horizontal line, the depth is calculated as the same value. The depth calculation currently works like this:
    x = the object's horizontal center point
    y = the object's vertical center point
    originX, originY = the origin point relative to the object, so if you want the origin to be the center, the values would be originX = 0.5, originY = 0.5. If you wanted the origin to be the vertical center and the horizontal far right of the object, it would be originX = 1.0, originY = 0.5. The origin adjusts the position that the object is transformed from.
    AABB_width = the bounding box width
    AABB_height = the bounding box height
    depth = x + (AABB_width * originX) + y + (AABB_height * originY) - z
    This generates the same depth for all objects along the same horizontal x.
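
    I can't speak for as3isolib's internals, but the widely used generalisation, offered here as a hedged sketch rather than that library's actual code, is to drop the single scalar depth (which cannot order all non-1x1 arrangements) and instead topologically sort the screen-overlapping pairs with a per-pair "must draw before" test on the 3D bounds. With +x running toward the lower-right of the screen, +y toward the lower-left, and +z up:

        struct IsoBox
        {
            public float MinX, MaxX, MinY, MaxY, MinZ, MaxZ;
        }

        static class IsoDepth
        {
            // True if A must be drawn before B. Only meaningful for pairs whose
            // sprites overlap on screen; feed the resulting edges to a
            // topological sort to get the final draw order.
            public static bool DrawsBefore(IsoBox a, IsoBox b)
            {
                if (a.MaxX <= b.MinX) return true;  // A entirely behind along x
                if (a.MaxY <= b.MinY) return true;  // A entirely behind along y
                if (a.MaxZ <= b.MinZ) return true;  // A entirely below B
                return false;
            }
        }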


  • Dynamic quadtrees

    - by paul424
    Recently I set out to write a quadtree for creature culling in the OpenDungeons game. The thing is, those are moving points, and the bounding hierarchy will quickly become stale if the quadtree is not rebuilt very often. I have several variants. The first is to update the leaf position every time a creature move is requested (note that I will need collision detection anyway, so this might be necessary regardless). The second would be making the leaves large enough that a creature is sure to stay inside its bounding box (due to its speed limit). The partition of the plane in a quadtree is always fixed (modulo the hierarchical unions of some parts). For creatures close to the center of the plane, there would be no way of keeping them anywhere but inside one big leaf; besides, this breaks the invariant that each point can be put into as small an area as desired. So, on second thought, could I use several quadtrees, each with its "coordinate axes XY" shifted somewhere? Or, before I start playing with this, maybe some other space-dividing structure would suit me better; unfortunately the wiki does not compare execution times: http://en.wikipedia.org/wiki/Grid_%28spatial_index%29#See_also
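
    The two variants can be combined with the standard "loose quadtree" trick: each node keeps its strict cell for choosing children, but owns an inflated bounding region, so a moving creature is only reinserted when it exits the inflated cell (which a small move usually doesn't), and objects near the map center no longer bubble up to the root. A sketch of the update path (XNA types, names mine):

        using Microsoft.Xna.Framework;

        class LooseNode
        {
            public Rectangle Cell;         // strict cell used when choosing a child
            public Rectangle LooseBounds;  // Cell inflated, e.g. by half its size per side

            public bool StillFits(Point p) => LooseBounds.Contains(p);
        }

        class Creature
        {
            public Point Position;
            public LooseNode Node;  // the node this creature currently lives in

            public void MoveTo(Point p, System.Action<Creature> reinsert)
            {
                Position = p;
                if (!Node.StillFits(p))
                    reinsert(this);  // remove from Node, re-insert from the root
            }
        }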


  • Fast, accurate 2d collision

    - by Neophyte
    I'm working on a 2D top-down shooter and now need to go beyond my basic rectangle bounding-box collision system. I have large levels with many different sprites, all of which are different shapes and sizes. The textures for the sprites are all square PNG files with transparent backgrounds, so I also need a way to register a collision only when the player walks into the coloured part of the texture, not the transparent background. I plan to handle collision as follows:
    1. Check if any sprites are in range of the player.
    2. Do a rect bounding-box collision test.
    3. Do an accurate collision test (where I need help).
    I don't mind advanced techniques, as I want to get this right with all my requirements in mind, but I'm not sure how to approach this or which techniques or even libraries to try. I know that I will probably need to create and store some kind of shape that accurately represents each sprite minus the transparent background. I've read that per-pixel is slow, so given my large levels and number of objects I don't think that would be suitable. I've also looked at Box2D, but haven't been able to find much documentation or any examples of how to get it up and running with SFML.
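
    Before ruling per-pixel out: the usual trick is that the accurate step only ever scans the intersection rectangle produced by the bounding-box step, so the cost per colliding pair is a few hundred mask reads, not width × height per sprite. A sketch with opacity masks precomputed once from each PNG's alpha channel:

        // masks[x, y] is true where the texture's alpha > 0, built once at load.
        static class PixelCollision
        {
            public static bool Collides(bool[,] maskA, int ax, int ay,
                                        bool[,] maskB, int bx, int by)
            {
                int left   = System.Math.Max(ax, bx);
                int top    = System.Math.Max(ay, by);
                int right  = System.Math.Min(ax + maskA.GetLength(0), bx + maskB.GetLength(0));
                int bottom = System.Math.Min(ay + maskA.GetLength(1), by + maskB.GetLength(1));

                for (int y = top; y < bottom; y++)       // scan only the shared rect
                    for (int x = left; x < right; x++)
                        if (maskA[x - ax, y - ay] && maskB[x - bx, y - by])
                            return true;
                return false;
            }
        }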


  • Unity3d Gravity script issues

    - by Joseph Le Brech
    I'm trying this script out http://wiki.unity3d.com/index.php/Gravity and I'm having some issues with it (it seemed to work when I tried it with an old version of Unity). The first issue is collision: the objects (in my case spheres) will sink into each other rather than just touch. The second is that when the objects collide, one of the objects continues on its trajectory. I'm thinking of rewriting the script from scratch, unless someone can explain what's wrong with the script I've got.
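
    For comparison while rewriting, a minimal mutual-gravity behaviour fits in one small component. Interpenetration of this kind often comes from applying forces in Update instead of FixedUpdate, or from the attraction overpowering the contact solver at small distances (clamping the minimum squared distance helps). A hedged sketch, not the wiki script itself:

        using UnityEngine;

        // Every Attractor pulls every other one. Forces go through the physics
        // engine in FixedUpdate, so sphere colliders still resolve contacts.
        public class Attractor : MonoBehaviour
        {
            public float G = 0.5f;  // tuning constant, not the real G
            static readonly System.Collections.Generic.List<Attractor> all =
                new System.Collections.Generic.List<Attractor>();
            Rigidbody rb;

            void Awake()     => rb = GetComponent<Rigidbody>();
            void OnEnable()  => all.Add(this);
            void OnDisable() => all.Remove(this);

            void FixedUpdate()
            {
                foreach (var other in all)
                {
                    if (other == this) continue;
                    Vector3 d = other.transform.position - transform.position;
                    float r2 = Mathf.Max(d.sqrMagnitude, 1f); // clamp: avoids huge forces in contact
                    rb.AddForce(G * rb.mass * other.rb.mass / r2 * d.normalized);
                }
            }
        }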

