Search Results

Search found 226 results on 10 pages for 'bounding'.


  • Google maps KM bounds box reapplying itself on map zoom

    - by creminsn
    I have a map on which I apply a custom overlay (a KmBox) to mark where I think the user's general point of interest lies. I want to display this map to the user and let them click on it to give me an exact location for their POI. This all works fine except when the user clicks on the map and then changes the zoom level. Here's my code to add the marker to the map:

        function addMarker(location) {
            if (KmOverlay) {
                KmOverlay.remove();
            }
            if (last_marker) {
                last_marker.setMap(null);
            }
            marker = new google.maps.Marker({
                position: location,
                map: map
            });
            // Keep track for future unsetting...
            last_marker = marker;
        }

    And to show the map I have this function:

        function show_map(lt, ln, zoom, controls, marker) {
            var ltln = new google.maps.LatLng(lt, ln);
            var vars = {
                zoom: zoom,
                center: ltln,
                mapTypeId: google.maps.MapTypeId.ROADMAP,
                navigationControl: controls,
                navigationControlOptions: {style: google.maps.NavigationControlStyle.ZOOM_PAN},
                mapTypeControl: false,
                scaleControl: false,
                scrollwheel: false
            };
            map = new google.maps.Map(document.getElementById("map_canvas"), vars);
            // NB: uses lat/lon globals here, not the lt/ln parameters passed in
            KmOverlay = new KmBox(map, new google.maps.LatLng(lat, lon), KmOpts);
            var totalBounds = new google.maps.LatLngBounds();
            totalBounds.union(KmOverlay.getBounds());
            google.maps.event.addListener(map, 'click', function(event) {
                addMarker(event.latLng);
            });
        }

    I have a working example at the following link here.


  • How do I draw the points in an ESRI Polyline, given the bounding box as lat/long and the "points" as

    - by Aaron
    I'm using OpenMap, and I'm reading a shapefile using com.bbn.openmap.layer.shape.ShapeFile. The bounding box is read in as lat/long points, for example 39.583642,-104.895486. The bounding box is a lower-left point and an upper-right point representing the area that contains the points. The "points," which are named "radians" in OpenMap, are in a different format, which looks like this: [0.69086486, -1.8307719, 0.6908546, -1.8307716, 0.6908518, -1.8307717, 0.69085056, -1.8307722, 0.69084936, -1.8307728, 0.6908477, -1.8307738, 0.69084626, -1.8307749, 0.69084185, -1.8307792]. How do I convert points like "0.69086486, -1.8307719" into x,y coordinates usable in normal graphics? I believe all that's needed here is some kind of conversion, because bringing the points into Excel and graphing them creates a line whose curve matches the curve of the road at the given location (lat/long). However, the axes need to be adjusted manually, and I have no reference for how to adjust them, since the given bounding box appears to be in a different format than the given points. The ESRI shapefile technical description doesn't seem to mention this (http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf).
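    A quick sanity check suggests the field name is the answer: the pairs are (latitude, longitude) in radians, and converting them to degrees lands exactly inside the bounding box quoted above. A minimal sketch, assuming that interpretation:

        import math

        # Each pair is (lat, lon) in radians; degrees() reproduces values in
        # the same range as the question's bounding box.
        points_rad = [0.69086486, -1.8307719, 0.6908546, -1.8307716]

        for lat_rad, lon_rad in zip(points_rad[0::2], points_rad[1::2]):
            lat_deg = math.degrees(lat_rad)   # 0.69086486 rad -> ~39.5836
            lon_deg = math.degrees(lon_rad)   # -1.8307719 rad -> ~-104.8955
            print(lat_deg, lon_deg)

    From there, mapping degrees to screen x,y is an ordinary linear viewport transform against the bounding box.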


  • Move camera to fit 3D scene

    - by Burre
    Hi there. I'm looking for an algorithm to fit a bounding box inside a viewport (in my case a DirectX scene). I know about algorithms for centering a bounding sphere in an orthographic camera, but I need the same for a bounding box and a perspective camera. I have most of the data:
    - the up-vector for the camera
    - the center point of the bounding box
    - the look-at vector (direction and distance) from the camera point to the box center
    - the points projected on a plane perpendicular to the camera, with coefficients describing how far the max/min X and Y coords are within or outside the viewing plane
    Problems I have:
    - The center of the bounding box isn't necessarily in the center of the viewport (that is, of its bounding rectangle after projection).
    - Since the field of view "skews" the projection (see http://en.wikipedia.org/wiki/File:Perspective-foreshortening.svg), I cannot simply use the coefficients as a scale factor to move the camera, because it will overshoot/undershoot the desired camera position.
    How do I find the camera position so that the box fills the viewport as pixel-perfectly as possible (except that if the aspect ratio is far from 1.0, it only needs to fill one of the screen axes)? I've tried some other things:
    - Using a bounding sphere and a tangent to find a scale factor to move the camera. This doesn't work well because it doesn't take the perspective projection into account, and spheres are bad bounding volumes for my use since I have a lot of flat and long geometries.
    - Iterating calls to the function to get a smaller and smaller error in the camera position. This has worked somewhat, but I sometimes run into weird edge cases where the camera position overshoots too much and the error factor increases. Also, when doing this I didn't recenter the model based on the position of the bounding rectangle; I couldn't find a solid, robust way to do that reliably.
    Help please!
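    A common first pass, sketched below under the assumption that a loose fit is acceptable: enclose the box in a sphere and back the camera up until that sphere fits both the vertical and horizontal view cones. It will not be pixel-perfect for flat geometry (the hard part described above), but it gives a safe starting distance that an iterative refinement cannot overshoot from. fit_distance is a hypothetical helper; fov_y is in radians:

        import math

        def fit_distance(box_half_extents, fov_y, aspect):
            # Distance from the box center at which the box's enclosing
            # sphere fits the frustum. Loose but never too close.
            w, h, d = box_half_extents
            r = math.sqrt(w*w + h*h + d*d)    # radius of enclosing sphere
            fov_x = 2.0 * math.atan(math.tan(fov_y / 2.0) * aspect)
            # r / sin(fov/2) keeps the whole sphere inside the view cone
            dist_v = r / math.sin(fov_y / 2.0)
            dist_h = r / math.sin(fov_x / 2.0)
            return max(dist_v, dist_h)

    Placing the camera at box_center - look_dir * fit_distance(...) gives the starting point; the projected-coefficient iteration described above can then tighten from there without risk of starting inside the box.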


  • Algorithm to reduce calls to mapping API

    - by aidan
    A random distribution of points lies on a map. This data lies behind an API, and I want to grab the complete set of points within a given bounding box. I can query the API with the bounding box, and the API will return the set of points that fall within that box. The problem is that the API limits the result set to 10 items, with no pagination and no indication of whether any points have been omitted. So I made a recursive algorithm that takes a bounding box and requests the points that lie within it. If the result set is exactly 10 items, it splits the bounding box into four quadrants and recurses. It works fine, but my question is this: if I want to minimize the number of API calls, what is the optimal way to split the bounding box? Splitting it into quadrants was just an arbitrary decision. When there are a lot of points on the map, I have to drill down many levels before I start getting meaningful results, so I imagine it might be faster to split the box into, say, 9, 16, or more sections. But if I do that, I eventually reach a point where a lot of requests return 0 results, which isn't efficient either. Also, does the size of the limit on the result set affect the answer? (This is all assuming that I have no prior knowledge of the nominal point density in the bounding box.)
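    For experimenting with the split factor, the recursion is easy to parameterize. A sketch, where api is a stand-in for the capped query (names hypothetical; points are assumed hashable so duplicates across overlapping queries collapse):

        LIMIT = 10  # the API's hard result cap

        def fetch_all(api, bbox, splits=2):
            # api(bbox) returns up to LIMIT points. A full page means results
            # may have been dropped, so subdivide into splits x splits cells
            # and re-query. Cap recursion depth in practice, in case many
            # points share one location.
            points = api(bbox)
            if len(points) < LIMIT:
                return set(points)          # definitely complete
            x0, y0, x1, y1 = bbox
            xs = [x0 + (x1 - x0) * i / splits for i in range(splits + 1)]
            ys = [y0 + (y1 - y0) * i / splits for i in range(splits + 1)]
            result = set()
            for i in range(splits):
                for j in range(splits):
                    cell = (xs[i], ys[j], xs[i + 1], ys[j + 1])
                    result |= fetch_all(api, cell, splits)
            return result

    Whether 2x2 or a higher fan-out wins depends on point density relative to the cap, so measuring call counts with this parameterized version against the live API is probably the only reliable answer.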


  • Replacing GTileLayer in Google Maps v3, with ImageMapType, Tile bounding box?

    - by justdev
    I need to update this code:

        radar_layer.getTileUrl = function(tile, zoom) {
            var llp = new GPoint(tile.x*256, (tile.y+1)*256);
            var urp = new GPoint((tile.x+1)*256, tile.y*256);
            var ll = G_NORMAL_MAP.getProjection().fromPixelToLatLng(llp, zoom);
            var ur = G_NORMAL_MAP.getProjection().fromPixelToLatLng(urp, zoom);
            var dt = new Date();
            var nowtime = dt.getTime();
            var tileurl = "http://demo.remoteservice.com/cgi-bin/serve.cgi?";
            tileurl += "bbox=" + ll.lng() + "," + ll.lat() + "," + ur.lng() + "," + ur.lat();
            tileurl += "&width=256&height=256&reaspect=false&cachetime=" + nowtime;
            return tileurl;
        };

    I got as far as:

        var DemoLayer = new google.maps.ImageMapType({
            getTileUrl: function(coord, zoom) {
                var llp = new google.maps.Point(coord.x*256, (coord.y+1)*256);
                var urp = new google.maps.Point((coord.x+1)*256, coord.y*256);
                var ll = googleMap.getProjection().fromPointToLatLng(llp);
                var ur = googleMap.getProjection().fromPointToLatLng(urp);
                var dt = new Date();
                var nowtime = dt.getTime();
                var tileurl = "http://demo.remoteservice.com/cgi-bin/serve.cgi?";
                tileurl += "bbox=" + ll.lng() + "," + ll.lat() + "," + ur.lng() + "," + ur.lat();
                tileurl += "&width=256&height=256&reaspect=false&cachetime=" + nowtime;
                return tileurl;
            },
            tileSize: new google.maps.Size(256, 256),
            opacity: 1.0,
            isPng: true
        });

    Specifically, I need help with this section:

        var llp = new google.maps.Point(coord.x*256, (coord.y+1)*256);
        var urp = new google.maps.Point((coord.x+1)*256, coord.y*256);
        var ll = googleMap.getProjection().fromPointToLatLng(llp);
        var ur = googleMap.getProjection().fromPointToLatLng(urp);

    The service wants the tile's bounding box, as I understand it. However, ll and ur do not seem to be correct at all. I had it working at one point, but it displayed the entire map's bounding box in each tile, which of course is not what I need. Any insight here would be greatly appreciated; not having GTileLayer in v3 is fine if I can work around it, but until then I'm frustrated.
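    One likely culprit: in v3, map.getProjection() works in world coordinates, where the whole map is a single 256-pixel tile, so tile pixel coordinates must be divided by 2^zoom before projecting. Alternatively, the bbox can be computed straight from the tile indices with standard Web Mercator math, independent of the Maps API. A sketch of that math (Python, just to show the formulas):

        import math

        def tile_bounds(x, y, zoom):
            # Lat/lng bounding box of a 256px Web Mercator tile, computed
            # directly from tile indices instead of the projection object.
            n = 2.0 ** zoom
            def corner(tx, ty):
                lon = tx / n * 360.0 - 180.0
                lat = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * ty / n))))
                return lat, lon
            north, west = corner(x, y)          # upper-left corner
            south, east = corner(x + 1, y + 1)  # lower-right corner
            return west, south, east, north     # bbox = west,south,east,north

        print(tile_bounds(0, 0, 1))  # NW quadrant of the world at zoom 1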


  • How to make NSView not clip its bounding area?

    - by Jeremy L
    I created an empty Cocoa app in Xcode for OS X and added:

        - (void)applicationDidFinishLaunching:(NSNotification *)aNotification
        {
            self.view = [[NSView alloc] initWithFrame:NSMakeRect(100, 100, 200, 200)];
            self.view.wantsLayer = YES;
            self.view.layer = [CALayer layer];
            self.view.layer.backgroundColor = [[NSColor yellowColor] CGColor];
            self.view.layer.anchorPoint = CGPointMake(0.5, 0.5);
            self.view.layer.transform = CATransform3DMakeRotation(30 * M_PI / 180, 1, 1, 1);
            [self.window.contentView addSubview:self.view];
        }

    But the rotated layer's background is clipped to the view's bounding area. I thought that since some version of OS X and iOS, a view won't clip the content of its subviews and will show everything inside and outside its bounds? On iOS I do see that behavior, so I wonder why it shows up like this here and how to make everything visible. (I am already using the most current Xcode 4.4.1 on Mountain Lion.) (Note: if you try the code above, you will need to link against Quartz Core, and possibly import the Quartz Core header, although I wonder why it compiles perfectly even though I didn't import the header.)


  • Movement and Collision with AABB

    - by Jeremy Giberson
    I'm having a little difficulty figuring out the following scenarios: http://i.stack.imgur.com/8lM6i.png
    In scenario A, the moving entity has fallen to (and slightly into) the floor. The current position represents the projected position that will occur if I apply the acceleration and velocity as usual without worrying about collision. The next position represents the corrected projected position after the collision check. The resulting end position has the falling entity resting ON the floor; that is, in a consistent state of collision, sharing its bottom X axis with the floor's top X axis. My current update loop looks like the following:

        // figure out forces & accelerations and project an object's next position
        // check collision occurrence from current position -> projected position
        // if a collision occurs, adjust projection position

    This seems to be working for the scenario of my object falling to the floor. However, the situation becomes sticky when trying to figure out scenarios B and C. In scenario B, I'm attempting to move along the floor on the X axis (the player is pressing the right direction button) while gravity is pulling the object into the floor. The problem is that when the object attempts to move, the collision detection code recognizes that the object is already colliding with the floor to begin with, and auto-corrects any movement back to where it was before. In scenario C, I'm attempting to jump off the floor. Again, because the object is already in constant collision with the floor, when the collision routine checks that moving from the current position to the projected position doesn't result in a collision, it fails, because at the beginning of the motion the object is already colliding. How do you allow movement along the edge of an object? How do you allow movement away from an object you are already colliding with?
    Extra info: my collision routine is based on the AABB sweep test from an old Gamasutra article, http://www.gamasutra.com/view/feature/3383/simple_intersection_tests_for_games.php?page=3. My bounding box implementation is based on top-left/bottom-right points instead of midpoint/extents, so my min/max functions are adjusted. Otherwise, here is my bounding box class with collision routines:

        public class BoundingBox {
            public XYZ topLeft;
            public XYZ bottomRight;

            public BoundingBox(float x, float y, float z, float w, float h, float d) {
                topLeft = new XYZ();
                bottomRight = new XYZ();
                topLeft.x = x; topLeft.y = y; topLeft.z = z;
                bottomRight.x = x + w; bottomRight.y = y + h; bottomRight.z = z + d;
            }

            // note: 'centered' only shifts bottomRight; topLeft stays at position
            public BoundingBox(XYZ position, XYZ dimensions, boolean centered) {
                topLeft = new XYZ();
                bottomRight = new XYZ();
                topLeft.x = position.x; topLeft.y = position.y; topLeft.z = position.z;
                bottomRight.x = position.x + (centered ? dimensions.x/2 : dimensions.x);
                bottomRight.y = position.y + (centered ? dimensions.y/2 : dimensions.y);
                bottomRight.z = position.z + (centered ? dimensions.z/2 : dimensions.z);
            }

            /** Check if a point lies inside a bounding box. */
            public static boolean isPointInside(BoundingBox box, XYZ point) {
                return box.topLeft.x <= point.x && point.x <= box.bottomRight.x
                    && box.topLeft.y <= point.y && point.y <= box.bottomRight.y
                    && box.topLeft.z <= point.z && point.z <= box.bottomRight.z;
            }

            /**
             * Check for overlap between two bounding boxes using the separating axis
             * theorem: if two boxes are separated on any axis, they cannot be overlapping.
             */
            public static boolean isOverlapping(BoundingBox a, BoundingBox b) {
                XYZ dxyz = new XYZ(b.topLeft.x - a.topLeft.x,
                                   b.topLeft.y - a.topLeft.y,
                                   b.topLeft.z - a.topLeft.z);
                // if b - a is positive, a is first on the axis and we should use its extent;
                // if b - a is negative, b is first: reverse extent, flip inequality
                if ((dxyz.x >= 0 && a.bottomRight.x - a.topLeft.x < dxyz.x)   // x separation
                 || (dxyz.x <  0 && b.topLeft.x - b.bottomRight.x > dxyz.x))
                    return false;
                if ((dxyz.y >= 0 && a.bottomRight.y - a.topLeft.y < dxyz.y)   // y separation
                 || (dxyz.y <  0 && b.topLeft.y - b.bottomRight.y > dxyz.y))
                    return false;
                if ((dxyz.z >= 0 && a.bottomRight.z - a.topLeft.z < dxyz.z)   // z separation
                 || (dxyz.z <  0 && b.topLeft.z - b.bottomRight.z > dxyz.z))
                    return false;
                return true; // not separated on any axis, overlapping
            }

            public static boolean isContactEdge(int xyzAxis, BoundingBox a, BoundingBox b) {
                switch (xyzAxis) {
                    case XYZ.XCOORD: return a.topLeft.x == b.bottomRight.x || a.bottomRight.x == b.topLeft.x;
                    case XYZ.YCOORD: return a.topLeft.y == b.bottomRight.y || a.bottomRight.y == b.topLeft.y;
                    case XYZ.ZCOORD: return a.topLeft.z == b.bottomRight.z || a.bottomRight.z == b.topLeft.z;
                    default:         return false;
                }
            }

            /** Sweep test min extent value. */
            public static float min(BoundingBox box, int xyzCoord) {
                switch (xyzCoord) {
                    case XYZ.XCOORD: return box.topLeft.x;
                    case XYZ.YCOORD: return box.topLeft.y;
                    case XYZ.ZCOORD: return box.topLeft.z;
                    default:         return 0f;
                }
            }

            /** Sweep test max extent value. */
            public static float max(BoundingBox box, int xyzCoord) {
                switch (xyzCoord) {
                    case XYZ.XCOORD: return box.bottomRight.x;
                    case XYZ.YCOORD: return box.bottomRight.y;
                    case XYZ.ZCOORD: return box.bottomRight.z;
                    default:         return 0f;
                }
            }

            /**
             * Test if box A will overlap box B at any point as A moves from position 0
             * to position 1 and B moves from position 0 to position 1. The sweep test
             * assumes the boxes' dimensions do not change. aCollisionOut/bCollisionOut
             * receive each box's position when/if a collision occurs.
             */
            public static boolean sweepTest(BoundingBox a0, BoundingBox a1,
                                            BoundingBox b0, BoundingBox b1,
                                            XYZ aCollisionOut, XYZ bCollisionOut) {
                // solve in reference to A
                XYZ va = new XYZ(a1.topLeft.x - a0.topLeft.x, a1.topLeft.y - a0.topLeft.y, a1.topLeft.z - a0.topLeft.z);
                XYZ vb = new XYZ(b1.topLeft.x - b0.topLeft.x, b1.topLeft.y - b0.topLeft.y, b1.topLeft.z - b0.topLeft.z);
                XYZ v = new XYZ(vb.x - va.x, vb.y - va.y, vb.z - va.z);

                // check for initial overlap
                if (BoundingBox.isOverlapping(a0, b0)) {
                    // java pass-by-reference/value gotcha: have to modify the value, can't reassign it
                    aCollisionOut.x = a0.topLeft.x; aCollisionOut.y = a0.topLeft.y; aCollisionOut.z = a0.topLeft.z;
                    bCollisionOut.x = b0.topLeft.x; bCollisionOut.y = b0.topLeft.y; bCollisionOut.z = b0.topLeft.z;
                    return true;
                }

                // overlap min/max times
                XYZ u0 = new XYZ();
                XYZ u1 = new XYZ(1, 1, 1);
                float t0, t1;

                // iterate axes and find overlap times (x=0, y=1, z=2)
                for (int i = 0; i < 3; i++) {
                    float aMax = max(a0, i);
                    float aMin = min(a0, i);
                    float bMax = max(b0, i);
                    float bMin = min(b0, i);
                    float vi = XYZ.getCoord(v, i);
                    if (aMax < bMax && vi < 0)      XYZ.setCoord(u0, i, (aMax - bMin) / vi);
                    else if (bMax < aMin && vi > 0) XYZ.setCoord(u0, i, (aMin - bMax) / vi);
                    if (bMax > aMin && vi < 0)      XYZ.setCoord(u1, i, (aMin - bMax) / vi);
                    else if (aMax > bMin && vi > 0) XYZ.setCoord(u1, i, (aMax - bMin) / vi);
                }

                // get times of collision
                t0 = Math.max(u0.x, Math.max(u0.y, u0.z));
                t1 = Math.min(u1.x, Math.min(u1.y, u1.z));

                // a collision only occurs if t0 <= t1
                if (t0 <= t1 && t0 != 0) { // not t0 == 0: initial overlap was already tested
                    // t0 is the normalized time of the collision, so each box's position
                    // is its original position + velocity * time
                    aCollisionOut.x = a0.topLeft.x + va.x * t0;
                    aCollisionOut.y = a0.topLeft.y + va.y * t0;
                    aCollisionOut.z = a0.topLeft.z + va.z * t0;
                    bCollisionOut.x = b0.topLeft.x + vb.x * t0;
                    bCollisionOut.y = b0.topLeft.y + vb.y * t0;
                    bCollisionOut.z = b0.topLeft.z + vb.z * t0;
                    return true;
                } else {
                    return false;
                }
            }
        }
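    A common way out of scenarios B and C is to resolve movement one axis at a time and to treat mere touching as non-collision, so the resting contact with the floor never blocks tangential movement or a jump. A minimal 2D sketch of that policy (boxes as [x, y, w, h], y growing downward; this replaces the resolution step, not the sweep test above):

        EPS = 1e-4  # contact slop: gaps smaller than this count as touching

        def overlap_1d(a0, a1, b0, b1):
            # strict overlap beyond the slop; touching edges do NOT overlap
            return a0 < b1 - EPS and b0 < a1 - EPS

        def move_axis(box, delta, solids, axis):
            # Move box along one axis (0=x, 1=y), clamping flush against the
            # first solid hit. Because x and y are resolved separately, an
            # object resting on the floor can still slide along it (B) or
            # accelerate away from it (C): the stationary axis never reports
            # the resting contact as a new collision.
            box = box[:]
            box[axis] += delta
            other = 1 - axis
            for s in solids:
                if not overlap_1d(box[other], box[other] + box[other + 2],
                                  s[other], s[other] + s[other + 2]):
                    continue
                if overlap_1d(box[axis], box[axis] + box[axis + 2],
                              s[axis], s[axis] + s[axis + 2]):
                    if delta > 0:
                        box[axis] = s[axis] - box[axis + 2]   # clamp to near face
                    elif delta < 0:
                        box[axis] = s[axis] + s[axis + 2]     # clamp to far face
            return box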


  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes; an example of such a scene can be seen below. I'm wondering what acceleration data structure would be most efficient in this case, since all objects are touching but, on the other hand, the scene is uniform. The objects in my ray tracer are stored as collections of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box? A few other questions I have are about photon mapping. I've read the original paper by Jensen and much other material. In the compact data structure for the photon they introduce, photon power is stored as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux on the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs power). The other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per object triangle) instead of using a balanced kd-tree. Also, the same question about the bounding box of the scene, but for photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
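    On the 4-char power question: as far as I can tell from the paper, those bytes are not 3 for color plus 1 for flux; they hold the full RGB power in Ward's shared-exponent RGBE format, and the common exponent byte is what makes tiny 1/n-scale values representable. A sketch of the packing, assuming that reading is right:

        import math

        def rgbe_encode(r, g, b):
            # Pack an RGB power value into 4 bytes (each 0..255). The shared
            # exponent carries the magnitude; the mantissa bytes carry color.
            v = max(r, g, b)
            if v < 1e-32:
                return (0, 0, 0, 0)
            m, e = math.frexp(v)              # v = m * 2**e, 0.5 <= m < 1
            scale = m * 256.0 / v
            return (int(r * scale), int(g * scale), int(b * scale), e + 128)

        def rgbe_decode(rb, gb, bb, e):
            if e == 0:
                return (0.0, 0.0, 0.0)
            scale = math.ldexp(1.0, e - 128 - 8)  # 2**(e - 136)
            return (rb * scale, gb * scale, bb * scale)

        print(rgbe_decode(*rgbe_encode(1.0, 0.5, 0.25)))  # -> (1.0, 0.5, 0.25)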


  • XNA Per-Polygon Collision Check

    - by user22985
    I'm working on a project in XNA for WP7 with a low-poly environment. My problem is that I need to set up a working per-polygon collision check between two or more 3D meshes. I've checked tons of tutorials, but all of them use bounding boxes, bounding spheres, rays, etc., when what I really need is a VERY precise way of checking whether the polygons of two distinct models have intersected or not. If you could redirect me to an example, or at least give me some pointers, I would be grateful.


  • How do I implement collision detection with a sprite walking up a rocky-terrain hill?

    - by detectivecalcite
    I'm working in SDL and have bounding rectangles for collisions set up for each frame of the sprite's animation. However, I recently stumbled upon the issue of handling collisions for characters walking up and down hills/slopes with irregularly curved or rocky terrain - what's a good way to do collisions for that type of situation? Per-pixel? Loading up the points of the incline and doing player-line collision checking? Should I use bounding rectangles in general, or circle collision detection?
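    One cheap alternative to per-pixel tests for walkable terrain is to reduce the ground to a height function and clamp the sprite's feet to it each frame. A sketch, where ground_heights is a hypothetical per-column sample of the terrain surface (y grows downward, and the sprite is assumed to stay within the level's columns):

        def snap_to_ground(player_x, player_y, player_w, ground_heights):
            # Clamp the sprite's feet to the highest terrain column under it;
            # dense samples make this smooth enough for rocky slopes.
            left = int(player_x)
            right = int(player_x + player_w)
            ground_y = min(ground_heights[left:right + 1])  # smallest y = highest point
            return min(player_y, ground_y)  # never sink below the surface

    Walls and ceilings still need ordinary rectangle checks; the height function only replaces the floor test.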


  • How to structure class to support imported 3d model ?

    - by brainydexter
    Hello, I've written a C++ library that reads in a 3D model file (COLLADA DAE). Up until now, I would output a list of triangles and handle each at the rendering stage. But now I need to attach bounding sphere information to the imported model, and I need some advice on how to organize this in code. Here are some specs of the 3D file format:
    - the 3D model is represented as a tree consisting of nodes
    - each node can contain other nodes, geometry information, transformations, etc.
    My requirements:
    - a bounding sphere associated with each node, thereby yielding a tree of bounding spheres (a bounding sphere hierarchy) for the model itself
    - the actual vertex information
    What would be the recommended way to deal with this situation? Thanks
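    One straightforward layout is to mirror the DAE node tree with your own node type that owns its vertices, its children, and a lazily computed sphere, so the hierarchy falls out of a bottom-up pass. A rough sketch; the centroid-based sphere is not minimal, just a valid bound, and node transforms are assumed already applied to the vertices:

        import math

        class Node:
            # Mirrors one DAE node: child nodes, optional mesh vertices,
            # and a cached bounding sphere (center, radius).
            def __init__(self, vertices=None, children=None):
                self.vertices = vertices or []   # [(x, y, z), ...] in world space
                self.children = children or []
                self.sphere = None

            def bounding_sphere(self):
                if self.sphere is not None:
                    return self.sphere
                pts = list(self.vertices)
                for c in self.children:
                    (cx, cy, cz), r = c.bounding_sphere()
                    # crude: stand in for the child sphere with its 6 extreme points
                    for axis in range(3):
                        for s in (-r, r):
                            p = [cx, cy, cz]
                            p[axis] += s
                            pts.append(tuple(p))
                if not pts:                       # empty grouping node
                    self.sphere = ((0.0, 0.0, 0.0), 0.0)
                    return self.sphere
                center = tuple(sum(p[i] for p in pts) / len(pts) for i in range(3))
                radius = max(math.dist(center, p) for p in pts)
                self.sphere = (center, radius)
                return self.sphere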


  • Determine coordinates of rotated rectangle

    - by MathieuK
    I'm creating a utility application that should detect and report the coordinates of the corners of a transparent rectangle (alpha = 0) within an image. So far, I've set up a system with JavaScript + Canvas that displays the image and starts a floodfill-like operation when I click inside the transparent rectangle. It correctly determines the bounding box of the floodfill operation and as such can provide me the correct coordinates. Here's my implementation so far: http://www.scriptorama.nl/image/ (works in recent Firefox/Safari). However, the bounding-box approach breaks down when the transparent rectangle is rotated (CW or CCW), as the resulting bounding box no longer properly represents the rectangle's width and height. I've tried to come up with a few alternatives for detecting the corners, but have not been able to think up a proper solution. So, does anyone have any suggestions on how I might approach this so I can properly detect the coordinates of the 4 corners of the rotated rectangle?
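    If the flood fill can hand back the set of transparent pixels rather than just their bounding box, the four corners of a rotated rectangle can be read off directly: they are the pixels with extreme x+y and x-y. A sketch (it degenerates when the rectangle is rotated exactly 45 degrees, where ties run along whole edges):

        def rect_corners(pixels):
            # pixels: list of (x, y) visited by the flood fill. For a rotated
            # rectangle the extremes of x+y and x-y land on the corners; for
            # an axis-aligned one they coincide with the bounding box.
            top_left     = min(pixels, key=lambda p: p[0] + p[1])
            bottom_right = max(pixels, key=lambda p: p[0] + p[1])
            top_right    = max(pixels, key=lambda p: p[0] - p[1])
            bottom_left  = min(pixels, key=lambda p: p[0] - p[1])
            return top_left, top_right, bottom_right, bottom_left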


  • Opinions on platform game actor/background collision resolving

    - by Kawa
    Imagine the following scenario: I have a level whose physical structure is built up from a collection of bounding rectangles, combined with prerendered bitmap backgrounds. My actors, including the player character, all have their own bounding rectangle. If an actor manages to get stuck inside a level block, partially or otherwise, it'll need to be shifted out again so that it is flush against the block. The untested technique I thought up during a bio break is as follows: if an actor's box is found to intersect a level box, determine where the center points of each rect are. If the actor's center is higher than the level box's, move the actor so that the bottom of the actor's rect is flush with the top of the level's rect, and vice versa if it's lower. Then do a similar thing horizontally. Opinions on that? Suggestions for better methods? Actually, the bounding rects are XNA BoundingBoxes with their Z spanning from -1 to 1, but it's still 2D gameplay.
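    A refinement on the center-point rule: push the actor out along the axis with the smallest overlap, which usually matches the side it actually entered from and avoids popping out of the wrong side of wide, flat blocks. A sketch with rects as [x, y, w, h]:

        def resolve(actor, block):
            # Push actor out of block along the axis of least penetration.
            ax2, ay2 = actor[0] + actor[2], actor[1] + actor[3]
            bx2, by2 = block[0] + block[2], block[1] + block[3]
            dx = min(ax2 - block[0], bx2 - actor[0])  # horizontal overlap
            dy = min(ay2 - block[1], by2 - actor[1])  # vertical overlap
            if dx <= 0 or dy <= 0:
                return actor                          # not intersecting
            if dx < dy:   # shift horizontally, toward the nearer face
                actor[0] += -dx if actor[0] + actor[2] / 2 < block[0] + block[2] / 2 else dx
            else:         # shift vertically
                actor[1] += -dy if actor[1] + actor[3] / 2 < block[1] + block[3] / 2 else dy
            return actor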


  • Calculate an AABB for bone animated model

    - by Byte56
    I have a model whose initial bounding box is calculated by finding the maxima and minima on the x, y and z axes, producing a correct result like so: The vertices are then stored in a VBO and only altered with matrices for rotation and bone animation. Currently the bounds are not updated when the model is altered, so the animated and rotated model has bounds like so: (Maybe it's hard to tell, but the bounds are the same as before and don't accurately represent the rotated/animated model.) So my question is, how can I calculate the bounding box using the armature matrices and rotation/translation matrices of each model? Keep in mind that the modified vertex data is not available, because those calculations are performed on the GPU in the shader. The end result I want is an accurate AABB that represents the animated model for picking/basic collision checks.
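    Since the skinned vertices only exist on the GPU, a common CPU-side approximation is to keep a rest-pose AABB per bone, transform each box's 8 corners by that bone's current matrix, and merge the results. It is conservative rather than tight, but cheap and usually good enough for picking. A sketch; function and parameter names are illustrative:

        import itertools
        import numpy as np

        def animated_aabb(bone_boxes, bone_matrices, model_matrix):
            # bone_boxes: per-bone rest-pose (min, max) 3-vectors.
            # bone_matrices: per-bone 4x4 skinning matrices for this frame.
            # Transforms each bone box's 8 corners and merges them into one
            # conservative AABB; no skinned vertex readback needed.
            lo = np.full(3, np.inf)
            hi = np.full(3, -np.inf)
            for (bmin, bmax), mat in zip(bone_boxes, bone_matrices):
                m = model_matrix @ mat
                for corner in itertools.product(*zip(bmin, bmax)):  # 8 corners
                    p = (m @ np.append(corner, 1.0))[:3]
                    lo = np.minimum(lo, p)
                    hi = np.maximum(hi, p)
            return lo, hi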


  • Collisions between moving ball and polygons

    - by miguelSantirso
    I know this is a very typical problem and that there are a lot of similar questions, but I have been looking for a while and have not found anything that fits what I want. I am developing a 2D game in which I need to perform collisions between a ball and simple polygons. The polygons are defined as arrays of vertices. I have implemented the collisions with the bounding boxes of the polygons (that was easy), and I need to refine that collision in the cases where the ball collides with the bounding box. The ball can move quite fast and the polygons are not too big, so I need to perform continuous collision detection. I am looking for a method that lets me detect whether the ball collides with a polygon and, at the same time, calculate the new direction for the ball after bouncing off the polygon. (I am using XNA, in case that helps.)
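    For continuous collision, each polygon edge can be tested as a segment against the moving circle, and the bounce is just a reflection about the contact normal. A sketch of both pieces, assuming the ball starts the frame outside the polygon and edges are non-degenerate (corners still need a separate swept circle-vs-vertex test):

        import math

        def reflect(vx, vy, nx, ny, restitution=1.0):
            # Bounce velocity off a unit normal: v' = v - (1+e)(v.n)n
            dot = vx * nx + vy * ny
            return (vx - (1 + restitution) * dot * nx,
                    vy - (1 + restitution) * dot * ny)

        def circle_vs_edge_toi(cx, cy, vx, vy, r, ax, ay, bx, by):
            # Earliest t in [0,1] at which a circle at (cx,cy), moving by
            # (vx,vy) this frame, touches the face of segment A-B; else None.
            ex, ey = bx - ax, by - ay
            length = math.hypot(ex, ey)
            nx, ny = ey / length, -ex / length          # edge normal
            dist = (cx - ax) * nx + (cy - ay) * ny      # signed distance to line
            if dist < 0:
                nx, ny, dist = -nx, -ny, -dist          # make normal face the circle
            approach = -(vx * nx + vy * ny)             # speed toward the line
            if approach <= 0:
                return None                             # moving away or parallel
            t = (dist - r) / approach
            if not 0.0 <= t <= 1.0:
                return None
            hx = cx + vx * t - nx * r                   # contact point on the line
            hy = cy + vy * t - ny * r
            s = ((hx - ax) * ex + (hy - ay) * ey) / (length * length)
            return t if 0.0 <= s <= 1.0 else None       # must hit within the segment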


  • Linear search vs Octree (Frustum cull)

    - by Dave
    I am wondering whether I should look into implementing an octree of some kind. I have a very simple game consisting of a 3D plane for the floor, with multiple objects scattered around on the ground, each with an AABB in world space. Currently I just loop through the list of all these objects and check whether each bounding box intersects the frustum. It works great, but I am wondering whether an octree would be a good investment. I only have at most 512 of these objects on the map, and they all have bounding boxes. I am not sure an octree would make it faster, since I have so few objects in the scene.
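    For scale: a linear pass here is 512 plane tests per frame, each a handful of multiplies, so an octree is unlikely to pay off until the object count grows considerably. The standard per-object check, for reference, is the "positive vertex" test sketched below:

        def aabb_outside_frustum(lo, hi, planes):
            # planes: six (nx, ny, nz, d) tuples with "inside" meaning
            # dot(n, p) + d >= 0. Only the AABB corner farthest along each
            # plane normal needs checking. Conservative: a box can pass all
            # six tests and still sit just outside a frustum corner.
            for nx, ny, nz, d in planes:
                px = hi[0] if nx >= 0 else lo[0]
                py = hi[1] if ny >= 0 else lo[1]
                pz = hi[2] if nz >= 0 else lo[2]
                if nx * px + ny * py + nz * pz + d < 0:
                    return True     # wholly behind one plane: culled
            return False

        # usage: visible = [o for o in objects
        #                   if not aabb_outside_frustum(o.lo, o.hi, planes)]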


  • QuadTree: store only points, or regions?

    - by alekop
    I am developing a quadtree to keep track of moving objects for collision detection. Each object has a bounding shape; let's say they are all circles. (It's a 2D top-down game.) I am unsure whether to store only the position of each object, or the whole bounding shape. If working with points, insertion and subdivision are easy, because objects will never span multiple nodes. On the other hand, a proximity query for an object may miss collisions, because it won't take the objects' dimensions into account. How do you calculate the query region when you only have points? If working with regions, how do you handle an object that spans multiple nodes? Should it be inserted in the nearest parent node that completely contains it, even if this exceeds the node's capacity? Thanks.
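    For the points-only variant, the usual fix for missed collisions is to inflate every query by the largest bounding radius present in the tree: any object whose shape could touch the query object must have its center within that inflated region. A sketch, where tree.query is a hypothetical rectangular range query on the point quadtree:

        def query_collisions(tree, center, radius, max_radius):
            # max_radius: largest bounding-circle radius of any object in the
            # tree. If another object's SHAPE can touch us, its CENTER lies
            # within radius + max_radius of our center, so the inflated
            # rectangle below cannot miss a candidate.
            r = radius + max_radius
            region = (center[0] - r, center[1] - r,
                      center[0] + r, center[1] + r)
            return tree.query(region)   # candidates; narrow-phase check follows

    The trade-off is that one huge object inflates every query, so this works best when object sizes are fairly uniform.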


  • python regex of a date in some text, enclosed by two keywords

    - by Horace Ho
    This is Part 2 of this question, and thanks very much for David's answer. What if I need to extract dates which are bounded by two keywords? Example:
        text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End"
    Case 1, bounding keywords "One" and "Three"; result expected: ['09 Jun 2011', '10 Dec 2012']
    Case 2, bounding keywords "Two" and "End"; result expected: ['10 Dec 2012', '15 Jan 2015']
    Thanks!
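    Slicing the text between the two keywords first, then running the date pattern over the slice, gives exactly the expected results:

        import re

        text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End"
        DATE = r"\d{2} [A-Z][a-z]{2} \d{4}"   # e.g. "09 Jun 2011"

        def dates_between(text, start, end):
            # Grab the span between the two keywords, then findall the dates.
            m = re.search(re.escape(start) + r"(.*?)" + re.escape(end), text)
            return re.findall(DATE, m.group(1)) if m else []

        print(dates_between(text, "One", "Three"))  # ['09 Jun 2011', '10 Dec 2012']
        print(dates_between(text, "Two", "End"))    # ['10 Dec 2012', '15 Jan 2015']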


  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently trying to make sense of implementing a chunked level-of-detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.
    1: I can't understand at which point down the chunked LOD pipeline the mesh gets split into chunks. Is this during the initial mesh generation, or is there a separate algorithm which does this?
    2: I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level?
    3a: How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case, would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this? (A raycast, maybe?)
    3b: Do the chunks calculate the camera distance themselves?
    4: Does each chunk have the same "resolution"? For example, at the top level the mesh will be 32x32; will each subdivided node also be 32x32? Example below:
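    On 3a/3b: in the classic chunked LOD scheme the chunks are passive; the traversal computes a point-to-AABB distance per node and splits while the chunk looks too coarse from the current camera position, so no colliders or raycasts are involved. A sketch of the selection loop (the node API is hypothetical; tau trades quality for triangle count):

        def select_chunks(node, cam_pos, tau=2.0):
            # Render a node when it is far enough away relative to its size,
            # i.e. its screen-space error is below tolerance; else recurse.
            d = node.bounds.distance_to(cam_pos)   # point-to-AABB distance
            if node.is_leaf or d > tau * node.bounds.size():
                yield node                          # coarse chunk is good enough
            else:
                for child in node.children:
                    yield from select_chunks(child, cam_pos, tau)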


  • How do I handle specific tile/object collisions?

    - by Thomas William Cannady
    What do I do after the bounding box test against a tile to determine whether there is a real collision against the contents of that tile? And if there is, how should I move the object in response to that collision? I have a small object, and I test for collisions against the tiles that each corner of it is on. Here's my current code, which I run for each of those (up to) four tiles:

        // get the bounding box of the object, in world space
        objectBounds = object->bounds + object->position;
        if ((objectBounds.right >= tileBounds.left) &&
            (objectBounds.left <= tileBounds.right) &&
            (objectBounds.top >= tileBounds.bottom) &&
            (objectBounds.bottom <= tileBounds.top))
        {
            // perform specific test to see if it's a left, top, bottom
            // or right collision. If so, I check to see the nature of it
            // and where I need to place the object to respond to that collision...
            // [THIS IS THE PART THAT NEEDS WORK]
            if (lastkey == keydown[right] &&
                ((objectBounds.right >= tileBounds.left) &&
                 (objectBounds.right <= tileBounds.right) &&
                 (objectBounds.bottom >= tileBounds.bottom) &&
                 (objectBounds.bottom <= tileBounds.top)))
            {
                object->position.x = tileBounds.left - objectBounds.width;
            }
            // etc.
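    For the part that needs work: once the overlap test passes, a serviceable rule is to separate along the dominant axis of motion, snapping the object's leading face to the tile's opposing face, which is what the keydown[right] branch above does for one case. A sketch generalizing it from the velocity instead of the keyboard state (boxes as [x, y, w, h], y down):

        def resolve_tile(obj, tile, vx, vy):
            # Called after the AABB overlap test succeeds. Separate along the
            # axis the object was mostly moving on; the leading face snaps to
            # the tile's opposing face.
            if abs(vx) >= abs(vy):
                if vx > 0:   obj[0] = tile[0] - obj[2]     # moving right
                elif vx < 0: obj[0] = tile[0] + tile[2]    # moving left
            else:
                if vy > 0:   obj[1] = tile[1] - obj[3]     # falling
                elif vy < 0: obj[1] = tile[1] + tile[3]    # moving up
            return obj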


  • Frustum culling with third person camera

    - by Christian Frantz
    I have a third-person camera that contains two matrices, view and projection, and two Vector3s, camPosition and camTarget. I've read up on frustum culling, and it seems easy enough for a first-person camera, but how would I implement it for a third-person camera? I need to take into account the objects I can see behind me too. How would I implement this in my camera class so it runs at the same time as my update method?

        public void CameraUpdate(Matrix objectToFollow)
        {
            camPosition = objectToFollow.Translation
                        + (objectToFollow.Backward * backward)
                        + (objectToFollow.Up * up);
            camTarget = objectToFollow.Translation;
            view = Matrix.CreateLookAt(camPosition, camTarget, Vector3.Up);
        }

    Can I just create another method within the class that creates a bounding sphere with a value from my camera and then culls based on that? And if so, which value am I using to create the bounding sphere from? After this is implemented, I'm planning on using occlusion culling for the faces of my objects adjacent to other objects. Will using just one or the other make a difference, or will both together be better? I'm trying to keep my frame rate as high as possible.
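    Frustum culling does not care whether the camera is first- or third-person; the frustum comes from the same view and projection matrices either way. In XNA a BoundingFrustum can be constructed from view * projection after each CameraUpdate and tested against every object's BoundingSphere. For reference, the underlying plane extraction (the Gribb/Hartmann method) looks roughly like this; note the near-plane convention caveat in the comments:

        import numpy as np

        def frustum_planes(view_proj):
            # Gribb/Hartmann extraction from a combined view-projection
            # matrix, row-vector convention as in XNA (clip = v * view * proj).
            # Returns six planes (nx, ny, nz, d); a point p is inside a plane
            # when dot(n, p) + d >= 0.
            m = np.asarray(view_proj, dtype=float)
            c = m.T                               # columns of the matrix
            planes = [c[3] + c[0], c[3] - c[0],   # left, right
                      c[3] + c[1], c[3] - c[1],   # bottom, top
                      c[3] + c[2], c[3] - c[2]]   # near, far (GL-style clip z)
            # For D3D/XNA-style clip z in [0, 1], the near plane is just c[2].
            return [p / np.linalg.norm(p[:3]) for p in planes]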


  • In a 2D platform game, how to ensure the player moves smoothly over sloping ground?

    - by Kovsa
    See image: http://i41.tinypic.com/huis13.jpg
    I'm developing a physics engine for a 2D platform game, using the separating axis theorem for collision detection. The ground surface is constructed from oriented bounding boxes, with the player as an axis-aligned bounding box. (Specifically, I'm using the algorithm from the book "Real-Time Collision Detection", which performs swept collision detection for OBBs using SAT.) I'm using a fairly small (close to zero) restitution coefficient in the collision response, to ensure that the dynamic objects don't penetrate the environment. The engine mostly works fine; it's just that I'm concerned about some edge cases that could occur. For example, in the diagram, A, B and C are the ground surface, and the player is heading left along B towards A. It seems to me that due to inaccuracy, the player box could be slightly below box B as it continues up and left. When it reaches A, therefore, the bottom left corner of the player might collide with the right side of A, which would be undesirable (the intention is for the player to move smoothly over the top of A). A similar problem could happen when the player is on top of box C, moving left towards B: the most extreme point of B could collide with the left side of the player, instead of the player's bottom left corner sliding up and left above B. Box2D seems to handle this problem by storing connectivity information for its edge shapes, but I'm not really sure how it uses this information to solve the problem, and after looking at the code I don't really grasp what it's doing. Any suggestions would be greatly appreciated.


  • Can anyone explain step-by-step how the as3isolib depth-sorts isometric objects?

    - by Rob Evans
    The library manages to depth-sort correctly, even when using items of non-1x1 sizes. I took a look through the code, but it's a big project to go through line by line! There are some questions about the process, such as: How are the x, y, z values of each object defined? Are they the center points of the objects, or something else? I noticed that IBounds defines the bounds of the object. If you were to visualise a cuboid of size 40, 40, 90, where would each of the IBounds metrics be? I would like to know how as3isolib achieves this, although I would also be happy with a generalised pseudocode version. At present I have a system that works 90% of the time, but in cases of objects that are along the same horizontal line, the depth is calculated as the same value. The depth calculation currently works like this:
    - x = object horizontal center point
    - y = object vertical center point
    - originX and originY = the origin point relative to the object, so if you want the origin to be the center, the values would be originX = 0.5, originY = 0.5. If you wanted the origin to be the vertical center and horizontal far right of the object, it would be originX = 1.0, originY = 0.5. The origin adjusts the position that the object is transformed from.
    - AABB_width = the bounding box width
    - AABB_height = the bounding box height
    - depth = x + (AABB_width * originX) + y + (AABB_height * originY) - z
    This generates the same depth for all objects along the same horizontal x.
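    On the ties: any scalar depth formula will collide for some arrangement of non-1x1 boxes. A more robust scheme (and, as far as I can tell, roughly what as3isolib-style sorters do) is a pairwise "a is wholly behind b" test on the bounds plus a topological sort. A sketch with one common axis convention; flip the comparisons to match your axes:

        def behind(a, b):
            # a can be drawn before b if it is wholly on the far side of b
            # along any world axis (+x and +y run toward the viewer, +z is up).
            # Boxes are dicts with xmin/xmax, ymin/ymax, zmin/zmax.
            return (a["xmax"] <= b["xmin"] or
                    a["ymax"] <= b["ymin"] or
                    a["zmax"] <= b["zmin"])

        def depth_sort(boxes):
            # Topological sort of the 'behind' relation. Unlike a scalar
            # depth value, ties along one axis cannot flip the draw order,
            # because all three axes are consulted.
            n = len(boxes)
            edges = {i: [] for i in range(n)}
            for i in range(n):
                for j in range(n):
                    if i != j and behind(boxes[i], boxes[j]):
                        edges[i].append(j)   # draw i before j
            seen, order = set(), []
            def visit(i):
                # interpenetrating boxes can create cycles; the visited set
                # breaks them arbitrarily
                if i in seen:
                    return
                seen.add(i)
                for j in edges[i]:
                    visit(j)
                order.append(i)
            for i in range(n):
                visit(i)
            order.reverse()                  # behind objects come first
            return [boxes[i] for i in order]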

