Search Results

Search found 13494 results on 540 pages for 'board game'.


  • What functionality should I use in OpenGL 2.0?

    - by Jeffrey
    Considering OpenGL 2.1, we all know that glBegin and glEnd are the devil. Should I use only VBOs to render 3D primitives (I can't find VAOs in that version; weren't they available already?)? Should I still use the matrix stack (why not?)? Should I still use glFrustum? Can I take advantage of shaders in GLSL 1.20? Where can I find a tutorial for VBOs in OpenGL 2.1 and the "correct" way of programming with them? Also, how am I supposed to animate something, like a cube moving around an object or a player moving in the scene (static VBO data + a shader?)? Note: take your time to answer this question, I'll accept an answer tomorrow.

    Read the article

  • Class Design - Space Simulator

    - by Peteyslatts
    I have pretty much taught myself everything I know about programming, so while I know how to teach myself (books, the internet and reading APIs), I'm finding that there hasn't been a whole lot in the way of guidance on good programming practice. So I have two questions. First the broad one: does anyone have suggestions for sources on good programming habits and techniques? I'd prefer it if the resource wasn't a 5000 page tome; the more I can read it in installments the better. More specifically: I am finishing up learning the basics of XNA and I want to create a space simulator to test my knowledge. This isn't a full scale simulator, just something that covers everything I learned. It's also going to be modular so I can build on it after I get the basics down. One of the early features I want to implement is AI, and I want to take this into account as I'm designing my classes so I can minimize rewriting code. So my question: how should I design ship classes so that both the player and the AI can use them? The only idea I have so far is: create a ship class that contains stats, models, textures, collision data etc. The player and AI would then hold the data for position, rotation, health, etc. and would base their status off of the ship stats.
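    A minimal sketch of that composition idea with XNA types; all class and member names below are hypothetical, not taken from the question:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Shared, read-only data: one instance per ship *type*.
        public class ShipDefinition
        {
            public Model Model;
            public Texture2D Texture;
            public float MaxSpeed;
            public float TurnRate;
            public int MaxHealth;
        }

        // Per-ship state: the player and every AI each own one of these.
        public class ShipInstance
        {
            public ShipDefinition Definition;
            public Vector3 Position;
            public Quaternion Rotation;
            public int Health;

            public ShipInstance(ShipDefinition definition)
            {
                Definition = definition;
                Health = definition.MaxHealth;
            }
        }

        // Both player input and AI drive a ship through the same interface,
        // so the ship and rendering code never care who is in control.
        public interface IShipController
        {
            void Update(ShipInstance ship, GameTime gameTime);
        }

    A PlayerController would read input in Update while an AiController would run its steering logic; everything else treats the two identically.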

    Read the article

  • Moving while doing loop animation in RPGMaker

    - by AzDesign
    I made a custom class to display a character portrait in RPGMaker XP. Here is the class:

        class Poser
          attr_accessor :images
          def initialize
            @images = Sprite.new
            @images.bitmap = RPG::Cache.picture('Character.png') # 100x300
            @images.x = 540 # place it on the bottom right corner of the screen
            @images.y = 180
          end
        end

    I create an event on the map to create an instance as a global variable, and tada! The image pops up. OK, nice. But I'm not satisfied. Now I want it to have a bobbing-head animation (just like when we breathe, we sometimes bob our head up and down), so I added more methods:

        def move(x, y)
          @images.x += x
          @images.y += y
        end

        def animate(x, y, step, delay)
          forward = true
          2.times {
            step.times {
              wait(delay)
              if forward
                move(x/step, y/step)
              else
                move(-x/step, -y/step)
              end
            }
            wait(delay*3)
            forward = false
          }
        end

        def wait(time)
          while time > 0
            time -= 1
            Graphics.update
          end
        end

    I called the method to run the animation and it works, so far so good. But the problem is that WHILE the portrait goes up and down, I cannot move my character until the animation is finished. So that's it, I'm stuck in the loop block. What I want is to watch the portrait moving up and down while I walk around the village, talk to NPCs, etc. Can anyone solve my problem, or suggest a better solution? Thanks in advance.

    Read the article

  • A Star Path finding endless loop

    - by PoeHaH
    I have implemented the A* algorithm. Sometimes it works, sometimes it doesn't and it goes into an endless loop. After days of debugging and googling, I hope you can come to the rescue. This is my code. The algorithm:

        public ArrayList<Coordinate> findClosestPathTo(Coordinate start, Coordinate goal) {
            ArrayList<Coordinate> closed = new ArrayList<Coordinate>();
            ArrayList<Coordinate> open = new ArrayList<Coordinate>();
            ArrayList<Coordinate> travelpath = new ArrayList<Coordinate>();
            open.add(start);
            while (open.size() > 0) {
                Coordinate current = searchCoordinateWithLowestF(open);
                if (current.equals(goal)) {
                    return travelpath;
                }
                travelpath.add(current);
                open.remove(current);
                closed.add(current);
                ArrayList<Coordinate> neighbors = current.calculateCoordAdjacencies(true, rowbound, colbound);
                for (Coordinate n : neighbors) {
                    if (closed.contains(n) || map.isWalkeable(n)) {
                        continue;
                    }
                    int gScore = current.getGvalue() + 1;
                    boolean gScoreIsBest = false;
                    if (!open.contains(n)) {
                        gScoreIsBest = true;
                        n.setHvalue(manhattanHeuristic(n, goal));
                        open.add(n);
                    } else {
                        if (gScore < n.getGvalue()) {
                            gScoreIsBest = true;
                        }
                    }
                    if (gScoreIsBest) {
                        n.setGvalue(gScore);
                        n.setFvalue(n.getGvalue() + n.getHvalue());
                    }
                }
            }
            return null;
        }

    What I have found out is that it always fails whenever there's an obstacle in the path. If I run it on 'open terrain', it seems to work. It seems to be affected by this part: || map.isWalkeable(n). However, the isWalkeable function itself seems to work fine. If additional code is needed, I will provide it. Your help is greatly appreciated, thanks :)

    Read the article

  • Trouble with SAT style vector projection in C#/XNA

    - by ssb
    Simply put, I'm having a hard time working out how to work with XNA's Vector2 type while keeping the spatial relationships straight. I'm working with the separating axis theorem and trying to project vectors onto an arbitrary axis to check whether those projections overlap, but the severe lack of XNA-specific help online, combined with pseudocode everywhere that omits key parts of the algorithm, means googling has been of little help. I'm aware of HOW to project a vector, but the way I know of doing it involves the two vectors starting from the same point. Particularly here: http://www.metanetsoftware.com/technique/tutorialA.html So let's say I have a simple rectangle, and I store each of its corners in a list of Vector2s. How would I go about projecting that onto an arbitrary axis? The crux of my problem is that taking the dot product of, say, a Vector2 of (1, 0) and a Vector2 of (50, 50) won't get me the dot product I'm looking for... or will it? Because that (50, 50) won't be the vector of the polygon's vertex but whatever XNA calculates. It's getting the calculation from the right starting point that's throwing me off. I'm sorry if this is unclear, but my brain is fried from trying to think about this. I need a better understanding of how XNA treats Vector2s as actual vectors and not just as points.
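    For reference, a minimal sketch of axis projection with XNA types (not the poster's code): each corner is treated as a position vector from the origin, and its dot product with a normalized axis is its signed distance along that axis, so where the vector "starts" never enters into it.

        using Microsoft.Xna.Framework;

        public static class SatProjection
        {
            // Project every corner of a convex polygon onto a unit-length axis
            // and keep the smallest and largest scalar values.
            public static void Project(Vector2[] corners, Vector2 axis, out float min, out float max)
            {
                min = max = Vector2.Dot(corners[0], axis);
                for (int i = 1; i < corners.Length; i++)
                {
                    float p = Vector2.Dot(corners[i], axis);
                    if (p < min) min = p;
                    else if (p > max) max = p;
                }
            }

            // Two convex shapes overlap on this axis if their [min, max] intervals overlap.
            public static bool Overlaps(Vector2[] a, Vector2[] b, Vector2 axis)
            {
                float minA, maxA, minB, maxB;
                Project(a, axis, out minA, out maxA);
                Project(b, axis, out minB, out maxB);
                return maxA >= minB && maxB >= minA;
            }
        }

    If every candidate axis (the edge normals of both shapes) reports overlap, the shapes intersect; a single separating axis is enough to rule out a collision.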

    Read the article

  • How can I get started programming OpenGL on Mac OS X?

    - by Michael Stum
    I'm trying to start OpenGL programming on a Mac, which brings me into unknown territory on a lot of things. During the day I'm a web developer, working in C# and before that in PHP and Delphi, all on Windows. During the night I try to pick up Mac/OpenGL skills, but everything is so different. I've been trying to look for some books, but the OpenGL books are usually for iOS (tons of them out there) and the Mac books usually cover "normal" application development. I want to start simple with Pong, Tetris and Wolfenstein. I see that there are a bunch of different OpenGL versions out there. I know about OpenGL ES 1 & 2, but I don't know about the "big" OpenGL versions - which ones are commonly supported on 10.6 and 10.7 on current (2010/2011) Macs? Are there any up to date (Xcode 4) books or tutorials? I don't want to use a premade engine like Unity yet - again, I know next to nothing about any Mac development.

    Read the article

  • Calculate velocity of a bullet ricocheting on a circle

    - by SteveL
    I made a picture to demonstrate what I need: basically I have a bullet with a velocity and I want it to bounce off at the correct angle after it hits a circle. Solved (look at the accepted answer for the explanation):

        Vector.vector.set(bullet.vel);                 // -> v
        Vector.vector2.setDirection(pos, bullet.pos);  // -> n, normal from the center of the circle to the bullet
        float dot = Vector.vector.dot(Vector.vector2); // -> dot product
        Vector.vector2.mul(dot).mul(2);
        Vector.vector.sub(Vector.vector2);
        Vector.vector.y = -Vector.vector.y;            // -> for some reason I had to invert the y
        bullet.vel.set(Vector.vector);
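    For comparison, the standard reflection formula r = v - 2(v.n)n written out with XNA-style vectors; this is a sketch, assuming n is the unit normal from the circle's center to the impact point (XNA also ships Vector2.Reflect, which does the same thing):

        using Microsoft.Xna.Framework;

        public static Vector2 Ricochet(Vector2 velocity, Vector2 circleCenter, Vector2 impactPoint)
        {
            // Surface normal at the point of impact.
            Vector2 n = Vector2.Normalize(impactPoint - circleCenter);
            // Mirror the velocity about the surface: r = v - 2(v.n)n.
            float d = Vector2.Dot(velocity, n);
            return velocity - 2f * d * n;
        }

    With the normal computed this way there is usually no need for an extra y inversion; needing one tends to mean the normal or the coordinate system points the other way.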

    Read the article

  • exact point on a rotating sphere

    - by nkint
    I have a sphere that represents the Earth, textured with real pictures. It rotates around the x axis, and when the user clicks it has to show the exact place they clicked on. For example, if they clicked on Singapore the system should be able to:
    - understand that the user clicked on the sphere (OK, I'll do it with unProject)
    - understand where on the sphere the user clicked (ray-sphere collision?), taking the rotation transform into account
    - convert the sphere coordinate to some coordinate system suitable for a web API service
    - query the API (OK, this is the simpler part for me ;-)
    Any advice?
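    A minimal ray-sphere test for the second step, sketched in XNA-style C# (the question doesn't name a framework, so the types here are an assumption); the returned hit point can then be run through the inverse of the sphere's rotation before converting to latitude/longitude:

        using Microsoft.Xna.Framework;

        public static Vector3? RaySphereHit(Vector3 rayOrigin, Vector3 rayDir, Vector3 center, float radius)
        {
            // Assumes rayDir is normalized. Solve |o + t*d - c|^2 = r^2 for t.
            Vector3 m = rayOrigin - center;
            float b = Vector3.Dot(m, rayDir);
            float c = Vector3.Dot(m, m) - radius * radius;
            float disc = b * b - c;
            if (disc < 0f)
                return null;                        // the ray misses the sphere
            float t = -b - (float)System.Math.Sqrt(disc);
            if (t < 0f)
                t = 0f;                             // ray origin is inside the sphere
            return rayOrigin + t * rayDir;          // nearest intersection point
        }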

    Read the article

  • When mapping the surface of a sphere with tiles, how might you deal with polar distortion?

    - by clweeks
    It's easy to deal with the way locations interact on a clean Cartesian grid. It's just vanilla math, and you can kind of ignore the geometry of the sphere's surface for a bunch of it if you want to just truncate the poles or something. But I keep coming up with ideas for games where the polar space matters: geo-coded ARGs, global roguelikes and the like. I want square(ish?) locations -- reasonably representable by square tiles of the same size across the globe, anyway. This has to be a solved problem, right? What are the solutions? ETA: At the equator -- and assuming that your square locations are reasonably small -- it's close enough to true that you can get away with having one square in the rows north and south for each square in the most equatorial row, and you could probably get away with hand-waving the difference up to 45 degrees or so. But eventually you need to have fewer squares in a pole-ward circumferential row. If I reduce the length of the row by one and offset the squares by 1/2, then they're just like hexes and it's relatively easy to do the coding to keep track of the connections. But as you get pole-ward, it gets more and more extreme. Projecting the surface of the world onto the surface of a cube is tempting, but I figured there must be more elegant solutions already in use. If I did the cube thing (not dissecting it further through geodesy), are there any pros and cons related to placing the pole at the center of a face or at the vertex of three sides?

    Read the article

  • 2D scene graph not transforming relative to parent

    - by Dr.Denis McCracleJizz
    I am currently in the process of coding my own 2D scene graph, which is basically a port of Flash's render engine. The problem I have right now is that my rendering doesn't seem to be working properly. This code creates the localTransform property for each DisplayObject:

        Matrix m_transform = Matrix.CreateRotationZ(rotation) *
                             Matrix.CreateScale(scaleX, scaleY, 1) *
                             Matrix.CreateTranslation(new Vector3(x, y, z));

    This is my render code:

        float dRotation;
        Vector2 dPosition, dScale;
        Matrix transform;

        transform = this.localTransform;
        if (parent != null)
            transform = localTransform * parent.localTransform;

        DecomposeMatrix(ref transform, out dPosition, out dRotation, out dScale);

        spriteBatch.Draw(this.texture, dPosition, null, Color.White, dRotation,
            new Vector2(originX, originY), dScale, SpriteEffects.None, 0.0f);

    Here is the result when I add the Stage, then a first DisplayObjectContainer to the stage, and then a second one inside it. It may look fine, but the problem lies in the fact that I add the first DisplayObjectContainer at (400, 400) and the second one within it (that's the smallest one) at position (0, 0). So it should be rendered right over its parent, but it gets rendered within the parent at the same offset the parent has (400, 400) for some reason. It's just as if I doubled the parent's localMatrix and then rendered the second cat there. This is the code I use to loop through every child:

        base.Draw(spriteBatch);
        foreach (DisplayObject childs in _childs)
        {
            childs.Draw(spriteBatch);
        }

    Read the article

  • Android how to get opengl 3D coordinates in ontouch event

    - by Sandy
    I created a cube in OpenGL and it rotates in the onTouch event. To do this I created a CustomSurfaceView as follows:

        public class CustomSurfaceView extends GLSurfaceView {
            @Override
            public boolean onTouchEvent(MotionEvent e) {
                float x = e.getX();
                float y = e.getY();
            }
        }

    Here x and y are screen coordinates. How can I get 3D coordinates from this? I have already looked at gluProject and NeHe, but I don't know how to implement this in my project; it complains that there is no GLdouble or GLfloat class.

    Read the article

  • Move the location of the XYZ pivot point on a mesh in UDK

    - by WebDevHobo
    When working with any mesh, you get an XYZ point somewhere on it. If you just want to move the mesh in any direction, it doesn't matter where this point is located. However, I want to rotate a door, and this requires the point of rotation to be very specific. I can't find anywhere how to change the location of the point. Can anyone help? EDIT: Solved. To change the pivot point, right-click on the mesh, go to "Pivot" and move it. Then right-click again and this time select "Save PrePivot to Pivot".

    Read the article

  • How to attach an object to a rotating circle?

    - by armands
    I am trying to make an object attach itself, at the point of collision, to a circle that is rotating, but it needs to be attached by a fixed point on the player. For example, the player is moving back and forth, and when the user touches the screen the player jumps up. What I need is that when the player collides with the circle, it attaches its legs to it and keeps rotating with the circle. So I wanted to know how to make this kind of collision joint in Cocos2d with Box2D.

    Read the article

  • forward motion car physics - gradual slow

    - by spartan2417
    I'm having trouble creating realistic car movement in XNA 4. Right now I have a car going forward and hitting a terminal velocity, which is fine, but when I release the up key I need the car to slow down gradually and then come to a stop. I'm pretty sure this is easy code but I can't seem to get it to work. The update code:

        if (Keyboard.GetState().IsKeyDown(Keys.Up))
        {
            double elapsedTime = gameTime.ElapsedGameTime.Milliseconds;
            CalcTotalForce();
            Acceleration = Vector2.Divide(CalcTotalForce(), MASS);
            Velocity = Vector2.Add(Velocity, Vector2.Multiply(Acceleration, (float)(elapsedTime)));
            Position = Vector2.Add(Position, Vector2.Multiply(Velocity, (float)(elapsedTime)));
        }

    The added functions:

        public Vector2 CalcTraction()
        {
            // Traction force = direction vector * engine force
            return Vector2.Multiply(forwardDirection, ENGINE_FORCE);
        }

        public Vector2 CalcDrag()
        {
            // Drag force = drag constant * velocity * speed
            return Vector2.Multiply(Vector2.Multiply(Velocity, DRAG_CONST), Velocity.Y);
        }

        public Vector2 CalcRoll()
        {
            // Rolling resistance = roll constant * velocity
            return Vector2.Multiply(Velocity, ROLL_CONST);
        }

        public Vector2 CalcTotalForce()
        {
            // Total force = traction + (-drag) + (-rolling)
            return Vector2.Add(CalcTraction(), Vector2.Add(-CalcDrag(), -CalcRoll()));
        }

    Anyone have any ideas?
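    One common fix, sketched below under the assumption that the existing helpers behave as their comments say: integrate drag and rolling resistance every frame, and only add engine traction while Up is held, so the car coasts to a stop by itself once the key is released.

        // Inside Update(GameTime gameTime), replacing the key-gated block above.
        float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

        Vector2 traction = Keyboard.GetState().IsKeyDown(Keys.Up)
            ? CalcTraction()
            : Vector2.Zero;

        // Drag and rolling resistance always oppose the current velocity.
        Vector2 totalForce = traction - CalcDrag() - CalcRoll();

        Acceleration = totalForce / MASS;
        Velocity += Acceleration * dt;
        Position += Velocity * dt;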

    Read the article

  • Creating my own kill cam

    - by DalexL
    I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games that show the actual killing scene from the view of the killer. My thoughts:
    - I can't just start keeping things in memory when people kill others, because I wouldn't know when to start the 'recording process'. There is no way for me to accurately determine when somebody is 'about' to kill someone.
    - My only real idea so far is to have a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10 second delay. That way, all the kill cams would be 10 seconds long and the person's camera would just be moved to the second world of their killer.
    My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can!
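    The usual alternative to a delayed duplicate world is a rolling buffer of recent snapshots: record the minimal state needed to replay the scene every tick, discard anything older than the replay window, and when a kill happens play the buffer back from the killer's camera. A minimal sketch in C#, with a hypothetical Snapshot type that is not part of any particular engine:

        using System.Collections.Generic;

        public class ReplayBuffer
        {
            private readonly Queue<Snapshot> snapshots = new Queue<Snapshot>();
            private readonly float windowSeconds;

            public ReplayBuffer(float windowSeconds)
            {
                this.windowSeconds = windowSeconds;   // e.g. 10 seconds
            }

            // Call once per tick with whatever the replay needs
            // (entity positions, rotations, the killer's view, etc.).
            public void Record(Snapshot snapshot, float currentTime)
            {
                snapshots.Enqueue(snapshot);
                while (snapshots.Count > 0 && currentTime - snapshots.Peek().Time > windowSeconds)
                    snapshots.Dequeue();
            }

            // On a kill, hand a copy of the window to the playback code.
            public Snapshot[] Dump()
            {
                return snapshots.ToArray();
            }
        }

    This records whether or not a kill ends up happening, which sidesteps the "when do I start recording" problem at the cost of some memory.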

    Read the article

  • HLSL: An array of textures and sampler states

    - by nate142
    The shader must switch between multiple textures depending on the alpha value of the original texture for each pixel. Now this would work fine if I didn't have to worry about sampler states. I have created my array of textures and can select a texture based on the alpha value of the pixel, but how do I create an array of SamplerStates and link it to my array of textures? I attempted to treat the SamplerState as a function by adding the (int i), but that didn't work. Also, I can't use Texture.Sample since this is shader model 2.0.

        // shader model 2.0 (DX9)
        texture subTextures[255];

        SamplerState MeshTextureSampler(int i)
        {
            Texture = (subTextures[i]);
        };

        float4 SampleCompoundTexture(float2 texCoord, float4 diffuse)
        {
            float4 SelectedColor = SAMPLE_TEXTURE(Texture, texCoord);
            int i = SelectedColor.a;
            texture SelectedTx = subTextures[i];
            return tex2D(MeshTextureSampler(i), texCoord) * diffuse;
        }

    Read the article

  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, the graph nodes are labeled as RED and BLACK based on a certain criterion. (A) A subgraph containing all the RED nodes (with edges only between directly connected neighbours) is formed (note: although the figure shows a tree forming, it may well happen that the subgraph contains loops). (B) Problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), but under the constraint that I have to keep the already present edges in the final result. Question: Is there a fast way of achieving this given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class. Some side remarks: it would be nice for the triangulation algorithm to take a certain vertex priority into account when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first - I can implement such a routine iteratively, but it's O(n^2)). It would also be nice to somehow reflect the "hop distance" when adding edges: add edges first between the nodes that were "closer" to each other in the starting topology. Nevertheless, disregarding the remarks, is there an already known scenario similar to this one where a triangulation is built upon a partially given set of triangles/edges?

    Read the article

  • Why does clip space in OpenGL have 4 dimensions?

    - by user827992
    I will use this as a generic reference, but the more I browse online docs and books, the less I understand about it.

        const float vertexPositions[] = {
            0.75f, 0.75f, 0.0f, 1.0f,
            0.75f, -0.75f, 0.0f, 1.0f,
            -0.75f, -0.75f, 0.0f, 1.0f,
        };

    In this online book there is an example of how to draw the first and classic hello world of OpenGL: a triangle. The vertex structure for the triangle is declared as stated in the code above. The book, like all the other sources about this, stresses the point that clip space is a 4D structure that is used to decide what will be rasterized and rendered to the screen. Here are my questions: I can't picture something in 4D; I don't think a human can. What is the fourth dimension of this clip space for? The most human-readable doc I have read speaks about a camera, which is just an abstraction over the clipping concept, and I get that. The problem is: why not use the concept of a camera in the first place, which is a more familiar 3D structure? The only problem with the concept of a camera is that you need to define the perspective in another way, so you basically have to add another statement about what kind of camera you wish to have. How am I supposed to read 0.75f, 0.75f, 0.0f, 1.0f? All I get is that they are all float values; I understand the meaning of the first 3 values, but what does the last one mean?

    Read the article

  • Movement of body after applying weld joint

    - by ved
    I have two rectangular bodies and I've successfully applied a weld joint to them. I want to move the joined body by applying a linear impulse. After the weld joint, these two bodies become a single body, right? How do I apply a force/impulse to the joined body? I am using Box2D with LibGDX. I've tried this:

        polygon1.applyLinearImpulse(new Vector2(-5, 0), polygon1.getWorldCenter(), true);

    I thought that if I moved polygon1 then polygon2 would also move because of the weld joint, but it is not working properly. Why don't they move together after being welded?

    Read the article

  • backface culling error

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation. (I can't insert the picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;

        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }

        for (Triangle& t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason?

    Read the article

  • Map format for 3d open world

    - by Pacha
    I am making an open world 3D platformer in Ogre3D, and I have no idea what kind of 3D map file format I should use for it. I want to make low-polygon, blocky-style objects: probably rectangles and other geometric figures that don't have curved edges. Some of those blocks will have properties, like being climbable, or they might move. I was wondering what would be the best way to make the map (just one level, as it is open).

    Read the article

  • Projecting onto different size screens by cropping

    - by Jason
    Hi, I am building a phone application which will display a shape on screen. The shape should look the same on different screen sizes. I decided the best way to do this is to show more of the background on larger screens, keeping the shape's proportions the same on all screens. My problem is that I am not sure how to achieve this. I can query the screen size at runtime and calculate how different it is from the size it was designed for, but I am not sure what to do with this value. What kind of projection should I use for my orthographic matrix, and how will I display more on larger screens without losing information on smaller screens? Thanks, Jason.
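    One common way to do this kind of cropping with an orthographic matrix, sketched here with XNA types as an assumption (the question doesn't name a framework): fix the visible height in world units to the design size and derive the visible width from the actual aspect ratio, so wider screens simply see more of the scene while the shape keeps the same proportions.

        // worldHeight is the design height in world units; only the width varies.
        float aspect = (float)GraphicsDevice.Viewport.Width / GraphicsDevice.Viewport.Height;
        float worldHeight = 10f;
        float worldWidth = worldHeight * aspect;

        Matrix projection = Matrix.CreateOrthographic(worldWidth, worldHeight, 0.1f, 100f);

    If some screens can also be narrower than the design, the same idea works the other way round (fix the width and vary the height), or both, by fitting the design rectangle and extending whichever side has room.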

    Read the article

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library being used (OpenGL/DirectX). First question: the model should know nothing about the renderer implementation, but in that case where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model
        {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object? Second question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with a texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    and then check each flag in the Renderer::DrawModel method. But it looks like this will become unwieldy as the number of options grows...
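    One common answer to the first question, shown as a sketch (in C#-style code for brevity; the structure maps directly to C++): keep the Model completely API-agnostic and let each renderer own a lookup from model to its API-specific resources, created lazily the first time that model is drawn. All names below are hypothetical.

        using System.Collections.Generic;

        public class Dx11Renderer : IRenderer
        {
            // API-specific data lives with the renderer, keyed by the model.
            private readonly Dictionary<Model, VertexBufferHandle> cache =
                new Dictionary<Model, VertexBufferHandle>();

            public void DrawObject(Model model)
            {
                VertexBufferHandle vb;
                if (!cache.TryGetValue(model, out vb))
                {
                    vb = UploadVertices(model.GetVertices()); // DX11-specific upload
                    cache.Add(model, vb);
                }
                // ... bind vb, set shaders/state, issue the draw call ...
            }
        }

    The model never references the cache, so switching to an OpenGL renderer only means swapping in a different IRenderer implementation with its own lookup.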

    Read the article

  • XNA model drawing problem

    - by user1990950
    When using this code:

        public static void DrawModel(Model model, Vector3 position, Vector3 offset,
            float xRotation, float yRotation, float zRotation, float allrot,
            float xScale, float yScale, float zScale)
        {
            position.Y *= -1;
            offset.Y *= -1;

            Matrix worldMatrix =
                ((Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) *
                  Matrix.CreateRotationX(MathHelper.ToRadians(xRotation))) *
                  Matrix.CreateRotationY(MathHelper.ToRadians(yRotation))) *
                (Matrix.CreateTranslation(offset) *
                 Matrix.CreateRotationY(MathHelper.ToRadians(allrot))) *
                Matrix.CreateScale(xScale, yScale, zScale);

            worldMatrix *= Matrix.CreateTranslation(position) *
                theCamera.GetTransformation() *
                Matrix.CreateTranslation(new Vector3(-(graphics.GraphicsDevice.Viewport.Width / 2),
                    graphics.GraphicsDevice.Viewport.Height / 2, 0));

            foreach (ModelMesh mesh in model.Meshes)
            {
                for (int i = 0; i < mesh.Effects.Count; i++)
                {
                    ((BasicEffect)mesh.Effects[i]).EnableDefaultLighting();
                    ((BasicEffect)mesh.Effects[i]).World = worldMatrix;
                    ((BasicEffect)mesh.Effects[i]).View = viewMatrix;
                    ((BasicEffect)mesh.Effects[i]).Projection = projectionMatrix;
                }
                mesh.Draw();
            }
        }

    The model rotates and then scales. It should scale and then rotate, but whenever I try to change it, it won't work.
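    For reference, a sketch of a world matrix with the scale applied before the rotations; in XNA's row-vector convention the leftmost factor is applied first, so the scale goes on the far left (whether this also keeps the intended offset/allrot behaviour is an assumption):

        Matrix worldMatrix =
            Matrix.CreateScale(xScale, yScale, zScale) *
            Matrix.CreateRotationZ(MathHelper.ToRadians(zRotation)) *
            Matrix.CreateRotationX(MathHelper.ToRadians(xRotation)) *
            Matrix.CreateRotationY(MathHelper.ToRadians(yRotation)) *
            Matrix.CreateTranslation(offset) *
            Matrix.CreateRotationY(MathHelper.ToRadians(allrot)) *
            Matrix.CreateTranslation(position);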

    Read the article

  • Per-pixel collision detection - why does XNA transform matrix return NaN when adding scaling?

    - by JasperS
    I looked at the TransformCollision sample on MSDN and added the Matrix.CreateTranslation part to a property in my collision detection code, but I wanted to add scaling. The code works fine when I leave scaling commented out, but when I add it and then do a Matrix.Invert() on the created translation matrix, the result is NaN ({NaN,NaN,NaN},{NaN,NaN,NaN},...). Can anyone tell me why this is happening, please? Here's the code from the sample:

        // Build the block's transform
        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            // Matrix.CreateScale(block.Scale) * would go here
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        public static bool IntersectPixels(
            Matrix transformA, int widthA, int heightA, Color[] dataA,
            Matrix transformB, int widthB, int heightB, Color[] dataB)
        {
            // Calculate a matrix which transforms from A's local space into
            // world space and then into B's local space
            Matrix transformAToB = transformA * Matrix.Invert(transformB);

            // When a point moves in A's local space, it moves in B's local space with a
            // fixed direction and distance proportional to the movement in A.
            // This algorithm steps through A one pixel at a time along A's X and Y axes
            // Calculate the analogous steps in B:
            Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, transformAToB);
            Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, transformAToB);

            // Calculate the top left corner of A in B's local space
            // This variable will be reused to keep track of the start of each row
            Vector2 yPosInB = Vector2.Transform(Vector2.Zero, transformAToB);

            // For each row of pixels in A
            for (int yA = 0; yA < heightA; yA++)
            {
                // Start at the beginning of the row
                Vector2 posInB = yPosInB;

                // For each pixel in this row
                for (int xA = 0; xA < widthA; xA++)
                {
                    // Round to the nearest pixel
                    int xB = (int)Math.Round(posInB.X);
                    int yB = (int)Math.Round(posInB.Y);

                    // If the pixel lies within the bounds of B
                    if (0 <= xB && xB < widthB && 0 <= yB && yB < heightB)
                    {
                        // Get the colors of the overlapping pixels
                        Color colorA = dataA[xA + yA * widthA];
                        Color colorB = dataB[xB + yB * widthB];

                        // If both pixels are not completely transparent,
                        if (colorA.A != 0 && colorB.A != 0)
                        {
                            // then an intersection has been found
                            return true;
                        }
                    }

                    // Move to the next pixel in the row
                    posInB += stepX;
                }

                // Move to the next row
                yPosInB += stepY;
            }

            // No intersection found
            return false;
        }
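    One possible cause, offered as an assumption rather than something the sample confirms: if block.Scale is zero on any axis (for example a default-initialized value), the transform has a determinant of 0, and Matrix.Invert on a singular matrix produces NaN entries. A quick check makes that failure obvious:

        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            Matrix.CreateScale(block.Scale) *
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        System.Diagnostics.Debug.Assert(blockTransform.Determinant() != 0f,
            "Block transform is not invertible - check for a zero scale.");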

    Read the article
