Search Results

Search found 779 results on 32 pages for 'coordinate'.

Page 1/32 | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • C++: Error in Xcode; "Graph::Coordinate::Coordinate()", referenced from: ...

    - by Alexandstein
    In a program I am writing, I wrote two classes (Coordinate and Graph), with one of them taking the other as a constructor argument. When I try to compile it I get the following error for Graph.cpp: Undefined symbols: "Graph::Coordinate::Coordinate(double)", referenced from: Graph::Graph() in Graph.o Graph::Graph() in Graph.o "Graph::Coordinate::Coordinate()", referenced from: Graph::Graph(Graph::Coordinate, Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate, Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate)in Graph.o Graph::Graph(Graph::Coordinate)in Graph.o Graph::Graph() in Graph.o Graph::Graph() in Graph.o Graph::Graph() in Graph.o Graph::Graph() in Graph.o Graph::Graph() in Graph.o Graph::Graph() in Graph.o ld: symbol(s) not found collect2: ld returned 1 exit status I checked the code and couldn't find anything out of the ordinary. Here are the four class files (sorry if it's a lot of code to sift through):
Coordinate.h class Graph{ #include "Coordinate.h" public: Graph(); Graph(Coordinate); Graph(Coordinate, Coordinate); Graph(Coordinate, Coordinate, Coordinate); void setXSize(int); void setYSize(int); void setX(int); //int corresponds to coordinates 1, 2, or 3 void setY(int); void setZ(int); int getXSize(); int getYSize(); double getX(int); //int corresponds to coordinates 1, 2, or 3 double getY(int); double getZ(int); void outputGraph(); void animateGraph(); private: int xSize; int ySize; Coordinate coord1; Coordinate coord2; Coordinate coord3; }; Coordinate.cpp #include <iostream> #include "Coordinate.h" Coordinate::Coordinate() { xCoord = 1; yCoord = 1; zCoord = 1; xVel = 1; yVel = 1; zVel = 1; } Coordinate::Coordinate(double xCoo) { xCoord = xCoo; yCoord = 1; zCoord = 1; xVel = 1; yVel = 1; zVel = 1; } Coordinate::Coordinate(double xCoo,double yCoo) { xCoord = xCoo; yCoord = yCoo; zCoord = 1; xVel = 1; yVel = 1; zVel = 1; } Coordinate::Coordinate(double xCoo,double yCoo,double zCoo) { xCoord = xCoo; yCoord = yCoo; zCoord = zCoo; xVel = 1; yVel = 1; zVel = 1; } void Coordinate::setXCoord(double xCoo) { xCoord = xCoo; } void Coordinate::setYCoord(double yCoo) { yCoord = yCoo; } void Coordinate::setZCoord(double zCoo) { zCoord = zCoo; } void Coordinate::setXVel(double xVelo) { xVel = xVelo; } void Coordinate::setYVel(double yVelo) { yVel = yVelo; } void Coordinate::setZVel(double zVelo) { zVel = zVelo; } double Coordinate::getXCoord() { return xCoord; } double Coordinate::getYCoord() { return yCoord; } double Coordinate::getZCoord() { return zCoord; } double Coordinate::getXVel() { return xVel; } double Coordinate::GetYVel() { return yVel; } double Coordinate::GetZVel() { return zVel; } Graph.h class Graph{ #include "Coordinate.h" public: Graph(); Graph(Coordinate); Graph(Coordinate, Coordinate); Graph(Coordinate, Coordinate, Coordinate); void setXSize(int); void setYSize(int); void setX(int); //int corresponds to coordinates 1, 2, or 3 void setY(int); void setZ(int); int getXSize(); int getYSize(); double getX(int); //int corresponds to coordinates 1, 2, or 3 double getY(int); double getZ(int); void outputGraph(); void animateGraph(); private: int xSize; int ySize; Coordinate coord1; Coordinate coord2; Coordinate coord3; }; Graph.cpp #include "Graph.h" #include "Coordinate.h" #include <iostream> #include <ctime> using namespace std; Graph::Graph() { Coordinate coord1(0); } Graph::Graph(Coordinate cOne) { coord1 = cOne; xSize = 20; ySize = 20; } Graph::Graph(Coordinate cOne, Coordinate cTwo) { coord1 = cOne; coord2 = cTwo; xSize = 20; ySize = 20; } Graph::Graph(Coordinate cOne, Coordinate cTwo, Coordinate cThree) { coord1 = cOne; coord2 = cTwo; coord3 = cThree; xSize = 20; ySize = 20; } void Graph::setXSize(int size) { xSize = size; } void Graph::setYSize(int size) { ySize = size; } int Graph::getXSize() { return xSize; } int Graph::getYSize() { return ySize; } void Graph::outputGraph() { } void Graph::animateGraph() { } Thanks very much for any help!

    Read the article

  • Converting world space coordinate to screen space coordinate and getting incorrect range of values

    - by user1423893
    I'm attempting to convert from world space coordinates to screen space coordinates. I have the following code to transform my object position Vector3 screenSpacePoint = Vector3.Transform(object.WorldPosition, camera.ViewProjectionMatrix); The value does not appear to be in screen space coordinates and is not limited to a [-1, 1] range. What step have I missed out in the conversion process? EDIT: Projection Matrix Perspective(game.GraphicsDevice.Viewport.AspectRatio, nearClipPlaneZ, farClipPlaneZ); private void Perspective(float aspect_Ratio, float z_NearClipPlane, float z_FarClipPlane) { nearClipPlaneZ = z_NearClipPlane; farClipPlaneZ = z_FarClipPlane; float yZoom = 1f / (float)Math.Tan(fov * 0.5f); float xZoom = yZoom / aspect_Ratio; matrix_Projection.M11 = xZoom; matrix_Projection.M12 = 0f; matrix_Projection.M13 = 0f; matrix_Projection.M14 = 0f; matrix_Projection.M21 = 0f; matrix_Projection.M22 = yZoom; matrix_Projection.M23 = 0f; matrix_Projection.M24 = 0f; matrix_Projection.M31 = 0f; matrix_Projection.M32 = 0f; matrix_Projection.M33 = z_FarClipPlane / (nearClipPlaneZ - farClipPlaneZ); matrix_Projection.M34 = -1f; matrix_Projection.M41 = 0f; matrix_Projection.M42 = 0f; matrix_Projection.M43 = (nearClipPlaneZ * farClipPlaneZ) / (nearClipPlaneZ - farClipPlaneZ); matrix_Projection.M44 = 0f; } View Matrix // Make our view matrix Matrix.CreateFromQuaternion(ref orientation, out matrix_View); matrix_View.M41 = -Vector3.Dot(Right, position); matrix_View.M42 = -Vector3.Dot(Up, position); matrix_View.M43 = Vector3.Dot(Forward, position); matrix_View.M44 = 1f; // Create the combined view-projection matrix Matrix.Multiply(ref matrix_View, ref matrix_Projection, out matrix_ViewProj); // Update the bounding frustum boundingFrustum.SetMatrix(matrix_ViewProj);
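
    The step that usually goes missing here is the perspective divide: multiplying by the view-projection matrix gives clip-space values, and they only fall into the [-1, 1] NDC range after dividing by w (XNA's Vector3.Transform does not perform that divide). A minimal sketch of the full math, written here in Java with placeholder names rather than the XNA API:

        // Assumption: m is the view-projection matrix stored row-major as float[16],
        // used in XNA's row-vector convention (screen = world * M). world = {x, y, z}.
        static float[] worldToScreen(float[] world, float[] m, float viewportW, float viewportH) {
            float x = world[0], y = world[1], z = world[2];
            // clip space = (x, y, z, 1) * viewProj
            float cx = x * m[0] + y * m[4] + z * m[8]  + m[12];
            float cy = x * m[1] + y * m[5] + z * m[9]  + m[13];
            float cz = x * m[2] + y * m[6] + z * m[10] + m[14];
            float cw = x * m[3] + y * m[7] + z * m[11] + m[15];
            // perspective divide -> normalized device coordinates in [-1, 1]
            float ndcX = cx / cw, ndcY = cy / cw, ndcZ = cz / cw;
            // viewport transform -> pixels (y flipped because screen y grows downward)
            float sx = (ndcX * 0.5f + 0.5f) * viewportW;
            float sy = (1.0f - (ndcY * 0.5f + 0.5f)) * viewportH;
            return new float[] { sx, sy, ndcZ };
        }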

    Read the article

  • How to get image's coordinate on JPanel

    - by Jessy
    This question is related to my previous question http://stackoverflow.com/questions/2376027/how-to-generate-cartesian-coordinate-x-y-from-gridbaglayout I have successfully gotten the coordinates of each picture; however, when I compare the coordinates printed with System.out.println against the placement of the images on the screen, they seem to be wrong. e.g. on the screen it is obvious that the x position of the first picture is in cell 2, which corresponds to a coordinate of 20, but the program shows x=1. Here is part of the code: public Grid (){ setPreferredSize(new Dimension(600,600)); .... setLayout(new GridBagLayout()); GridBagConstraints gc = new GridBagConstraints(); gc.weightx = 1d; gc.weighty = 1d; gc.insets = new Insets(0, 0, 0, 0);//top, left, bottom, and right gc.fill = GridBagConstraints.BOTH; JLabel[][] label = new JLabel[ROWS][COLS]; Random rand = new Random(); // fill the panel with labels for (int i=0;i<IMAGES;i++){ ImageIcon icon = createImageIcon("myPics.jpg"); int r, c; do{ //pick random cell which is empty r = (int)Math.floor(Math.random() * ROWS); c = (int)Math.floor(Math.random() * COLS); } while (label[r][c]!=null); //randomly scale the images int x = rand.nextInt(50)+30; int y = rand.nextInt(50)+30; Image image = icon.getImage().getScaledInstance(x,y, Image.SCALE_SMOOTH); icon.setImage(image); JLabel lbl = new JLabel(icon); // Instantiate GUI components gc.gridx = r; gc.gridy = c; add(lbl, gc); //add(component, constraintObj); label[r][c] = lbl; } I checked the coordinates with this code: Component[] components = getComponents(); for (Component component : components) { System.out.println(component.getBounds()); }
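
    Two things seem worth checking in the posted snippet (a hedged note, not a full answer): GridBagConstraints.gridx is the column and gridy is the row, while the code assigns gridx = r and gridy = c; and gridx/gridy are cell indices, not pixel coordinates, so pixel positions from getBounds() are only meaningful after the container has actually been laid out (packed or made visible). A small sketch of reading the pixel positions at that point:

        import java.awt.*;

        // Hypothetical helper: print where each child ended up, in pixels.
        // Only meaningful after the container has been laid out (packed / made visible).
        static void dumpChildBounds(Container panel) {
            for (Component component : panel.getComponents()) {
                Rectangle b = component.getBounds();  // pixels, relative to the panel
                System.out.println("x=" + b.x + " y=" + b.y + " w=" + b.width + " h=" + b.height);
            }
        }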

    Read the article

  • Android Canvas Coordinate System

    - by Mitch
    I'm trying to find information on how to change the coordinate system for the canvas. I have some vector data I'd like to draw to a canvas using things like circles and lines, but the data's coordinate system doesn't match the canvas coordinate system. Is there a way to map the units I'm using to the screen's units? I'm drawing to an ImageView which isn't taking up the entire display. If I have to do my own calculations prior to each drawing call, how do I find the width and height of my ImageView? The getWidth() and getHeight() calls I tried seem to be returning the entire canvas size and not the size of the ImageView, which isn't helpful. I see some matrix stuff; is that something that will work for me? I tried to use the "public void scale(float sx, float sy)" method, but that works more like a pixel-level zoom, expanding each pixel, rather than a vector scale function. This means that if the dimensions are increased to fit the screen, the line thickness is also increased. Update: After some research I'm starting to think there's no way to change the coordinate system to something else. I'll need to map all my coordinates to the screen's pixel coordinates and do so by modifying each vector. The getWidth() and getHeight() calls seem to be working better for me now. I can't say what was wrong, but I suspect I can't use these methods inside the constructor.
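
    A hedged sketch of the "map the points yourself" approach: a View's getWidth()/getHeight() return 0 until it has been measured and laid out (so not in the constructor; onSizeChanged is a common place to read them), and instead of scaling the canvas (which also scales stroke widths), the data points can be run through an android.graphics.Matrix and drawn in pixel space. The world rectangle and view size names below are placeholders:

        // Assumption: viewWidth/viewHeight come from onSizeChanged, and the data lives
        // in a world rectangle worldLeft..worldRight / worldTop..worldBottom.
        android.graphics.Matrix worldToScreen = new android.graphics.Matrix();
        worldToScreen.setRectToRect(
                new android.graphics.RectF(worldLeft, worldTop, worldRight, worldBottom),
                new android.graphics.RectF(0, 0, viewWidth, viewHeight),
                android.graphics.Matrix.ScaleToFit.FILL);

        // Map individual points and draw them; stroke widths stay in pixels:
        float[] pt = { wx, wy };          // a point in the data's own units
        worldToScreen.mapPoints(pt);      // now in pixels
        canvas.drawCircle(pt[0], pt[1], radiusPx, paint);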

    Read the article

  • OpenGLES rotation in fixed coordinate system

    - by Jenicek
    Hi, I'm having real trouble finding out how to rotate an object around two axes without changing the axes' orientation. I need only local rotation, first around the X axis and then around the Y axis (only an example; it doesn't matter how many transformations around which axes), without transforming the whole coordinate system, only the object. The problem is that if I use glRotatef around the X axis, the axes are rotated as well, and that's what I don't want. I've read a bunch of articles about it but it seems I'm still missing something. Thanks for any help. To have some sample code here, it's something like this glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glRotatef(rotX, 1.0f, 0.0f, 0.0f); glRotatef(rotY, 0.0f, 1.0f, 0.0f); drawObject(); but this transforms the coordinate system also.
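
    In fixed-function GL each glRotatef post-multiplies the current matrix, so later rotations happen in the already-rotated (local) frame. One common way to rotate about fixed world axes is to accumulate the object's orientation in your own matrix and left-multiply each new fixed-axis rotation onto it. A hedged Android-Java sketch using android.opengl.Matrix (the accumulator name and angles are placeholders):

        // Assumption: 'orientation' is a float[16] kept across frames,
        // initialised once with Matrix.setIdentityM(orientation, 0).
        float[] step = new float[16];
        float[] tmp  = new float[16];

        // New rotation about the *fixed* world X axis:
        Matrix.setRotateM(step, 0, deltaX, 1f, 0f, 0f);
        Matrix.multiplyMM(tmp, 0, step, 0, orientation, 0);  // left-multiply = world-axis rotation
        System.arraycopy(tmp, 0, orientation, 0, 16);

        // Same pattern for the fixed world Y axis:
        Matrix.setRotateM(step, 0, deltaY, 0f, 1f, 0f);
        Matrix.multiplyMM(tmp, 0, step, 0, orientation, 0);
        System.arraycopy(tmp, 0, orientation, 0, 16);

        // Then draw the object with 'orientation' as its model matrix,
        // e.g. gl.glLoadMatrixf(orientation, 0) in ES 1.x, or pass it to a shader in 2.x.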

    Read the article

  • Opengl Coordinate System

    - by praveen
    Say I am using an identity matrix for my model-view transformation matrix in an OpenGL ES 2.0 program. The coordinate system in this case is the canonical OpenGL coordinate system, which extends from (-1,-1,-1) to (1,1,1). My question is: is this coordinate system right-handed or left-handed? A broader question: is there a document for OpenGL that lists all the mathematical conventions followed by the API?

    Read the article

  • Coordinate geometry operations in images/discrete space

    - by avd
    I have images which contain line segments, rays, etc. I am representing these line segments using Bresenham's algorithm (i.e. as whatever coordinates the algorithm produces between two endpoints). Now I want to do operations such as finding the intersection point of two line segments, finding the projection of one vector onto another, etc. The problem is that I am not working in continuous space; the line segments are approximated using Bresenham's algorithm. So I would like suggestions on the best and most efficient ways to do this. A link to a C++ library or implementation would also be good enough. Please suggest some books which deal with such problems.
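
    One common approach, sketched here as an assumption rather than a prescription: keep the exact integer endpoints of each segment and do the geometry on those with integer arithmetic (orientation tests via cross products), using Bresenham only to rasterise for display; the intersection point itself is generally rational and can be rounded back onto the grid. The question asks for C++; the same test in Java:

        // Exact intersection *test* for two integer segments using orientation signs.
        // long arithmetic keeps the cross products exact for 32-bit coordinates.
        static long cross(long ax, long ay, long bx, long by, long cx, long cy) {
            return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        }

        static boolean segmentsIntersect(long x1, long y1, long x2, long y2,
                                         long x3, long y3, long x4, long y4) {
            long d1 = cross(x3, y3, x4, y4, x1, y1);
            long d2 = cross(x3, y3, x4, y4, x2, y2);
            long d3 = cross(x1, y1, x2, y2, x3, y3);
            long d4 = cross(x1, y1, x2, y2, x4, y4);
            if (((d1 > 0 && d2 < 0) || (d1 < 0 && d2 > 0)) &&
                ((d3 > 0 && d4 < 0) || (d3 < 0 && d4 > 0))) return true;
            return false; // collinear / touching cases omitted for brevity
        }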

    Read the article

  • Split a 2D scene in layers or have a z coordinate

    - by Bane
    I am in the process of writing a 2D game engine, and a dilemma emerged. Let me explain the situation... I have a Scene class, to which various objects can be added (Drawable, ParticleEmitter, Light2D, etc.), and as this is a 2D scene, things will obviously be drawn over each other. My first thought was that I could have basic add and remove methods, but I soon realized that then there would be no way for the programmer to control the order in which things were drawn. So I came up with two options, each with its pros and cons. A) Would be to split the scene into layers. By that I mean instead of having the scene be a container of objects, have it be a container of layers, which are in turn containers of objects. B) Would require some kind of z-coordinate, and then have the scene sorted so objects with lower z get drawn first. Option A is pretty solid, but the problem is with the lights. In what layer do I add a light? Does it work cross-layer? On all bottom layers? And I still need the Z coordinate to calculate the shadow! Option B would require me to change all my code from having Vector2D positions to some kind of class that inherits from Vector2D and adds a z coordinate to it (I don't want it to be a Vector3D because I still need all the same methods the 2D kind has, just with .z tacked on). Am I missing something? Is there an alternative to these methods? I'm working in JavaScript, if that makes a difference.

    Read the article

  • Get coordinates in parent, but not in stage.

    - by Bart van Heukelom
    I know about Flash's localToGlobal and globalToLocal methods to transform coordinates from the local system to the global system, but is there a way to achieve the intermediate? To transform coordinates from an arbitrary system to any other arbitrary system? I have a clickable object inside a Sprite, and the Sprite is a child of the stage. I want to retrieve the clicked point in the Sprite.

    Read the article

  • Converting from different handedness coordinate systems

    - by SirYakalot
    I am currently porting a demo from XNA to DirectX which, as I understand it, use coordinate systems of different handedness. What are the things I need to bear in mind when converting between the two? I understand not everything needs to be changed. Also, I notice that many of the 3D maths functions in some of the Direct3D libraries have right-handed and left-handed alternatives. Would it be better to just use these?

    Read the article

  • Transform between two 3d cartesian coordinate systems

    - by Pris
    I'd like to know how to get the rotation matrix for the transformation from one cartesian coordinate system (X,Y,Z) to another one (X',Y',Z'). Both systems are defined with three orthogonal vectors as one would expect. No scaling or translation occurs. I'm using OpenSceneGraph and it offers a Matrix convenience class, if it makes finding the matrix easier: http://www.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/a00403.html.
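
    A sketch of the math only (the question is OpenSceneGraph/C++; an osg::Matrix can be filled from the same numbers): if both frames are given by three orthonormal axes, the rotation taking frame 1 to frame 2 is R = B2 * B1^T, where the columns of B1 and B2 are the unit axes of each frame. Hedged Java version with placeholder array layout:

        // b1[i][k] = component i of axis k of the source frame (columns are X, Y, Z).
        // b2 likewise for the target frame. Axes are assumed unit-length and orthogonal.
        static double[][] rotationBetweenFrames(double[][] b1, double[][] b2) {
            // R = B2 * B1^T  =>  R maps each source axis onto the matching target axis.
            double[][] r = new double[3][3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    for (int k = 0; k < 3; k++)
                        r[i][j] += b2[i][k] * b1[j][k];   // b1[j][k] == (B1^T)[k][j]
            return r;
        }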

    Read the article

  • Changing coordinate system from Z-up to Y-up

    - by Jari Komppa
    Blender's coordinate system is different from what I'm used to, in that Z points upwards instead of Y. What would be the simplest way of converting all the world data (so that all animations, texture coordinates, etc still work) so that Y points upwards? Clarification: Object positions are defined as matrices, so just switching translation/rotation/scale information in matrices is not a trivial task.
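
    One common convention, offered here as a hedged sketch rather than the single right answer: points convert as (x, y, z) -> (x, z, -y), and a whole transform matrix M expressed in the Z-up frame becomes C * M * C^-1 in the Y-up frame, where C is that axis swap (texture coordinates are unaffected; animation keyframe matrices get the same conjugation). Whether Blender's +Y should land on -Z or +Z depends on which way you want "forward" to point.

        // Sketch, assuming 4x4 matrices stored as double[4][4], row-major,
        // column-vector convention (p' = M * p). C maps Z-up to Y-up: (x, y, z) -> (x, z, -y).
        static final double[][] C = {
            { 1,  0, 0, 0 },
            { 0,  0, 1, 0 },
            { 0, -1, 0, 0 },
            { 0,  0, 0, 1 },
        };
        static final double[][] C_INV = transpose(C);   // C is a rotation, so inverse == transpose

        static double[][] convertTransform(double[][] m) {
            return mul(C, mul(m, C_INV));                // same transform, expressed in the Y-up frame
        }

        static double[][] mul(double[][] a, double[][] b) {
            double[][] r = new double[4][4];
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++)
                    for (int k = 0; k < 4; k++)
                        r[i][j] += a[i][k] * b[k][j];
            return r;
        }

        static double[][] transpose(double[][] a) {
            double[][] r = new double[4][4];
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++)
                    r[i][j] = a[j][i];
            return r;
        }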

    Read the article

  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position on the X-Z plane of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is... Unproject the mouse into the camera to get the ray: Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj; Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj); Vector3 dir = p2 - p1; dir.Normalize(); Ray ray = Ray(p1, dir); Then get the Y-intercept by using algebra: float t = -ray.Position.Y / ray.Direction.Y; Vector3 p = ray.Position + t * ray.Direction; The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the project position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out. I believe the problem is caused by numeric instability. I can make minor improvements to this by doing some computations at double precision, and possibly abusing the fact that floating point calculations are done at 80-bit precision on x86, however before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can do to improve this? EDIT: A little snooping around in .NET Reflector on SlimDX.dll: public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection) { Vector3 coordinate = new Vector3(); Matrix result = new Matrix(); Matrix.Invert(ref worldViewProjection, out result); coordinate.X = (float) ((((vector.X - x) / ((double) width)) * 2.0) - 1.0); coordinate.Y = (float) -((((vector.Y - y) / ((double) height)) * 2.0) - 1.0); coordinate.Z = (vector.Z - minZ) / (maxZ - minZ); TransformCoordinate(ref coordinate, ref result, out coordinate); return coordinate; } // ... public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result) { Vector3 vector; Vector4 vector2 = new Vector4 { X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41, Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42, Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43 }; float num = (float) (1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44)); vector2.W = num; vector.X = vector2.X * num; vector.Y = vector2.Y * num; vector.Z = vector2.Z * num; result = vector; } ...which seems to be a pretty standard method of unprojecting a point from a projection matrix, however this serves to introduce another point of possible instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.

    Read the article

  • Bitmap Font Displays in Center Always Without Coding it Manually (Fix Coordinate Problem onText)

    - by David Dimalanta
    Is there a way to keep the text centered without hard-coding its position, especially when the value updates? I'm making a display for the highest score. Let's say the score is 9. However, if the score is 9,999,999, the text is still displayed at the same fixed X and Y coordinates. Is there really a way to keep the text centered, especially when it changes after a player beats the world record? Here's my code inside the SpriteBatch block: font.setScale(1.5f); font.draw(batch, "HIGHEST SCORE:", (900/10)*1 + 60, (1280/16)*10); font.draw(batch, "" + 9999999 + "", (900/10)*4, (1280/16)*8); batch.draw(grid_guide, 0, 0, 900, 1280); // --> For testing purpose only. // Where 9999999 is a new record score for example. Here's the image shown as an example. I added a red grid so that I could check whether the score display will always be centered no matter how many digits it contains. However, it is fixed, so I have to figure out how to center it automatically, regardless of the number of digits, when updating the high score. I have used the LibGDX Preferences successfully to save and load the high-score records.
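
    A short sketch of the usual fix: measure the rendered string and derive X from its width each time the score changes, instead of using a fixed coordinate. The exact API depends on the libGDX version; screenWidth and scoreY below are placeholders:

        // Older libGDX (BitmapFont.getBounds):
        String text = String.valueOf(score);
        BitmapFont.TextBounds bounds = font.getBounds(text);
        float x = (screenWidth - bounds.width) / 2f;          // centered horizontally
        font.draw(batch, text, x, scoreY);

        // Newer libGDX versions replaced getBounds with GlyphLayout:
        // GlyphLayout layout = new GlyphLayout(font, text);
        // font.draw(batch, layout, (screenWidth - layout.width) / 2f, scoreY);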

    Read the article

  • Per-vertex position/normal and per-index texture coordinate

    - by Boreal
    In my game, I have a mesh with a vertex buffer and index buffer up and running. The vertex buffer stores a Vector3 for the position and a Vector2 for the UV coordinate for each vertex. The index buffer is a list of ushorts. It works well, but I want to be able to use 3 discrete texture coordinates per triangle. I assume I have to create another vertex buffer, but how do I even use it? Here is my vertex/index buffer creation code: // vertices is a Vertex[] // indices is a ushort[] // VertexDefs stores the vertex size (sizeof(float) * 5) // vertex data numVertices = vertices.Length; DataStream data = new DataStream(VertexDefs.size * numVertices, true, true); data.WriteRange<Vertex>(vertices); data.Position = 0; // vertex buffer parameters BufferDescription vbDesc = new BufferDescription() { BindFlags = BindFlags.VertexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = VertexDefs.size * numVertices, StructureByteStride = VertexDefs.size, Usage = ResourceUsage.Default }; // create vertex buffer vertexBuffer = new Buffer(Graphics.device, data, vbDesc); vertexBufferBinding = new VertexBufferBinding(vertexBuffer, VertexDefs.size, 0); data.Dispose(); // index data numIndices = indices.Length; data = new DataStream(sizeof(ushort) * numIndices, true, true); data.WriteRange<ushort>(indices); data.Position = 0; // index buffer parameters BufferDescription ibDesc = new BufferDescription() { BindFlags = BindFlags.IndexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = sizeof(ushort) * numIndices, StructureByteStride = sizeof(ushort), Usage = ResourceUsage.Default }; // create index buffer indexBuffer = new Buffer(Graphics.device, data, ibDesc); data.Dispose(); Engine.Log(MessageType.Success, string.Format("Mesh created with {0} vertices and {1} indices", numVertices, numIndices)); And my drawing code: // ShaderEffect, ShaderTechnique, and ShaderPass all store effect data // e is of type ShaderEffect // get the technique ShaderTechnique t; if(!e.techniques.TryGetValue(techniqueName, out t)) return; // effect variables e.SetMatrix("worldView", worldView); e.SetMatrix("projection", projection); e.SetResource("diffuseMap", texture); e.SetSampler("textureSampler", sampler); // set per-mesh/technique settings Graphics.context.InputAssembler.SetVertexBuffers(0, vertexBufferBinding); Graphics.context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R16_UInt, 0); Graphics.context.PixelShader.SetSampler(sampler, 0); // render for each pass foreach(ShaderPass p in t.passes) { Graphics.context.InputAssembler.InputLayout = p.layout; p.pass.Apply(Graphics.context); Graphics.context.DrawIndexed(numIndices, 0, 0); } How can I do this?
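
    Direct3D's input assembler consumes a single index per vertex, applied to every bound vertex buffer, so positions and UVs cannot be indexed independently; the usual approach is to expand the data so that each unique (position index, uv index) pair becomes one vertex in the buffer. The question's buffers are C#/SlimDX; this Java sketch only shows the deduplication idea, with hypothetical array layouts:

        import java.util.*;

        // posIdx[i] / uvIdx[i] describe corner i of a triangle list (3 corners per triangle).
        static void buildBuffers(float[][] positions, float[][] uvs, int[] posIdx, int[] uvIdx,
                                 List<float[]> outVertices, List<Integer> outIndices) {
            Map<Long, Integer> seen = new HashMap<>();
            for (int i = 0; i < posIdx.length; i++) {
                long key = ((long) posIdx[i] << 32) | (uvIdx[i] & 0xFFFFFFFFL);
                Integer idx = seen.get(key);
                if (idx == null) {
                    float[] p = positions[posIdx[i]];
                    float[] t = uvs[uvIdx[i]];
                    outVertices.add(new float[] { p[0], p[1], p[2], t[0], t[1] }); // pos + uv
                    idx = outVertices.size() - 1;
                    seen.put(key, idx);
                }
                outIndices.add(idx);  // becomes the (ushort) index buffer
            }
        }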

    Read the article

  • Coordinate based travel through multi-line path over elapsed time

    - by Chris
    I have implemented A* Path finding to decide the course of a sprite through multiple waypoints. I have done this for point A to point B locations but am having trouble with multiple waypoints, because on slower devices when the FPS slows and the sprite travels PAST a waypoint I am lost as to the math to switch directions at the proper place. EDIT: To clarify my path finding code is separate in a game thread, this onUpdate method lives in a sprite like class which happens in the UI thread for sprite updating. To be even more clear the path is only updated when objects block the map, at any given point the current path could change but that should not affect the design of the algorithm if I am not mistaken. I do believe all components involved are well designed and accurate, aside from this piece :- ) Here is the scenario: public void onUpdate(float pSecondsElapsed) { // this could be 4x speed, so on slow devices the travel moved between // frames could be very large. What happens with my original algorithm // is it will start actually doing circles around the next waypoint.. pSecondsElapsed *= SomeSpeedModificationValue; final int spriteCurrentX = this.getX(); final int spriteCurrentY = this.getY(); // getCoords contains a large array of the coordinates to each waypoint. // A waypoint is a destination on the map, defined by tile column/row. The // path finder converts these waypoints to X,Y coords. // // I.E: // Given a set of waypoints of 0,0 to 12,23 to 23, 0 on a 23x23 tile map, each tile // being 32x32 pixels. This would translate in the path finder to this: // -> 0,0 to 12,23 // Coord : x=16 y=16 // Coord : x=16 y=48 // Coord : x=16 y=80 // ... // Coord : x=336 y=688 // Coord : x=336 y=720 // Coord : x=368 y=720 // // -> 12,23 to 23,0 -NOTE This direction change gives me trouble specifically // Coord : x=400 y=752 // Coord : x=400 y=720 // Coord : x=400 y=688 // ... // Coord : x=688 y=16 // Coord : x=688 y=0 // Coord : x=720 y=0 // // The current update index, the index specifies the coordinate that you see above // I.E. final int[] coords = getCoords( 2 ); -> x=16 y=80 final int[] coords = getCoords( ... ); // now I have the coords, how do I detect where to set the position? The tricky part // for me is when a direction changes, how do I calculate based on the elapsed time // how far to go up the new direction... I just can't wrap my head around this. this.setPosition(newX, newY); }
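
    A hedged sketch of the usual fix: treat the frame's travel distance as a budget and consume it segment by segment, so any overshoot past a waypoint carries into the next leg instead of circling it. Names like speed, currentWaypoint and waypointCount are placeholders for whatever the sprite class actually stores:

        float distance = speed * pSecondsElapsed;
        float x = getX(), y = getY();
        while (distance > 0f && currentWaypoint < waypointCount) {
            int[] target = getCoords(currentWaypoint);
            float dx = target[0] - x, dy = target[1] - y;
            float toWaypoint = (float) Math.sqrt(dx * dx + dy * dy);
            if (toWaypoint <= distance) {
                // reach the waypoint exactly; keep the remaining budget for the next leg
                x = target[0];
                y = target[1];
                distance -= toWaypoint;
                currentWaypoint++;
            } else {
                // partial move along the current leg
                x += dx / toWaypoint * distance;
                y += dy / toWaypoint * distance;
                distance = 0f;
            }
        }
        this.setPosition(x, y);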

    Read the article

  • How to convert TM-2 degree coordinate to lat lon on Android?

    - by Victor Lin
    I am writing a program that converts TM two-degree (TM2) coordinates to lat/lon on Android, but I can't find a formula for it. I did find some examples on the internet, but most of them convert using an open-source library that I can't use on the Android platform. I also found a Java class that converts from UTM to lat/lon, but it doesn't seem suitable for the TM two-degree coordinate system. So my question is: how do I convert a TM two-degree coordinate to lat/lon? Where can I find the formula?

    Read the article

  • Finding the closest grid coordinate to the mouse position with javascript/jQuery

    - by Acorn
    What I'm trying to do is make an equally spaced grid of invisible coordinates on the page. I then want a <div> to be placed at whatever grid coordinate is closest to the pointer when onclick is triggered. Here's the rough idea: I have the tracking of the mouse coordinates and the placing of the <div> worked out fine. What I'm stuck with is how to approach the problem of the grid of coordinates. First of all, should I have all my coordinates in an array which I then compare my onclick coordinate to? Or, seeing as my grid coordinates follow a rule, could I do something like finding which multiple of my spacing is closest to the onclick coordinate? And then, where do I start with working out which grid point coordinate is closest? What's the best way of going about it? Thanks!
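
    Since the grid follows a rule, no array is needed: the nearest grid point is just each coordinate rounded to the nearest multiple of the spacing. Shown here in Java syntax as a sketch; the page code is JavaScript/jQuery, where the same arithmetic is a one-liner.

        // 'spacing' is the grid pitch in pixels.
        static int snap(int value, int spacing) {
            return Math.round((float) value / spacing) * spacing;
        }
        // Nearest grid point to the click:
        // int gridX = snap(mouseX, spacing);
        // int gridY = snap(mouseY, spacing);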

    Read the article

  • Finding the closest grid coordinate to the mouse onclick with javascript/jQuery

    - by Acorn
    What I'm trying to do is make an equally spaced grid of invisible coordinates on the page. I then want a <div> to be placed at whatever grid coordinate is closest to the pointer when onclick is triggered. Here's the rough idea: I have the tracking of the mouse coordinates and the placing of the <div> worked out fine. What I'm stuck with is how to approach the problem of the grid of coordinates. First of all, should I have all my coordinates in an array which I then compare my onclick coordinate to? Or, seeing as my grid coordinates follow a rule, could I do something like finding which multiple of my spacing is closest to the onclick coordinate? And then, where do I start with working out which grid point coordinate is closest? What's the best way of going about it? Thanks!

    Read the article

  • HTML5 canvas screen to isometric coordinate conversion

    - by ovhqe
    I am trying to create an isometric game using HTML5 canvas, but don't know how to convert HTML5 canvas screen coordinates to isometric coordinates. My code now is: var mouseX = 0; var mouseY = 0; function mouseCheck(event) { mouseX = event.pageX; mouseY = event.pageY; } which gives me canvas coordinates. But how do I convert these coordinates to isometric coordinates? I am using 16x16 tiles.
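
    For the common 2:1 diamond layout the conversion is a pair of linear formulas; the exact signs and offsets depend on where tile (0,0) sits on screen and how the art is drawn, so treat this as a sketch under those assumptions. The question's page code is JavaScript; the arithmetic is the same, shown here in Java:

        // tileW/tileH are the on-screen tile dimensions, originX/originY the screen
        // position of tile (0,0).
        static int[] screenToIso(float screenX, float screenY,
                                 float originX, float originY, float tileW, float tileH) {
            float dx = (screenX - originX) / (tileW / 2f);
            float dy = (screenY - originY) / (tileH / 2f);
            int isoX = (int) Math.floor((dy + dx) / 2f);
            int isoY = (int) Math.floor((dy - dx) / 2f);
            return new int[] { isoX, isoY };
        }

        static float[] isoToScreen(int isoX, int isoY,
                                   float originX, float originY, float tileW, float tileH) {
            float screenX = (isoX - isoY) * (tileW / 2f) + originX;
            float screenY = (isoX + isoY) * (tileH / 2f) + originY;
            return new float[] { screenX, screenY };
        }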

    Read the article

  • FBX SDK Not Converting Child Node Coordinate Systems

    - by Al Bundy
    I am trying to import a scene into my application from an fbx file. In 3DS Max, the scene and it’s local translations are as follows: Root (0, 0, 0) '-Sphere001 (-15, 30, 0) ' '-Sphere002 (-2, -30, 0) ' '-Sphere003 (-30, -20, 0) '-Cube001 (35, -15, 0) This is the code that I am using to get the translations of each node: FbxDouble3 fbxPosition = pChild->LclTranslation.Get(); FbxDouble3 fbxRotation = pChild->LclRotation.Get(); FbxDouble3 fbxScale = pChild->LclScaling.Get(); When I try to import the scene, the first node from the scene is getting converted to a right handed system, using this conversion: (X, Z, -Y), but none of their child nodes are. after importing the scene, the local translations I get are as follows: Root (0, 0, 0) --Sphere001 (-15, 0, -30) - converted ----Sphere002 (-2, -30, 0) - not converted ------Sphere003 (-30, -20, 0) - not converted --Cube001 (35, 0, 15) - converted Can anybody help me make sense of this? Thanks

    Read the article

  • Transform 3D vectors between coordinate systems

    - by Nir Cig
    I've got 6 points in 3D space: A,B,C,D,E,F, that represent 4 vectors. AB is perpendicular to AC and DE is perpendicular to DF. I need to find a transformation matrix M, that transforms AB to DE and AC to DF. In other words: M·AB=DE, M·AC=DF If no scaling was involved, this could be solved with a simple rotation matrix. But since the ratios |AB|/|DE|, |AC|/|DF| might be different, I'm not sure how to proceed.
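
    One way to handle the differing ratios, assuming AB⊥AC and DE⊥DF exactly as stated: build an orthonormal basis from each pair plus a cross product (u1=AB/|AB|, u2=AC/|AC|, u3=u1×u2, and v1,v2,v3 likewise from DE, DF), then M = B2 * S * B1^T, where S is diagonal with s1=|DE|/|AB|, s2=|DF|/|AC| and s3 a free choice (1, or the geometric mean of s1 and s2). A compact Java sketch using the equivalent outer-product form:

        // M = s1*v1*u1^T + s2*v2*u2^T + s3*v3*u3^T, so that M*AB = DE and M*AC = DF.
        static double[][] buildM(double[] u1, double[] u2, double[] u3,
                                 double[] v1, double[] v2, double[] v3,
                                 double s1, double s2, double s3) {
            double[][] m = new double[3][3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    m[i][j] = s1 * v1[i] * u1[j] + s2 * v2[i] * u2[j] + s3 * v3[i] * u3[j];
            return m;
        }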

    Read the article

  • Rotation based on x coordinate and x velocity?

    - by Lewis
    -(void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration { float deceleration = 0.3f, sensitivity = 8.0f, maxVelocity = 150; // adjust velocity based on current accelerometer acceleration playerVelocity.x = playerVelocity.x * deceleration + acceleration.x * sensitivity; // we must limit the maximum velocity of the player sprite, in both directions (positive & negative values) playerVelocity.x = fmaxf(fminf(playerVelocity.x, maxVelocity), -maxVelocity); } Hi, I want to rotate my sprite based on the velocity and accelerometer input. My sprite can move along the X axis like so: <--------- sprite ----------- But it always faces forwards; if it is moving left I want it to point slightly to the left, with how far it points judged from the velocity. This should also work for the right. I tried using atan, but as the y velocity and position are always the same, the function returns 0, which doesn't rotate it at all. Any ideas? Regards, Lewis.
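
    Since the sprite only moves horizontally, there is no angle for atan to recover; a common alternative is to map the x velocity directly to a tilt angle, proportional to velocity over the maximum. Sketched in Java-like pseudocode (the original is cocos2d/Objective-C; maxTiltDegrees and setRotation are placeholders):

        // maxTiltDegrees is a hypothetical tuning constant, e.g. 30.
        float tilt = (playerVelocity.x / maxVelocity) * maxTiltDegrees;
        sprite.setRotation(-tilt);   // sign depends on which way the framework rotates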

    Read the article

  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout. It's your typical Cartesian system. I would like to have a single class that handles two lookup styles Get me the set of players at position X,Y Get me the position of player with key K My current implementation is this: class CoordinateMap<V> { Map<Long,Set<V>> coords2value; Map<V,Long> value2coords; // convert (int x, int y) to long key - this is tested, works for all values -1bil to +1bil // My map will NOT require more than 1 bil tiles from the origin :) private Long keyFor(int x, int y) { int kx = x + 1000000000; int ky = y + 1000000000; return (long)kx | (long)ky << 32; } // extract the x and y from the keys private int[] coordsFor(long k) { int x = (int)(k & 0xFFFFFFFF) - 1000000000; int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000; return new int[] { x,y }; } } From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!
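
    Keeping two maps is the normal way to support both lookup directions; one possible simplification, sketched here as an assumption about readability rather than performance, is to use a small immutable key class instead of packing the pair into a long, which removes the ±1,000,000,000 offset trick and the bit masking:

        // Hypothetical alternative key; two maps remain as before.
        final class Coord {
            final int x, y;
            Coord(int x, int y) { this.x = x; this.y = y; }
            @Override public boolean equals(Object o) {
                return o instanceof Coord && ((Coord) o).x == x && ((Coord) o).y == y;
            }
            @Override public int hashCode() { return 31 * x + y; }
        }

        // Map<Coord, Set<V>> coords2value;   // players at a position
        // Map<V, Coord>      value2coords;   // position of a player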

    Read the article
