Search Results

Search found 779 results on 32 pages for 'coordinate'.

Page 11 of 32

  • MATLAB: How do I get 3D coordinates from a user-click?

    - by John
    I'm using Matlab to create a small chess game for one of my courses this semester. The thing I'm having trouble with is having the user be able to select one of the chess pieces. To simplify things, I'm making it so that the user selects a piece by clicking on the square that the chess piece resides on rather than clicking the piece itself (which I assume would be much more difficult). I know how to get the x and y coordinates of the view-port, but how do I transform these coordinates into 3-space coordinates? I know that there are multiple x,y,z coordinates associated with each view-port coordinate, but I'm only interested in the x,y,z coordinate where z = 0 (since the board itself is in the x,y plane that intersects the z axis where z = 0). Thanks!
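
    In MATLAB, the axes' CurrentPoint property returns the two endpoints of the click ray (front and back) in data space, so the click resolves to a board square by intersecting that ray with the z = 0 plane. A minimal sketch of that intersection, in Python for illustration (the function and argument names are hypothetical):

        def pick_on_board(p_near, p_far):
            # The pick ray crosses z = 0 where z_near + t * (z_far - z_near) = 0.
            # Assumes the ray is not parallel to the board plane.
            t = -p_near[2] / (p_far[2] - p_near[2])
            return tuple(p_near[i] + t * (p_far[i] - p_near[i]) for i in range(3))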

    Read the article

  • Convert Lat/Longs to X/Y Co-ordinates

    - by michael
    I have the lat/long value of New York City, NY (40.7560540, -73.9869510) and a flat image of the earth, 1000px × 446px. I would like to be able to convert, using Javascript, the lat/long to an X,Y coordinate where the point would reflect the location. So the X,Y coordinate from the top-left corner of the image would be (289, 111). Things to note: don't worry about what projection to use; make your own assumption or go with what you know might work. X,Y can be from any corner of the image. Bonus points for the same solution in PHP (but I really need the JS).
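
    Assuming the image is a plain equirectangular map covering the full ±180° of longitude and ±90° of latitude with no margins, the mapping is linear per axis (the same arithmetic ports directly to Javascript); a minimal sketch:

        def latlon_to_xy(lat, lon, width, height):
            # Longitude maps linearly to x; latitude is inverted because
            # image y grows downward from the top edge.
            x = (lon + 180.0) / 360.0 * width
            y = (90.0 - lat) / 180.0 * height
            return x, y

        latlon_to_xy(40.7560540, -73.9869510, 1000, 446)  # ~(294, 122)

    The small offset from the expected (289, 111) would come from margins or a slightly different projection in the particular image.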

    Read the article

  • How can I build a list of world geo locations and their relative geographical hierarchies?

    - by Nathan Ridley
    I want to build a database of geographical locations and would like to be able to identify locations that fall inside other locations. For example, The Empire State Building is going to have one geo-coordinate, but my database would be able to tell me that it falls inside Manhattan, which falls inside New York City, which is in the state of New York and so forth. I've been looking at OpenStreetMap which seems to have a pretty decent database but as best I can tell, I would need to create a set of polygon structures representing each region and then detect if a coordinate falls inside a given region's polygon. Is there a better way to do this, or is there a data source where all of this has already been calculated?
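
    Absent a precomputed hierarchy, the region test itself is standard ray casting against each region's polygon; a minimal sketch (the polygon as a list of (lat, lon) vertices is an assumption):

        def point_in_polygon(lat, lon, polygon):
            # Count how many polygon edges a horizontal ray from the point
            # crosses; an odd count means the point is inside.
            inside = False
            n = len(polygon)
            for i in range(n):
                lat1, lon1 = polygon[i]
                lat2, lon2 = polygon[(i + 1) % n]
                if (lat1 > lat) != (lat2 > lat):
                    crossing = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
                    if lon < crossing:
                        inside = not inside
            return inside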

    Read the article

  • How to calculate a point with a given center, angle and radius?

    - by mystify
    In this SO question, someone asked how to calculate an angle from three points. I need to do the opposite. I want to draw a clock, and I have tiny tick images. An art dude made 60 of them, each with an individual and accurate shadow. So there are 60 distinct images, 10x10 points in size, already correctly rotated in the center of that square. Every 6 degrees one tick image has to be placed, so I just need to calculate the x/y coordinate based on a center point, a radius and an angle. So I have: a center point, a radius, and an angle. Is there an easy way to calculate the x/y coordinate from these? Maybe Cocoa Touch already has a useful function or method for this?
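
    The math is the parametric form of a circle: x = cx + r·cos(a), y = cy + r·sin(a). A minimal sketch (on screen coordinate systems where y grows downward, as in UIKit, the sine term may need its sign flipped):

        import math

        def point_on_circle(cx, cy, radius, angle_deg):
            a = math.radians(angle_deg)
            return cx + radius * math.cos(a), cy + radius * math.sin(a)

        # One tick every 6 degrees around the clock face:
        ticks = [point_on_circle(160, 240, 100, i * 6) for i in range(60)]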

    Read the article

  • How to calculate the touch location with convertToWorldSpace?

    - by Paul
    I would like to convert the touch location to a world coordinate in my tile game. With this code, I clicked on the right of the screen (so that my character walks in the tiled game, and the background scrolls slowly to the left):

        - (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            for( UITouch *touch in touches ) {
                CGPoint location = [touch locationInView: [touch view]];
                location = [[CCDirector sharedDirector] convertToGL: location];
                CGPoint test = [self convertToWorldSpace:location];
                CCLOG(@"test : %.2g", test.x);

    The test log gives me: 50, 72, 1e+02, 2.6e+02, 4.2e+02, (and then goes down) 3.2e+02, 9.5, -1.9e+02, etc. Does anyone know why? I would like to calculate the "real" coordinate of the touch, so that I know whether the character has to keep going (click to the right of its current position) or turn and go backwards (click to the left of its current position). Thanks for your help
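
    Part of the confusion here may simply be the format string: %.2g prints only two significant digits, so 105 becomes 1e+02 and 260 becomes 2.6e+02; the coordinates themselves may be fine, and %.2f would show them plainly. The same printf-style formatting, demonstrated outside the engine:

        for v in (50.0, 72.0, 105.0, 260.0, 420.0):
            print("%.2g" % v, "%.2f" % v)
            # -> 50 50.00, 72 72.00, 1e+02 105.00, 2.6e+02 260.00, 4.2e+02 420.00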

    Read the article

  • How to rate a connect four game situation in Java

    - by MrPink
    Hey, I am trying to write a simple AI for a "Get four" game. The basic game principles are done, so I can throw in coins of different color, and they stack on each other and fill a 2D array, and so on and so forth. Until now this is what the method looks like:

        public int insert(int x, int color)  // 0 = empty, 1 = player1, 2 = player2

    X is the horizontal coordinate; the y coordinate is determined by how many stones are already in the array, so I think the idea is obvious. Now the problem is that I have to rate specific game situations, i.e. find how many new pairs, triplets and possible four-in-a-rows I can get in a given situation, and then give each situation a specific value. With these values I can set up a game tree to decide which move would be best next (later on implementing alpha-beta pruning). My current problem is that I can't think of an efficient way to implement a rating of the current game situation in a Java method. Any ideas would be greatly appreciated! Greetings from Germany, Mr. Pink
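
    One common heuristic is to scan every length-4 window (horizontal, vertical, both diagonals) and score windows that contain only one player's coins plus empties, since only those can still become four in a row. A minimal sketch, assuming board[x][y] indexing and arbitrary placeholder weights:

        def score_position(board, color):
            # board[x][y]: 0 = empty, 1 = player1, 2 = player2
            cols, rows = len(board), len(board[0])
            weights = {2: 1, 3: 10, 4: 1000}  # pairs, triplets, four in a row
            score = 0
            for x in range(cols):
                for y in range(rows):
                    for dx, dy in ((1, 0), (0, 1), (1, 1), (1, -1)):
                        if not (0 <= x + 3 * dx < cols and 0 <= y + 3 * dy < rows):
                            continue  # window would leave the board
                        window = [board[x + i * dx][y + i * dy] for i in range(4)]
                        mine = window.count(color)
                        # Only windows free of opponent coins can still win.
                        if mine >= 2 and window.count(0) == 4 - mine:
                            score += weights[mine]
            return score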

    Read the article

  • Regarding Standard Oxford Format for vlfeat sift

    - by Karl
    One of my upperclassmen gave me a data set for experimenting with vlfeat's SIFT; however, her extracted SIFT data contains 5 dimensions in the frame part. Recall the vl_sift function: [F,D] = VL_SIFT(I). Each column of D is the descriptor of the corresponding frame in F. F normally contains 4 dimensions: x-coordinate, y-coordinate, scale, and orientation. So I asked her what this 5th dimension is, and she pointed me to search for the "standard Oxford format" for SIFT features. The thing is, I have tried to search around for this standard Oxford format and SIFT features, but I have had no luck finding it at all. If somebody knows about this, could you please point me in the right direction?

    Read the article

  • How to replace & add dataframe elements with another dataframe in Python Pandas?

    - by bigbug
    Suppose I have two data frames, df_a and df_b, both with the same index structure and columns, but some of the data elements differ:

        >>> df_a
                   sales  cogs
        STK_ID QT
        000876 1     100   100
               2     100   100
               3     100   100
               4     100   100
               5     100   100
               6     100   100
               7     100   100

        >>> df_b
                   sales  cogs
        STK_ID QT
        000876 5      50    50
               6      50    50
               7      50    50
               8      50    50
               9      50    50
               10     50    50

    Now I want to replace the elements of df_a by the elements of df_b that have the same (index, column) coordinate, and append df_b's elements whose (index, column) coordinates are beyond the scope of df_a. Just like applying a patch df_b to df_a:

        >>> df_c = patch(df_a, df_b)
                   sales  cogs
        STK_ID QT
        000876 1     100   100
               2     100   100
               3     100   100
               4     100   100
               5      50    50
               6      50    50
               7      50    50
               8      50    50
               9      50    50
               10     50    50

    How do I write the patch(df_a, df_b) function?
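
    If I'm reading the example right, pandas' combine_first does exactly this patching: it prefers the caller's values and falls back to the other frame, over the union of both indexes. A minimal sketch:

        import pandas as pd

        def patch(df_a, df_b):
            # Take df_b's value wherever df_b has one; keep df_a elsewhere.
            # Rows present only in df_b are appended automatically.
            return df_b.combine_first(df_a)

    Note the order: df_b is the caller so that df_b's values win; integer columns may come back as floats because of the intermediate NaNs that alignment introduces.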

    Read the article

  • Really urgent and big help in a little MATLAB game... Please help me!

    - by Sanyi
    Hi! I have to make the game "Planarity" in MATLAB for a school project. (If you Google it, you can see and play the game in Flash.) The computer has to randomly put 5 circles in the coordinate system at different positions, and after that it must draw lines between them. When the circles are randomly placed in the coordinate system, the coordinates must be whole numbers. A good example for circle number one: (3,4); a bad example for circle number one: (2.5, 6.7). Please, if MATLAB is a child's game to you, help me by sending me the source code for this. I really, really need help... Please help me; this may be an hour to you, but a life-saving thing to me...
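
    For the placement step alone, drawing 5 distinct lattice points and connecting them is short in any language; a minimal sketch in Python (in MATLAB, randi can generate the integer coordinates, with a check for duplicates):

        import random

        # Five distinct vertices with whole-number coordinates in [-10, 10]:
        grid = [(x, y) for x in range(-10, 11) for y in range(-10, 11)]
        points = random.sample(grid, 5)
        # Connect them in a cycle; each pair is a line segment to draw:
        edges = [(points[i], points[(i + 1) % 5]) for i in range(5)]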

    Read the article

  • Sorting a 2D numpy array by multiple axes

    - by perimosocordiae
    I have a 2D numpy array of shape (N,2) which is holding N points (x and y coordinates). For example: array([[3, 2], [6, 2], [3, 6], [3, 4], [5, 3]]) I'd like to sort it such that my points are ordered by x-coordinate, and then by y in cases where the x coordinate is the same. So the array above should look like this: array([[3, 2], [3, 4], [3, 6], [5, 3], [6, 2]]) If this was a normal Python list, I would simply define a comparator to do what I want, but as far as I can tell, numpy's sort function doesn't accept user-defined comparators. Any ideas?
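
    numpy's lexsort covers exactly this case; the quirk is that its last key is the primary one. A minimal sketch:

        import numpy as np

        a = np.array([[3, 2], [6, 2], [3, 6], [3, 4], [5, 3]])
        # Sort by x (column 0) first, then by y (column 1):
        # lexsort treats its LAST key as the primary key.
        a_sorted = a[np.lexsort((a[:, 1], a[:, 0]))]
        # -> [[3 2] [3 4] [3 6] [5 3] [6 2]]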

    Read the article

  • Possible to access all movie clips on a layer, on the timeline, through the stage?

    - by azislo
    I have this code...

        var selection:Array = new Array();
        var diplayObjCont:* = stage;
        // The rectangle that defines the selection in the container's coordinate space.
        // Loop through the container's children.
        for (var a:int; a < diplayObjCont.numChildren; a++) {
            // Get the child's bounds in the container's coordinate space.
            var child:DisplayObject = diplayObjCont.getChildAt(a);
            selection.push(child);
        }
        trace(selection);

    ...which returns just [object MainTimeline]. So, can I access layers on this MainTimeline to get all the movie clips on a layer? Then I could apply a simple operation like "A_1_2.buttonMode = true;" to all my MCs (in an array, for example) without writing a line for each one (there are lots of MCs on the layer, and lots of lines).

    Read the article

  • MATLAB: simple matrix filtering - group size

    - by Art
    I have a huge matrix storing information about the X and Y coordinates of multiple particle trajectories, which in simplified form looks like this: col 1 - track number; col 2 - frame number; col 3 - coordinate X; col 4 - coordinate Y. For example:

        A =
        1  1  5.14832   3.36128
        1  2  5.02768   3.60944
        1  3  4.85856   3.81616
        1  4  5.17424   4.08384
        2  1  2.02928  18.47536
        2  2  2.064    18.5464
        3  1  8.19648   5.31056
        3  2  8.04848   5.33568
        3  3  7.82016   5.29088
        3  4  7.80464   5.31632
        3  5  7.68256   5.4624
        3  6  7.62592   5.572

    Now I want to filter out trajectories shorter than, let's say, 2 frames and keep the remaining ones (note the renumbering of trajectories):

        B =
        1  1  5.14832   3.36128
        1  2  5.02768   3.60944
        1  3  4.85856   3.81616
        1  4  5.17424   4.08384
        2  1  8.19648   5.31056
        2  2  8.04848   5.33568
        2  3  7.82016   5.29088
        2  4  7.80464   5.31632
        2  5  7.68256   5.4624
        2  6  7.62592   5.572

    How do I do this efficiently? I can think of some ideas using a for loop and vertcat, but that's the slowest solution ever :/ Thanks!
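
    The vectorised idea: count rows per track id, keep the ids whose count is large enough, then renumber. In MATLAB the counts can come from accumarray or histc; the same idea in NumPy terms, as a minimal sketch:

        import numpy as np

        def filter_tracks(A, min_len):
            # Keep only tracks (column 0) with at least min_len rows.
            ids, counts = np.unique(A[:, 0], return_counts=True)
            keep = np.isin(A[:, 0], ids[counts >= min_len])
            B = A[keep].copy()
            # Renumber the surviving tracks 1..n.
            _, new_ids = np.unique(B[:, 0], return_inverse=True)
            B[:, 0] = new_ids + 1
            return B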

    Read the article

  • Converting between square and rectangular pixel co-ordinates

    - by FlyboyUtah
    I'm new to using transforms and this type of math, and would appreciate some direction solving my coding problem. I'm writing in Xcode for the iPhone, and am working with Core Graphics. Problem: I want to draw curves, lines and so on, on a screen of square pixels, and then convert those points, as closely as possible, into a non-square pixel system. For example, the original coordinate system is 500 x 500 pixels displayed on a square screen 10 by 10 inches. I draw a circle with the circle formula; it looks round, and all is well. Now I draw the same circle on a second 10 x 10 inch screen that is 850 pixels by 500 pixels. Without changing the coordinates, the same circle formula displays something that looks like an egg. How can I draw the circle on the second screen in a different coordinate system? In addition, I need to be able to access the individual x,y points.
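
    The distortion disappears if the square logical coordinates are scaled per axis into the device's pixel grid, and the mapping is invertible, so individual points stay addressable. A minimal sketch, assuming a 500-unit logical square (the function names are hypothetical):

        def to_device(x, y, logical=500.0, device_w=850.0, device_h=500.0):
            # Independent scale factors per axis turn the square logical
            # space into the non-square pixel space.
            return x * device_w / logical, y * device_h / logical

        def to_logical(px, py, logical=500.0, device_w=850.0, device_h=500.0):
            return px * logical / device_w, py * logical / device_h

    In Core Graphics the same effect can be had by applying a scale transform to the context (CGContextScaleCTM) before drawing.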

    Read the article

  • Get current location using CLLocationCoordinate2D

    - by Mobility
    I am trying to get the current location of the user using CLLocationCoordinate2D. I want the values in a format like

        CLLocationCoordinate2D start = { -28.078694, 153.382844 };

    so that I can use them like the following:

        NSString *urlAddress = [NSString stringWithFormat:@"http://maps.google.com/?saddr=%1.6f,%1.6f&daddr=%1.6f,%1.6f", start.latitude, start.longitude, destination.latitude, destination.longitude];

    I used

        CLLocation *location = [[CLLocation alloc] init];
        CLLocationDegrees currentLatitude = location.coordinate.latitude;
        CLLocationDegrees currentLongitude = location.coordinate.longitude;

    to get the current lat and long, but I get 0.000 for both when I test. I am testing on an iPhone 4S. If there is any sample code, that would be great.

    Read the article

  • Interview with Al-Sorayai Group’s Managing Director on the Oracle Retail deployment

    - by user801960
    Recently, I had the opportunity to speak with Sheik Al Sorayai, Managing Director of the Saudi Arabian carpet and rug manufacturer, the Al-Sorayai Group. His business has recently implemented Oracle® Retail Merchandising and Stores applications in only six months to support the launch of its new furniture retail concept, HomeStyle. With an aggressive growth strategy for the new business in place, the Oracle Retail solutions are enabling Al-Sorayai to coordinate merchandising and store operations and improve decision-making and insight to optimise margins, reduce inventory costs and provide a consistent customer experience.

    Read the article

  • Mapping Your Data with Bing Maps and SQL Server 2008 – Part 1

    Jonas Stawski takes you step by step through a sample project that demonstrates how to create an application that can get geospatial coordinate data for addresses within a SQL Server database, and then use that data to locate those addresses on a Bing Map on a website as pushpins, either grouped or ungrouped. There is full source code too, in the speech-bubble.

    Read the article

  • Tellago speaks about Business Intelligence with SQL Server 2008 R2

    - by gsusx
    At Tellago, we always try to stay on the front lines of technology that can enhance our solution development practices. This year we are putting a lot of emphasis on business intelligence, and in particular the new set of BI technologies such as Microsoft's PowerPivot, Master Data Services and StreamInsight that are scheduled to be released with SQL Server 2008 R2. In the last few weeks we have been working closely with different Microsoft field offices to coordinate a series of customer events that... (read more)

    Read the article

  • Coordinates from 3DS Max to XNA 3.5

    - by David Conde
    Hello. My problem is this: I have a simple box made in 3DS Max 2009; the box is 10x10x10. I've tried to load it in XNA and translate the camera by 15 units, but I can't seem to find the values needed to see the box properly. Can anyone point me to a good resource with a good introduction to the XNA coordinate system, and to how a simple box made in 3DS Max is imported properly? Best regards, David

    Read the article

  • OpenGL - Calculating camera view matrix

    - by Karle
    Problem: I am calculating the model, view and projection matrices independently, to be used in my shader as follows:

        gl_Position = projection * view * model * vec4(in_Position, 1.0);

    When I try to calculate my camera's view matrix, the Z axis is flipped and my camera seems like it is looking backwards. My program is written in C# using the OpenTK library.

    Translation (working): I've created a test scene [scene screenshot omitted]; from my understanding of the OpenGL coordinate system, its objects are positioned correctly. The model matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(modelPosition);
        Matrix4 model = translation;

    The view matrix is created using:

        Matrix4 translation = Matrix4.CreateTranslation(-cameraPosition);
        Matrix4 view = translation;

    Rotation (not working): I now want to create the camera's rotation matrix. To do this I use the camera's right, up and forward vectors:

        // Hard-coded example orientation:
        // normally calculated from up and forward,
        // similar to a look-at camera.
        Vector3 r = Vector3.UnitX;
        Vector3 u = Vector3.UnitY;
        Vector3 f = -Vector3.UnitZ;
        Matrix4 rot = new Matrix4(
            r.X, r.Y, r.Z, 0,
            u.X, u.Y, u.Z, 0,
            f.X, f.Y, f.Z, 0,
            0.0f, 0.0f, 0.0f, 1.0f);

    This results in a matrix whose rows are (1,0,0,0), (0,1,0,0), (0,0,-1,0), (0,0,0,1). I know that multiplying by the identity matrix would produce no rotation; this is clearly not the identity matrix and will therefore apply some rotation. I thought that because this orientation is aligned with the OpenGL coordinate system it should produce no rotation. Is this the wrong way to calculate the rotation matrix? I then create my view matrix as:

        // OpenTK is row-major so the order of operations is reversed:
        Matrix4 view = translation * rot;

    Rotation almost works now, but the -Z/+Z axis has been flipped, with the green cube now appearing closer to the camera. It seems like the camera is looking backwards, especially if I move it around. My goal is to store the position and orientation of all objects (including the camera) as:

        Vector3 position;
        Vector3 up;
        Vector3 forward;

    Apologies for writing such a long question, and thank you in advance. I've tried following tutorials/guides from many sites but I keep ending up with something wrong.

    Edit: projection matrix set-up:

        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
            (float)(0.5 * Math.PI),
            (float)display.Width / display.Height,
            0.1f, 1000.0f);
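
    For comparison, the gluLookAt convention builds the view rotation with rows right, up and minus forward, so a camera whose forward is (0, 0, -1) gets the identity rotation rather than a Z flip; that negated third row may be exactly what is missing above. A minimal sketch of that convention:

        import numpy as np

        def view_rotation(right, up, forward):
            # Rows are right, up and -forward (gluLookAt style): a camera
            # looking down -Z (forward == (0, 0, -1)) yields the identity.
            r, u, f = (np.asarray(v, dtype=float) for v in (right, up, forward))
            return np.stack([r, u, -f])

        view_rotation((1, 0, 0), (0, 1, 0), (0, 0, -1))  # -> 3x3 identity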

    Read the article

  • How do you calculate UVW coordinates?

    - by Jenko
    I'm working on a 3D engine and I'm calculating UVT coordinates, where U and V represent pixels on the texture measured in 0-1, and T is: T = perspective / Z. But I'm trying to use this perspective-correct triangle rasteriser, which requires a W per vertex. How do I calculate the W for each vertex for the drawPerspectiveTexturedPolygon() function? Hint: the code comments refer to W as the "homogeneous coordinate"... does that mean anything?
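
    Conventionally, W is the fourth (homogeneous) component of the clip-space position; for a typical projection it is proportional to the view-space depth, essentially the same Z already used in T = perspective / Z. A perspective-correct rasteriser interpolates u/w, v/w and 1/w linearly in screen space and divides per pixel; a minimal sketch of that recovery along one edge (the function name is hypothetical):

        def perspective_uv(a, b, t):
            # a, b: (u, v, w) at the two vertices; t in [0, 1] along the edge.
            (ua, va, wa), (ub, vb, wb) = a, b
            inv_w = (1 - t) / wa + t / wb          # 1/w interpolates linearly
            u = ((1 - t) * ua / wa + t * ub / wb) / inv_w
            v = ((1 - t) * va / wa + t * vb / wb) / inv_w
            return u, v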

    Read the article

  • PCF shadow shader math causing artifacts

    - by user2971069
    For a while now I used PCSS as my shadow technique of choice, until I discovered a type of percentage-closer filtering. This method creates really smooth shadows, and with hopes of improving performance with only a fraction of the texture samples, I tried to implement PCF in my shader. This is the relevant code:

        float c0, c1, c2, c3;
        float f = blurFactor;
        float2 coord = ProjectedTexCoords;
        if (receiverDistance - tex2D(lightSampler, coord + float2(0, 0)).x > 0.0007) c0 = 1;
        if (receiverDistance - tex2D(lightSampler, coord + float2(f, 0)).x > 0.0007) c1 = 1;
        if (receiverDistance - tex2D(lightSampler, coord + float2(0, f)).x > 0.0007) c2 = 1;
        if (receiverDistance - tex2D(lightSampler, coord + float2(f, f)).x > 0.0007) c3 = 1;
        coord = (coord % f) / f;
        return 1 - (c0 * (1 - coord.x) * (1 - coord.y) +
                    c1 * coord.x * (1 - coord.y) +
                    c2 * (1 - coord.x) * coord.y +
                    c3 * coord.x * coord.y);

    This is a very basic implementation. blurFactor is initialized to 1 / LightTextureSize, so the if statements fetch the occlusion values for the four adjacent texels. I now want to weight each value based on the actual position of the texture coordinate: if it is near the bottom-right texel, that occlusion value should be preferred. The weighting itself is done with a simple bilinear interpolation function; however, this function takes a 2D vector in the range [0..1], so I have to convert my texture coordinate to get the distance from the first texel to the second as a value in [0..1]. For that I used the modulo operator to get it into the [0..f] range and then divided by f. This makes sense to me, and for specific blurFactors it works, producing really smooth one-pixel-wide shadows, but not for all blurFactors. Initially blurFactor is 1 / LightTextureSize, to sample the 4 adjacent texels. I now want to increase the blurFactor by a factor of x to get a smooth interpolation across maybe 4 or so pixels. But that is when weird artifacts show up [comparison image omitted]: a 1x blurFactor produces a good result, 0.5x is, as expected, not as smooth, but 2x doesn't work at all. I found that only factors of 1/2^n produce a good result; every other factor produces artifacts. I'm pretty sure the error lies here:

        coord = (coord % f) / f;

    Maybe the modulo is not calculated correctly? I have no idea how to fix that. Is it even possible for pixels that are further than 1 pixel away?
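
    The bilinear blend itself is easy to sanity-check outside the shader; a minimal sketch of the same weighting, where fx and fy must be the fractional position between the two sample columns/rows (one guess at the artifact: floating-point modulo is only exact when f is a power-of-two fraction, which would match the observed 1/2^n behaviour):

        def pcf_blend(c0, c1, c2, c3, fx, fy):
            # c0..c3: occlusion at (0,0), (1,0), (0,1), (1,1); fx, fy in [0, 1].
            return (c0 * (1 - fx) * (1 - fy) + c1 * fx * (1 - fy)
                    + c2 * (1 - fx) * fy + c3 * fx * fy)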

    Read the article

  • Object detection in bitmap JavaScript canvas

    - by fallenAngel
    I want to detect clicks on canvas elements that are drawn using paths. So far I have stored element paths in a JavaScript data structure and then checked whether the click coordinates match an element's coordinates. Rendering each element path and checking for hits would be inefficient when there are a lot of elements. I believe there must be an algorithm for this kind of coordinate search; can anyone help me with this?
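
    Exact per-path hit testing aside (canvas contexts do expose isPointInPath), the usual way to avoid testing every element is a coarse spatial index, e.g. a uniform grid keyed by cell, queried with the click point; a minimal sketch:

        from collections import defaultdict

        class SpatialGrid:
            """Buckets elements by cell so a click tests only nearby elements."""
            def __init__(self, cell=64):
                self.cell = cell
                self.buckets = defaultdict(list)

            def insert(self, elem, min_x, min_y, max_x, max_y):
                # Register the element in every cell its bounding box touches.
                for gx in range(int(min_x) // self.cell, int(max_x) // self.cell + 1):
                    for gy in range(int(min_y) // self.cell, int(max_y) // self.cell + 1):
                        self.buckets[(gx, gy)].append(elem)

            def candidates(self, x, y):
                # Only these few elements need the exact path test.
                return self.buckets.get((int(x) // self.cell, int(y) // self.cell), [])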

    Read the article

  • (Unity) Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when I was creating my rivers: I would set the river's height to flat, and noticed that the tiles that were actually flat in the graphical representation were flipped and mirrored. Here are 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq. As you can see, if you flipped the river (graphically) by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used. This code creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and its 4 vertices:

        public DTileMap (int size_x, int size_y)
        {
            this.size_x = size_x;
            this.size_y = size_y;
            // Initialize map_data array of Tile objects
            map_data = new Tile[size_x, size_y];
            for (int j = 0; j < size_y; j++) {
                for (int i = 0; i < size_x; i++) {
                    map_data [i, j] = new Tile ();
                    map_data [i, j].coordinate.x = (int)i;
                    map_data [i, j].coordinate.y = (int)j;
                    map_data [i, j].vertices [0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data [i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [1] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j) * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [3] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                }
            }
        }

    This code sets the river tiles to height 0:

        foreach (Tile t in map_data) {
            if (t.realType == "Water") {
                t.vertices [0].y = 0f;
                t.vertices [1].y = 0f;
                t.vertices [2].y = 0f;
                t.vertices [3].y = 0f;
            }
        }

    And below is the code to generate the actual graphics from the data:

        public void BuildMesh ()
        {
            DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z);
            int numTiles = size_x * size_z;
            int numTris = numTiles * 2;
            int vsize_x = size_x + 1;
            int vsize_z = size_z + 1;
            int numVerts = vsize_x * vsize_z;

            // Generate the mesh data
            Vector3[] vertices = new Vector3[numVerts];
            Vector3[] normals = new Vector3[numVerts];
            Vector2[] uv = new Vector2[numVerts];
            int[] triangles = new int[numTris * 3];

            int x, z;
            for (z = 0; z < vsize_z; z++) {
                for (x = 0; x < vsize_x; x++) {
                    normals [z * vsize_x + x] = Vector3.up;
                    uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z);
                }
            }

            for (z = 0; z < vsize_z; z += 1) {
                for (x = 0; x < vsize_x; x += 1) {
                    if (x == vsize_x - 1 && z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3];
                    } else if (z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2];
                    } else if (x == vsize_x - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1];
                    } else {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0];
                        vertices [z * vsize_x + x + 1] = DTileMap.DTileMap.map_data [x, z].vertices [1];
                        vertices [(z + 1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2];
                        vertices [(z + 1) * vsize_x + x + 1] = DTileMap.DTileMap.map_data [x, z].vertices [3];
                    }
                }
            }

            for (z = 0; z < size_z; z++) {
                for (x = 0; x < size_x; x++) {
                    int squareIndex = z * size_x + x;
                    int triOffset = squareIndex * 6;
                    triangles [triOffset + 0] = z * vsize_x + x + 0;
                    triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0;
                    triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1;
                    triangles [triOffset + 3] = z * vsize_x + x + 0;
                    triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1;
                    triangles [triOffset + 4] = z * vsize_x + x + 1;
                }
            }

            // Create a new Mesh and populate with the data
            Mesh mesh = new Mesh ();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.normals = normals;
            mesh.uv = uv;

            // Assign our mesh to our filter/renderer/collider
            MeshFilter mesh_filter = GetComponent<MeshFilter> ();
            MeshCollider mesh_collider = GetComponent<MeshCollider> ();
            mesh_filter.mesh = mesh;
            mesh_collider.sharedMesh = mesh;
            calculateMeshTangents (mesh);
            BuildTexture (map);
        }

    If this looks familiar to you, it's because I got most of it from Quill18; I've been slowly adapting it for my uses. Please include any suggestions you have for my code. I'm still in the very early prototyping stage.

    Read the article

  • Particle effect after the bullet

    - by Siddharth
    In my game, I fire a bullet from the gun, and along with that I generate particles behind the bullet so that it looks like a fire trail following the bullet. My problem is that because the bullet's speed is high, the positions I get from the bullet are far apart, so the coordinates I use for particle generation are widely spaced and the trail comes out as a dotted effect. But I want a continuous flow of particles behind the bullet. Please provide any guidance for my problem.
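
    The standard fix is to emit particles along the segment the bullet travelled since the last frame, rather than only at its current position; a minimal sketch of that interpolation (the function and parameter names are hypothetical):

        import math

        def trail_points(prev_pos, cur_pos, spacing):
            # Fill the gap the bullet covered this frame with evenly
            # spaced emission points, so fast bullets leave a solid trail.
            dx, dy = cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1]
            steps = max(1, int(math.hypot(dx, dy) / spacing))
            return [(prev_pos[0] + dx * i / steps, prev_pos[1] + dy * i / steps)
                    for i in range(steps + 1)]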

    Read the article
