Search Results

Search found 1507 results on 61 pages for 'coordinates'.

Page 6 of 61

  • What is the meaning of client coordinates in SetWindowPos

    - by user1775315
    The documentation for SetWindowPos says the following for the X and Y parameters:

        X [in] Type: int
        The new position of the left side of the window, in client coordinates.

        Y [in] Type: int
        The new position of the top of the window, in client coordinates.

    By "client coordinates", does it mean that the parameters specify the position of the client area of the window, or that they specify the position of the window (not the client area) relative to the parent window's client area? Or something else?
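
    A minimal Win32 sketch (C++; hChild and hParent are hypothetical handles) that answers this empirically: position a child window, then map its screen rectangle back into the parent's client space and compare.

        #include <windows.h>

        void probeSetWindowPos(HWND hChild, HWND hParent)
        {
            // Ask for the child's top-left at (10, 20).
            SetWindowPos(hChild, nullptr, 10, 20, 0, 0,
                         SWP_NOSIZE | SWP_NOZORDER | SWP_NOACTIVATE);

            // Read the child's screen rectangle back...
            RECT rc;
            GetWindowRect(hChild, &rc);

            // ...and map it into the parent's client coordinate space.
            POINT topLeft = { rc.left, rc.top };
            ScreenToClient(hParent, &topLeft);
            // For a child window, topLeft should come back as (10, 20): the
            // X/Y passed to SetWindowPos are relative to the parent's client area.
        }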


  • Simulating mouse clicks at specific screen coordinates

    - by Matteo Riva
    I would like to be able to bind some keys to mouse clicks at specific locations. For example: when I press F1 I should get a left mouse click at coordinates 300x350, F2 at 600x350, and so on. Even better if this could be bound to a specific window application, so that the coordinates could be relative to it instead of the base desktop. Is there a software which allows this?

    ADDITION: OK, AutoHotkey is great, but I have problems with my particular setup. Quoting my comment below: I'm using it with an old game (Championship Manager 01/02) which runs in windowed mode (and I have to set Win98 compatibility for it to run). I can get the mouse to move, but no click goes to the application. I have read this FAQ but it didn't help; this is the script I tried:

        SendMode Play
        SetKeyDelay, 0, 50, Play
        F1::Click 42, 191
        F2::ControlSend ahk_class main, Click, Championship Manager 01/02

    Still no luck: the pointer moves, but no click goes through.
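
    A minimal Win32 sketch (C++, independent of AutoHotkey) of what "click at screen coordinates" amounts to at a lower level; games that swallow synthetic clicks sometimes respond to this route. Only the documented SendInput API is used; the helper name and the 0..65535 normalisation of absolute coordinates are spelled out as assumptions in the comments.

        #include <windows.h>

        void clickAt(int x, int y)
        {
            // SendInput's absolute coordinates are normalized to 0..65535
            // across the primary display.
            int sw = GetSystemMetrics(SM_CXSCREEN);
            int sh = GetSystemMetrics(SM_CYSCREEN);

            INPUT in[3] = {};
            for (auto& i : in) i.type = INPUT_MOUSE;

            in[0].mi.dx = (x * 65535) / sw;
            in[0].mi.dy = (y * 65535) / sh;
            in[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;
            in[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
            in[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;

            SendInput(3, in, sizeof(INPUT));
        }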


  • Why / how does XNA's right-handed coordinate system affect anything if you can specify near/far Z values?

    - by vargonian
    I am told repeatedly that XNA Game Studio uses a right-handed coordinate system, and I understand the difference between a right-handed and a left-handed coordinate system. But given that you can use a method like Matrix.CreateOrthographicOffCenter to create your own custom projection matrix, specifying the left, right, top, bottom, zNear and zFar values, when does XNA's coordinate system come into play? For example, I'm told that in a right-handed coordinate system, increasingly negative Z values go "into" the screen. But I can easily create my projection matrix like this:

        Matrix.CreateOrthographicOffCenter(left, right, bottom, top, 0.1f, 10000f);

    I've now specified a lower value for the near Z than the far Z, which, as I understand it, means that positive Z now goes into the screen. I can similarly tweak the values of left/right/top/bottom to achieve similar results. If specifying a lower zNear than zFar value doesn't affect the Z direction of the coordinate system, what does it do? And when is the right-handed coordinate system enforced? The reason I ask is that I'm trying to implement a 2.5D camera that supports zooming and rotation, and I've spent two full days encountering one unexpected result after another.
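
    A small numeric sketch (plain C++; the m33/m43 formulas below are an assumption mirroring the Z terms XNA's documented right-handed CreateOrthographicOffCenter produces) showing why zNear < zFar does not flip the axis: the near/far parameters are distances along the viewing direction, which is -Z in a right-handed system.

        #include <cstdio>

        int main()
        {
            const float zNear = 0.1f, zFar = 10000.0f;
            // Z terms of a right-handed orthographic projection (XNA style):
            const float m33 = 1.0f / (zNear - zFar);
            const float m43 = zNear / (zNear - zFar);

            // The depth written for a view-space z is z*m33 + m43.
            for (float z : { -zNear, -100.0f, -zFar })
                std::printf("view z = %9.1f -> depth = %f\n", z, z * m33 + m43);
            // Prints ~0 at z = -zNear and ~1 at z = -zFar: points in front of
            // the camera have negative view-space z, whatever zNear/zFar are.
        }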


  • The ship "shudders" in scrolling Asteroids

    - by Ciaran
    In my Asteroids game the user can scroll through space. When scrolling, the ship is drawn in the centre of the window. I use interpolation. I scroll the window using glOrtho, centering it around the centre of the ship. On my first machine (7 years old, Windows XP, NVIDIA), I am doing 50 updates and 76 frames per second. This is smooth. My other machine, an old Compaq laptop (Pentium III) with Linux and the Radeon OpenGL driver, delivers 50 updates and 30 frames per second. The ship regularly seems to "shudder" back and forth when at maximum thrust. When you position the mouse cursor beside the ship, it is obvious that its relative position in the window changes. Also, the stars seem blurred into short "lines". Playing the game in non-scrolling mode, the ship moves within the window, glOrtho is therefore not called repeatedly, and there is no problem. I suspect a bug in my positioning of the ship and the window, but I have dumped out these values and they seem to only go forward, not forward-back-forward. The driver does support double buffering. I guess if it is my bug I need to slow the frame rate down to debug properly. My question: is this an obvious driver bug, or is the slower machine uncovering a bug in my stuff? If so, some debugging tips would be appreciated. I am drawing in world coordinates and letting OpenGL do the scaling and translation, so if I had a quick way of verifying what pixel coordinates OpenGL produces for the ship centre, that would help clarify this.
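
    For that last point, a minimal sketch (fixed-function OpenGL, matching the glOrtho setup described; the helper name and arguments are otherwise assumptions) of dumping the window coordinates that the current matrices give the ship's centre:

        #include <GL/glu.h>
        #include <cstdio>

        void dumpShipWindowPos(double shipX, double shipY)
        {
            GLdouble model[16], proj[16];
            GLint viewport[4];
            glGetDoublev(GL_MODELVIEW_MATRIX, model);
            glGetDoublev(GL_PROJECTION_MATRIX, proj);
            glGetIntegerv(GL_VIEWPORT, viewport);

            GLdouble wx, wy, wz;
            if (gluProject(shipX, shipY, 0.0, model, proj, viewport, &wx, &wy, &wz))
                std::printf("ship centre -> window (%.2f, %.2f)\n", wx, wy);
            // Note: gluProject's y origin is the bottom of the window; use
            // viewport[3] - wy to compare against top-left mouse coordinates.
        }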


  • Transforming object world space matrix to a position in world space

    - by Fredrik Boston Westman
    I'm trying to make a function for picking objects with a bounding sphere, however I have run into a problem. First I check against my bounding sphere; then, if that checks out, I test against the vertices. I have already tested my vertex picking method and it works fine. However, when I check first with my bounding sphere method, it doesn't register anything. My conclusion is that when I transform my sphere position into the position of the object in world space, the transformation goes wrong (I base this on the fact that the x coordinate always becomes 1, even though I translate none of my meshes along the x-axis to 1). So my question is: what is the proper way to transform an object's world space matrix to a position vector? This is how I do it now. First I set my position vector to 0:

        XMVECTOR meshPos = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f);

    Then I transform it with my object space matrix, and then add the offset to the centre of the mesh:

        meshPos = XMVector3TransformCoord(meshPos, meshWorld) + centerOffset;
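
    A hedged alternative sketch (DirectXMath, the same library the question uses; the helper name is invented): in its row-vector convention the translation already sits in the matrix's fourth row, so it can be read directly instead of transforming the origin, which sidesteps any w-component doubts.

        #include <DirectXMath.h>
        using namespace DirectX;

        // Equivalent to XMVector3TransformCoord(XMVectorZero(), world):
        // row 3 of a world matrix holds (tx, ty, tz, 1).
        XMVECTOR WorldPosition(FXMMATRIX world)
        {
            return world.r[3];
        }

    If this row also comes back with x = 1, the matrix being passed in is the likelier culprit than the transform call itself.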


  • FBX SDK Not Converting Child Node Coordinate Systems

    - by Al Bundy
    I am trying to import a scene into my application from an FBX file. In 3ds Max, the scene and its local translations are as follows:

        Root (0, 0, 0)
        '-Sphere001 (-15, 30, 0)
        ' '-Sphere002 (-2, -30, 0)
        '   '-Sphere003 (-30, -20, 0)
        '-Cube001 (35, -15, 0)

    This is the code that I am using to get the translations of each node:

        FbxDouble3 fbxPosition = pChild->LclTranslation.Get();
        FbxDouble3 fbxRotation = pChild->LclRotation.Get();
        FbxDouble3 fbxScale    = pChild->LclScaling.Get();

    When I try to import the scene, the first node from the scene is getting converted to a right-handed system, using the conversion (X, Z, -Y), but none of its child nodes are. After importing the scene, the local translations I get are as follows:

        Root (0, 0, 0)
        --Sphere001 (-15, 0, -30)      - converted
        ----Sphere002 (-2, -30, 0)     - not converted
        ------Sphere003 (-30, -20, 0)  - not converted
        --Cube001 (35, 0, 15)          - converted

    Can anybody help me make sense of this? Thanks
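
    A hedged sketch (FBX SDK) of one way around this, on the assumption that the SDK's axis conversion rewrites only the transforms of the root's immediate children: instead of reading LclTranslation deeper in the tree, evaluate global transforms and derive each node's local transform from its parent's.

        #include <fbxsdk.h>

        // Local transform rebuilt from (already converted) global transforms.
        FbxAMatrix LocalFromGlobals(FbxNode* node)
        {
            FbxAMatrix global = node->EvaluateGlobalTransform();
            if (FbxNode* parent = node->GetParent())
                return parent->EvaluateGlobalTransform().Inverse() * global;
            return global;
        }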


  • Converting from different handedness coordinate systems

    - by SirYakalot
    I am currently porting a demo from XNA to DirectX which, as I understand it, both have coordinate systems with different handednesses. What are the things I need to bear in mind when converting between the two? I understand not everything needs to be changed. Also, I notice that many of the 3D maths functions in some of the Direct3D libraries have right-handed and left-handed alternatives. Would it be better to just use these?
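
    A hedged sketch (DirectXMath; the helper name is made up) of the usual mechanical recipe: conjugate each transform with the mirror S = diag(1, 1, -1, 1), negate the z of positions and normals, and remember to reverse triangle winding (or flip the cull mode) separately.

        #include <DirectXMath.h>
        using namespace DirectX;

        // S * M * S flips the z-related off-diagonal terms and the z translation.
        XMMATRIX FlipHandedness(FXMMATRIX m)
        {
            const XMMATRIX s = XMMatrixScaling(1.0f, 1.0f, -1.0f);
            return s * m * s;
        }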


  • Convert rotation from Right handed System to left handed

    - by Hector Llanos
    I have Euler angles from a right-handed system that I am trying to convert to a left-handed system. All the information that I have read online says that to convert it, simply multiply the axes and the angles in the correct order and it should work. In other words, Z * Y * X. When I do this, what I see in Maya and in the engine still do not match up. This is what I have so far:

        static Quaternion ConvertToRightHand(Vector3 Euler)
        {
            Quaternion x = Quaternion.AngleAxis(-Euler.x, Vector3.right);
            Quaternion y = Quaternion.AngleAxis(Euler.y, Vector3.up);
            Quaternion z = Quaternion.AngleAxis(Euler.z, Vector3.forward);
            return (z * y * x);
        }

    Keeping the -Euler.x helps keep the object pointing up correctly, but when I pass (0, 0, 0) to face in the -z direction, it faces in the +z. Help :/
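
    A hedged sketch (plain C++; the struct and names are invented for illustration) of the component rule this usually boils down to: if the two systems differ by a z-axis mirror S = diag(1, 1, -1), a rotation R becomes S·R·S, which in quaternion terms negates the vector components lying in the mirror plane.

        // q = (w, x, y, z); under a z flip the converted rotation is (w, -x, -y, z).
        struct Quat { float w, x, y, z; };

        Quat FlipZHandedness(const Quat& q)
        {
            return { q.w, -q.x, -q.y, q.z };
        }

    Negating a single Euler angle instead (as with -Euler.x above) mixes conventions, which may explain why (0, 0, 0) ends up facing +z.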


  • Orthographic unit translation mismatch on grid (e.g. 64 pixels translates incorrectly)

    - by Justin Van Horne
    I am looking for some insight into a small problem with unit translations on a grid.

    Setup:

        512x448 window
        64x64 grid
        gl_Position = projection * world * position;

    projection is defined by ortho(-w/2.0f, w/2.0f, -h/2.0f, h/2.0f), a textbook orthographic projection function. world is defined by a fixed camera position at (0, 0). position is defined by the sprite's position.

    Problem: In the screenshot below (1:1 scaling) the grid spacing is 64x64 and I am drawing the unit at (64, 64); however, the unit draws roughly ~10px in the wrong position. I've tried uniform window dimensions to prevent any distortion of the pixel size, but now I am a bit lost as to the proper way of providing a 1:1 pixel-to-world-unit projection. Anyhow, here are some quick images to aid in the problem. I decided to superimpose a bunch of the sprites at what the engine believes are 64px offsets. When this seemed out of place, I went about and did the base case of 1 unit, which seemed to line up as expected. The yellow shows a 1px difference in the movement.

    Vertices: It would appear that the vertices going into the vertex shader are correct. For example, in reference to the first image, the data looks like this in the VBO:

                x     y            x     y
             -------------------------------
        tl |  0.0  24.0        64.0  24.0
        bl |  0.0   0.0   ->   64.0   0.0
        tr | 16.0   0.0        80.0   0.0
        br | 16.0  24.0        80.0  24.0

    With that said, all I am left to believe is that I am munging up my actual projection. So, I am looking for any insight into maintaining the 1:1 pixel-to-world-unit projection.
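
    A hedged sketch (C++ with GLM standing in for the question's unnamed math library) of one common way to make the mapping exact: put the origin at a window corner so no half-width/half-height offsets can pick up rounding, and snap any camera translation to whole pixels before building the matrix.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // One world unit == one pixel; (0, 0) is the bottom-left pixel.
        glm::mat4 PixelProjection(int windowW, int windowH)
        {
            return glm::ortho(0.0f, float(windowW), 0.0f, float(windowH));
        }

    With -w/2..w/2 extents, an odd-sized client area or an OS-resized window can land texel centres between pixels, which shows up as exactly this kind of small constant offset.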


  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout. It's your typical Cartesian system. I would like to have a single class that handles two lookup styles:

    1. Get me the set of players at position X,Y
    2. Get me the position of the player with key K

    My current implementation is this:

        class CoordinateMap<V> {
            Map<Long,Set<V>> coords2value;
            Map<V,Long> value2coords;

            // Convert (int x, int y) to a long key - this is tested, works for all
            // values -1 billion to +1 billion. (My map will NOT require more than
            // 1 billion tiles from the origin :)
            private Long keyFor(int x, int y) {
                int kx = x + 1000000000;
                int ky = y + 1000000000;
                return (long)kx | (long)ky << 32;
            }

            // Extract the x and y from the key
            private int[] coordsFor(long k) {
                int x = (int)(k & 0xFFFFFFFF) - 1000000000;
                int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000;
                return new int[] { x, y };
            }
        }

    From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me that if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!


  • Help understand GLSL directional light on iOS (left handed coord system)

    - by Robse
    I have now changed from GLKBaseEffect to my own shader implementation. I have a shader manager, which compiles and applies a shader at the right time and does some shader setup like lights. Please have a look at my vertex shader code. Now, light direction should be provided in eye space, but I think there is something I don't get right. After I set up my view with the camera, I save a lightMatrix to transform the light from global space to eye space. My modelview and projection setup:

        - (void)setupViewWithWidth:(int)width height:(int)height camera:(N3DCamera *)aCamera {
            aCamera.aspect = (float)width / (float)height;
            float aspect = aCamera.aspect;
            float far = aCamera.far;
            float near = aCamera.near;
            float vFOV = aCamera.fieldOfView;
            float top = near * tanf(M_PI * vFOV / 360.0f);
            float bottom = -top;
            float right = aspect * top;
            float left = -right;
            // projection
            GLKMatrixStackLoadMatrix4(projectionStack,
                GLKMatrix4MakeFrustum(left, right, bottom, top, near, far));
            // identity modelview
            GLKMatrixStackLoadMatrix4(modelviewStack, GLKMatrix4Identity);
            // switch to left handed coord system (forward = z+)
            GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeScale(1, 1, -1));
            // transform camera
            GLKMatrixStackMultiplyMatrix4(modelviewStack,
                GLKMatrix4MakeWithMatrix3(GLKMatrix3Transpose(aCamera.orientation)));
            GLKMatrixStackTranslate(modelviewStack,
                -aCamera.position.x, -aCamera.position.y, -aCamera.position.z);
        }

        - (GLKMatrix4)modelviewMatrix {
            return GLKMatrixStackGetMatrix4(modelviewStack);
        }

        - (GLKMatrix4)projectionMatrix {
            return GLKMatrixStackGetMatrix4(projectionStack);
        }

        - (GLKMatrix4)modelviewProjectionMatrix {
            return GLKMatrix4Multiply([self projectionMatrix], [self modelviewMatrix]);
        }

        - (GLKMatrix3)normalMatrix {
            return GLKMatrix3InvertAndTranspose(
                GLKMatrix4GetMatrix3([self modelviewProjectionMatrix]), NULL);
        }

    After that, I save the lightMatrix like this:

        [self.renderer setupViewWithWidth:view.drawableWidth
                                   height:view.drawableHeight
                                   camera:self.camera];
        self.lightMatrix = [self.renderer modelviewProjectionMatrix];

    And just before I render a 3D entity of the scene graph, I set up the light config for its shader with the lightMatrix, like this:

        - (N3DLight)transformedLight:(N3DLight)light transformation:(GLKMatrix4)matrix {
            N3DLight transformedLight = N3DLightMakeDisabled();
            if (N3DLightIsDirectional(light)) {
                GLKVector3 direction = GLKVector3MakeWithArray(
                    GLKMatrix4MultiplyVector4(matrix, light.position).v);
                direction = GLKVector3Negate(direction); // HACK -> TODO: get lightMatrix right!
                transformedLight = N3DLightMakeDirectional(direction, light.diffuse, light.specular);
            } else {
                ...
            }
            return transformedLight;
        }

    You see the line where I negate the direction!? I can't explain why I need to do that, but if I do, the lights are correct as far as I can tell. Please help me to get rid of the hack. I'm scared that this has something to do with my switch to a left-handed coordinate system. My vertex shader looks like this:

        attribute highp vec4 inPosition;
        attribute lowp vec4 inNormal;
        ...
        uniform highp mat4 MVP;
        uniform highp mat4 MV;
        uniform lowp mat3 N;
        uniform lowp vec4 constantColor;
        uniform lowp vec4 ambient;
        uniform lowp vec4 light0Position;
        uniform lowp vec4 light0Diffuse;
        uniform lowp vec4 light0Specular;
        varying lowp vec4 vColor;
        varying lowp vec3 vTexCoord0;

        vec4 calcDirectional(vec3 dir, vec4 diffuse, vec4 specular, vec3 normal) {
            float NdotL = max(dot(normal, dir), 0.0);
            return NdotL * diffuse;
        }
        ...

        vec4 calcLight(vec4 pos, vec4 diffuse, vec4 specular, vec3 normal) {
            if (pos.w == 0.0) {
                // directional light
                return calcDirectional(normalize(pos.xyz), diffuse, specular, normal);
            } else {
                ...
            }
        }

        void main(void) {
            // position
            highp vec4 position = MVP * inPosition;
            gl_Position = position;
            // normal
            lowp vec3 normal = inNormal.xyz / inNormal.w;
            normal = N * normal;
            normal = normalize(normal);
            // colors
            vColor = constantColor * ambient;
            // add lights
            vColor += calcLight(light0Position, light0Diffuse, light0Specular, normal);
            ...
        }


  • How can I calculate a trend line in PHP?

    - by Stephen
    So I've read the two related questions for calculating a trend line for a graph, but I'm still lost. I have an array of xy coordinates, and I want to come up with another array of xy coordinates (it can have fewer coordinates) that represents a logarithmic trend line, using PHP. I'm passing these arrays to JavaScript to plot graphs on the client side.
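
    A hedged sketch of the underlying arithmetic (written in C++ for concreteness; it ports to PHP line for line): a logarithmic trend line y = a + b·ln(x) is an ordinary least-squares fit over the points (ln x, y), after which the curve can be sampled at as few x values as desired.

        #include <cmath>
        #include <vector>
        #include <utility>

        // Returns (a, b) for the fit y ~ a + b * ln(x).
        std::pair<double, double>
        FitLogTrend(const std::vector<std::pair<double, double>>& pts)
        {
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (auto [x, y] : pts) {
                double lx = std::log(x);               // requires x > 0
                sx += lx; sy += y; sxx += lx * lx; sxy += lx * y;
            }
            double n = pts.size();
            double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double a = (sy - b * sx) / n;
            return { a, b };
        }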


  • How to Point sprite's direction towards Mouse or an Object [duplicate]

    - by Irfan Dahir
    This question already has an answer here: Rotating To Face a Point (1 answer)

    I need some help with rotating sprites towards the mouse. I'm currently using the library Allegro 5.XX. The rotation of the sprite works, but it's constantly inaccurate: it's always a few angles off from the mouse, to the left. Can anyone please help me with this? Thank you.

    P.S. I got help with the rotating function from here: http://www.gamefromscratch.com/post/2012/11/18/GameDev-math-recipes-Rotating-to-face-a-point.aspx Although it's in JavaScript, the maths function is the same. And also, placing:

        if(angle < 0) { angle = 360 - (-angle); }

    doesn't fix it. The code:

        #include <allegro5\allegro.h>
        #include <allegro5\allegro_image.h>
        #include "math.h"

        int main(void)
        {
            int width = 640;
            int height = 480;
            bool exit = false;
            int shipW = 0;
            int shipH = 0;

            ALLEGRO_DISPLAY *display = NULL;
            ALLEGRO_EVENT_QUEUE *event_queue = NULL;
            ALLEGRO_BITMAP *ship = NULL;

            if(!al_init())
                return -1;
            display = al_create_display(width, height);
            if(!display)
                return -1;

            al_install_keyboard();
            al_install_mouse();
            al_init_image_addon();
            al_set_new_bitmap_flags(ALLEGRO_MIN_LINEAR | ALLEGRO_MAG_LINEAR); // smoother rotate

            ship = al_load_bitmap("ship.bmp");
            shipH = al_get_bitmap_height(ship);
            shipW = al_get_bitmap_width(ship);
            int shipx = width/2 - shipW/2;
            int shipy = height/2 - shipH/2;
            int mx = width/2;
            int my = height/2;
            al_set_mouse_xy(display, mx, my);

            event_queue = al_create_event_queue();
            al_register_event_source(event_queue, al_get_mouse_event_source());
            al_register_event_source(event_queue, al_get_keyboard_event_source());
            //al_hide_mouse_cursor(display);

            float angle;

            while(!exit)
            {
                ALLEGRO_EVENT ev;
                al_wait_for_event(event_queue, &ev);
                if(ev.type == ALLEGRO_EVENT_KEY_UP)
                {
                    switch(ev.keyboard.keycode)
                    {
                        case ALLEGRO_KEY_ESCAPE: exit = true; break;
                        /*case ALLEGRO_KEY_LEFT:  degree -= 10; break;
                        case ALLEGRO_KEY_RIGHT: degree += 10; break;*/
                        case ALLEGRO_KEY_W: shipy -= 10; break;
                        case ALLEGRO_KEY_S: shipy += 10; break;
                        case ALLEGRO_KEY_A: shipx -= 10; break;
                        case ALLEGRO_KEY_D: shipx += 10; break;
                    }
                }
                else if(ev.type == ALLEGRO_EVENT_MOUSE_AXES)
                {
                    mx = ev.mouse.x;
                    my = ev.mouse.y;
                    angle = atan2(my - shipy, mx - shipx);
                }

                // al_draw_bitmap(ship, shipx, shipy, 0);
                //al_draw_rotated_bitmap(ship, shipW/2, shipH/2, shipx, shipy, degree * 3.142/180, 0);
                al_draw_rotated_bitmap(ship, shipW/2, shipH/2, shipx, shipy, angle, 0);
                // I pass the angle directly because Allegro works in radians; when I
                // multiplied it by 180/3.142 the rotation went haywire.
                al_flip_display();
                al_clear_to_color(al_map_rgb(0, 0, 0));
            }

            al_destroy_bitmap(ship);
            al_destroy_event_queue(event_queue);
            al_destroy_display(display);
            return 0;
        }

    EDIT: This was marked duplicate by a moderator. I'd like to say that this isn't the same as that. I'm a total beginner at game programming; I had a look at that other topic and I had difficulty understanding it. Please understand this, thank you. :/ Also, while I was printing the angle I got this screenshot: http://img34.imageshack.us/img34/7396/fzuq.jpg Which is weird, because aren't angles supposed to be 360 degrees only?
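
    A hedged guess, shown as a small C-style sketch (the helper and its offset parameter are invented for illustration): atan2 measures counter-clockwise from the +x axis in radians, so artwork that points "up" (or any direction other than +x) inside its bitmap needs a constant offset added; and the printed values look odd only because they are radians, not degrees.

        #include <math.h>

        /* Angle to rotate a sprite centred at (cx, cy) so it faces (tx, ty).
         * art_offset compensates for the sprite's drawn orientation,
         * e.g. ALLEGRO_PI / 2 for artwork that points up. */
        static float face_point(float cx, float cy, float tx, float ty,
                                float art_offset)
        {
            return atan2f(ty - cy, tx - cx) + art_offset;
        }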


  • Transition Player Position

    - by Lycrios
    I'm currently working on a Java MMO with a pretty solid start, but I've come across an issue I need a little help with: player positions, meaning their X/Y on the screen. If PlayerA has a higher FPS (frames per second) than other players, the result is that PlayerA moves faster than everyone else. I know the reason for this: when the game draws, I just use x++;. What would a better method be?
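
    A hedged sketch of the standard fix (written in C++ here; the names are illustrative): make movement a function of elapsed time rather than of frames drawn, so every client covers the same distance per second regardless of FPS.

        #include <chrono>

        struct Player { float x = 0, speedPxPerSec = 120.0f; };

        void update(Player& p, std::chrono::steady_clock::time_point& last)
        {
            auto now = std::chrono::steady_clock::now();
            float dt = std::chrono::duration<float>(now - last).count();
            last = now;
            p.x += p.speedPxPerSec * dt;   // same speed at any frame rate
        }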


  • Trying to convert openGL to MFC coordinates and having Problems with "gluProject"

    - by Erez
    Hi, I'm trying to find the answer on the web and can't find a full solution that I can use and that will work... I'm developing an MFC project with a static picture as the canvas for an OpenGL class that draws the graphics for my game. On mouse down, I need to retrieve a shape coordinate from the OpenGL class. I'm looking for a way to convert the OpenGL coordinates to MFC coordinates, but no matter what I try I get junk after using gluProject or gluUnProject (I've tried it both ways but neither is working):

        GLdouble modelMatrix[16];
        glGetDoublev(GL_MODELVIEW_MATRIX, modelMatrix);
        GLdouble projMatrix[16];
        glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);
        int viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);

        POINT mouse;                // stores the x and y coords for the current mouse position
        GetCursorPos(&mouse);       // gets the current cursor coordinates (mouse coordinates)
        ScreenToClient(hWnd, &mouse);

        GLdouble winX, winY, winZ;  // holds our x, y and z coordinates
        winX = (float)point.x;      // holds the mouse x coordinate
        winY = (float)point.y;      // holds the mouse y coordinate
        winY = (float)viewport[3] - winY;
        glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

        GLdouble posX = s1->getPosX(), posY = s1->getPosY(), posZ = s1->getPosZ(); // hold the final values
        gluUnProject(winX, winY, winZ, modelMatrix, projMatrix, viewport, &posX, &posY, &posZ);
        gluProject(posX, posY, posZ, modelMatrix, projMatrix, viewport, &winX, &winY, &winZ);

    This is just part of the code I've tried. Of course gluProject and gluUnProject don't belong together; I just had them both here to show... and I know there is lots of junk in there, it's from some of my tries. P.S. I've tried many more combinations and examples from the web and nothing seems to work in my case... Can anyone show me the right way to do the transformation? Thanks


  • Utilities for finding x/y screen coordinates

    - by slothbear
    I have an application that needs the user to enter the x and y location of a couple of items on the screen. [Yes, this is crap and I'll replace it, but for now...] On Mac OS X, I'm using Snapz Pro X. When I choose to take a snap of a selection, the interface displays the mouse cursor location. This is OK for my personal use, but I can't ask users to buy a $69 program for this function. I thought I had a solution with the built-in program "Grab", but it reports the coordinates from the bottom left, and I need it from the top left. I don't have access to Windows; not sure what to use there. I've not even been able to come up with good search terms since mouse and coordinates and x/y are so common. Ideas are welcome there. Extra credit: same x/y finder for Linux. A Java program would be OK too, since the main application is in Java.


  • How to map coordinates in AxesImage to coordinates in saved image file?

    - by Vebjorn Ljosa
    I use matplotlib to display a matrix of numbers as an image, attach labels along the axes, and save the plot to a PNG file. For the purpose of creating an HTML image map, I need to know the pixel coordinates in the PNG file for a region of the image being displayed by imshow. I have found an example of how to do this with a regular plot, but when I try to do the same with imshow, the mapping is not correct. Here is my code, which saves an image and attempts to print the pixel coordinates of the center of each square on the diagonal:

        import numpy as np
        import matplotlib.pyplot as plt

        fig = plt.figure()
        ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
        axim = ax.imshow(np.random.random((27,27)), interpolation='nearest')
        for x, y in axim.get_transform().transform(zip(range(28), range(28))):
            print int(x), int(fig.get_figheight() * fig.get_dpi() - y)
        plt.savefig('foo.png', dpi=fig.get_dpi())

    Here is the resulting foo.png, shown as a screenshot in order to include the rulers. The output of the script starts and ends as follows:

        73 55
        92 69
        111 83
        130 97
        149 112
        …
        509 382
        528 396
        547 410
        566 424
        585 439

    As you see, the y-coordinates are correct, but the x-coordinates are stretched: they range from 73 to 585 instead of the expected 135 to 506, and they are spaced 19 pixels on center instead of the expected 14. What am I doing wrong?


  • Better use a tuple or numpy array for storing coordinates

    - by Ivan
    Hi, I'm porting a C++ scientific application to Python, and as I'm new to Python, some questions come to mind:

    1) I'm defining a class that will contain the coordinates (x, y). These values will be accessed several times, but they will only be read after the class instantiation. Is it better to use a tuple or a numpy array, both memory- and access-time-wise?

    2) In some cases, these coordinates will be used to build a complex number, evaluated with a complex function, and the real part of this function will be used. Assuming that there is no way to separate the real and complex parts of this function, and that the real part will have to be used at the end, is it maybe better to use complex numbers directly to store (x, y)? How bad is the overhead of the conversion from complex to real in Python? The code in C++ does a lot of these conversions, and this is a big slowdown in that code.

    3) Also, some coordinate transformations will have to be performed; for these, the x and y values will be accessed separately, the transformation done, and the result returned. The coordinate transformations are defined in the complex plane, so is it still faster to use the components x and y directly than to rely on complex variables? Thank you


  • Spatial index for geo coordinates?

    - by Michael Borgwardt
    What kind of data structure could be used for an efficient nearest-neighbour search in a large set of geo coordinates? With "regular" spatial index structures like R-trees that assume planar coordinates, I see two problems (are there others I have overlooked?):

    1. Wraparound at the poles and the International Date Line
    2. Distortion of distances near the poles

    How can these factors be allowed for? I guess the second one could be compensated for by transforming the coordinates. Can an R-tree be modified to take wraparound into account? Or are there specialized geo-spatial index structures?
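
    A hedged sketch (C++; a standard dodge rather than a specialised geo index) of one way to sidestep both problems at once: map each (lat, lon) to a 3-D unit vector and index those points with any planar-minded structure (k-d tree, R-tree). Chord distance on the sphere is monotone in great-circle distance, so nearest-by-chord is nearest-by-arc, and wraparound and polar distortion disappear.

        #include <cmath>

        struct Vec3 { double x, y, z; };

        // Point on the unit sphere for a latitude/longitude in degrees.
        Vec3 ToUnitSphere(double latDeg, double lonDeg)
        {
            const double d = 3.14159265358979323846 / 180.0;
            double lat = latDeg * d, lon = lonDeg * d;
            return { std::cos(lat) * std::cos(lon),
                     std::cos(lat) * std::sin(lon),
                     std::sin(lat) };
        }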


  • Distance by sea calculator, intermediate coordinates?

    - by Lucian2k
    How do I calculate the distance between 2 coordinates by sea? I also want to be able to draw a route between the two coordinates. The only solution I have found so far is to split a map into pixels, identify each pixel as LAND or SEA, find the path using the A* algorithm, and then transform the pixels back to relative coordinates. There are some software packages I could buy, but none have online extensions. A service that calculates distances between sea ports and plots the path on a map is searates.com.
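
    A hedged sketch (C++) of the grid approach the question describes, with plain breadth-first search standing in for A* (BFS already gives shortest paths when every step costs the same; A* with a great-circle heuristic is the speed-up on top). The grid layout and names are assumptions.

        #include <queue>
        #include <utility>
        #include <vector>

        // sea[y][x] is true for SEA pixels; returns path length in grid steps.
        int seaDistance(const std::vector<std::vector<bool>>& sea,
                        std::pair<int,int> from, std::pair<int,int> to)
        {
            int h = sea.size(), w = sea[0].size();
            std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));
            std::queue<std::pair<int,int>> q;
            dist[from.second][from.first] = 0;
            q.push(from);
            const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
            while (!q.empty()) {
                auto [x, y] = q.front(); q.pop();
                if (std::make_pair(x, y) == to) return dist[y][x];
                for (int i = 0; i < 4; ++i) {
                    int nx = x + dx[i], ny = y + dy[i];
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                        sea[ny][nx] && dist[ny][nx] < 0) {
                        dist[ny][nx] = dist[y][x] + 1;
                        q.push({nx, ny});
                    }
                }
            }
            return -1;   // unreachable by sea
        }

    Recovering the route itself is the usual trick of walking back from the destination through strictly decreasing dist values.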


  • Retrieving of coordinates from google maps and search

    - by cheesebunz
    I know there is a way to retrieve the coordinates by clicking on the map, using specifically:

        document.getElementById("lonTb").value = point.x;
        document.getElementById("latTb").value = point.y;

    Firstly, I have an HTML file, namely MapToolKit.html, plus a mapGPS.js and a mapSearch.js. How do I pass the coordinate returns into the js files? I just need to be able to click the map for coordinates; the search returns are not necessary. Here's the file which I uploaded: http://www.mediafire.com/?0minqxgwzmx


  • Qt grabWindow coordinates shifted from GetCursorPos and GetWindowRect

    - by user1391
    In Qt, when I use the QPixmap::grabWindow(hwnd, x, y, h, w) function, the coordinates are shifted slightly compared to the coordinates from the Windows API functions GetCursorPos and GetWindowRect. That is, (0,0) from the point of view of GetCursorPos and GetWindowRect is at the very top left of the toolbar at the top of the window, but (0,0) for QPixmap::grabWindow is further "inside" (i.e. it ignores the window frame). How can I make these two coordinate origins consistent, especially since the user might have different thicknesses for the window frame?
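
    A hedged Win32 sketch (C++; the helper name is invented) for measuring that shift at runtime instead of guessing frame thicknesses: the offset between the frame's top-left (what GetWindowRect reports) and the client area's top-left (where a client-area grab starts) can be computed per window.

        #include <windows.h>

        POINT ClientOriginOffset(HWND hwnd)
        {
            RECT frame;
            GetWindowRect(hwnd, &frame);
            POINT clientOrigin = { 0, 0 };
            ClientToScreen(hwnd, &clientOrigin);   // screen position of client (0,0)
            return { clientOrigin.x - frame.left, clientOrigin.y - frame.top };
        }

    Adding this offset to grab coordinates (or subtracting it from cursor coordinates) reconciles the two origins regardless of the user's frame and caption sizes.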


  • Determine coordinates of rotated rectangle

    - by MathieuK
    I'm creating a utility application that should detect and report the coordinates of the corners of a transparent rectangle (alpha = 0) within an image. So far, I've set up a system with JavaScript + Canvas that displays the image and starts a floodfill-like operation when I click inside the transparent rectangle in the image. It correctly determines the bounding box of the floodfill operation and as such can provide me with the correct coordinates. Here's my implementation so far: http://www.scriptorama.nl/image/ (works in recent Firefox / Safari). However, the bounding-box approach breaks down when the transparent rectangle is rotated (CW or CCW), as the resulting bounding box no longer properly represents the rectangle's width and height. I've tried to come up with a few alternatives for detecting the corners, but have not been able to think up a proper solution. So, does anyone have suggestions on how I might approach this so I can properly detect the coordinates of the 4 corners of the rotated rectangle?
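
    A hedged sketch (C++ for brevity; the same loop translates directly to JavaScript) of one classic trick: over the set of flood-filled pixels, the extreme points of x+y and x-y are the four corners of a rotated rectangle, since the support point of a convex shape in any direction is a vertex (ties occur along edges in the axis-aligned case, where the plain bounding box already works).

        #include <array>
        #include <vector>

        struct Pt { int x, y; };

        // Corners in order: max(x+y), max(x-y), min(x+y), min(x-y).
        // Assumes 'filled' is non-empty.
        std::array<Pt, 4> RectCorners(const std::vector<Pt>& filled)
        {
            std::array<Pt, 4> c{ filled[0], filled[0], filled[0], filled[0] };
            for (const Pt& p : filled) {
                if (p.x + p.y > c[0].x + c[0].y) c[0] = p;
                if (p.x - p.y > c[1].x - c[1].y) c[1] = p;
                if (p.x + p.y < c[2].x + c[2].y) c[2] = p;
                if (p.x - p.y < c[3].x - c[3].y) c[3] = p;
            }
            return c;
        }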


  • OPENGL Android getting the coordinates of my image

    - by Debopam
    I used the code from the following website, and it helped a lot in drawing the required object on my screen. But the problem I am facing is getting the coordinates of my object. Let me explain. According to the code, you just need to add your graphic images in PNG format and refer to them in the class. What I am trying to achieve is a simple collision-detection mechanism: I have added a maze (as a PNG) and have an object (as a PNG) that should travel through the blank path within the maze. In order to do this I need to know the blank spaces within the coordinates through which my object will move. Can anyone tell me how to get the blank spaces as (x, y) coordinates through which I can take my object?
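
    A hedged sketch (C++; the structure and the world-to-pixel scale are assumptions) of the usual approach: the walkable cells come from the maze PNG's own pixels, decoded once on the CPU into a mask, and each proposed move is tested against the pixel it would land on. OpenGL never needs to be queried.

        #include <cstdint>
        #include <vector>

        struct MazeMask {
            int w, h;
            std::vector<uint8_t> alpha;   // one byte per pixel, row-major,
                                          // filled from the decoded PNG

            // worldScale = world units per maze pixel (an assumed mapping).
            bool walkable(float wx, float wy, float worldScale) const
            {
                int px = int(wx / worldScale), py = int(wy / worldScale);
                if (px < 0 || px >= w || py < 0 || py >= h) return false;
                return alpha[py * w + px] == 0;   // transparent = open path
            }
        };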

