Search Results

Search found 779 results on 32 pages for 'coordinate'.

Page 9/32 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Any way to loop through FPDF code with proper XY coordinates?

    - by JM4
    At the end of a form collection, I provide the consumer a printable PDF with the information they just entered. I already run through a loop to store the variables themselves, but am wondering if it is at all possible to build a loop that builds on itself for FPDF. The catch is this: each new variable (#1, #2, #3) will change location by a determined amount of space. For example: I print Member #1's first name at coordinate (95, 101). I print Member #2's first name at coordinate (95, 110)... and so on. Each known variable will be 9.5mm greater than its previous entry (therefore Member #9 will be 28.5mm greater than Member #6). My sample code for the FPDF itself is:

        $pdf->SetFont('Arial','', 7);
        $pdf->SetXY(8,76.5);
        $pdf->Cell(20,0,$f1name);
        $pdf->SetFont('Arial','', 5);
        $pdf->SetXY(50.5,76.5);
        $pdf->Cell(20,0,$f1address);
        $pdf->SetFont('Arial','', 7);
        $pdf->SetXY(95.7,76.5);
        $pdf->Cell(20,0,$f1city);
        $pdf->SetXY(129.5,76.5);
        $pdf->Cell(20,0,$f1state);
        $pdf->SetXY(139.1,76.5);
        $pdf->Cell(20,0,$f1zip);
        $pdf->SetXY(151,76.5);
        $pdf->Cell(20,0,$f1dob);
        $pdf->SetXY(168,76.5);
        $pdf->Cell(20,0,$f1ssn);
        $pdf->SetXY(186,76.5);
        $pdf->Cell(20,0,$f1phone);
        $pdf->SetXY(55,81.1);
        $pdf->Cell(20,0,$f1email);
        $pdf->SetXY(129,81.1);
        $pdf->Cell(20,0,$f1fednum);

    Ideally, all Y values for the $f2 fields would be 9.5mm greater than $f1's Y values.
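
    One way to do this (a sketch, not the poster's code — the $members array and its field keys are illustrative) is to keep the X positions in a table and compute each member's Y offset from the loop index:

        $fields = array(  // x position, font size, field key
            array(8,     7, 'name'),  array(50.5,  5, 'address'),
            array(95.7,  7, 'city'),  array(129.5, 7, 'state'),
            array(139.1, 7, 'zip'),   array(151,   7, 'dob'),
            array(168,   7, 'ssn'),   array(186,   7, 'phone'),
        );
        $baseY = 76.5;
        $step  = 9.5;   // each member sits 9.5mm below the previous one
        foreach ($members as $i => $m) {      // $i = 0, 1, 2, ...
            $y = $baseY + $i * $step;
            foreach ($fields as $f) {
                $pdf->SetFont('Arial', '', $f[1]);
                $pdf->SetXY($f[0], $y);
                $pdf->Cell(20, 0, $m[$f[2]]);
            }
            // the second-row fields keep their own X values but share the offset
            $pdf->SetXY(55, $y + 4.6);  $pdf->Cell(20, 0, $m['email']);   // 81.1 - 76.5
            $pdf->SetXY(129, $y + 4.6); $pdf->Cell(20, 0, $m['fednum']);
        }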

    Read the article

  • Wanted: How to reliably, consistently select an MKMapView annotation

    - by jdandrea
    After calling MKMapView's setCenterCoordinate:animated: method (without animation), I'd like to call selectAnnotation:animated: (with animation) so that the annotation pops out from the newly-centered pushpin. For now, I simply watch for mapViewDidFinishLoadingMap: and then select the annotation. However, this is problematic. For instance, this method isn't called when there's no need to load additional map data. In those cases, my annotation isn't selected. :( Very well. I could call this immediately after setting the center coordinate instead. Ahh, but in that case it's possible that there is map data to load (but it hasn't finished loading yet). I'd risk calling it too soon, with the animation becoming spotty at best. Thus, if I understand correctly, it's not a matter of knowing if my coordinate is visible, since it's possible to stray almost a screenful of distance and have to load new map data. Rather, it's a matter of knowing if new map data needs to be loaded, and then acting accordingly. Any ideas on how to accomplish this, or how to otherwise (reliably) select an annotation after re-centering the map view on the coordinate where that annotation lives? Clues appreciated - thanks!
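
    One approach that fits what's described (a sketch, not the poster's code; pendingAnnotation is an assumed property): defer the selection to mapView:regionDidChangeAnimated:, which fires once the recenter completes whether or not new map data had to be loaded — unlike mapViewDidFinishLoadingMap:.

        - (void)recenterOnAnnotation:(id<MKAnnotation>)annotation {
            self.pendingAnnotation = annotation;   // remember what to select
            [self.mapView setCenterCoordinate:annotation.coordinate animated:NO];
        }

        - (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated {
            if (self.pendingAnnotation) {
                [mapView selectAnnotation:self.pendingAnnotation animated:YES];
                self.pendingAnnotation = nil;
            }
        }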

    Read the article

  • tikz: set appropriate x value for a node

    - by basweber
    This question resulted from the question here. I want to produce a curly brace which spans some lines of text. The problem is that I have to align the x coordinate manually, which is not a clean solution. Currently I use

        \begin{frame}{Example}
          \begin{itemize}
            \item The long Issue 1
              \tikz[remember picture] \node[coordinate,yshift=0.7em] (n1) {}; \\
              spanning 2 lines
            \item Issue 2
              \tikz[remember picture] \node[coordinate, xshift=1.597cm] (n2) {};
            \item Issue 3
          \end{itemize}
          \visible<2->{
            \begin{tikzpicture}[overlay,remember picture]
              \draw[thick,decorate,decoration={brace,amplitude=5pt}]
                (n1) -- (n2) node[midway, right=4pt] {One and two are cool};
            \end{tikzpicture}
          } % end visible
        \end{frame}

    which produces the desired result. The unsatisfying thing is that I had to figure out the xshift value of 1.597cm by trial and error (more or less); without the xshift argument, the brace comes out slanted. I guess there is an elegant way to avoid the explicit xshift value. The best way would imho be to calculate the maximum x value of the two nodes and use that (as already suggested by Geoff). But it would already be very handy to be able to explicitly set the absolute x values of both nodes while keeping their current y values. This would avoid the fiddly procedure of adapting the third decimal place to ensure that the brace looks vertical.
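
    One way to get a vertical brace without any hand-tuned shift (a sketch, untested against the original frame) is TikZ's perpendicular coordinate syntax: (n2 |- n1) means "the x coordinate of n2 combined with the y coordinate of n1", so both brace endpoints share n2's x value automatically:

        \draw[thick,decorate,decoration={brace,amplitude=5pt}]
          (n2 |- n1) -- (n2) node[midway, right=4pt] {One and two are cool};

    With that, the xshift on n2 can be dropped entirely, or kept as a single coarse value purely to push the brace clear of the longer first line — the two endpoints stay aligned either way.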

    Read the article

  • Can't access annotation property of subclassed uibutton

    - by Tzur Gazit
    I have a mapView to which I add annotations. The pins' callouts have a button (rightCalloutAccessoryView). In order to be able to display various information when the button is pushed, I've subclassed UIButton and added a class called "Annotation".

        @interface CustomButton : UIButton {
            NSIndexPath *indexPath;
            Annotation *mAnnotation;
        }
        @property (nonatomic, retain) NSIndexPath *indexPath;
        @property (nonatomic, copy) Annotation *mAnnotation;
        - (id) setAnnotation2:(Annotation *)annotation;
        @end

    Here is "Annotation":

        @interface Annotation : NSObject <MKAnnotation> {
            CLLocationCoordinate2D coordinate;
            NSString *mPhotoID;
            NSString *mPhotoUrl;
            NSString *mPhotoName;
            NSString *mOwner;
            NSString *mAddress;
        }
        @property (nonatomic, assign) CLLocationCoordinate2D coordinate;
        @property (nonatomic, copy) NSString *mPhotoID;
        @property (nonatomic, copy) NSString *mPhotoUrl;
        @property (nonatomic, copy) NSString *mPhotoName;
        @property (nonatomic, copy) NSString *mOwner;
        @property (nonatomic, copy) NSString *mAddress;
        - (id) initWithCoordinates:(CLLocationCoordinate2D)coordinate;
        - (id) setPhotoId:(NSString *)id url:(NSString *)url owner:(NSString *)owner address:(NSString *)address andName:(NSString *)name;
        @end

    I want to set the annotation property of the button in - (MKAnnotationView *)mapView:(MKMapView *)pMapView viewForAnnotation:(id)annotation, in order to refer to it in the button push handler (- (IBAction)showDetails:(id)sender). The problem is that I can't set the annotation property of the button. I get the following message at run time:

        2010-04-27 08:15:11.781 HotLocations[487:207] *** -[UIButton setMAnnotation:]: unrecognized selector sent to instance 0x5063400
        2010-04-27 08:15:11.781 HotLocations[487:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UIButton setMAnnotation:]: unrecognized selector sent to instance 0x5063400'
        2010-04-27 08:15:11.781 HotLocations[487:207] Stack: ( 32080987, 2472563977, 32462907, 32032374, 31884994, 55885, 30695992, 30679095, 30662137, 30514190, 30553882, 30481385, 30479684, 30496027, 30588515, 63333386, 31865536, 31861832, 40171029, 40171226, 2846639 )

    I appreciate the help. Tzur.
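
    A reading of the exception (my diagnosis, so treat it as a guess): the receiver really is a plain UIButton, not a CustomButton, which suggests the callout button was created with buttonWithType: — and on the SDKs of that era, even [CustomButton buttonWithType:...] handed back a UIButton instance rather than the subclass. Allocating the subclass directly sidesteps that:

        // in viewForAnnotation:, annotationView being the MKAnnotationView you configure
        CustomButton *button = [[[CustomButton alloc] initWithFrame:CGRectMake(0, 0, 29, 31)] autorelease];
        button.mAnnotation = (Annotation *)annotation;
        annotationView.rightCalloutAccessoryView = button;

    Separately, the mAnnotation property is declared copy, which requires Annotation to implement NSCopying; declaring it retain avoids that requirement.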

    Read the article

  • Alternative to as3isolib?

    - by tedw4rd
    Hi everyone, I've been working on a Flash game that involves an isometric space. I've been using as3isolib for a while now, and I'm less than impressed with how easy it is to use. Whether I'm approaching it the wrong way or it's just not that great to use is a question for another post. Anyways, I've been thinking of a different way to approach the problem of isometric positions, and I think I've got an idea that might work. Essentially, each object that is to be rendered to the iso-space maintains a 3-coordinate position. Those items are then registered with a camera that projects that 3-coordinate position to a 2-coordinate point on the screen according to the math on this Wikipedia article. Then, the MovieClip is added to the stage (or to the camera's MovieClip, perhaps) at that point, and at a child index of the point's y-value. That way, I figure objects that are closer to the camera will be "above" the objects further away, and will get rendered over them. So my question, then, is two-fold: Do you think this idea will work the way I think it will? Are there any existing 3D matrix/vector packages that I should look at? I know there's a Matrix3 class in Flex 3, but we're not using Flex for this game. Thanks!
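
    For what it's worth, a sketch of that projection in ActionScript 3 (the 30° mapping from the linked Wikipedia article; the names are mine, and the depth key makes the y-index idea explicit):

        import flash.geom.Point;

        function isoProject(x:Number, y:Number, z:Number):Point {
            // x runs right, y runs "into" the floor, z is height
            var sx:Number = (x - y) * Math.cos(Math.PI / 6);        // 30 degrees
            var sy:Number = (x + y) * Math.sin(Math.PI / 6) - z;
            return new Point(sx, sy);
        }

        // depth sort: larger (x + y) means nearer the camera,
        // so it should get a higher child index
        function depthKey(x:Number, y:Number):Number {
            return x + y;
        }

    Sorting child indices by depthKey rather than by the raw screen y is slightly more robust, since screen y shifts once objects gain height (z) while (x + y) does not.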

    Read the article

  • Problem with setVisible (true)

    - by Jessy
    The two examples shown below should be the same. Both are supposed to produce the same result, e.g. print the coordinates of the images displayed on the JPanel. Example 1 works perfectly (prints the coordinates of the images); however, example 2 returns 0 for the coordinates. I was wondering why, because I have put setVisible(true) after adding the panel in both examples. The only difference is that example 1 extends JPanel and example 2 extends JFrame.

    EXAMPLE 1:

        public class Grid extends JPanel {
            public static void main(String[] args) {
                JFrame jf = new JFrame();
                jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                final Grid grid = new Grid();
                jf.add(grid);
                jf.pack();
                Component[] components = grid.getComponents();
                for (Component component : components) {
                    System.out.println("Coordinate: " + component.getBounds());
                }
                jf.setVisible(true);
            }
        }

    EXAMPLE 2:

        public class Grid extends JFrame {
            public Grid() {
                setLayout(new GridBagLayout());
                GridBagLayout m = new GridBagLayout();
                Container c = getContentPane();
                c.setLayout(m);
                GridBagConstraints con = new GridBagConstraints();
                // construct the JPanel
                pDraw = new JPanel();
                ...
                m.setConstraints(pDraw, con);
                pDraw.add(new GetCoordinate()); // call new class to generate the coordinate
                c.add(pDraw);
                pack();
                setVisible(true);
            }

            public static void main(String[] args) {
                new Grid();
            }
        }
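
    A likely explanation (mine, not from the post): a component only has real bounds after a layout pass has run on its container. In example 1 the coordinates are read from grid after jf.pack() has laid everything out; in example 2 the coordinates are apparently read inside GetCoordinate, i.e. while the component tree is still being built and before pack() has done any layout, so every bound is still 0. Reading after pack() should fix it:

        pDraw.add(new GetCoordinate());
        c.add(pDraw);
        pack();   // layout happens here; bounds become non-zero
        for (Component component : pDraw.getComponents()) {
            System.out.println("Coordinate: " + component.getBounds());
        }
        setVisible(true);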

    Read the article

  • Projecting an object into a scene based on world coordinates only

    - by user354862
    I want to place a 3D image into a scene based on world/global coordinates. I have an image of a scene. The image was captured at some global coordinate (x1, y1, z1). I am given an object that needs to be placed into this scene based on its global coordinate (x2, y2, z2). This object needs to be projected into the scene accurately, similarly to a perspective projection. An example may help to make this clear. Imagine there is a parking lot with some set of global coordinates. A picture is taken of a portion of the parking lot. The coordinates of the spot where the image was taken are recorded. The goal is to place a virtual vehicle into this image using the global coordinates for that vehicle. Because the global coordinates for the vehicle may not be in the FOV of the global coordinates for the image, I am assuming that I will need the image coordinates, angle and possibly FOV. 3D graphics is not my area, so I have been looking at http://en.wikipedia.org/wiki/Perspective_projection#Perspective_projection. I have also been looking at Matrix3DProjection, which seems to possibly be what I am looking for, but it only works in Silverlight and I am trying to do this in WPF. In my mind it appears I need to determine the (X,Y,Z) coordinates that are in the FOV of the image, determine the world-coordinate-to-pixel conversion, and then accurately project the vehicle into the image, giving it the correct perspective such that it looks 3D, i.e. smaller when further away, bigger when closer. Is there a function within WPF that can help with this, or will I need to re-learn matrices and do this by hand?
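
    The by-hand version is shorter than it sounds. A scalar sketch (every name here is mine; no WPF API is assumed) of a pinhole projection for a camera at the capture point, looking down its own +Z axis:

        // camX/camY/camZ: the vehicle's position expressed in the camera's
        // frame (world point minus camera position, then rotated by the
        // inverse of the camera's orientation)
        double focal = imageWidthPx / (2.0 * Math.Tan(horizontalFov / 2.0));
        double px = imageWidthPx / 2.0 + focal * camX / camZ;
        double py = imageHeightPx / 2.0 - focal * camY / camZ;
        double scale = focal / camZ;   // sprite scale: shrinks with distance

    The divide by camZ is what produces the "smaller when further away" behavior; everything else is bookkeeping to get the point into the camera's frame, and camZ <= 0 means the vehicle is outside the image's FOV.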

    Read the article

  • Envista: Coordinating Utilities with Oracle Spatial 11g

    - by stephen.garth
    It's annoying when the same streets seem to be perpetually dug up for utility construction or maintenance by your water or sewer department, electric utility, gas company or telephone company. Can't they do a better job of coordinating these activities? In this podcast, Marc Fagan, Executive VP of Product Management from Envista, describes a Software-as-a-Service solution that Envista provides for utilities and public works departments to coordinate upcoming construction work, using Oracle Database 11g with Oracle Spatial. Each participating utility enters key data into the Web-based application, including when and where their work is to take place, and who to contact for more information. The data is then available on a common base map, enabling all participants to coordinate their activities, save money, and minimize inconvenience to their customers. Listen to the podcast | Find out more about Oracle Spatial 11g

    Read the article

  • Screen space to world space

    - by user13414
    I am writing a 2D game where my game world has the x axis running left to right, the y axis running top to bottom, and the z axis out of the screen. Whilst my game world is top-down, the game is rendered on a slight tilt. I'm working on projecting from world space to screen space, and vice-versa. I have the former working as follows:

        var viewport = new Viewport(0, 0, this.ScreenWidth, this.ScreenHeight);
        var screenPoint = viewport.Project(worldPoint.NegateY(),
            this.ProjectionMatrix, this.ViewMatrix, this.WorldMatrix);

    The NegateY() extension method does exactly what it sounds like, since XNA's y axis runs bottom to top instead of top to bottom. The screenshot above shows this all working. Basically, I have a bunch of points in 3D space that I then render in screen space. I can modify camera properties in real time and see it animate to the new position. Obviously my actual game will use sprites rather than points and the camera position will be fixed, but I'm just trying to get all the math in place before getting to that. Now, I am trying to convert back the other way. That is, given an x and y point in screen space above, determine the corresponding point in world space. So if I point the cursor at, say, the bottom-left of the green trapezoid, I want to get a world space reading of (0, 480). The z coordinate is irrelevant. Or, rather, the z coordinate will always be zero when mapping back to world space. Essentially, I want to implement this method signature:

        public Vector2 ScreenPointToWorld(Vector2 point)

    I've tried several things to get this working but am just having no luck. My latest thinking is that I need to call Viewport.Unproject twice with differing near/far z values, calculate the resultant Ray, normalize it, then calculate the intersection of the Ray with a Plane that basically represents ground level of my world. However, I got stuck on the last step and wasn't sure whether I was over-complicating things. Can anyone point me in the right direction on how to achieve this?
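
    The two-unproject plan described above does work; here is a sketch of it (assuming the same NegateY convention used when projecting, and a ground plane at world z = 0):

        public Vector2 ScreenPointToWorld(Vector2 point)
        {
            var viewport = new Viewport(0, 0, this.ScreenWidth, this.ScreenHeight);
            Vector3 near = viewport.Unproject(new Vector3(point, 0f),
                this.ProjectionMatrix, this.ViewMatrix, this.WorldMatrix);
            Vector3 far = viewport.Unproject(new Vector3(point, 1f),
                this.ProjectionMatrix, this.ViewMatrix, this.WorldMatrix);

            var ray = new Ray(near, Vector3.Normalize(far - near));
            float? d = ray.Intersects(new Plane(Vector3.UnitZ, 0f)); // the ground plane
            if (d == null)
                return Vector2.Zero;  // ray parallel to the ground; no sensible answer

            Vector3 hit = ray.Position + ray.Direction * d.Value;
            return new Vector2(hit.X, -hit.Y);  // undo the NegateY flip
        }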

    Read the article

  • Depth interpolation for z-buffer, with scanline

    - by Twodordan
    I have to write my own software 3D rasterizer, and so far I am able to project my 3D model made of triangles into 2D space. I rotate, translate and project my points to get a 2D space representation of each triangle. Then, I take the 3 triangle points and implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the edges (left and right) of the triangles, so that I can scan the triangle horizontally, row by row, and fill it with pixels. This works. Except I also have to implement z-buffering. This means that, knowing the rotated and translated z coordinates of the 3 vertices of the triangle, I must interpolate the z coordinate for all other points I find with my scanline algorithm. The concept seems clear enough; I first find Za and Zb with these calculations:

        var Z_Slope = (bottom_point_z - top_point_z) / (bottom_point_y - top_point_y);
        var Za = top_point_z + ((current_point_y - top_point_y) * Z_Slope);

    Then for each Zp I do the same interpolation horizontally:

        var Z_Slope = (right_z - left_z) / (right_x - left_x);
        var Zp = left_z + ((current_point_x - left_x) * Z_Slope);

    And of course I add to the zBuffer, if the current z is closer to the viewer than the previous value at that index. (My coordinate system is x: left - right; y: top - bottom; z: your face - computer screen.) The problem is, it goes haywire. The project is here, and if you select the "Z-Buffered" radio button, you'll see the results... (Note that the rest of the options before "Z-Buffered" use the painter's algorithm to correctly order the triangles. I also use the painter's algorithm -only- to draw the wireframe in "Z-Buffered" mode, for debugging purposes.) PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears that there's no change. What am I missing? (Could anyone clarify precisely where you must turn z into 1/z and where to turn it back?)
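
    On the PS question, a sketch of where the reciprocal goes (my notation, matching the variables above): invert once per vertex, interpolate only the inverted values, and invert back only if the true depth is ever needed. Inverting after interpolating changes nothing visible, which may be what happened.

        // per vertex, right after projection:
        var top_inv_z    = 1 / top_point_z;
        var bottom_inv_z = 1 / bottom_point_z;

        // interpolate the reciprocals, never the raw z:
        var slope  = (bottom_inv_z - top_inv_z) / (bottom_point_y - top_point_y);
        var inv_Za = top_inv_z + ((current_point_y - top_point_y) * slope);
        // ...same horizontally to get inv_Zp, then:

        // depth test directly on 1/z (larger reciprocal = closer to the viewer):
        if (inv_Zp > zBuffer[index]) { zBuffer[index] = inv_Zp; /* plot pixel */ }

    One caveat: the reciprocal trick only matters under perspective projection; with a plain rotate-translate-drop-z projection, linear z interpolation is already correct and the bug is elsewhere.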

    Read the article

  • Auto Save and Auto Load Game onto the Device's Storage Concept Question

    - by David Dimalanta
    I'm trying to make a simple app that will test save and load state. Is it a good idea to make an app that only auto-saves and auto-loads — a new player opens the app the first time, then continues where they left off another day? I tried making a sprite that is moving, starting at the center. When I close and re-open the app, the sprite goes back to the center instead of the last coordinate where the sprite landed (i.e. at the top). The sequence of saving and loading I want goes like this:

        1. I open the app.
        2. The sprite starts at the center. The app displays the sprite's coordinates plus the number of times the sprite has moved.
        3. I exit the app, which automatically saves the game without notice.
        4. Finally, when I re-open it, it automatically loads the game, retaining the number of times the sprite moved, its coordinates, and where it landed.

    These steps (sprite movement test aside) are similar to the sequence of saving and loading the game's level and record in Jewel Stackers for Android. Also, by default, if there is no SD card in a tab or phone that runs on Android, does it automatically save/load to the internal storage, or to the APK file itself? Is auto save and auto load also the right tool for protecting and fetching information (i.e. fastest time, the last coordinates of the sprite, etc.)?
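
    A sketch of the save/load pair with plain Android SharedPreferences (the names are mine; this also answers the storage question — preferences live in the app's internal storage, never on the SD card and never inside the APK, which is read-only):

        SharedPreferences prefs = context.getSharedPreferences("game", Context.MODE_PRIVATE);

        // auto-save, e.g. from onPause():
        prefs.edit()
             .putFloat("spriteX", sprite.x)
             .putFloat("spriteY", sprite.y)
             .putInt("moves", moveCount)
             .apply();

        // auto-load, e.g. from onCreate() / onResume():
        sprite.x  = prefs.getFloat("spriteX", centerX);  // defaults: the center
        sprite.y  = prefs.getFloat("spriteY", centerY);
        moveCount = prefs.getInt("moves", 0);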

    Read the article

  • Technique to have screen independent grid based puzzle with sprite animation

    - by Yan Cheng CHEOK
    Hello all. Let's say I have a fixed-size grid puzzle game (8 x 10). I will be using sprite animation when the "pieces" in the puzzle move from one grid cell to another. I was wondering what the technique is for implementing this game so that it is screen-resolution independent. Here is what I plan to do:

    1) The data structure coordinates will be represented using doubles, with 1.0 as the max value.

        // Puzzle grid of 8 x 10
        Environment {
            double width = 0.8;
            double height = 1.0;
        }

        // Location of Sprite at coordinate (1, 1)
        Sprite {
            double posX = 0.1;
            double posY = 0.1;
            double width = 0.1;
            double height = 0.1;
        }

        // scale = PHYSICAL_SCREEN_SIZE
        drawBitmap(
            sprite_image,
            sprite_image_rect,
            new Rect(sprite.posX * Scale, sprite.posY * Scale,
                     (sprite.posX + sprite.width) * Scale,
                     (sprite.posY + sprite.height) * Scale),
            paint
        );

    2) A large sprite image will be used (128x128), as a sprite image will look fine when scaled from a large size down to a small size, but not vice versa.

    Besides the above-mentioned techniques, is there any other consideration I may have missed?

    Read the article

  • Obtaining a world point from a screen point with an orthographic projection

    - by vargonian
    I assumed this was a straightforward problem but it has been plaguing me for days. I am creating a 2D game with an orthographic camera. I am using a 3D camera rather than just hacking it because I want to support rotating, panning, and zooming. Unfortunately the math overwhelms me when I'm trying to figure out how to determine if a clicked point intersects a bounds (let's say rectangular) in the game. I was under the impression that I could simply transform the screen point (the clicked point) by the inverse of the camera's View * Projection matrix to obtain the world coordinates of the clicked point. Unfortunately this is not the case at all; I get some point that seems to be in some completely different coordinate system. So then as a sanity check I tried taking an arbitrary world point and transforming it by the camera's View*Projection matrices. Surely this should get me the corresponding screen point, but even that didn't work, and it is quickly shattering any illusion I had that I understood 3D coordinate systems and the math involved. So, if I could form this into a question: How would I use my camera's state information (view and projection matrices, for instance) to transform a world point to a screen point, and vice versa? I hope the problem will be simpler since I'm using an orthographic camera and can make several assumptions from that. I very much appreciate any help. If it makes a difference, I'm using XNA Game Studio.
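
    For the record, a sketch of both directions with XNA's Viewport helpers — this is the part the raw inverse-matrix multiply misses: Project/Unproject also apply the viewport transform that maps normalized device coordinates to pixels, which is why hand-multiplying by the inverse of View*Projection lands in a "completely different coordinate system".

        // world -> screen: screen.X/.Y are pixels, screen.Z is depth in [0,1]
        Vector3 screen = viewport.Project(worldPoint,
            camera.Projection, camera.View, Matrix.Identity);

        // screen -> world: unproject at near and far, then intersect z = 0
        Vector3 near = viewport.Unproject(new Vector3(mouseX, mouseY, 0f),
            camera.Projection, camera.View, Matrix.Identity);
        Vector3 far  = viewport.Unproject(new Vector3(mouseX, mouseY, 1f),
            camera.Projection, camera.View, Matrix.Identity);
        Vector3 dir  = far - near;
        float t = -near.Z / dir.Z;           // parameter where the ray meets z = 0
        Vector3 worldClick = near + dir * t; // works for orthographic cameras too

    camera.Projection and camera.View are placeholders for however the camera state is actually stored.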

    Read the article

  • XNA texture stretching at extreme coordinates

    - by Shaun Hamman
    I was toying around with infinitely scrolling 2D textures using the XNA framework and came across a rather strange observation. Using the basic draw code:

        spriteBatch.Begin(SpriteSortMode.Deferred, null, SamplerState.PointWrap, null, null);
        spriteBatch.Draw(texture, Vector2.Zero, sourceRect, Color.White,
            0.0f, Vector2.Zero, 2.0f, SpriteEffects.None, 1.0f);
        spriteBatch.End();

    with a small 32x32 texture and a sourceRect defined as:

        sourceRect = new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height);

    I was able to scroll the texture across the window infinitely by changing the X and Y coordinates of the sourceRect. Playing with different coordinate locations, I noticed that if I made either of the coordinates too large, the texture no longer drew and was instead replaced by either a flat color or alternating bands of color. Tracing the coordinates back down, I found the following at around (0, -16,777,000): the texture in the top half of the image is stretched vertically. My question is why is this occurring? Certainly I can do things like bind the x/y position to some low multiple of 32 to give the same effect without this occurring, so fixing it isn't an issue, but I'm curious about why this happens. My initial thought was perhaps it was overflowing the coordinate value or some such thing, but looking at a data type size chart, the next size below is a short with a range of about 32,000, and above is an int with a range of around 2,000,000,000, so that isn't likely the cause.
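
    A hedged explanation (it fits the numbers, though I can't see the pipeline): the coordinate isn't overflowing an integer, it's exhausting single-precision float resolution, since positions become floats on their way to the GPU. 16,777,216 is exactly 2^24, the point where consecutive floats are 2.0 apart, so texel-sized offsets stop being representable. A quick illustration in C#:

        float a = 16777000f;                  // just below 2^24: float spacing is 1.0
        Console.WriteLine(a + 0.5f == a);     // True - sub-texel offsets already collapse
        float b = 16777216f;                  // exactly 2^24: spacing becomes 2.0
        Console.WriteLine(b + 1.0f == b);     // True - whole texels start collapsing

    Wrapping the scroll offset with a modulo (e.g. by the 32-texel tile size) keeps coordinates small, which is effectively the "low multiple of 32" workaround already mentioned.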

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. I'm creating a floating platform that I would like to travel from one designated point to another, then return to the first, passing between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel multiples of whole tile values of world data, so if the platform is not stationary, it will travel at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Upon reaching one tile length's distance from its destination, I would like it to slow to a stop at the given tile coordinate, and then repeat the process in reverse. The first two parts aren't too difficult; essentially I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, but since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to a value storing the platform's current speed once it reaches one tile's length from the target (assuming the platform is traveling more than one tile length, but to keep things simple, let's assume it is). But then the question is: what would the correct value for that acceleration be to produce this effect? How would I find that value?
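
    Constant-acceleration kinematics gives the value directly (my derivation, not from the post): with current speed v and remaining distance d to the target tile, v_f^2 = v^2 + 2*a*d, and setting the final speed v_f to zero leaves

        a = -v^2 / (2 * d)

    So stopping from the max speed over exactly one tile length means a = -maxSpeed^2 / (2 * tileLength), applied every update until the platform reaches the tile; then clamp the position to the tile coordinate and zero the speed, to absorb floating-point drift.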

    Read the article

  • How can I ensure my Collada model fits on an iPhone screen?

    - by rakeshNS
    Hi, I am new to game development. I have seen many examples and tried things myself, like displaying a triangle, a cube, etc. Now I am looking to render a Collada object, so I created one using Google SketchUp and am trying to render it now. But the thing I don't understand is that in all the examples, the vertices are between -1.0 and +1.0, whereas when I looked into the Collada file, the vertices were ranging from -30.0 to 90.0. I know any vertices greater than 1.0 will not display on the iPhone. So can you please tell me the secret behind converting object coordinates to normalized coordinates? My previous triangle was defined as:

        struct Vertex {
            float Position[3];
            float Color[4];
        };

        const Vertex Vertices[] = {
            {{-0.5, -0.866}, {1, 1, 0.5f, 1}},
            {{0.5, -0.866}, {1, 1, 0.5, 1}},
            {{0, 1}, {1, 1, 0.5, 1}},
            {{-0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0, -0.4f}, {0.5f, 0.5f, 0.5f}},
        };

    And now the triangle from the Collada file is:

        const Vertex Vertices[] = {
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5f, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
        };
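
    Strictly speaking, only clip-space coordinates must land in [-1, 1]; a projection (and model-view) matrix normally does that mapping, so large vertices are fine as-is. But to literally rescale a model into the unit cube, center it on its bounding box and divide by half the largest extent. A C sketch (the helper name is mine):

        #include <float.h>
        #include <math.h>

        void normalizeVertices(Vertex *v, int count) {
            float lo[3] = {FLT_MAX, FLT_MAX, FLT_MAX};
            float hi[3] = {-FLT_MAX, -FLT_MAX, -FLT_MAX};
            for (int i = 0; i < count; i++)             // find the bounding box
                for (int k = 0; k < 3; k++) {
                    if (v[i].Position[k] < lo[k]) lo[k] = v[i].Position[k];
                    if (v[i].Position[k] > hi[k]) hi[k] = v[i].Position[k];
                }
            float center[3], half = 0.0f;
            for (int k = 0; k < 3; k++) {
                center[k] = 0.5f * (lo[k] + hi[k]);
                half = fmaxf(half, 0.5f * (hi[k] - lo[k]));  // largest half-extent
            }
            for (int i = 0; i < count; i++)             // uniform rescale into [-1, 1]
                for (int k = 0; k < 3; k++)
                    v[i].Position[k] = (v[i].Position[k] - center[k]) / half;
        }

    Dividing all three axes by the same value keeps the model's proportions; per-axis division would squash it.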

    Read the article

  • The purpose of using invert and transpose

    - by user699215
    In OpenGL ES, and the world of 3D generally, why use the inverse matrix? The thing is that I don't have any intuition for why it is used, so please correct me: As far as I understand, it is used in shaders, and can help you to figure out the opposite direction of the normals? The inverse for ordinary numbers works like this: the product of a number and its multiplicative inverse is 1; observe that 3/5 * 5/3 = 1. For a matrix, the analogous product gives you the identity matrix, which is the base coordinate system, or the origin of world space, right? But then the inverse is... some other coordinate system? You can use the transpose (row-major order to column-major order) of a square matrix to find the inverted matrix, as calculating the inverse is process-heavy, and the transpose gives you the inverted matrix as a by-product? Again, I am looking to get some intuition about this, and therefore be able to use it as intended. Thank you for any reply that will guide me in the right direction. Regards
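
    The shader use this question is circling around is probably the normal matrix (my framing, hedged accordingly): normals must be transformed by the inverse-transpose of the model-view matrix so they stay perpendicular to surfaces under non-uniform scaling, and when the matrix is rotation-plus-translation only, its upper 3x3 is orthogonal, so its transpose IS its inverse — that is the sense in which the cheap transpose "gives you" the inverse. With Android's android.opengl.Matrix, for instance:

        float[] modelView = new float[16], inv = new float[16], normalMatrix = new float[16];
        android.opengl.Matrix.multiplyMM(modelView, 0, view, 0, model, 0);
        android.opengl.Matrix.invertM(inv, 0, modelView, 0);       // exact, but costs more
        android.opengl.Matrix.transposeM(normalMatrix, 0, inv, 0); // the inverse-transpose

    The other everyday use: an inverse matrix undoes a transform, so the inverse of the view matrix converts camera-space points back to world space — it really is the mapping to "some other coordinate system", run in reverse.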

    Read the article

  • Android - Efficient way to draw tiles in OpenGL ES

    - by Maecky
    Hi, I am trying to write efficient code to render a tile-based map in Android. I load the corresponding bitmap for each tile (just one time) and then create the tiles. I have designed a class to do this:

        public class VertexQuad {
            private float[] mCoordArr;
            private float[] mColArr;
            private float[] mTexCoordArr;
            private int mTextureName;
            private static short mCounter = 0;
            private short mIndex;

    As you can see, each tile has its x,y location, a color array, texture coordinates and a texture name. Now, I want to render all my created tiles. To reduce the OpenGL API calls (I read somewhere that state changes are costly, and therefore I want to keep them to a minimum), I first want to hand ALL the coordinate arrays, color arrays and texture coordinates over to OpenGL. After that I run two for loops. The first one iterates over the textures and binds the texture. The second for loop iterates over all tiles and puts all tiles with the corresponding texture into an index buffer. After the second for loop has finished, I call gl.glDrawElements() with the corresponding index buffer, to draw all tiles with the associated texture. For the next texture I do the same again. Now I run into some problems: allocating and filling the FloatBuffers at the start of each rendering cycle costs very much time. I just ran a test where I wanted to put 400 coordinates into a FloatBuffer, which took about 200ms. My questions now are: Is there a better way to handle the coordinate and color structures? How is this done correctly? This is obviously not the optimal way. ;) Thanks in advance, regards Markus
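
    A sketch of the usual remedy (the names are mine): allocate one direct FloatBuffer up front, sized for the worst case, and refill it each frame with a bulk put instead of reallocating — allocateDirect is the expensive part, not the copying.

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        private FloatBuffer vertexBuffer;

        void init(int maxFloats) {                      // once, at startup
            vertexBuffer = ByteBuffer.allocateDirect(maxFloats * 4)
                                     .order(ByteOrder.nativeOrder())
                                     .asFloatBuffer();
        }

        void beginFrame(float[] coords) {               // every frame
            vertexBuffer.clear();                       // resets position, keeps capacity
            vertexBuffer.put(coords);                   // one bulk copy
            vertexBuffer.position(0);
            // gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer); ...
        }

    A bulk put(float[]) is also drastically faster than 400 individual put(float) calls, which alone can account for a figure like 200ms on older Dalvik VMs.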

    Read the article

  • 2D Image Creator for a video game

    - by user1276078
    I need to make a few images for an arcade video game I'm making in Java. As of right now, I have drawings that animate, but there are two problems: the drawings are horrible, and as a result the game won't get enough attention; and it's a pain to have to change each coordinate for a drawing, as the drawings are fairly complex. I'd like to use images. I feel they could solve my problem: they would look better than the drawings, and each would only need an x and a y coordinate, rather than the many coordinates I need for each drawing. So, in a sense, I have two questions. Would images actually help? Would they solve my two problems? And, just to clarify, how would I make these images? I don't think I can copy them off the internet, because I plan on publishing this game. So, is there any software where you can make your own images? (It has to be an image type that Java can support, since I'm working with Java.) It also, as stated in the title, needs to be a 2D image, not 3D.
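
    On the first question: yes, in the sense described — once a frame is an image file, drawing it is a single call with one coordinate pair. A sketch (the file name is illustrative):

        import java.awt.image.BufferedImage;
        import javax.imageio.ImageIO;
        import java.io.File;

        BufferedImage sprite = ImageIO.read(new File("player.png"));
        // later, inside paintComponent(Graphics g):
        g.drawImage(sprite, x, y, null);   // one (x, y) instead of many coordinates

    PNG and GIF (both readable via ImageIO) support transparency, which matters for sprites; any raster editor that exports PNG will do for authoring.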

    Read the article

  • CSMA between APs in same channel & different SSID ?

    - by Ranganathan
    It would be great if someone could clarify this doubt. Let's assume two wireless access points, AP1 & AP2, with these conditions: 1. both on the same 802.11 standard, 2. the same channel, 3. using different SSIDs (just like in adjacent apartment houses). In this case, do these two access points (and the clients associated with them) coordinate via CSMA/CA? I.e., if one of the APs or a client station is about to transmit, does it wait and observe the other AP's & its clients' transmissions before sending the frame into the air? Also, do the clients associated with these different APs coordinate via CSMA/CA?

    Read the article

  • What do ptLineDist and relativeCCW do?

    - by Fasih Khatib
    I saw these methods in the Line2D Java docs but did not understand what they do. The Javadoc for ptLineDist says: Returns the distance from a point to this line. The distance measured is the distance between the specified point and the closest point on the infinitely-extended line defined by this Line2D. If the specified point intersects the line, this method returns 0.0. The doc for relativeCCW says: Returns an indicator of where the specified point (PX, PY) lies with respect to the line segment from (X1, Y1) to (X2, Y2). The return value can be either 1, -1, or 0 and indicates in which direction the specified line must pivot around its first endpoint, (X1, Y1), in order to point at the specified point (PX, PY). A return value of 1 indicates that the line segment must turn in the direction that takes the positive X axis towards the negative Y axis. In the default coordinate system used by Java 2D, this direction is counterclockwise. A return value of -1 indicates that the line segment must turn in the direction that takes the positive X axis towards the positive Y axis. In the default coordinate system, this direction is clockwise. A return value of 0 indicates that the point lies exactly on the line segment. Note that an indicator value of 0 is rare and not useful for determining colinearity because of floating point rounding issues. If the point is colinear with the line segment, but not between the endpoints, then the value will be -1 if the point lies "beyond (X1, Y1)" or 1 if the point lies "beyond (X2, Y2)".
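
    A small usage sketch (the line and points are mine): with a segment running along the x axis from (0,0) to (10,0), the two methods behave like so:

        import java.awt.geom.Line2D;

        Line2D line = new Line2D.Double(0, 0, 10, 0);
        System.out.println(line.ptLineDist(5, 3));    // 3.0 - distance to the infinite line
        System.out.println(line.relativeCCW(5, 3));   // -1 - pivot clockwise (y-down axes)
        System.out.println(line.relativeCCW(5, -3));  //  1 - pivot counterclockwise
        System.out.println(line.relativeCCW(15, 0));  //  1 - colinear, but beyond (10, 0)
        System.out.println(line.relativeCCW(5, 0));   //  0 - exactly on the segment

    In short: ptLineDist measures perpendicular distance to the unbounded line, while relativeCCW classifies which side of the directed segment a point falls on.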

    Read the article

  • Bubble sorting my array does not sort it

    - by Trixmix
    I sort an array of chunks by doing this:

        for (int i = 0; i < this.getQueue().size(); i++) {
            for (int j = 0; j < this.getQueue().size() - i - 1; j++) {
                Chunk temp1 = this.getQueue().get(i);
                Chunk temp2 = this.getQueue().get(i+1);
                if (temp1 != null && temp2 != null
                        && temp2.getLocation().getY() < temp1.getLocation().getY()) {
                    this.getQueue().set(i, temp2);
                    this.getQueue().set(i+1, temp1);
                }
            }
        }

    What I want is for the chunks with the lowest Y coordinate to be at the start of the array and the ones with a bigger Y coordinate at the end. And this is my result:

        1024.0 944.0 1104.0 944.0 1104.0  ----BEFORE-----
        944.0 1024.0 944.0 1104.0 1104.0  ---AFTER---

    Why is this not working? It seems fine. I don't want to use a comparator, so don't suggest it. More info: the Y coords are floats. I got the results by for-each looping over this queue and printing the Y locations.
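
    The likely bug (my reading of the code above): the inner loop counts j, but the body reads, compares and swaps at indices i and i+1 — so each pass repeats one single comparison instead of bubbling along the list, which matches the half-sorted output. Indexed by j it becomes a normal bubble sort:

        for (int i = 0; i < this.getQueue().size(); i++) {
            for (int j = 0; j < this.getQueue().size() - i - 1; j++) {
                Chunk a = this.getQueue().get(j);
                Chunk b = this.getQueue().get(j + 1);
                if (a != null && b != null
                        && b.getLocation().getY() < a.getLocation().getY()) {
                    this.getQueue().set(j, b);      // swap at j / j+1, not i / i+1
                    this.getQueue().set(j + 1, a);
                }
            }
        }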

    Read the article

  • Portion from CGPDFPage + Scale (zoom)

    - by malcom
    I want to take a rect from a CGPDFPage (the portion of the page around the user's touch point (x,y)) and scale it by a scaleFactor (e.g. 2x). Below is the code I've used to get the CGPDFPage's rect. The problem with it is the scaleFactor support. The idea is:

    1) pageRect's size is pageRect.size * 2
    2) myThumbRect (the region to zoom) becomes resultImageSize / scaleFactor (because the final output will be scaleFactor times bigger)
    3) pointOfClick (x,y) becomes pointOfClick (2x,2y)
    4) scale up the context by the factor: CGContextScaleCTM(ctx, scaleFactor, -scaleFactor);
    5) grab the rect

    However, the result is an empty image. Any idea?

        -(UIImage *) zoomedPDFImageAtPoint:(CGPoint)pointOfClick size:(CGSize)resultImageSize scale:(CGFloat)scaleFactor {
            // get the rect of our page
            CGRect pageRect = CGPDFPageGetBoxRect(myPageRef, kCGPDFCropBox);

            // my thumb rect is a portion of our CGPDFPage with size 1/scaleFactor of resultImageSize;
            // then we need to scale the image portion by scaleFactor and draw it
            // in our resultImageSize-sized graphics context
            CGSize myThumbRect = resultImageSize;

            // page rect has size as original size * scaleFactor
            //resultImageSize = pageRect.size; // to remove; I've used it to see where the rect is printed in the final image
            pointOfClick = CGPointMake(-pointOfClick.x, -pointOfClick.y);
            NSLog(@"Click (%0.f,%0.f) Page (%0.f,%0.f ; %0.f,%0.f)", pointOfClick.x, pointOfClick.y,
                  pageRect.origin.x, pageRect.origin.y, pageRect.size.width, pageRect.size.height);

            // create a new context for the resulting image of my desired size
            UIGraphicsBeginImageContext(resultImageSize);
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSaveGState(ctx);

            // because rect is that for drawing in a flipped coordinate system, this translates
            // the lower-left corner of the rect in an upright coordinate system
            CGContextTranslateCTM(ctx, CGRectGetMinX(pageRect), CGRectGetMaxY(pageRect));

            // scale to flip the coordinate system so that the y axis goes up the drawing canvas
            CGContextScaleCTM(ctx, 1, -1);

            // translate so the origin is offset by exactly the rect origin
            CGContextTranslateCTM(ctx, -(pageRect.origin.x), -(pageRect.origin.y));

            // zoomedRect is the interesting region; the clickPoint is the center of this region
            CGRect zoomedRect = CGRectMake(-pointOfClick.x, (pageRect.size.height - (-pointOfClick.y)),
                                           myThumbRect.width, myThumbRect.height);
            zoomedRect.origin.y -= (myThumbRect.height / 2.0);
            zoomedRect.origin.x -= (myThumbRect.width / 2.0);
            NSLog(@"Zoom region at (%0.f,%0.f) (%0.f,%0.f)", zoomedRect.origin.x, zoomedRect.origin.y,
                  zoomedRect.size.width, zoomedRect.size.height);

            // now we need to move the clipped rect to the origin
            // x: x was moved by subtracting the current click x coordinate and adding half
            //    of the zoomed rect (because zoomedRect contains pointOfClick at its center)
            // same with y but inverse (because the CTM is flipped)
            CGPoint translateToOrigin = CGPointMake(pointOfClick.x + (zoomedRect.size.width / 2.0),
                -pointOfClick.y - (zoomedRect.size.height / 2.0)); //(pageRect.size.height-zoomedRect.size.height)+pointOfClick.y);
            NSLog(@"Translate zoomed region to origin using translate by (%0.f,%0.f)",
                  translateToOrigin.x, translateToOrigin.y);
            CGContextTranslateCTM(ctx, translateToOrigin.x, translateToOrigin.y);
            CGContextClipToRect(ctx, zoomedRect);

            // now draw the document
            CGContextDrawPDFPage(ctx, myPageRef);
            CGContextRestoreGState(ctx);

            // generate image
            UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return finalImage;
        }
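
    One concrete mismatch worth checking first (my observation from comparing the plan to the code, not a verified fix): step 4 calls for scaling the context by the zoom factor, but the code only flips — CGContextScaleCTM(ctx, 1, -1) — so the factor is never applied, and the clip/translate math then operates in the wrong units. Folding the factor into the flip keeps the rest of the math in page units:

        CGContextScaleCTM(ctx, scaleFactor, -scaleFactor);   // flip AND zoom in one step

    with zoomedRect then sized to resultImageSize.width / scaleFactor by resultImageSize.height / scaleFactor, as step 2 already describes. It is also worth logging zoomedRect against pageRect: if the negations have pushed the region outside the page box, the clip is empty and an empty image is exactly what comes back.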

    Read the article

  • Get the touch position inside the imageview in android

    - by Manikandan
    I have an ImageView in my activity, and I am able to get the position where the user touches the ImageView through an OnTouchListener. I place another image over that image where the user touches. I need to store the touch position (x,y) and use it in another activity, to show the tags. I stored the touch position in the first activity. In the first activity, my ImageView is at the top of the screen; in the second activity it's at the bottom of the screen. If I use the position stored from the first activity, it places the tag image at the top, not on the ImageView where I previously clicked in the first activity. Is there any way to get the position inside the ImageView?

    FirstActivity:

        cp.setOnTouchListener(new OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                Log.v("touched x val of cap img >>", event.getX() + "");
                Log.v("touched y val of cap img >>", event.getY() + "");
                x = (int) event.getX();
                y = (int) event.getY();
                tag.setVisibility(View.VISIBLE);

                int[] viewCoords = new int[2];
                cp.getLocationOnScreen(viewCoords);
                int imageX = x - viewCoords[0]; // viewCoords[0] is the X coordinate
                int imageY = y - viewCoords[1]; // viewCoords[1] is the y coordinate
                Log.v("Real x >>>", imageX + "");
                Log.v("Real y >>>", imageY + "");

                RelativeLayout rl = (RelativeLayout) findViewById(R.id.lay_lin);
                ImageView iv = new ImageView(Capture_Image.this);
                Bitmap bm = BitmapFactory.decodeResource(getResources(), R.drawable.tag_icon_32);
                iv.setImageBitmap(bm);
                RelativeLayout.LayoutParams params = new RelativeLayout.LayoutParams(
                        RelativeLayout.LayoutParams.WRAP_CONTENT,
                        RelativeLayout.LayoutParams.WRAP_CONTENT);
                params.leftMargin = x;
                params.topMargin = y;
                rl.addView(iv, params);

                Intent intent = new Intent(Capture_Image.this, Tag_Image.class);
                Bundle b = new Bundle();
                b.putInt("xval", imageX);
                b.putInt("yval", imageY);
                intent.putExtras(b);
                startActivity(intent);
                return false;
            }
        });

    In Tag_Image.java I used the following:

        im = (ImageView) findViewById(R.id.img_cam22);
        b = getIntent().getExtras();
        xx = b.getInt("xval");
        yy = b.getInt("yval");
        im.setOnTouchListener(new OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                int[] viewCoords = new int[2];
                im.getLocationOnScreen(viewCoords);
                int imageX = xx + viewCoords[0]; // viewCoords[0] is the X coordinate
                int imageY = yy + viewCoords[1]; // viewCoords[1] is the y coordinate
                Log.v("Real x >>>", imageX + "");
                Log.v("Real y >>>", imageY + "");

                RelativeLayout rl = (RelativeLayout) findViewById(R.id.lay_lin);
                ImageView iv = new ImageView(Tag_Image.this);
                Bitmap bm = BitmapFactory.decodeResource(getResources(), R.drawable.tag_icon_32);
                iv.setImageBitmap(bm);
                RelativeLayout.LayoutParams params = new RelativeLayout.LayoutParams(30, 40);
                params.leftMargin = imageX;
                params.topMargin = imageY;
                rl.addView(iv, params);
                return true;
            }
        });
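
    Two observations, offered as a sketch rather than a verified fix: first, event.getX()/getY() inside an OnTouchListener are already relative to the touched view, so subtracting getLocationOnScreen() in the first activity shifts an already view-local point. Second, a position that must survive a different layout is safest stored as a fraction of the view's size:

        // first activity: view-local fractions in 0..1
        float fx = event.getX() / (float) cp.getWidth();
        float fy = event.getY() / (float) cp.getHeight();

        // second activity: rescale to this screen's ImageView
        int tagX = (int) (fx * im.getWidth());
        int tagY = (int) (fy * im.getHeight());
        params.leftMargin = tagX;   // margins are relative to the parent layout, so add
        params.topMargin  = tagY;   // the ImageView's offset inside it if the two differ

    That keeps the tag on the same spot of the picture whether the ImageView sits at the top of one screen or the bottom of another.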

    Read the article

  • My Xcode mapview won't work when combined with webview

    - by user1715702
    I have a small problem with my map view. When I combine the map view code with the code for a web view, the app does not zoom in on my position correctly (it just gives me a world overview where I'm supposedly somewhere in California — wish I was), and it doesn't show the pin that I have placed at a specific location. These things work perfectly fine as long as the code does not contain anything concerning the web view. Below you'll find the code. If someone can help me solve this, I would be so thankful!

    ViewController.h:

        #import <UIKit/UIKit.h>
        #import <MapKit/MapKit.h>

        #define METERS_PER_MILE 1609.344

        @interface ViewController : UIViewController <MKMapViewDelegate> {
            BOOL _doneInitialZoom;
            IBOutlet UIWebView *webView;
        }
        @property (weak, nonatomic) IBOutlet MKMapView *_mapView;
        @property (nonatomic, retain) UIWebView *webView;
        @end

    ViewController.m:

        #import "ViewController.h"
        #import "NewClass.h"

        @interface ViewController ()
        @end

        @implementation ViewController
        @synthesize _mapView;
        @synthesize webView;

        - (void)viewDidLoad {
            [super viewDidLoad];
            [_mapView setMapType:MKMapTypeStandard];
            [_mapView setZoomEnabled:YES];
            [_mapView setScrollEnabled:YES];

            MKCoordinateRegion region = { {0.0, 0.0}, {0.0, 0.0} };
            region.center.latitude = 61.097557;
            region.center.longitude = 12.126545;
            region.span.latitudeDelta = 0.01f;
            region.span.longitudeDelta = 0.01f;
            [_mapView setRegion:region animated:YES];

            newClass *ann = [[newClass alloc] init];
            ann.title = @"Hjem";
            ann.subtitle = @"Her bor jeg";
            ann.coordinate = region.center;
            [_mapView addAnnotation:ann];

            NSString *urlAddress = @"http://google.no";
            // Create a URL object.
            NSURL *url = [NSURL URLWithString:urlAddress];
            // URL Request Object
            NSURLRequest *requestObj = [NSURLRequest requestWithURL:url];
            // Load the request in the UIWebView.
            [webView loadRequest:requestObj];
        }

        - (void)viewDidUnload {
            [self setWebView:nil];
            [self set_mapView:nil];
            [super viewDidUnload];
            // Release any retained subviews of the main view.
        }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
        }

        - (void)viewWillAppear:(BOOL)animated {
            // 1
            CLLocationCoordinate2D zoomLocation;
            zoomLocation.latitude = 61.097557;
            zoomLocation.longitude = 12.126545;
            // 2
            MKCoordinateRegion viewRegion = MKCoordinateRegionMakeWithDistance(zoomLocation,
                0.5 * METERS_PER_MILE, 0.5 * METERS_PER_MILE);
            // 3
            MKCoordinateRegion adjustedRegion = [_mapView regionThatFits:viewRegion];
            // 4
            [_mapView setRegion:adjustedRegion animated:YES];
        }
        @end

    NewClass.h:

        #import <UIKit/UIKit.h>
        #import <MapKit/MKAnnotation.h>

        @interface newClass : NSObject {
            CLLocationCoordinate2D coordinate;
            NSString *title;
            NSString *subtitle;
        }
        @property (nonatomic, assign) CLLocationCoordinate2D coordinate;
        @property (nonatomic, copy) NSString *title;
        @property (nonatomic, copy) NSString *subtitle;
        @end

    NewClass.m:

        #import "NewClass.h"

        @implementation newClass
        @synthesize coordinate, title, subtitle;
        @end
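
    A first check, offered as a guess because the symptom matches: an untouched MKMapView shows exactly that default world view over the US, and Objective-C silently ignores messages sent to nil, so if the _mapView outlet came unhooked while the web view was being wired up in the nib, every setRegion: call becomes a no-op with no error. Quick probes at the top of viewDidLoad:

        NSLog(@"mapView = %@, webView = %@", _mapView, webView);
        // nil for either one means that outlet is not connected in Interface Builder

    If the map view logs as nil, reconnecting the outlet in Interface Builder should restore both the zoom and the pin.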

    Read the article
