Search Results

Search found 1510 results on 61 pages for 'barycentric coordinates'.

Page 53/61 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • Raphael.js: Adding a new custom element

    - by Claudia
    I would like to create a custom Raphael element with custom properties and functions. This object must also contain predefined Raphael objects. For example, I would have a node class that contains a circle with text and some other elements inside it. The problem is adding this new object to a set: only Raphael objects can be added to sets, so a custom object that merely contains Raphael objects cannot be pushed into one. The code would look like this:

        var Node = function (paper) {
            // Coordinates & Dimensions
            this.x = 0,
            this.y = 0,
            this.radius = 0,

            this.draw = function () {
                this.entireSet = paper.set();
                var circle = paper.circle(this.x, this.y, this.radius);
                this.circleObj = circle;
                this.entireSet.push(circle);
                var text = paper.text(this.x, this.y, this.text);
                this.entireSet.push(text);
            }
            // other functions
        }

        var NodeList = function (paper) {
            this.nodes = paper.set(),
            this.populateList = function () {
                // in order to add a node to the set
                // the object must be of type Raphael object
                // otherwise the set will have no elements
                this.nodes.push(/* new node */)
            }
            this.nextNode = function () {
                // ...
            }
            this.previousNode = function () {
                // ...
            }
        }

    Read the article

  • Confused about UIView frame property

    - by slowfungus
    I'm building a prototype iPad app that draws diagrams. I have the following view hierarchy:

        UIView
            UIScrollView
                DiagramView : UIView
            TabBar
            NavigationBar

    and a UIViewController subclass holding all that together. Before drawing the diagram the first time, I calculate the dimensions of the diagram, set the DiagramView frame to that size, and set the content size of the scroll view as well:

        -(void)recalculateBounds
        {
            [renderer diagram:diagram shouldDraw:NO];
            SQXRect diagramRect = SQXMakeRect(0.0, 0.0, [diagram bounds].size.width, [diagram bounds].size.height);
            self.frame = diagramRect;
            [(UIScrollView*)[self superview] setContentSize:diagramRect.size];
        }

    I should disclose that the frame is being set to about 1500 x 3500, which I know is ridiculous; I just want to focus on some other parts of the app before I get into optimizing the render code. This works beautifully, except that the rect being passed to drawRect: is not the size that I set, and my drawing is getting clipped at the bottom. It's close to the size I set, but bigger in width and shorter in height. Also of note: if I force the frame to be much bigger than what I know the diagram needs, then the drawRect: rect is big enough and no clipping occurs. Of course this has me wondering whether the frame size needs to take into account some other screen real estate like the toolbars, but my reading of the docs tells me the frame is in superview coordinates, which would be the scroll view, so I reckon I shouldn't need to worry about such things. Any idea what is causing this discrepancy?

    Read the article

  • How can I tell when something outside my UITableViewCell has been touched?

    - by kk6yb
    Similar to this question, I have a custom subclass of UITableViewCell that has a UITextField. It's working fine, except the keyboard doesn't go away when the user touches a different table view cell or something outside the table. I'm trying to figure out the best place to find out when something outside the cell is touched; then I could call resignFirstResponder on the text field. If the UITableViewCell could receive touch events for touches outside of its view, it could just resignFirstResponder itself, but I don't see any way to get those events in the cell. The solution I'm considering is to add a touchesBegan:withEvent: method to the view controller. There I could send resignFirstResponder to all table view cells that are visible except the one that the touch was in (letting it get the touch event and handle it itself). Maybe something like this pseudocode:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            CGPoint touchPoint = // TBD - may need translate to cell's coordinates
            for (UITableViewCell *aCell in [theTableView visibleCells]) {
                if (![aCell pointInside:touchPoint withEvent:event]) {
                    [aCell resignFirstResponder];
                }
            }
        }

    I'm not sure this is the best way to go about it. There doesn't seem to be any way for the table view cell itself to receive event notifications for events outside its view.

    Read the article

  • Inconsistent Frame / Bounds of UIImage and UIScrollView

    - by iFloh
    This one puzzles me. I have an image with a width of 1600 pixels and a height of 1819 pixels. I load the image as a UIImageView into a UIScrollView and set the content size parameter:

        iKarteScrollView.contentSize = CGSizeMake(iKarteImageView.bounds.size.width, iKarteImageView.bounds.size.height);

        NSLog(@"Frame - width %.6f, height %.6f - Bounds - width %.6f, height %.6f",
              myImageView.frame.size.width, myView.frame.size.height,
              myImageView.bounds.size.width, myImageView.bounds.size.height);
        NSLog(@"Content size width %.6f, height %.6f",
              myScrollView.contentSize.width, myScrollView.contentSize.height);

    The NSLog shows the following:

        Frame - width 1600.000000, height 1819.000000 - Bounds - width 1600.000000, height 1819.000000
        Content size width 1600.000000, height 1819.000000

    Now comes the miracle: in a subsequent method of the same object I call the same NSLog again, but this time the result is

        Frame - width 405.000000, height 411.000000 - Bounds - width 1601.510864, height 1625.236938
        Content size width 404.617920, height 460.000000

    Why is the frame size suddenly 405 by 411 pixels? How can the bounds be 1601 by 1625, where 1625 is roughly 200 pixels less than the original size? When positioning a further UIImageView at the coordinates 20 by 1625, the UIImageView is displayed an estimated 200 pixels above the bottom of the content of the UIScrollView. I'm lost ...

    Read the article

  • CIE XYZ colorspace: do I have RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of xy-plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them on the picture myself). My pixel format stores the intensity of each of those three components the same way RGB does, together with an alpha value. This allows using pretty much any image-manipulation operation designed for RGBA without modifying the code. What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA, and RGBA would get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.
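
    On the mechanics, for what it's worth: if the three channels really are CIE XYZ tristimulus values, converting a pixel to sRGB for display is one matrix multiply plus gamma encoding. A minimal Java sketch with the standard D65 XYZ-to-sRGB matrix (class and method names are invented for illustration); alpha passes through untouched:

        // Sketch: converting an "XYZA" pixel to sRGB + alpha, assuming the three
        // channels are CIE XYZ tristimulus values. The matrix is the standard
        // XYZ -> linear sRGB (D65) matrix; names here are illustrative only.
        public final class XyzaPixel {
            public static double[] toSrgba(double x, double y, double z, double alpha) {
                // XYZ -> linear sRGB (IEC 61966-2-1, D65 white point)
                double r =  3.2406 * x - 1.5372 * y - 0.4986 * z;
                double g = -0.9689 * x + 1.8758 * y + 0.0415 * z;
                double b =  0.0557 * x - 0.2040 * y + 1.0570 * z;
                return new double[] { gamma(r), gamma(g), gamma(b), alpha };
            }

            // sRGB transfer function (linear -> gamma-encoded), clamped to [0, 1]
            private static double gamma(double c) {
                c = Math.min(1.0, Math.max(0.0, c));
                return c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1.0 / 2.4) - 0.055;
            }
        }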

    Read the article

  • Using OpenGL vertex buffers in C++.

    - by Ren
    I've loaded a Wavefront .obj file and drawn it in immediate mode, and it works fine. I'm now trying to draw the same model with a vertex buffer, but I have a question. My model data is organized in the following structures:

        struct Vert {
            double x;
            double y;
            double z;
        };

        struct Norm {
            double x;
            double y;
            double z;
        };

        struct Texcoord {
            double u;
            double v;
            double w;
        };

        struct Face {
            unsigned int v[3];
            unsigned int n[3];
            unsigned int t[3];
        };

        struct Model {
            unsigned int vertNumber;
            unsigned int normNumber;
            unsigned int texcoordNumber;
            unsigned int faceNumber;
            Vert * vertArray;
            Norm * normArray;
            Texcoord * texcoordArray;
            Face * faceArray;
        };

    As it is now, I don't think there is any redundant data, since multiple Face structures can point to the same vertex, normal, or texture coordinate. When I make VBOs for the vertex positions, normals, and texture coordinates, and assign data to them with glBufferData, do I have to make arrays with redundant data so that they will all have the same number of elements in the same order? I'd like to know if there is a simpler way to fill the buffers with the way I already have the model's data organized.
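
    Broadly yes: an OpenGL index buffer uses a single index per vertex for all attributes, so the usual options are to expand ("de-index") the (v, n, t) triples into flat parallel arrays, or to build one combined index over unique triples. A sketch of the expansion, written in Java since the algorithm is language-independent; the layouts mirror the structs above:

        // Sketch of "de-indexing": expanding per-face vertex indices into a flat
        // array so positions, normals and texcoords line up one-to-one.
        // positions[i] is {x, y, z}; faceVerts[f] holds the three v[] indices of
        // face f. The same loop applies to the n[] and t[] index arrays.
        final class Deindex {
            static float[] expand(double[][] positions, int[][] faceVerts) {
                float[] out = new float[faceVerts.length * 9]; // 3 corners x 3 floats
                int k = 0;
                for (int[] corners : faceVerts) {
                    for (int idx : corners) {
                        out[k++] = (float) positions[idx][0];
                        out[k++] = (float) positions[idx][1];
                        out[k++] = (float) positions[idx][2];
                    }
                }
                return out; // upload with glBufferData, render with glDrawArrays
            }
        }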

    Read the article

  • Textures in Opengl ES 2 not working properly

    - by Adl
    Hi! I'm working with OpenGL ES 2 on iPhone, and right now I am trying to get my textures working on my objects. I'm using .obj files, and all the data in them is correct. I have written a parser myself to retrieve all the data, and I convert it to static arrays in C. I discard the material properties for now, only getting the image path from the .mtl files manually. I have an object with 336 triangles (making this non-trivial to observe) with its vertices, vertex faces and texture coordinates (u, v). Passing all data into the shaders, the resulting image is this:

        http://img530.imageshack.us/img530/9637/pic1io.png
        http://img404.imageshack.us/img404/7358/pic2pg.png

    But it should look like this (displaying it in an object viewer; please ignore the material properties):

        http://img16.imageshack.us/img16/1401/pic3cq.png

    I'm using this image as a texture:

        http://img217.imageshack.us/img217/1300/shirtdiffuse.png

    I'm thinking it might have to do with texture coordinate faces? They are defined in my .obj file, and I'm not using them at all. In books and tutorials I have not found anything concerning this. Regards, Niclas

    Read the article

  • How to force positioned elements to stay within the viewable browser area?

    - by jessegavin
    I have a script which inserts "popup" elements into the DOM. It sets their top and left CSS properties relative to the mouse coordinates on a click event. It works great, except that the height of these "popup" elements is variable, and some of them extend beyond the viewable area of the browser window. I would like to avoid this. Here's what I have so far:

        <script type="text/javascript">
            $(function () {
                $("area").click(function (e) {
                    e.preventDefault();
                    var offset = $(this).offset();
                    var relativeX = e.pageX - offset.left;
                    var relativeY = e.pageY - offset.top;
                    // 'responseText' is the "popup" HTML fragment
                    $.get($(this).attr("href"), function (responseText) {
                        $(responseText).css({ top: relativeY, left: relativeX }).appendTo("#territories");
                        // Need to be able to determine
                        // viewable area width and height
                        // so that I can check if the "popup"
                        // extends beyond.
                        $(".popup .close").click(function () {
                            $(this).closest(".popup").remove();
                        });
                    });
                });
            });
        </script>

    Read the article

  • [iText] Inserting Image onCloseDocument

    - by David
    I'm trying to insert an image in the footer of my document using iText's onCloseDocument event. I have the following code:

        public void onCloseDocument(PdfWriter writer, Document document) {
            PdfContentByte pdfByte = writer.getDirectContent();
            try {
                // logo is a non-null global variable
                Image theImage = new Jpeg(logo);
                pdfByte.addImage(theImage, 400.0f, 0.0f, 0.0f, 400.0f, 0.0f, 0.0f);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    The code throws no exceptions, but it also fails to insert the image. This identical code is used in onOpenDocument to insert the same logo; the only difference between the two methods is the coordinates passed to pdfByte.addImage. However, I've tried quite a few different coordinates in onCloseDocument and none of them appear anywhere in my document. Is there any troubleshooting technique for detecting content which is displayed off-page in a PDF? If not, can anyone see the problem with my onCloseDocument method? Edit: As a followup, it seems that onCloseDocument puts its content on page document.length() + 1 (according to its API). However, I don't know how to change the page number back to document.length() and place my logo on the last page.
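
    For what it's worth, a common pattern is to draw per-page furniture from onEndPage, which fires while each page is still current; by onCloseDocument the last page has already been finished, which matches the document.length() + 1 observation above. A hedged sketch against the iText 2.x-style API (register it with writer.setPageEvent(...) before opening the document; names are illustrative):

        import com.lowagie.text.Document;
        import com.lowagie.text.Image;
        import com.lowagie.text.pdf.PdfPageEventHelper;
        import com.lowagie.text.pdf.PdfWriter;

        // Sketch: draw the footer image from onEndPage, where a real page
        // is still current, instead of onCloseDocument.
        public class FooterLogo extends PdfPageEventHelper {
            private final byte[] logo; // same image bytes as in the question

            public FooterLogo(byte[] logo) { this.logo = logo; }

            @Override
            public void onEndPage(PdfWriter writer, Document document) {
                try {
                    Image img = Image.getInstance(logo);
                    img.scaleAbsolute(400f, 400f);
                    // bottom-left of the page's usable area
                    img.setAbsolutePosition(document.left(), document.bottom());
                    writer.getDirectContent().addImage(img);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    If the image should appear only on the last page, the event handler could be armed with a flag just before document.close() is called.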

    Read the article

  • jQuery's draggable grid

    - by Art
    It looks like the 'grid' option in the constructor of Draggable is bound to the original coordinates of the element being dragged. Simply put, if you have three draggable divs with their top set respectively to 100, 200 and 254 pixels relative to their parent:

        <div class="parent-div" style="position: relative;">
            <div id="div1" class="draggable" style="top: 100px; position: absolute;"></div>
            <div id="div2" class="draggable" style="top: 200px; position: absolute;"></div>
            <div id="div3" class="draggable" style="top: 254px; position: absolute;"></div>
        </div>

    and all of them are enabled for dragging with 'grid' set to [1, 100]:

        draggables = $('.draggable');
        $.each(draggables, function (index, elem) {
            $(elem).draggable({
                containment: $('#parent-div'),
                opacity: 0.7,
                revert: 'invalid',
                revertDuration: 300,
                grid: [1, 100],
                refreshPositions: true
            });
        });

    The problem here is that as soon as you drag div3 down, say, its top is increased by 100, moving it to 354px instead of being increased by just 46px (254 + 46 = 300), which would get it to the next stop in the grid - 300px - if we are looking at the parent-div as the point of reference and "grid holder". I had a look at the Draggable sources and it seems to be an in-built flaw: they just do all the calculations relative to the original position of the draggable element. I would like to avoid monkey-patching the code of the Draggable library, and what I am really looking for here is a way to make Draggable calculate the grid positions relative to the containing parent. However, if monkey-patching is unavoidable, I guess I'll have to live with it. Thanks!
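
    Whatever form the patch takes, the container-relative arithmetic itself is tiny; a Java sketch of just the formula (names invented for illustration):

        // Illustrative only: snap a child's top coordinate to a grid anchored at
        // the container's origin rather than at the element's start position.
        final class GridSnap {
            static int snapTop(int top, int containerTop, int gridY) {
                int relative = top - containerTop;                  // position inside the parent
                int snapped = Math.round(relative / (float) gridY) * gridY;
                return containerTop + snapped;                      // back to page coordinates
            }
        }

    For example, with the numbers above, snapTop(300, 0, 100) returns 300 - the grid stop div3 should land on, rather than 354.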

    Read the article

  • Android map performance with > 800 overlays of KML data

    - by span
    I have a shape file which I have converted to a KML file that I wish to read coordinates from and then draw paths between the coordinates on a MapView. With the help of this great post: How to draw a path on a map using kml file? I have been able to read the KML into an ArrayList of "Placemarks". This great blog post then showed how to take a list of GeoPoints and draw a path: http://djsolid.net/blog/android---draw-a-path-array-of-points-in-mapview The example in the above post only draws one path between some points, however, and since I have many more paths than that, I am running into some performance problems. I'm currently adding a new RouteOverlay for each of the separate paths. This results in me having over 800 overlays when they have all been added. This has a performance hit, and I would love some input on what I can do to improve it. Here are some options I have considered:

    1. Add all the points to a list which can be passed into a class that extends Overlay. In that new class it might be possible to add and draw all the paths in a single Overlay layer (a sketch of this idea follows the code below). I'm not sure how to implement this, though, since the paths are not always intersecting and they have different start and end points. At the moment I'm adding each path, which has several points, to its own list and then adding that to an Overlay, and that results in over 700 overlays...

    2. Simplify the KML or SHP. Instead of having over 700 different paths, perhaps there is some way to merge them into perhaps 100 paths or fewer? Since a lot of paths intersect at some point, it should be possible to modify the original SHP file so that it merges all intersections. Since I have never worked with these kinds of files before, I have not been able to find a way to do this in QGIS. If someone knows how to do this, I would love some input on that.

    Here is a link to the group of shape files if you are interested:

        http://danielkvist.net/cprg_bef_cbana_polyline.shp
        http://danielkvist.net/cprg_bef_cbana_polyline.shx
        http://danielkvist.net/cprg_bef_cbana_polyline.dbf
        http://danielkvist.net/cprg_bef_cbana_polyline.prj

    Anyway, here is the code I'm using to add the overlays. Many thanks in advance.

    RoutePathOverlay.java:

        package net.danielkvist;

        import java.util.List;

        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import android.graphics.Path;
        import android.graphics.Point;
        import android.graphics.RectF;

        import com.google.android.maps.GeoPoint;
        import com.google.android.maps.MapView;
        import com.google.android.maps.Overlay;
        import com.google.android.maps.Projection;

        public class RoutePathOverlay extends Overlay {

            private int _pathColor;
            private final List<GeoPoint> _points;
            private boolean _drawStartEnd;

            public RoutePathOverlay(List<GeoPoint> points) {
                this(points, Color.RED, false);
            }

            public RoutePathOverlay(List<GeoPoint> points, int pathColor, boolean drawStartEnd) {
                _points = points;
                _pathColor = pathColor;
                _drawStartEnd = drawStartEnd;
            }

            private void drawOval(Canvas canvas, Paint paint, Point point) {
                Paint ovalPaint = new Paint(paint);
                ovalPaint.setStyle(Paint.Style.FILL_AND_STROKE);
                ovalPaint.setStrokeWidth(2);
                int _radius = 6;
                RectF oval = new RectF(point.x - _radius, point.y - _radius,
                                       point.x + _radius, point.y + _radius);
                canvas.drawOval(oval, ovalPaint);
            }

            public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
                Projection projection = mapView.getProjection();
                if (shadow == false && _points != null) {
                    Point startPoint = null, endPoint = null;
                    Path path = new Path();
                    // We are creating the path
                    for (int i = 0; i < _points.size(); i++) {
                        GeoPoint gPointA = _points.get(i);
                        Point pointA = new Point();
                        projection.toPixels(gPointA, pointA);
                        if (i == 0) { // This is the start point
                            startPoint = pointA;
                            path.moveTo(pointA.x, pointA.y);
                        } else {
                            if (i == _points.size() - 1) // This is the end point
                                endPoint = pointA;
                            path.lineTo(pointA.x, pointA.y);
                        }
                    }
                    Paint paint = new Paint();
                    paint.setAntiAlias(true);
                    paint.setColor(_pathColor);
                    paint.setStyle(Paint.Style.STROKE);
                    paint.setStrokeWidth(3);
                    paint.setAlpha(90);
                    if (getDrawStartEnd()) {
                        if (startPoint != null) {
                            drawOval(canvas, paint, startPoint);
                        }
                        if (endPoint != null) {
                            drawOval(canvas, paint, endPoint);
                        }
                    }
                    if (!path.isEmpty())
                        canvas.drawPath(path, paint);
                }
                return super.draw(canvas, mapView, shadow, when);
            }

            public boolean getDrawStartEnd() {
                return _drawStartEnd;
            }

            public void setDrawStartEnd(boolean markStartEnd) {
                _drawStartEnd = markStartEnd;
            }
        }

    MyMapActivity.java:

        package net.danielkvist;

        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.Iterator;

        import android.graphics.Color;
        import android.os.Bundle;
        import android.util.Log;

        import com.google.android.maps.GeoPoint;
        import com.google.android.maps.MapActivity;
        import com.google.android.maps.MapView;

        public class MyMapActivity extends MapActivity {

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                MapView mapView = (MapView) findViewById(R.id.mapview);
                mapView.setBuiltInZoomControls(true);
                String url = "http://danielkvist.net/cprg_bef_cbana_polyline_simp1600.kml";
                NavigationDataSet set = MapService.getNavigationDataSet(url);
                drawPath(set, Color.parseColor("#6C8715"), mapView);
            }

            /**
             * Does the actual drawing of the route, based on the geo points
             * provided in the nav set
             *
             * @param navSet Navigation set bean that holds the route information, incl. geo pos
             * @param color Color in which to draw the lines
             * @param mMapView01 Map view to draw onto
             */
            public void drawPath(NavigationDataSet navSet, int color, MapView mMapView01) {
                ArrayList<GeoPoint> geoPoints = new ArrayList<GeoPoint>();
                Collection overlaysToAddAgain = new ArrayList();
                for (Iterator iter = mMapView01.getOverlays().iterator(); iter.hasNext();) {
                    Object o = iter.next();
                    Log.d(BikeApp.APP, "overlay type: " + o.getClass().getName());
                    if (!RouteOverlay.class.getName().equals(o.getClass().getName())) {
                        overlaysToAddAgain.add(o);
                    }
                }
                mMapView01.getOverlays().clear();
                mMapView01.getOverlays().addAll(overlaysToAddAgain);

                int totalNumberOfOverlaysAdded = 0;
                for (Placemark placemark : navSet.getPlacemarks()) {
                    String path = placemark.getCoordinates();
                    if (path != null && path.trim().length() > 0) {
                        String[] pairs = path.trim().split(" ");
                        // lngLat[0]=longitude, lngLat[1]=latitude, lngLat[2]=height
                        String[] lngLat = pairs[0].split(",");
                        try {
                            if (lngLat.length > 1 && !lngLat[0].equals("") && !lngLat[1].equals("")) {
                                GeoPoint startGP = new GeoPoint(
                                        (int) (Double.parseDouble(lngLat[1]) * 1E6),
                                        (int) (Double.parseDouble(lngLat[0]) * 1E6));
                                GeoPoint gp1;
                                GeoPoint gp2 = startGP;
                                geoPoints = new ArrayList<GeoPoint>();
                                geoPoints.add(startGP);
                                for (int i = 1; i < pairs.length; i++) {
                                    lngLat = pairs[i].split(",");
                                    gp1 = gp2;
                                    if (lngLat.length >= 2 && gp1.getLatitudeE6() > 0 && gp1.getLongitudeE6() > 0
                                            && gp2.getLatitudeE6() > 0 && gp2.getLongitudeE6() > 0) {
                                        // for GeoPoint, first:latitude, second:longitude
                                        gp2 = new GeoPoint(
                                                (int) (Double.parseDouble(lngLat[1]) * 1E6),
                                                (int) (Double.parseDouble(lngLat[0]) * 1E6));
                                        if (gp2.getLatitudeE6() != 22200000) {
                                            geoPoints.add(gp2);
                                        }
                                    }
                                }
                                totalNumberOfOverlaysAdded++;
                                mMapView01.getOverlays().add(new RoutePathOverlay(geoPoints));
                            }
                        } catch (NumberFormatException e) {
                            Log.e(BikeApp.APP, "Cannot draw route.", e);
                        }
                    }
                }
                Log.d(BikeApp.APP, "Total overlays: " + totalNumberOfOverlaysAdded);
                mMapView01.setEnabled(true);
            }

            @Override
            protected boolean isRouteDisplayed() {
                // TODO Auto-generated method stub
                return false;
            }
        }

    Edit: There are of course some more files I'm using that I have not posted. You can download the complete Eclipse project here: http://danielkvist.net/se.zip
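
    On option 1 above, an untested sketch of the single-overlay idea: one Overlay holds every point list and draws all paths in a single draw() pass (moveTo starts each path, so disjoint start and end points are not a problem), leaving the MapView with 1 overlay instead of ~800:

        package net.danielkvist;

        import java.util.List;

        import android.graphics.Canvas;
        import android.graphics.Paint;
        import android.graphics.Path;
        import android.graphics.Point;

        import com.google.android.maps.GeoPoint;
        import com.google.android.maps.MapView;
        import com.google.android.maps.Overlay;
        import com.google.android.maps.Projection;

        // Untested sketch: draws many separate paths from one Overlay.
        public class MultiRouteOverlay extends Overlay {

            private final List<List<GeoPoint>> routes; // one inner list per path
            private final int color;

            public MultiRouteOverlay(List<List<GeoPoint>> routes, int color) {
                this.routes = routes;
                this.color = color;
            }

            @Override
            public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
                if (!shadow) {
                    Projection projection = mapView.getProjection();
                    Path path = new Path();
                    Point p = new Point();
                    for (List<GeoPoint> route : routes) {
                        for (int i = 0; i < route.size(); i++) {
                            projection.toPixels(route.get(i), p);
                            if (i == 0) path.moveTo(p.x, p.y); // moveTo keeps paths disjoint
                            else path.lineTo(p.x, p.y);
                        }
                    }
                    Paint paint = new Paint();
                    paint.setAntiAlias(true);
                    paint.setColor(color);
                    paint.setStyle(Paint.Style.STROKE);
                    paint.setStrokeWidth(3);
                    paint.setAlpha(90);
                    canvas.drawPath(path, paint);
                }
                return super.draw(canvas, mapView, shadow, when);
            }
        }

    drawPath would then collect every geoPoints list and add one MultiRouteOverlay at the end instead of one RoutePathOverlay per placemark.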

    Read the article

  • Texture2D.Bounds.Intersect, but the Bounds never move? - XNA, .Net 4.0

    - by Gineer
    Hi all, I am still shiny new to XNA, so please forgive any stupid questions and statements in this post. (The added issue is that I am using Visual Studio 2010 with .NET 4.0, which also means very few examples exist out on the web - well, none that I could find easily.) I have two 2D objects in a "game" that I am using to learn more about XNA, and I need to figure out when these two objects intersect. I noticed that Texture2D objects have a property named Bounds, which in turn has a method named Intersects that takes a Rectangle (the other Texture2D.Bounds) as an argument. However, when you run the code, the objects always intersect, even if they are on separate sides of the screen. When I step into the code and mouse over the Bounds of each Texture2D, the X and Y coordinates always read "X = 0, Y = 0" for both objects (hence they always intersect). The thing that confuses me is the fact that the Bounds property is on the texture rather than on the position (or Vector2) of the objects. I eventually created a little helper method that takes in the objects and their positions and then calculates whether they intersect, but I'm sure there must be a better way. Any suggestions or pointers would be much appreciated. Gineer
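
    The observation is right: Texture2D.Bounds is just the texture's size anchored at (0,0) and never reflects where the sprite is drawn, so the helper-method route is in fact the usual one - rebuild a rectangle from position plus texture size each frame. The idea, sketched framework-neutrally in Java (java.awt.Rectangle standing in for XNA's Rectangle; names are illustrative):

        import java.awt.Rectangle;

        // Illustrative: a per-object bounding box built from position + size,
        // recomputed whenever the object moves.
        final class Sprite {
            int x, y;          // where the sprite is drawn this frame
            int width, height; // the texture's dimensions

            Rectangle bounds() {
                return new Rectangle(x, y, width, height);
            }

            boolean intersects(Sprite other) {
                return bounds().intersects(other.bounds());
            }
        }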

    Read the article

  • Which text margin does SWT Table use when drawing text?

    - by Zordid
    I have a relatively easy question, but I cannot find anything anywhere to answer it. I use a simple SWT Table widget in my application that displays only text in the cells. I have an incremental search feature and want to highlight text snippets in all cells if they match. So when typing "a", all "a"s should be highlighted. To get this, I add an SWT.EraseItem listener to interfere with the background drawing. If the current cell's text contains the search string, I find the positions and calculate relative x-coordinates within the text using event.gc.stringExtent - easy. With that I just draw rectangles "behind" the occurrences. Now, there's a flaw in this: the table does not draw the text without a margin, so my x-coordinate does not really match - it is slightly off by a few pixels! But how many? Where do I retrieve the cell's text margins that the table's own drawing will use? No clue; cannot find anything. :-( Bonus question: the table's draw method also shortens text and adds "..." if it does not fit into the cell. Hmm. My occurrence finder takes the TableItem's text and thus also tries to mark occurrences that are actually not visible because they are consumed by the "...". How do I get the shortened text and not the "real" text within the EraseItem draw handler? Thanks!
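
    One API worth checking for the margin question (hedged, from memory of the SWT docs; it does not address the "..." part): TableItem.getTextBounds(int columnIndex) returns the rectangle the text is actually drawn into, margins included, and can be used inside the EraseItem listener. A sketch, where searchString stands in for the incremental-search term and only the first occurrence is highlighted:

        import org.eclipse.swt.SWT;
        import org.eclipse.swt.graphics.Rectangle;
        import org.eclipse.swt.widgets.Event;
        import org.eclipse.swt.widgets.Listener;
        import org.eclipse.swt.widgets.TableItem;

        // Sketch: highlight a match using the cell's real text origin.
        // Attach with table.addListener(SWT.EraseItem, new MatchHighlighter()).
        final class MatchHighlighter implements Listener {
            String searchString = "a"; // stand-in for the live search term

            public void handleEvent(Event event) {
                TableItem item = (TableItem) event.item;
                String cellText = item.getText(event.index);
                int start = cellText.indexOf(searchString);
                if (start < 0) return;
                Rectangle tb = item.getTextBounds(event.index); // origin includes the text margin
                int x = tb.x + event.gc.stringExtent(cellText.substring(0, start)).x;
                int w = event.gc.stringExtent(searchString).x;
                event.gc.setBackground(event.display.getSystemColor(SWT.COLOR_YELLOW));
                event.gc.fillRectangle(x, tb.y, w, tb.height);
            }
        }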

    Read the article

  • imageconvolution leaves a black dot in the upper left corner

    - by Peter O.
    I'm trying to sharpen resized images using this code:

        imageconvolution($imageResource, array(
            array( -1, -1, -1 ),
            array( -1, 16, -1 ),
            array( -1, -1, -1 ),
        ), 8, 0);

    When a transparent PNG image is sharpened using the code above, it appears with a black dot in the upper left corner (I have tried different convolution kernels, but the result is the same). After resizing, the image looked OK. The 1st image is the original one; the 2nd image is the sharpened one.

    EDIT: What am I doing wrong? I'm using the color retrieved from the pixel:

        $color = imagecolorat($imageResource, 0, 0);
        imageconvolution($imageResource, array(
            array( -1, -1, -1 ),
            array( -1, 16, -1 ),
            array( -1, -1, -1 ),
        ), 8, 0);
        imagesetpixel($imageResource, 0, 0, $color);

    Is imagecolorat the right function? Or is the position correct?

    EDIT2: I have changed coordinates, but still no luck. I've checked the transparency given by imagecolorat (according to this post). This is the dump:

        array(4) {
            red   => 0
            green => 0
            blue  => 0
            alpha => 127
        }

    Alpha 127 = 100% transparent. Those zeroes might cause the problem...

    Read the article

  • Why is my simple python gtk+cairo program running so slowly/stutteringly?

    - by synapz
    My program draws circles moving on the window. I think I must be missing some basic gtk/cairo concept because it seems to be running too slowly/stutteringly for what I am doing. Any ideas? Thanks for any help!

        #!/usr/bin/python
        import gtk
        import gtk.gdk as gdk
        import math
        import random
        import gobject

        # The number of circles and the window size.
        num = 128
        size = 512

        # Initialize circle coordinates and velocities.
        x = []
        y = []
        xv = []
        yv = []
        for i in range(num):
            x.append(random.randint(0, size))
            y.append(random.randint(0, size))
            xv.append(random.randint(-4, 4))
            yv.append(random.randint(-4, 4))

        # Draw the circles and update their positions.
        def expose(*args):
            cr = darea.window.cairo_create()
            cr.set_line_width(4)
            for i in range(num):
                cr.set_source_rgb(1, 0, 0)
                cr.arc(x[i], y[i], 8, 0, 2 * math.pi)
                cr.stroke_preserve()
                cr.set_source_rgb(1, 1, 1)
                cr.fill()
                x[i] += xv[i]
                y[i] += yv[i]
                if x[i] > size or x[i] < 0:
                    xv[i] = -xv[i]
                if y[i] > size or y[i] < 0:
                    yv[i] = -yv[i]

        # Self-evident?
        def timeout():
            darea.queue_draw()
            return True

        # Initialize the window.
        window = gtk.Window()
        window.resize(size, size)
        window.connect("destroy", gtk.main_quit)
        darea = gtk.DrawingArea()
        darea.connect("expose-event", expose)
        window.add(darea)
        window.show_all()

        # Self-evident?
        gobject.idle_add(timeout)
        gtk.main()

    Read the article

  • In PowerPoint 2007, how can I position a Callout's tail programmatically?

    - by Rorschach
    I'm looking at the XML, and this is what it has for the Callout object's coordinates and geometry:

        <p:spPr>
          <a:xfrm>
            <a:off x="2819400" y="5181600"/>  <!-- X,Y position of callout box -->
            <a:ext cx="609600" cy="457200"/>  <!-- width, height of callout box -->
          </a:xfrm>
          <a:prstGeom prst="wedgeRectCallout">
            <a:avLst>
              <a:gd name="adj1" fmla="val 257853"/>   <!-- X position of tail -->
              <a:gd name="adj2" fmla="val -532360"/>  <!-- Y position of tail -->
            </a:avLst>
          </a:prstGeom>
          <a:solidFill>
            <a:schemeClr val="accent1">
              <a:alpha val="50000"/>
            </a:schemeClr>
          </a:solidFill>
        </p:spPr>

    What I'm having trouble with is the formula for telling it to place the tail at a particular coordinate on the slide. I've tried calculating it like this, but it does not work correctly:

        // This gives me the distance between the coordinate and the center of the callout.
        DistanceX = Coordinate.X - (Callout.X + Callout.X_Ext) / 2
        DistanceY = Coordinate.Y - (Callout.Y + Callout.Y_Ext) / 2

    But the geometric value is not the distance between the two points. Does anybody know the formula for calculating this?
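
    A guess worth testing, based on the numbers above: for preset geometries like wedgeRectCallout, adj1/adj2 appear to be offsets of the tail from the shape's center, expressed in 1/100000ths of the shape's width and height (so adj1 = 257853 would put the tail about 2.58 shape-widths right of center, well outside the box, which is plausible for a callout). Under that assumption, the conversion from slide coordinates in EMUs would be (Java sketch, names invented):

        // Hedged sketch: wedgeRectCallout tail adjustments, assuming adj1/adj2
        // are offsets from the shape's center in 1/100000ths of its width and
        // height. All inputs in EMUs, matching <a:off>/<a:ext> in the XML.
        final class CalloutTail {
            static long adj1(long tailX, long offX, long extCx) {
                long centerX = offX + extCx / 2;
                return Math.round(100000.0 * (tailX - centerX) / extCx);
            }

            static long adj2(long tailY, long offY, long extCy) {
                long centerY = offY + extCy / 2;
                return Math.round(100000.0 * (tailY - centerY) / extCy);
            }
        }

    Cross-checking with the XML values: centerX = 2819400 + 609600/2 = 3124200 EMU, so adj1 = 257853 corresponds to a tail at roughly 3124200 + 2.57853 x 609600 ≈ 4.70 million EMU, which lands on a standard 10-inch-wide slide.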

    Read the article

  • How to create an array of buttons in C++

    - by mangoman13
    Hi, I'm using C++ in VS2005 and have an 8x8 grid of buttons on a form. I want to have these buttons in an array, so that when I click on any of them it will invoke the same event handler (I think that is what they are called), but I will know the index of the one that was clicked. I know how to do this in VB and C#, but I can't seem to figure it out with C++. Right now I have all my buttons labelled by their location, i.e. b00, b10, b21, etc. So I think what I am looking for is a way to do something like this:

        Button b[8][8]; // this causes errors:
        // error C2728: 'System::Windows::Forms::Button' : a native array cannot contain this managed type
        // error C2227: left of '->{ctor}' must point to class/struct/union/generic type

        void Assignment() {
            b[0][0] = b00;
            b[1][0] = b10;
            ...
        }

    and then in form1.h:

        private: System::Void b_Click(System::Object^ sender, System::EventArgs^ e) {
            // somehow read the coordinates into variables x and y
            // do something based on these values
        }

    Any help would be appreciated. Also let me know if I am going in the complete wrong direction with this. Thanks!

    Read the article

  • Problem painting JLabel class to another JPanel class

    - by jjpotter
    I have created a class that extends JLabel to use as my object moving around a JPanel for a game:

        import javax.swing.*;

        public class Head extends JLabel {
            int xpos;
            int ypos;
            int xvel;
            int yvel;
            ImageIcon chickie = new ImageIcon("C:\\Users\\jjpotter.MSDOM1\\Pictures\\clavalle.jpg");
            JLabel myLabel = new JLabel(chickie);

            public Head(int xpos, int ypos, int xvel, int yvel) {
                this.xpos = xpos;
                this.ypos = ypos;
                this.xvel = xvel;
                this.yvel = yvel;
            }

            public void draw() {
                myLabel.setLocation(xpos, ypos);
            }

            public double getXpos() { return xpos; }
            public double getYpos() { return ypos; }
            public int getXvel() { return xvel; }
            public int getYvel() { return yvel; }

            public void setPos(int x, int y) {
                xpos = x;
                ypos = y;
            }
        }

    I am then trying to add it onto my JPanel. From there I will randomly have it increment its x and y coordinates to float it around the screen. I cannot get it to paint itself onto the JPanel. I know there is a key concept I am missing here that involves painting components on different panels. Here is what I have in my GamePanel class:

        import java.awt.Dimension;
        import java.util.Random;
        import javax.swing.*;

        public class GamePanel extends JPanel {
            Random myRand = new Random();
            Head head = new Head(20, 20, 0, 0);

            public GamePanel() {
                this.setSize(new Dimension(640, 480));
                this.add(head);
            }
        }

    Any suggestions on how to get this to add to the JPanel? Also, is this a good way to go about having the picture float around the screen randomly for a game?
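
    One hedged direction (a sketch, not the only design): since Head already extends JLabel, it can carry the icon itself via the JLabel(Icon) constructor instead of wrapping a second JLabel that never gets added to anything; a null layout plus setBounds gives free positioning, and a javax.swing.Timer drives the drift. Names and sizes below are illustrative:

        import java.awt.Dimension;
        import java.util.Random;
        import javax.swing.ImageIcon;
        import javax.swing.JLabel;
        import javax.swing.JPanel;
        import javax.swing.Timer;

        // Sketch: a JLabel that shows its own icon and drifts around a
        // null-layout panel on a Swing timer.
        class FloatingHead extends JLabel {
            int xpos, ypos, xvel, yvel;

            FloatingHead(ImageIcon icon, int x, int y, int xv, int yv) {
                super(icon); // the label itself displays the image
                xpos = x; ypos = y; xvel = xv; yvel = yv;
                setBounds(xpos, ypos, icon.getIconWidth(), icon.getIconHeight());
            }

            void step(int w, int h) {
                xpos += xvel; ypos += yvel;
                if (xpos < 0 || xpos > w - getWidth()) xvel = -xvel;  // bounce off edges
                if (ypos < 0 || ypos > h - getHeight()) yvel = -yvel;
                setLocation(xpos, ypos); // triggers a repaint
            }
        }

        class GamePanelSketch extends JPanel {
            GamePanelSketch(ImageIcon icon) {
                setLayout(null); // absolute positioning instead of FlowLayout
                setPreferredSize(new Dimension(640, 480));
                Random r = new Random();
                FloatingHead head = new FloatingHead(icon, 20, 20, 1 + r.nextInt(3), 1 + r.nextInt(3));
                add(head);
                new Timer(16, e -> head.step(getWidth(), getHeight())).start();
            }
        }

    The likely reason the original never paints: the inner myLabel holding the icon is never added to any container, and a JPanel's default FlowLayout ignores setLocation anyway.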

    Read the article

  • Simplification / optimization of GPS track

    - by GreyCat
    I've got a GPS track produced by gpxlogger(1) (supplied as a client for gpsd). The GPS receiver updates its coordinates every 1 second; gpxlogger's logic is very simple, it writes down the location (lat, lon, ele) and a timestamp (time) received from GPS every n seconds (n = 3 in my case). After writing down several hours' worth of track, gpxlogger saves a several-megabyte-long GPX file that includes several thousand points. Afterwards, I try to plot this track on a map and use it with OpenLayers. It works, but several thousand points make using the map a sloppy and slow experience. I understand that having several thousand points is suboptimal. There are myriads of points that can be deleted without losing almost anything: when several points make up a roughly straight line and we're moving at the same constant speed between them, we can keep just the first and the last point and throw away everything else. I thought of using gpsbabel for such a track simplification / optimization job, but, alas, its simplification filter works only with routes, i.e. it analyzes only the geometrical shape of the path, without timestamps (i.e. not checking that the speed was roughly constant). Is there some ready-made utility / library / algorithm available to optimize tracks? Or maybe I'm missing some clever option with gpsbabel?
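
    If nothing ready-made fits, the classic algorithm for exactly this job is Ramer-Douglas-Peucker: recursively keep the point that deviates most from the straight line between the endpoints, and drop everything within epsilon of it. A minimal Java sketch of the geometric version, treating lat/lon as planar (fine for short tracks; project to meters first for long ones). The "constant speed" requirement could be honored by additionally splitting wherever timestamps deviate from linear interpolation, which this sketch omits:

        import java.util.ArrayList;
        import java.util.List;

        // Minimal Ramer-Douglas-Peucker: each point is a double[]{x, y};
        // epsilon is the maximum allowed deviation, in the same units.
        final class Simplify {

            static List<double[]> rdp(List<double[]> pts, double epsilon) {
                if (pts.size() < 3) return new ArrayList<>(pts);
                double[] a = pts.get(0), b = pts.get(pts.size() - 1);
                int index = -1;
                double maxDist = 0;
                for (int i = 1; i < pts.size() - 1; i++) {
                    double d = perpendicularDistance(pts.get(i), a, b);
                    if (d > maxDist) { maxDist = d; index = i; }
                }
                if (maxDist <= epsilon) {       // whole span is "straight enough"
                    List<double[]> out = new ArrayList<>();
                    out.add(a); out.add(b);
                    return out;
                }
                // keep the farthest point and recurse on both halves
                List<double[]> left = rdp(pts.subList(0, index + 1), epsilon);
                List<double[]> right = rdp(pts.subList(index, pts.size()), epsilon);
                left.remove(left.size() - 1);   // avoid duplicating the split point
                left.addAll(right);
                return left;
            }

            // distance from p to the infinite line through a and b
            static double perpendicularDistance(double[] p, double[] a, double[] b) {
                double dx = b[0] - a[0], dy = b[1] - a[1];
                double len = Math.hypot(dx, dy);
                if (len == 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
                return Math.abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / len;
            }
        }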

    Read the article

  • Need help regarding lwuit

    - by rajiv
    Hi, I have a project already developed using Canvas with the LCDUI lib, built for Nokia devices with keypads. Now I want to make the same application work on touch devices. I have used touch methods like pointerPressed, etc., and for normal functionality that worked pretty well, but it creates problems with commands. My application is in fullscreen mode, and the commands are implemented as a user-defined menu list, so I cannot directly identify which command has been clicked; setting coordinates for every command is not a feasible solution for me. I came across the new lib LWUIT, but I found out that it supports only Forms (can't we use it on a Canvas?), and integrating LCDUI and LWUIT also seems not to be possible (please give a suggestion: can we use both in the same application?). Is it possible to create a Form under a Canvas itself? Is any other lib support available? Thank you.
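
    On the "which command was clicked" part specifically: if the move to LWUIT happens, its Form dispatches every Command through a single ActionListener and hands back the Command object itself, so no coordinate bookkeeping is needed. A sketch from memory of the LWUIT API (hedged - verify against your LWUIT version):

        import com.sun.lwuit.Command;
        import com.sun.lwuit.Form;
        import com.sun.lwuit.events.ActionEvent;
        import com.sun.lwuit.events.ActionListener;

        // Sketch: identifying the tapped command without tracking coordinates.
        public class MenuDemo implements ActionListener {
            private final Command open = new Command("Open");
            private final Command exit = new Command("Exit");

            public void show() {
                Form form = new Form("Demo");
                form.addCommand(open);
                form.addCommand(exit);
                form.setCommandListener(this); // one listener for every command
                form.show();
            }

            public void actionPerformed(ActionEvent evt) {
                Command cmd = evt.getCommand(); // tells us which command fired
                if (cmd == open) {
                    // ...
                } else if (cmd == exit) {
                    // ...
                }
            }
        }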

    Read the article

  • Learning Java, how to type text on canvas?

    - by Voley
    I'm reading a book by Eric Roberts - The Art and Science of Java - and it has an exercise that I can't figure out. You have to make a calendar with GRects, 7 by 6; that goes OK, the code part is easy. But you also have to type the numbers of the dates on those rectangles, and that's kind of hard for me; there is nothing about it in the book. I tried using the GLabel class, but the problem arises that I need to work with those numbers, and it says "can't convert from int to string and vice versa". GLabel(string, posX, posY) does not accept an int as a parameter, only a string, and I even tried typecasting - still not working. For example, I want to make a loop:

        int currentDate = 1;
        while (currentDate < 31) {
            add(new GLabel(currentDate, 100, 100));
            currentDate++;
        }

    This code is saying: no man, can't convert int to string. If I try changing currentDate to a string, that works, but then I have a problem with the calculations, as I can't do arithmetic on a number stored in a string, and it doesn't even allow typecasting it to an int. How can I fix it? Maybe there is another class or method to type the text over those rectangles? I know about println, but it doesn't take any x or y coordinates, so I can't work with it - and I think it's only for console programs.
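
    The missing piece is just the int-to-String conversion: Integer.toString(currentDate) (or String.valueOf) produces the label text, and Integer.parseInt goes the other way, so the int stays available for arithmetic. A sketch against the ACM library the book uses (the grid geometry here is illustrative):

        import acm.graphics.GLabel;
        import acm.program.GraphicsProgram;

        // Sketch: numbering calendar cells by converting the int day to a String.
        public class CalendarNumbers extends GraphicsProgram {
            public void run() {
                int currentDate = 1;
                while (currentDate <= 31) {
                    int col = (currentDate - 1) % 7;      // 7 columns
                    int row = (currentDate - 1) / 7;      // up to 6 rows
                    GLabel label = new GLabel(Integer.toString(currentDate),
                            col * 60 + 5, row * 40 + 15); // cell size is illustrative
                    add(label);
                    currentDate++;
                }
            }
        }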

    Read the article

  • Looping through Markers with Google Maps API v3 Problem

    - by Oscar Godson
    I'm not sure why this isn't working. I don't have any errors, but what happens is that no matter which marker I click on, it acts on the 3rd one (which is the last one out of 4 markers; the array starts at 0, obviously) and shows the number "3", which is correct for that marker - but I'm not clicking on that one. Here is most of my code, minus the array of [place-name, coordinates] pairs (var locations, which you will see):

        function initialize() {
            var latlng = new google.maps.LatLng(45.522015, -122.683811);
            var settings = {
                zoom: 15,
                center: latlng,
                disableDefaultUI: true,
                mapTypeId: google.maps.MapTypeId.SATELLITE
            };
            var map = new google.maps.Map(document.getElementById("map_canvas"), settings);
            var infowindow = new Array();
            var marker = new Array();
            for (x in locations) {
                console.log(x);
                infowindow[x] = new google.maps.InfoWindow({ content: x });
                marker[x] = new google.maps.Marker({
                    title: locations[x][0],
                    map: map,
                    position: locations[x][1]
                });
                google.maps.event.addListener(marker[x], 'click', function () {
                    infowindow[x].open(map, marker[x]);
                });
            }
        }
        initialize()

    The console.log output is (it's correct, and what I expect):

        0
        1
        2
        3

    So, any ideas?

    Read the article

  • How do I detect proximity of the mouse pointer to a line in Flex?

    - by Hanno Fietz
    I'm working on a charting UI in Flex. One of the features I want to implement is "snapping" of the mouse pointer to the data points in the diagram. I.e., if the user hovers the mouse pointer over a line diagram and gets close to a data point, I want the pointer to move to its exact coordinates and show a marker. Currently, the lines are drawn on a Shape using the Graphics API. The Shape is a child DisplayObject of a custom UIComponent subclass with the exact same dimensions; this means I already get mouseOver events on the parent of the diagram's canvas. Now I need a way to detect whether the pointer is close to one of the data points, i.e. I need an answer to the question "Which data points lie within a radius of x pixels from my current position, and which of them is closest?" upon each move of the mouse. I can think of the following possibilities:

    - Draw the lines not as simple lines in the Graphics API, but as more advanced objects that can have their own mouseOver events. However, I want the snapping to trigger before the mouse is actually over the line.
    - Check the original data for possible candidates upon each mouse movement. Using binary search, I might be able to reduce the number of items I have to compare sufficiently (see the sketch below).
    - Prepare some kind of new data structure from the raw data that makes the above search more efficient. I don't know what that would look like.

    I'm guessing this is a pretty standard problem for a number of applications, but the actual code is probably usually inside some framework. Is there anything I can read about this topic?
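
    For the second option: keeping the data sorted by x makes the "within radius r, which is closest?" query cheap - binary-search the left edge of the [mx - r, mx + r] window, then linearly scan just that window. A language-agnostic sketch in Java:

        // Sketch: nearest data point within a snap radius, assuming the points
        // are in pixel space and xs is sorted ascending.
        final class Snap {
            static int nearestWithin(double[] xs, double[] ys, double mx, double my, double r) {
                int best = -1;
                double bestD2 = r * r;
                for (int i = lowerBound(xs, mx - r); i < xs.length && xs[i] <= mx + r; i++) {
                    double dx = xs[i] - mx, dy = ys[i] - my;
                    double d2 = dx * dx + dy * dy;
                    if (d2 <= bestD2) { bestD2 = d2; best = i; }
                }
                return best; // -1 if nothing is inside the radius
            }

            // index of the first element >= key
            static int lowerBound(double[] a, double key) {
                int lo = 0, hi = a.length;
                while (lo < hi) {
                    int mid = (lo + hi) >>> 1;
                    if (a[mid] < key) lo = mid + 1; else hi = mid;
                }
                return lo;
            }
        }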

    Read the article

  • Is it good choice to move a Sublayer around a view using UIPanGestureRecognizer?

    - by Pranjal Bikash Das
    I have a CALayer, and as a sublayer to this CALayer I have added an imageLayer which contains an image with a resolution of 276x183. I added a UIPanGestureRecognizer to the main view, and I calculate the coordinates of the CALayer as follows:

        - (void)panned:(UIPanGestureRecognizer *)sender
        {
            subLayer.frame = CGRectMake([sender locationInView:self.view].x - 138,
                                        [sender locationInView:self.view].y - 92,
                                        276, 183);
        }

    In viewDidLoad I have:

        subLayer.backgroundColor = [UIColor whiteColor].CGColor;
        subLayer.frame = CGRectMake(22, 33, 276, 183);
        imageLayer.contents = (id)[UIImage imageNamed:@"A.jpg"].CGImage;
        imageLayer.frame = subLayer.bounds;
        imageLayer.masksToBounds = YES;
        imageLayer.cornerRadius = 15.0;
        subLayer.shadowColor = [UIColor blackColor].CGColor;
        subLayer.cornerRadius = 15.0;
        subLayer.shadowOpacity = 0.8;
        subLayer.shadowOffset = CGSizeMake(0, 3);
        [subLayer addSublayer:imageLayer];
        [self.view.layer addSublayer:subLayer];

    It gives the desired output, but it's a bit slow in the simulator. I have not yet tested it on a device. So my question is: is it OK to move a CALayer containing an image this way?

    Read the article

  • Problem shrinking with StretchBlt()

    - by SparkyNZ
    Hi. I have some code that paints my own rectangular buttons based on a source bitmap. Most of the time the destination buttons are bigger than my source bitmap image, and StretchBlt works fine. However, when the destination is smaller than the source image, StretchBlt refuses to fill the entire destination area. I know StretchBlt isn't great on quality when it comes to scaling down images, but I'm not too concerned about that; I just don't want missing pixels. Here's a link with the source image at the top and the destination at the bottom: link text. Note that I am actually shrinking parts of the source image into the destination; I am not shrinking the entire image down. So, for example, I copy the corners size-for-size with BitBlt() and then stretch (squash) the horizontal pixel data between the corners from the source image into the destination DC. There is no fault with my source and destination coordinates. If I change from SRCCOPY to WHITENESS, the entire area fills with white as you'd expect: there is no grey bar where pixels haven't been copied, as you see in the Broken.bmp image above. Has anyone had this problem before, and if so, can somebody please suggest a solution? Cheers

    Read the article
