Search Results

Search found 1449 results on 58 pages for 'coordinate geometry'.

Page 18/58 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Setting CoreLocation results to NSNumber object parameters

    - by Dan Ray
    This is a weird one, y'all.

        - (void)locationManager:(CLLocationManager *)manager
            didUpdateToLocation:(CLLocation *)newLocation
                   fromLocation:(CLLocation *)oldLocation
        {
            CLLocationCoordinate2D coordinate = newLocation.coordinate;
            self.mark.longitude = [NSNumber numberWithDouble:coordinate.longitude];
            self.mark.latitude = [NSNumber numberWithDouble:coordinate.latitude];
            NSLog(@"Got %f %f, set %f %f", coordinate.latitude, coordinate.longitude,
                  self.mark.latitude, self.mark.longitude);
            [manager stopUpdatingLocation];
            manager.delegate = nil;
            if (self.waitingForLocation) {
                [self completeUpload];
            }
        }

    The latitude and longitude in that "mark" object are synthesized properties backed by NSNumber ivars. In the simulator, my NSLog output for that line in the middle reads:

        2010-05-28 15:08:46.938 EverWondr[8375:207] Got 37.331689 -122.030731, set 0.000000 -44213283338325225829852024986561881455984640.000000

    That's a WHOLE lot further east than 1 Infinite Loop! The numbers are different on the device, but similar: lat is still zero and long is a very unlikely, huge negative number.

    Elsewhere in the controller I'm accepting a button press and uploading a file (an image I just took with the camera) with its geocoding info associated. I need that self.waitingForLocation to inform the CLLocationManager delegate that I already hit that button, so once it's done its deal it should go ahead and fire off the upload. Thing is, up in the button-click-receiving method, I test whether Core Location is finished by checking self.mark.latitude, which seems to be getting set to zero...

    Read the article

  • Set initial view with SkpWriter in Google Sketchup C++ SDK

    - by Peter Olsson
    How do you set the initial view for the model in an SKP file created with the SkpWriter in the Google SketchUp C++ SDK? There was an example in an older version of the SDK; part of the source is posted here. I'm trying to use:

        m_pDoc->GetModel()->SetCamera(cameraDefn);

    The problem is that I'm not able to create a valid atlast::sketchup::CCameraDefinition. None of the examples in the above post work:

        atlast::sketchup::CCameraDefinition cameraDefn;
        cameraDefn.Set(atlast::geometry::CPoint3d(793.838, -1262.6, 2603.16),
                       atlast::geometry::CPoint3d(567.977, 338.199, 398.932),
                       atlast::geometry::CUnitVector3d(-0.112657, 0.798459, 0.591415));

    In the end I want the initial view to be the view you get from pressing the Zoom Extents icon followed by the Iso icon (the other way around is also OK). Right now I would settle for creating a valid atlast::sketchup::CCameraDefinition. Is there a better way to achieve this in the SKP file?

    Read the article

  • Hibernate Mapping Annotation Question?

    - by paddydub
    I've just started using Hibernate and I'm trying to map the walking distance between two coordinates into a HashMap. There can be many connections from one "FromCoordinate" to another "ToCoordinate". I'm not sure if I've implemented this correctly. What annotations do I need to map this HashMap? Thanks

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordinateConnection implements Serializable {

            private static final long serialVersionUID = -1624745319005591573L;

            /** auto increasing id number */
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Column(name = "ID")
            @Id
            private int id;

            @Embedded
            public FromCoordinate fromCoord;

            @Embedded
            public ToCoordinate toCoord;

            HashMap<Coordinate, ArrayList<Coordinate>> coordWalkingConnections =
                new HashMap<Coordinate, ArrayList<Coordinate>>();
        }

        public class FromCoordinate implements ICoordinate {

            @Column(name = "FROM_LAT")
            private double latitude;

            @Column(name = "FROM_LNG")
            private double longitude;
        }

        public class ToCoordinate implements ICoordinate {

            @Column(name = "TO_LAT")
            private double latitude;

            @Column(name = "TO_LNG")
            private double longitude;

            @Column(name = "DISTANCE")
            private double distance;
        }

    Database structure:

        ID  FROM_LAT   FROM_LNG   TO_LAT     TO_LNG     DIST
        1   43.352669  -6.264341  43.350012  -6.260653  0.38
        2   43.352669  -6.264341  43.352669  -6.264341  0.00
        3   46.352669  -6.264341  43.353373  -6.262013  0.17
        4   47.352465  -6.265865  43.351290  -6.261200  0.25
        5   45.452578  -6.265768  43.352788  -6.264396  0.01
        6   45.452578  -6.265768  45.782788  -6.234523  0.01
        ...

    Example entries for the HashMap<Coordinate, ArrayList<Coordinate>>:

        <KEY{43.352669 -6.264341}, ArrayList VALUES{(43.350012, -6.260653, 0.383657),
                                                    (43.352669, -6.264341, 0.000095),
                                                    (43.353373, -6.262013, 0.173201)}>
        <KEY{47.352465 -6.265865}, ArrayList VALUES{(43.351290, -6.261200, 0.258781)}>
        <KEY{45.452578 -6.265768}, ArrayList VALUES{(43.352788, -6.264396, 0.013726),
                                                    (45.782788, -6.234523, 0.017726)}>

    Read the article

  • Reverse geocode coordinates to street address and setting as subtitle in annotation?

    - by Krismutt
    Hey everybody! Basically I want to reverse-geocode the coordinates of my annotation and show the address as the annotation's subtitle. I just can't figure out how to do it...

    savePosition.h:

        #import <Foundation/Foundation.h>
        #import <MapKit/MapKit.h>
        #import <MapKit/MKReverseGeocoder.h>
        #import <AddressBook/AddressBook.h>

        @interface savePosition : NSObject <MKAnnotation, MKReverseGeocoderDelegate> {
            CLLocationCoordinate2D coordinate;
        }

        @property (nonatomic, readonly) CLLocationCoordinate2D coordinate;

        - (id)initWithCoordinate:(CLLocationCoordinate2D)coordinate;
        - (NSString *)subtitle;
        - (NSString *)title;

        @end

    savePosition.m:

        @synthesize coordinate;

        - (NSString *)subtitle {
            return [NSString stringWithFormat:@"%f", streetAddress];
        }

        - (NSString *)title {
            return @"Saved position";
        }

        - (id)initWithCoordinate:(CLLocationCoordinate2D)coor {
            coordinate = coor;
            NSLog(@"%f,%f", coor.latitude, coor.longitude);
            return self;
            MKReverseGeocoder *geocoder = [[MKReverseGeocoder alloc] initWithCoordinate:coordinate];
            geocoder.delegate = self;
            [geocoder start];
        }

        - (void)reverseGeocoder:(MKReverseGeocoder *)geocoder didFailWithError:(NSError *)error {
        }

        - (void)reverseGeocoder:(MKReverseGeocoder *)geocoder didFindPlacemark:(MKPlacemark *)placemark {
            NSString *streetAddress = [NSString stringWithFormat:@"%@",
                [placemark.addressDictionary objectForKey:kABPersonAddressStreetKey]];
        }

        @end

    Any ideas? Thanks in advance!

    Read the article

  • backbone.js - Having multiple instances of the same view

    - by TrueWheel
    I am having problems with multiple instances of the same view in different div elements. When I try to initialize them, only the second of the two appears, no matter what order I put them in. Here is the code for my view:

        var BodyShapeView = Backbone.View.extend({
            thingiview: null,
            scene: null,
            renderer: null,
            model: null,
            mouseX: 0,
            mouseY: 0,

            events: {
                'click button#front' : 'front',
                'click button#diag' : 'diag',
                'click button#in' : 'zoomIn',
                'click button#out' : 'zoomOut',
                'click button#on' : 'rotateOn',
                'click button#off' : 'rotateOff',
                'click button#wireframeOn' : 'wireOn',
                'click button#wireframeOff' : 'wireOff',
                'click button#distance' : 'dijkstra'
            },

            initialize: function(name) {
                _.bindAll(this, 'render', 'animate');

                scene = new THREE.Scene();
                camera = new THREE.PerspectiveCamera( 15, 400 / 700, 1, 4000 );
                camera.position.z = 3;
                scene.add( camera );
                camera.position.y = -5;

                var ambient = new THREE.AmbientLight( 0x202020 );
                scene.add( ambient );

                var directionalLight = new THREE.DirectionalLight( 0xffffff, 0.75 );
                directionalLight.position.set( 0, 0, 1 );
                scene.add( directionalLight );

                var pointLight = new THREE.PointLight( 0xffffff, 5, 29 );
                pointLight.position.set( 0, -25, 10 );
                scene.add( pointLight );

                var loader = new THREE.OBJLoader();
                loader.load( "img/originalMeanModel.obj", function ( object ) {
                    object.children[0].geometry.computeFaceNormals();
                    var geometry = object.children[0].geometry;
                    console.log(geometry);
                    THREE.GeometryUtils.center(geometry);
                    geometry.dynamic = true;
                    var material = new THREE.MeshLambertMaterial({
                        color: 0xffffff,
                        shading: THREE.FlatShading,
                        vertexColors: THREE.VertexColors
                    });
                    mesh = new THREE.Mesh(geometry, material);
                    model = mesh;
                    // model = object;
                    scene.add( model );
                } );

                // RENDERER
                renderer = new THREE.WebGLRenderer();
                renderer.setSize( 400, 700 );
                $(this.el).find('.obj').append( renderer.domElement );

                this.animate();
            },

    Here is how I create the instances:

        var morphableBody = new BodyShapeView({ el: $("#morphable-body") });
        var bodyShapeView = new BodyShapeView({ el: $("#mean-body") });

    Any help would be really appreciated. Thanks in advance.
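
    A detail worth checking in code like this (a sketch of one likely culprit, not a verdict on this exact page): scene, camera, renderer, mesh and model are assigned inside initialize without var, so they become shared globals, and constructing the second view silently overwrites the first view's objects. Keeping that state on this gives each instance its own copy, roughly like this minimal TypeScript sketch (the THREE.Scene is faked with a plain array just to show the ownership):

        // Hypothetical sketch: per-instance state lives on `this`, so a
        // second instance cannot clobber the first instance's scene.
        class BodyShapeView {
          private scene: string[] = [];   // stand-in for a THREE.Scene

          constructor(private el: string) {
            this.scene.push(`camera for ${el}`);   // instance-local, not global
          }

          describe(): string {
            return `${this.el}: ${this.scene.join(", ")}`;
          }
        }

        const a = new BodyShapeView("#morphable-body");
        const b = new BodyShapeView("#mean-body");
        console.log(a.describe());   // each instance keeps its own scene
        console.log(b.describe());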

    Read the article

  • How to rescue data from an SD (SDHC) card that I can't reformat (possible hardware failure)

    - by sbwoodside
    I have a Transcend 16GB SDHC card with a lot of photos on it that I'd like to recover. When I plug it into the SD card reader, it takes a while for the Mac to even recognize that there's a disk present, and it shows up as 1.07GB with geometry 520/64/63 (according to fdisk).

    First I tried file recovery:

        PhotoRec: no files are found (the images are in CR2 format and I'm using
                  testdisk-6.14-WIP, which claims to recognize that format under TIF)
        dd / ddrescue: they create a 1.07GB image, same problem as above
        TestDisk: doesn't find any partitions to recover

    I found a source saying that the correct geometry for this type of SD card is heads 255, sectors/track 63, cylinders 1953, so I tried manually setting that geometry in PhotoRec/TestDisk. No improvement.

    Next I tried formatting the disk with fdisk. After writing and quitting, I ran fdisk again and it reported that the new format hadn't been saved on the disk. I also tried resetting the format/partitions with TestDisk, and that failed as well. The fdisk log is below.

    I don't really care about the card; I've already ordered a new SanDisk card. But I'd like to get the data off. Is there any way to force dd or some other tool to create an image of the disk based on the original geometry and not on what the card "thinks" its geometry is? Or am I missing something?

    Read the article

  • VNC server failed to start CentOS

    - by Shaun
    I followed a tutorial on how to install and run VNC server on CentOS 6 (since FreeNX isn't supported yet) and I keep getting:

        Starting VNC server: 1:user                                [FAILED]

    How do I figure out what's going on here? I'm new to Linux/CentOS and I'm trying to get remote desktop going so I can step away from SSH as much as possible (you know us Windows users love our pretty GUIs). So, where is the error log and how do I find it? Or maybe someone else has experienced this and knows the solution based on the simple error given? After running in debug mode, here is my error:

        + . /etc/init.d/functions
        ++ TEXTDOMAIN=initscripts
        ++ umask 022
        ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin
        ++ export PATH
        ++ '[' -z '' ']'
        ++ COLUMNS=80
        ++ '[' -z '' ']'
        +++ /sbin/consoletype
        ++ CONSOLETYPE=pty
        ++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']'
        ++ . /etc/profile.d/lang.sh
        ++ unset LANGSH_SOURCED
        ++ '[' -z '' ']'
        ++ '[' -f /etc/sysconfig/init ']'
        ++ . /etc/sysconfig/init
        +++ BOOTUP=color
        +++ RES_COL=60
        +++ MOVE_TO_COL='echo -en \033[60G'
        +++ SETCOLOR_SUCCESS='echo -en \033[0;32m'
        +++ SETCOLOR_FAILURE='echo -en \033[0;31m'
        +++ SETCOLOR_WARNING='echo -en \033[0;33m'
        +++ SETCOLOR_NORMAL='echo -en \033[0;39m'
        +++ PROMPT=yes
        +++ AUTOSWAP=no
        +++ ACTIVE_CONSOLES='/dev/tty[1-6]'
        +++ SINGLE=/sbin/sushell
        ++ '[' pty = serial ']'
        ++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d'
        + '[' -r /etc/sysconfig/vncservers ']'
        + . /etc/sysconfig/vncservers
        ++ VNCSERVERS='1:larry 2:moe 3:curly'
        ++ VNCSERVERARGS[1]='-geometry 800x600'
        ++ VNCSERVERARGS[2]='-geometry 640x480'
        ++ VNCSERVERARGS[3]='-geometry 640x480'
        + prog='VNC server'
        + RETVAL=0
        + case "$1" in
        + start
        + '[' 0 '!=' 0 ']'
        + . /etc/sysconfig/network
        ++ NETWORKING=yes
        ++ HOSTNAME=vps.binaryvisionaries.com
        ++ DOMAINNAME=server.name
        ++ GATEWAYDEV=venet0
        ++ NETWORKING_IPV6=yes
        ++ IPV6_DEFAULTDEV=venet0
        + '[' yes = no ']'
        + '[' -x /usr/bin/vncserver ']'
        + '[' -x /usr/bin/Xvnc ']'
        + echo -n 'Starting VNC server: '
        Starting VNC server: + RETVAL=0
        + '[' '!' -d /tmp/.X11-unix ']'
        + for display in '${VNCSERVERS}'
        + SERVS=1
        + echo -n '1:larry '
        1:larry + DISP=1
        + USER=larry
        + VNCUSERARGS='-geometry 800x600'
        + runuser -l larry -c 'cd ~larry && [ -r .vnc/passwd ] && vncserver :1 -geometry 800x600'
        + RETVAL=1
        + '[' 1 -eq 0 ']'
        + break
        + '[' -z 1 ']'
        + '[' 1 -eq 0 ']'
        + failure 'vncserver start'
        + local rc=1
        + '[' color '!=' verbose -a -z '' ']'
        + echo_failure
        + '[' color = color ']'
        + echo -en '\033[60G'
        + echo -n '['
        [+ '[' color = color ']'
        + echo -en '\033[0;31m'
        + echo -n FAILED
        FAILED+ '[' color = color ']'
        + echo -en '\033[0;39m'
        + echo -n ']'
        ]+ echo -ne '\r'
        + return 1
        + '[' -x /usr/bin/plymouth ']'
        + /usr/bin/plymouth --details
        + return 1
        + echo
        + '[' 1 -eq 98 ']'
        + return 1
        + exit 1

    Read the article

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved).

    For example, in a recent project involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader and tessellating each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine and looked great, but it easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster!

    This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and I would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!

    Read the article

  • Resultant Vector Algorithm for 2D Collisions

    - by John
    I am making a Pong-based game where a puck hits a paddle and bounces off. Both the puck and the paddles are circles. I came up with an algorithm to calculate the resultant vector of the puck once it meets a paddle. The game seems to function correctly, but I'm not entirely sure my algorithm is correct. Here are my variables for the algorithm:

    Given:

        velocity = the magnitude of the initial velocity of the puck before the collision
        x = the x coordinate of the puck
        y = the y coordinate of the puck
        moveX = the horizontal speed of the puck
        moveY = the vertical speed of the puck
        otherX = the x coordinate of the paddle
        otherY = the y coordinate of the paddle
        piece.horizontalMomentum = the horizontal speed of the paddle before it hits the puck
        piece.verticalMomentum = the vertical speed of the paddle before it hits the puck
        slope = the direction, in radians, of the puck's velocity
        distX = the horizontal distance between the center of the puck and the center of the paddle
        distY = the vertical distance between the center of the puck and the center of the paddle

    The algorithm solves for:

        impactAngle = the angle, in radians, of the angle of impact
        newSpeedX = the speed of the resultant vector in the X direction
        newSpeedY = the speed of the resultant vector in the Y direction

    Here is the code for my algorithm:

        int otherX = piece.x;
        int otherY = piece.y;
        double velocity = Math.sqrt((moveX * moveX) + (moveY * moveY));
        double slope = Math.atan(moveX / moveY);
        int distX = x - otherX;
        int distY = y - otherY;
        double impactAngle = Math.atan(distX / distY);
        double newAngle = impactAngle + slope;
        int newSpeedX = (int)(velocity * Math.sin(newAngle)) + piece.horizontalMomentum;
        int newSpeedY = (int)(velocity * Math.cos(newAngle)) + piece.verticalMomentum;

    For those who are not program-savvy, here it is simplified:

        velocity = √(moveX² + moveY²)
        slope = arctan(moveX / moveY)
        distX = x - otherX
        distY = y - otherY
        impactAngle = arctan(distX / distY)
        newAngle = impactAngle + slope
        newSpeedX = velocity * sin(newAngle) + piece.horizontalMomentum
        newSpeedY = velocity * cos(newAngle) + piece.verticalMomentum

    My question: Is this algorithm correct? Is there an easier/simpler way to do what I'm trying to do?
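
    For what it's worth, a common simplification for circle-on-circle bounces is to reflect the puck's velocity about the collision normal (the line between the two centers) instead of juggling arctangents; arctan(x / y) loses the quadrant and blows up when y is zero, while the reflection form v' = v - 2(v·n)n avoids angles entirely. A minimal sketch under those assumptions (plain TypeScript, hypothetical field names):

        interface Vec2 { x: number; y: number; }

        // Reflect the puck's velocity about the unit collision normal and add
        // the paddle's momentum. `puckPos` and `paddlePos` are circle centers.
        function bounce(puckPos: Vec2, puckVel: Vec2, paddlePos: Vec2, paddleVel: Vec2): Vec2 {
          const nx = puckPos.x - paddlePos.x;           // collision normal
          const ny = puckPos.y - paddlePos.y;
          const len = Math.hypot(nx, ny) || 1;          // avoid divide-by-zero
          const ux = nx / len, uy = ny / len;           // unit normal
          const dot = puckVel.x * ux + puckVel.y * uy;  // velocity along the normal
          return {                                      // v' = v - 2(v·n)n + paddle momentum
            x: puckVel.x - 2 * dot * ux + paddleVel.x,
            y: puckVel.y - 2 * dot * uy + paddleVel.y,
          };
        }

        // e.g. a puck moving straight down bounces straight up off a paddle below it
        console.log(bounce({ x: 0, y: 0 }, { x: 0, y: -5 }, { x: 0, y: -1 }, { x: 0, y: 0 }));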

    Read the article

  • XNA hlsl tex2D() only reads 3 channels from normal maps and specular maps

    - by cubrman
    Our engine uses deferred rendering and, at the main draw phase, gathers plenty of data from the objects it draws. In order to save on tex2D calls, we packed our objects' specular maps with all sorts of data, so three out of four channels are already taken. To make it clear: I am talking about the assets that come with the models and are stored in their material's Specular Level channel, not about the RenderTarget.

    So now I need another piece of information stored in the alpha channel, but I cannot make the shader read it properly! No matter what I write into alpha, it ends up being 1 (255)! I tried saving the textures in PNG/TGA formats and turning off pre-computed alpha in the model's properties. Out of every texture available to me (we use a diffuse map, a normal map, and a specular map) I was only able to read alpha successfully from the diffuse map!

    Here is how I add specular and normal maps to my model's material in the content processor:

        if (geometry.Material.Textures.ContainsKey(normalMapKey))
        {
            ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey];
            geometry.Material.Textures.Remove("NormalMap");
            geometry.Material.Textures.Add("NormalMap", texRef);
        }
        ...
        foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures)
        {
            if ((texture.Key == "Texture") || (texture.Key == "NormalMap") || (texture.Key == "SpecularMap"))
                mat.Textures.Add(texture.Key, texture.Value);
        }

    In the shader I obviously use:

        float4 data = tex2D(specularMapSampler, TexCoords);

    so data.a is always 1 in my case. Could you suggest a reason?

    Read the article

  • How to create an attached-property to change a resource's property?

    - by king.net
    I have a DrawingBrush as a resource, like this:

        <DrawingBrush x:Key="Calendar" Stretch="Uniform">
          <DrawingBrush.Drawing>
            <DrawingGroup>
              <DrawingGroup.Children>
                <GeometryDrawing Geometry="F1 M 28.0917,2.13333C 42.4005,2.13333 54,13.7329 54,28.0417C 54,42.3504 42.4004,53.95 28.0917,53.95C 13.7829,53.95 2.18334,42.3504 2.18334,28.0417C 2.18334,13.7329 13.7829,2.13333 28.0917,2.13333 Z ">
                  <GeometryDrawing.Pen>
                    <Pen Thickness="4" LineJoin="Round" Brush="#FF000000"/>
                  </GeometryDrawing.Pen>
                </GeometryDrawing>
                <GeometryDrawing Geometry="F1 M 16.9667,16.7083L 39.7167,16.7083L 39.7167,41.625L 16.9667,41.625L 16.9667,16.7083 Z ">
                  <GeometryDrawing.Pen>
                    <Pen Thickness="2.66667" StartLineCap="Square" EndLineCap="Square" MiterLimit="2.75" Brush="#FF000000"/>
                  </GeometryDrawing.Pen>
                </GeometryDrawing>
                <GeometryDrawing Brush="#FF000000" Geometry="F1 M 15.6333,15.9583L 40.7167,15.9583L 40.7167,25.2917L 15.6333,25.2917L 15.6333,15.9583 Z "/>
                <GeometryDrawing Brush="#FFFFFFFF" Geometry="F1 M 18.2167,11.9583L 22.9667,11.9583L 22.9667,20.875L 18.2167,20.875L 18.2167,11.9583 Z ">
                  <GeometryDrawing.Pen>
                    <Pen Thickness="1.33333" StartLineCap="Square" EndLineCap="Square" MiterLimit="2.75" Brush="#FF000000"/>
                  </GeometryDrawing.Pen>
                </GeometryDrawing>
                <GeometryDrawing Brush="#FFFFFFFF" Geometry="F1 M 33.7167,11.925L 38.4667,11.925L 38.4667,20.8417L 33.7167,20.8417L 33.7167,11.925 Z ">
                  <GeometryDrawing.Pen>
                    <Pen Thickness="1.33333" StartLineCap="Square" EndLineCap="Square" MiterLimit="2.75" Brush="#FF000000"/>
                  </GeometryDrawing.Pen>
                </GeometryDrawing>
                <GeometryDrawing Brush="#FF000000" Geometry="F1 M 28.0154,36.2658L 28.0154,37.4894L 21.6254,37.4894C 21.6169,37.1934 21.6615,36.908 21.7592,36.6333C 21.915,36.1815 22.165,35.7425 22.5091,35.3162C 22.8533,34.8899 23.3617,34.4091 24.0344,33.8738C 25.0782,32.983 25.776,32.295 26.1279,31.81C 26.4799,31.3249 26.6558,30.8551 26.6558,30.4005C 26.6558,29.9473 26.4894,29.5653 26.1566,29.2544C 25.8238,28.9435 25.3904,28.7881 24.8565,28.7881C 24.2915,28.7881 23.8393,28.9442 23.5001,29.2565C 23.161,29.5688 22.9892,30.0018 22.985,30.5556L 21.7614,30.4196C 21.8449,29.5345 22.1576,28.86 22.6993,28.3962C 23.241,27.9323 23.9686,27.7004 24.882,27.7004C 25.8054,27.7004 26.5358,27.9596 27.0733,28.4779C 27.6107,28.9963 27.8795,29.6385 27.8795,30.4047C 27.8795,30.7942 27.8065,31.1769 27.6607,31.5529C 27.5148,31.9289 27.2726,32.3251 26.9341,32.7415C 26.5957,33.1579 26.0115,33.7215 25.1816,34.4325C 24.4692,35.0216 24.0008,35.4214 23.7763,35.6317C 23.5518,35.842 23.3667,36.0533 23.2208,36.2658L 28.0154,36.2658 Z "/>
                <GeometryDrawing Brush="#FF000000" Geometry="F1 M 33.3178,37.4894L 33.3178,35.1781L 28.9671,35.1781L 28.9671,33.9545L 33.5897,27.8364L 34.5414,27.8364L 34.5414,33.9545L 35.765,33.9545L 35.765,35.1781L 34.5414,35.1781L 34.5414,37.4894L 33.3178,37.4894 Z M 33.3178,33.9545L 33.3178,30.1774L 30.4648,33.9545L 33.3178,33.9545 Z "/>
              </DrawingGroup.Children>
            </DrawingGroup>
          </DrawingBrush.Drawing>
        </DrawingBrush>

    And I can use it like this:

        <Rectangle Fill="{DynamicResource Calendar}" />

    Now, my question is: how can I create an attached property to change all the brushes in my resource? E.g. I'd like to be able to write this on my Rectangle:

        <Rectangle Fill="{DynamicResource Calendar}" attached:IconHelper.Foreground="Blue" />

    and in my resource get:

        <DrawingBrush x:Key="Calendar" Stretch="Uniform">
          <DrawingBrush.Drawing>
            <DrawingGroup>
              <DrawingGroup.Children>
                <GeometryDrawing Geometry="blah blah">
                  <GeometryDrawing.Pen>
                    <Pen Brush={attached:ReadItFromAboveRectangle}/>
                  </GeometryDrawing.Pen>
                </GeometryDrawing>
                <GeometryDrawing Geometry="blah blah">
                  <GeometryDrawing.Pen>
                    <Pen Brush={attached:ReadItFromAboveRectangle}/>
                  </GeometryDrawing.Pen>
                <!-- etc... -->

    Is there any way to read an attached property set on the Rectangle from inside the Calendar resource? Or is there any other way to do this? Thanks in advance.

    Read the article

  • 3D RTS pathfinding

    - by xcrypt
    I understand the A* algorithm, but I have some trouble doing it in 3D to suit the needs of my RTS.

    Basically, in the game I'm making, there will be agents with different sizes of OBB collision boxes. I can use steering behaviours for avoiding other agents, so I don't need fully dynamic pathfinding. However, there is a problem because different agents have different collision geometry, and structures can be placed in almost any place. This means that there might be a gap between two structures that some agents can fit through and some can't.

    A solution I have found to this problem is to sweep the agent's collision geometry from the start node of the edge the pathfinding algorithm is currently testing to the end node of that edge. But this is probably overkill, since every edge the algorithm tests would require creating and testing a collision-geometry sweep.

    What are some reasonable approaches to this problem? I should mention that I'd prefer not to use navmeshes; I prefer waypoints, because my entire system is based on them at the moment.
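
    One standard alternative to per-edge sweeps (offered as a sketch, not the only answer) is clearance-annotated pathfinding: precompute, once, the radius of the largest agent that can occupy each node, and let A* for an agent of radius r simply skip anything with clearance below r. The sketch below computes such a clearance map on a grid with a two-pass distance sweep; for a waypoint graph the same number would instead be stored per edge at build time (the widest agent that fits through the gap that edge crosses):

        // Per-cell clearance: Manhattan distance to the nearest blocked cell,
        // via a two-pass chamfer sweep. An agent of radius r (in cells) may
        // only use cells with clearance >= r; no sweep tests during search.
        function clearanceMap(blocked: boolean[][]): number[][] {
          const h = blocked.length, w = blocked[0].length;
          const INF = Number.POSITIVE_INFINITY;   // stays INF if nothing is blocked
          const c = blocked.map(row => row.map(b => (b ? 0 : INF)));
          for (let y = 0; y < h; y++) {           // forward pass
            for (let x = 0; x < w; x++) {
              if (y > 0) c[y][x] = Math.min(c[y][x], c[y - 1][x] + 1);
              if (x > 0) c[y][x] = Math.min(c[y][x], c[y][x - 1] + 1);
            }
          }
          for (let y = h - 1; y >= 0; y--) {      // backward pass
            for (let x = w - 1; x >= 0; x--) {
              if (y + 1 < h) c[y][x] = Math.min(c[y][x], c[y + 1][x] + 1);
              if (x + 1 < w) c[y][x] = Math.min(c[y][x], c[y][x + 1] + 1);
            }
          }
          return c;
        }

    The point of the precomputation is that placing a structure only dirties the clearance values near it, while the per-agent search stays a plain A* with one extra comparison per node.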

    Read the article

  • Storing a Hex Grid

    - by Pedro Caetano
    I've been creating a small hex-grid framework for Unity3D and have come to the following dilemma. This is my coordinate system (taken from here; link omitted because I'm a new user).

    It all works pretty nicely except for the fact that I have no idea how to store it. I originally intended to store this in a 2D array and use images to generate my maps. One problem was that it had negative values (easily fixed by offsetting the coordinates a bit). However, due to this coordinate system, such an image or bitmap would have to be diamond-shaped, and since these structures are square-shaped, this would cause a lot of headaches even if I hack something together.

    Is there anything I'm missing that could fix this? I recall seeing a forum post regarding this in the Unity forums, but I can no longer find the link. Is writing a set of coordinate translators the best solution here? If you guys think it would be helpful, I can post code and images of my problem.
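
    One workable pattern (a sketch of the usual answer, assuming axial-style hex coordinates (q, r) with known bounds): keep the hex coordinates in the game logic and translate to array indices only at the storage boundary, so the backing array stays rectangular even though the valid region looks diamond-shaped in (q, r) space:

        // Hex grid addressed by axial coordinates (q, r), backed by a flat
        // rectangular array; negative coordinates are offset to zero here.
        class HexGrid<T> {
          private cells: (T | undefined)[];
          private width: number;

          constructor(private qMin: number, qMax: number,
                      private rMin: number, rMax: number) {
            this.width = qMax - qMin + 1;
            this.cells = new Array<T | undefined>(this.width * (rMax - rMin + 1));
          }

          private index(q: number, r: number): number {
            return (r - this.rMin) * this.width + (q - this.qMin);
          }

          set(q: number, r: number, v: T): void { this.cells[this.index(q, r)] = v; }
          get(q: number, r: number): T | undefined { return this.cells[this.index(q, r)]; }
        }

        const grid = new HexGrid<string>(-3, 3, -3, 3);
        grid.set(-2, 1, "forest");
        console.log(grid.get(-2, 1));  // "forest"

    The unused corners of the rectangle simply stay undefined, which usually costs far less memory than the coordinate translators cost headaches.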

    Read the article

  • implementing a high level function in a script to call a low level function in the game engine

    - by eat_a_lemon
    In my 2D game engine I have a function that does pathfinding, find_shortest_path. It executes once per time step in the game loop and calculates the next coordinate pair in the series of coordinates leading to the destination object. Now I want to call this function from a scripting language and have it return only the final coordinate pair of the series. I want the game engine to go about the business of rendering the incremental steps, but I don't want the high-level script to care about the rendering; the high-level script is only for AI game logic. I know how to bind a method from C to Python, but how can I signal and coordinate the wait time between the incremental steps without the high-level function returning until it's time for the last step?
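
    One coordination pattern that fits this setup (a sketch of the idea only; the names are hypothetical and the actual C-to-Python binding is left out): the bound call starts the movement and immediately returns a task handle owned by the engine. The game loop advances and renders one step per tick, while the script suspends on the handle until it reports completion and then reads just the final coordinate pair:

        interface Point { x: number; y: number; }

        // Engine side: a movement task the game loop advances one step per
        // tick, rendering each intermediate coordinate as it goes.
        class MoveTask {
          private i = 0;
          constructor(private path: Point[]) {}
          tick(): void { if (!this.done()) this.i++; }   // called by the game loop
          done(): boolean { return this.i >= this.path.length - 1; }
          position(): Point { return this.path[this.i]; }
        }

        // Script side: suspend until the engine reports completion, then hand
        // the AI logic only the final coordinate pair.
        async function moveTo(task: MoveTask): Promise<Point> {
          while (!task.done()) {
            await new Promise(resolve => setTimeout(resolve, 16));  // yield a frame
          }
          return task.position();
        }

        // Minimal demo loop standing in for the engine's real game loop.
        const task = new MoveTask([{ x: 0, y: 0 }, { x: 1, y: 0 }, { x: 2, y: 1 }]);
        const loop = setInterval(() => { task.tick(); if (task.done()) clearInterval(loop); }, 16);
        moveTo(task).then(p => console.log("arrived at", p));

    In a Python binding the same shape is usually a coroutine or a blocking wrapper that polls the task handle, so the script's "wait time" is owned by the engine, not by the script.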

    Read the article

  • WPF 3D - converting from Point2D to Point3D and back again

    - by DanM
    I'm new to WPF 3D, so I may just be missing something obvious, but how would I go about converting from a 2D coordinate to a 3D coordinate and back again? I'd like the 2D coordinate to be the location measured from the upper-left corner of the Viewport3D, and the 3D coordinate to be the location relative to the origin (0, 0, 0) of the 3D world. The conversion functions should have these signatures:

        public Point3D Point2DAndWorldZToPoint3D(Point2D point2D, double worldZ)
        // usually I want to know where a 2D point will be on the ground plane,
        // so worldZ will usually be zero (but not always)

        public Point2D Point3DToPoint2D(Point3D point3D)

    I found this related question, but it only addresses conversion from 3D to 2D (not the reverse), and I'm not sure if the answers are up to date. Note: I'm currently using .NET 3.5, but if there are improvements in .NET 4.0 that would help me, please let me know.
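
    The 3D-to-2D direction is a plain projection, but the 2D-to-3D direction is underdetermined, which is exactly why the first signature needs worldZ: a screen point corresponds to a whole ray into the scene, and the answer is where that ray crosses the z = worldZ plane. A sketch of that last step (TypeScript rather than WPF; the ray is whatever your framework's unprojection of the screen point gives you):

        interface Vec3 { x: number; y: number; z: number; }
        interface Ray { origin: Vec3; dir: Vec3; }  // camera ray through the 2D point

        // Intersect the pick ray with the plane z = worldZ. Returns undefined
        // when the ray is parallel to the plane or the plane is behind the camera.
        function pointAtWorldZ(ray: Ray, worldZ: number): Vec3 | undefined {
          if (Math.abs(ray.dir.z) < 1e-9) return undefined;
          const t = (worldZ - ray.origin.z) / ray.dir.z;  // parameter along the ray
          if (t < 0) return undefined;
          return {
            x: ray.origin.x + t * ray.dir.x,
            y: ray.origin.y + t * ray.dir.y,
            z: worldZ,
          };
        }

        // e.g. a camera at z = 10 looking straight down hits z = 0 directly below
        console.log(pointAtWorldZ({ origin: { x: 2, y: 3, z: 10 }, dir: { x: 0, y: 0, z: -1 } }, 0));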

    Read the article

  • database design to speed up hibernate querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster: I load all of the tables below into Lists to perform the route-planner logic. I would appreciate any ideas for how to improve performance, or any suggestions for another way to approach this problem of handling a large set of data.

    Coordinate connections table (INT, INT, INT), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID
        1   1            2
        2   1            17
        3   1            63
        4   1            64
        5   1            65
        6   1            95

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus stop table (INT, INT, INT), containing 15,000 stops:

        STOPID      ROUTEID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_,
            coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_,
            coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_
            from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {

            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {

            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {

            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private Integer CoordinateID;

            /** From Coordinate_id value */
            @Column(name = "FROMCOORDID", nullable = false)
            private int fromCoordID;

            /** To Coordinate_id value */
            @Column(name = "TOCOORDID", nullable = false)
            private int toCoordID;

            //private Coordinate toCoordID;
        }
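
    Separately from the Hibernate load times, the biggest algorithmic win in a route planner is usually in how the loaded Lists are shaped. A sketch of the idea (TypeScript standing in for the Java; the field names are hypothetical): reshape the 50,000 connection rows into an adjacency map keyed by coordinate id once after loading, so each expansion step of the route search is an O(1) lookup instead of a scan over the whole list:

        interface CoordConnection { fromCoordId: number; toCoordId: number; distance: number; }

        // Build the adjacency map once; pathfinding then asks "who is reachable
        // from node n?" in constant time instead of scanning 50,000 rows.
        function buildAdjacency(
          conns: CoordConnection[],
        ): Map<number, { to: number; dist: number }[]> {
          const adj = new Map<number, { to: number; dist: number }[]>();
          for (const c of conns) {
            let list = adj.get(c.fromCoordId);
            if (!list) adj.set(c.fromCoordId, (list = []));
            list.push({ to: c.toCoordId, dist: c.distance });
          }
          return adj;
        }

    In Java the equivalent would be a HashMap<Integer, List<Connection>> built in one pass; the Hibernate side only needs to deliver the raw rows once.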

    Read the article

  • MacRuby - CLLocation Properties Not Accessible

    - by Craig Williams
    Anyone know why this works in Objective-C but not in MacRuby?

    Objective-C version:

        CLLocation *loc = [[CLLocation alloc] initWithLatitude:38.0 longitude:-122.0];
        NSLog(@"Lat: %.2f", loc.coordinate.latitude);
        NSLog(@"Long: %.2f", loc.coordinate.longitude);
        [loc release];

        // Results:
        // 2010-04-30 16:48:55.568 OCCoreLocationTest[70030:a0f] Lat: 38.00
        // 2010-04-30 16:48:55.570 OCCoreLocationTest[70030:a0f] Long: -122.00

    Here is the MacRuby version with results:

        loc = CLLocation.alloc.initWithLatitude(38.0, longitude:-122.0)
        puts loc.class        # => CLLocation
        puts loc.description  # => <+38.00000000, -122.00000000> +/- 0.00m (speed -1.00 mps / course -1.00) @ 2010-04-30 16:37:47 -0600
        puts loc.respond_to?(:coordinate)  # => true
        puts loc.coordinate.latitude       # => Error: unrecognized runtime type `{?=dd}' (TypeError)

    Read the article

  • Assigning figure size to a figure with a given handle (MATLAB)

    - by James
    Hi, is there a way to assign the OuterPosition property of a figure to a figure with a given handle? For example, if I wanted to define a figure as, say, figure 1, I would use:

        figure(1)
        imagesc(Arrayname)  % i.e. any array

    I can also change the properties of a figure using:

        figure('Name', 'Name of figure', 'NumberTitle', 'off', 'OuterPosition', [scrsz(1) scrsz(2) 700 700]);

    Is there a property name I can use to assign the OuterPosition property to the figure assigned as figure 1? The reason I am asking is that I am using a command called save2word (from the MATLAB File Exchange) to save some plots from a function I have made into a Word file, and I want to limit the number of figures I have open as it does this. The rest of the code I have is:

        plottedloops = [1, 5:5:100];  % Specifies which loops I want to save
        GetGeometry = getappdata(0, 'GeometryAtEachLoop')  % Obtains a 4D array containing geometry information at each loop
        NumSections = size(GetGeometry, 4);  % Defined by the fourth dimension of the 4D array
        for j = 1:NumSections
            for i = 1:plottedloops
                P = GetGeometry(:,:,i,j);
                TitleSize = 14;
                Fsize = 8;
                % Save Geometry
                scrsz = get(0, 'ScreenSize');  % left, bottom, width, height
                figure('Name', 'Geometry at each loop', 'NumberTitle', 'off', 'OuterPosition', [scrsz(1) scrsz(2) 700 700]);
                % This specifies the figure name, dims etc., but also means
                % multiple figures are opened as the command runs.
                % I have tried this, but it doesn't work:
                % figure(0, 'OuterPosition', [scrsz(1) scrsz(2) 700 700]);
                imagesc(P), title('Geometry', 'FontSize', TitleSize), axis([0 100 0 100]);
                text(20, 110, ['Loop:', num2str(i)], 'FontSize', TitleSize);     % Show loop in figure
                text(70, 110, ['Section:', num2str(j)], 'FontSize', TitleSize);  % Show section number in figure
                save2word('Geometry at each loop');  % Saves figure to a word file
            end
        end

    Thanks

    Read the article

  • How do faces in .obj work?

    - by Adl
    Hi. When parsing an .obj file with vertices and vertex faces, it is easy to pass the vertices to the shader and then use glDrawElements with the vertex-face indices. When parsing an .obj file with vertices and texture coordinates, another type of face occurs: texture-coordinate faces. When displaying textures (apart from loading images, binding them, and passing texture coordinates to the shader), how do you use the texture-coordinate faces? They differ from the vertex faces, and I suppose the texture-coordinate faces have a purpose when displaying textures?

    Regards, Niclas
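
    The short version is that an .obj face corner is a tuple of indices, one per attribute stream: a face line like f 1/4 2/5 3/6 pairs positions 1, 2, 3 with texture coordinates 4, 5, 6, and the two index spaces are independent. Since glDrawElements can only index one combined stream, the usual approach is to expand each distinct v/vt pair into a single vertex and re-index. A rough sketch of that de-duplication (TypeScript, assuming the v and vt lines are already parsed into arrays):

        // Turn .obj-style "v/vt" face corners into a single indexed vertex
        // stream: one combined vertex per unique (v, vt) pair.
        function buildMesh(
          positions: number[][],            // parsed "v" lines (xyz), 1-based in the file
          uvs: number[][],                  // parsed "vt" lines (uv), 1-based in the file
          faceCorners: [number, number][],  // (vIndex, vtIndex) per corner
        ): { vertices: number[]; indices: number[] } {
          const seen = new Map<string, number>();
          const vertices: number[] = [];
          const indices: number[] = [];
          for (const [v, vt] of faceCorners) {
            const key = `${v}/${vt}`;
            let idx = seen.get(key);
            if (idx === undefined) {
              idx = vertices.length / 5;    // 3 position + 2 uv floats per vertex
              vertices.push(...positions[v - 1], ...uvs[vt - 1]);
              seen.set(key, idx);
            }
            indices.push(idx);
          }
          return { vertices, indices };
        }

    The resulting interleaved vertices go in the vertex buffer and indices in the element buffer, after which a single glDrawElements call covers both attributes.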

    Read the article

  • pass array by reference in c

    - by Yassir
    How can I pass an array of structs by reference? Example:

        struct Coordinate {
            int X;
            int Y;
        };

        SomeMethod(Coordinate *Coordinates[]) {
            // Do something with the array
        }

        int main() {
            Coordinate Coordinates[10];
            SomeMethod(&Coordinates);
        }

    Read the article

  • View results of affine transform

    - by stckjp
    I am trying to find out why, when I apply affine transformations to an image in OpenCV, the result is not visible in the preview window; the entire window is black. How can I work around this problem so that I can always view my transformed image (the result of the affine transform) in the window, no matter what transformation is applied?

    Update: I think this happens because all the transformations are calculated with respect to the origin of the coordinate system (the top-left corner of the image). While for rotation I can specify the center of the rotation, and I am able to view the result, when I perform scaling I am not able to control where the transformed image goes. Is it possible to somehow move the coordinate system to make the image fit in the window?

    Update 2: I have an image which contains only a ROI at some position in it (the rest of the image is black), and I need to apply a set of affine transforms to it. To make things simpler, and to see the effect of each individual transform, I applied them one by one. What I noticed is that whenever I translate the image so that the center of the ROI is at the center of the coordinate system (the top-left corner of the view window), all the affine transforms perform correctly without moving; however, the upper and left parts of the ROI then remain cut off outside the current view window. If I move the ROI's central point to another point in the view window (for example the window center), an affine transform of the type A = [a 0 0; 0 b 0] (A is the 2x3 matrix parameter of the warpAffine function) moves the image (ROI) outside of the view window, which doesn't happen when the ROI's center is in the top-left corner. How can I modify the affine transform so the image doesn't move out of its place (i.e. behaves the same way as when the ROI center is at the center of the coordinate system)?
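
    The Update 2 behaviour matches the fact that the matrix is always applied about the origin, so a transform meant to act about the ROI center has to be conjugated with translations: M' = T(c) · M · T(−c), which keeps the pivot c fixed in place. A sketch of that composition for 2x3 affine matrices, stored as [a, b, tx, c, d, ty] (TypeScript, independent of OpenCV; the same matrix would then be handed to warpAffine):

        type Affine = [number, number, number, number, number, number]; // [a b tx; c d ty]

        // Compose two 2x3 affine matrices: result = m2 ∘ m1 (apply m1 first).
        function compose(m2: Affine, m1: Affine): Affine {
          const [a2, b2, tx2, c2, d2, ty2] = m2;
          const [a1, b1, tx1, c1, d1, ty1] = m1;
          return [
            a2 * a1 + b2 * c1, a2 * b1 + b2 * d1, a2 * tx1 + b2 * ty1 + tx2,
            c2 * a1 + d2 * c1, c2 * b1 + d2 * d1, c2 * tx1 + d2 * ty1 + ty2,
          ];
        }

        // Scale about an arbitrary pivot (cx, cy): T(c) * S * T(-c).
        function scaleAbout(sx: number, sy: number, cx: number, cy: number): Affine {
          const toOrigin: Affine = [1, 0, -cx, 0, 1, -cy];
          const scale: Affine = [sx, 0, 0, 0, sy, 0];
          const back: Affine = [1, 0, cx, 0, 1, cy];
          return compose(back, compose(scale, toOrigin));
        }

        console.log(scaleAbout(2, 2, 100, 50)); // [2, 0, -100, 0, 2, -50]; (100, 50) stays put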

    Read the article

  • matlab: how to transform screen pixels into specific coordinates

    - by user3137385
    I have to draw a curve, captured on an image using screen pixels (mouse clicks), into a coordinate system. E.g.: pixels on the screen from left to right (130 px to 970 px) correspond to the x-axis of my coordinate system (1000 to 6000), and pixels from bottom to top (670 to 99) correspond to the y-axis of the coordinate system (0 to 1.2). How can this be done? Maybe there's a function in MATLAB that does something like this?

    Some more explanation: I have a JPG image of a curve on a coordinate system. I've got the pixel positions (x, y) of several points on that curve. Now I want to plot the same curve in a MATLAB figure with the same x and y axes as in the JPG image.
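
    The mapping itself is plain linear interpolation along each axis; the only wrinkle is that pixel y grows downward while the data axis grows upward, which the formula absorbs automatically when the axis endpoints are plugged in in the right order. A sketch of the arithmetic (written in TypeScript rather than MATLAB, using the pixel and axis ranges quoted above; in MATLAB the same two lines of arithmetic apply elementwise):

        // Map a screen pixel (px, py) into data coordinates, given the pixel
        // positions of the axis endpoints and the data values they represent.
        function pixelToData(px: number, py: number): { x: number; y: number } {
          const x = 1000 + (px - 130) * (6000 - 1000) / (970 - 130);
          // pixel y runs top-down: 99 at the top of the axis, 670 at the bottom
          const y = 0 + (py - 670) * (1.2 - 0) / (99 - 670);
          return { x, y };
        }

        console.log(pixelToData(130, 670)); // { x: 1000, y: 0 }
        console.log(pixelToData(970, 99));  // { x: 6000, y: 1.2 }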

    Read the article

  • How to fetch managed objects sorted by calculated value

    - by Marcin Zbijowski
    Hello, I'm working on an app that uses Core Data. There is a location entity that holds latitude and longitude values. I'd like to fetch those entities sorted by distance to the user's location. I tried to set the sort descriptor to the distance formula sqrt((x1 - x2)^2 + (y1 - y2)^2), but it fails with the exception "... keypath ... not found in entity".

        NSString *distanceFormula = [NSString stringWithFormat:
            @"sqrt(((latitude - %f) * (latitude - %f)) + ((longitude - %f) * (longitude - %f)))",
            location.coordinate.latitude, location.coordinate.latitude,
            location.coordinate.longitude, location.coordinate.longitude];
        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:distanceFormula
                                                                      ascending:YES];
        [fetchRequest setSortDescriptors:[NSArray arrayWithObject:sortDescriptor]];

        NSError *error;
        NSArray *result = [[self managedObjectContext] executeFetchRequest:fetchRequest error:&error];

    I'd like to fetch already-sorted objects rather than fetch them all and then sort in code. Any tips appreciated.

    Read the article

  • Undefined javascript function?

    - by user74283
    Working on a Google Maps project and stuck on what seems to be a minor issue. When I call the displayMarkers function, Firebug returns:

        ReferenceError: displayMarkers is not defined
        [Break On This Error] displayMarkers(1);

    Here is the code:

        <script type="text/javascript">
        function initialize() {
            var center = new google.maps.LatLng(25.7889689, -80.2264393);
            var map = new google.maps.Map(document.getElementById('map'), {
                zoom: 10,
                center: center,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            });

            //var data = [[25.924292, -80.124314], [26.140795, -80.3204049], [25.7662857, -80.194692]]
            var data = {"crs": {"type": "link", "properties": {"href": "http://spatialreference.org/ref/epsg/4326/", "type": "proj4"}}, "type": "FeatureCollection", "features": [{"geometry": {"type": "Point", "coordinates": [25.924292, -80.124314]}, "type": "Feature", "properties": {"industry": [2], "description": "hosp", "title": "shaytac hosp2"}, "id": 35}, {"geometry": {"type": "Point", "coordinates": [26.140795, -80.3204049]}, "type": "Feature", "properties": {"industry": [1, 2], "description": "retail", "title": "shaytac retail"}, "id": 48}, {"geometry": {"type": "Point", "coordinates": [25.7662857, -80.194692]}, "type": "Feature", "properties": {"industry": [2], "description": "hosp2", "title": "shaytac hosp3"}, "id": 36}]}

            var markers = [];
            for (var i = 0; i < data.features.length; i++) {
                var latLng = new google.maps.LatLng(data.features[i].geometry.coordinates[0],
                                                    data.features[i].geometry.coordinates[1]);
                var marker = new google.maps.Marker({
                    position: latLng,
                    title: console.log(data.features[i].properties.industry[0]),
                    map: map
                });
                marker.category = data.features[i].properties.industry[0];
                marker.setVisible(true);
                markers.push(marker);
            }

            function displayMarkers(category) {
                var i;
                for (i = 0; i < markers.length; i++) {
                    if (markers[i].category === category) {
                        markers[i].setVisible(true);
                    } else {
                        markers[i].setVisible(false);
                    }
                }
            }
        }

        google.maps.event.addDomListener(window, 'load', initialize);
        </script>

        <div id="map-container">
            <div id="map"></div>
        </div>

        <input type="button" value="Retail" onclick="displayMarkers(1);">
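
    For what it's worth, the symptom matches a scoping issue rather than a Maps API problem (a sketch of the idea, hedged since only part of the page is shown): displayMarkers is declared inside initialize, so the inline onclick handler, which resolves names in the global scope, can never see it. Lifting the function and the markers array it closes over to the top level, or exporting it explicitly, is the usual fix:

        // Shared top-level state that both initialize() and the button can reach.
        const markers: { category: number; setVisible(v: boolean): void }[] = [];

        function displayMarkers(category: number): void {
          for (const m of markers) {
            m.setVisible(m.category === category);  // show matches, hide the rest
          }
        }

        // Inline onclick="displayMarkers(1)" resolves globally, so when the
        // script itself runs in a nested or module scope, export it explicitly.
        (window as any).displayMarkers = displayMarkers;

    initialize() would then push its markers into the shared array instead of a local one, and the button wiring stays unchanged.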

    Read the article
