Search Results

Search found 1507 results on 61 pages for 'coordinates'.


  • Performance of map overlay in conjunction with ItemizedOverlay is very poor

    - by oviroa
    I am trying to display one PNG (drawable) on a map at about 300 points. I am retrieving the coordinates from an SQLite table and dumping them into a cursor. When I try to display them by iterating over the cursor, it takes forever for the images to be drawn, about 0.5 seconds per image. I find that to be suspiciously slow, so some insight on how I can increase performance would help. Here is the snippet of my code that does the rendering:

        while (!mFlavorsCursor.isAfterLast()) {
            Log.d("cursor", "" + (i++));
            point = new GeoPoint(
                (int) (mFlavorsCursor.getFloat(mFlavorsCursor.getColumnIndex(DataBaseHelper.KEY_LATITUDE)) * 1000000),
                (int) (mFlavorsCursor.getFloat(mFlavorsCursor.getColumnIndex(DataBaseHelper.KEY_LONGITUDE)) * 1000000));
            overlayitem = new OverlayItem(point, "", "");
            itemizedoverlay.addOverlay(overlayitem);
            itemizedoverlay.doPopulate();
            mFlavorsCursor.moveToNext();
        }
        mapOverlays.add(itemizedoverlay);

    I tried to isolate all the steps, and it looks like the slow one is this:

        itemizedoverlay.doPopulate();

    This is a public method in my class that extends ItemizedOverlay; it runs the private populate() method.
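
    A likely fix, sketched against the snippet above (doPopulate is the asker's wrapper around ItemizedOverlay's protected populate()): populate() re-reads every item already added, so calling it once per added item makes the loop quadratic in the number of points. Calling it once, after the loop, cuts the work to a single pass:

        // add all ~300 items first ...
        while (!mFlavorsCursor.isAfterLast()) {
            point = new GeoPoint(
                (int) (mFlavorsCursor.getFloat(mFlavorsCursor.getColumnIndex(DataBaseHelper.KEY_LATITUDE)) * 1000000),
                (int) (mFlavorsCursor.getFloat(mFlavorsCursor.getColumnIndex(DataBaseHelper.KEY_LONGITUDE)) * 1000000));
            itemizedoverlay.addOverlay(new OverlayItem(point, "", ""));
            mFlavorsCursor.moveToNext();
        }
        itemizedoverlay.doPopulate(); // ... then populate the overlay once
        mapOverlays.add(itemizedoverlay);

    Caching the two getColumnIndex() lookups outside the loop is a smaller but free win as well.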

    Read the article

  • OpenGL particle system

    - by allan
    I'm really new to OpenGL, so bear with me. I'm trying to simulate a particle system using OpenGL but I can't get it to work. This is what I have so far:

        #include <GL/glut.h>

        int main(int argc, char **argv) {
            // data allocation, various non-OpenGL stuff
            ............
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
            glutInitWindowPosition(100, 100);
            glutInitWindowSize(size, size);
            glPointSize(4);
            glutCreateWindow("test gl");
            ............
            // initial state, not OpenGL
            ............
            glViewport(0, 0, size, size);
            glutDisplayFunc(display);
            glutIdleFunc(compute);
            glutMainLoop();
        }

        void compute(void) {
            // change state, not OpenGL
            glutPostRedisplay();
        }

        void display(void) {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_POINTS);
            for (i = 0; i < nparticles; i++) {
                // two types of particles
                if (TYPE(particle[i]) == 1) glColor3f(1, 0, 0);
                else glColor3f(0, 0, 1);
                glVertex2f(X(particle[i]), Y(particle[i]));
            }
            glEnd();
            glFlush();
            glutSwapBuffers();
        }

    I get a black window after a couple of seconds (the window has just the title bar before that). Where do I go wrong? Any help would be very much appreciated. Thanks.

    Later edit: the x and y coordinates of each particle are within the interval (0, size).
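
    Two hedged guesses at the black window. First, OpenGL calls need a current context, and glPointSize is issued before glutCreateWindow creates one, so it likely has no effect. Second, and more importantly, nothing sets a projection, and the default projection only shows coordinates in [-1, 1], while the particles live in (0, size), so almost everything gets clipped. A sketch of the reordered setup:

        glutCreateWindow("test gl");
        glPointSize(4);                  /* GL calls only take effect once a context exists */

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(0, size, 0, size);    /* map (0,size) x (0,size) to the window;
                                            the default view volume is [-1,1] */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();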

    Read the article

  • Google Maps Terms of Service - saving some data to a database

    - by R.M.
    I've read the terms of service, and, from what I understand, I'm not allowed to store any information I retrieve from the Google Maps API. Are there any exceptions to this? More to the point, I'm planning on building an application that shows the user several points of interest (like restaurants, libraries etc) at a certain distance around a location he chooses (it can be in one city or more, depending on the distance he chooses). There are two problems: The first problem is that (at least for my country) the geocoder doesn't locate exact addresses, at best it only locates street names (but completely ignores street numbers) in larger cities. It is even worse for smaller rural areas. So the only way to accurately show the places on the map is by storing their coordinates in the database. Another problem seems to be with calculating distances. To show the points located below a certain distance from the user, it would mean I would have to use GDirections to get all distances between the user's location and the other points, to see which ones to show. That would be really slow for the user (since I also have to set a small delay between requests), and it would also send a pretty large amount of requests to google. Would I be allowed to store those distances in a database? The users would not be able to access a list of all the stored information, they would only see the names of the places, and a map with some markers on it. Thank you.

    Read the article

  • Browser relative positioning with jQuery and CutyCapt

    - by Acoustic
    I've been using CutyCapt to take screen shots of several web pages with great success. My challenge now is to paint a few dots on those screen shots that represent where a user clicked. CutyCapt goes through a process of resizing the web page to the scroll width before taking a screen shot. That's extremely useful because you only get content and not much (if any) of the page's background. My challenge is trying to map a user's mouse X coordinates to the screen shot. Obviously users have different screen resolutions and have their browser window open to different sizes. The image below shows 3 examples with the same logo. Assume, for example, that the logo is 10 pixels to the left edge of the content area (in red). In each of these cases, and for any resolution, I need a JavaScript routine that will calculate that the logo's X coordinate is 10. Again, the challenge (I think) is differing resolutions. In the center-aligned examples, the logo's position, as measured from the left edge of the browser (in black), differs with changing browser size. The left-aligned example should be simple as the logo never moves as the screen resizes. Can anyone think of a way to calculate the scrollable width of a page? In other words, I'm looking for a JavaScript solution to calculate the minimum width of the browser window before a horizontal scroll bar shows up. Thanks for your help!
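
    A sketch of one way to measure this (hedged: exact property behavior varies across the browsers of this era, so the combination below is an assumption to test, not a guarantee):

        // width the page's content needs before a horizontal scrollbar appears
        function contentWidth() {
            var doc = document.documentElement;
            return Math.max(doc.scrollWidth, document.body ? document.body.scrollWidth : 0);
        }

        // click X mapped into the content area's coordinate system;
        // for a centered layout the content's left edge sits at half the slack,
        // for a left-aligned layout the slack on the left is zero
        function xRelativeToContent(clickX, centered) {
            var slack = Math.max(0, window.innerWidth - contentWidth());
            return clickX - (centered ? slack / 2 : 0);
        }

    With that, the logo in the example would come out at x = 10 regardless of resolution, as long as the page really is centered or left-aligned.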

    Read the article

  • ActionScript Tweening Matrix Transform (big problem)

    - by TheDarkIn1978
    I'm attempting to tween the position and angle of a sprite. If I call the functions without tweening, so the change happens in one step, the sprite is properly set at the correct coordinates and angle. However, tweening makes it all go crazy. I'm using a rotateAroundInternalPoint matrix, and I assume tweening it along with the coordinate positions is messing up the results.

    Works fine (without tweening):

        public function curl():void {
            imageWidth = 400;
            imageHeight = 600;
            parameters.distance = 0.5;
            parameters.angle = 45;

            backCanvas.x = imageWidth - imageHeight * parameters.distance;
            backCanvas.y = imageHeight - imageHeight * parameters.distance;

            var internalPointMatrix:Matrix = backCanvas.transform.matrix;
            MatrixTransformer.rotateAroundInternalPoint(internalPointMatrix,
                backCanvas.width * parameters.distance, 0, parameters.angle);
            backCanvas.transform.matrix = internalPointMatrix;
        }

    Doesn't work properly (with tweening):

        public function curlUp():void {
            imageWidth = 400;
            imageHeight = 600;
            parameters.distance = 0.5;
            parameters.angle = 45;

            distanceTween = new Tween(parameters, "distance", None.easeNone, 0, distance, 1, true);
            angleTween = new Tween(parameters, "angle", None.easeNone, 0, angle, 1, true);
            angleTween.addEventListener(TweenEvent.MOTION_CHANGE, animateCurl);
        }

        private function animateCurl(evt:TweenEvent):void {
            backCanvas.x = imageWidth - imageHeight * parameters.distance;
            backCanvas.y = imageHeight - imageHeight * parameters.distance;

            var internalPointMatrix:Matrix = backCanvas.transform.matrix;
            MatrixTransformer.rotateAroundInternalPoint(internalPointMatrix,
                backCanvas.width * parameters.distance, 0, parameters.angle - previousAngle);
            backCanvas.transform.matrix = internalPointMatrix;
            previousAngle = parameters.angle;
        }

    In order for the angle to tween properly, I had to add a variable that tracks its last angle setting and subtract it from the new one. However, I still cannot get this tween to end at the same position and angle as the version without tweening. I've been stuck on this problem for a day now, so any help would be greatly appreciated.
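
    A hedged sketch of a fix: instead of accumulating per-frame deltas into backCanvas.transform.matrix (which compounds rounding error and interacts badly with the x/y writes), save the untransformed matrix once and rebuild the transform from it on every tween tick using the absolute angle. baseMatrix is an assumed new member, not from the original code:

        private var baseMatrix:Matrix;

        public function curlUp():void {
            baseMatrix = backCanvas.transform.matrix.clone(); // untransformed state
            distanceTween = new Tween(parameters, "distance", None.easeNone, 0, distance, 1, true);
            angleTween = new Tween(parameters, "angle", None.easeNone, 0, angle, 1, true);
            angleTween.addEventListener(TweenEvent.MOTION_CHANGE, animateCurl);
        }

        private function animateCurl(evt:TweenEvent):void {
            var m:Matrix = baseMatrix.clone();               // same starting point every tick
            MatrixTransformer.rotateAroundInternalPoint(m,
                backCanvas.width * parameters.distance, 0, parameters.angle); // absolute angle
            backCanvas.transform.matrix = m;
            backCanvas.x = imageWidth - imageHeight * parameters.distance;
            backCanvas.y = imageHeight - imageHeight * parameters.distance;
        }

    One more thing to watch: backCanvas.width is the bounding-box width and changes as the sprite rotates, so using it as the pivot offset moves the rotation point mid-tween; a fixed pivot computed once from the untransformed sprite avoids that drift.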

    Read the article

  • Combinatorial optimisation of a distance metric

    - by Jose
    I have a set of trajectories, made up of points along the trajectory, with coordinates associated with each point. I store these in a 3D array (trajectory, point, param). I want to find the set of r trajectories that have the maximum accumulated distance between the possible pairwise combinations of these trajectories. My first attempt, which I think is working, looks like this:

        max_dist = 0
        for h in itertools.combinations(xrange(num_traj), r):
            accum = 0.
            for (m, l) in itertools.combinations(h, 2):
                for (i, j) in itertools.izip(range(k), range(k)):
                    A = [(my_mat[m, i, z] - my_mat[l, j, z]) ** 2
                         for z in xrange(k)]
                    A = numpy.array(numpy.sqrt(A)).sum()
                    accum += A
            if max_dist < accum:
                max_dist = accum
                selected_trajectories = h

    This takes forever, as num_traj can be around 500-1000, and r can be around 5-20. k is arbitrary, but can typically be up to 50. Trying to be super-clever, I have put everything into two nested list comprehensions, making heavy use of itertools:

        chunk = [[numpy.sqrt((my_mat[m, i, :] - my_mat[l, j, :]) ** 2).sum()
                  for ((m, l), i, j) in itertools.product(
                      itertools.combinations(h, 2), range(k), range(k))]
                 for h in itertools.combinations(range(num_traj), r)]

    Apart from being quite unreadable, it is also taking a long time. Can anyone suggest any ways to improve on this?
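
    A sketch of the standard speedup, with two hedges: it mirrors the first snippet's metric (the elementwise sqrt of squared differences amounts to summed absolute differences, with points paired up by index), and it only removes the per-pair recomputation. Every pair (m, l) is currently re-evaluated inside many different combinations h, so precompute all pairwise trajectory distances once with vectorized numpy, then score each combination with r*(r-1)/2 table lookups:

        import itertools
        import numpy

        # dist[m, l]: accumulated distance between trajectories m and l,
        # computed once instead of once per combination containing the pair
        dist = numpy.zeros((num_traj, num_traj))
        for m in xrange(num_traj):
            d = numpy.sqrt((my_mat - my_mat[m]) ** 2)   # shape (num_traj, k, params)
            dist[m] = d.sum(axis=2).sum(axis=1)

        max_dist, selected_trajectories = 0.0, None
        for h in itertools.combinations(xrange(num_traj), r):
            accum = sum(dist[m, l] for m, l in itertools.combinations(h, 2))
            if accum > max_dist:
                max_dist, selected_trajectories = accum, h

    Note that the number of combinations C(num_traj, r) is still astronomical for num_traj = 500 and r = 20, so an exact scan may remain infeasible; a greedy or local-search heuristic over the precomputed dist table might be the realistic endpoint.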

    Read the article

  • Turning off antialiasing in Löve2D

    - by cjanssen
    I'm using Löve2D for writing a small game. Löve2D is an open source game engine for Lua. The problem I'm encountering is that some antialias filter is automatically applied to your sprites when you draw them at non-integer positions:

        love.graphics.draw(sprite, x, y)

    So when x or y is not round (for example, x = 100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x, y) points to the center of the sprite: a sprite which is 31x30 big will appear blurred again, because its pixels are painted at non-integer positions. Since I am using pixel art, I want to avoid this at all costs, otherwise the art is destroyed by this effect. The workaround I am using so far is to force the coordinates to be round by littering the code with calls to math.floor(), and to force all the sprites to have even sizes by adding a row or column of transparent pixels in the paint program, if needed. Is there some command to deactivate the antialiasing that I can call at program startup?
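
    One likely switch, sketched for the Löve versions of this era (setFilter is a real Image method; the global default-filter call has moved between releases, so treat its exact name as version-dependent):

        -- per image: nearest-neighbour sampling keeps pixel art crisp
        sprite:setFilter("nearest", "nearest")

        -- newer releases can also set the default for all images at startup:
        -- love.graphics.setDefaultFilter("nearest", "nearest")

    With nearest-neighbour filtering, the math.floor() littering and the even-size padding should become unnecessary.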

    Read the article

  • Geolocation and jQuery - Can't post results using ajax

    - by etombaugh
    I'm currently working on a project to make a location-aware site. In essence, the user comes to the page, their location is found using the HTML5 method, and then using jQuery the location is posted to a page which saves the location/address to a CodeIgniter session. If they want to update their location, or change to a different location (i.e. they want to use their work address instead of their present address), there's a jQuery colorbox that displays and lets them type in a custom address. Everything works flawlessly for getting the initial location, but when I try to save the updated location, I receive the error "Uncaught TypeError: Object [object DOMWindow] has no method 'lat'", which Google Chrome references as coming not from jQuery but from the Google Maps API file. Any suggestions?

        jQuery('.inputsubmit').click(function() {
            // take values from user-submitted fields and parse them into an address string
            var street = jQuery('.inputaddress').val();
            var city = jQuery('.inputcity').val();
            var state = jQuery('.inputstate').val();
            var address = street + " " + city + ", " + state;

            geocoder = new google.maps.Geocoder();
            var geocoderresult = geocoder.geocode({'address': address}, function(results, status) {
                if (status == google.maps.GeocoderStatus.OK) {
                    var newlocation = results[0].geometry.location;
                    // post coordinates and address string to a CodeIgniter function
                    // to update the user's session information
                    jQuery.post("somepage", {location: newlocation, address: address}, function(data) {
                        alert("Data Loaded: " + data);
                    });
                } else {
                    alert("Geocoder failed due to: " + status);
                }
            });
        });

    I've tried everything I can think of to get the post to work. All of the code works up to that point; I've commented out the post line and everything works correctly. This is one of our main website's features, to provide instant and quick results based on location. Thanks!
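
    A hedged reading of the error: results[0].geometry.location is a google.maps.LatLng object, and jQuery.post tries to serialize the whole object, ending up invoking its lat method with the wrong receiver. Posting plain numbers sidesteps the problem (lat() and lng() are the LatLng accessor methods):

        var newlocation = results[0].geometry.location;
        jQuery.post("somepage", {
            lat: newlocation.lat(),
            lng: newlocation.lng(),
            address: address
        }, function(data) {
            alert("Data Loaded: " + data);
        });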

    Read the article

  • Suggested (simple) approach for drawing large numbers of visual elements in WPF?

    - by Ender
    I'm writing an interface that features a large (~50000px width) "canvas"-type area that is used to display a lot of data in a fairly novel way. This involves lots of lines, rectangles, and text. The user can scroll around to explore the entire canvas. At the moment I'm just using a standard Canvas panel with various Shapes placed on it. This is nice and easy to do: construct a shape, assign some coordinates, and attach it to the Canvas. Unfortunately, it's pretty slow (to construct the children, not to do the actual rendering). I've looked into some alternatives, it's a bit intimidating. I don't need anything fancy - just the ability to efficiently construct and place objects in a coordinate plane. If all I get are lines, colored rectangles, and text, I'll be happy. Do I need Geometry instances inside of Geometry Groups inside of GeometryDrawings inside of some Panel container? Note: I'd like to include text and graphics (i.e. colored rectangles) in the same space, if possible.
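
    The usual low-overhead answer for this many primitives is DrawingVisual hosted in a custom FrameworkElement, which skips most of the per-Shape element overhead while keeping lines, rectangles, and text in one coordinate space. A minimal sketch (one visual for everything; a real 50000px canvas would likely split items across several visuals, or build them per visible region):

        using System.Globalization;
        using System.Windows;
        using System.Windows.Media;

        public class DiagramHost : FrameworkElement
        {
            private readonly DrawingVisual _visual = new DrawingVisual();

            public DiagramHost()
            {
                AddVisualChild(_visual);
                using (DrawingContext dc = _visual.RenderOpen())
                {
                    Pen pen = new Pen(Brushes.Black, 1.0);
                    dc.DrawRectangle(Brushes.LightBlue, pen, new Rect(10, 10, 80, 40));
                    dc.DrawLine(pen, new Point(0, 0), new Point(200, 120));
                    dc.DrawText(new FormattedText("label", CultureInfo.InvariantCulture,
                        FlowDirection.LeftToRight, new Typeface("Segoe UI"), 12.0, Brushes.Black),
                        new Point(12, 14));
                }
            }

            protected override int VisualChildrenCount { get { return 1; } }
            protected override Visual GetVisualChild(int index) { return _visual; }
        }

    Construction cost then becomes one RenderOpen pass instead of tens of thousands of dependency objects.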

    Read the article

  • How to Use XSLT to Replace Coordinate Separator With List of Tuples?

    - by kuloch
    I have a space-separated list of coordinate tuples. Each tuple consists of a space-separated list of 2-dimensional coordinates. E.g. "1.1 2.8 1.2 2.9" represents a line from POINT(1.1 2.8) to POINT(1.2 2.9). I need this to instead be "1.1,2.8 1.2,2.9". How would I use XSLT to perform the replacement of space-to-comma between pairs of numbers? I have the "string(gml:LinearRing/gml:posList)". This is being used on a Java Web Service that spits out GML 3.1.1 features with geometries. The service supports optional KML output, by using XSLT to transform the GML document into a KML document (at least, the chunks deemed "important"). I am locked into XSLT 1.0, so regex from XSLT 2.0 is not an option. I am aware that GML uses lat/lon while KML uses lon/lat. That's being handled before XSLT, though it would be nice to have that also done with XSLT.
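
    A sketch that stays within XSLT 1.0 (a named recursive template; the gml:posList path is taken from the question): walk the space-separated list and emit a comma after every odd-numbered value and a space after every even-numbered one:

        <xsl:template name="pairs">
          <xsl:param name="list"/>
          <xsl:param name="n" select="0"/>
          <xsl:choose>
            <xsl:when test="contains($list, ' ')">
              <xsl:value-of select="substring-before($list, ' ')"/>
              <!-- comma after the 1st, 3rd, ... value; space after the 2nd, 4th, ... -->
              <xsl:choose>
                <xsl:when test="$n mod 2 = 0"><xsl:text>,</xsl:text></xsl:when>
                <xsl:otherwise><xsl:text> </xsl:text></xsl:otherwise>
              </xsl:choose>
              <xsl:call-template name="pairs">
                <xsl:with-param name="list" select="substring-after($list, ' ')"/>
                <xsl:with-param name="n" select="$n + 1"/>
              </xsl:call-template>
            </xsl:when>
            <xsl:otherwise>
              <xsl:value-of select="$list"/>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>

        <!-- invocation; normalize-space() guards against stray whitespace -->
        <xsl:call-template name="pairs">
          <xsl:with-param name="list" select="normalize-space(gml:LinearRing/gml:posList)"/>
        </xsl:call-template>

    Running it over "1.1 2.8 1.2 2.9" yields "1.1,2.8 1.2,2.9". The lat/lon swap could be handled in the same template by emitting substring-after before substring-before within each pair.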

    Read the article

  • [jQuery 1.4] move one div over another

    - by Tomasz Zielinski
    I have two div elements that are twins (i.e. their dimensions and contents are identical). I want to move one of those divs over the other, so that their corners are at exactly the same coordinates. What I try to do is:

        var offset = $('div#placeholder').offset();
        $('div#overlay').css('position', 'absolute')
                        .css('left', offset.left + 'px')
                        .css('top', offset.top + 'px');

    but this causes the overlay to sit exactly (or almost exactly, taking subpixel accuracy into account) 16px below the placeholder (below, i.e. overlay_top = placeholder_top + 16px). I'm aware that offset() gives the position relative to the document, and that position: absolute positions relative to the body element, but compensating for the body's offset() doesn't help much (I'm getting an 8px offset, equal to the margins):

        offset.top -= $('body').offset().top;
        offset.left -= $('body').offset().left;

    Compensating for the body margins (in case they differed from what offset() reported) didn't help either, as they were set to 8px. Does somebody know what I'm doing wrong here?

    UPDATE: Take a look here - I'm getting the same result in Firefox 3.6.3 and Opera 10.10.
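
    One hedged suggestion, given the jQuery 1.4 tag: .offset() also works as a setter in 1.4, and it does this compensation internally. That matters because position: absolute is resolved against the nearest positioned ancestor, not necessarily body, which is a classic source of a constant pixel shift like this one:

        // jQuery computes the correct left/top for whatever ancestor
        // the overlay is actually positioned against
        $('div#overlay')
            .css('position', 'absolute')
            .offset($('div#placeholder').offset());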

    Read the article

  • Calculate lookat vector from position and Euler angles

    - by Jaap
    I've implemented an FPS-style camera, with the camera consisting of a position vector and Euler angles pitch and yaw (x and y rotations). After setting up the projection matrix, I then translate to camera coordinates by rotating, then translating to the inverse of the camera position:

        // Load projection matrix
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();

        // Set perspective
        gluPerspective(m_fFOV, m_fWidth/m_fHeight, m_fNear, m_fFar);

        // Load modelview matrix
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        // Position camera
        glRotatef(m_fRotateX, 1.0, 0.0, 0.0);
        glRotatef(m_fRotateY, 0.0, 1.0, 0.0);
        glTranslatef(-m_vPosition.x, -m_vPosition.y, -m_vPosition.z);

    Now I've got a few viewports set up, each with its own camera, and from every camera I render the position of the other cameras (as a simple box). I'd like to also draw the view vector for these cameras, except I haven't a clue how to calculate the lookat vector from the position and Euler angles. I've tried to multiply the original camera vector (0, 0, -1) by a matrix representing the camera rotations, then adding the camera position to the transformed vector, but that doesn't work at all (most probably because I'm way off base):

        vector v1(0, 0, -1);
        matrix m1 = matrix::IDENTITY;
        m1.rotate(m_fRotateX, 0, 0);
        m1.rotate(0, m_fRotateY, 0);

        vector v2 = v1 * m1;
        v2 = v2 + m_vPosition; // add camera position vector

        glBegin(GL_LINES);
        glVertex3fv(m_vPosition);
        glVertex3fv(v2);
        glEnd();

    What I'd like is to draw a line segment from the camera towards the lookat direction. I've looked all over the place for examples of this, but can't seem to find anything. Thanks a lot!
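
    A hedged sketch of the closed form: the modelview applies Rx(pitch) then Ry(yaw) to the world, so the camera's forward direction in world space is the inverse rotation applied to (0, 0, -1), which works out to the expression below (angles converted from glRotatef's degrees; if the line points the wrong way, the sign conventions used when accumulating m_fRotateX/Y from the mouse are the first thing to flip):

        #include <cmath>

        const float DEG2RAD = 3.14159265f / 180.0f;
        float pitch = m_fRotateX * DEG2RAD;
        float yaw   = m_fRotateY * DEG2RAD;

        // forward = Ry(-yaw) * Rx(-pitch) * (0, 0, -1)
        vector forward(std::sin(yaw) * std::cos(pitch),
                       -std::sin(pitch),
                       -std::cos(yaw) * std::cos(pitch));

        vector v2 = m_vPosition + forward * lineLength; // lineLength: assumed scale factor

    The original attempt fails mostly because it applies the camera rotations themselves rather than their inverse, and in the wrong order; the view matrix maps world to eye, while the lookat vector needs the eye-to-world mapping.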

    Read the article

  • iPhone: Strange Movement of Two UIImageView "Sprites"

    - by David Pollak
    I have two UIImageViews moving like sprites on a superview. Each image view moves properly by itself, but when I put both image views on the superview at the same time, their individual movement becomes strangely restricted to two different areas of the screen. They will not touch, even when programmed to the same coordinates. This is my movement code for the first image view:

        - (void)viewDidLoad {
            [super viewDidLoad];
            pos = CGPointMake(14.0, 7.0);
            [NSTimer scheduledTimerWithTimeInterval:0.05 target:self
                selector:@selector(onTimer) userInfo:nil repeats:YES];
        }

        - (void)onTimer {
            pallone.center = CGPointMake(pallone.center.x + pos.x, pallone.center.y + pos.y);
            if (pallone.center.x > 320 || pallone.center.x < 0)
                pos.x = -pos.x;
            if (pallone.center.y > 480 || pallone.center.y < 0)
                pos.y = -pos.y;
        }

    and for the second image view:

        - (IBAction)spara {
            cos = CGPointMake(8.0, 4.0);
            [NSTimer scheduledTimerWithTimeInterval:0.05 target:self
                selector:@selector(inTimer) userInfo:nil repeats:YES];
        }

        - (void)inTimer {
            bomba.center = CGPointMake(bomba.center.x + pos.x, bomba.center.y + pos.y);
            if (bomba.center.x > 50 || bomba.center.x < 0)
                pos.x = -pos.x;
            if (bomba.center.y > 480 || bomba.center.y < 0)
                pos.y = -pos.y;
        }

    What causes this strange behavior? Thanks for your help. I am a newbie.
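
    A hedged reading of the snippet as posted: spara sets the second sprite's velocity in cos, but inTimer moves and bounces bomba using pos, the first sprite's vector. Both timers then negate the same shared velocity, which produces exactly this kind of mutual confinement. Using the second sprite's own vector keeps them independent:

        - (void)inTimer {
            bomba.center = CGPointMake(bomba.center.x + cos.x, bomba.center.y + cos.y);
            if (bomba.center.x > 50 || bomba.center.x < 0)
                cos.x = -cos.x;
            if (bomba.center.y > 480 || bomba.center.y < 0)
                cos.y = -cos.y;
        }

    Renaming cos would also be wise; it shadows the C math function and invites exactly this sort of slip.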

    Read the article

  • Send Click Message to another application process

    - by Nazar
    Hi guys, I have a scenario where I need to send click events to an independent application. I started that application with the following code:

        private Process app;

        app = new Process();
        app.StartInfo.FileName = app_path;
        app.StartInfo.WorkingDirectory = dir_path;
        app.Start();

    Now I want to send mouse click messages to that application; I have specific coordinates relative to the application window. How can I do it using Windows messaging or some other technique? I used:

        [DllImport("user32.dll")]
        private static extern void mouse_event(UInt32 dwFlags, UInt32 dx, UInt32 dy,
                                               UInt32 dwData, IntPtr dwExtraInfo);

    It works well but causes the pointer to move as well, so it does not fit my need. Then I used:

        [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = false)]
        static extern IntPtr SendMessage(IntPtr hWnd, int Msg, int wParam, int lParam);

    It works well for minimize/maximize, but does not work for mouse events. The message codes I am using for the mouse events are:

        WM_LBUTTONDOWN   = 0x201, // left mouse button down
        WM_LBUTTONUP     = 0x202, // left mouse button up
        WM_LBUTTONDBLCLK = 0x203, // left mouse button double-click
        WM_RBUTTONDOWN   = 0x204, // right mouse button down
        WM_RBUTTONUP     = 0x205, // right mouse button up
        WM_RBUTTONDBLCLK = 0x206, // right mouse button double-click

    Thanks for the help in advance; I'm waiting for feedback.
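
    One commonly missing piece (hedged: whether a given application honors synthesized clicks at all varies): the WM_LBUTTON* messages expect the click's client-area coordinates packed into lParam, low word x and high word y, and they usually must go to the exact child window under that point rather than the top-level handle:

        // pack client coordinates the way WM_LBUTTON* expects them
        static int MakeLParam(int x, int y)
        {
            return (y << 16) | (x & 0xFFFF);
        }

        const int WM_LBUTTONDOWN = 0x0201;
        const int WM_LBUTTONUP = 0x0202;
        const int MK_LBUTTON = 0x0001;

        IntPtr hWnd = app.MainWindowHandle; // or the child window under the point
        SendMessage(hWnd, WM_LBUTTONDOWN, MK_LBUTTON, MakeLParam(x, y));
        SendMessage(hWnd, WM_LBUTTONUP, 0, MakeLParam(x, y));

    Calling app.WaitForInputIdle() after Start() also helps ensure MainWindowHandle is valid before anything is sent.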

    Read the article

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport that matches the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences. Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting? For this (magnification) case, the DX9 code is using this (MinFilter,MipFilter set similarly): Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear); and the GDI+ path is using: g.InterpolationMode = InterpolationMode.Bilinear; I thought that "Bilinear Interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference - which makes sense given the description of "added prefiltering for shrinking") Followup Question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!

    Read the article

  • NSOpenGLFullScreen and SetSystemUIMode freeze bug!?

    - by Mattias
    Hi! I have a really strange problem which is perfectly reproducible using sample code! If I use Apple's NSOpenGLFullScreen sample, I can click a button to enter fullscreen OpenGL mode. However, if I click the mouse in the area where the menu bar would be if I were running in windowed mode, the entire program freezes, because I really do activate the menu choice behind the OpenGL screen, so to speak. The solution I found after some googling is to use SetSystemUIMode to hide the menu bar. I also want the application to start in fullscreen, by adding a call to EnterFullScreen after initialization. Entering fullscreen works perfectly, BUT - if I add the call to SetSystemUIMode I get a really strange error! The entire screen hangs, the animation stops, and no mouse coordinates seem to be reported. If I then exit fullscreen mode and press the fullscreen button again, everything works and the menu bar is gone. What could be wrong here? I mean, it obviously works to remove the menu bar in that manner, and it obviously works to enter fullscreen mode like that (using Cocoa), but why doesn't the combination work!? Please help :)

    Sincerely, / Mattias
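
    For reference, the call in question (a sketch: SetSystemUIMode and its constants come from the Carbon HIToolbox framework and are real, but where in the fullscreen transition it is safe to call is exactly the open question here):

        #import <Carbon/Carbon.h>

        // hide the menu bar, but let it reappear when the mouse hits the top edge
        SetSystemUIMode(kUIModeAllHidden, kUIOptionAutoShowMenuBar);

    One thing worth trying (an assumption, not verified): make this call before capturing the display and creating the fullscreen context, rather than after, so the window server is not mid-transition when the UI mode changes.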

    Read the article

  • Suggestions on implementing an iPad magazine app

    - by alku83
    I've been tasked with creating a magazine style app for iPad. Ideally it would look a little something like the Zinio app: http://www.zinio.com/ipad/ . This app is effectively a shell, allowing you to sample magazines (eg. read the first few pages) and select them for download. The magazines appear to have some sort of overlay, allowing you to interact with some things (eg. tap to watch a video). A few questions that come to mind: How would I go about delivering content to the user? In-app purchases aren't really an option, as some content will need to be delivered for free. Is it possible to download a package and make this available within the application? What format would be suitable for displaying the magazine? Sequential images, PDF, ebook? I'll need to have some form of interactivity. I guess I could have some form of lookup table, which would include information such as if the user taps on this page, within these coordinates, then launch this item. Anyone dealt with any similar issues?

    Read the article

  • Vertex Buffers in opengl

    - by JB
    I'm making a small 3D graphics game/demo for personal learning. I know D3D9 and quite a bit about D3D11, but little about OpenGL at the moment, so I'm intending to abstract out the actual rendering so that my scene graph and everything "above" it needs to know little about how to actually draw the graphics. I intend to make it work with D3D9, then add D3D11 support, and finally OpenGL support, just as a learning exercise about 3D graphics and abstraction. I don't know much about OpenGL at this point, though, and don't want my abstract interface to expose anything that isn't simple to implement in OpenGL. Specifically I'm looking at vertex buffers. In D3D they are essentially an array of structures, but looking at the OpenGL interface, the equivalent seems to be vertex arrays. However these seem to be organised rather differently: you need a separate array for vertices, one for normals, one for texture coordinates, and so on, set with glVertexPointer, glTexCoordPointer, etc. I was hoping to implement a VertexBuffer interface much like the DirectX one, but it looks like in D3D you have an array of structures and in OpenGL you need a separate array for each element, which makes finding an efficient common abstraction quite hard. Is there any way to use OpenGL in a similar way to DirectX? Or any suggestions on how to come up with a higher-level abstraction that will work efficiently with both systems?
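
    The good news, sketched for the fixed-function GL of this era: OpenGL's vertex arrays take a stride argument, so a single array of structs works much like a D3D vertex buffer; each gl*Pointer call simply points at a different field of the same interleaved array:

        struct Vertex {
            float x, y, z;    // position
            float nx, ny, nz; // normal
            float u, v;       // texture coordinates
        };

        Vertex* verts = /* one array of structs, as in D3D */;

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &verts[0].x);
        glNormalPointer(GL_FLOAT, sizeof(Vertex), &verts[0].nx);
        glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &verts[0].u);

        glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    Vertex buffer objects (GL_ARRAY_BUFFER) accept the same layout, with the pointer arguments becoming byte offsets into the buffer, so a VertexBuffer abstraction over D3D9 and OpenGL can share one array-of-structs representation.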

    Read the article

  • Assemble an image browser side with JavaScript or Flash?

    - by Kris Walker
    Would it be possible to assemble an image on the browser by 'concatenating' other downloaded images together? The use case is this. The page will display 36 different tiles (small images). The user should be able to arrange those tiles into a 6 x 6 grid and save the resulting grid to disk as an image. The best solution would be to do it all in the browser without Flash. The next best solution would be to allow the user to create the grid in the browser with simple JavaScript drag and drop functionality and then send the coordinates to the server for image processing. The last solution would be to do it all in the browser with Flash. Is it even possible for Flash to create an image and then allow the user to save it from the browser? I am familiar with the Pixastic JavaScript library ( http://www.pixastic.com/ ), but it relies on getting image data to and from a canvas element which is not very well supported. What if I send the tile images to the browser as base64 encoded strings? Could I use JavaScript to create the 6 x 6 grid image? And if so, is there some way of allowing the user to get it onto disk without relying on the canvas element?
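
    Canvas caveats aside (getting data back out was this era's sticking point), the compositing itself is small; a sketch assuming square tiles of an agreed size and an images array kept in grid order by the drag-and-drop code:

        var TILE = 50; // assumed tile edge in pixels
        var canvas = document.createElement('canvas');
        canvas.width = canvas.height = TILE * 6;
        var ctx = canvas.getContext('2d');

        // images[row * 6 + col] is the Image the user dropped at that grid slot
        for (var row = 0; row < 6; row++) {
            for (var col = 0; col < 6; col++) {
                ctx.drawImage(images[row * 6 + col], col * TILE, row * TILE);
            }
        }

        // browsers of the time could not force a download from script,
        // so open the PNG and let the user save it manually
        window.open(canvas.toDataURL('image/png'));

    The drag-and-drop plus server-side assembly route avoids the canvas support question entirely, at the cost of a round trip.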

    Read the article

  • Getting Depth Value on Kinect SDK 1.6

    - by AlexanderPD
    This is my first try with Kinect and the Kinect SDK, so I'm having a lot of "newbie issues". :) My goal is to point my mouse at the Kinect's standard video output and get the depth value there. I already have both the normal video and depth video outputs working, by using the two "Color Basic-WPF" and "Depth Basic-WPF" samples, and handling mouse events or position is not a problem. In fact, I already get a depth value, but this value is always highly imprecise: it jumps from 500 to 4000 just by moving to the next pixel on a flat surface. So I'm pretty sure I'm reading the depth value in the wrong way. This is how I read it:

        short debugValue = depthPixels[x*y].Depth;
        debug.Text = "X = " + x + ", Y = " + y + ", value = " + debugValue.ToString();

    I know it's pretty out of context; this little piece of code is inside the same SensorDepthFrameReady function in "Depth Basic-WPF"! x and y are the mouse coordinates, and depthPixels is of type DepthImagePixel[], a temporary array filled by the depthFrame.CopyDepthImagePixelDataTo(this.depthPixels) instruction. The depth frame is filled here:

        DepthImageFrame depthFrame = e.OpenDepthImageFrame()

    The e comes from here:

        private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)

    and this last one is registered here:

        this.sensor.DepthFrameReady += this.SensorDepthFrameReady;

    How must I handle the depth value I get? I know the value should be between 800 and 4000, but I get values between about 500 and about 8000. I've already googled a lot (here on SO too) and I still can't figure out whether the depth value is 11 or 13 bits. The SDK examples shrink this value to 8 bits, which creates even more confusion in my head. :(
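
    The indexing is the likely culprit: a depth frame is stored row-major, so the pixel under the mouse lives at y * frameWidth + x, not x * y. A sketch (the width is taken from the frame rather than hard-coded):

        // row-major indexing into the depth pixel array
        int width = depthFrame.Width; // 640 for the default depth format
        int index = y * width + x;
        short depthInMillimeters = depthPixels[index].Depth;

    With the DepthImagePixel type in SDK 1.6, the Depth property is already separated from the player-index bits, so no shifting should be needed; readings outside the 800-4000 mm range generally mean too-near, too-far, or unknown rather than real distances.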

    Read the article

  • Confused about UIView frame property

    - by slowfungus
    I'm building a prototype iPad app that draws diagrams. I have the following view hierarchy:

        UIView
          UIScrollView
            DiagramView : UIView
          TabBar
          NavigationBar

    and a UIViewController subclass holding all of that together. Before drawing the diagram the first time, I calculate the dimensions of the diagram, set the DiagramView frame to that size, and set the content size of the scroll view as well:

        - (void)recalculateBounds {
            [renderer diagram:diagram shouldDraw:NO];
            SQXRect diagramRect = SQXMakeRect(0.0, 0.0,
                [diagram bounds].size.width, [diagram bounds].size.height);
            self.frame = diagramRect;
            [(UIScrollView*)[self superview] setContentSize:diagramRect.size];
        }

    I should disclose that the frame is being set to about 1500 x 3500, which I know is ridiculous; I just want to focus on some other parts of the app before I get into optimizing the render code. This works beautifully, except that the rect being passed to drawRect: is not the size that I set, and my drawing is getting clipped at the bottom. It's close to the size I set, but bigger in width and shorter in height. Also of note: if I force the frame to be much bigger than what I know the diagram needs, then the drawRect: rect is big enough and no clipping occurs. Of course this has me wondering if the frame size needs to take into account some other screen real estate like the toolbars, but my reading of the docs tells me the frame is in superview coordinates, which would be the scroll view, so I reckon I shouldn't need to worry about such things. Any idea what is causing this discrepancy?
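
    One thing to check first (hedged, but this part is standard UIKit behavior): the rect passed to drawRect: is the dirty region that needs redrawing, not the view's size, so it routinely differs from the frame just set. Basing layout on the view's own geometry avoids the mismatch:

        - (void)drawRect:(CGRect)rect {
            // rect is only the invalid region; size the drawing against bounds
            CGRect full = self.bounds;
            // ... draw the diagram relative to 'full' ...
        }

    If the drawing is still clipped after that, it's worth confirming that nothing (autoresizing masks on the DiagramView, for instance) resizes the frame again after recalculateBounds runs.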

    Read the article

  • Raphael.js: Adding a new custom element

    - by Claudia
    I would like to create a custom Raphael element, with custom properties and functions. This object must also contain predefined Raphael objects: for example, I would have a Node class that would contain a circle with text and some other elements inside it. The problem is adding this new object to a set. These demands are needed because non-Raphael objects cannot be added to sets; as a result, custom objects that contain Raphael objects cannot be used. The code would look like this:

        var Node = function (paper) {
            // coordinates & dimensions
            this.x = 0;
            this.y = 0;
            this.radius = 0;

            this.draw = function () {
                this.entireSet = paper.set();

                var circle = paper.circle(this.x, this.y, this.radius);
                this.circleObj = circle;
                this.entireSet.push(circle);

                var text = paper.text(this.x, this.y, this.text);
                this.entireSet.push(text);
            }
            // other functions
        }

        var NodeList = function (paper) {
            this.nodes = paper.set();

            this.populateList = function () {
                // in order to add a node to the set,
                // the object must be a Raphael object;
                // otherwise the set will have no elements
                this.nodes.push(// new node)
            }
            this.nextNode = function () {
                // ...
            }
            this.previousNode = function () {
                // ...
            }
        }
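
    One workaround sketch (a design choice of this sketch, not the library's prescribed approach): keep the Node wrappers in a plain JavaScript array and delegate Raphael-level operations to each node's entireSet, which is a genuine Raphael set. attrAll below is an illustrative helper, not a Raphael API:

        var NodeList = function (paper) {
            this.nodes = []; // plain array: accepts any object, including wrappers

            this.populateList = function (node) {
                node.draw();           // builds node.entireSet
                this.nodes.push(node);
            };

            // apply a Raphael attr() to every node by delegating to its set
            this.attrAll = function (attrs) {
                for (var i = 0; i < this.nodes.length; i++) {
                    this.nodes[i].entireSet.attr(attrs);
                }
            };
        };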

    Read the article

  • How can I tell when something outside my UITableViewCell has been touched?

    - by kk6yb
    Similar to this question, I have a custom subclass of UITableViewCell that has a UITextField. It's working fine except that the keyboard doesn't go away when the user touches a different table view cell or something outside the table. I'm trying to figure out the best place to find out when something outside the cell is touched; then I could call resignFirstResponder on the text field. If the UITableViewCell could receive touch events for touches outside its view, it could just resignFirstResponder itself, but I don't see any way to get those events in the cell. The solution I'm considering is to add a touchesBegan:withEvent: method to the view controller. There I could send resignFirstResponder to all visible table view cells except the one the touch was in (letting that one get the touch event and handle it itself). Maybe something like this pseudocode:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            CGPoint touchPoint = // TBD - may need to translate to the cell's coordinates
            for (UITableViewCell* aCell in [theTableView visibleCells]) {
                if (![aCell pointInside:touchPoint withEvent:event]) {
                    [aCell resignFirstResponder];
                }
            }
        }

    I'm not sure this is the best way to go about it. There doesn't seem to be any way for the table view cell itself to receive event notifications for events outside its view.
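
    A simpler route that is often enough here (endEditing: is a real UIView method): ask the view hierarchy to resign whatever responder is active, from the two places where "the user touched something else" already surfaces:

        // another row was touched
        - (void)tableView:(UITableView *)tableView
                didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            [self.view endEditing:YES]; // resigns the active text field, whichever cell owns it
        }

        // the user started scrolling the table (UITableView is a UIScrollView)
        - (void)scrollViewWillBeginDragging:(UIScrollView *)scrollView {
            [self.view endEditing:YES];
        }

    This avoids hit-testing cells manually, and it works even though the text field, not the cell, is the first responder; sending resignFirstResponder to the cells themselves would do nothing unless a cell itself had become first responder.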

    Read the article

  • What Javascript graphing package will let me plot points against a user-selected coordinate system?

    - by wes
    My customer has some specific requirements for a graph to show in our web app. We use HighCharts elsewhere in the app for more traditional graphing, but it doesn't seem to work for this situation. Their requirements: Allow the user to select a background image, set the scale and origin of the coordinate system. We'll graph our points against the user-defined coordinates. Points can be color coded Mouse-over boxes show more detail about the points Support for zooming and panning, scaling the background appropriately Less importantly: Support for drawing vectors off the points Some of this seems basic, but looking around at different graph packages, I was unable to find any with an example of this kind of usage. I've entertained the thought of just hacking it together in canvas myself, but I've never worked with canvas before so I don't think it would be cost effective. The basics of plotting points with a scaled coordinate system against an image background wouldn't be too hard, but the mouse-over details, zooming and panning sound much more daunting to me. More info: Right now we use jQuery, HighCharts, and ExtJS for our app. We tried flot in the past but switched to HighCharts after flot didn't meet our needs.

    Read the article

  • CIE XYZ colorspace: do I have RGBA or XYZA?

    - by Tronic
    I plan to write a painting program based on linear combinations of xy plane points (0,1), (1,0) and (0,0). Such system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points being referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them to the picture myself). My pixel format stores the intensity of each of those three components the same way as RGB does, together with alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code. What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA. RGBA will get confused with sRGB + alpha (which I also need to use in the same program). Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

    Read the article
