Search Results

Search found 3789 results on 152 pages for 'pixel tracking'.


  • GLSL: How to get pixel x,y,z world position?

    - by Rookie
    I want to adjust the colors depending on which x,y,z position they are at in the world. I tried this in my fragment shader:

      vec4 pos = vec4(gl_FragCoord); // get pixel position

    but it seems that the z-coordinate always points towards my camera. How do I make the coordinates independent of my camera position/angle? Edit: if it matters, here's my vertex shader:

      gl_Position = ftransform();

    Edit 2: changed the title; I want world coordinates, not screen coordinates!
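
    A common fix (a minimal GLSL sketch, not the poster's code) is to pass the world-space position out of the vertex shader through a varying, since gl_FragCoord is in window space and always follows the camera. The modelMatrix uniform is an assumption here: the application must supply it, because fixed-function GL only exposes the combined modelview matrix.

      // Vertex shader: forward the world-space position to the fragment stage.
      uniform mat4 modelMatrix;   // hypothetical uniform, set by the application
      varying vec3 worldPos;
      void main() {
          worldPos = (modelMatrix * gl_Vertex).xyz;
          gl_Position = ftransform();
      }

      // Fragment shader: colour by world position, independent of the camera.
      varying vec3 worldPos;
      void main() {
          gl_FragColor = vec4(fract(worldPos), 1.0);
      }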

    Read the article

  • Entity Framework - Self Tracking Objects - how to reset client side?

    - by David
    I am using WCF with self-tracking Entity Framework objects. On the client side I have bound an entity to an edit form (which has multiple textboxes and comboboxes). After the user hits Save, the entity is sent through WCF to the server, where the WCF service attempts to save it. If there is a failure (say, a network failure), I need to reset the current entity back to its original values. How best can I do this client side? I recognize that self-tracking objects expose an OriginalValues property, but that collection seems to have Count = 0, so I am not sure how to get the original values. Thanks.

    Read the article

  • Search BitmapData object for matching pixel values from another Bitmap.

    - by Cos
    Using ActionScript 3, is there a way to search one bitmap for the coordinates of matching pixels from another bitmap? http://dl.dropbox.com/u/1914/wired.png Somehow you would have to loop through the bigger bitmap to find the pixel range that matches and return those coordinates. For example, the bitmap with the "E" is 250 pixels over and 14 pixels down in the bigger bitmap. I haven't been able to come up with the solution on my own. Thanks.
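
    For future readers, here is the brute-force idea sketched in Python with Pillow (my sketch, not the asker's code; in AS3 the inner comparison would use BitmapData.getPixel32, and the file names are hypothetical). For the example above, find_subimage("wired.png", "letter_e.png") would return (250, 14).

      # Brute-force subimage search: slide the small bitmap over the big one and
      # compare pixel windows; returns the (x, y) of the first exact match.
      from PIL import Image

      def find_subimage(haystack_path, needle_path):
          big = Image.open(haystack_path).convert("RGB")
          small = Image.open(needle_path).convert("RGB")
          bw, bh = big.size
          sw, sh = small.size
          bpix, spix = big.load(), small.load()
          corner = spix[0, 0]
          for y in range(bh - sh + 1):
              for x in range(bw - sw + 1):
                  if bpix[x, y] != corner:
                      continue  # cheap rejection before the full window compare
                  if all(bpix[x + i, y + j] == spix[i, j]
                         for j in range(sh) for i in range(sw)):
                      return (x, y)  # top-left corner of the match
          return None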

    Read the article

  • Which tool do you use for tracking daily/weekly progress on your long-term goals?

    - by NoCatharsis
    I read through this post and this post to get an idea of short-term task management applications out there. However, now I'm having a problem tracking my longer-term goals pertaining to things like career, education, and exercise. I just discovered Disciplanner which might meet my needs but haven't had time to set it up just yet. Not to mention, it took me about 2 weeks of playing with every online to-do list app I could find before I came to a decision on which to use. I'm very particular about my tasks and goal-setting, so I would prefer to get some advice from other superusers before diving into the web again to find another life-management website.

    Read the article

  • Is there a point to using theft tracking software like Prey on my laptop, if you have login security?

    - by Reckage
    Hey, so I have a Thinkpad that I use in a variety of places (coffee shops, work, etc.). I don't generally abandon it, but I figure there's a chance I might get careless and it gets stolen at some point. I was thinking of installing something like Prey (http://preyproject.com/), but my OS installs are password-secured, and on top of that, I have a fingerprint reader that you need just to get through the BIOS. So: is there actually any benefit to setting up software that tracks the laptop's whereabouts? I imagine that either:

    1. The laptop won't boot or log in, because the thief doesn't get past the security; or
    2. if the thief gets around said security somehow, presumably they've either split the laptop for parts, or bypassed the BIOS security, gotten stuck on the Windows security, and formatted it.

    Given that it's highly unlikely that the thief would go to the trouble, what's the utility in installing laptop tracking software like Prey?

    Read the article

  • Most efficient way to detect whether black (or any colored) pixels exist in an image file?

    - by Zando
    What's the best and most flexible algorithm to detect any black (or colored) pixel in a given image file? Say I'm given an image file that could have, say, a blue background. Any non-blue pixel, including a white one, is counted as a "mark". The function returns true if there are X pixels that deviate from the background beyond a certain threshold. I thought it'd be fastest to simply iterate through every pixel and check whether its color matches the last one. But if pixel (0,0) is the deviant one and every other pixel is the same color (and I want to allow at least a couple of deviant pixels before considering an image "marked"), this won't work, or won't be terribly efficient.
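
    One robust variant, sketched in Python with Pillow (my sketch; the tolerance and mark-count thresholds are invented parameters to tune): take the dominant colour as the background, so a single deviant pixel at (0,0) cannot skew the baseline, then count pixels outside the tolerance and stop early once enough are found.

      # Flag an image as "marked" once enough pixels deviate from the dominant
      # background colour.
      from collections import Counter
      from PIL import Image

      def is_marked(path, tolerance=32, min_marks=3):
          img = Image.open(path).convert("RGB")
          pixels = list(img.getdata())
          background, _ = Counter(pixels).most_common(1)[0]  # dominant colour
          marks = 0
          for p in pixels:
              if any(abs(c - b) > tolerance for c, b in zip(p, background)):
                  marks += 1
                  if marks >= min_marks:
                      return True  # early exit: enough deviant pixels found
          return False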

    Read the article

  • How does the Windows 7 taskbar "color hot-tracking" feature calculate the colour to use?

    - by theyetiman
    This has intrigued me for quite some time. Does anyone know the algorithm Windows 7 Aero uses to determine the colour to use as the hot-tracking hover highlight on taskbar buttons for currently-running apps? It is definitely based on the icon of the app, but I can't see a specific pattern of where it's getting the colour value from. It doesn't seem to be any of the following:

    • An average colour value from the entire icon, otherwise you would get brown all the time with multi-coloured icons like Chrome's.
    • The colour used the most in the image, otherwise you'd get yellow for the SQL Server Management Studio icon (6th from left). Also, the Chrome icon uses red, green and yellow in roughly equal measure.
    • A colour located at certain pixel coordinates within the icon, because Chrome is red (indicating the top of the icon) and Notepad++ (2nd from right) is green (indicating the bottom of the icon).

    I asked this question on ux.stackexchange.com and it got closed as off-topic, but someone answered with the following quote from Raymond Chen's MSDN blog: "Some people ask how it's done. It's really nothing special. The code just looks for the predominant color in the icon. (And, since visual designers are sticklers for this sort of thing, black, white, and shades of gray are not considered 'colors' for the purpose of this calculation.)" However, I wasn't really satisfied with that answer because it doesn't explain how the "predominant" colour is calculated. Surely on the SQL Server Management Studio icon the predominant colour, to my eyes at least, is yellow, yet the highlight is green. I want to know, specifically, what the algorithm is.
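
    The real algorithm is undocumented beyond Chen's description, so the following is only a guess at what "predominant colour, ignoring black/white/gray" could mean, sketched in Python with Pillow: bucket the icon's opaque pixels by hue, drop low-saturation and very dark pixels (the "not a colour" cases), and return a saturated colour from the busiest hue bucket. All cutoffs here are invented. A hue histogram like this would explain green beating yellow on an icon where yellow covers more area but is spread across saturation levels; whether Windows does anything similar is unknown.

      # Guess at a "predominant colour" calculation that ignores gray/black/white.
      import colorsys
      from collections import Counter
      from PIL import Image

      def predominant_color(icon_path, sat_cutoff=0.25, buckets=12):
          img = Image.open(icon_path).convert("RGBA")
          counts = Counter()
          for r, g, b, a in img.getdata():
              if a < 128:
                  continue  # skip transparent pixels
              h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
              if s < sat_cutoff or v < 0.15:
                  continue  # gray, black and near-white are "not colours"
              counts[int(h * buckets) % buckets] += 1
          if not counts:
              return None
          hue = (counts.most_common(1)[0][0] + 0.5) / buckets  # bucket centre
          r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
          return int(r * 255), int(g * 255), int(b * 255)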

    Read the article

  • Is there any algorithm for finding LINES by PIXEL COLORS in a picture?

    - by Ole Jak
    So I have an image like this, and I want to get something like this (I haven't drawn all the lines I want, but I hope you get my idea). I need an algorithm for finding all straight lines in it by just reading the colors of pixels. No hard math, no Haar, no Hough; some algorithm which would be based on the pixels' colors. I want to give the algorithm parameters like minimum line length and maximum line distortion, and get back the start and end points of the lines, in pixel coordinates relative to the picture. So I need an algorithm for finding straight lines of different colors in a picture, based on the idea of an image of many colors and lines of constant colors. Yes, such an algorithm will not work for images with lots of shadows and lights, but it will probably be fast (I hope so). Is there any such algorithm?
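
    As a starting point, here is the simplest colour-run version of that idea, sketched in Python with Pillow (my sketch: it honours the minimum-length parameter but leaves out distortion, and only finds horizontal runs; vertical and diagonal lines would be found the same way by stepping along other (dx, dy) direction vectors).

      # Naive colour-run detector: walk each row, collect maximal runs of
      # near-constant colour, keep the runs longer than min_len.
      from PIL import Image

      def close(a, b, tol):
          return all(abs(x - y) <= tol for x, y in zip(a, b))

      def horizontal_lines(path, min_len=30, tol=20):
          img = Image.open(path).convert("RGB")
          w, h = img.size
          pix = img.load()
          lines = []
          for y in range(h):
              x0 = 0  # start of the current run
              for x in range(1, w + 1):
                  if x == w or not close(pix[x, y], pix[x0, y], tol):
                      if x - x0 >= min_len:
                          lines.append(((x0, y), (x - 1, y)))  # (start, end)
                      x0 = x
          return lines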

    Read the article

  • How do you draw a line on a canvas in WPF that is 1 pixel thick?

    - by xarzu
    The method for drawing a line on a canvas in WPF that uses the Line class actually draws a line that is two pixels thick:

      Line myLine = new Line();
      myLine.Stroke = System.Windows.Media.Brushes.Black;
      myLine.X1 = 100;
      myLine.X2 = 140; // 150 too far
      myLine.Y1 = 200;
      myLine.Y2 = 200;
      myLine.StrokeThickness = 1;
      graphSurface.Children.Add(myLine);

    Microsoft might have decided to set a standard for line thickness where the minimum is 2 pixels when you set StrokeThickness to 1, but when you already have rectangles drawn in XAML, and even error glyphs using Wingdings, it is an obvious mismatch. How do you draw a line that is truly 1 pixel thick?
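
    For what it's worth, the usual suspects with blurry 1px WPF lines are anti-aliasing and sub-pixel placement rather than a hard 2px minimum. A hedged C# sketch of the two settings commonly tried (not guaranteed to be the fix in every layout):

      // Snap the element to device pixels and disable anti-aliasing for it,
      // so a 1px stroke is not smeared across two device pixels.
      myLine.SnapsToDevicePixels = true;
      RenderOptions.SetEdgeMode(myLine, EdgeMode.Aliased);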

    Read the article

  • Getting pixel averages of a vector sitting atop a bitmap...

    - by user346511
    I'm currently involved in a hardware project where I am mapping triangular-shaped LEDs to traditional bitmap images. I'd like to overlay a triangle vector onto an image and get the average pixel data within the bounds of that vector. However, I'm unfamiliar with the math needed to calculate this. Does anyone have an algorithm, or a link that could send me in the right direction? (I tagged this as Python, which is preferred, but I'd be happy with the general algorithm!) I've created a basic image of what I'm trying to capture here: http://imgur.com/Isjip.gif
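
    Since Python is preferred, a minimal sketch of the standard approach: scan the triangle's bounding box and keep the pixels whose centres lie inside the triangle, using the sign of the cross product against each edge (the usual point-in-triangle test), then average what remains. The triangle is given as three (x, y) vertices; accepting all-non-negative or all-non-positive signs makes the test winding-independent.

      # Average the RGB values of the pixels inside a triangle.
      from PIL import Image

      def edge(a, b, p):  # > 0 if p is left of the directed edge a -> b
          return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

      def average_in_triangle(img_path, tri):
          img = Image.open(img_path).convert("RGB")
          pix = img.load()
          xs = [v[0] for v in tri]
          ys = [v[1] for v in tri]
          total = [0, 0, 0]
          n = 0
          for y in range(int(min(ys)), int(max(ys)) + 1):
              for x in range(int(min(xs)), int(max(xs)) + 1):
                  p = (x + 0.5, y + 0.5)  # test against the pixel centre
                  signs = [edge(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
                  if all(s >= 0 for s in signs) or all(s <= 0 for s in signs):
                      for c in range(3):
                          total[c] += pix[x, y][c]
                      n += 1
          return tuple(t / n for t in total) if n else None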

    Read the article

  • How would you build a "pixel perfect" GUI on Linux?

    - by splicer
    I'd like to build a GUI where every single pixel is under my control (i.e. not using the standard widgets that something like GTK+ provides). Renoise is a good example of what I'm looking to produce. Is getting down to the Xlib or XCB level the best way to go, or is it possible to achieve this with higher-level frameworks like GTK+ (maybe even PyGTK)? Should I be looking at Cairo for the drawing? I'd like to work in Python or Ruby if possible, but C is fine too. Thanks!
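
    To illustrate the higher-level route (a sketch of mine, using the PyGTK 2 API current at the time of the question): a GTK DrawingArea inside a plain window gives you windowing and input from the toolkit while you own every pixel through Cairo.

      # Owner-drawn surface with PyGTK + Cairo: no stock widgets, just pixels.
      import gtk

      def on_expose(widget, event):
          cr = widget.window.cairo_create()  # Cairo context for this widget
          cr.set_source_rgb(0.1, 0.1, 0.1)   # dark background
          cr.paint()
          cr.set_source_rgb(0.0, 0.8, 0.2)
          cr.rectangle(10.5, 10.5, 100, 20)  # .5 offsets land on pixel centres
          cr.set_line_width(1)
          cr.stroke()                        # crisp 1px outline

      win = gtk.Window()
      win.set_default_size(320, 240)
      area = gtk.DrawingArea()
      area.connect("expose-event", on_expose)
      win.add(area)
      win.connect("destroy", gtk.main_quit)
      win.show_all()
      gtk.main()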

    Read the article

  • UIImageView is clipping a pixel of the bottom of my UIImage...?

    - by akaii
    I'm not sure what might be causing this, but UIImageView occasionally clips off a pixel or two from the bottom of some square/rectangular UIImages I'm using as subviews for UITableViewCells. These UIImageViews are well within the borders of the cell, so it shouldn't be due to clipsToBounds. There seems to be no consistency or pattern to which images are clipped, nor when they're clipped, other than that it only happens to (or is only noticeable in) square/rectangular icons, and only ones that are parented to UITableViewCells (or their subclasses). I'm having trouble reproducing the problem consistently, which is why I haven't posted any code this time. Has anyone encountered something similar to this before? I've encountered a similar bug that involved floating-point values for origin/size being interpreted weirdly, but that doesn't seem to be the cause of this particular problem. I don't need a specific solution at this point; I'm just making sure I haven't missed any well-known bugs or documented problems involving UIImageView.

    Read the article

  • SD card won't appear after upgrade to 13.10

    - by Pixel
    My SD card won't mount when I put it into my laptop; everything was fine before the upgrade. The information about the SD card appears just fine when I type "sudo fdisk -l"; it just says that it doesn't have a valid partition table. When I type "sudo blkid" I get the following answer:

      /dev/sda1: UUID="CCA8-9030" TYPE="vfat"
      /dev/sda2: UUID="8a1d135b-384b-432d-b608-64dcf09ada24" TYPE="ext2"
      /dev/sda3: UUID="7s6PtU-kj2Z-N8XD-0mzl-840i-i3HG-enlbAf" TYPE="LVM2_member"
      /dev/sr0: LABEL="Bamboo CD" TYPE="iso9660"
      /dev/mapper/ubuntu--vg-root: UUID="c9b521c8-7c9f-493b-95c8-a7d79c465318" TYPE="ext4"
      /dev/mapper/ubuntu--vg-swap_1: UUID="7f155ab6-e1b9-485b-a2bc-443c0622284d" TYPE="swap"

    When I use lsusb:

      Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 001 Device 003: ID 13d3:5710 IMC Networks UVC VGA Webcam
      Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
      Bus 003 Device 002: ID 046d:c52f Logitech, Inc. Unifying Receiver
      Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    I've read the other threads and I couldn't really find any good answers. My card reader was compatible with the previous version of Ubuntu, so technically it should still be compatible with the next version. Also, I can't erase what's on the card; it contains important data which I need... :/ If you need any more information, just ask; I'll give it as soon as I can. Pixel.

    Read the article

  • Is Google tracking our web history even when we do not visit Google or its affiliate websites?

    - by Anoyon-12
    I have recently updated to Google Chrome 29.0.1547.76, and when I click the new tab button there is a new homepage now. My current settings forbid third-party cookies from being set, and I clear all of my browsing data every time I close the browser, or whenever it's been too long since I last did. There is a help dialog that appears the first time you open the new tab page. What I did was clear all my browsing data (from the beginning of time) and then open a new tab again; the help dialog appeared. I did the same a third time and the same thing happened. So for the fourth time I cleared all my browsing data, clicked open a new tab, navigated to chrome://settings/cookies, and there were Google cookies already set. So does this mean Google is tracking our web history just for using Chrome? I know it's not illegal, because these cookies only appear when you open a new tab in Google Chrome 29.0.1547.76; maybe that was the reason Google redesigned the entire new-tab page. By doing this, Google is forcing us to allow them to track us. I don't want to set another page as my new-tab page; I just want the old one. Google has a long history of invading privacy without users' consent; there was that Safari incident, which I am sure you people remember. So can anyone tell me about this issue? I may be wrong, so please explain.

    Read the article

  • JavaScript and webshop tracking/affiliates across websites: how to do it?

    - by H4mm3rHead
    Hi, I have a small front end to a webshop. For every customer who goes through my website and buys an item from the webshop, I get back 5% of the amount. I need a way of tracking the customers I forward from my website to the other webshop, and then to get the webshop to reply to me when the purchase has been made. In my webshop I have made a small page, collect.aspx, that requests and saves the values passed in the querystring, something like this pseudocode:

      string orderid = Request["orderid"];
      string amount = Request["amount"];
      // ..save to database

    On the webshop I forward customers to, I get to insert a JavaScript snippet on the last page in the purchase flow. I have tried a lot of things, but it seems that the only thing that works is to fool the browser into thinking I'm referencing a JavaScript file, like this:

      <script type="text/javascript" src="http://domain.com/mypage.aspx?orderid=4&amount=45"></script>

    I saw how other trackers did their bit, and this seems to be the general way of doing it. With this script, however, I log all the orders; I only want to log those that belong to me, i.e. those customers who entered through my website. Here is my big problem: how to do this? I added a cookie when the user opens my page, and I want to check for this cookie again when the purchase page makes the callback. It seems that I can't get the cookie from the browser when it makes the <script> call. This is really bugging me now. Could anyone please tell me how this tracking is generally done? And what am I missing in regards to this cookie thing? All ideas on how to do this are very welcome.

    Read the article

  • Why does my tracking service freeze when the phone moves?

    - by user2878181
    I have developed a service which includes a timer task and runs every 5 minutes to keep a tracking record of the device; every five minutes it adds a record to the database. My service works fine when the phone is not moving, i.e. it logs a record every 5 minutes as it should. But I have noticed that when the phone is on the move, it updates the points only after 10 or 20 minutes, i.e. whenever the user stops along the way. Do services freeze on the move, and if yes, how is WhatsApp Messenger managing it? I am including my onStart method below; please help.

      @Override
      public void onStart(Intent intent, int startId) {
          Toast.makeText(this, "My Service Started", Toast.LENGTH_LONG).show();
          Log.d(TAG, "onStart");
          mLocationClient.connect();

          // Tracking thread: adds a record every service_timing milliseconds.
          final Handler handler_service = new Handler();
          timer_service = new Timer();
          TimerTask thread_service = new TimerTask() {
              @Override
              public void run() {
                  handler_service.post(new Runnable() {
                      @Override
                      public void run() {
                          // ... tracking function (elided in the original post) ...
                      }
                  });
              }
          };
          timer_service.schedule(thread_service, 1000, service_timing);

          // Sync thread: connects to the central server for updates.
          final Handler handler_sync = new Handler();
          timer_sync = new Timer();
          TimerTask thread_sync = new TimerTask() {
              @Override
              public void run() {
                  handler_sync.post(new Runnable() {
                      @Override
                      public void run() {
                          try {
                              Connect();
                          } catch (Exception e) {
                              // TODO Auto-generated catch block
                          }
                      }
                  });
              }
          };
          timer_sync.schedule(thread_sync, 2000, sync_timing);
      }
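
    One frequent culprit in cases like this, offered only as a guess since the post doesn't say whether the screen is off while moving: when the CPU sleeps, java.util.Timer threads freeze, and logging "resumes" when the user wakes the device. A minimal Java sketch of a partial wake lock (class and tag names invented; AlarmManager is the battery-friendlier alternative for periodic work):

      // Keeps the CPU running while the tracking service is active, at a
      // battery cost. Requires the WAKE_LOCK permission in the manifest.
      import android.content.Context;
      import android.os.PowerManager;

      public class TrackingWakeLock {
          private PowerManager.WakeLock wakeLock;

          public void acquire(Context ctx) { // e.g. call from onStart
              PowerManager pm = (PowerManager) ctx.getSystemService(Context.POWER_SERVICE);
              wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "tracker:service");
              wakeLock.acquire();
          }

          public void release() {            // e.g. call from onDestroy
              if (wakeLock != null && wakeLock.isHeld()) wakeLock.release();
          }
      }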

    Read the article

  • Rendering shadow sprites in cocos2d-x

    - by lukeluke
    I am writing a 2D game with cocos2d-x. I want to put a "shadow" sprite on a background sprite using the equation:

      MAX(0, Cd*1 - Cs*S)

    where Cd is the destination color (that is, a background pixel), Cs is the source color (the shadow pixel), and S is a scale factor (between 0 and 1). The MAX() function is used to avoid negative results. This is a lighting effect: when the shadow sprite pixel is 0, there is no effect on the background pixel; otherwise, the background pixel becomes darker. Now, the only way that comes to my mind is to change the blending equation to GL_FUNC_SUBTRACT, but it doesn't compile with cocos2d-x (the symbol can't be found)... I would subclass the CCSprite class and implement the draw() method so as to change the blending equation when needed, call the original draw() method, and restore the blending equation to its previous state at the end. So my questions are two:

    1. How do I use glBlendEquation() with cocos2d-x? Keep in mind that I am writing a game for iPhone/Android/Windows.
    2. Are shadows handled this way in 2D games?

    Thx
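
    A minimal C++ sketch of that subclass idea, under the assumption of a cocos2d-x 2.x build on the OpenGL ES 2 renderer (where glBlendEquation is available through the platform GL headers). Note that the target equation, dst - src*S, is reverse subtraction, i.e. GL_FUNC_REVERSE_SUBTRACT rather than GL_FUNC_SUBTRACT; the S factor can be premultiplied into the shadow texture or its opacity.

      // Hypothetical CCSprite subclass: draw with reverse-subtract blending
      // (result = max(0, dst - src); GL clamps at zero), then restore default.
      class ShadowSprite : public cocos2d::CCSprite
      {
      public:
          virtual void draw()
          {
              glBlendEquation(GL_FUNC_REVERSE_SUBTRACT); // dst - src
              cocos2d::CCSprite::draw();                 // base class draws the quad
              glBlendEquation(GL_FUNC_ADD);              // restore default blending
          }
      };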

    Read the article

  • Changing Palette for Day/Night Mode using GIMP

    - by J.C.
    Hello, suppose I have a picture for which I want to achieve day and night modes by changing its 8bpp color palette, and I want the pixel indices of the picture to stay fixed across both modes. For example, if the first pixel's index is 100, I can look up index 100 in the day-mode palette and in the night-mode palette. How can I use GIMP to do so? My goal is to not update the pixel indices of my picture. Also, as you can see in the two palettes, they are not a one-to-one mapping; that is, index 1 of the day-mode palette and index 1 of the night-mode palette may not be used in the same pixel of the picture. How can I tackle this problem? Actually, my use case is as follows: I want to use one 8bpp picture to achieve day/night modes by updating only the color palette (without updating the pixel indices). The advantage is that I only have to prepare two 256-entry palettes rather than saving two big pictures in my limited data RAM. Thanks a lot
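
    The pixel-indices-stay-fixed part is easy to script outside GIMP; a sketch in Python with Pillow (file names hypothetical, and the programmatic darkening is just a stand-in for a night palette you would author by hand, e.g. in GIMP's Colormap dialog):

      # Swap the palette of an indexed (mode "P") image without touching
      # the pixel indices.
      from PIL import Image

      day = Image.open("scene_day.png")            # 8bpp indexed image
      assert day.mode == "P", "image must be palettized"
      night_palette = bytearray(day.getpalette())  # flat [r, g, b, r, g, b, ...]
      for i in range(0, len(night_palette), 3):
          night_palette[i]     //= 3   # R  -- crude darkening as a stand-in
          night_palette[i + 1] //= 3   # G     for a hand-authored palette
          night_palette[i + 2] //= 2   # B     (keep a little more blue)
      night = day.copy()
      night.putpalette(bytes(night_palette))       # indices stay untouched
      night.save("scene_night.png")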

    Read the article

  • Xorg.conf (nvidia) Second Monitor getting settings of first

    - by HennyH
    I've been spending the weekend (and some time before that) trying to set up my Korean QHD270 and BenQ G2222HDL monitors with Ubuntu 13.10. With the nouveau drivers installed, both monitors function perfectly fine. After installing the nvidia drivers, the BenQ works but the QHD270 does not. Now, after days of struggling, I managed to get the QHD270 to work following a mixture of blogs, particularly this one and learnitwithme. Unfortunately, my G2222HDL now does not work. I fixed the QHD270 by supplying a custom EDID; my xorg.conf looks like so (excluding keyboard and mouse):

      Section "ServerLayout"
          Identifier "Layout0"
          Screen "Default Screen" 0 0
          InputDevice "Keyboard0" "CoreKeyboard"
          InputDevice "Mouse0" "CorePointer"
      EndSection

      Section "Monitor"
          Identifier "Configured Monitor"
      EndSection

      Section "Device"
          Identifier "Configured Video Device"
          Driver "nvidia"
          Option "CustomEDID" "DFP:/etc/X11/edid-shimian.bin"
      EndSection

      Section "Screen"
          Identifier "Default Screen"
          Device "Configured Video Device"
          Monitor "Configured Monitor"
      EndSection

    Now, I tried defining a new Device, Monitor and Screen, then in ServerLayout adding Screen "Second Screen" RightOf "Default Screen", but after doing so neither monitor worked. Hoping to fix the issue using a GUI-based tool, I opened up NVIDIA X Server Settings, which shows my current layout (screenshot omitted here). It seems that something is being output to the monitor, as suggested by my print screen. Any help would be greatly appreciated.

    Output of xrandr:

      Screen 0: minimum 8 x 8, current 5120 x 1440, maximum 16384 x 16384
      DVI-I-0 disconnected (normal left inverted right x axis y axis)
      DVI-I-1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
         2560x1440 60.0*+
      HDMI-0 disconnected (normal left inverted right x axis y axis)
      DP-0 disconnected (normal left inverted right x axis y axis)
      DVI-D-0 connected 2560x1440+2560+0 (normal left inverted right x axis y axis) 597mm x 336mm
         2560x1440 60.0*+
      DP-1 disconnected (normal left inverted right x axis y axis)

    And an extract from my log file (perhaps this is relevant?):

      [ 7.862] (--) NVIDIA(0): Valid display device(s) on GeForce GTX 680 at PCI:2:0:0
      [ 7.862] (--) NVIDIA(0): CRT-0
      [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0) (boot, connected)
      [ 7.862] (--) NVIDIA(0): DFP-1
      [ 7.862] (--) NVIDIA(0): DFP-2
      [ 7.862] (--) NVIDIA(0): DFP-3
      [ 7.862] (--) NVIDIA(0): DFP-4
      [ 7.862] (--) NVIDIA(0): CRT-0: 400.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0): 330.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0): Internal Dual Link TMDS
      [ 7.862] (--) NVIDIA(0): DFP-1: 165.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): DFP-1: Internal Single Link TMDS
      [ 7.862] (--) NVIDIA(0): DFP-2: 165.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): DFP-2: Internal Single Link TMDS
      [ 7.862] (--) NVIDIA(0): DFP-3: 330.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): DFP-3: Internal Single Link TMDS
      [ 7.862] (--) NVIDIA(0): DFP-4: 960.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): DFP-4: Internal DisplayPort

    Read the article

  • wxCam can't open /dev/dsp

    - by SIJAR
    I try to run wxCam through PulseAudio using the following command:

      $ padsp -d wxcam

    Although wxCam starts OK and is set to use the Xvid format to enable sound during recording, while recording I get the error: "Cannot open /dev/dsp. Video file will be recorded without audio track". Please help me fix this issue. Below is some debug information:

      $ padsp -d wxcam
      Determining video4linux API version...
      Using video4linux 2 API
      VIDIOC_ENUM_FRAMESIZES: Invalid argument
      V4L2_CID_GAMMA is not supported
      Determining pixel format...
      pixel format: YUV 4:2:2 (YUYV)
      Found V4L2_PIX_FMT_YUYV pixel format
      pixel format: MJPEG
      Found V4L2_PIX_FMT_MJPEG pixel format
      --DEBUG: [wxcam] Generating standard Huffman tables for this frame.
      Corrupt JPEG data: 2 extraneous bytes before marker 0xd2
      ... repeats a couple of times ...
      open of failed: No such file or directory
      --DEBUG: [wxcam] Generating standard Huffman tables for this frame.
      Corrupt JPEG data: 2 extraneous bytes before marker 0xd4
      ... repeats a couple of times ...
      /home/sij/Videos/Webcam/video.avi written: 640x480, 1334396 bytes
      --DEBUG: [wxcam] Generating standard Huffman tables for this frame.
      Corrupt JPEG data: 2 extraneous bytes before marker 0xd3
      ... repeats a couple of times ...

    Read the article

  • Pictures rendered from above and below using an orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However, when I render using a very low resolution render target (25x25), the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below; see figures 2 and 3. The pink blocks are not part of the original rendering, but I've added them so you can easily count/see the differences. This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, my bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras. Here I initialize the camera and compute what I believe to be half a pixel; boundsDimX and boundsDimZ describe a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera:

      Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
      Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0,
                                      boundsDimY / (float)renderTarget.Height) * 0.5f;

    This is the code where I set the camera positions and look-ats:

      // Position camera
      if (downwards)
      {
          float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
          Vector3 cameraPosition = new Vector3
          (
              boundsCentre.X, // possibly adjust by half a pixel?
              cameraHeight,
              boundsCentre.Z
          );
          camera.Position = cameraPosition;
          camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
      }
      else
      {
          float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
          Vector3 cameraPosition = new Vector3
          (
              boundsCentre.X,
              cameraHeight,
              boundsCentre.Z
          );
          camera.Position = cameraPosition;
          camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
      }

    Main question: now that you've seen all the problems and code, how do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)? (Figure 1 is the original model rendered in Blender.)

    Read the article

  • Which of these algorithms is best for my goal?

    - by JonathonG
    I have created a program that restricts the mouse to a certain region based on a black/white bitmap. The program is 100% functional as-is, but uses an inaccurate, albeit fast, algorithm for repositioning the mouse when it strays outside the area. Currently, when the mouse moves outside the area, basically what happens is this:

    1. A line is drawn between a pre-defined static point inside the region and the mouse's new position.
    2. The point where that line intersects the edge of the allowed area is found.
    3. The mouse is moved to that point.

    This works, but only works perfectly for a perfect circle with the pre-defined point set in the exact center. Unfortunately, this will never be the case: the application will be used with a variety of rectangles and irregular, amorphous shapes. On such shapes, the point where the drawn line intersects the edge will usually not be the closest point on the shape to the mouse. I need to create a new algorithm that finds the closest point to the mouse's new position on the edge of the allowed area. I have several ideas about this, but I am not sure of their validity, in that they may have far too much overhead. While I am not asking for code, it might help to know that I am using Objective-C / Cocoa, developing for OS X, as I feel the language being used might affect the efficiency of potential methods. My ideas are:

    • Using a bit of trigonometry to project lines would work, but that would require some kind of intense algorithm to test every point on every line until it found the edge of the region... That seems too resource-intensive, since there could be something like 200 lines that would each have as many as 200 pixels checked for black/white.
    • Using something like an A* pathing algorithm to find the shortest path to a black pixel; however, A* seems resource-intensive, even though I could probably restrict it to only checking roughly in one direction. It also seems like it will take more time and effort than I have available to spend on this small portion of the much larger project I am working on; correct me if I am wrong and it would not be a significant amount of code (100 lines or around there).
    • Mapping the border of the region before the application begins running the event tap loop. I think I could accomplish this by using my current line-based algorithm to find an edge point, then initiating an algorithm that checks all 8 pixels around that pixel, finds the next border pixel in one direction, and continues until it comes back to the starting pixel. I could then store that data in an array to be used for the entire duration of the program, and have the mouse-repositioning method check the array for the border pixel closest to the mouse's target position. That last method would presumably execute its initial border mapping fairly quickly (it would only have to map between 2,000 and 8,000 pixels, which means 8,000 to 64,000 checked, and I could even permanently store the data to make launching faster). However, I am uncertain as to how much overhead it would take to scan through that array for the shortest distance on every single mouse-move event... I suppose there could be a shortcut: restrict the number of elements in the array that will be checked to a variable number starting with the intersecting point on the line (from my original algorithm), and raise/lower that number to experiment with the overhead/accuracy tradeoff.
    Please let me know if I am overthinking this and there is an easier way that will work just fine, or which of these methods would be able to execute something like 30 times per second to keep mouse movement smooth, or if you have a better/faster method. I've posted relevant parts of my code below for reference, and included an example of what the area might look like. (I check color values against a loaded bitmap that is black/white.)

      //
      // This part of my code runs every single time the mouse moves.
      //
      CGPoint point = CGEventGetLocation(event);
      float tX = point.x;
      float tY = point.y;
      if (is_in_area(tX, tY, mouse_mask)) {
          // target is inside O.K. area, do nothing
      } else {
          CGPoint target;
          // point inside restricted region:
          float iX = 600; // inside x
          float iY = 500; // inside y
          // delta to midpoint between iX,iY and tX,tY
          float dX;
          float dY;
          float accuracy = .5; // accuracy to loop until reached
          do {
              dX = (tX - iX) / 2;
              dY = (tY - iY) / 2;
              if (is_in_area((tX - dX), (tY - dY), mouse_mask)) {
                  iX += dX;
                  iY += dY;
              } else {
                  tX -= dX;
                  tY -= dY;
              }
          } while (abs(dX) > accuracy || abs(dY) > accuracy);
          target = CGPointMake(roundf(tX), roundf(tY));
          CGDisplayMoveCursorToPoint(CGMainDisplayID(), target);
      }

    Here is is_in_area(int x, int y):

      bool is_in_area(NSInteger x, NSInteger y, NSBitmapImageRep *mouse_mask) {
          NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
          NSUInteger pixel[4];
          [mouse_mask getPixel:pixel atX:x y:y];
          if (pixel[0] != 0) {
              [pool release];
              return false;
          }
          [pool release];
          return true;
      }
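
    On the third idea: a sketch in Python of the precompute-then-scan approach (my sketch; the Objective-C version would read the NSBitmapImageRep instead of a nested list). A linear scan over a few thousand border points, 30 times per second, is only a few hundred thousand distance comparisons per second, which is usually negligible; bucketing the points into a coarse grid would cut it further.

      # Precompute the region's border once, then snap the mouse to the nearest
      # border point with a linear scan. mask[y][x] is True where the mouse may go.
      def border_points(mask):
          h, w = len(mask), len(mask[0])
          pts = []
          for y in range(h):
              for x in range(w):
                  if mask[y][x] and any(
                          not (0 <= ny < h and 0 <= nx < w and mask[ny][nx])
                          for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))):
                      pts.append((x, y))  # inside pixel with an outside neighbour
          return pts

      def snap(target, pts):
          tx, ty = target
          return min(pts, key=lambda p: (p[0] - tx) ** 2 + (p[1] - ty) ** 2)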

    Read the article

  • Take a snapshot with JavaFX!

    - by user12610255
    JavaFX 2.2 has a "snapshot" feature that enables you to take a picture of any node or scene. Take a look at the API documentation and you will find new snapshot methods in the javafx.scene.Scene class. The most basic version has the following signature:

      public WritableImage snapshot(WritableImage image)

    The WritableImage class (also introduced in JavaFX 2.2) lives in the javafx.scene.image package, and represents a custom graphical image that is constructed from pixels supplied by the application. In fact, there are 5 new classes in javafx.scene.image:

    • PixelFormat: Defines the layout of data for a pixel of a given format.
    • WritablePixelFormat: Represents a pixel format that can store full colors and so can be used as a destination format to write pixel data from an arbitrary image.
    • PixelReader: Defines methods for retrieving the pixel data from an Image or other surface containing pixels.
    • PixelWriter: Defines methods for writing the pixel data of a WritableImage or other surface containing writable pixels.
    • WritableImage: Represents a custom graphical image that is constructed from pixels supplied by the application, and possibly from PixelReader objects from any number of sources, including images read from a file or URL.

    The API documentation contains lots of information, so go investigate and have fun with these useful new classes! -- Scott Hommel
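
    To round this out, a minimal end-to-end sketch (mine, not Scott's; the scene contents and output file name are invented) that snapshots a scene and writes it to disk, converting through SwingFXUtils:

      // Snapshot a scene and save it as a PNG.
      import java.io.File;
      import javax.imageio.ImageIO;
      import javafx.application.Application;
      import javafx.embed.swing.SwingFXUtils;
      import javafx.scene.Scene;
      import javafx.scene.control.Button;
      import javafx.scene.image.WritableImage;
      import javafx.scene.layout.StackPane;
      import javafx.stage.Stage;

      public class SnapshotDemo extends Application {
          @Override
          public void start(Stage stage) throws Exception {
              StackPane root = new StackPane();
              root.getChildren().add(new Button("Snap me"));
              Scene scene = new Scene(root, 300, 200);
              stage.setScene(scene);
              stage.show();
              WritableImage shot = scene.snapshot(null); // null => allocate a new image
              ImageIO.write(SwingFXUtils.fromFXImage(shot, null), "png",
                            new File("snapshot.png"));
          }

          public static void main(String[] args) {
              launch(args);
          }
      }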

    Read the article
