Search Results

Search found 833 results on 34 pages for 'gesture recognition'.

Page 8/34

  • Simultaneous gesture recognizers in iPhone SDK

    - by minaevs
    I need to catch two different swipe gestures using UISwipeGestureRecognizer (for example, UISwipeGestureRecognizerDirectionRight and UISwipeGestureRecognizerDirectionLeft). When I add two different recognizers with the addGestureRecognizer: method, only the last added recognizer works. I've read that I need to implement the gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: method of the UIGestureRecognizerDelegate protocol, but nothing works. Can anyone help with a simple example of catching two or more gestures of the same kind? Thanks!
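
    For the common case of one left and one right swipe, two separate recognizers normally coexist without any delegate work, because the two gestures never fire at the same moment; a frequent mistake is OR-ing both directions into a single recognizer, which then cannot report which way the user swiped. A minimal sketch, assuming this runs in a view controller's viewDidLoad and a hypothetical handleSwipe: action:

        // One recognizer per direction; no delegate method needed.
        UISwipeGestureRecognizer *left = [[[UISwipeGestureRecognizer alloc]
            initWithTarget:self action:@selector(handleSwipe:)] autorelease];
        left.direction = UISwipeGestureRecognizerDirectionLeft;
        [self.view addGestureRecognizer:left];

        UISwipeGestureRecognizer *right = [[[UISwipeGestureRecognizer alloc]
            initWithTarget:self action:@selector(handleSwipe:)] autorelease];
        right.direction = UISwipeGestureRecognizerDirectionRight;
        [self.view addGestureRecognizer:right];

        - (void)handleSwipe:(UISwipeGestureRecognizer *)sender {
            if (sender.direction == UISwipeGestureRecognizerDirectionLeft)
                NSLog(@"swipe left");
            else
                NSLog(@"swipe right");
        }

    If simultaneous recognition really is needed (e.g. alongside other recognizers), note that the delegate method is only consulted when the recognizer's delegate property is actually set to the object implementing it, which is easy to miss.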

    Read the article

  • iOS: Gesture recognizer for smooth scrolling and flicking a view

    - by AppleDeveloper
    I am building an iPad app where I need to let the user resize two stacked views by dragging a divider view between them. The divider is just a 20px-high view between two half-screen content views (please refer to the attached images). When the user drags the divider up or down, both content views resize accordingly. I have subclassed UIView and implemented this in touchesMoved:withEvent: as given below. It works fine. The only thing missing with touchesMoved is that you can't flick the divider view straight to the top or bottom; you have to drag it all the way. To support flicking the view I tried UIPanGestureRecognizer, but I don't get smooth scrolling with it. When I update the split position in the UIGestureRecognizerStateChanged state, just touching the divider view flicks it to the top or bottom. Updating the split position in UIGestureRecognizerStateEnded does the same, except I don't see the content views resizing while the divider is being dragged! Could someone please tell me how to get both smooth dragging of the divider with live resizing of the content views (like touchesMoved) and flicking of the view? Any alternative approach would also be fine. Thanks.

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            if (touch) {
                CGPoint lastPt = [touch previousLocationInView:self];
                CGPoint pt = [touch locationInView:self];
                float offset = pt.y - lastPt.y;
                self.parentViewController.splitPosition =
                    self.parentViewController.splitPosition + offset;
            }
        }

        - (void)handlePan:(UIPanGestureRecognizer *)recognizer {
            CGPoint translation = [recognizer translationInView:recognizer.view];
            CGPoint velocity = [recognizer velocityInView:recognizer.view];
            if (recognizer.state == UIGestureRecognizerStateBegan) {
            } else if (recognizer.state == UIGestureRecognizerStateChanged) {
                // If I change the split position here, I don't see smooth
                // dragging of the divider view... it jumps straight to the
                // top or bottom!
                self.parentViewController.splitPosition =
                    self.parentViewController.splitPosition + translation.y;
            } else if (recognizer.state == UIGestureRecognizerStateEnded) {
                // If I change the split position here, the same thing happens
                // at the end, and the divider view doesn't move with my drag
                // or resize the views.
                self.parentViewController.splitPosition =
                    self.parentViewController.splitPosition + translation.y;
            }
        }

    [Screenshots: the initial screen; the top view enlarged by dragging the divider; the top view fully hidden, which currently requires dragging the divider all the way up. The goal is a flick that sends the divider straight to the top from any position.]
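
    One likely cause of the jumping: translationInView: reports the total translation since the gesture began, so adding it to splitPosition on every Changed event compounds the movement. A hedged sketch of the usual pattern: apply the delta and reset the translation each time, then use the final velocity to decide whether to animate a flick (maxSplitPosition and the 500 pt/s threshold are made-up placeholders):

        - (void)handlePan:(UIPanGestureRecognizer *)recognizer {
            CGPoint translation = [recognizer translationInView:recognizer.view];

            if (recognizer.state == UIGestureRecognizerStateChanged) {
                // Live resize: apply only the movement since the last callback,
                // then zero the translation so it isn't counted again.
                self.parentViewController.splitPosition =
                    self.parentViewController.splitPosition + translation.y;
                [recognizer setTranslation:CGPointZero inView:recognizer.view];
            } else if (recognizer.state == UIGestureRecognizerStateEnded) {
                CGFloat vy = [recognizer velocityInView:recognizer.view].y;
                if (fabs(vy) > 500.0) {   // fast release => treat as a flick
                    // maxSplitPosition: hypothetical bottom limit of the divider
                    CGFloat target = (vy > 0) ? maxSplitPosition : 0.0;
                    [UIView animateWithDuration:0.3 animations:^{
                        self.parentViewController.splitPosition = target;
                    }];
                }
            }
        }

    Whether the animation block actually animates depends on how the splitPosition setter resizes the views; if it only adjusts frames, it should animate as-is.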

    Read the article

  • iPhone: detect "touch-and-drag" gesture from UIBarButtonItem?

    - by Greg Maletic
    I have an "add" button that's represented by a UIBarButtonItem. Hitting the "add" button adds an object into a list that represents a moment in time. By default, that time is "now"...but I'd like to be able to use dragging behavior to let the user specify earlier times for the object. Here's the behavior I want to implement: If the user touches on the UIBarButtonItem and lets go quickly, an object is added to the list that represents "now." If the user touches on the UIBarButtonItem and drags, a little UI pops up that shows the time that the distance of their drag represents. The further they drag, the further back in time their touch will represent. When they let go, the object representing an earlier time will get added to the list. (Though the description of the behavior is complicated, I'm convinced this will be pretty intuitive for users of the app.) I haven't implemented code for anything but the most simple touches in the past, and I'm at a loss as to the best way to try this. Does anyone have any suggestions, or could point me towards some sample code that implements something like this? Thanks very much.

    Read the article

  • How to create a gesture-controlled rotating image for a UI

    - by ocdtrekkie
    I'm trying to figure out the best way to make an image rotate along with a user's finger dragging it left or right. I want to match the rate the user's finger is moving with the rate the image rotates. I've got the basic setup for my application going, with the menus and whatnot I want to have, and that's all running great on the emulator; I'm just not sure how to approach this part. I can code all the logic I need for my app, I'm just not doing too well designing the UI. I have a picture in mind, and I've actually made a couple of mock images of it, I just can't figure out how to get it going in Android. Any help would be appreciated.
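
    A sketch of one way to do this in a custom View, assuming the goal is a dial-style image that tracks the finger: convert each touch position to an angle around the view's centre with Math.atan2 and rotate by the change in angle, so the image moves at exactly the finger's rate:

        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.view.MotionEvent;
        import android.view.View;

        public class DialView extends View {
            private float angle = 0f;
            private float lastTouchAngle;
            private Bitmap image;   // assumed to be assigned elsewhere

            public DialView(Context context) { super(context); }

            @Override
            public boolean onTouchEvent(MotionEvent event) {
                // Angle of the finger relative to the view's centre, in degrees.
                float touchAngle = (float) Math.toDegrees(Math.atan2(
                        event.getY() - getHeight() / 2f,
                        event.getX() - getWidth() / 2f));
                switch (event.getAction()) {
                    case MotionEvent.ACTION_DOWN:
                        lastTouchAngle = touchAngle;
                        return true;
                    case MotionEvent.ACTION_MOVE:
                        angle += touchAngle - lastTouchAngle;  // rotate by the delta
                        lastTouchAngle = touchAngle;
                        invalidate();   // redraw with the new angle
                        return true;
                }
                return super.onTouchEvent(event);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                canvas.rotate(angle, getWidth() / 2f, getHeight() / 2f);
                canvas.drawBitmap(image, 0, 0, null);
            }
        }

    For a straight left/right drag instead of a circular one, the same structure works with the x-delta of the touch scaled into degrees.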

    Read the article

  • GNU screen configuration for optimal keycode recognition

    - by intuited
    Currently GNU screen botches certain keystrokes, for example CTRL pressed in combination with the arrow keys, so that, e.g., when in vim insert mode, CTRL-PgUp will uppercase the next/current word (or something like that). I'd like it to work pretty much transparently, so that the functionality is the same as when it's not running (with the obvious exception of the CTRL-a control sequences). Is this possible? Also (I suspect this is more or less a separate issue) I'd like the scrollwheel to scroll back in the session log rather than cycling through the command history as it does now. Doable? Or perhaps screen could be set to emulate a much larger screen size, so that the terminal app it's running under could keep that text in its own session log. Either way, the goal is to be able to use the mouse wheel and/or Shift-Up-Arrow to scroll back in the session log.
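
    For the scrollwheel half of this, one common approach is to stop screen from switching the terminal into its alternate screen, so output accumulates in the terminal's own scrollback; a hedged ~/.screenrc sketch (the CTRL-arrow problem is typically a separate terminfo mismatch between what the outer terminal sends and what the TERM inside screen advertises):

        # Don't use the terminal's alternate screen, so the terminal's own
        # scrollback (and mouse wheel) keeps working over screen's output.
        termcapinfo xterm* ti@:te@
        # Keep a generous buffer for screen's built-in copy/scrollback mode
        # (entered with Ctrl-a Esc).
        defscrollback 10000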

    Read the article

  • missing NTLDR / no CD drive recognition

    - by Daems
    I formatted C: after installing Windows 7 on the F: partition. On startup the machine reported a missing NTLDR, so I changed the boot sequence to CD first, but I get only beep sounds and a failing boot. I could try to put the files on a USB stick and boot from that, but are there better options?

    Read the article

  • Recognition of USB drive

    - by lamcro
    I have a Motorola phone with a MicroSD slot, which my Ubuntu machine does not recognize as a USB drive. It does know something is there, since it automatically charges the phone whenever I connect it, and the command lsusb shows it as "Motorola PCS". Is there a way to tell Ubuntu/Linux that the connected hardware is a disk drive?
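
    A few generic checks, sketched below, usually tell whether the kernel ever created a block device for the phone (the /dev/sdb1 name is just an example); many Motorola handsets of that era also have a USB-mode setting on the phone itself that must be switched to "Memory card"/mass storage before the card is exposed:

        dmesg | tail -20    # look for usb-storage lines such as "sdb: sdb1"
        sudo fdisk -l       # list the block devices the kernel knows about
        # If a device appeared, it can be mounted by hand:
        sudo mkdir -p /mnt/phone
        sudo mount /dev/sdb1 /mnt/phone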

    Read the article

  • Windows 8, NVIDIA graphics recognition fails

    - by Roy Grubb
    I just installed Windows 8 Pro OEM 64-bit (clean install) and it won't properly recognize my graphics adapter. When I installed Win8, it automatically installed the BasicDisplay.sys driver dated 6/21/2006, version 6.2.9200.16384 (win8_rtm.120725-1247).

    Hardware: MSI G41M-P33 Combo motherboard; Intel Core Duo 6600 CPU; NVIDIA GeForce 9400 GT graphics. OS: Windows 8 Pro 64-bit OEM.

    The graphics adapter worked fine in Windows XP. The PC is a generic box, bought locally; its motherboard failed recently, so I replaced it with the G41M. Microsoft wouldn't let me re-activate Windows XP with a different motherboard, so I installed Win8, which appears to work except as described next.

    Win8 only partially recognizes the graphics adapter and won't allow NVIDIA's latest driver installer to see that it's an NVIDIA card. As a result, OpenGL doesn't work, and the software I use most needs it. Other than that, the graphics look OK. By "partially recognizes" I mean that via the Control Panel I can see the adapter described as NVIDIA, but the driver remains stuck at Microsoft Basic Display Adapter no matter what I try, including "Update driver..." in the adapter properties.

    Display > Screen Resolution > Advanced Settings > Adapter shows:
    - Adapter Type: Microsoft Basic Display Adapter
    - Chip Type: NVIDIA
    - DAC Type: NVIDIA Corporation
    - BIOS Information: G27 Board - p381n17 (don't know what this means... no mention of 9400 GT)
    - Total Available Graphics Memory: 256 MB
    - Dedicated Video Memory: 0 MB (in fact the adapter has 512 MB of on-board video memory)
    - System Video Memory: 0 MB
    - Shared System Memory: 256 MB

    Control Panel > Device Manager > Display adapters shows just Microsoft Basic Display Adapter: no other graphics adapter, and no unknown device or yellow question mark.

    What I have tried so far:
    1. Cleared CMOS and reset. Updated the BIOS and all motherboard drivers. First I used Driver Reviver to see if any driver updates were required; it found some, but I didn't use it to get the drivers. Then I switched to MSI's own motherboard driver utility, Live Update 5. It also showed the board needed several updates, so I used it to fetch the new drivers. After that it showed everything was up to date, and Driver Reviver likewise reported no drivers needed updating. Rebooted.
    2. Went to the NVIDIA site to get the latest graphics adapter driver. Their auto-detect ("Option 2: Automatically find drivers for my NVIDIA products") said "The NVIDIA Smart Scan was unable to evaluate your system hardware. Please use Option 1 to manually find drivers for your NVIDIA products." So I downloaded 310.70-desktop-win8-win7-winvista-64bit-international-whql.exe, which lists the 9400 GT under supported products, but when I run it, it says: "NVIDIA Installer cannot continue. This graphics driver could not find compatible graphics hardware."
    3. Connected the display to the on-board Intel graphics (G41 Intel Express), removed the NVIDIA card, rebooted, and switched to internal graphics in CMOS. Again Windows installs the MS Basic Display Adapter and can't properly run my software that needs OpenGL (it runs on other machines with Intel Express graphics under WinXP and 7).
    4. Shut down and pulled out the power cord. Held the start button to discharge all capacitors. Removed and re-inserted the NVIDIA adapter in the PCI-E slot and made sure it was properly seated. Connected the monitor to the card, screwed the plug to the socket. Reconnected the power cord. Started and checked in the BIOS that the Primary Graphics Adapter was set to PCI-E. Started Windows.
    5. Uninstalled the MS Basic Display Adapter in Device Manager. The screen blanked briefly and reappeared; no graphics adapter entry was then visible in Device Manager. Restarted the PC: the MS Basic Display Adapter was visible again. Clicked View > Show hidden devices in Device Manager: no other graphics adapter appears, no unknown devices. Rebooted.
    6. Tried Scan for hardware changes: none detected. Tried right-clicking the MS Basic Display Adapter > Properties > Driver > Update Driver... > Search automatically: it replied that it had determined the driver was up to date.
    7. Checked that there were no graphics-driver-related entries in Programs and Features that I could delete (none). Searched for any other drivers with "nvidia" in their name and deleted them, keeping just the 306.97 installer exe file. Did a Windows Update.
    8. Ran GPU-Z, which shows (main items): Microsoft Basic Display Adapter; GPU G72; BIOS 5.72.22.76.88; Device ID 10DE - 01D5; DDR2; bus width 32-bit; memory size 64 MB; driver version nvlddmkm 6.2.9200.16384 (ForceWare 0.00) / Win8 64; NVIDIA SLI Unknown. In the drop-down at the foot, "Microsoft Basic Display Adapter" is the only option.
    9. If I swap hard disks in that machine to one with an Ubuntu 10.4 installation (originally installed on the same PC), lspci shows "VGA compatible controller: NVIDIA Corporation Device 01d5 (rev a1) (prog-if 00 [VGA controller])" and "kernel driver in use: nvidia".

    I'm out of ideas for new things to try and would be really grateful for suggestions. Thanks!

    Read the article

  • IP recognition is different

    - by Cougar
    Some months ago I bought a dedicated server from a USA datacenter (http://www.dacentec.com/). My IPs look like this: 162.248.243.blo blo blo. When I check my IP on this site, http://whatismyipaddress.com/, it shows me: ISP: Dacentec; Services: None Detected; Country: United States. Why "Services: None Detected", and what did they do with this IP block? Also, when I open some sites like Google, Yahoo, etc., they show me India or China as the country. What is the problem with these IPs, and why don't I have a stable location for them?

    Read the article

  • Pattern Recognition for image comparison in .NET

    - by vinod R
    Hi, can anybody share code or an algorithm (using pattern recognition) for image comparison in .NET? I need to compare two images of different resolution and texture and find the difference. I currently have code to find the difference between two images using C#:

        // Load the images.
        Bitmap bm1 = (Bitmap)(Image.FromFile(txtFile1.Text));
        Bitmap bm2 = (Bitmap)(Image.FromFile(txtFile2.Text));

        // Make a difference image.
        int wid = Math.Min(bm1.Width, bm2.Width);
        int hgt = Math.Min(bm1.Height, bm2.Height);
        Bitmap bm3 = new Bitmap(wid, hgt);

        // Create the difference image.
        bool are_identical = true;
        Color eq_color = Color.Transparent;
        Color ne_color = Color.Transparent;
        for (int x = 0; x <= wid - 1; x++)
        {
            for (int y = 0; y <= hgt - 1; y++)
            {
                if (bm1.GetPixel(x, y).Equals(bm2.GetPixel(x, y)))
                {
                    bm3.SetPixel(x, y, eq_color);
                }
                else
                {
                    bm1.SetPixel(x, y, ne_color);
                    are_identical = false;
                }
            }
        }

        // Display the result.
        picResult.Image = bm1;
        Bitmap Logo = new Bitmap(picResult.Image);
        Logo.MakeTransparent(Logo.GetPixel(1, 1));
        picResult.Image = (Image)Logo;
        //this.Cursor = Cursors.Default;
        if ((bm1.Width != bm2.Width) || (bm1.Height != bm2.Height))
        {
            are_identical = false;
        }
        if (are_identical)
        {
            MessageBox.Show("The images are identical");
        }
        else
        {
            MessageBox.Show("The images are different");
        }
        //bm1.Dispose()
        //bm2.Dispose()

    But this only compares two images of the same resolution and size. If there is a shadow on one image (but the two images are otherwise the same), it reports the images as different, so I am trying to compare using pattern recognition instead. Thanks in advance.
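
    Full pattern recognition may be overkill; a crude perceptual comparison already survives resolution differences and uniform shadows. A hedged sketch (grid size and scoring are arbitrary choices): shrink both bitmaps to a 16x16 grid, subtract each image's own mean brightness so a global lighting change cancels out, and compare the remaining structure:

        using System;
        using System.Drawing;

        static class ImageCompare
        {
            // Returns roughly 1.0 for structurally identical images,
            // lower values as they diverge.
            public static double Similarity(Bitmap a, Bitmap b)
            {
                const int N = 16;
                using (Bitmap sa = new Bitmap(a, new Size(N, N)))
                using (Bitmap sb = new Bitmap(b, new Size(N, N)))
                {
                    double[,] ga = Grid(sa, N), gb = Grid(sb, N);
                    double diff = 0;
                    for (int x = 0; x < N; x++)
                        for (int y = 0; y < N; y++)
                            diff += Math.Abs(ga[x, y] - gb[x, y]);
                    return 1.0 - diff / (N * N);
                }
            }

            // Downsampled brightness grid with the image's mean removed,
            // so uniform lighting/shadow offsets drop out.
            static double[,] Grid(Bitmap bm, int n)
            {
                double[,] g = new double[n, n];
                double mean = 0;
                for (int x = 0; x < n; x++)
                    for (int y = 0; y < n; y++)
                        mean += g[x, y] = bm.GetPixel(x, y).GetBrightness();
                mean /= n * n;
                for (int x = 0; x < n; x++)
                    for (int y = 0; y < n; y++)
                        g[x, y] -= mean;
                return g;
            }
        }

    For real robustness to texture changes, feature-based methods (e.g. OpenCV's template matching or keypoint descriptors via a .NET wrapper such as Emgu CV) are the usual next step.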

    Read the article

  • Sliding window algorithm for activity recognition in MATLAB

    - by csc
    I want to write a sliding window algorithm for use in activity recognition. The training data is 1xN, so I'm thinking I just need to take window_size samples of the data at a time (say, window_size = 3) and train on that. I also later want to use this algorithm on a matrix. I'm new to MATLAB, so I need advice/directions on how to implement this correctly.
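
    A sketch of the windowing step, assuming a 1xN row vector named data, window length w and stride s: build an index matrix once, so every window is extracted without a loop, and each row of X becomes one training example:

        w = 3;  s = 1;                                       % window size and stride
        idx = bsxfun(@plus, (0:w-1)', 1:s:numel(data)-w+1);  % w x (#windows) indices
        X = data(idx)';                                      % (#windows) x w examples

    Applied to a matrix later, the same idx can index one row at a time, or im2col (Image Processing Toolbox) does the 2-D equivalent directly.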

    Read the article

  • Marker Recognition on Android (recognising Rubik's Cubes)

    - by greenie
    Hi everybody. I'm developing an augmented reality application for Android that uses the phone's camera to recognise the arrangement of the coloured squares on each face of a Rubik's Cube. One thing that I am unsure about is how exactly I would go about detecting and recognising the coloured squares on each face of the cube. If you look at a Rubik's Cube, you can see that each square is one of six possible colours with a thin black border. This led me to think that it should be relatively simple to detect a square, possibly using an existing marker detection API. My question is really: has anybody here had any experience with image recognition on Android? Ideally I'd like to be able to use an existing API, but it would be an interesting project to do from scratch if somebody could point me in the right direction to get started. Many thanks in advance.

    Read the article

  • Computer Vision application (+ web interface) for face detection and recognition from a database

    - by Kush
    My project is a computer vision Java application which should implement the following: a web interface through which form entries plus images (for example, student data) are stored into a MySQL database, with the images going into a directory accessible to my Java application. The data and images can then be retrieved from my Java GUI application, and I can perform image-processing operations on them through OpenCV. Specifically, I want to run face detection on the retrieved images and discard the false entries (no proper face). The application user/admin should also be able to search for an image by text (by ID) or by a reference image, using face recognition. I am well familiar with Java, but the problem is that I need guidance on how to organise this in a stepwise manner (links appreciated); OpenCV, PHP and MySQL together are really messy. I know doing the OpenCV work within Java is real overhead, but I really want to do it. If there is any suggestion to do it another way, please guide me. Any kind of help is a ray of hope for me. Thanks.

    Read the article

  • iPhone UIImage number recognition

    - by Skeep
    Hi all, I have a small UIImage (JPEG) with a single typed number. I want to be able to read the number with some kind of pattern recognition. I'm really not sure where to start, so any help would be appreciated. My initial idea was to compare this image with other images: for instance, compare the image with that of a 1, 2, 3, etc. until a match was found. That just seems slow and cumbersome, and I wondered if there was a better way to do it? Thanks
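
    For a single typed digit in a known font, naive template matching is actually workable: normalize the unknown image and each candidate template ("0.png"..."9.png" here are hypothetical assets) to the same small grayscale bitmap and count differing pixels, keeping the template with the fewest. A hedged sketch of the comparison:

        #import <UIKit/UIKit.h>

        static NSUInteger PixelDifference(UIImage *a, UIImage *b) {
            const size_t w = 16, h = 16;   // normalise both images to 16x16
            unsigned char pa[16 * 16], pb[16 * 16];
            CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
            CGContextRef ca = CGBitmapContextCreate(pa, w, h, 8, w, gray, kCGImageAlphaNone);
            CGContextRef cb = CGBitmapContextCreate(pb, w, h, 8, w, gray, kCGImageAlphaNone);
            CGContextDrawImage(ca, CGRectMake(0, 0, w, h), a.CGImage);
            CGContextDrawImage(cb, CGRectMake(0, 0, w, h), b.CGImage);

            NSUInteger diff = 0;
            for (size_t i = 0; i < w * h; i++) {
                if (abs(pa[i] - pb[i]) > 32) diff++;   // tolerate JPEG noise
            }
            CGColorSpaceRelease(gray);
            CGContextRelease(ca);
            CGContextRelease(cb);
            return diff;
        }

    This assumes the digit is already cropped and roughly centred; with varying fonts or positions, a real OCR library (e.g. a Tesseract port) would be the sturdier route.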

    Read the article

  • OCR combined with font recognition?

    - by Adam
    I have a bold idea: a user could take an image like the following and, after a few seconds of processing, be able to edit a document which looks roughly the same. The software would use WhatTheFont (or something similar) to recognize the fonts used, and OCR and other software to handle the font size, color, line spacing, and of course the text content itself. In the case of the example image, there would be three separate "textboxes" produced, each starting at the upper-left corner of its text and extending as far to the bottom right as it could before running into another text box. So the user would then see something like this: (the rectangles are just used to show the boundaries of each textbox). From here, the user would be able to edit the text in each of these boxes to create a new document. Of course there are tons of obvious uses for such an application, especially on a mobile phone with a built-in camera. So my questions are the following:

    - I doubt the answer is yes, but does anything do this already?
    - If I'm going to try to build this, what should I write it in? Can I use Python?
    - What would be the best OCR libraries to start with?
    - Is there a service other than WhatTheFont for font recognition that has better API support?
    - Anybody want to help me build it? :) etc.

    Update: One thing I wanted to mention (but forgot) is that I would also like the background to be preserved. In other words, if the example above had an image behind the text, I'd like the document to use that image with the text removed. I know this complicates things a lot, because it would require some image-editing techniques too (something akin to Photoshop CS5's "content-aware fill"). But if we can solve diminished reality on iPhones, I think we can figure this out!

    Read the article

  • Palm Centro not even appearing on desktop

    - by DaimyoKirby
    Background: I'm trying to set up my dad's new installation of Xubuntu 12.10 (I finally got him to switch from Windows :-D) so he can sync his Palm Centro with the computer. I installed J-Pilot, but the problem is that his Palm isn't showing up anywhere on the computer. When it's plugged in, it lights up and begins to charge, but when I tell it to sync with the computer, the sync fails and Xubuntu still doesn't recognize it. Question: Does anyone know how I can get his Palm to be recognized by Xubuntu?

    Read the article

  • Vectorization of MATLAB code for faster execution

    - by user3237134
    My code works in the following manner:

    1. First, it obtains several images from the training set.
    2. After loading these images, we find the normalized faces and mean face and perform several calculations.
    3. Next, we ask for the name of an image we want to recognize.
    4. We then project the input image into the eigenspace and, based on the difference from the eigenfaces, we make a decision.
    5. Depending on the eigen-weight vector for each input image, we make clusters using the kmeans command.

    Source code I tried:

        clear all
        close all
        clc

        % number of images in your training set.
        M = 1200;

        % Chosen std and mean.
        % It can be any number that is close to the std and mean of most of the images.
        um = 60;
        ustd = 32;

        % read and show images (bmp);
        S = [];                                 % img matrix
        for i = 1:M
            str = strcat(int2str(i), '.jpg');   % concatenates two strings that form the name of the image
            eval('img=imread(str);');
            [irow icol d] = size(img);          % get the number of rows (N1) and columns (N2)
            temp = reshape(permute(img, [2,1,3]), [irow*icol, d]);  % creates a (N1*N2)x1 matrix
            S = [S temp];                       % S is a (N1*N2)xM matrix after finishing the sequence
        end

        % Here we change the mean and std of all images. We normalize all images.
        % This is done to reduce the error due to lighting conditions.
        for i = 1:size(S,2)
            temp = double(S(:,i));
            m = mean(temp);
            st = std(temp);
            S(:,i) = (temp - m)*ustd/st + um;
        end

        % show normalized images
        for i = 1:M
            str = strcat(int2str(i), '.jpg');
            img = reshape(S(:,i), icol, irow);
            img = img';
        end

        % mean image
        m = mean(S,2);      % obtains the mean of each row instead of each column
        tmimg = uint8(m);   % converts to unsigned 8-bit integer. Values range from 0 to 255
        img = reshape(tmimg, icol, irow);  % takes the N1*N2x1 vector and creates a N2xN1 matrix
        img = img';         % creates a N1xN2 matrix by transposing the image

        % Change image for manipulation
        dbx = [];   % A matrix
        for i = 1:M
            temp = double(S(:,i));
            dbx = [dbx temp];
        end

        % Covariance matrix C=A'A, L=AA'
        A = dbx';
        L = A*A';
        % vv are the eigenvectors for L
        % dd are the eigenvalues for both L=dbx'*dbx and C=dbx*dbx';
        [vv dd] = eig(L);

        % Sort and eliminate those whose eigenvalue is zero
        v = [];
        d = [];
        for i = 1:size(vv,2)
            if (dd(i,i) > 1e-4)
                v = [v vv(:,i)];
                d = [d dd(i,i)];
            end
        end

        % sort, will return an ascending sequence
        [B index] = sort(d);
        ind = zeros(size(index));
        dtemp = zeros(size(index));
        vtemp = zeros(size(v));
        len = length(index);
        for i = 1:len
            dtemp(i) = B(len+1-i);
            ind(i) = len+1-index(i);
            vtemp(:,ind(i)) = v(:,i);
        end
        d = dtemp;
        v = vtemp;

        % Normalization of eigenvectors
        for i = 1:size(v,2)         % access each column
            kk = v(:,i);
            temp = sqrt(sum(kk.^2));
            v(:,i) = v(:,i)./temp;
        end

        % Eigenvectors of C matrix
        u = [];
        for i = 1:size(v,2)
            temp = sqrt(d(i));
            u = [u (dbx*v(:,i))./temp];
        end

        % Normalization of eigenvectors
        for i = 1:size(u,2)
            kk = u(:,i);
            temp = sqrt(sum(kk.^2));
            u(:,i) = u(:,i)./temp;
        end

        % show eigenfaces
        for i = 1:size(u,2)
            img = reshape(u(:,i), icol, irow);
            img = img';
            img = histeq(img, 255);
        end

        % Find the weight of each face in the training set.
        omega = [];
        for h = 1:size(dbx,2)
            WW = [];
            for i = 1:size(u,2)
                t = u(:,i)';
                WeightOfImage = dot(t, dbx(:,h)');
                WW = [WW; WeightOfImage];
            end
            omega = [omega WW];
        end

        % Acquire new image
        % Note: the input image must have a bmp or jpg extension.
        % It should have the same size as the ones in your training set.
        % It should be placed on your desktop.
        ed_min = [];
        srcFiles = dir('G:\newdatabase\*.jpg');   % the folder in which your images exist
        for b = 1:length(srcFiles)
            filename = strcat('G:\newdatabase\', srcFiles(b).name);
            Imgdata = imread(filename);
            InputImage = Imgdata;
            InImage = reshape(permute((double(InputImage)), [2,1,3]), [irow*icol, 1]);
            temp = InImage;
            me = mean(temp);
            st = std(temp);
            temp = (temp - me)*ustd/st + um;
            NormImage = temp;
            Difference = temp - m;

            p = [];
            aa = size(u,2);
            for i = 1:aa
                pare = dot(NormImage, u(:,i));
                p = [p; pare];
            end

            InImWeight = [];
            for i = 1:size(u,2)
                t = u(:,i)';
                WeightOfInputImage = dot(t, Difference');
                InImWeight = [InImWeight; WeightOfInputImage];
            end
            noe = numel(InImWeight);

            % Find Euclidean distance
            e = [];
            for i = 1:size(omega,2)
                q = omega(:,i);
                DiffWeight = InImWeight - q;
                mag = norm(DiffWeight);
                e = [e mag];
            end
            ed_min = [ed_min MinimumValue];
            theta = 6.0e+03;
            % disp(e)
            z(b,:) = InImWeight;
        end

        IDX = kmeans(z, 5);
        clustercount = accumarray(IDX, ones(size(IDX)));
        disp(clustercount);

    Running time for 50 images: elapsed time is 103.947573 seconds.

    QUESTIONS: It works fine for M=50 (i.e., a training set of 50 images) but not for M=1200 (a training set of 1200 images). It shows no error, but there is no output either; I waited for 10 minutes and still nothing. I think it has gone into an infinite loop. What is the problem? Where did I go wrong?
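
    On the vectorization question itself: most of the per-column for-loops above collapse into single matrix expressions, which matters a lot at M = 1200. A hedged sketch of two of them (same variable names as the code above):

        % Normalize every image (column) in one shot instead of a loop:
        S = double(S);
        S = bsxfun(@rdivide, bsxfun(@minus, S, mean(S)), std(S)) * ustd + um;

        % Weights of all training images at once: omega(i,h) = u(:,i)' * dbx(:,h)
        omega = u' * dbx;

    As for the apparent hang at M = 1200: nothing here is an infinite loop, but S = [S temp] and dbx = [dbx temp] grow huge arrays one column at a time, which is quadratic in M. Preallocating (e.g. S = zeros(irow*icol, M)) and dropping the eval call are the first things to try, and printing the loop counter will confirm the code is merely slow, not stuck.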

    Read the article

  • Content Manager Assistant PSVita Linux Does NOT Recognize USB Port

    - by Nicky Bailuc
    I have an external copy of Windows 7 alongside Quantal, and I installed the Content Manager Assistant on it. I was able to start the program by finding its executable in the Windows program folder and running it in Wine; however, Wine didn't recognize my PS Vita, which was connected through one of my USB ports. Is there any way to configure Wine to properly recognize the Vita? Content Manager Assistant is a Windows- and Mac-only program that allows you to transfer files between your PC and PS Vita, kind of like iTunes for the iPod.

    Read the article

  • Automating repetitive game development tasks

    - by MrDatabase
    Disclaimer: this is an open-ended and kinda "far out" question. Over the last few years I've made a few iPhone games. I use very common programs like Xcode and Illustrator to make them. Lately I've become tired of repeating certain tasks over and over again. Here are some examples:

    - in Xcode: "clean target, build, run", over and over again
    - in Xcode: delete image resources and then import updated image resources (identical names)

    I'd like to automate these tasks in Xcode. Any ideas? I've done some automation in Photoshop using the "button mode" thing where you record a macro... that's been very useful. Here's the kinda wacky or "far out" part of the question: how can this automation be done via voice commands (perhaps using a Nuance product or something)? Here's an example of what I'd love to do via a few voice commands:

    1. Save artwork from Illustrator at a user-specified size (@2x versions as well)
    2. Delete "someArt.png" and "someArt@2x.png" from Xcode
    3. Add the updated versions of someArt.png to Xcode
    4. In Xcode: clean target, build, and run

    I know this question probably seems bizarre... but something like this could make certain things substantially easier for game developers. Edit: I wonder if a combination of AppleScript and Nuance might work?
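
    Both of the Xcode chores have command-line equivalents that a macro, an AppleScript wrapper, or (in principle) a voice command could trigger; a hedged sketch with made-up file names, using two tools that ship with OS X:

        # Export resized PNGs from a master image with sips:
        sips --resampleWidth 64  art/someArt_master.png --out res/someArt.png
        sips --resampleWidth 128 art/someArt_master.png --out "res/someArt@2x.png"
        # Clean and build the project from the terminal instead of clicking:
        xcodebuild -project MyGame.xcodeproj -configuration Debug clean build

    Since the resources keep identical names, overwriting the files in place means the next build picks up the new images with no delete/re-import step.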

    Read the article

  • Intercepting/Hijacking iPhone Touch Events for MKMapView

    - by Shawn
    Is there a bug in the 3.0 SDK that disables real-time zooming and intercepting the zoom-in gesture for the MKMapView? I have some real simple code so I can detect tap events, but there are two problems:

    - the zoom-in gesture is always interpreted as a zoom-out
    - none of the zoom gestures update the map's view in real time

    In hitTest, if I return the "map" view, the MKMapView functionality works great, but I don't get the opportunity to intercept the events. Any ideas?

    MyMapView.h:

        @interface MyMapView : MKMapView {
            UIView *map;
        }

    MyMapView.m:

        - (id)initWithFrame:(CGRect)frame {
            if (![super initWithFrame:frame])
                return nil;
            self.multipleTouchEnabled = true;
            return self;
        }

        - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
            NSLog(@"Hit Test");
            map = [super hitTest:point withEvent:event];
            return self;
        }

        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@"%s", __FUNCTION__);
            [map touchesCancelled:touches withEvent:event];
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@"%s", __FUNCTION__);
            [map touchesBegan:touches withEvent:event];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@"%s, %x", __FUNCTION__, mViewTouched);
            [map touchesMoved:touches withEvent:event];
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@"%s, %x", __FUNCTION__, mViewTouched);
            [map touchesEnded:touches withEvent:event];
        }

    Read the article

  • Windows Login Integration

    - by Dusty Roberts
    Hi peeps. I am building facial recognition software for a certain purpose; however, as a spin-off I would like to use the same software/concept to automatically recognize me when I sit in front of the PC, and log me in. Recognition is handled... however, I need to incorporate this into Windows the same way fingerprint logins work. Where can I go to get some more info on doing this?

    Read the article

  • iPhone speech recognition API?

    - by CaptainAwesomePants
    The new iPhone 3GS has support for voice commands, stuff like "call Bill" or "play music by The Strokes" and whatnot. I was looking through the iPhone SDK, but I cannot find any references to this capability. All of the search keywords I chose seem to find only the new voice chat functionality. Does anyone know whether Apple has added voice command APIs to the SDK, or whether it's yet another forbidden API? If it does exist, could someone point a particular class out to me?

    Read the article
