Search Results

Search found 833 results on 34 pages for 'gesture recognition'.


  • How do you add gestures to a UITableViewController?

    - by mea36
    I want to implement right-to-left and left-to-right gestures on a view controller that inherits from UITableViewController. I have the code for the gestures implemented in another view (a UIViewController) and it works there. Here, touchesBegan doesn't even seem to get called. Does anyone know how to do this? Thanks
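
    One approach, sketched here in MonoTouch/C# to match the other iOS snippets on this page (the Objective-C calls are the same; the handler name is a made-up example), is to skip touchesBegan and attach swipe recognizers to the table view instead:

        // using MonoTouch.UIKit; using MonoTouch.Foundation; using MonoTouch.ObjCRuntime;
        public override void ViewDidLoad ()
        {
            base.ViewDidLoad ();

            // The table view's own touch handling tends to swallow raw touches,
            // so gesture recognizers are the more reliable hook.
            var left = new UISwipeGestureRecognizer ();
            left.AddTarget (this, new Selector ("handleSwipe:"));
            left.Direction = UISwipeGestureRecognizerDirection.Left;
            TableView.AddGestureRecognizer (left);

            var right = new UISwipeGestureRecognizer ();
            right.AddTarget (this, new Selector ("handleSwipe:"));
            right.Direction = UISwipeGestureRecognizerDirection.Right;
            TableView.AddGestureRecognizer (right);
        }

        [Export ("handleSwipe:")]
        void HandleSwipe (UISwipeGestureRecognizer sender)
        {
            // sender.Direction tells you which recognizer fired
        }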

    Read the article

  • Improving the efficiency of Kinect for Windows DTWGestureRecognition Application

    - by Ray
    Currently I am using the DTWGestureRecognition open source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I have also implemented voice control for simple things such as opening PowerPoint, Chrome, etc. My main issue is that the application uses quite a bit of CPU, which makes it slow. During gestures and voice commands the CPU usage sometimes spikes to 80-90%, which makes the application unresponsive for a few seconds. I am running it on a 64-bit Windows 7 machine with an i5 processor and 8 GB of RAM. I was wondering whether anyone with experience using this tool, or Kinect in general, has made it more efficient and less of a performance hog. So far I have removed the sections which display the RGB and depth video, but even that did not make a big impact. Any help is appreciated, thanks!
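
    One cheap win to try - sketched here against the Kinect for Windows SDK 1.5 types; the handler name and how it plugs into the DTW tool's own wiring are assumptions - is to throttle how often the expensive skeleton processing and DTW matching run:

        // using Microsoft.Kinect;
        private int frameCount;
        private Skeleton[] skeletons;

        private void SensorSkeletonFrameReady (object sender, SkeletonFrameReadyEventArgs e)
        {
            // Only run the expensive gesture matching on every third skeleton frame
            // (30 fps -> 10 fps), which is usually still plenty for gesture trajectories.
            if (++frameCount % 3 != 0) return;

            using (SkeletonFrame frame = e.OpenSkeletonFrame ())
            {
                if (frame == null) return;
                if (skeletons == null || skeletons.Length != frame.SkeletonArrayLength)
                    skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo (skeletons);

                // ...hand the tracked joints to the DTW recognizer here...
            }
        }

    Profiling which of the two features (gestures or voice) is actually burning the CPU is worth doing first, since the speech engine has its own cost.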

    Read the article

  • Apple Magic Trackpad Gestures carried out as personalized Commands in Windows

    - by Adele
    I want the Apple Magic Trackpad to work on Windows, but NOT as a regular mouse! I need the normal trackpad gestures to work with a C# application (e.g. when carrying out a two-finger swipe to the left, it starts playing a song...). I guess I'll have to write my own driver? Is there a way to use Apple's Magic Trackpad driver for Windows and re-write that one? Or is there any way (an API, a self-written driver) to hook the gestures up to my commands? Or any Raw Input examples? Does anybody know how to do that, or where to start? Thank you so much, I'm really lost.

    Read the article

  • Implementing tracing gestures on iPhone

    - by bmoeskau
    I'd like to create an iPhone app that supports tracing arbitrary shapes with your finger (with accuracy detection). I have seen references to an Apple sample app called "GestureMatch" that supposedly implemented exactly that, but it was removed from the SDK at some point and I cannot find the source anywhere via Google. Does anyone know of a current official sample that demonstrates tracing like this? Or any solid suggestions on other resources to look at? I've done some iPhone programming, but not really anything with the graphics APIs or custom handling of touch gestures, so I'm not sure where to start.
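
    The basic pattern, sketched here in MonoTouch/C# for consistency with the other snippets on this page (the class and the crude accuracy metric are illustrations, not Apple sample code), is to collect points in touchesMoved, redraw the path, and score the trace against a template shape when the touch ends:

        // using System; using System.Collections.Generic; using System.Drawing;
        // using MonoTouch.Foundation; using MonoTouch.UIKit;
        public class TracingView : UIView
        {
            readonly List<PointF> traced = new List<PointF> ();

            public override void TouchesMoved (NSSet touches, UIEvent evt)
            {
                base.TouchesMoved (touches, evt);
                var touch = (UITouch) touches.AnyObject;
                traced.Add (touch.LocationInView (this));  // record the finger path
                SetNeedsDisplay ();                        // redraw the trace in Draw()
            }

            // Crude accuracy score: mean distance from each traced point to the
            // nearest point of the target shape (lower is better).
            public static float Accuracy (List<PointF> trace, List<PointF> target)
            {
                if (trace.Count == 0) return float.MaxValue;
                float total = 0;
                foreach (var p in trace) {
                    float best = float.MaxValue;
                    foreach (var q in target) {
                        float dx = p.X - q.X, dy = p.Y - q.Y;
                        best = Math.Min (best, (float) Math.Sqrt (dx * dx + dy * dy));
                    }
                    total += best;
                }
                return total / trace.Count;
            }
        }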

    Read the article

  • Algorithm to emulate mouse movement as a human does?

    - by Eye of Hell
    Hello, I need to test software that treats some mouse movements as "gestures". For such a task I need to emulate mouse movement from point A to point B, not in a straight line, but as a real mouse moves - with curves, a bit of jaggedness, etc. Is there any available solution (the algorithm/code itself, not a library/exe) that I can use? Of course I could write some simple sinusoidal math myself, but that would be a very crude emulation of a human hand leading a mouse. Perhaps such a task has been solved numerous times already, and I can just borrow existing code? :)
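
    One common trick, shown below as a self-contained sketch (not taken from any particular library): bend the path with a randomly offset Bezier control point and add per-step jitter, then feed the resulting points to whatever cursor-moving call the test harness already uses, with small randomized delays between steps.

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        static class HumanMouse
        {
            static readonly Random rng = new Random ();

            // Points along a quadratic Bezier from a to b whose control point is pushed
            // sideways off the straight line, plus 1-pixel jitter per step, so the path
            // curves and wobbles the way a hand-driven mouse does.
            public static List<Point> Path (Point a, Point b, int steps = 50)
            {
                double dx = b.X - a.X, dy = b.Y - a.Y;
                double len = Math.Sqrt (dx * dx + dy * dy);
                if (len < 1) return new List<Point> { b };

                double off = (rng.NextDouble () - 0.5) * 0.4 * len;   // up to 20% of the distance
                double cx = (a.X + b.X) / 2.0 - dy / len * off;       // midpoint pushed off the line
                double cy = (a.Y + b.Y) / 2.0 + dx / len * off;

                var pts = new List<Point> ();
                for (int i = 0; i <= steps; i++)
                {
                    double t = (double) i / steps;
                    double x = (1 - t) * (1 - t) * a.X + 2 * (1 - t) * t * cx + t * t * b.X;
                    double y = (1 - t) * (1 - t) * a.Y + 2 * (1 - t) * t * cy + t * t * b.Y;
                    pts.Add (new Point ((int) x + rng.Next (-1, 2), (int) y + rng.Next (-1, 2)));
                }
                return pts;
            }
        }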

    Read the article

  • How to start an animation when pressing on a ListView item?

    - by Ares
    I want to implement the following behavior: when I tap a ListView item, it turns red; when I release it, the color reverses back. I made a transition drawable and set it as the item's background. I tried implementing SimpleOnGestureListener, starting the animation in onDown and reversing it in onSingleTapUp, but that doesn't give me any results (it runs fine, but the items don't turn red). If I return super.onDown instead of true, onLongPress is raised every time (even when I release quickly).

        @Override
        public boolean onDown(MotionEvent e) {
            if(_currentRow.getTag()!=null)
                if(_currentRow.getTag().toString().equals("anim"))
                    _currentRow.setBackgroundDrawable(context.getResources()
                        .getDrawable(R.xml.listviewitem_yellow_red));
            _drawable = (TransitionDrawable) _currentRow.getBackground();
            _drawable.startTransition(500);
            return true; //super.onDown(e); // IF I UNCOMMENT IT TURNS INTO RED, BUT TRIGGERED onLongClick event!!!
        }

        @Override
        public boolean onSingleTapUp(MotionEvent e) {
            if(_drawable!=null){
                _drawable.reverseTransition(500);
            }
            return super.onSingleTapUp(e);
        }

        @Override
        public void onLongPress(MotionEvent e) {
            if(getLongClickListener()!=null)
                getLongClickListener().onLongClick(_currentRow, _object);
            super.onLongPress(e);
        }

    If someone knows how to solve this problem, please help me.

    Read the article

  • iPhone Gestures Adding 2 at once

    - by BahaiResearch.com
    Objective-C answers are fine too. Currently I am using this code to add 2 gestures (left / right) to my WebView. It works fine. Can I combine this into less code, to indicate that both gestures go to the same action?

        //LEFT
        UISwipeGestureRecognizer sgr = new UISwipeGestureRecognizer ();
        sgr.AddTarget (this, MainViewController.MySelector);
        sgr.Direction = UISwipeGestureRecognizerDirection.Left;
        sgr.Delegate = new SwipeRecognizerDelegate ();
        this.View.AddGestureRecognizer (sgr);

        //RIGHT
        UISwipeGestureRecognizer sgrRight = new UISwipeGestureRecognizer ();
        sgrRight.AddTarget (this, MainViewController.MySelector);
        sgrRight.Direction = UISwipeGestureRecognizerDirection.Right;
        sgrRight.Delegate = new SwipeRecognizerDelegate ();
        this.View.AddGestureRecognizer (sgrRight);
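
    For what it's worth, a minimal sketch of one way to fold the duplication into a loop, using only the calls already shown above (MySelector and SwipeRecognizerDelegate are assumed to be the same members as in the original code):

        foreach (var direction in new [] { UISwipeGestureRecognizerDirection.Left,
                                           UISwipeGestureRecognizerDirection.Right }) {
            var sgr = new UISwipeGestureRecognizer ();
            sgr.AddTarget (this, MainViewController.MySelector);
            sgr.Direction = direction;
            sgr.Delegate = new SwipeRecognizerDelegate ();
            this.View.AddGestureRecognizer (sgr);
        }

    Two recognizers are still created because a single recognizer with Direction set to Left | Right would fire for both swipes but could not tell the shared action which direction actually occurred.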

    Read the article

  • Emacs saying: <M-kp-7> is undefined when dictating quotes with Dragon naturally speaking 12

    - by Keks Dose
    I dictate my text into Emacs via Dragon NaturallySpeaking 12. Whenever I say (translated from German) 'open quotes', I expect something like " or » to appear on the screen, but I simply get the message <M-kp-2> is undefined. The same goes for 'close quotes': I get <M-kp-7> is undefined. Does anybody know how to define those virtual keyboard strokes? (global-set-key [M-kp-2] "»") does not work.

    Read the article

  • Looking for speech-to-text tool (convert .wav to text)

    - by David
    I have the ability to get .wav files of voice mails emailed to me, but sometimes I'll be sitting in a meeting and I need to know the content of a message without playing it out loud. Are there any good (and, preferably, free) tools for converting .wav files to text? I know Google Voice has this capability, but I can't determine whether it'll work on a file-by-file basis. I realize that this is a difficult research problem, but even an 80% solution might be workable.
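
    Not a packaged tool, but for what it's worth, a rough sketch of feeding a .wav file straight into the Windows desktop recognizer via .NET's System.Speech (this assumes a Windows box is available; accuracy on telephone-quality voicemail audio will be modest):

        using System;
        using System.Speech.Recognition;

        class WavToText
        {
            static void Main (string[] args)
            {
                using (var engine = new SpeechRecognitionEngine ())
                {
                    engine.LoadGrammar (new DictationGrammar ());   // free-form dictation
                    engine.SetInputToWaveFile (args[0]);            // e.g. voicemail.wav

                    // Recognize() returns one phrase at a time; loop until the file runs out.
                    RecognitionResult result;
                    while ((result = engine.Recognize ()) != null)
                        Console.WriteLine (result.Text);
                }
            }
        }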

    Read the article

  • Automatically detect faces in a picture

    - by abel
    At my workplace, passport-sized photographs are scanned together, then cut up into individual pictures and saved with unique file numbers. Currently we use Paint.NET to manually select, cut and save the pictures. I have seen that Sony's Cybershot cameras have face detection. Google also gives me something about iPhoto when searching for face detection. Picasa has face detection too. Are there any ways to auto-detect the faces in a document? It would improve productivity at my workplace by reducing the time needed to cut out the individual images. Sample scanned document (a real document has 5 rows of 4 images each = 20 pics): http://www.memorykeeperphoto.com/images/passport_photo.jpg (fair use) For example, in Picasa 3.8, on clicking View People all the faces are shown and I am asked to name them; can I automatically save these individual faces as separate pictures under those names?

    Read the article

  • onServiceConnected never called after bindService method

    - by Tobia Loschiavo
    Hi, I have a particular situation: a service started by a broadcast receiver starts an activity. I want to make it possible for this activity to communicate back to the service, and I have chosen to use AIDL for that. Everything seems to work except for the bindService() call made in the activity's onCreate(). In fact, I get a null pointer exception because onServiceConnected() is never called, although the service's onBind() method is. Still, bindService() returns true. The service is obviously active because it starts the activity. I know that calling an activity from a service may sound strange, but unfortunately this is the only way to have speech recognition in a service. Thanks in advance

    Read the article

  • Server side speech to text

    - by teepusink
    Hi, I'm trying to install a speech recognition engine server-side (non-commercial preferred, since it's just for experimentation). The idea is to allow a user to say something from a website, and then whatever he/she says shows up on the screen as text. I've read about many available packages - Microsoft Speech, Sphinx, Julius, etc. - but I'm not sure which one performs best and is easiest to install. Also, do I typically need root permission on my hosting to do this kind of thing? I'm using regular shared hosting right now. Thank you, Tee

    Read the article

  • Storing and comparing biometric information

    - by Chathuranga Chandrasekara
    I am not sure whether this is the best place to post this, but it is strongly related to programming so I decided to put it here. In general we use biometrics in computer applications, say for authentication. Let's take two examples: fingerprints and facial recognition. In those cases, how do we keep the information for comparison? We can't keep an image and process it every time, for example. So what methodologies are used to store the data and determine similarity in such cases? Are there any special algorithms designed for that purpose? (E.g. something that returns an approximately equal value for a certain person's fingerprint every time.)
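
    As a toy illustration of the general pattern (this is not any real fingerprint or face algorithm): enrollment stores a fixed-length feature vector - the "template" - extracted from the capture, and verification compares a freshly extracted vector against it with a distance threshold, never an exact equality test, because two captures of the same finger or face are only ever approximately equal:

        using System;

        static class BiometricMatch
        {
            // enrolled and probe are feature vectors produced by the same extraction step
            // (minutiae-derived numbers, face-embedding values, etc.).
            public static bool Matches (double[] enrolled, double[] probe, double threshold)
            {
                if (enrolled.Length != probe.Length)
                    throw new ArgumentException ("templates must have the same length");

                double sum = 0;
                for (int i = 0; i < enrolled.Length; i++)
                {
                    double d = enrolled[i] - probe[i];
                    sum += d * d;
                }
                return Math.Sqrt (sum) < threshold;   // smaller distance = more similar
            }
        }

    The interesting work sits in the extraction step (minutiae for fingerprints, landmark or embedding models for faces); the stored template is usually far smaller than the original image.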

    Read the article

  • Quickest and easiest way to implement speech to text conversion for a small speech subset.

    - by sgtpeppers
    Hi, I want to implement a system that receives speech through a microphone on Mac OS X. I know arbitrary speech recognition is close to impossible without training the system, so I'm willing to restrict it to 10 simple sentences. It must recognize with a high degree of accuracy which of these 10 sentences is being spoken, generate the text and add an entry to a remote MySQL database. Given this architecture, could anyone give me an overview of the best way to go about implementing it? I'm looking for ideas like open-source libraries to minimize the coding, as this is just a prototype application for a demonstration. Basically I'm looking for a quick and easy solution. Thanks!
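
    For illustration, this is roughly what the grammar-constrained pattern looks like with .NET's System.Speech on Windows; on OS X, CMU Sphinx/pocketsphinx supports the same idea through a JSGF grammar or keyword list. The sentence list here is a placeholder:

        using System;
        using System.Speech.Recognition;

        class TenSentences
        {
            static void Main ()
            {
                var sentences = new Choices (
                    "play the next song",
                    "stop the music",
                    "what time is it");          // ...the rest of the 10 sentences go here

                using (var engine = new SpeechRecognitionEngine ())
                {
                    engine.LoadGrammar (new Grammar (new GrammarBuilder (sentences)));
                    engine.SetInputToDefaultAudioDevice ();
                    engine.SpeechRecognized += (s, e) =>
                        Console.WriteLine ("{0} ({1:0.00})", e.Result.Text, e.Result.Confidence);
                    engine.RecognizeAsync (RecognizeMode.Multiple);
                    Console.ReadLine ();
                }
            }
        }

    Whichever engine you pick, restricting it to a fixed grammar like this is what makes "10 sentences, high accuracy, no training" realistic; the recognized text can then be written to the remote MySQL database with an ordinary INSERT.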

    Read the article

  • How to use DoG Pyramid in SIFT

    - by Ahmet Keskin
    Hi all, I am very new to image processing and pattern recognition. I am trying to implement the SIFT algorithm: I am able to create the DoG pyramid and identify the local maxima and minima in each octave. What I don't understand is how to use these local maxima/minima from each octave. How do I combine these points? My question may sound very trivial. I have read Lowe's paper, but could not really understand what he did after he built the DoG pyramid. Any help is appreciated. Thank you
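
    To make the neighbourhood test concrete, here is a small sketch (plain arrays, no library assumed) of the 26-neighbour check on the middle of three adjacent DoG levels within one octave:

        // dog[s][y, x] holds three adjacent DoG levels of one octave (s = 0, 1, 2).
        // A candidate at (y, x) on the middle level must be strictly greater than,
        // or strictly less than, all 26 neighbours in the surrounding 3x3x3 block.
        static bool IsLocalExtremum (float[][,] dog, int y, int x)
        {
            float v = dog[1][y, x];
            bool isMax = true, isMin = true;
            for (int s = 0; s <= 2; s++)
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                    {
                        if (s == 1 && dy == 0 && dx == 0) continue;  // skip the centre pixel
                        float n = dog[s][y + dy, x + dx];
                        if (n >= v) isMax = false;
                        if (n <= v) isMin = false;
                    }
            return isMax || isMin;
        }

    The extrema from the different octaves are not merged into a single point: each surviving candidate is refined to sub-pixel position, rejected if it has low contrast or lies on an edge, assigned the scale of the octave and level it came from plus an orientation, and only then described with the 128-element SIFT descriptor. The final keypoint set is simply the union of the keypoints found in all octaves, each carrying its own scale.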

    Read the article

  • How do you get speech dictated without adding it to a grammar list?

    - by joe
    I'm new to speech recognition, and I'm working on a project that will receive a command from a recognizable list. For example, I would say "Play song". The computer would ask for the song title, and I can say it. It will then compare my answer to my music library and find it. I know how to add recognizable grammar to the SpeechRecognizer object, how to make the computer speak, and how to play a song in iTunes. I cannot, however, figure out how to get it to dictate - to listen to and interpret something that isn't in the grammar list. Is there a method I'm missing? Or has this not yet been simplified by Microsoft? I have no code to show for this, as I am not even sure how to search for this particular idea. Of course, I could have the program read my entire library, but that's not an optimal solution considering I have tens of thousands of songs. Thanks in advance!
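
    If this is .NET's System.Speech (an assumption - the post doesn't say which SpeechRecognizer is in use), free-form listening is just another grammar: DictationGrammar can be loaded alongside or instead of the command grammar. A minimal sketch:

        using System;
        using System.Speech.Recognition;

        class SongTitleDictation
        {
            static void Main ()
            {
                using (var engine = new SpeechRecognitionEngine ())
                {
                    engine.LoadGrammar (new DictationGrammar ());   // no fixed word list needed
                    engine.SetInputToDefaultAudioDevice ();

                    Console.WriteLine ("Say a song title...");
                    RecognitionResult result = engine.Recognize ();
                    if (result != null)
                        Console.WriteLine ("Heard: " + result.Text);  // fuzzy-match this against the library
                }
            }
        }

    A common pattern is to keep the command grammar active, only run a dictation pass after the "Play song" command fires, and then fuzzy-match the dictated text against the library titles rather than expecting an exact hit.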

    Read the article

  • UPK 3.6.1 Enablement Service Pack 1

    - by marc.santosusso
    UPK 3.6.1 Enablement Service Pack 1 is now available on My Oracle Support as Patch ID 9533920 (requires a My Oracle Support account). Below is a list of the enhancements included in this Enablement Service Pack.
    Tabbed Gateway -- Users now have the option to deliver multiple help resources through the in-application support using UPK's new tabbed gateway. This feature is managed using the Configuration Utility for In-Application Support and is documented in the In-Application Support Guide.
    Firefox 3.6 -- The latest release of Mozilla Firefox, version 3.6, is now supported by the UPK Player, SmartHelp browser add-on, and SmartMatch recording technology.
    Oracle E-Business Suite -- Added support for version 12.1.2 for enhanced object and context recognition. The UPK PLL is no longer needed for Oracle versions 12.1.2 and higher.
    Agile PLM -- Agile PLM version 9.3 supported for enhanced object recognition.
    Customer Needs Management -- Customer Needs Management schema 1.0.014 is supported for context recognition.
    Siebel CRM -- Siebel CRM (On Premise) versions 8.2, 8.1.1.2, 8.0.0.9, and 8.1.1 build 21112 (in addition to the previously supported build 21111) supported for enhanced object and context recognition.
    SAP -- SAP GUI for HTML version 7.10 patch 16 supported for enhanced object and context recognition.
    CA -- CA Clarity PPM version R12.5 and CA Service Desk version R12.5 supported for context recognition.
    Java -- Added support for Java 6 update 12.

    Read the article

  • OpenCV: Shift/Align face image relative to reference Image (Image Registration)

    - by Abhischek
    I am new to OpenCV2 and working on a project in emotion recognition. I would like to align a facial image to a reference facial image, and I'd like to get translation working before moving on to rotation. The current idea is to run a search within a limited range on both x and y coordinates and use the sum of squared differences as the error metric to select the optimal x/y parameters to align the image. I'm using the OpenCV face_cascade function to detect the face images; all images are resized to a fixed size (128x128). Question: which parameters of the Mat image do I need to modify to shift the image in a positive/negative direction on both the x and y axes? I believe setImageROI is no longer supported by Mat datatypes? I have the ROIs for both faces available, however I am unsure how to use them.

        void alignImage(vector<Rect> faceROIstore, vector<Mat> faceIMGstore)
        {
            Mat refimg = faceIMGstore[1];    // reference image
            Mat dispimg = faceIMGstore[52];  // "displaced" version of reference image
            //Rect refROI = faceROIstore[1];    // bounding box for face in reference image
            //Rect dispROI = faceROIstore[52];  // bounding box for face in displaced image
            Mat aligned;
            matchTemplate(dispimg, refimg, aligned, CV_TM_SQDIFF_NORMED);
            imshow("Aligned image", aligned);
        }

    The idea for this approach is based on the Image Alignment tutorial by Richard Szeliski. Working on Windows with OpenCV 2.4. Any suggestions are much appreciated.
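
    As a sanity check on the stated plan (a limited-range search scored by sum of squared differences), here is the brute-force version on plain grayscale arrays, kept independent of any OpenCV types; the function name and array layout are made up for the sketch. Note that matchTemplate's output is a score map rather than an aligned image, so to use it for this task you would pass a cropped central patch of one face as the template and read the best location with minMaxLoc.

        // Slide `probe` over `reference` within +/- range pixels and keep the
        // (dx, dy) with the smallest mean squared difference over the overlap.
        // Assumes range is small relative to the 128x128 face images.
        static void FindShift (byte[,] reference, byte[,] probe, int range,
                               out int bestDx, out int bestDy)
        {
            int h = reference.GetLength (0), w = reference.GetLength (1);
            double bestScore = double.MaxValue;
            bestDx = 0; bestDy = 0;

            for (int dy = -range; dy <= range; dy++)
                for (int dx = -range; dx <= range; dx++)
                {
                    double ssd = 0;
                    long count = 0;
                    for (int y = Math.Max (0, -dy); y < Math.Min (h, h - dy); y++)
                        for (int x = Math.Max (0, -dx); x < Math.Min (w, w - dx); x++)
                        {
                            double d = reference[y, x] - probe[y + dy, x + dx];
                            ssd += d * d;
                            count++;
                        }
                    ssd /= count;                      // normalise by the overlap area
                    if (ssd < bestScore) { bestScore = ssd; bestDx = dx; bestDy = dy; }
                }
        }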

    Read the article

  • Writing a program which uses voice recognition... where should I start?

    - by Katsideswide
    Hello! I'm a design student currently dabbling with Arduino code (based on C/C++) and Flash AS3. What I want to do is write a program with voice-control input. The program prompts the user to spell a word. The user spells out the word. The program recognizes whether this is right, adds one to a score if it's correct, and corrects the user if it's wrong. So I'm imagining a big list of words, each with an audio file of the word being read out, with the voice recognition part checking whether the reply matches the prompt. Ideally I'd like to be able to interface this with an Arduino microcontroller so that a physical output with a motor could also be triggered in response. The thing is, I'm not sure whether I can make this program in Flash, in Processing (associated with Arduino), or whether I need some other program-making program. I guess I need to download a good voice-recognition program, but how can I interface it with anything else? Also, I'm on a Mac (not sure if this makes a difference). I apologize for my cluelessness; any hints would be great! -Susan

    Read the article

  • Recognize objects in image

    - by DoomStone
    Hello, I am in the process of doing a school project where we have a robot driving on the ground in between Flamingo plates. We need to create an algorithm that can identify the locations of these plates, so we can create paths around them (we are using A* for that). So far we have worked with the AForge library and created the following class. The only problem is that when it creates the rectangles, it does not take into account that the plates are not always parallel with the camera border; in that case it just creates a rectangle that covers the whole plate. So we need some way to find the rotation of the object, or another way to identify this. I have created an image that might help explain the problem: http://img683.imageshack.us/img683/9835/imagerectangle.png Any help on how I can do this would be greatly appreciated. Any other information or ideas are always welcome.

        public class PasteMap
        {
            private Bitmap image;
            private Bitmap processedImage;
            private Rectangle[] rectangels;

            public void initialize(Bitmap image)
            {
                this.image = image;
            }

            public void process()
            {
                processedImage = image;
                processedImage = applyFilters(processedImage);
                processedImage = filterWhite(processedImage);
                rectangels = extractRectangles(processedImage);
                //rectangels = filterRectangles(rectangels);
                processedImage = drawRectangelsToImage(processedImage, rectangels);
            }

            public Bitmap getProcessedImage
            {
                get { return processedImage; }
            }

            public Rectangle[] getRectangles
            {
                get { return rectangels; }
            }

            private Bitmap applyFilters(Bitmap image)
            {
                image = new ContrastCorrection(2).Apply(image);
                image = new GaussianBlur(10, 10).Apply(image);
                return image;
            }

            private Bitmap filterWhite(Bitmap image)
            {
                Bitmap test = new Bitmap(image.Width, image.Height);
                for (int width = 0; width < image.Width; width++)
                {
                    for (int height = 0; height < image.Height; height++)
                    {
                        if (image.GetPixel(width, height).R > 200 &&
                            image.GetPixel(width, height).G > 200 &&
                            image.GetPixel(width, height).B > 200)
                        {
                            test.SetPixel(width, height, Color.White);
                        }
                        else
                            test.SetPixel(width, height, Color.Black);
                    }
                }
                return test;
            }

            private Rectangle[] extractRectangles(Bitmap image)
            {
                BlobCounter bc = new BlobCounter();
                bc.FilterBlobs = true;
                bc.MinWidth = 5;
                bc.MinHeight = 5;
                // process binary image
                bc.ProcessImage(image);
                Blob[] blobs = bc.GetObjects(image, false);
                // process blobs
                List<Rectangle> rects = new List<Rectangle>();
                foreach (Blob blob in blobs)
                {
                    if (blob.Area > 1000)
                    {
                        rects.Add(blob.Rectangle);
                    }
                }
                return rects.ToArray();
            }

            private Rectangle[] filterRectangles(Rectangle[] rects)
            {
                List<Rectangle> Rectangles = new List<Rectangle>();
                foreach (Rectangle rect in rects)
                {
                    if (rect.Width > 75 && rect.Height > 75)
                        Rectangles.Add(rect);
                }
                return Rectangles.ToArray();
            }

            private Bitmap drawRectangelsToImage(Bitmap image, Rectangle[] rects)
            {
                BitmapData data = image.LockBits(new Rectangle(0, 0, image.Width, image.Height),
                    ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
                foreach (Rectangle rect in rects)
                    Drawing.FillRectangle(data, rect, Color.Red);
                image.UnlockBits(data);
                return image;
            }
        }
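
    One way to get the missing rotation, sketched on a plain boolean mask (a hypothetical helper - fill the mask with the blob's white pixels however you extract them from AForge), is the standard second-central-moment formula for a blob's major-axis angle:

        // Orientation of a binary blob: the angle (radians) of its major axis,
        // which an axis-aligned bounding Rectangle cannot give you.
        static double BlobOrientation (bool[,] mask)
        {
            int h = mask.GetLength (0), w = mask.GetLength (1);
            double sx = 0, sy = 0, n = 0;
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    if (mask[y, x]) { sx += x; sy += y; n++; }
            double cx = sx / n, cy = sy / n;                    // centroid

            double mu20 = 0, mu02 = 0, mu11 = 0;                // second central moments
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    if (mask[y, x])
                    {
                        double dx = x - cx, dy = y - cy;
                        mu20 += dx * dx; mu02 += dy * dy; mu11 += dx * dy;
                    }
            return 0.5 * Math.Atan2 (2 * mu11, mu20 - mu02);    // major-axis angle
        }

    With the centroid, this angle, and the blob's extent along the two axes, each plate can be described as a rotated rectangle instead of an axis-aligned one, which should give A* much tighter obstacles to plan around.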

    Read the article

  • LoadDictation with SAPI

    - by Naveen
    I am able to create alternate dictation grammars using the dictation resource kit or the directions given here. I am not able to load the new dictation topic with C++. I am trying to modify the simpledict sample provided with the SAPI 5.1 SDK. The following doesn't work.

        std::wstring stemp = s2ws("grammar:dictation#Genre");
        LPCWSTR mygrammar = stemp.c_str();
        hr = m_cpDictationGrammar->LoadDictation(mygrammar, SPLO_STATIC);

    Read the article

  • How to use Speech 2 Text in Microsoft Surface

    - by Roflcoptr
    I'd like to use some speech-to-text in my Microsoft Surface application. I saw that it is possible, but I don't really know where to start. Is there any framework/library available, or a code snippet, or a tutorial? I don't even know exactly what I should google for ;) === EDIT === I read that it is necessary to use a grammar to recognize words. So if I want to process free text, is there a predefined grammar for the English language? Or is it a better choice not to use speech-to-text and just record audio files instead?

    Read the article
