Search Results

Search found 833 results on 34 pages for 'gesture recognition'.

Page 13 of 34

  • Question about C# handlers

    - by aF
    I have this code in C#:

        public void startRecognition(string pName)
        {
            presentationName = pName;
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                string grammar = System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Presentations\\" + presentationName +
                    "\\SpeechRecognition\\soundlog.cfg";
                /* if (File.Exists(grammar)) { File.Delete(grammar); } executeCommand(); */
                recContext = new SpSharedRecoContextClass();
                recContext.CreateGrammar(0, out recGrammar);
                if (File.Exists(grammar))
                {
                    recGrammar.LoadCmdFromFile(grammar, SPLOADOPTIONS.SPLO_STATIC);
                    recGrammar.SetGrammarState(SPGRAMMARSTATE.SPGS_ENABLED);
                    recGrammar.SetRuleIdState(0, SPRULESTATE.SPRS_ACTIVE);
                }
                recContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);
                //recContext.RecognitionForOtherContext += new _ISpeechRecoContextEvents_RecognitionForOtherContextEventHandler(handleRecognition);
                //System.Windows.Forms.MessageBox.Show("olari");
            }
        }

        private void handleRecognition(int StreamNumber, object StreamPosition,
            SpeechLib.SpeechRecognitionType RecognitionType, SpeechLib.ISpeechRecoResult Result)
        {
            System.Windows.Forms.MessageBox.Show("entrei");
            string temp = Result.PhraseInfo.GetText(0, -1, true);
            _recognizedText = "";
            foreach (string word in recognizedWords)
            {
                if (temp.Contains(word))
                {
                    _recognizedText = word;
                }
            }
        }

        public void run()
        {
            if (File.Exists(System.Environment.GetEnvironmentVariable("PUBLIC") +
                "\\SoundLog\\Serialization\\Voices\\identifiedVoicesDLL.txt"))
            {
                deserializer = new XmlSerializer(_identifiedVoices.GetType());
                FileStream fs = new FileStream(System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Serialization\\Voices\\identifiedVoicesDLL.txt", FileMode.Open);
                Object o = deserializer.Deserialize(fs);
                fs.Close();
                _identifiedVoices = (double[])o;
            }
            if (File.Exists(System.Environment.GetEnvironmentVariable("PUBLIC") +
                "\\SoundLog\\Serialization\\Voices\\deletedVoicesDLL.txt"))
            {
                deserializer = new XmlSerializer(_deletedVoices.GetType());
                FileStream fs = new FileStream(System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Serialization\\Voices\\deletedVoicesDLL.txt", FileMode.Open);
                Object o = deserializer.Deserialize(fs);
                fs.Close();
                _deletedVoices = (ArrayList)o;
            }
            myTimer.Interval = 5000;
            myTimer.Tick += new EventHandler(clearData);
            myTimer.Start();
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                _waveFormat = new WaveFormat(_samples, 16, 2);
                _recorder = new WaveInRecorder(-1, _waveFormat, 8192 * 2, 3, new BufferDoneEventHandler(DataArrived));
                _scaleHz = (double)_samples / _fftLength;
                _limit = (int)((double)_limitVoice / _scaleHz);
                SoundLogDLL.MelFrequencyCepstrumCoefficients.calculateFrequencies(_samples, _fftLength);
            }
        }

    startRecognition is a method for speech recognition that loads a grammar and registers the recognition handler on this line:

        recContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);

    Now I have a problem. When I call startRecognition before run, both handlers (the recognition one and the Tick handler) work well: if a word is recognized, handleRecognition is called. But when I call run before startRecognition, both methods seem to run fine, yet the recognition handler is never executed, even though I can see that words are recognized (they appear in the Windows Speech Recognition app). What can I do so that the recognition handler is always called?

  • What is the best way to save a list of objects to an XML file and load that list back using C#?

    - by Siracuse
    I have a list of "Gesture" classes in my application:

        List<Gesture> gestures = new List<Gesture>();

    These gesture classes are pretty simple:

        public class Gesture
        {
            public String Name { get; set; }
            public List<Point> Points { get; set; }
            public List<Point> TransformedPoints { get; set; }

            public Gesture(List<Point> Points, String Name)
            {
                this.Points = new List<Point>(Points);
                this.Name = Name;
            }
        }

    I would like to allow the user both to save the current state of "gestures" to a file and to load a file that contains the gestures' data. What is the standard way to do this in C#? Should I use serialization? Should I write a class to handle reading/writing this XML file by hand myself? Are there any other ways?
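    For reference, a minimal sketch of the XmlSerializer route (not from the post; note that XmlSerializer requires a public parameterless constructor, so Gesture as written above would need to gain one, and TransformedPoints would also be serialized unless marked [XmlIgnore]):

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public static class GestureStore
        {
            // Save the whole list to an XML file.
            public static void Save(List<Gesture> gestures, string path)
            {
                var serializer = new XmlSerializer(typeof(List<Gesture>));
                using (var stream = File.Create(path))
                {
                    serializer.Serialize(stream, gestures);
                }
            }

            // Load the list back from the XML file.
            public static List<Gesture> Load(string path)
            {
                var serializer = new XmlSerializer(typeof(List<Gesture>));
                using (var stream = File.OpenRead(path))
                {
                    return (List<Gesture>)serializer.Deserialize(stream);
                }
            }
        }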

  • How can I do something ~after~ an event has fired in C#?

    - by Siracuse
    I'm using the following project to handle global keyboard and mouse hooking in my C# application. The project is basically a wrapper around the Win API call SetWindowsHookEx using either the WH_MOUSE_LL or WH_KEYBOARD_LL constants. It also manages certain state and generally makes this kind of hooking pretty pain-free.

    I'm using this for mouse gesture recognition software I'm working on. Basically, I have it set up so it detects when a global hotkey is pressed down (say CTRL); the user then moves the mouse in the shape of a pre-defined gesture and releases the global hotkey. The KeyDown event is processed and tells my program to start recording mouse locations until it receives the KeyUp event. This works fine and gives users an easy way to enter mouse-gesture mode. Once the KeyUp event fires and the appropriate gesture is detected, the program is supposed to send certain keystrokes, which the user has defined for that particular gesture, to the active window. I'm using the SendKeys.Send/SendWait methods to send output to the current window.

    My problem is this: when the user releases their global hotkey (say CTRL), it fires the KeyUp event. My program takes its recorded mouse points, detects the relevant gesture, and attempts to send the correct input via SendKeys. However, because all of this happens inside the KeyUp event, the global hotkey hasn't finished being processed. For example, if I define a gesture to send the key "A" when detected, and my global hotkey is CTRL, SendKeys will send "A" while CTRL is still "down". So instead of just sending A, I get CTRL-A: instead of physically sending the single character "A", it performs select-all via the CTRL-A shortcut. Even though the user has released CTRL (the global hotkey), the system still considers it down.

    Once my KeyUp event fires, how can I have my program wait for some period of time, or for some event, so I can be sure the global hotkey is truly no longer registered by the system, and only then send the correct input via SendKeys?
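    Not from the post, but a hedged sketch of one workaround: return from the KeyUp handler immediately and do the send from a worker thread that polls GetAsyncKeyState until the system reports the modifier as up (DeferredSend and its method name are illustrative, not an established API):

        using System.Runtime.InteropServices;
        using System.Threading;
        using System.Windows.Forms;

        static class DeferredSend
        {
            [DllImport("user32.dll")]
            private static extern short GetAsyncKeyState(int vKey);

            private const int VK_CONTROL = 0x11;

            // Queue the send so the hook callback returns immediately; the keys
            // go out only once the system stops reporting CTRL as down.
            public static void SendAfterModifierReleased(string keys)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // High bit set means the key is currently down.
                    while ((GetAsyncKeyState(VK_CONTROL) & 0x8000) != 0)
                    {
                        Thread.Sleep(10);
                    }
                    SendKeys.SendWait(keys);
                });
            }
        }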

  • Issue in understanding how to compare classifier performance using ROC

    - by user1214586
    I am trying to demystify pattern recognition techniques and have understood a few of them. I am trying to design a classifier M. A gesture is classified based on the Hamming distance between the sample time series y and the training time series x, and the classifier's outputs are probabilistic values.

    There are 3 classes/categories with labels A, B, C, which classify hand gestures; there are 100 samples for each class to be classified (a single feature, data length = 100). The data are different time series (x coordinate vs. time). The training set is used to assign probabilities indicating which gesture has occurred how many times: out of 10 training samples, if gesture A appeared 6 times, then the probability that a gesture falls under category A is P(A) = 0.6; similarly P(B) = 0.3 and P(C) = 0.1.

    Now I am trying to compare the performance of this classifier with a Bayes classifier, k-NN, principal component analysis (PCA), and a neural network. On what basis, parameters, and method should I do this if I use ROC or cross-validation? Since the features for my classifier are the probabilistic values used for the ROC plot, what should the features be for k-NN, Bayes classification, and PCA? Is there code for this that would be useful? What should the value of k be if there are 3 classes of gestures? Please help, I am in a fix.

  • Building a Universal iPad App - Where is the device recognition code?

    - by JustinXXVII
    I noticed that when I create a new project in Xcode for a universal iPad/iPhone application, the template comes with two separate app delegate files, one for each device. I can't seem to locate the place in code where it decides which app delegate to use.

    I have an existing iPhone project I'd like to port to iPad. My thinking was that if I went ahead and designed the iPad project, I could just import my iPhone classes and nibs, and then use the app delegate and UIDevice to decide which MainWindow.xib to load. The process went like this:

      • Create an iPad project coded as a split-view
      • Create brand new classes and nibs for the iPad
      • Import the iPhone classes and nibs
      • Change build/target settings in accordance with universal apps
      • Use [[UIDevice currentDevice] model] in the app delegate to decide which MainWindow to load

    Will this work, or does the app just automatically know which device it's being deployed on? Thanks for any insight you can offer.

  • Creating Java Neural Networks

    - by Tori Wieldt
    A new article on OTN/Java, titled "Neural Networks on the NetBeans Platform," by Zoran Sevarac, reports on Neuroph Studio, an open source Java neural network development environment built on top of the NetBeans Platform. The article shows how to create Java neural networks for classification.

    From the article:

    "Neural networks are artificial intelligence (machine learning technology) suitable for ill-defined problems, such as recognition, prediction, classification, and control. This article shows how to create some Java neural networks for classification. Note that Neuroph Studio also has support for image recognition, text character recognition, and handwritten letter recognition..."

    "Neuroph Studio is a Java neural network development environment built on top of the NetBeans Platform and Neuroph Framework. It is an IDE-like environment customized for neural network development. Neuroph Studio is a GUI that sits on top of Neuroph Framework. Neuroph Framework is a full-featured Java framework that provides classes for building neural networks..."

    The author, Zoran Sevarac, is a teaching assistant at Belgrade University, Department for Software Engineering, and a researcher at the Laboratory for Artificial Intelligence at Belgrade University. He is also a member of the GOAI Research Network. Through his research, he has been working on the development of a Java neural network framework, which was released as the open source project Neuroph.

    Brainy stuff. Read the article here.

  • Drawing an arrow cursor while the user drags in XNA/MonoGame

    - by adrin
    I am writing a touch-enabled game in MonoGame (XNA-like API) and would like to display an arrow 'cursor' as the user makes a drag gesture from point A to point B. I am not sure how to approach this problem correctly. It seems best to just draw a sprite from A to B and scale it as required; this would, however, mean the sprite gets stretched as the user continues dragging in one direction. Or maybe it's better to render the arrow dynamically so it looks better?
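    A minimal sketch of the stretch-a-sprite option (not from the post; it assumes a hypothetical arrowTexture that points right, with its origin at the left middle of the texture):

        using System;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Draw an arrow stretched from drag start a to current touch position b.
        void DrawArrow(SpriteBatch spriteBatch, Texture2D arrowTexture, Vector2 a, Vector2 b)
        {
            Vector2 delta = b - a;
            float rotation = (float)Math.Atan2(delta.Y, delta.X);        // angle of the drag
            Vector2 origin = new Vector2(0f, arrowTexture.Height / 2f);  // rotate about the tail
            Vector2 scale = new Vector2(delta.Length() / arrowTexture.Width, 1f);
            spriteBatch.Draw(arrowTexture, a, null, Color.White,
                             rotation, origin, scale, SpriteEffects.None, 0f);
        }

    To avoid the stretching concern, the arrowhead could also live in a separate sprite drawn unscaled at b, with only the shaft scaled as above.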

  • Best 2D game engine for iOS

    - by Adelino
    Which is the best 2D game engine for iOS? I really need a game engine that allows me to modify the game code, because I need to control the multi-touch events. I have a framework that detects the gestures the player makes, and I need to test this gesture recognizer in a game, so I have to have the freedom to change the game code. I don't want anything like GameSalad, where you can't control anything. Thanks in advance.

  • How do I disable tablet gestures in Windows 8?

    - by ???
    I'm using a Wacom Intuos4 and I have recently upgraded to Windows 8. I don't have a problem when using Photoshop; however, I occasionally draw on Flash-based online boards. The problem is, when I drag the pen in one direction repetitively (which is basically all I do when drawing), it's detected as a gesture, sometimes causing Chrome to go to the previous page (left drag) and making me lose the entire drawing. Is there a way to disable these "gestures"?

    I believe this is not something caused by Windows 8 (or Charms). I run Windows in English, although English is not the language it was initially installed in; I changed to English long after the installation. Yet when Windows takes a movement as a gesture, a small text pops up next to the cursor telling me what I have just done, and those pop-ups are not even in English.

    I'm sorry for failing to be more specific here, but these gestures could be a feature of Windows (unlikely), the tablet, Chrome, or the computer itself. It's an Acer Aspire, and it has one of those little stickers that list some of its features; one of them reads "Multi-Gesture" (referring to the touchpad, I guess). Could it be that this Multi-Gesture feature somehow decided to expand and apply to my tablet as well? If so, how do I disable it?

  • User intentions analysis

    - by Mark Bramnik
    I'm going to work on a project that does user-action recognition based on what the user does in the system. As far as I understand, there are two main parts here: intercepting the user's actions (say, HTTP traffic on the web, or UI interaction in a thick client), and analyzing the user's intentions. While the first part is rather technical and therefore easy to implement, the second one is AI-related and can be academic. So I was wondering whether someone knows of third-party or academic projects that implement the 'action-recognition' part?

  • Adding a UITapGestureRecognizer to a view and then removing it seems to short-circuit button events

    - by heymon
    In the code below, I am popping up an image view as the result of a user's touchUpInside on a simple info button. There are other buttons on the view. To dismiss the info, I added a UITapGestureRecognizer to my controller's view and hide the info view when the tap is detected. If I don't remove the tapGestureRecognizer, the action is called on every tap. But even when I do remove the gesture action, no buttons receive touchUpInside events once this gesture recognizer has been added. Why?

    Code from my MainViewController:

        - (void)dismissInfo:(UITapGestureRecognizer *)gesture
        {
            [kInfoView setHidden:YES];
            [gesture removeTarget:self action:NULL];
        }

        - (IBAction)displayInfo
        {
            CGRect startFrame = CGRectMake(725, 25, 0, 0), origFrame;
            CGFloat yCenter = [kInfoView frame].size.height / 2 + 200;
            CGPoint startCenter = CGPointMake(724, 25), displayCenter = CGPointMake(384, yCenter);

            UITapGestureRecognizer *g = [[UITapGestureRecognizer alloc]
                initWithTarget:self action:@selector(dismissInfo:)];
            [self.view addGestureRecognizer:g];

            origFrame = [kInfoView frame];
            [kInfoView setCenter:startCenter];
            [kInfoView setHidden:NO];
            [kInfoView setFrame:startFrame];

            [UIView beginAnimations:@"info" context:nil];
            [UIView setAnimationDuration:.5];
            [UIView setAnimationDelegate:self];
            [kInfoView setFrame:origFrame];
            [kInfoView setCenter:displayCenter];
            [UIView commitAnimations];
        }

  • How to disable WM6.5.3 gestures?

    - by Duncan Watts
    I am working on a .NET 2.0 application targeting the WM5 SDK. What is the correct way to disable the gesture functionality when running on a WM6.5.3 device, so that it only affects the forms I am using? This is causing an issue when I have a signature capture control inside a tab control: when the signature is entered, it's quite common for the tab control to switch tabs, because WM6.5.3 picks the strokes up as a gesture. I don't want to disable the gesture functionality device-wide, nor can I upgrade the application to target the WM6.5.3 SDK, as it still needs to work on older devices. Cheers

  • How to disable three finger gestures on touchpad?

    - by oznah
    I am using 12.04 on a MacBook Pro. Everything works really well, but I want to disable 3-finger gestures: it is way too easy to accidentally drag/move a window (a 3-finger gesture) while scrolling (a 2-finger gesture) on the touchpad. I found this answer on Ask Ubuntu; it was marked as answered, but it is not: the functionality is still there, and all that recommendation does is disable the drag marks on the window. This may be tricky, because I want to disable 3-finger gestures but not 2-finger gestures (i.e., I don't want to disable touchpad gestures altogether).

  • Drag Gestures - fractional delta values

    - by Den
    I have an issue with objects moving roughly twice as far as expected when dragging them. I am comparing my application to the standard TouchGestureSample from MSDN. For some reason, in my application the gesture samples have fractional positions and deltas. Both apps use the same Microsoft.Xna.Framework.Input.Touch.dll, v4.0.30319, and I am running both in the standard Windows Phone emulator. I am setting my breakpoint immediately after this line of code in a simple Update method:

        GestureSample gesture = TouchPanel.ReadGesture();

    Typical values in my app:

        Delta = {X:-13.56522 Y:4.166667}
        Position = {X:184.6956 Y:417.7083}

    Typical values in the sample app:

        Delta = {X:7 Y:16}
        Position = {X:497 Y:244}

    Has anyone seen this issue? Does anyone have any suggestions? Thank you.
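    For context, the usual shape of the read loop around that line, sketched from the TouchGestureSample pattern (spritePosition is a hypothetical Vector2 field, and EnabledGestures is assumed to be set during initialization):

        // During initialization:
        TouchPanel.EnabledGestures = GestureType.FreeDrag;

        // In Update():
        while (TouchPanel.IsGestureAvailable)
        {
            GestureSample gesture = TouchPanel.ReadGesture();
            if (gesture.GestureType == GestureType.FreeDrag)
            {
                // Apply the drag delta; fractional deltas accumulate correctly
                // as long as the position is kept as a float, not truncated.
                spritePosition += gesture.Delta;
            }
        }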

  • Alternative methods to log in to Windows

    - by jay
    I've always wanted some cool way to log into Windows, like inserting a designated USB drive or using voice recognition. Only recently did I discover http://www.luxand.com/blink/ which uses facial recognition to log into your PC. What other software lets you change the way you log in?

  • Android HorizontalScrollView scroll by page

    - by Ionic Walrus
    Hi all, I have implemented a slideshow in my Android app using a HorizontalScrollView. This works well, except that I want to scroll to the next image on a scroll gesture (right now it just scrolls past a few images before decelerating). I couldn't find an appropriate way to do this; should I be using a FrameLayout instead? How do I scroll to the next (or previous) image on a scroll gesture? Any help is appreciated, thanks.

  • How can you add a UIGestureRecognizer to a UIBarButtonItem as in the common undo/redo UIPopoverController?

    - by SG
    Problem

    In my iPad app, I cannot attach a popover to a button bar item only after press-and-hold events. But this seems to be standard for undo/redo. How do other apps do this?

    Background

    I have an undo button (UIBarButtonSystemItemUndo) in the toolbar of my UIKit (iPad) app. When I press the undo button, it fires its action, which is undo:, and that executes correctly. However, the "standard UE convention" for undo/redo on iPad is that pressing undo executes an undo, while pressing and holding the button reveals a popover controller where the user selects either "undo" or "redo" until the controller is dismissed.

    The normal way to attach a popover controller is with presentPopoverFromBarButtonItem:, and I can configure this easily enough. To get the popover to show only after press-and-hold, we have to set a view to respond to "long press" gesture events, as in this snippet:

        UILongPressGestureRecognizer *longPressOnUndoGesture = [[UILongPressGestureRecognizer alloc]
            initWithTarget:self action:@selector(handleLongPressOnUndoGesture:)];
        // Broken, because there is no customView in a UIBarButtonSystemItemUndo item
        [self.undoButtonItem.customView addGestureRecognizer:longPressOnUndoGesture];
        [longPressOnUndoGesture release];

    With this, after a press-and-hold on the view the method handleLongPressOnUndoGesture: will get called, and within this method I will configure and display the popover for undo/redo. So far, so good. The problem is that there is no view to attach to: self.undoButtonItem is a UIBarButtonItem, not a view.

    Possible solutions

    1) [The ideal] Attach the gesture recognizer to the button bar item. It is possible to attach a gesture recognizer to a view, but a UIBarButtonItem is not a view. It does have a .customView property, but that property is nil when the bar button item is a standard system type (as it is in this case).

    2) Use another view. I could use the UIToolbar, but that would require some weird hit-testing and be an all-around hack, if it is even possible in the first place. There is no other alternative view to use that I can think of.

    3) Use the customView property. Standard types like UIBarButtonSystemItemUndo have no customView (it is nil). Setting the customView erases the standard contents, which the item needs to keep. This would amount to re-implementing all the look and function of UIBarButtonSystemItemUndo, again if it is even possible to do.

    Question

    How can I attach a gesture recognizer to this "button"? More specifically, how can I implement the standard press-and-hold-to-show-redo-popover in an iPad app?

    Ideas? Thank you very much, especially if someone actually has this working in their app (I'm thinking of you, Omni) and wants to share...

  • Prevent UIGestureRecognizer from firing selector more than once

    - by JK
    I use a UILongPressGestureRecognizer in my app. This is a continuous gesture recognizer, which means it repeatedly fires the selector for the target it was initialized with; I would like the selector to be fired only once. I have tried to prevent further firings by setting the gesture recognizer's enabled property to NO the first time the selector fires, but this only takes effect after the selector has fired again. How can I ensure the selector is fired only once?

  • WM6.5 embedded Internet Explorer finger scrolling

    - by Aaron
    I'm writing a .NET 3.5 application targeted at Windows Mobile 6.5. My application uses an embedded IE control to display content. The standalone IE application allows the user to finger-scroll around the webpage (i.e., touch the screen and drag, instead of using the scrollbar). My embedded IE control has a scrollbar, and when I make the same gesture, I highlight text instead of scrolling. Is there a way to add finger-gesture support to an embedded IE control? Thanks, Aaron

  • Can I have multiple colors in a single TextBlock in WPF?

    - by Siracuse
    I have a line of text in a TextBlock that reads:

        "Detected [gesture] with an accuracy of [accuracy]"

    In WPF, is it possible for me to change the color of individual elements within a TextBlock? Can a TextBlock contain multiple colors? For example, I would like the whole TextBlock to be black except the gesture name, which I would like to be red. Is this possible in WPF?
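    It is, via the TextBlock's Inlines collection of Run elements; a minimal code-behind sketch (gestureName and accuracy are hypothetical values standing in for the post's placeholders):

        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Media;

        string gestureName = "Circle";   // hypothetical values
        double accuracy = 0.97;

        var textBlock = new TextBlock { Foreground = Brushes.Black };
        textBlock.Inlines.Add(new Run("Detected "));
        textBlock.Inlines.Add(new Run(gestureName) { Foreground = Brushes.Red });  // only this Run is red
        textBlock.Inlines.Add(new Run(" with an accuracy of " + accuracy));

    The same structure can be written in XAML with nested Run elements inside the TextBlock.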

  • Attach a gesture recognizer to multiple image views

    - by AppleDeveloper
    Something strange I encountered today while attaching the same gesture recognizer to multiple image views: it gets attached only to the last one. In other words, it can be attached to only one view! I had to create multiple gesture recognizers to meet my requirements. Below is what I have done. Is this correct? Is creating one recognizer per view the only way to attach recognizers to multiple image views? Please note that I don't want to use a UITableView or UIView, put all the image views in it, and attach the gesture recognizer only to that UITableView or UIView: my images are scattered, and I have to detect which image is being dragged. Thanks.

        [imgView1 setUserInteractionEnabled:YES];
        [imgView1 setMultipleTouchEnabled:YES];
        [imgView2 setUserInteractionEnabled:YES];
        [imgView2 setMultipleTouchEnabled:YES];
        [imgView3 setUserInteractionEnabled:YES];
        [imgView3 setMultipleTouchEnabled:YES];
        [imgView4 setUserInteractionEnabled:YES];
        [imgView4 setMultipleTouchEnabled:YES];
        [imgView5 setUserInteractionEnabled:YES];
        [imgView5 setMultipleTouchEnabled:YES];
        [imgView6 setUserInteractionEnabled:YES];
        [imgView6 setMultipleTouchEnabled:YES];

        // Attach a gesture recognizer to each image view
        gestureRecognizer1 = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(gestureHandler:)];
        gestureRecognizer1.delegate = self;
        gestureRecognizer2 = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(gestureHandler:)];
        gestureRecognizer2.delegate = self;
        gestureRecognizer3 = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(gestureHandler:)];
        gestureRecognizer3.delegate = self;
        gestureRecognizer4 = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(gestureHandler:)];
        gestureRecognizer4.delegate = self;
        gestureRecognizer5 = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(gestureHandler:)];
        gestureRecognizer5.delegate = self;
        gestureRecognizer6 = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(gestureHandler:)];
        gestureRecognizer6.delegate = self;

        [imgView1 addGestureRecognizer:gestureRecognizer1];
        [imgView2 addGestureRecognizer:gestureRecognizer2];
        [imgView3 addGestureRecognizer:gestureRecognizer3];
        [imgView4 addGestureRecognizer:gestureRecognizer4];
        [imgView5 addGestureRecognizer:gestureRecognizer5];
        [imgView6 addGestureRecognizer:gestureRecognizer6];

  • EMEA OPN Partner Specialization Awards

    - by Paulo Folgado
    Announcing the EMEA OPN Partner Specialization Awards

    Partner recognition is a fundamental part of OPN Specialized, and we are delighted to announce a new award program for partners in EMEA, the EMEA OPN Partner Specialization Awards. With these awards we will recognize the partners who have specialized their business with Oracle and who are delivering real customer value. Partners who have achieved one or more Specializations in OPN are eligible to submit nominations to become a Partner of the Year for 2010. Our winners will gain valuable prestige and recognition, and will be awarded in a ceremony at Oracle OpenWorld on 19 September 2010.

    Seven award categories are available:

      • Technology Partner of the Year
      • Applications Partner of the Year
      • ISV Partner of the Year
      • Midsize Partner of the Year
      • Industry Partner of the Year
      • Value Added Distributor of the Year
      • Accelerate Partner of the Year

    We encourage you to submit your nominations today! Nominations are open from March 1 to June 11, 2010. For more information on the award categories and criteria, please visit the awards page on the OPN Portal here.
