Search Results

Search found 1303 results on 53 pages for 'voice recognition'.


  • iPhone SDK 3.2 UIGestureRecognizer interfering with UIView animations?

    - by Brian Cooley
    Are there known issues with gesture recognizers and the UIView class methods for animation? I am having problems with a sequence of animations on a UIImageView from UIGestureRecognizer callback. If the sequence of animations is started from a standard callback like TouchUpInside, the animation works fine. If it is started via the UILongPressGestureRecognizer, then the first animation jumps to the end and the second animation immediately begins. Here's a sample that illustrates my problem. In the .xib for the project, I have a UIImageView that is connected to the viewToMove IBOutlet. I also have a UIButton connected to the startButton IBOutlet, and I have connected its TouchUpInside action to the startButtonClicked IBAction. The TouchUpInside action works as I want it to, but the longPressGestureRecognizer skips to the end of the first animation after about half a second. When I NSLog the second animation (animateTo200) I can see that it is called twice when a long press starts the animation but only once when the button's TouchUpInside action starts the animation. - (void)viewDidLoad { [super viewDidLoad]; UILongPressGestureRecognizer *longPressRecognizer = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(startButtonClicked)]; NSArray *recognizerArray = [[NSArray alloc] initWithObjects:longPressRecognizer, nil]; [startButton setGestureRecognizers:recognizerArray]; [longPressRecognizer release]; [recognizerArray release]; } -(IBAction)startButtonClicked { if (viewToMove.center.x < 150) { [self animateTo200:@"Right to left" finished:nil context:nil]; } else { [self animateTo100:@"Right to left" finished:nil context:nil]; } } -(void)animateTo100:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context { [UIView beginAnimations:@"Right to left" context:nil]; [UIView setAnimationDuration:4]; [UIView setAnimationDelegate:self]; [UIView setAnimationDidStopSelector:@selector(animateTo200:finished:context:)]; viewToMove.center = CGPointMake(100.0, 100.0); [UIView commitAnimations]; } -(void)animateTo200:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context { [UIView beginAnimations:@"Left to right" context:nil]; [UIView setAnimationDuration:4]; viewToMove.center = CGPointMake(200.0, 200.0); [UIView commitAnimations]; }

    Read the article

  • Intercepting/Hijacking iPhone Touch Events for MKMapView

    - by Shawn
    Is there a bug in the 3.0 SDK that disables real-time zooming and intercepting the zoom-in gesture for the MKMapView? I have some very simple code so I can detect tap events, but there are two problems: (1) the zoom-in gesture is always interpreted as a zoom-out, and (2) none of the zoom gestures update the map's view in real time. In hitTest, if I return the "map" view, the MKMapView functionality works great, but I don't get the opportunity to intercept the events. Any ideas? MyMapView.h: @interface MyMapView : MKMapView { UIView *map; } MyMapView.m: - (id)initWithFrame:(CGRect)frame { if (![super initWithFrame:frame]) return nil; self.multipleTouchEnabled = true; return self; } - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event { NSLog(@"Hit Test"); map = [super hitTest:point withEvent:event]; return self; } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { NSLog(@"%s", __FUNCTION__); [map touchesCancelled:touches withEvent:event]; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent*)event { NSLog(@"%s", __FUNCTION__); [map touchesBegan:touches withEvent:event]; } - (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event { NSLog(@"%s, %x", __FUNCTION__, mViewTouched); [map touchesMoved:touches withEvent:event]; } - (void)touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event { NSLog(@"%s, %x", __FUNCTION__, mViewTouched); [map touchesEnded:touches withEvent:event]; }

    Read the article

  • Recognize objects in image

    - by DoomStone
    Hello, I am in the process of doing a school project where we have a robot driving on the ground between Flamingo plates. We need to create an algorithm that can identify the locations of these plates so we can create paths around them (we are using A Star for that). So far we have worked with the AForge library and created the following class. The only problem is that when it creates the rectangles it does not take into account that the plates are not always parallel with the camera border, and in that case it just creates a rectangle that covers the whole plate. So we need some way to find the rotation of the object, or another way to identify it. I have created an image that might help explain the problem: http://img683.imageshack.us/img683/9835/imagerectangle.png Any help on how I can do this would be greatly appreciated. Any other information or ideas are always welcome. public class PasteMap { private Bitmap image; private Bitmap processedImage; private Rectangle[] rectangels; public void initialize(Bitmap image) { this.image = image; } public void process() { processedImage = image; processedImage = applyFilters(processedImage); processedImage = filterWhite(processedImage); rectangels = extractRectangles(processedImage); //rectangels = filterRectangles(rectangels); processedImage = drawRectangelsToImage(processedImage, rectangels); } public Bitmap getProcessedImage { get { return processedImage; } } public Rectangle[] getRectangles { get { return rectangels; } } private Bitmap applyFilters(Bitmap image) { image = new ContrastCorrection(2).Apply(image); image = new GaussianBlur(10, 10).Apply(image); return image; } private Bitmap filterWhite(Bitmap image) { Bitmap test = new Bitmap(image.Width, image.Height); for (int width = 0; width < image.Width; width++) { for (int height = 0; height < image.Height; height++) { if (image.GetPixel(width, height).R > 200 && image.GetPixel(width, height).G > 200 && image.GetPixel(width, height).B > 200) { test.SetPixel(width, height, Color.White); } else test.SetPixel(width, height, Color.Black); } } return test; } private Rectangle[] extractRectangles(Bitmap image) { BlobCounter bc = new BlobCounter(); bc.FilterBlobs = true; bc.MinWidth = 5; bc.MinHeight = 5; // process binary image bc.ProcessImage( image ); Blob[] blobs = bc.GetObjects(image, false); // process blobs List<Rectangle> rects = new List<Rectangle>(); foreach (Blob blob in blobs) { if (blob.Area > 1000) { rects.Add(blob.Rectangle); } } return rects.ToArray(); } private Rectangle[] filterRectangles(Rectangle[] rects) { List<Rectangle> Rectangles = new List<Rectangle>(); foreach (Rectangle rect in rects) { if (rect.Width > 75 && rect.Height > 75) Rectangles.Add(rect); } return Rectangles.ToArray(); } private Bitmap drawRectangelsToImage(Bitmap image, Rectangle[] rects) { BitmapData data = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb); foreach (Rectangle rect in rects) Drawing.FillRectangle(data, rect, Color.Red); image.UnlockBits(data); return image; } }
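
    A language-agnostic way to get the rotation of each blob is to compute its second-order central moments and derive the orientation of its major axis from them; that angle can then be used to fit a rotated rectangle instead of the axis-aligned one from Blob.Rectangle. The sketch below is in Python/NumPy for illustration only (the function name and the single-blob assumption are mine), but the same few lines of arithmetic port directly to C#.

        import numpy as np

        def blob_orientation(mask):
            """Return the major-axis angle (radians) of a binary blob mask.

            mask: 2-D array, nonzero where the blob is. Assumes a single blob.
            The angle comes from the second-order central moments, so it works
            regardless of how the plate is rotated relative to the camera.
            """
            ys, xs = np.nonzero(mask)
            x, y = xs - xs.mean(), ys - ys.mean()
            mu20 = np.sum(x * x)
            mu02 = np.sum(y * y)
            mu11 = np.sum(x * y)
            # Angle of the principal axis relative to the image x-axis.
            return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)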

    Read the article

  • LoadDictation with SAPI

    - by Naveen
    I am able to create alternate dictation grammars using the Dictation Resource Kit or the directions given here. However, I am not able to load the new dictation topic with C++. I am trying to modify the simpledict sample provided with the SAPI 5.1 SDK. The following doesn't work: std::wstring stemp = s2ws("grammar:dictation#Genre"); LPCWSTR mygrammar = stemp.c_str(); hr = m_cpDictationGrammar->LoadDictation(mygrammar, SPLO_STATIC);

    Read the article

  • How to use Speech 2 Text in Microsoft Surface

    - by Roflcoptr
    I'd like to use some speech-to-text in my Microsoft Surface application. I saw that it is possible, but I don't really know where to start. Is there any framework/library available, or a code snippet, or a tutorial? I don't even know exactly what I should Google for. Edit: I read that it is necessary to use a grammar to recognize words. So if I want to process free text, is there a predefined grammar for the English language? Or is it a better choice not to use speech-to-text at all and just record audio files instead?

    Read the article

  • How to add words to an already loaded grammar using System.Speech and SAPI 5.3

    - by Kim Major
    Given the following code, Choices choices = new Choices(); choices.Add(new GrammarBuilder(new SemanticResultValue("product", "<product/>"))); GrammarBuilder builder = new GrammarBuilder(); builder.Append(new SemanticResultKey("options", choices.ToGrammarBuilder())); Grammar grammar = new Grammar(builder) { Name = Constants.GrammarNameLanguage}; grammar.Priority = priority; _recognition.LoadGrammar(grammar); How can I add additional words to the loaded grammar? I know this can be achieved both in native code and using the SpeechLib interop, but I prefer to use the managed library. Update: What I want to achieve, is not having to load an entire grammar repeatedly because of individual changes. For small grammars I got good results by calling _recognition.RequestRecognizerUpdate() and then doing the unload of the old grammar and loading of a rebuilt grammar in the event: void Recognition_RecognizerUpdateReached(object sender, RecognizerUpdateReachedEventArgs e) For large grammars this becomes too expensive.

    Read the article

  • Using android gesture on top of menu buttons

    - by chriacua
    What I want is an options menu where the user can choose between: 1) touching a button and then pressing down on the trackball to select it, and 2) drawing predefined gestures from Gestures Builder. As it stands now, I have created my buttons with OnClickListener and the gestures with GestureOverlayView. Then I start a new Activity depending on whether the user pressed a button or executed a gesture. However, when I attempt to draw a gesture, it is not picked up. Only pressing the buttons is recognized. The following is my code: public class Menu extends Activity implements OnClickListener, OnGesturePerformedListener { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); //create TextToSpeech myTTS = new TextToSpeech(this, this); myTTS.setLanguage(Locale.US); //create Gestures mLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures); if (!mLibrary.load()) { finish(); } // Set up click listeners for all the buttons. View playButton = findViewById(R.id.play_button); playButton.setOnClickListener(this); View instructionsButton = findViewById(R.id.instructions_button); instructionsButton.setOnClickListener(this); View modeButton = findViewById(R.id.mode_button); modeButton.setOnClickListener(this); View statsButton = findViewById(R.id.stats_button); statsButton.setOnClickListener(this); View exitButton = findViewById(R.id.exit_button); exitButton.setOnClickListener(this); GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures); gestures.addOnGesturePerformedListener(this); } public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) { ArrayList<Prediction> predictions = mLibrary.recognize(gesture); // We want at least one prediction if (predictions.size() > 0) { Prediction prediction = predictions.get(0); // We want at least some confidence in the result if (prediction.score > 1.0) { // Show the gesture Toast.makeText(this, prediction.name, Toast.LENGTH_SHORT).show(); //User drew symbol for PLAY if (prediction.name.equals("Play")) { myTTS.shutdown(); //connect to game // User drew symbol for INSTRUCTIONS } else if (prediction.name.equals("Instructions")) { myTTS.shutdown(); startActivity(new Intent(this, Instructions.class)); // User drew symbol for MODE } else if (prediction.name.equals("Mode")){ myTTS.shutdown(); startActivity(new Intent(this, Mode.class)); // User drew symbol to QUIT } else { finish(); } } } } @Override public void onClick(View v) { switch (v.getId()){ case R.id.instructions_button: startActivity(new Intent(this, Instructions.class)); break; case R.id.mode_button: startActivity(new Intent(this, Mode.class)); break; case R.id.exit_button: finish(); break; } } Any suggestions would be greatly appreciated!

    Read the article

  • How do we calculate the filters in Mel-Frequency Cepstrum Coefficients Algorithm?

    - by André Ferreira
    After calculating the FFT, we need to do something like this with the frequency spectrum: http://instruct1.cit.cornell.edu/courses/ece576/FinalProjects/f2008/pae26%5Fjsc59/pae26%5Fjsc59/images/melfilt.png We filter the frequency spectrum with those triangles. I saw that there are distinct ways to calculate the triangles; I will make the triangles equal in size up to 1 kHz and obtain the rest with the log function. What should we do now, with the frequency spectrum and the triangles defined? We should filter the spectrum (frequencies limited to each triangle; anything above only counts up to the triangle limit), calculate the value of each triangle, and after that continue the algorithm. But when does the mel conversion happen? m = 2595 log (f/700 + 1). When do we pass from frequency to mel? Can someone guide me in the right direction, please?
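
    On the "when does the mel conversion happen" question: in the usual construction the mel formula is only used to place the triangle edges, so that the filter centres are evenly spaced on the mel scale; the filtering itself still happens on the ordinary linear frequency axis of the FFT. A rough sketch of how such a filterbank is typically built (Python/NumPy; the parameter values are just examples, not anything from the question):

        import numpy as np

        def hz_to_mel(f):
            return 2595.0 * np.log10(1.0 + f / 700.0)

        def mel_to_hz(m):
            return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

        def mel_filterbank(n_filters=26, n_fft=512, sample_rate=16000, f_min=0.0, f_max=8000.0):
            """Triangular filters whose centres are evenly spaced on the mel scale."""
            # 1. Evenly spaced points in mel, then back to Hz -> triangle edges/centres.
            mel_points = np.linspace(hz_to_mel(f_min), hz_to_mel(f_max), n_filters + 2)
            hz_points = mel_to_hz(mel_points)
            # 2. Map those frequencies to FFT bin indices.
            bins = np.floor((n_fft + 1) * hz_points / sample_rate).astype(int)
            # 3. Build one triangle per filter on the linear FFT axis.
            fbank = np.zeros((n_filters, n_fft // 2 + 1))
            for i in range(1, n_filters + 1):
                left, centre, right = bins[i - 1], bins[i], bins[i + 1]
                for k in range(left, centre):
                    fbank[i - 1, k] = (k - left) / max(centre - left, 1)
                for k in range(centre, right):
                    fbank[i - 1, k] = (right - k) / max(right - centre, 1)
            return fbank

        # Usage sketch: energies = fbank @ power_spectrum, then take the log of each
        # energy and a DCT of the log energies to continue the MFCC algorithm.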

    Read the article

  • Android 2.1 fling gesture captured on textview but still a contextmenu opens

    - by hermo
    The following problem seems unique to 2.1; it happens both on an emulator and on a Nexus. The same example works fine on the other platforms I've tested (1.5, 1.6 and 2.0 emulators). I've created a gestureListener as described in this post. The difference is that I've added the listener to a TextView which also has a context menu registered, i.e. something like the following: onCreate(...) { ... // Layout contains a large TextView on which I want to add a context menu tv = findViewById(R.id.text_view); tv.registerForContextMenu(this); // create the gestureListener according to the above-mentioned post. gestureListener = ... // set the listener on the text-view tv.setOnTouchListener(gestureListener); ... } When testing it, the correct gesture is recognized all right, but every other time it also causes the context menu to be opened. As the same example works on non-2.1 platforms, I've got a feeling it is not my code that is the problem... Thanks for any suggestions.

    Read the article

  • How to record / capture audio with RecordControl on Java ME, SE K770i

    - by tomaszs
    I want to record sound in my Java ME app on a K770i, so I used this example of RecordControl in my code: http://java.sun.com/javame/reference/apis/jsr135/javax/microedition/media/control/RecordControl.html It goes like this: import java.util.Vector; import javax.microedition.lcdui.Choice; import javax.microedition.lcdui.Command; import javax.microedition.lcdui.CommandListener; import javax.microedition.lcdui.Display; import javax.microedition.lcdui.Displayable; import javax.microedition.lcdui.List; import javax.microedition.media.Manager; import javax.microedition.media.MediaException; import javax.microedition.midlet.MIDlet; import java.io.*; import javax.microedition.lcdui.*; import javax.microedition.media.*; import javax.microedition.media.control.*; import javax.microedition.midlet.*; import javax.microedition.rms.*; (...) try { // Create a Player that captures live audio. Player p = Manager.createPlayer("capture://audio"); p.realize(); // Get the RecordControl, set the record stream, // start the Player and record for 5 seconds. RecordControl rc = (RecordControl)p.getControl("RecordControl"); ByteArrayOutputStream output = new ByteArrayOutputStream(); rc.setRecordStream(output); rc.startRecord(); p.start(); Thread.currentThread().sleep(5000); rc.commit(); p.close(); } catch (IOException ioe) { } catch (MediaException me) { } catch (InterruptedException ie) { } But unfortunately, when I try to build it, I get: *** Creating directories *** *** Compiling source files *** ..\src\example\audiodemo\AudioPlayer.java:121: cannot find symbol symbol : class RecordControl location: class example.audiodemo.AudioPlayer RecordControl rc = (RecordControl)p.getControl("RecordControl"); ^ ..\src\example\audiodemo\AudioPlayer.java:121: cannot find symbol symbol : class RecordControl location: class example.audiodemo.AudioPlayer RecordControl rc = (RecordControl)p.getControl("RecordControl"); ^ 2 errors So my question is: why is there no RecordControl class, if the documentation says this class should be there? Or is there another method to record / capture audio from the microphone in Java ME on a Sony Ericsson? How do you record sound?

    Read the article

  • Finding a picture in a picture with java?

    - by tarrasch
    What I want to do is analyse input from the screen in the form of pictures. I want to be able to identify a part of an image in a bigger image and get its coordinates within the bigger picture. Example: a small image (shown in the original post) would have to be located in a bigger one, and the result would be the upper right corner and the lower left corner of that part within the big picture. As you can see, the white part of the picture is irrelevant; what I basically need is just the green frame. Is there a library that can do something like this for me? Runtime is not really an issue. What I want to do with this is just generate a few random pixel coordinates and recognize the color in the big picture at that position, to recognize the green box quickly later. And how much would it decrease performance if the white box in the middle is transparent? The question has been asked several times on SO, it seems, without a single answer. I found a solution at http://werner.yellowcouch.org/Papers/subimg/index.html . Unfortunately it's in C++ and I don't understand a thing. It would be nice to have a Java implementation on SO.
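
    Since runtime is explicitly not an issue, plain brute-force template matching is enough to illustrate the idea: slide the small image over the big one and keep the offset with the smallest sum of squared differences. A sketch of that approach (Python/NumPy for brevity; the function name is mine, and the loops port directly to Java over int arrays):

        import numpy as np

        def find_subimage(big, small):
            """Return (top, left) of the best match of `small` inside `big`.

            big, small: 2-D grayscale arrays. Brute-force sum of squared
            differences, O(W*H*w*h), which is fine when runtime does not matter.
            A transparent (white) interior could be handled by masking those
            pixels out of the difference before summing.
            """
            H, W = big.shape
            h, w = small.shape
            best, best_pos = np.inf, (0, 0)
            for top in range(H - h + 1):
                for left in range(W - w + 1):
                    window = big[top:top + h, left:left + w].astype(float)
                    score = np.sum((window - small.astype(float)) ** 2)
                    if score < best:
                        best, best_pos = score, (top, left)
            return best_pos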

    Read the article

  • how to apply Discrete wavelet transform on image

    - by abuasis
    I am implementing an Android application that will verify signature images, and I decided to go with the discrete wavelet transform method (symmlet-8). The method requires applying the discrete wavelet transform, separating the image using low-pass and high-pass filters, and retrieving the wavelet transform coefficients. The equations use notation that I can't understand, so I can't do the math easily, and I also don't know how to apply the low-pass and high-pass filters to my x and y points. Is there any tutorial that shows how to apply the discrete wavelet transform to an image step by step, with actual numbers? Thanks a lot in advance.
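
    The structure of one DWT level is easier to see with the simplest wavelet (Haar) than with symmlet-8: the low-pass output is a scaled sum of neighbouring samples and the high-pass output is a scaled difference, each downsampled by two, applied first along the rows and then along the columns to give the LL/LH/HL/HH sub-bands. Below is a minimal single-level sketch (Python/NumPy, assuming even image dimensions); symmlet-8 changes only the filter coefficients and their length, not this structure.

        import numpy as np

        def haar_dwt_2d(image):
            """One level of the 2-D Haar DWT: rows first, then columns.

            Returns (LL, LH, HL, HH): approximation plus horizontal, vertical
            and diagonal detail sub-bands, each half the size of the input.
            Assumes the image has an even number of rows and columns.
            """
            img = np.asarray(image, dtype=float)
            # Along the rows: low-pass = scaled sum of neighbouring columns,
            # high-pass = scaled difference, both downsampled by two.
            low  = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2.0)
            high = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)
            # Repeat along the columns of each half.
            ll = (low[0::2, :]  + low[1::2, :])  / np.sqrt(2.0)
            lh = (low[0::2, :]  - low[1::2, :])  / np.sqrt(2.0)
            hl = (high[0::2, :] + high[1::2, :]) / np.sqrt(2.0)
            hh = (high[0::2, :] - high[1::2, :]) / np.sqrt(2.0)
            return ll, lh, hl, hh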

    Read the article

  • Recognize Dates In A String

    - by Tim Scott
    I want a class something like this: public interface IDateRecognizer { DateTime[] Recognize(string s); } The dates might exist anywhere in the string and might be any format. For now, I could limit to U.S. culture formats. The dates would not be delimited in any way. They might have arbitrary amounts of whitespace between parts of the date. The ideas I have are: ANTLR, Regex, or hand rolled. I have never used ANTLR, so I would be learning from scratch. I wonder if there are libraries or code samples out there that do something similar that could jump start me. Is ANTLR too heavy for such a narrow use? I have used Regex a lot before, but I hate it for all the reasons that most people hate it. I could certainly hand roll it but I'd rather not re-solve a solved problem. Suggestions? UPDATE: Here is an example. Given this input: This is a date 11/3/63. Here is another one: November 03, 1963; and another one Nov 03, 63 and some more (11/03/1963). The dates could be in any U.S. format. They might have dashes like 11-2-1963 or weird extra whitespace inside like this: Nov   3,   1963, and even maybe the comma is missing like [Nov 3 63] but that's an edge case. The output should be an array of seven DateTimes. Each date would be the same: 11/03/1963 00:00:00.
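
    One middle ground between a full ANTLR grammar and a single monster regex is a small set of alternated patterns, one per family of U.S. formats, with whitespace classes between the parts to absorb the arbitrary spacing; each hit is then handed to a normal date parser for validation. A sketch of that idea (Python for brevity, since only the interface above is C#; the patterns cover just the formats shown in the example, and the 19xx rule for two-digit years is an assumption):

        import re
        from datetime import datetime

        MONTHS = (r"(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|"
                  r"Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)")

        PATTERNS = [
            # 11/3/63, 11-03-1963, (11/03/1963)
            (re.compile(r"\b(\d{1,2})\s*[/-]\s*(\d{1,2})\s*[/-]\s*(\d{2,4})\b"), "numeric"),
            # November 03, 1963 / Nov 3 63 (comma optional, whitespace arbitrary)
            (re.compile(r"\b(" + MONTHS + r")\s+(\d{1,2})\s*,?\s+(\d{2,4})\b", re.IGNORECASE), "name"),
        ]

        def recognize(text):
            """Return the DateTimes found in `text`, assuming U.S. month-first order."""
            found = []
            for pattern, kind in PATTERNS:
                for m in pattern.finditer(text):
                    month, day, year = m.groups()
                    if kind == "name":
                        month = datetime.strptime(month[:3].title(), "%b").month
                    year = int(year)
                    year += 1900 if year < 100 else 0   # two-digit years -> 19xx (assumption)
                    try:
                        found.append(datetime(year, int(month), int(day)))
                    except ValueError:
                        pass                            # e.g. 13/45/63 is not a real date
            return found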

    Read the article

  • How to engineer features for machine learning

    - by Ivo Danihelka
    Do you have any advice or reading on how to engineer features for a machine learning task? Good input features are important even for a neural network. The chosen features will affect the number of hidden neurons and the number of training examples needed. The following is an example problem, but I'm interested in feature engineering in general. A motivating example: what would be a good input when looking at a puzzle (e.g., the 15-puzzle or Sokoban)? Would it be possible to recognize which of two states is closer to the goal?

    Read the article

  • Why doesn't SetNotifyWindowMessage() call my WndProc()?

    - by manuel
    I'm using WinForms, and I'm trying to get SetNotifyWindowMessage() to call the WndProc, but it does not do so. The function call: HRESULT initSAPI(HWND hWnd) { ... if(FAILED( g_cpRecoCtxt->SetNotifyWindowMessage( hWnd, WM_RECOEVENT, 0, 0 ))) MessageBoxW(hWnd, L"Error sending window message", L"SAPI Initialization Error", 0); ... } The WndProc: LRESULT WndProc (HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam) { switch (message) { case WM_RECOEVENT: ProcessRecoEvent(hWnd); break; default: return DefWindowProc(hWnd, message, wParam, lParam); } return 0; } Note: initSAPI() is called on a mouse click event.

    Read the article

  • HOW-TO Make computer sing

    - by Ofir
    Hi, I'm trying to develop an online application where the user writes some text and the software sings it back to the user. I can currently generate the audio file with the words spoken by the computer using espeak, but I have no idea how to make it sound like a song, how to add rhythm to it. I'm able to change the pitch and tempo using rubberband, but that's as far as I've gotten. Does anyone have a clue how to make this happen?

    Read the article

  • Implementing tracing gestures on iPhone

    - by bmoeskau
    I'd like to create an iPhone app that supports tracing of arbitrary shapes using your finger (with accuracy detection). I have seen references to an Apple sample app called "GestureMatch" that supposedly implemented exactly that, but it was removed from the SDK at some point and I cannot find the source anywhere via Google. Does anyone know of a current official sample that demonstrates tracing like this? Or any solid suggestions on other resources to look at? I've done some iPhone programming, but not really anything with the graphics API's or custom handling of touch gestures, so I'm not sure where to start.
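
    Even without the GestureMatch sample, the accuracy-detection part is mostly geometry and can be prototyped outside the SDK: resample the traced touch points and the target shape to the same number of points, scale and translate both into a common frame, and use the mean point-to-point distance as the accuracy score. A sketch of that scoring step under those assumptions (Python for illustration; all names are mine, and both paths are assumed to have at least two points):

        import math

        def resample(points, n=64):
            """Resample a polyline to n points evenly spaced along its length."""
            dists = [0.0]
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
            total = dists[-1] or 1.0
            out, j = [], 0
            for i in range(n):
                target = total * i / (n - 1)
                while j < len(dists) - 2 and dists[j + 1] < target:
                    j += 1
                span = dists[j + 1] - dists[j] or 1.0
                t = (target - dists[j]) / span
                (x0, y0), (x1, y1) = points[j], points[j + 1]
                out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            return out

        def normalize(points):
            """Translate to the centroid and scale to a unit bounding box."""
            xs, ys = [p[0] for p in points], [p[1] for p in points]
            cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
            scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
            return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

        def trace_error(traced, template, n=64):
            """Mean distance between corresponding points; lower means a more accurate trace."""
            a = normalize(resample(traced, n))
            b = normalize(resample(template, n))
            return sum(math.hypot(ax - bx, ay - by) for (ax, ay), (bx, by) in zip(a, b)) / n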

    Read the article

  • Calculating probability that a string has been randomized? - Python

    - by RadiantHex
    Hi folks, this is related to a question I asked earlier (question). I have a list of manually created strings such as: lucy87 gordan_king fancy_unicorn77 joplucky_kanga90 base_belong_to_narwhals and a list of randomized strings: johnkdf pancake90kgjd fancy_jagookfk manhattanljg What gives away that the last set of strings is randomized is that they contain sequences such as 'kjg', 'jgf', 'lkd', ... that rarely occur in real words. Is there any clever way I could separate the strings containing these apparently random sequences from the crowd? I guess this plays a lot on the fact that certain characters are more likely to be placed next to others (e.g. 'co', 'ka', 'ja', ...). Any ideas on this one? Kylotan mentioned Reverend, but I am not sure if it can be used for such a purpose. Help would be much appreciated!
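
    One approach that matches this intuition ('co' and 'ka' are common, 'kjg' is not) is a character n-gram model: learn bigram frequencies from a corpus of real names or English words, score each candidate string by its average log-probability per character transition, and flag everything below a threshold as randomized. A minimal bigram sketch (Python; the training corpus, the 27-symbol smoothing constant and the threshold are placeholders to tune on real data):

        import math
        from collections import defaultdict

        def train_bigrams(words):
            """Count character-to-character transitions over a corpus of real strings."""
            counts = defaultdict(lambda: defaultdict(int))
            for word in words:
                word = "^" + word.lower() + "$"      # start/end markers
                for a, b in zip(word, word[1:]):
                    counts[a][b] += 1
            # Convert to log-probabilities with rough add-one smoothing
            # (27 is a stand-in for the alphabet size; an assumption).
            logprob = {}
            for a, following in counts.items():
                total = sum(following.values())
                logprob[a] = {b: math.log((c + 1) / (total + 27)) for b, c in following.items()}
                logprob[a]["<unseen>"] = math.log(1 / (total + 27))
            return logprob

        def score(s, logprob):
            """Average log-probability per transition; low scores look randomized."""
            s = "^" + s.lower() + "$"
            total, n = 0.0, 0
            for a, b in zip(s, s[1:]):
                table = logprob.get(a, {})
                total += table.get(b, table.get("<unseen>", math.log(1e-6)))
                n += 1
            return total / max(n, 1)

        # Usage sketch: train on real usernames or dictionary words, then treat
        # any candidate with score(candidate, model) < THRESHOLD as randomized,
        # picking THRESHOLD by looking at the scores of known-good strings.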

    Read the article
