Search Results

Search found 916 results on 37 pages for 'speech recognition'.

Page 12/37 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Jacob + Microsoft SpeechAPI trouble

    - by guai
    The following Groovy code kills the JVM on the SetInterest method invocation. It uses Scriptom, the Groovy DSL for Jacob. I spent the whole day searching SAPI's SPFEI macro to rewrite it in Groovy, and I'm not sure it came out right; there may be other bugs too. Help welcomed.

        import org.codehaus.groovy.scriptom.*
        import static org.codehaus.groovy.scriptom.tlb.sapi.SpeechVoiceSpeakFlags.*
        import static org.codehaus.groovy.scriptom.tlb.sapi.SpeechRunState.*
        import static org.codehaus.groovy.scriptom.tlb.sapi.SpeechLib.*
        import static org.codehaus.groovy.scriptom.tlb.sapi.SPEVENTENUM.*

        def spfei(Integer[] args) {
            res = 1 << SPEI_RESERVED1 | 1 << SPEI_RESERVED2
            args.each { res |= 1 << it }
            res
        }

        Scriptom.inApartment {
            def voice = new ActiveXObject('SAPI.SpVoice')
            evtsrc = voice.toInterface(ISpEventSource)
            evtsrc.setInterest(spfei(SPEI_WORD_BOUNDARY), spfei(SPEI_WORD_BOUNDARY))
            evtsrc.events.Callbacks = { args -> println 'jjj' }
            voice.speak "Hello, world", SVSFlagsAsync
        }
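
    One possible culprit, sketched here in Java for clarity: SAPI's SPFEI macro produces a 64-bit mask, roughly ((1ULL << SPEI) | SPFEI_FLAGCHECK), so a 32-bit int shift overflows for the reserved-bit positions. The SPEI values below are assumptions taken from sapi.h, not verified against the Groovy bindings.

        // Sketch of what SPFEI computes; 64-bit arithmetic is the point.
        public final class Spfei {
            static final int SPEI_RESERVED1 = 30; // assumed value from sapi.h
            static final int SPEI_RESERVED2 = 33; // assumed value from sapi.h
            static final long SPFEI_FLAGCHECK =
                    (1L << SPEI_RESERVED1) | (1L << SPEI_RESERVED2);

            // 1 << 33 wraps around in a 32-bit int, which would corrupt the
            // interest mask; the Groovy version should force long arithmetic.
            static long spfei(int... eventIds) {
                long res = SPFEI_FLAGCHECK;
                for (int id : eventIds) {
                    res |= 1L << id;
                }
                return res;
            }
        }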

    Read the article

  • TextToSpeech setOnUtteranceCompletedListener always returns -1 error?

    - by Robert Nekic
    I've been working with Android's TTS functions with general success; however, one piece refuses to work for me: I cannot successfully assign an OnUtteranceCompletedListener to my TextToSpeech object. I've tried implementing OnUtteranceCompletedListener in one of my classes, and I've tried creating a new, stand-alone OnUtteranceCompletedListener instance. Both approaches are simple enough to implement and appear to yield proper listeners without exceptions... yet setOnUtteranceCompletedListener(myListener) ALWAYS returns -1 (ERROR). The documentation for this seems straightforward. Has anyone gotten this to work? I'm targeting SDK 4. Are there known issues with this in SDK 4/v1.6?
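
    One frequently reported cause, offered here only as a hedged sketch rather than a confirmed fix: the listener is registered before the engine finishes initializing. Moving the call into onInit() avoids the -1 result in that case.

        import java.util.HashMap;
        import android.content.Context;
        import android.speech.tts.TextToSpeech;

        public class TtsExample implements TextToSpeech.OnInitListener,
                TextToSpeech.OnUtteranceCompletedListener {
            private final TextToSpeech tts;

            public TtsExample(Context context) {
                tts = new TextToSpeech(context, this);
            }

            @Override
            public void onInit(int status) {
                if (status == TextToSpeech.SUCCESS) {
                    // Registering after init should return SUCCESS (0)
                    // rather than -1.
                    tts.setOnUtteranceCompletedListener(this);
                }
            }

            @Override
            public void onUtteranceCompleted(String utteranceId) {
                // called when the utterance tagged below finishes
            }

            public void say(String text) {
                HashMap<String, String> params = new HashMap<String, String>();
                params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "prompt-1");
                tts.speak(text, TextToSpeech.QUEUE_FLUSH, params);
            }
        }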

    Read the article

  • Android: Voice Recording and saving audio

    - by user1320912
    I am working on an application that will record the voice of the user, save the file on the SD card, and then allow the user to listen to the audio again. I am able to let the user record his voice using the RecognizerIntent, but I can't figure out how to save the audio file and let the user hear it. I would appreciate it if someone could help me out. I have displayed my code below:

        // Setting up the onClickListener for the Audio button
        attachVoice = (Button) findViewById(R.id.AttachVoice_questionandanswer);
        attachVoice.setOnClickListener(new OnClickListener() {
            public void onClick(View v) {
                Intent voiceIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                voiceIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
                voiceIntent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Please Speak");
                startActivityForResult(voiceIntent, VOICE_REQUEST);
            }
        });

        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == VOICE_REQUEST && resultCode == RESULT_OK) {
            }
        }
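
    Note that RecognizerIntent performs speech recognition; it does not hand back a recording. A minimal sketch of recording to the SD card with MediaRecorder and playing it back with MediaPlayer (the file name is illustrative; RECORD_AUDIO and WRITE_EXTERNAL_STORAGE permissions are assumed):

        import java.io.IOException;
        import android.media.MediaPlayer;
        import android.media.MediaRecorder;
        import android.os.Environment;

        public class VoiceNote {
            // Illustrative path; a production app should use getExternalFilesDir()
            private final String path =
                    Environment.getExternalStorageDirectory() + "/voice_note.3gp";
            private MediaRecorder recorder;

            public void startRecording() throws IOException {
                recorder = new MediaRecorder();
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder.setOutputFile(path);
                recorder.prepare();
                recorder.start();
            }

            public void stopRecording() {
                recorder.stop();
                recorder.release();
                recorder = null;
            }

            public void play() throws IOException {
                MediaPlayer player = new MediaPlayer();
                player.setDataSource(path);
                player.prepare();
                player.start();
            }
        }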

    Read the article

  • Using Android gestures on top of menu buttons

    - by chriacua
    What I want is an options menu where the user can navigate by either: 1) touching a button and then pressing down on the trackball to select it, or 2) drawing predefined gestures from Gestures Builder. As it stands now, I have created my buttons with OnClickListener and the gestures with GestureOverlayView, and I start a new Activity depending on whether the user pressed a button or performed a gesture. However, when I attempt to draw a gesture, it is not picked up; only button presses are recognized. The following is my code:

        public class Menu extends Activity implements OnClickListener, OnGesturePerformedListener {

            private TextToSpeech myTTS;
            private GestureLibrary mLibrary;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                // create TextToSpeech
                myTTS = new TextToSpeech(this, this);
                myTTS.setLanguage(Locale.US);

                // create Gestures
                mLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures);
                if (!mLibrary.load()) {
                    finish();
                }

                // Set up click listeners for all the buttons.
                View playButton = findViewById(R.id.play_button);
                playButton.setOnClickListener(this);
                View instructionsButton = findViewById(R.id.instructions_button);
                instructionsButton.setOnClickListener(this);
                View modeButton = findViewById(R.id.mode_button);
                modeButton.setOnClickListener(this);
                View statsButton = findViewById(R.id.stats_button);
                statsButton.setOnClickListener(this);
                View exitButton = findViewById(R.id.exit_button);
                exitButton.setOnClickListener(this);

                GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
                gestures.addOnGesturePerformedListener(this);
            }

            public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
                ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
                // We want at least one prediction
                if (predictions.size() > 0) {
                    Prediction prediction = predictions.get(0);
                    // We want at least some confidence in the result
                    if (prediction.score > 1.0) {
                        // Show the gesture
                        Toast.makeText(this, prediction.name, Toast.LENGTH_SHORT).show();
                        if (prediction.name.equals("Play")) {
                            // User drew symbol for PLAY
                            myTTS.shutdown();
                            // connect to game
                        } else if (prediction.name.equals("Instructions")) {
                            // User drew symbol for INSTRUCTIONS
                            myTTS.shutdown();
                            startActivity(new Intent(this, Instructions.class));
                        } else if (prediction.name.equals("Mode")) {
                            // User drew symbol for MODE
                            myTTS.shutdown();
                            startActivity(new Intent(this, Mode.class));
                        } else {
                            // User drew symbol to QUIT
                            finish();
                        }
                    }
                }
            }

            @Override
            public void onClick(View v) {
                switch (v.getId()) {
                    case R.id.instructions_button:
                        startActivity(new Intent(this, Instructions.class));
                        break;
                    case R.id.mode_button:
                        startActivity(new Intent(this, Mode.class));
                        break;
                    case R.id.exit_button:
                        finish();
                        break;
                }
            }
        }

    Any suggestions would be greatly appreciated!
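
    One direction worth trying, sketched under the assumption that the overlay currently sits beside the buttons rather than on top of them: GestureOverlayView is a FrameLayout, so it can wrap the whole menu and intercept touch events before the buttons consume them.

        // Hypothetical rearrangement of onCreate(); R.layout.main is assumed
        // to be the existing menu layout.
        GestureOverlayView overlay = new GestureOverlayView(this);
        View menu = getLayoutInflater().inflate(R.layout.main, null);
        overlay.addView(menu);
        // Once a gesture is in progress, steal the events from the buttons.
        overlay.setEventsInterceptionEnabled(true);
        overlay.setGestureVisible(true);
        overlay.addOnGesturePerformedListener(this);
        setContentView(overlay);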

    Read the article

  • Android 2.1 fling gesture captured on TextView but a context menu still opens

    - by hermo
    The following problem seems unique to 2.1; it happens both on an emulator and on a Nexus. The same example works fine on the other platforms I've tested (1.5, 1.6 and 2.0 emulators). I've created a gestureListener as described in this post. The difference is that I've added the listener on a TextView which also has a context menu registered, i.e. something like the following:

        onCreate(...) {
            ...
            // Layout contains a large TextView on which I want a context menu
            tv = (TextView) findViewById(R.id.text_view);
            registerForContextMenu(tv);

            // create the gestureListener according to the above-mentioned post
            gestureListener = ...

            // set the listener on the text-view
            tv.setOnTouchListener(gestureListener);
            ...
        }

    When testing it, the correct gesture is recognized alright, but every other time it also causes the context menu to open. As the same example works on non-2.1 platforms, I've got a feeling it is not my code that is the problem... Thankful for any suggestions.
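
    A workaround that is sometimes suggested (a sketch, not a verified 2.1 fix): report the touch as consumed whenever the GestureDetector handles it, so the long-press that opens the context menu never completes.

        // Inside onCreate(); the fling handling itself is elided.
        final GestureDetector detector = new GestureDetector(this,
                new GestureDetector.SimpleOnGestureListener() {
                    @Override
                    public boolean onFling(MotionEvent e1, MotionEvent e2,
                                           float velocityX, float velocityY) {
                        // handle the fling, then report it as consumed
                        return true;
                    }
                });
        tv.setOnTouchListener(new View.OnTouchListener() {
            public boolean onTouch(View v, MotionEvent event) {
                // true = consumed; stops further processing, incl. long-press
                return detector.onTouchEvent(event);
            }
        });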

    Read the article

  • Finding a picture in a picture with Java?

    - by tarrasch
    What I want to do is analyse input from the screen in the form of pictures. I want to be able to identify a part of an image within a bigger image and get its coordinates within the bigger picture. Example: [a small image] would have to be located in [a bigger image], and the result would be the upper-right and lower-left corners of the part within the big picture. As you can see, the white part of the picture is irrelevant; what I basically need is just the green frame. Is there a library that can do something like this for me? Runtime is not really an issue. What I want to do with this is just generate a few random pixel coordinates and check the color in the big picture at those positions, so the green box can be recognized quickly later. And how much would performance decrease if the white box in the middle were transparent? The question has been asked several times on SO, it seems, without a single answer. I found a solution at http://werner.yellowcouch.org/Papers/subimg/index.html. Unfortunately it's in C++ and I do not understand a thing. It would be nice to have a Java implementation on SO.
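
    In the absence of a ready-made library, a brute-force template match is easy to write in plain Java and fits the "runtime is not an issue" constraint. A sketch; the sparse 4-pixel sampling grid stands in for the random-pixel idea and tolerates an irrelevant interior:

        import java.awt.image.BufferedImage;

        public final class SubImageFinder {
            // Returns {x, y} of the top-left match, or null if not found.
            public static int[] find(BufferedImage big, BufferedImage small) {
                int W = big.getWidth(), H = big.getHeight();
                int w = small.getWidth(), h = small.getHeight();
                for (int x = 0; x <= W - w; x++) {
                    for (int y = 0; y <= H - h; y++) {
                        if (matchesAt(big, small, x, y)) {
                            return new int[] { x, y };
                        }
                    }
                }
                return null;
            }

            private static boolean matchesAt(BufferedImage big,
                    BufferedImage small, int ox, int oy) {
                // Sample a sparse grid instead of every pixel.
                for (int x = 0; x < small.getWidth(); x += 4) {
                    for (int y = 0; y < small.getHeight(); y += 4) {
                        if (big.getRGB(ox + x, oy + y) != small.getRGB(x, y)) {
                            return false;
                        }
                    }
                }
                return true;
            }
        }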

    Read the article

  • How to apply the Discrete Wavelet Transform to an image

    - by abuasis
    I am implementing an Android application that will verify signature images, and I decided to go with the Discrete Wavelet Transform method (symmlet-8). The method requires applying the discrete wavelet transform, separating the image with a low-pass and a high-pass filter, and retrieving the wavelet transform coefficients. The equations use notation that I can't understand, so I can't do the math easily, and I also don't know how to apply the low-pass and high-pass filters to my x and y points. Is there any tutorial that shows how to apply the discrete wavelet transform to an image easily, breaking it out in numbers? Thanks a lot in advance.
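
    For intuition, here is a single level of the 1-D Haar transform (the simplest wavelet, not the symmlet-8 asked about, but the structure is identical: low-pass and high-pass filtering followed by downsampling by 2). A 2-D image transform applies this to every row, then to every column of the result.

        public final class Haar {
            // Input length must be even; output is [approximation | detail].
            public static double[] dwt(double[] x) {
                int n = x.length / 2;
                double[] out = new double[x.length];
                for (int i = 0; i < n; i++) {
                    // low-pass: scaled pairwise average -> approximation
                    out[i] = (x[2 * i] + x[2 * i + 1]) / Math.sqrt(2);
                    // high-pass: scaled pairwise difference -> detail
                    out[n + i] = (x[2 * i] - x[2 * i + 1]) / Math.sqrt(2);
                }
                return out;
            }
        }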

    Read the article

  • Recognize Dates In A String

    - by Tim Scott
    I want a class something like this:

        public interface IDateRecognizer {
            DateTime[] Recognize(string s);
        }

    The dates might exist anywhere in the string and might be in any format. For now, I could limit to U.S. culture formats. The dates would not be delimited in any way, and they might have arbitrary amounts of whitespace between parts of the date. The ideas I have are:

    - ANTLR
    - Regex
    - Hand rolled

    I have never used ANTLR, so I would be learning from scratch. I wonder if there are libraries or code samples out there that do something similar that could jump-start me. Is ANTLR too heavy for such a narrow use? I have used Regex a lot before, but I hate it for all the reasons that most people hate it. I could certainly hand roll it, but I'd rather not re-solve a solved problem. Suggestions?

    UPDATE: Here is an example. Given this input:

        This is a date 11/3/63. Here is another one: November 03, 1963; and another one Nov 03, 63 and some more (11/03/1963). The dates could be in any U.S. format. They might have dashes like 11-2-1963 or weird extra whitespace inside like this: Nov   3,   1963, and even maybe the comma is missing like [Nov 3 63] but that's an edge case.

    The output should be an array of seven DateTimes. Each date would be the same: 11/03/1963 00:00:00.
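
    A sketch of the regex-plus-parser idea, in Java rather than C#: a permissive pattern finds candidates, and a list of formats arbitrates. The pattern and format list cover only the formats in the example above and would need widening for production use.

        import java.text.ParseException;
        import java.text.SimpleDateFormat;
        import java.util.ArrayList;
        import java.util.Date;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public final class DateRecognizer {
            private static final Pattern CANDIDATE = Pattern.compile(
                    "\\d{1,2}[/-]\\d{1,2}[/-]\\d{2,4}"               // 11/3/63, 11-2-1963
                    + "|[A-Z][a-z]{2,8}\\s+\\d{1,2},?\\s+\\d{2,4}"); // Nov 3, 63

            // With pattern "yy", a four-digit year in the input is parsed
            // literally, so these also cover the four-digit cases.
            private static final String[] FORMATS = {
                    "M/d/yy", "M-d-yy", "MMM d, yy", "MMM d yy" };

            public static List<Date> recognize(String s) {
                List<Date> result = new ArrayList<Date>();
                Matcher m = CANDIDATE.matcher(s);
                while (m.find()) {
                    // Collapse arbitrary internal whitespace before parsing.
                    String candidate = m.group().replaceAll("\\s+", " ");
                    for (String f : FORMATS) {
                        SimpleDateFormat fmt = new SimpleDateFormat(f);
                        fmt.setLenient(false);
                        try {
                            result.add(fmt.parse(candidate));
                            break;
                        } catch (ParseException ignored) {
                            // try the next format
                        }
                    }
                }
                return result;
            }
        }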

    Read the article

  • How to engineer features for machine learning

    - by Ivo Danihelka
    Do you have any advice, or reading material, on how to engineer features for a machine learning task? Good input features are important even for a neural network. The chosen features will affect the number of hidden neurons and the number of training examples needed. The following is an example problem, but I'm interested in feature engineering in general. A motivating example: what would be a good input when looking at a puzzle (e.g., 15-puzzle or Sokoban)? Would it be possible to recognize which of two states is closer to the goal?
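
    For the 15-puzzle mentioned above, two classic hand-crafted features illustrate the idea: both compress the raw board into a scalar that correlates with distance-to-goal, which is exactly what "which of two states is closer" needs. A sketch:

        public final class PuzzleFeatures {
            // board[i] holds the tile at cell i (0 = blank); the goal
            // configuration is 1, 2, ..., 15 followed by the blank.
            public static int misplacedTiles(int[] board) {
                int count = 0;
                for (int i = 0; i < board.length; i++) {
                    if (board[i] != 0 && board[i] != i + 1) count++;
                }
                return count;
            }

            // Sum over tiles of |row - goalRow| + |col - goalCol|: a much
            // stronger distance signal than the raw board encoding.
            public static int manhattanDistance(int[] board, int side) {
                int sum = 0;
                for (int i = 0; i < board.length; i++) {
                    int tile = board[i];
                    if (tile == 0) continue;
                    int goal = tile - 1;
                    sum += Math.abs(i / side - goal / side)
                         + Math.abs(i % side - goal % side);
                }
                return sum;
            }
        }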

    Read the article

  • Why doesn't SetNotifyWindowMessage() call my WndProc()?

    - by manuel
    I'm using WinForms, and I'm trying to get SetNotifyWindowMessage() to call my WndProc(), but it does not do so. The function call:

        HRESULT initSAPI(HWND hWnd)
        {
            ...
            if (FAILED(g_cpRecoCtxt->SetNotifyWindowMessage(hWnd, WM_RECOEVENT, 0, 0)))
                MessageBoxW(hWnd, L"Error sending window message",
                            L"SAPI Initialization Error", 0);
            ...
        }

    The WndProc:

        LRESULT WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
        {
            switch (message)
            {
            case WM_RECOEVENT:
                ProcessRecoEvent(hWnd);
                break;
            default:
                return DefWindowProc(hWnd, message, wParam, lParam);
            }
            return 0;
        }

    Note: initSAPI() is called on a mouse click event.

    Read the article

  • Playback of opus-codec on Android

    - by qdot
    I'm looking for a way to integrate the Opus codec (the decoder part) with my Android application. Do you know of any implementations that have done so? We are currently using Ogg Vorbis for spoken prompts and are considering going with either Speex (deprecated, but with a few documented attempts) or Opus (currently no documented attempts). If we have to go the NDK route, do you think it would give us an application size improvement? Ogg Vorbis is supported by the platform; neither Speex nor Opus is.

    Read the article

  • Improving the efficiency of Kinect for Windows DTWGestureRecognition Application

    - by Ray
    Currently I am using the DTWGestureRecognition open-source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I have also implemented voice control for simple things such as opening PowerPoint, Chrome, etc. My main issue is that the application uses quite a bit of CPU power, which causes it to become slow. During gestures and voice commands, the CPU usage sometimes spikes to 80-90%, which makes the application unresponsive for a few seconds. I am running it on a 64-bit Windows 7 machine with an i5 processor and 8 GB of RAM. I was wondering whether anyone with experience using this tool, or Kinect in general, has managed to make it more efficient and less of a performance hog. I have already removed the sections that display the RGB and depth video, but even that did not make a big impact. Any help is appreciated, thanks!

    Read the article

  • Great computer-science speeches

    - by sub
    I've looked at some questions here that list the "best" programming books, and wondered why there isn't a question about speeches yet. I think that speeches or presentations from developers, or even from creators of programming languages that were or are heavily used, are particularly interesting. One of my favorite speeches was recommended to me by someone here on SO: The Future of C#. I also like Guido van Rossum's speeches, though he sometimes seems pretty nervous. Another good presentation, in my opinion, is the Google Tech Talk about Go. Which (recorded) programming presentations/speeches are worth watching? Edit: made this a community wiki, as the answer would probably be a pretty long list.

    Read the article

  • Implementing tracing gestures on iPhone

    - by bmoeskau
    I'd like to create an iPhone app that supports tracing arbitrary shapes with your finger (with accuracy detection). I have seen references to an Apple sample app called "GestureMatch" that supposedly implemented exactly that, but it was removed from the SDK at some point and I cannot find the source anywhere via Google. Does anyone know of a current official sample that demonstrates tracing like this, or have any solid suggestions for other resources to look at? I've done some iPhone programming, but not really anything with the graphics APIs or custom handling of touch gestures, so I'm not sure where to start.

    Read the article

  • Calculating probability that a string has been randomized? - Python

    - by RadiantHex
    Hi folks, this is related to a question I asked earlier (question). I have a list of manually created strings such as:

        lucy87
        gordan_king
        fancy_unicorn77
        joplucky_kanga90
        base_belong_to_narwhals

    and a list of randomized strings:

        johnkdf
        pancake90kgjd
        fancy_jagookfk
        manhattanljg

    What gives away that the last set of strings is randomized is that they contain sequences such as 'kjg', 'jgf', 'lkd', ... which rarely occur in real names. Is there any clever way to separate the strings that contain these apparently random sequences from the crowd? I guess this plays a lot on the fact that certain characters are more likely to be placed next to others (e.g. 'co', 'ka', 'ja', ...). Any ideas on this one? Kylotan mentioned Reverend, but I am not sure whether it can be used for such a purpose. Help would be much appreciated!
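
    The character-adjacency idea can be made concrete with a small bigram scorer, sketched here in Java (the page's prevailing language) rather than Python: train bigram counts on known-human strings, then flag strings whose average bigram log-probability falls below a threshold tuned on real data.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public final class GibberishScorer {
            private final Map<String, Integer> bigramCounts =
                    new HashMap<String, Integer>();
            private int total = 0;

            public void train(List<String> humanStrings) {
                for (String s : humanStrings) {
                    String t = normalize(s);
                    for (int i = 0; i + 1 < t.length(); i++) {
                        String bg = t.substring(i, i + 2);
                        Integer c = bigramCounts.get(bg);
                        bigramCounts.put(bg, c == null ? 1 : c + 1);
                        total++;
                    }
                }
            }

            // Average log-probability per bigram, add-one smoothed; random
            // strings ('kj', 'jg', ...) score noticeably lower.
            public double score(String s) {
                String t = normalize(s);
                if (t.length() < 2) return 0;
                double logProb = 0;
                int n = 0;
                for (int i = 0; i + 1 < t.length(); i++) {
                    Integer c = bigramCounts.get(t.substring(i, i + 2));
                    int count = c == null ? 0 : c;
                    logProb += Math.log((count + 1.0) / (total + 26.0 * 26.0));
                    n++;
                }
                return logProb / n;
            }

            private static String normalize(String s) {
                return s.toLowerCase().replaceAll("[^a-z]", "");
            }
        }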

    Read the article

  • Recognizing individual voices

    - by raheel
    I plan to write conversation analysis software which will recognize the individual speakers and their pitch and intensity. Pitch and intensity are somewhat straightforward (pitch via autocorrelation). How would I go about recognizing the individual speakers? For starters, I can assume that only one person speaks at a time.
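
    For reference, the autocorrelation pitch estimate mentioned above fits in a few lines. A sketch that searches lags within a typical voice range (the 50-500 Hz bounds are assumptions to tune, and a real implementation would normalize and window the frame):

        public final class Pitch {
            // frame: a short window of mono samples; returns an F0 estimate in Hz.
            public static double estimate(double[] frame, int sampleRate) {
                int minLag = sampleRate / 500; // ~500 Hz upper voice bound
                int maxLag = sampleRate / 50;  // ~50 Hz lower voice bound
                int bestLag = minLag;
                double best = Double.NEGATIVE_INFINITY;
                for (int lag = minLag; lag <= maxLag && lag < frame.length; lag++) {
                    double sum = 0;
                    for (int i = 0; i + lag < frame.length; i++) {
                        sum += frame[i] * frame[i + lag]; // correlation at this lag
                    }
                    if (sum > best) { best = sum; bestLag = lag; }
                }
                return (double) sampleRate / bestLag;
            }
        }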

    Read the article

  • How to have a UISwipeGestureRecognizer AND UIPanGestureRecognizer work on the same view

    - by Shizam
    How would you set up the gesture recognizers so that a UISwipeGestureRecognizer and a UIPanGestureRecognizer work on the same view, such that a quick touch-and-move is detected as a swipe, but a touch followed by a short delay and then a move is detected as a pan? I've tried various permutations of requireGestureRecognizerToFail, and they didn't quite help: with the swipe gesture set to left, my pan gesture would work up, down and right, but any movement to the left was detected by the swipe gesture.

    Read the article
