Search Results

Search found 1303 results on 53 pages for 'voice recognition'.

  • License plate recognition

    - by WowtaH
    As a side project I'm trying to create an app that scans the license plates of passing cars from my living-room window. I have hooked up a high-resolution camera to my system to capture (moving) images into my C# app. Now all I need is a way to interpret these images into something readable. I'm thinking about some sort of OCR solution that is fast/accurate enough for moving images. Hope you can give me some direction!
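
    For what it's worth, a common way to prototype this is to localize the plate region and hand it to an OCR engine such as Tesseract. Below is a minimal sketch of the idea in Python with OpenCV and pytesseract (the file name and crop rectangle are made-up placeholders); the same approach carries over to C# through one of the .NET Tesseract wrappers.

        import cv2
        import pytesseract

        # Load one captured frame from the camera (path is a placeholder).
        frame = cv2.imread("frame_0001.png")

        # Basic preprocessing: grayscale, then a binary threshold to make
        # the plate characters stand out for the OCR engine.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # In a real pipeline you would first detect the plate region
        # (e.g. with contour analysis or a cascade detector) and crop it;
        # the crop rectangle here is only an assumed placeholder.
        plate = binary[300:360, 400:640]

        # --psm 7 tells Tesseract to treat the image as a single text line.
        text = pytesseract.image_to_string(plate, config="--psm 7")
        print(text.strip())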

    Read the article

  • How to change word recognition in vim spell?

    - by David
    I like that vim 7.0 supports spell checking via :set spell, and I like that it by default only checks comments and text strings in my C code. But I wanted to find a way to change the behavior so that vim will know that when I write words containing underscores, I don't want those words spell-checked. The problem is that I often refer to variable or function names in my comments, so right now vim thinks that each piece of text that isn't a complete correct word is a spelling error. E.g. /* The variable proj_abc_ptr is used in function do_func_stuff */ Most of the time, the pieces separated by underscores are complete words, but other times they are abbreviations that I would prefer not to add to a word list. Is there any global way to tell vim to include _'s as part of the word when spell checking?

    Read the article

  • Software available for singing "lessons" via computer microphone?

    - by drozzy
    Looking for software to help my friend learn to sing. Can't seem to find anything on Google. Does there exist software (preferably downloadable, and from this century!) that records one's voice and then analyzes it to see how "accurate" it was? It would be great if it also included some kind of "lessons", and wasn't simply a sound recorder that shows waveforms. I can't imagine it would be so hard to implement, and there probably is one out there - I just can't find it. Any recommendations are welcome. Thanks.
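
    For what it's worth, the "accuracy" analysis such software performs is essentially pitch tracking: extract the fundamental frequency of each sung note and compare it to the target note. A minimal sketch of that idea in Python, assuming the librosa library and a hypothetical recording.wav:

        import librosa
        import numpy as np

        # Load the recording at its native sample rate (file name is a placeholder).
        y, sr = librosa.load("recording.wav", sr=None)

        # Track the fundamental frequency (pitch) over time with the pYIN algorithm.
        f0, voiced_flag, voiced_prob = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
        )

        # Compare the voiced frames against a target note, e.g. A4 = 440 Hz,
        # and report the deviation in cents (100 cents = one semitone).
        target_hz = 440.0
        cents_off = 1200 * np.log2(f0[voiced_flag] / target_hz)
        print(f"mean deviation from A4: {np.nanmean(cents_off):.1f} cents")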

    Read the article

  • Google Talk and Video outside of GMail

    - by mankoff
    I'd like to use Google Talk/Video without having the full GMail or iGoogle interface displayed. The ideal setup would be the lightweight popout interface (link below) in a small Fluid.app single-instance browser as a stand-alone desktop app. If I log into GMail, the chat sidebar has a phone icon so I can use Google Voice, and a camera icon next to me and some of my contacts. If I log into iGoogle, the chat sidebar has a camera next to me and some contacts, but no phone. I would like to have video chat (and perhaps the phone option) elsewhere. Google provides a chat talkgadget popout URL: http://talkgadget.google.com/talkgadget/popout but there is no phone or camera icon accessible.

    Read the article

  • SIP Service to record all calls?

    - by TK Kocheran
    I read an article that I can't find at the moment which detailed a way to have Google Voice point to a SIP phone number that forwards to your phone, in order to take advantage of the SIP service to: 1) have all calls use a data connection (no usage of cell-phone plan minutes), and 2) record each and every conversation.* I really want to be able to accomplish this, primarily item 2, as all of the phone recorder tools in the Android Market essentially don't work for my Nexus One. I figure that I have one of two options: I could 1) use an existing (hopefully free) service which will do this for me, or 2) set up a SIP service at my home to somehow forward calls through my home server, which would record the calls as well as forward them to my cell phone. Obviously, the path of least resistance is the one I'd like to go down. Can anyone help me out with this? * I do understand that the legality of this varies from state to state here in the US.
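
    If you go the home-server route, Asterisk can do both the forwarding and the recording in a few dialplan lines. A rough extensions.conf sketch, where the context name, the SIP trunk name "provider" and the cell number are all hypothetical placeholders:

        ; Incoming calls from the SIP provider land in this context.
        [from-provider]
        exten => s,1,Answer()
        ; Record both legs of the call to a uniquely named file.
        exten => s,n,MixMonitor(/var/spool/asterisk/monitor/${UNIQUEID}.wav)
        ; Forward the call out to the cell phone, ringing for up to 30 seconds.
        exten => s,n,Dial(SIP/provider/15555551234,30)
        exten => s,n,Hangup()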

    Read the article

  • Best programming aids for a quadriplegic programmer

    - by Peter Rowell
    Before you jump to conclusions, yes, this is programming related. It covers a situation that comes under the heading of, "There, but for the grace of God, go you or I." This is brand-new territory for me, so I'm asking for some serious help here. A young man, Honza Ripa, in a nearby town did the classic Dumb Thing two weeks after graduating from High School -- he dove into shallow water in the Russian River and had a C-4/C-5 break, sometimes called a Swimming Pool break. In a matter of seconds he went from an exceptional golfer and wrestler to a quadriplegic. (Read the story ... all of us should have been so lucky as to have a girlfriend like Brianna.) That was 10 months ago, and he has regained only tiny amounts of control of his right index finger and a couple of other hand/foot motions, none of them fine-grained. His total control of his computer (currently running Win7, but we can change that as needed) is via voice command. Honza's not dumb. He had a 3.7 GPA with AP math and physics. The problems: (1) Since all of his input is via voice command, he is concerned that the predominance of special characters in programming will require a vast amount of verbose commands. Does anyone know of any well-done voice input system specifically designed for programmers? I'm thinking about something that might be modal--e.g. you say "Python input" and it goes into a macro mode for doing class definitions, etc. Given all of the RSI in programmer-land there's got to be something out there. What OS(es) does it run on? (2) I am planning on teaching him Python, which is my preferred language for programming and teaching. Are there any applications / whatevers that are written in Python and would be a particularly good match for engaging him mentally while supporting his disability? One of his expressed interests is in stock investing, but that might not be a good starting point for a brand-new programmer. (3) There are a lot of environments (Flash, JavaScript, etc.) that are not particularly friendly to people with accessibility challenges. I vaguely remember (but cannot find) a research project that basically created an overlay system on top of a screen environment and then allowed macro command construction on top of the screen image. If we can get/train this system, we may be able to remove many hurdles to using the net. (4) I am particularly interested in finding open source Python-based robotics and robotic prostheses projects so that he can simultaneously learn advanced programming concepts while learning to solve some of his own immediate problems. I've done a ton of googling on this, but I know there are things I'm missing. I'm asking the SO community to step up to the plate here. I know this group has the answers, so let me hear them! Overwhelm me with the opportunities that any of us might have/need to still program after such a life-changing event.

    Read the article

  • Navigate touch-tone menus via modem

    - by Kongress
    I have a system that I need to programmatically interface with that requires a set of numbers to be dialed after the phone line is picked up, like a standard automated phone answering system. For instance: dial the number 123-456-7890, wait for the line to be answered, wait 15 seconds for the voice prompt, dial 1234#, hang up. The question is: can I do that through a modem, and if so, how? I know how to dial a number through a modem - it's simply ATDT[phone number] - but that will attempt to initiate a data connection, which will not allow touch-tone number entry. Would a voice modem provide the necessary capability? If so, could anyone provide example commands to accomplish this?
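
    A voice modem is the usual way to do this: switch the modem into voice mode, dial, wait out the greeting, then send the digits as DTMF tones. Below is a rough sketch of that command flow as a Python/pyserial script; the serial port, number and timing are placeholders, and the AT voice commands shown (+FCLASS=8, +VTS, and so on) follow the common IS-101 style but vary by chipset, so treat this as an assumption to check against your modem's manual rather than a guaranteed recipe.

        import time
        import serial

        # Serial port and baud rate are placeholders for your modem.
        modem = serial.Serial("/dev/ttyS0", 115200, timeout=2)

        def at(cmd):
            """Send one AT command and return whatever the modem echoes back."""
            modem.write((cmd + "\r").encode("ascii"))
            time.sleep(0.5)
            return modem.read(256).decode("ascii", errors="replace")

        at("ATZ")              # reset
        at("AT+FCLASS=8")      # switch to voice mode (IS-101 style modems)
        at("ATDT1234567890")   # dial the number

        time.sleep(15)         # wait for the answering system's voice prompt

        for digit in "1234#":  # send the menu selection as DTMF tones
            at("AT+VTS=" + digit)

        at("ATH")              # hang up
        modem.close()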

    Read the article

  • Improving the efficiency of Kinect for Windows DTWGestureRecognition Application

    - by Ray
    Currently I am using the DTWGestureRecognition open source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I also have implemented voice control for simple things such as opening PowerPoint, Chrome, etc. My main issue is that the application uses quite a bit of my CPU power, which causes it to become slow. During gestures and voice commands, the CPU usage sometimes spikes to 80-90%, which causes the application to be unresponsive for a few seconds. I am running it on a 64-bit Windows 7 machine with an i5 processor and 8 GB of RAM. I was wondering if anyone with experience using this tool, or Kinect in general, has made it more efficient and less of a performance hog. Right now I have removed the sections which display the RGB video and the depth video, but even doing that did not make a big impact. Any help is appreciated, thanks!

    Read the article

  • How to make conference rooms possible in Asterisk meetme.conf

    - by kartook
    How can I configure meetme.conf on my Asterisk server? Details: for the conference bridge extension I need three virtual rooms. In each case a caller dials the conference number 567.xxx.xxxx, a voice prompt says "to enter the conference, dial 1", and a second prompt says "enter your conference PIN, then press pound". The conference IDs are 10935 for virtual room 1, 20202 for virtual room 2, and 30303 for virtual room 3.
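
    A minimal sketch of one way to set that up, assuming Asterisk's MeetMe application is available; the PINs shown are made-up placeholders, while the room numbers come from the question:

        ; meetme.conf -- one line per virtual room: conf => <room number>,<PIN>
        [rooms]
        conf => 10935,1111    ; virtual room 1 (PIN is a placeholder)
        conf => 20202,2222    ; virtual room 2
        conf => 30303,3333    ; virtual room 3

        ; extensions.conf -- the extension callers reach after dialing in.
        ; Calling MeetMe() with no room number prompts the caller to enter one;
        ; because a PIN is set in meetme.conf, Asterisk then asks for the
        ; conference PIN as well.
        exten => 1,1,Answer()
        exten => 1,n,MeetMe()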

    Read the article

  • SpeechSynthesizer Exception - Please Help

    - by Chris
    Hi. I have the following code:

        private List<VoiceInfo> GetInstalledVoices(SpeechSynthesizer synthesizer)
        {
            CultureInfo currentCulture = CultureInfo.CurrentCulture;
            var listOfVoiceInfo = from voice in synthesizer.GetInstalledVoices(currentCulture)
                                  select voice.VoiceInfo;
            return listOfVoiceInfo.ToList<VoiceInfo>();
        }

    I then call it from the following snippet:

        var synthesizer = new SpeechSynthesizer();
        var installedVoices = GetInstalledVoices(synthesizer);
        VoiceInfo voice = null;
        if (installedVoices != null && installedVoices.Count > 0)
        {
            voice = installedVoices.FirstOrDefault();
        }
        if (voice != null)
        {
            synthesizer.SelectVoice(voice.Name);
        }

    The line that selects the voice throws the following exception: "Cannot set voice. No matching voice is installed or the voice was disabled." This is being done from within an ASP.NET web application running on Windows Server 2003 R2. When I run it from within Visual Studio 2008, everything works fine. I created a simple console app to perform the same action, then ran it from the Windows Server 2003 machine, and it worked fine. I even modified the code in the console app to loop through each of the installed voices and select each one. No problems. However, when doing the same from within the web application, I get the same error. I am beating my head against a wall on this one. Any help on this would be greatly appreciated. Thanks. Chris

    Read the article

  • "SpeechHypothesized event not raised"

    - by Jankhana
    Hi all, I need to detect the user's voice when they pick up the receiver on the other end, because modems usually start playing files (playback terminal) as soon as the first ring goes through. So I planned to use speech recognition: when the called party says "hello" (or there is any other noise on the line), the application can start playing the file instead of just waiting. I accomplished this with a few settings: I found a few common words that my engine detects when we speak, and the words that come up while it's ringing. It works fine as a stand-alone application, but if I try to integrate it with my main application it just does not raise the "SpeechHypothesized" event. I can't understand why this happens. If I check with a breakpoint, the engine has the delegate assigned and the invocation property is initialized properly, but it still doesn't call the event. For calling I'm using the C4F TAPI manager, and for speech recognition I'm using the System.Speech library of .NET 3.5. The code for the events is as follows:

        engine.SpeechDetected += new EventHandler<SpeechDetectedEventArgs>(engine_SpeechDetected);
        engine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(engine_SpeechRecognized);
        engine.SpeechHypothesized += new EventHandler<SpeechHypothesizedEventArgs>(engine_SpeechHypothesized);
        engine.SpeechRecognitionRejected += new EventHandler<SpeechRecognitionRejectedEventArgs>(engine_SpeechRecognitionRejected);

    All events are raised except the SpeechHypothesized event. Any idea why this happens?

    Read the article

  • Finding patterns in source code

    - by trex279
    If I wanted to learn about pattern recognition in general, what would be a good place to start (recommend a book)? Also, does anybody have any experience/knowledge on how to go about applying these algorithms to find abstraction patterns in programs (repeated code, chunks of code that do the same thing but in slightly different ways, etc.)? Thanks. Edit: I don't mind mathematically intensive books. In fact, that would be a good thing.

    Read the article

  • Finding a small image in a bigger one

    - by tur1ng
    Given an image with large dimensions (e.g. 1,000 x 1,000), what is a good approach to finding a small image (e.g. 50 x 50) in the big one? The smaller image can be rotated and differ in size, but only uniformly scaled (its aspect ratio is preserved). It's not related to any programming language - I'm just interested in pattern recognition. Thank you.
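
    Since the sub-image may be rotated and scaled, plain template matching is not enough; a standard pattern-recognition approach is to match local features that are invariant to rotation and scale. A minimal sketch of that idea in Python with OpenCV's ORB features (the file names are placeholders):

        import cv2

        # Load both images as grayscale (paths are placeholders).
        big = cv2.imread("big.png", cv2.IMREAD_GRAYSCALE)
        small = cv2.imread("small.png", cv2.IMREAD_GRAYSCALE)

        # Detect ORB keypoints and descriptors; ORB is rotation-invariant
        # and tolerates moderate scale changes.
        orb = cv2.ORB_create(nfeatures=1000)
        kp_small, des_small = orb.detectAndCompute(small, None)
        kp_big, des_big = orb.detectAndCompute(big, None)

        # Match descriptors with Hamming distance and keep the best matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_small, des_big), key=lambda m: m.distance)

        # The matched keypoints in the big image indicate where the small image
        # most likely appears; a homography over these points (cv2.findHomography
        # with RANSAC) would pin down its exact position and rotation.
        for m in matches[:10]:
            print(kp_big[m.trainIdx].pt)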

    Read the article

  • Why am I getting GKVoiceChatServiceUnableToConnectError? What's wrong?

    - by erotsppa
    I'm trying to implement GameKit's voice chat. I have the underlying network done over Wi-Fi instead of Bluetooth. Between the simulator and my test device, I was able to accept the invitation. But immediately after, I get the didNotStartWithParticipantID callback with the following error: Error Domain=GKVoiceChatServiceErrorDomain Code=32002 UserInfo=0x3b286f0 "Network conditions prevented connection." Any ideas? What's causing this?

    Read the article

  • Is it possible to programmatically edit a sound file based on frequency?

    - by K-RAN
    Just wondering if it's possible to go through a FLAC, MP3, WAV, etc. file and edit portions of it, or the entire file, by removing sections based on a specific frequency range? So for example, I have a recording of a friend reciting a poem with a few percussion instruments in the background. Could I write a C program that goes through the entire file and cuts out everything except the vocals (the human voice frequency range is 85-255 Hz, from what I've been reading)? Thanks in advance for any ideas!
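
    It is possible; for an uncompressed format like WAV it amounts to running the samples through a digital band-pass filter (compressed formats such as FLAC or MP3 would first be decoded to raw samples). A minimal sketch of the idea, written in Python with SciPy for brevity although the same filter design applies in C; the file names are placeholders, and note that band-passing to 85-255 Hz is a crude vocal isolator, since voices also contain harmonics well above the fundamental:

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import butter, sosfiltfilt

        # Read the recording (path is a placeholder).
        rate, samples = wavfile.read("poem.wav")
        samples = samples.astype(np.float32)

        # Design a 4th-order Butterworth band-pass for 85-255 Hz and apply it
        # forwards and backwards so the result has no phase shift.
        sos = butter(4, [85.0, 255.0], btype="bandpass", fs=rate, output="sos")
        filtered = sosfiltfilt(sos, samples, axis=0)

        # Write the filtered audio back out as 16-bit PCM.
        wavfile.write("poem_vocals_only.wav", rate, filtered.astype(np.int16))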

    Read the article

  • Android: How/where to put gesture code into IME?

    - by CardinalFIB
    Hi, I'm new to Android, but I'm trying to create an IME that allows for gesture-character recognition. I can already write simple apps that perform gesture recognition, but I am not sure where to hook the gesture views/objects into an IME. Here is a starting skeleton of what I have for the IME so far. I would like to use android.gesture.Gesture/Prediction/GestureOverlayView/OnGesturePerformedListener. Does anyone have advice? -- CardinalFIB

    gestureIME.java

        package com.android.jt.gestureIME;

        import android.inputmethodservice.InputMethodService;
        import android.inputmethodservice.Keyboard;
        import android.inputmethodservice.KeyboardView;
        import android.view.View;
        import android.view.inputmethod.EditorInfo;

        public class gestureIME extends InputMethodService {

            private static Keyboard keyboard;
            private static KeyboardView kView;
            private int lastDisplayWidth;

            @Override
            public void onCreate() {
                super.onCreate();
            }

            @Override
            public void onInitializeInterface() {
                int displayWidth;
                if (keyboard != null) {
                    displayWidth = getMaxWidth();
                    if (displayWidth == lastDisplayWidth)
                        return;
                    else
                        lastDisplayWidth = getMaxWidth();
                }
                keyboard = new GestureKeyboard(this, R.xml.keyboard);
            }

            @Override
            public View onCreateInputView() {
                kView = (KeyboardView) getLayoutInflater().inflate(R.layout.input, null);
                kView.setOnKeyboardActionListener(kListener);
                kView.setKeyboard(keyboard);
                return kView;
            }

            @Override
            public View onCreateCandidatesView() {
                return null;
            }

            @Override
            public void onStartInputView(EditorInfo attribute, boolean restarting) {
                super.onStartInputView(attribute, restarting);
                kView.setKeyboard(keyboard);
                kView.closing(); // what does this do???
            }

            @Override
            public void onStartInput(EditorInfo attribute, boolean restarting) {
                super.onStartInput(attribute, restarting);
            }

            @Override
            public void onFinishInput() {
                super.onFinishInput();
            }

            public KeyboardView.OnKeyboardActionListener kListener = new KeyboardView.OnKeyboardActionListener() {
                @Override
                public void onKey(int keyCode, int[] otherKeyCodes) {
                    if (keyCode == Keyboard.KEYCODE_CANCEL)
                        handleClose();
                    if (keyCode == 10)
                        getCurrentInputConnection().commitText(String.valueOf((char) keyCode), 1); // keyCode RETURN
                }

                @Override
                public void onPress(int primaryCode) {} // TODO Auto-generated method stub

                @Override
                public void onRelease(int primaryCode) {} // TODO Auto-generated method stub

                @Override
                public void onText(CharSequence text) {} // TODO Auto-generated method stub

                @Override
                public void swipeDown() {} // TODO Auto-generated method stub

                @Override
                public void swipeLeft() {} // TODO Auto-generated method stub

                @Override
                public void swipeRight() {} // TODO Auto-generated method stub

                @Override
                public void swipeUp() {} // TODO Auto-generated method stub
            };

            private void handleClose() {
                requestHideSelf(0);
                kView.closing();
            }
        }

    GestureKeyboard.java

        package com.android.jt.gestureIME;

        import android.content.Context;
        import android.inputmethodservice.Keyboard;

        public class GestureKeyboard extends Keyboard {
            public GestureKeyboard(Context context, int xmlLayoutResId) {
                super(context, xmlLayoutResId);
            }
        }

    GestureKeyboardView.java

        package com.android.jt.gestureIME;

        import android.content.Context;
        import android.inputmethodservice.KeyboardView;
        import android.inputmethodservice.Keyboard.Key;
        import android.util.AttributeSet;

        public class GestureKeyboardView extends KeyboardView {

            public GestureKeyboardView(Context context, AttributeSet attrs) {
                super(context, attrs);
            }

            public GestureKeyboardView(Context context, AttributeSet attrs, int defStyle) {
                super(context, attrs, defStyle);
            }

            @Override
            protected boolean onLongPress(Key key) {
                return super.onLongPress(key);
            }
        }

    keyboard.xml

        <?xml version="1.0" encoding="utf-8"?>
        <Keyboard xmlns:android="http://schemas.android.com/apk/res/android"
            android:keyWidth="10%p"
            android:horizontalGap="0px"
            android:verticalGap="0px"
            android:keyHeight="@dimen/key_height" >
            <Row android:rowEdgeFlags="bottom">
                <Key android:codes="-3" android:keyLabel="Close" android:keyWidth="20%p" android:keyEdgeFlags="left"/>
                <Key android:codes="10" android:keyLabel="Return" android:keyWidth="20%p" android:keyEdgeFlags="right"/>
            </Row>
        </Keyboard>

    input.xml

        <?xml version="1.0" encoding="utf-8"?>
        <com.android.jt.gestureIME.GestureKeyboardView
            xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/gkeyboard"
            android:layout_alignParentBottom="true"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content" />

    Read the article

  • What to use to make voice chat (and some more) on the web?

    - by Tunococ
    I am trying to make a voice chat available on my website for a small group of people, with some other means to interact such as text messaging, photo sharing, file sharing, simple drawing and silly games. In other words, something similar to the older MSN Messenger, but on the web. Any ideas on what to use? To clarify, I am looking for suggestions on languages and libraries to use. I want to be able to fully customize it as much as possible, because I might want to add other (somewhat interesting) functions later. Low-level programming is fine if required, but platform dependence isn't preferred.

    Read the article
