Search Results

Search found 2499 results on 100 pages for 'face recognition'.

Page 2/100 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?

    - by Michael
    There are two similar namespaces and assemblies for speech recognition in .NET, and I'm trying to understand the differences and when it is appropriate to use one or the other. There is System.Speech.Recognition from the assembly System.Speech (in System.Speech.dll); System.Speech.dll is a core DLL in the .NET Framework class library 3.0 and later. There is also Microsoft.Speech.Recognition from the assembly Microsoft.Speech (in microsoft.speech.dll); Microsoft.Speech.dll is part of the UCMA 2.0 SDK. I find the docs confusing and I have the following questions: System.Speech.Recognition says it is for "The Windows Desktop Speech Technology"; does this mean it cannot be used on a server OS or for high-scale applications? The UCMA 2.0 Speech SDK ( http://msdn.microsoft.com/en-us/library/dd266409%28v=office.13%29.aspx ) says that it requires Microsoft Office Communications Server 2007 R2 as a prerequisite. However, I've been told at conferences and meetings that if I do not require OCS features like presence and workflow, I can use the UCMA 2.0 Speech API without OCS. Is this true? If I'm building a simple recognition app for a server application (say I wanted to automatically transcribe voice mails) and I don't need the features of OCS, what are the differences between the two APIs?

    Read the article

  • voice recognition in android

    - by jaymin
    Hi, I am an Android application developer. I was curious how voice recognition could be implemented on Android. There is built-in support for speech recognition in Android, but how can it be used to implement voice recognition? Are there any links which would help me learn about this topic? Thanks

    Read the article

  • Windows 8 Speech Recognition Language

    - by Greg
    I've got Windows 8 Pro installed (RTM version from MSDN). For an application I use, I need to have the speech recognition language set to English - US. The only option I have is English - UK. I have tried going to Language in Control Panel and setting the only language to English - US; however, English - UK is still the only option in Speech Properties. How can I add a language to the Speech Properties?

    Read the article

  • Can I take the voice data (f.e. in mp3 format) from speech recognition? [closed]

    - by Ersin Gulbahar
    Possible Duplicate: Android: Voice Recording and saving audio. I mean: I use the voice recognition classes on Android and recognition succeeds, but I want the actual voice data, not just the recognized words. For example, I say 'teacher' and Android reports that I said 'teacher'. That is fine, but I also want the recording of my voice saying 'teacher'. Where is it? Can I take it and save it to another location? This is the class I use for speech to text:

        package net.viralpatel.android.speechtotextdemo;

        import java.util.ArrayList;
        import android.app.Activity;
        import android.content.ActivityNotFoundException;
        import android.content.Intent;
        import android.os.Bundle;
        import android.speech.RecognizerIntent;
        import android.view.Menu;
        import android.view.View;
        import android.widget.ImageButton;
        import android.widget.TextView;
        import android.widget.Toast;

        public class MainActivity extends Activity {
            protected static final int RESULT_SPEECH = 1;

            private ImageButton btnSpeak;
            private TextView txtText;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                txtText = (TextView) findViewById(R.id.txtText);
                btnSpeak = (ImageButton) findViewById(R.id.btnSpeak);
                btnSpeak.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, "en-US");
                        try {
                            startActivityForResult(intent, RESULT_SPEECH);
                            txtText.setText("");
                        } catch (ActivityNotFoundException a) {
                            Toast t = Toast.makeText(getApplicationContext(),
                                    "Ops! Your device doesn't support Speech to Text",
                                    Toast.LENGTH_SHORT);
                            t.show();
                        }
                    }
                });
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.activity_main, menu);
                return true;
            }

            @Override
            protected void onActivityResult(int requestCode, int resultCode, Intent data) {
                super.onActivityResult(requestCode, resultCode, data);
                switch (requestCode) {
                    case RESULT_SPEECH: {
                        if (resultCode == RESULT_OK && null != data) {
                            ArrayList<String> text =
                                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                            txtText.setText(text.get(0));
                        }
                        break;
                    }
                }
            }
        }

    Thanks.
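    The RecognizerIntent flow above does not, by itself, hand the captured audio back to the caller, so the usual workaround is to capture the microphone yourself and keep that file alongside the recognized text. A minimal sketch using MediaRecorder (the class name, file name and storage location are illustrative; RECORD_AUDIO and external-storage write permissions are assumed):

        import java.io.File;
        import java.io.IOException;
        import android.media.MediaRecorder;
        import android.os.Environment;

        public class VoiceCapture {
            private MediaRecorder recorder;

            // Start capturing microphone audio to a .3gp file on external storage.
            public File startRecording() throws IOException {
                File out = new File(Environment.getExternalStorageDirectory(), "capture.3gp");
                recorder = new MediaRecorder();
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder.setOutputFile(out.getAbsolutePath());
                recorder.prepare();
                recorder.start();
                return out;
            }

            // Stop and release the recorder; the file written above now holds the raw voice data.
            public void stopRecording() {
                if (recorder != null) {
                    recorder.stop();
                    recorder.release();
                    recorder = null;
                }
            }
        }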

    Read the article

  • How to define bold, italic using @font-face

    - by Felix
    I'm looking at the MDC page for the @font-face CSS rule, but I don't get one thing. I have separate files for bold, italic and bold + italic; how can I embed all three files in one @font-face rule? For example, if I have:

        @font-face {
            font-family: "DejaVu Sans";
            src: url("./fonts/DejaVuSans.ttf") format("ttf");
        }
        strong {
            font-family: "DejaVu Sans";
            font-weight: bold;
        }

    The browser will not know what font to use for bold (because that file is DejaVuSansBold.ttf), so it will default to something I probably don't want. How can I tell the browser all the different variants I have for a certain font?
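    The usual answer is not one rule but one @font-face rule per file, all sharing the same font-family and differing in their font-weight / font-style descriptors; the browser then picks the right file when a rule asks for bold or italic. A sketch, assuming the bold and italic files are named DejaVuSansBold.ttf and DejaVuSansOblique.ttf (the file names are illustrative; note also that the registered format hint for .ttf is "truetype", not "ttf"):

        @font-face {
            font-family: "DejaVu Sans";
            src: url("./fonts/DejaVuSans.ttf") format("truetype");
            font-weight: normal;
            font-style: normal;
        }
        @font-face {
            font-family: "DejaVu Sans";
            src: url("./fonts/DejaVuSansBold.ttf") format("truetype");
            font-weight: bold;
            font-style: normal;
        }
        @font-face {
            font-family: "DejaVu Sans";
            src: url("./fonts/DejaVuSansOblique.ttf") format("truetype");
            font-weight: normal;
            font-style: italic;
        }
        /* Now "font-weight: bold" on an element set in "DejaVu Sans" picks the bold file. */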

    Read the article

  • @font-face Not Working on Other Computers

    - by Raphael Essoo-Snowdon
    Hey Guys, I've been working on my first HTML5/CSS3 site, and it's been going well for the most part. I'm totally loving the new @font-face property, and it works perfectly on my machine. The problem I'm having is that when the site is previewed on another device (computer, iPad, iPhone), it doesn't seem to recognise the @font-face rule and uses the fallback font instead. Site link: http://williamben.com/ Here's the CSS I'm using:

        @font-face {
            font-family: 'League Gothic';
            src: url('_/type/league_gothic.otf') format('otf');
        }

    Any help would be appreciated. Thanks in advance.
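    One likely culprit, assuming the font file itself is reachable from the page: 'otf' is not a registered format hint, so browsers that honour the hint skip the source, and several of the devices listed do not accept raw OpenType at all. A sketch of a more portable declaration (the .eot and .woff files are hypothetical alternatives generated from the .otf):

        @font-face {
            font-family: 'League Gothic';
            src: url('_/type/league_gothic.eot');                     /* IE */
            src: url('_/type/league_gothic.woff') format('woff'),     /* most modern browsers */
                 url('_/type/league_gothic.otf') format('opentype');  /* correct hint for .otf */
            font-weight: normal;
            font-style: normal;
        }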

    Read the article

  • Sound recognition software

    - by Cawas
    I'm looking for software able to recognize a specific sound and then perform some action. I want to leave my notebook close to the house intercom so that when someone rings it, a very specific and unique sound, it will send me an email at my office or something. The main issue is that there are a lot of different noises there, but none would be as loud as the intercom at the specific place I've left the microphone. Is there any software out there able to do this? Hopefully with a Mac version. I suspect this has little to do with speech or voice recognition technologies, or with the software used for them.

    Read the article

  • Form recognition using OCR and return image of the value

    - by Jonathan
    I'm on a project that processes hundreds of forms. The forms have consistent formats but are filled out by hand by different people. I need a way to quickly process all of this data into electronic form. OCR for typed documents seems mature, but for handwriting it is very lacking. With this in mind, consider a form with several fields like this: Field_1: Value1 (for example, Name: John, where Name is the field and John is the value). Since the forms are structured and the labels are typed, OCR should be able to recognize the fields. However, the values of the fields are handwritten, and OCR will perform very poorly on them. So is there a way for the fields to be recognized in the image and then an image chunk of the value to be returned? Thanks.
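    One workable split, assuming an OCR engine that reports a bounding box for each typed field label: use OCR only to locate the label, then return the pixels to its right as an image chunk holding the handwritten value, for a person or a handwriting model to read. A sketch with OpenCV (function and parameter names are illustrative):

        #include <opencv2/opencv.hpp>

        // Given the page image and the bounding box OCR reported for a typed label
        // (e.g. "Name:"), return the strip to its right, which holds the handwritten value.
        cv::Mat cropValueRegion(const cv::Mat& page, const cv::Rect& labelBox, int valueWidth)
        {
            cv::Rect valueBox(labelBox.x + labelBox.width,    // start just after the label
                              labelBox.y,
                              valueWidth,                      // how far right the field extends
                              labelBox.height);
            valueBox &= cv::Rect(0, 0, page.cols, page.rows);  // clamp to the page
            return page(valueBox).clone();                     // image chunk of the value
        }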

    Read the article

  • Face Recognition in AS3

    - by dontPanic
    Hey all, I have been working on a project which involves Marilena (a project that ports the face detection part of OpenCV to ActionScript 3). Right now I can take the faces and keep them as ByteArrays. I am working in Flash Builder 4. I want to add a face recognition part as well. I will identify the faces by connecting to a database, but I couldn't figure out how to do it without OpenCV in Flash. Do you guys have any idea?

    Read the article

  • Is there any cons to use @font-face?

    - by jitendra
    I found that @font-face is a good alternative to sIFR3, but every browser needs a different font file format. If a font is freely available as a download on the net, or if a font has been purchased by a client or by my company, can I use those fonts in all of these cases? Are there any cons to using @font-face compared to sIFR3?

    Read the article

  • Conditionally styling @font-face

    - by Gnee
    I'm using @font-face for some headers. The replaced typeface is different in dimension and overall character, so when the switch happens, the rules written for the old typeface don't look so good. Other than writing a conditional JavaScript script, is there a way to have one set of CSS rules for @font-face fonts (if the browser supports them) and another set for the unreplaced default fonts?

    Read the article

  • @font-face problems

    - by codedude
    Right now I'm trying desperately to get @font-face to work on my website. This is the code I am using right now:

        @font-face {
            font-family: romeral;
            src: url(fonts/romeral.otf ) format("opentype");
        }

    And then:

        h1 {
            font-size: 2.5em;
            font-family: romeral;
        }

    I am using the font Romeral. Here's a link to it: http://www.smashingmagazine.com/2007/02/06/freefont-of-the-week-romeral/ For some reason it just won't work; it won't render the font on the page. I've tried using other fonts, like Ripe, and they work. I've made sure I don't have any spelling errors. What I'm wondering is whether there is a restriction that some fonts use to stop people from using them with @font-face, or maybe I've made an obvious mistake in my code. Thanks in advance.

    Read the article

  • OpenCV: Shift/Align face image relative to reference Image (Image Registration)

    - by Abhischek
    I am new to OpenCV 2 and working on a project in emotion recognition, and I would like to align a facial image relative to a reference facial image. I would like to get image translation working before moving on to rotation. The current idea is to run a search within a limited range on both the x and y coordinates and use the sum of squared differences as the error metric to select the optimal x/y parameters to align the image. I'm using the OpenCV face_cascade function to detect the face images; all images are resized to a fixed 128x128. Question: which parameters of the Mat image do I need to modify to shift the image in a positive/negative direction on both the x and y axis? I believe setImageROI is no longer supported by Mat datatypes? I have the ROIs for both faces available, however I am unsure how to use them.

        void alignImage(vector<Rect> faceROIstore, vector<Mat> faceIMGstore)
        {
            Mat refimg = faceIMGstore[1];    // reference image
            Mat dispimg = faceIMGstore[52];  // "displaced" version of reference image
            //Rect refROI = faceROIstore[1];    // bounding box for face in reference image
            //Rect dispROI = faceROIstore[52];  // bounding box for face in displaced image

            Mat aligned;
            matchTemplate(dispimg, refimg, aligned, CV_TM_SQDIFF_NORMED);
            imshow("Aligned image", aligned);
        }

    The idea for this approach is based on the Image Alignment tutorial by Richard Szeliski. Working on Windows with OpenCV 2.4. Any suggestions are much appreciated.
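    For the shifting itself, you do not modify Mat header parameters; you either copy through a translated Rect ROI or apply a translation matrix with warpAffine. A sketch of the warpAffine route plus a brute-force SSD search over a small window of shifts, assuming grayscale 128x128 crops of identical type (all names are illustrative):

        #include <opencv2/opencv.hpp>
        using namespace cv;

        // Shift an image by (dx, dy) pixels; positive dx moves it right, positive dy moves it down.
        Mat shiftImage(const Mat& src, int dx, int dy)
        {
            Mat M = (Mat_<double>(2, 3) << 1, 0, dx,
                                           0, 1, dy);   // pure translation matrix
            Mat shifted;
            warpAffine(src, shifted, M, src.size());
            return shifted;
        }

        // Sum of squared differences between the reference and a shifted candidate.
        double ssd(const Mat& ref, const Mat& candidate)
        {
            Mat diff;
            absdiff(ref, candidate, diff);
            diff.convertTo(diff, CV_32F);
            return sum(diff.mul(diff))[0];
        }

        // Brute-force search for the (dx, dy) shift with the lowest SSD.
        Point bestShift(const Mat& ref, const Mat& disp, int range)
        {
            Point best(0, 0);
            double bestScore = -1;
            for (int dy = -range; dy <= range; ++dy)
                for (int dx = -range; dx <= range; ++dx) {
                    double score = ssd(ref, shiftImage(disp, dx, dy));
                    if (bestScore < 0 || score < bestScore) {
                        bestScore = score;
                        best = Point(dx, dy);
                    }
                }
            return best;
        }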

    Read the article

  • trouble with font-face in meteor based on Discover Meteor microscope app

    - by charliemagee
    I've gone through the Discover Meteor book and successfully created Microscope. Now I'm trying to build my own app based on what I've learned. I want to use @font-face for fonts and icon fonts, but I can't get them to show up. Here's my directory structure: client/stylesheets. I've got my fonts in the stylesheets folder. I'm using scss, by the way, and that's working fine with the scss package. Here's how I'm calling the fonts in the stylesheet:

        @font-face {
            font-family: 'AmaranthItalic';
            src: url('Amaranth-Italic-webfont.eot');
            src: url('Amaranth-Italic-webfont.eot?#iefix') format('embedded-opentype'),
                 url('Amaranth-Italic-webfont.woff') format('woff'),
                 url('Amaranth-Italic-webfont.ttf') format('truetype'),
                 url('Amaranth-Italic-webfont.svg#AmaranthItalic') format('svg');
            font-weight: normal;
            font-style: normal;
        }

    I've tried '/stylesheets/Amaranth etc. and all other combinations that I can think of, and nothing is working. I've tried putting them in public. Nothing. I know files like this are supposed to go in the public folder, but that seems to kill the stylesheets entirely. I'm not sure why the Microscope directory design would cause that to happen. These question/answers didn't help: using font-face in meteor? Icon font from fontello not working with Meteor js. Thanks for any help.
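    One arrangement that is commonly used with a stock Meteor layout, offered as a sketch rather than a guarantee: keep the font binaries under public/ (Meteor serves that directory from the site root and does not try to compile its contents), keep the .scss under client/, and reference the files with root-relative URLs. The fonts/ subfolder name is illustrative:

        /* files live in public/fonts/, so they are served at /fonts/... */
        @font-face {
            font-family: 'AmaranthItalic';
            src: url('/fonts/Amaranth-Italic-webfont.eot');
            src: url('/fonts/Amaranth-Italic-webfont.eot?#iefix') format('embedded-opentype'),
                 url('/fonts/Amaranth-Italic-webfont.woff') format('woff'),
                 url('/fonts/Amaranth-Italic-webfont.ttf') format('truetype'),
                 url('/fonts/Amaranth-Italic-webfont.svg#AmaranthItalic') format('svg');
            font-weight: normal;
            font-style: normal;
        }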

    Read the article

  • PCA extended face recognition

    - by cMinor
    The state of the art says that we can use PCA to perform face recognition (like this, this or this). I am working on a project that involves training a classifier to detect a person who is wearing glasses, a hat or even a moustache. The purpose of doing this is to detect when a person who has robbed a bank or a store, or has committed some sort of crime (we have their image in a database), enters a certain place (historically we know these people have robbed, so we should take care to avoid problems). We first planned a distributed database with all the images of criminals; then I thought of adding a layer that classifies these criminals by accessories such as hats, moustaches or anything that hides the face, and then applying that knowledge to detect when a particular or suspect person enters a commercial place. (In practice, when someone is going to rob, they are not always wearing an accessory...) What do you think about this idea of using PCA first to detect the principal components of the face and then the components of an accessory? I was thinking that maybe a probabilistic approach is better, so we can compute the probability that the criminal is the person who entered a place and call the respective authorities.
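    For the PCA step itself, a minimal eigenfaces-style sketch with OpenCV, assuming the training faces are already cropped to a common size and converted to grayscale (names are illustrative); the projected descriptors can then feed a nearest-neighbour or probabilistic matcher:

        #include <opencv2/opencv.hpp>
        #include <vector>
        using namespace cv;

        // Stack each face image as one row of a data matrix and learn a PCA basis.
        PCA trainEigenfaces(const std::vector<Mat>& faces, int numComponents)
        {
            Mat data((int)faces.size(), faces[0].rows * faces[0].cols, CV_32F);
            for (size_t i = 0; i < faces.size(); ++i) {
                Mat row;
                faces[i].reshape(1, 1).convertTo(row, CV_32F);  // flatten to 1 x (w*h)
                row.copyTo(data.row((int)i));
            }
            return PCA(data, Mat(), CV_PCA_DATA_AS_ROW, numComponents);
        }

        // Project a probe face into the learned subspace; the result is its low-dimensional descriptor.
        Mat projectFace(const PCA& pca, const Mat& face)
        {
            Mat row;
            face.reshape(1, 1).convertTo(row, CV_32F);
            return pca.project(row);
        }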

    Read the article

  • @font-face fonts only work on their own domain

    - by Ben
    I am trying to create a type of font repository for use on my websites, so that I can call any font in the repository from my CSS without any other set-up. To do this I created a subdomain on which I placed a folder for each font in the repository, containing the various file types for that font. I also placed a CSS file called font-face.css at the root of the subdomain and filled it with @font-face declarations for each of the fonts; the fonts are linked with absolute links so that they can be used from anywhere. My issue is that it seems I can only use the fonts on the subdomain where they are located; on my other sites the font does not show. Using Firebug I determined that the font-face.css file was successfully being linked to and loaded. So why does the font not load correctly? Is there protection on the font files or something? I am using fonts that I should be allowed to do this with, so I don't see why this is occurring. Maybe it is an Apache issue, but I can download the font just fine when I link to it directly. Oh, and just to clarify, I am not violating any copyrights by setting this up; all the fonts I am using are licensed to allow this sort of thing. I would, however, like to set up a way that only I can have access to this repository of fonts, but that's another project.
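    This is the classic same-origin restriction on font loading: Firefox (and, later, other browsers) will only use a font served from another domain if that host sends a CORS header, which would explain why direct downloads work while @font-face fails. If the font subdomain runs Apache with mod_headers and .htaccess overrides enabled (an assumption), a sketch; restrict the origin list instead of using * if you want to keep the repository to yourself:

        <FilesMatch "\.(ttf|otf|eot|woff)$">
            Header set Access-Control-Allow-Origin "*"
        </FilesMatch>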

    Read the article

  • Speech Recognition.

    - by Arun Thakkar
    Hello everyone, hope you are all fine and in one of your best moods! I need your help: I need to develop an iPhone application which recognizes voice and, based on the result, performs further tasks. I know iPhone OS 3.0 doesn't support speech recognition, and I know I need to implement the speech recognition software on the server side, but that is all I know. Since I am a newbie, I don't know how to deal with that: which software do I need to buy and implement on the server side, and how do I use that service? So if you have any idea about speech recognition, its related software and how to use it, please post your reply. Thank you, Regards, Arun Thakkar.

    Read the article

  • Using Nearest Neighbour Algorithm for image pattern recognition

    - by user293895
    So I want to be able to recognise patterns in images (such as the number 4). I have been reading about different algorithms and I would really like to use the nearest-neighbour algorithm; it looks simple and I understand it based on this tutorial: http://people.revoledu.com/kardi/tutorial/KNN/KNN_Numerical-example.html The problem is that, although I understand how to use it to fill in missing data sets, I don't understand how I could use it as a pattern recognition tool for image shape recognition. Could someone please shed some light on how this algorithm could work for pattern recognition? I have seen tutorials using OpenCV, however I don't really want to use that library, as I can do the pre-processing myself and it seems silly to pull in a library for what should be a simple nearest-neighbour algorithm.
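    The usual trick is that each training image becomes one point: flatten its W x H pixels into a vector, and the "distance between two images" is just the distance between those vectors; the query image then takes the label of its nearest (or k nearest) training images. A minimal plain-Java 1-nearest-neighbour sketch, assuming the grayscale pixels have already been extracted into double arrays (all names are illustrative):

        public class NearestNeighbourClassifier {
            private final double[][] trainPixels;  // one flattened image per row
            private final int[] trainLabels;       // e.g. the digit each training image shows

            public NearestNeighbourClassifier(double[][] trainPixels, int[] trainLabels) {
                this.trainPixels = trainPixels;
                this.trainLabels = trainLabels;
            }

            // Squared Euclidean distance between two flattened images.
            private static double distance(double[] a, double[] b) {
                double sum = 0;
                for (int i = 0; i < a.length; i++) {
                    double d = a[i] - b[i];
                    sum += d * d;
                }
                return sum;
            }

            // Label of the single closest training image.
            public int classify(double[] queryPixels) {
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int i = 0; i < trainPixels.length; i++) {
                    double d = distance(trainPixels[i], queryPixels);
                    if (d < bestDist) {
                        bestDist = d;
                        best = i;
                    }
                }
                return trainLabels[best];
            }
        }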

    Read the article

  • Custom component which displays voice recognition button if available

    - by steff
    Hi everyone, I'd like to create a custom component which supports voice recognition. It will primarily be an extended EditText which should show the microphone button for voice recognition if it is available. I wanted to look at the search app-widget on the home screen, but I can't find it in the source. This is intended to use voice recognition as a sort of dictation device, i.e. the user does not have to type but can use his voice instead. So could anyone please point me in some direction? Thanks in advance, Steff
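    A common way to decide whether to show the microphone button is to ask the PackageManager whether any activity on the device handles RecognizerIntent.ACTION_RECOGNIZE_SPEECH, a check also used in the SDK's VoiceRecognition sample. A sketch (the helper class name is illustrative); call it from the custom view's constructor and hide the button when it returns false:

        import java.util.List;
        import android.content.Context;
        import android.content.Intent;
        import android.content.pm.PackageManager;
        import android.content.pm.ResolveInfo;
        import android.speech.RecognizerIntent;

        public final class SpeechAvailability {
            private SpeechAvailability() {}

            // True if at least one activity on the device can handle speech recognition,
            // so the custom EditText knows whether to display its microphone button.
            public static boolean isRecognitionAvailable(Context context) {
                PackageManager pm = context.getPackageManager();
                List<ResolveInfo> activities = pm.queryIntentActivities(
                        new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
                return !activities.isEmpty();
            }
        }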

    Read the article

  • Introducing the New Face of Fusion Applications

    - by mvaughan
    By Misha Vaughan and Kathy Miedema, Oracle Applications User Experience
    At OpenWorld 2012, the Oracle Applications User Experience (UX) team unveiled the new face of Fusion Applications. You may have seen it in sessions presented by Chris Leone, Anthony Lye, Jeremy Ashley or others, or you may have gotten a look on the demogrounds. [Screenshot: the new Oracle Fusion Applications entry experience.] Why are we delivering a new face for Fusion Applications? Because, says Ashley, the vice president of the Oracle Applications User Experience team, we want to provide a simple, modern, productive way for users to complete their top quick-entry tasks. The idea is to provide a clear, productive user experience that is backed by the full functionality of Fusion Applications. The first release of the new face of Fusion focuses on three types of users. It provides a fully functional gateway to Fusion Applications for:
        • New and casual users who need quick access to self-service tasks
        • Professional users who need fast access to quick-entry, high-volume tasks
        • Users who are looking for a way to quickly brand their portal for employees
    The new face of Fusion allows users to move easily from navigation to action, Ashley said, and it has been designed for any device -- Mac, PC, iPad, Android, SmartBoard -- in the browser. [Screenshot: the Oracle Fusion Applications Employee Directory.] How did we build it? The new face of Fusion essentially is a custom shell, developed by the Apps UX team, and a set of page templates that embodies a simple design aesthetic. It's repeatable, providing consistency across its pages, and requires little to zero training. More specifically, the new face of Fusion has been built on ADF. The Applications UX team created pages in JDeveloper using local task flows bound to existing view objects. Three new components were commissioned from ADF, and existing Fusion components were re-skinned to deliver a simple, modern user experience. It really is that simple – and to prove that point, we've been sharing our story around the new face of Fusion on several Oracle channels such as this one. Want to know more? Check the VoX blog for our favorite highlights from OpenWorld, which included demos of the new face of Fusion. And take a look at these posts from Ace Directors Debra Lilley and Floyd Teter. Special mention to Floyd for the first screen shot credit. Also a nod to Wilfred vander Deijl for capturing the demo to share as part 1 and part 2. We will also be hitting upcoming user group conferences with our demos, and you can always reach out to one of our Fusion User Experience Advocates for a look.

    Read the article

  • Adaboost algorithm and its usage in face detection

    - by Hani
    I am trying to understand the AdaBoost algorithm, but I am having some trouble. After reading about AdaBoost I realized that it is a classification algorithm (somewhat like a neural network), but I could not work out how the weak classifiers are chosen (I think they are Haar-like features for face detection) and how, finally, the result H, which is the final strong classifier, can be used. I mean, once I have found the alpha values and computed H, how am I going to use it to get a value (one or zero) for new images? Is there an example that describes this completely? I found the plus-and-minus example that appears in most AdaBoost tutorials, but I did not understand how exactly each h_i is chosen or how to apply the same concept to face detection. I have read many papers and I have many ideas, but until now my ideas are not well arranged. Thanks...
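    For the "how do I use H" part: the strong classifier is a weighted vote of the weak classifiers, H(x) = 1 if sum_t alpha_t * h_t(x) >= threshold and 0 otherwise, where the threshold is typically half the sum of the alphas in the Viola-Jones formulation. A tiny sketch of evaluating H on a new image window, assuming each weak classifier already maps a window to 0 or 1 (interfaces and names are illustrative):

        public final class StrongClassifier {
            public interface WeakClassifier {
                int classify(double[] window);  // 1 = face-like response, 0 = not
            }

            private final WeakClassifier[] weak;
            private final double[] alpha;       // weight learned by AdaBoost for each weak classifier

            public StrongClassifier(WeakClassifier[] weak, double[] alpha) {
                this.weak = weak;
                this.alpha = alpha;
            }

            // Weighted vote: compare the weighted sum of weak decisions against
            // half of the total weight (the usual Viola-Jones threshold).
            public int classify(double[] window) {
                double vote = 0, total = 0;
                for (int t = 0; t < weak.length; t++) {
                    vote += alpha[t] * weak[t].classify(window);
                    total += alpha[t];
                }
                return vote >= 0.5 * total ? 1 : 0;
            }
        }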

    Read the article

  • My @font-face is for some reason not showing up on my website. Is there something wrong with my synt

    - by Tapha
    Here is the CSS code:

        /*Custom*Font*Declerations*/
        /*Delicious-Bold*Italic*/
        @font-face {
            font-family: delicious-bolditalic;
            src: url('dc30.otf');
            format("opentype");
        }
        /*Chunkfive*/ /*(OpenType)*/
        @font-face {
            font-family: Chunkfive;
            src: url('Chunkfive.otf');
            format("opentype");
        }
        /*Delicious-Italic*/
        @font-face {
            font-family: delicious-italic;
            src: url('dc32.otf');
            format("opentype");
        }
        /*Chunkfive*/ /*(TrueType)*/
        @font-face {
            font-family: Chunkfive;
            src: url('Chunkfive.ttf');
            format("truetype");
        }
        /*Delicious-Heavy*/
        @font-face {
            font-family: delicious-heavy;
            src: url('dc31.otf');
            format("opentype");
        }
        /*Delicious-Bold*/
        @font-face {
            font-family: delicious-bold;
            src: url('dc35.otf');
            format("opentype");
        }
        /*Delicious-Roman*/
        @font-face {
            font-family: delicious-roman;
            src: url('dc33.otf');
            format("opentype");
        }
        /*Delicious-Smallcaps*/
        @font-face {
            font-family: delicious-smallcaps;
            src: url('dc29.otf') format("opentype");
        }
        /*DJ GROSS*/
        @font-face {
            font-family: DJ Gross;
            src: url('DJGROSS.ttf')
            font-weight: normal;
        }
        /*Jinky*/
        @font-face {
            font-family: jinky;
            src: url('jinky.ttf')
        }

    Thank you
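    A likely explanation, offered as a sketch rather than a certainty: in most of these rules a stray semicolon ends the src declaration before format(), leaving format("opentype") as an invalid stand-alone declaration, and the DJ Gross rule is missing the semicolon between the url() value and font-weight. One declaration rewritten with the hint kept inside src; the same shape would apply to the others:

        @font-face {
            font-family: 'delicious-bolditalic';
            src: url('dc30.otf') format('opentype');  /* format() stays inside the src value */
            font-weight: normal;
            font-style: normal;
        }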

    Read the article

  • Drawing text to <canvas> with @font-face does not work at the first time

    - by lemonedo
    Hi all, first try the test case please: http://lemon-factory.net/test/font-face-and-canvas.html I'm not good at English, so I made the test case to be self-explanatory. On the first click of the DRAW button, it will not draw text, or will draw with an incorrect typeface instead of the specified "PressStart", depending on your browser. After that it works as expected. The first time, the text does not appear correctly in any browser I've tested (Firefox, Google Chrome, Safari, Opera). Is this standard behavior or something? Thank you. PS: Following is the code of the test case:

        <!DOCTYPE html>
        <html>
        <head>
        <meta http-equiv=Content-Type content="text/html;charset=utf-8">
        <title>@font-face and canvas</title>
        <style>
        @font-face {
          font-family: 'PressStart';
          src: url('http://lemon-factory.net/css/fonts/prstart.ttf');
        }
        canvas, pre { border: 1px solid #666; }
        pre { float: left; margin: .5em; padding: .5em; }
        </style>
        </head>
        <body>
        <div>
          <canvas id=canvas width=250 height=250>
            Your browser does not support the CANVAS element.
            Try the latest Firefox, Google Chrome, Safari or Opera.
          </canvas>
          <button>DRAW</button>
        </div>
        <pre id=style></pre>
        <pre id=script></pre>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script>
        var canvas = document.getElementById('canvas')
        var ctx = canvas.getContext('2d')
        var x = 30
        var y = 10

        function draw() {
          ctx.font = '12px PressStart'
          ctx.fillStyle = '#000'
          ctx.fillText('Hello, world!', x, y += 20)
          ctx.fillRect(x - 20, y - 10, 10, 10)
        }

        $('button').click(draw)
        $('pre#style').text($('style').text())
        $('pre#script').text($('script').text())
        </script>
        </body>
        </html>
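    What you are seeing is the lazy loading of web fonts: the browser only fetches prstart.ttf when something actually renders with it, so the first fillText call runs before the face is available and falls back. Two common workarounds are to keep a (possibly invisible) DOM element styled with the font so it is fetched at page load, or to explicitly wait for the font before drawing. A sketch of the second approach using the CSS Font Loading API (document.fonts), which is an assumption about your target browsers since it post-dates the versions listed above:

        // Ask the browser to load the face first; draw only once the promise resolves.
        document.fonts.load("12px PressStart").then(function () {
          ctx.font = '12px PressStart'
          ctx.fillStyle = '#000'
          ctx.fillText('Hello, world!', x, y += 20)
        })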

    Read the article

  • is @font-face server dependant

    - by samquo
    Not sure if this has something to do with the live host I'm working with, but I'm using @font-face in the following format:

        @font-face {
            font-family: 'UbuntuTitle';
            src: url('Ubuntu-Title-webfont.eot');
            src: local('?'),
                 url('Ubuntu-Title-webfont.woff') format('woff'),
                 url('Ubuntu-Title-webfont.ttf') format('truetype'),
                 url('Ubuntu-Title-webfont.svg') format('svg');
            font-weight: normal;
            font-style: normal;
        }

    I'm finding, though, that I can't save the document because of the strange character in local('?'), so I save it as UTF-8, but that changes the character to local('☺'). Could that be the reason why it's not being picked up on the server? Any other possibilities?
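    On the "server dependent" part: hosts can matter, because some serve font files with MIME types (or cross-origin policies) that particular browsers then reject. If the host runs Apache and honours .htaccess overrides (an assumption), a sketch declaring the font MIME types commonly used at the time:

        AddType application/vnd.ms-fontobject  .eot
        AddType application/x-font-ttf         .ttf
        AddType application/x-font-woff        .woff
        AddType image/svg+xml                  .svg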

    Read the article

  • Voice Recognition Connection problem

    - by user244190
    I'm trying to work through and test a voice recognition example based on the VoiceRecognition.java example at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html but when I click on the button to create the activity, I get a dialog that says "Connection problem". My manifest file is using the Internet permission, and I understand it passes the audio to the Google servers. Do I need to do anything else to use this? Code below.
    UPDATE: OK, I was able to replace my emulator image with one from HTC that appears to come with Google Voice Search. However, now when I run from the emulator I get an "Audio Problem" message with "Speak Again" or "Cancel" buttons. It appears to make it back to onActivityResult(), but the resultCode is 0. Here is the LogCat output:

        03-07 20:21:25.396: INFO/ActivityManager(578): Starting activity: Intent { action=android.speech.action.RECOGNIZE_SPEECH comp={com.google.android.voicesearch/com.google.android.voicesearch.RecognitionActivity} (has extras) }
        03-07 20:21:25.406: WARN/ActivityManager(578): Activity is launching as a new task, so cancelling activity result.
        03-07 20:21:25.968: WARN/ActivityManager(578): Activity pause timeout for HistoryRecord{434f7850 {com.ikonicsoft.mileagegenie/com.ikonicsoft.mileagegenie.MileageGenie}}
        03-07 20:21:26.206: WARN/AudioHardwareInterface(554): getInputBufferSize bad sampling rate: 16000
        03-07 20:21:26.256: ERROR/AudioRecord(819): Recording parameters are not supported: sampleRate 16000, channelCount 1, format 1
        03-07 20:21:26.696: INFO/ActivityManager(578): Displayed activity com.google.android.voicesearch/.RecognitionActivity: 1295 ms
        03-07 20:21:29.890: DEBUG/dalvikvm(806): threadid=3: still suspended after undo (s=1 d=1)
        03-07 20:21:29.896: INFO/dalvikvm(806): Uncaught exception thrown by finalizer (will be discarded):
        03-07 20:21:29.896: INFO/dalvikvm(806): Ljava/lang/IllegalStateException;: Finalizing cursor android.database.sqlite.SQLiteCursor@435d3c50 on ml_trackdata that has not been deactivated or closed
        03-07 20:21:29.896: INFO/dalvikvm(806): at android.database.sqlite.SQLiteCursor.finalize(SQLiteCursor.java:596)
        03-07 20:21:29.896: INFO/dalvikvm(806): at dalvik.system.NativeStart.run(Native Method)
        03-07 20:21:31.468: DEBUG/dalvikvm(806): threadid=5: still suspended after undo (s=1 d=1)
        03-07 20:21:32.436: WARN/IInputConnectionWrapper(806): showStatusIcon on inactive InputConnection

    I'm still not sure why I'm getting the connection problem on the Droid. I can use Voice Search OK. I also tried clearing the cache and data as described in some posts, but it's still not working??

        /**
         * Fire an intent to start the speech recognition activity.
         */
        private void startVoiceRecognitionActivity() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo");
            startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
        }

        /**
         * Handle the results from the recognition activity.
         */
        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
                // Fill the list view with the strings the recognizer thought it could have heard
                ArrayList<String> matches = data.getStringArrayListExtra(
                        RecognizerIntent.EXTRA_RESULTS);
                mList.setAdapter(new ArrayAdapter<String>(this,
                        android.R.layout.simple_list_item_1, matches));
            }
            super.onActivityResult(requestCode, resultCode, data);
        }

    Read the article
