Search Results

Search found 436 results on 18 pages for 'speech'.

Page 8 of 18

  • Firefox add-on tab-specific buttons and scripts, similar to Page Actions in Google Chrome

    - by Chetan
    I want to write a Firefox extension that acts exactly like the built-in RSS feed scanner (as an exercise). It should do the following:

    1. On each new page/tab load, it should scan the content of the page for RSS feeds.
    2. If there are RSS feeds in the page, it should put a button in the location bar that the user can click.
    3. On clicking the button, a speech bubble should appear under the button (the way a speech bubble appears under the bookmarks star when you click on it), with information on the feeds and buttons to subscribe to them.

    So my main questions are: What is the process to run specific content scripts for specific pages? What is the process to use the results of those scripts to update the speech bubble for each location bar button for each tab? Basically, I'm trying to figure out how to do in Firefox what Page Actions are in Google Chrome. Please help! :)

    Read the article

  • JavaFX threading issue - GUI freezing while a method call runs

    - by David Meadows
    Hi everyone, I hoped someone might be able to help as I'm a little stumped. I have a JavaFX class which runs a user interface, including a button to read some text out loud. When you press it, it invokes a Java object which uses the FreeTTS Java speech synth to read a String out loud, which all works fine. The problem is that while the speech is being read out, the program stops completely until it's finished. I'm not an expert on threaded applications, but I understand that usually if I extend the Thread class and provide my implementation of the speech synth code inside an overridden run method, then calling start on the class "should" create a new thread and run this code there, allowing the main thread, which the JavaFX GUI is on, to continue as normal. Any idea why this isn't the case? Thanks a lot in advance!
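    For reference, a minimal sketch of the pattern described above, using the standard FreeTTS Voice API (the surrounding handler class is hypothetical, not the poster's code). One common cause of this symptom is calling run() directly instead of start(), which executes the speech on the calling (GUI) thread:

        import com.sun.speech.freetts.Voice;
        import com.sun.speech.freetts.VoiceManager;

        public class SpeechButtonHandler {
            public void speakAsync(final String text) {
                Thread speechThread = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        // All of the blocking work stays on this worker thread.
                        // "kevin16" is one of the stock FreeTTS voices.
                        Voice voice = VoiceManager.getInstance().getVoice("kevin16");
                        voice.allocate();
                        try {
                            voice.speak(text); // blocks until playback finishes
                        } finally {
                            voice.deallocate();
                        }
                    }
                });
                speechThread.setDaemon(true); // don't keep the JVM alive just for speech
                speechThread.start();         // returns immediately; the GUI stays responsive
            }
        }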

    Read the article

  • Visual C# 2010: communicating between two projects

    - by cake is a joke
    I am trying to create a Windows Forms project and use speech recognition for the Kinect with the Kinect for Windows SDK. I have the form application project (p1) and the Kinect speech project (p2), which is a command prompt application. I made it a command prompt application because it was the easiest way to do things. Anyway, I have read and found two things about this: 1) I found out how to run two projects at the same time in the same solution. 2) I also found out how to add references to get classes from each project to the other. So, how would I get variables from each project? Just by using project references, or something else? P2 can recognize speech and save it to variables, if that counts for anything.

    Read the article

  • What did Stallman mean in this quote about implementing other languages in Lisp?

    - by Charlie Flowers
    I just read the following quote from Stallman as part of a speech he gave many years ago. He's talking about how it is feasible to implement other programming languages in Lisp, but not feasible to implement Lisp in those other programming languages. He seems to take for granted that the listeners/readers understand why. But I don't see why. I think the answer will explain something about Lisp to me, and I'd like to understand it. Can someone explain it? Here's the quote: "There's an interesting benefit you can get from using such a powerful language as a version of Lisp as your primary extensibility language. You can implement other languages by translating them into your primary language. If your primary language is TCL, you can't very easily implement Lisp by translating it into TCL. But if your primary language is Lisp, it's not that hard to implement other things by translating them." The full speech is here: http://www.gnu.org/gnu/rms-lisp.html Thanks.

    Read the article

  • Ubuntu 12.04 refuses to install with Windows 7

    - by Amitabh Pandey
    I have a desktop computer with Windows 7 installed on it. Recently I downloaded Ubuntu 12.04 and burned the ISO image to a new blank DVD. After successfully burning the DVD, I booted from it. The Ubuntu interface appeared, asking me to either "Try Ubuntu" or "Install Ubuntu". I chose to install Ubuntu. On the next screen I chose to install Ubuntu inside Windows 7. After pressing the continue button, the following messages appeared:

        checking battery state... ok
        checking for running unattended upgrades:
        acpid: exiting
        speech-dispatcher disabled; edit /etc/default/speech-dispatcher
        Asking all remaining processes to terminate... ok
        Please remove installation media and close the tray (if any) then press enter:

    Now the problem is that when I remove the installation media (i.e. the DVD) and press enter, then instead of installing Ubuntu the computer reboots into Windows 7!!! I am a newbie to Ubuntu and therefore do not know much about it. What should I do?

    Read the article

  • What is the best way of giving feedback to the user

    - by Nubkadiya
    I'm using speech recognition, started by pressing a button in my application. I want to show users that when they click the button they should speak. I was thinking about using a progress bar, but I don't think it's a good idea. Then I thought about putting up a label saying what's going on. Can someone suggest any more options, please?
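    If it helps, the label idea can be combined with disabling the button while listening, so the control itself signals state. A tiny Java Swing sketch of that pattern (my own generic illustration; recognizeSpeech() is a stand-in for whatever recognizer call the application actually makes):

        import javax.swing.*;

        public class ListenButtonDemo {
            public static void main(String[] args) {
                JFrame frame = new JFrame("Feedback demo");
                JButton talk = new JButton("Press and speak");
                JLabel status = new JLabel("Idle", SwingConstants.CENTER);

                talk.addActionListener(e -> {
                    talk.setEnabled(false);              // the button itself shows state
                    status.setText("Listening... speak now");
                    new Thread(() -> {
                        recognizeSpeech();               // stand-in for the real recognizer call
                        SwingUtilities.invokeLater(() -> {
                            status.setText("Done");
                            talk.setEnabled(true);
                        });
                    }).start();
                });

                frame.getContentPane().add(status, java.awt.BorderLayout.NORTH);
                frame.getContentPane().add(talk, java.awt.BorderLayout.SOUTH);
                frame.setSize(300, 120);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }

            // Dummy stand-in so the sketch runs: pretend recognition takes 2 seconds.
            private static void recognizeSpeech() {
                try { Thread.sleep(2000); } catch (InterruptedException ignored) {}
            }
        }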

    Read the article

  • Tablet interface for the physically disabled?

    - by Glenn
    My sister has Cerebral Palsy, which in her case means she has only gross motor control and her speech is slurred. The implications should be obvious: traditional computer/phone/tablet interfaces won't work for her, and she can't speak clearly enough for speech recognition software to help her at all. She enjoys reading but has difficulty holding the book and/or turning a page. There are a few options for helping her use a computer, but nothing for tablets or eReaders. That's where you come in. I would like to make (or buy, if such a thing exists) a better interface to an Android tablet that would work for someone with little to no physical dexterity or speech ability. I'd also be interested in work on the Kindle or iPad, but I'm most familiar with Android so I'm starting there. I know Android has Bluetooth capability. Is it possible to interface a joystick to control the Android device? By "control", I mean the entire operating system - selecting an app, launching it, controlling the menus, etc. I want to give her control over the whole thing, not just a specific app. On a PC this can be accomplished by creating a generic USB HID interface and an arcade joystick to move the mouse over the screen and click on things. Is it possible to do something like that in Android? Any help you can offer would be greatly appreciated. Thanks!

    Read the article

  • SpeechRecognition issue

    - by Leosa99 _
    I'm creating a speech recognition application, like Siri, in VB.NET. I have found a database of words (in a .txt file) and I want to insert them into my application, but it's not working. Here is my code:

        Imports System.IO
        Imports System.Speech
        Imports System.Speech.Synthesis

        Dim WithEvents reco As New Recognition.SpeechRecognitionEngine
        Dim IA_VOICE As New SpeechSynthesizer
        Dim List_Word As New Recognition.SrgsGrammar.SrgsOneOf("IN database.")

        Public Sub New()
            reco.SetInputToDefaultAudioDevice()
            Dim gram As New Recognition.SrgsGrammar.SrgsDocument
            Dim WORD_RULE As New Recognition.SrgsGrammar.SrgsRule("MOT")
            LOAD_DATABASE(Application.StartupPath & "\RECO_WORD\DataBase.txt")
            WORD_RULE.Add(List_Word)
            gram.Rules.Add(WORD_RULE)
            gram.Root = WORD_RULE
            reco.LoadGrammar(New Recognition.Grammar(gram))
            reco.RecognizeAsync()
        End Sub

        Private Sub reco_RecognizeCompleted(ByVal sender As Object, ByVal e As System.Speech.Recognition.RecognizeCompletedEventArgs) Handles reco.RecognizeCompleted
            ' Restart recognition after each completed pass.
            reco.RecognizeAsync()
        End Sub

        Private Sub reco_SpeechRecognized(ByVal sender As Object, ByVal e As System.Speech.Recognition.RecognitionEventArgs) Handles reco.SpeechRecognized
            If e.Result.Text = "hi" Then
                MsgBox("HI!")
            End If
        End Sub

        Sub LOAD_DATABASE(Database_PATH As String)
            ' Add every word in the file to the one-of list used by the grammar.
            Dim lines() As String = File.ReadAllLines(Database_PATH)
            For Each word As String In lines
                Dim item As New Recognition.SrgsGrammar.SrgsItem(word)
                List_Word.Items.Add(item) ' I think it's here that it's not working.
            Next
            MsgBox("END LOADING")
        End Sub

    If you know why it's not working... Thanks.

    Read the article

  • JavaScript audio not playing outside of jQuery function

    - by user1814016
    I know the question title doesn't make much sense, but I can't think of a better way to put it. I am a newbie to jQuery and I'm using this code to fade in a <div> and play a sound:

        $(document).ready(function(){
            $('#speech').fadeIn('medium', function() {
                play('msg_appear');
                var sptx = $('<p class="stext">').text('There is nothing here.');
                $('#speech').append(sptx);
                $('.stext').typeOut({marker: '', delay: 22});
            });
        });

    This code runs fine, however the sound plays after the fade-in is complete. I wanted it to play while it was fading in, so I tried placing the play() call outside of the fade-in callback, like this:

        $(document).ready(function(){
            play('msg_appear');
            $('#speech').fadeIn('medium', function() {

    However, now it's not playing at all. There are no errors on the JavaScript console, so I'm unsure if it's a syntax error. It's probably something obvious, but I don't know what. play() is a function I found to play audio; here it is, if it matters at all. I placed it in the same file as the code above, right above the $(document).ready():

        function play(sound) {
            if (window.HTMLAudioElement) {
                var snd = new Audio('');
                if (snd.canPlayType('audio/ogg')) {
                    snd = new Audio(sound + '.ogg');
                } else if (snd.canPlayType('audio/mp3')) {
                    snd = new Audio(sound + '.mp3');
                }
                snd.play();
            } else {
                alert('HTML5 Audio is not supported by your browser!');
            }
        }

    Read the article

  • Get to Know a Candidate (7 of 25): Will Christensen – Independent American Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. NOTE: Wikipedia does not have a page for Christensen. If you follow links to the party site you can find information about him. Christensen served in the United States Marine Corps and has degrees from Penn State University (my alma mater), Drexel Institute of Technology, the University of Utah, and Brigham Young University (BYU), focusing on Math, Physics, and Electrical Engineering. He has worked for IBM and BYU, but for the last 35 years has run small businesses, including an Internet book business as well as an Amway franchise. He has held numerous offices in various political parties, including County Campaign Chairman for Barry Goldwater in 1964; County Central Committee, Republican Party; National Committeeman, and State Chairman of the American Party; one of the founders, and the State Chairman, of the Independent American Party of Utah; and Vice-Chairman, Chairman, and Treasurer of the National Independent American Party. The Independent American Party (IAP) officially started in 1998 and began as the Utah Independent American Party. The founders claim to have been inspired by a speech given by Ezra Taft Benson, former United States Secretary of Agriculture, entitled “The Proper Role of Government”. The 15 principles for the proper role of government, taken from his speech, are held as the IAP’s basis for recruiting. Learn more about the Independent American Party on Wikipedia.

    Read the article

  • Is Openness at the heart of the EU Digital Agenda?

    - by trond-arne.undheim
    At OpenForum Europe Summit 2010, to be held at Autoworld, 11 Parc du Cinquantenaire, Brussels on Thursday 10 June 2010, a number of global speakers will discuss whether the EU Digital Agenda indeed provides an open digital market as a catalyst for economic growth, and whether it will deliver a truly open e-government and digital citizenship (see Summit 2010). In 2008, OpenForum Europe, a not-for-profit champion of openness through open standards, hosted one of the most cited speeches by Neelie Kroes, then Commissioner for Competition. Her forward-looking speech on openness and interoperability as a way to improve the competitiveness of ICT markets set the EU on a path to eradicate lock-in forever. On the two-year anniversary of that event, Vice President Kroes, now the first-ever Commissioner for the Digital Agenda, is set to outline her plans for delivering on that vision. Much excitement surrounds open standards, given that Kroes is a staunch believer. The EU's Digital Agenda promises IT standardization reform in Europe and vows to recognize global standards development organizations (fora/consortia) by 2010. However, she avoided the term "open standards" in her new strategy. Markets are, of course, asking why she is keeping her cards tight on this crucial issue. Following her speech, Professor Yochai Benkler, award-winning author of "The Wealth of Networks", and Professor Nigel Shadbolt, appointed by the UK Government to work alongside Sir Tim Berners-Lee to help transform public access to UK Government information, join dozens of speakers in the quest to analyse, entertain and challenge European IT policy, people, and documents. Speakers at OFE Summit 2010 include David Drummond, Senior VP Corporate Development and Chief Legal Officer, Google; Michael Karasick, VP Technology and Strategy, IBM; Don Deutsch, Vice President, Standards Strategy and Architecture for Oracle Corp; Thomas Vinje, Partner, Clifford Chance; Jerry Fishenden, Director, Centre for Policy Research; and Rishab Ghosh, head, collaborative creativity group, UNU-MERIT, Maastricht (see speakers). Will openness stay at the heart of the EU Digital Agenda? Only time will show.

    Read the article

  • How could I represent 1.625 by 0 or a 1 (binary digit)?

    - by pepito
    This is an excerpt from Wikipedia about the 'Full Rate' speech coding standard: "Full Rate or FR or GSM-FR or GSM 06.10 was the first digital speech coding standard used in the GSM digital mobile phone system. The bit rate of the codec is 13 kbit/s, or 1.625 bits/audio sample." And this one is an excerpt from Wikipedia about bit: "In computing parlance, bit is the abbreviation for a single binary digit, represented by a 0 or a 1." How could I represent 1.625 by a 0 or a 1? Actually, that's my lecturer's question, which I could not answer. Some links to papers are more than welcome. Thanks in advance.
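    For what it's worth, the fractional figure is an average over a coding frame, not the width of any single sample. Assuming GSM's standard parameters (8 kHz sampling, 20 ms frames, 260 bits per frame), the arithmetic works out as:

        \[
        \frac{13000\ \text{bit/s}}{8000\ \text{samples/s}}
        = \frac{260\ \text{bits per frame}}{160\ \text{samples per frame}}
        = 1.625\ \text{bits per sample}
        \]

    So 160 samples are represented together by 260 whole bits; no individual sample is ever coded in a fractional number of bits.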

    Read the article

  • Elementary OS boots to a terminal (other OS) [on hold]

    - by Benjamin Watson
    I'm new to this site; please forgive me if I missed some posting protocol of some sort. I am attempting to install Luna on my Samsung S2 laptop (A8 AMD, Radeon 7640G), and when I click on "Try Luna" it just pulls up a terminal after the insignia (curvy E). When I install it, same issue. Ctrl-Alt-F7 reveals this (hand-typed, sorry if there are typos):

        Starting preload:
         * Starting CUPS printing spooler/server
         * Stopping save kernel messages
        fsck from util-linux 2.20.1
        fsck from util-linux 2.20.1
        dosfsck 3.0.12, 29 Oct 2011, FAT32, LFN
        /dev/sda1: 3 files, 245/189518 clusters
        /dev/sda2: clean, 133841/30294016 files, 2529529/121164544 blocks
        Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
         * Starting AppArmor profiles
        speech-dispatcher disabled; edit /etc/default/speech-dispatcher
         * Stopping System V initialisation compatibility
         * Starting System V runlevel compatibility
         * Starting acpi daemon
         * Starting anac(h)ronistic cron
         * Starting save kernel messages
         * Starting ntp server ntpd
         * Starting regular background program processing daemon
         * Starting deferred execution scheduler
         * Stopping anac(h)ronistic cron
         * Starting LightDM Display Manager
         * Starting bluetooth daemon
         * Starting mDNS/DNS-SD daemon
         * Starting CPU interrupts balancing daemon
         * Stopping Send an event to indicate plymouth is up
        saned disabled; edit /etc/default/saned
         * Starting network connection manager
         * Starting crash report submission daemon
         * Checking battery state...

    That's it. I can't make heads or tails of it. Please note that while I've been running Linux for about a year, I'm still fairly new to all of this, so try to be detailed in your explanations and/or descriptions of what I need to do. Any/all help would be appreciated. Thank you for your time.

    Read the article

  • early audio offset in Audacity and VLC, but not Banshee

    - by reek
    I'm editing audio files with speech in Audacity, marking particular types of speech. I just noticed that files edited in Windows have different intervals marked than files edited in Ubuntu. After testing and confirming this error, it seems that the audio playback in Ubuntu clips the sound too early at the end (early offset), which causes the person doing the editing to mark the interval wrongly. Interestingly, the error appears in Audacity and VLC (which I sometimes use for playback), but NOT in Banshee. Since both Audacity and VLC have this problem, I assume it is not application-specific. I don't know why Banshee handles this without problems, though... Are there any ALSA or PulseAudio settings that are likely to cause this problem (I know very little about either)? The task itself does not appear to consume large amounts of resources, but I am on an old laptop, so here are my specs: Ubuntu 11.10, Dell XPS m1210, 1.6 GHz Intel Core, 2 x 512 MB 667 MHz RAM, audio device: Intel Corporation N10/ICH7 Family High Definition Audio Controller (rev 01). Audacity settings: Device Interface: ALSA (cannot select anything else).

    Read the article

  • Star Trek inspired home automation visualisation

    - by Zak McKracken
    I’ve always been a more or less active fan of Star Trek. During the construction phase of my house I started coding a GUI for controlling the house, which has an EIB. Just for fun I designed a version inspired by the LCARS design used in Star Trek TNG and showed this to my wife. I had shown her several designs before, but this was the only one she really liked, so I decided to go on with it. I started a C# WinForms application. The software runs on a wall-mounted Shuttle barebone PC. The first plan was an industrial panel PC, but the processor was too slow; the Atom now in use is OK. I started with the LCARS controls found on CodeProject. Since the classic LCARS design divides the screen into two parts, this tended to be impracticable, so I used my own design. For now the software is able to:

    - Switch lights/wall outlets
    - Show current temperatures for all room controllers
    - Show outside temperature with a 24h trend chart
    - Show the status of the two heat pumps
    - Provide an alarm clock (e.g. for cooking)
    - Play internet radio streams
    - Control absence
    - Mute the door bell
    - Speak status messages via speech synthesis

    For now, I’m working on an integration of my electric meter. The main heat pump and the electric meter are connected to my LAN. I also tried some speech recognition, but I have problems with the microphone. It’s working when you are right in front of the PC, but not far away, let’s say on the other side of the room. So this is the main view. The table displays raw values which are sent over the EIB – completely useless, but looks great. For each floor I have a different view. Here you can see the temperatures and check the status of the lights (the buttons are blinking when a light is switched on). This is the view for the heat pump. The next step would be to integrate a control for my Squeezebox server (I use different Squeezeboxes throughout the house as a multiroom audio solution).

    Read the article

  • Login takes a long time

    - by Arkaprovo Bhattacharjee
    I have been using Ubuntu 12.04 for the past 12 days. In the beginning login was fast enough: after I put in the password it hardly took 3 to 4 seconds to reach the desktop, but now it is taking more than 40 seconds to show the desktop after entering the password. What is the problem, and is there any solution? P.S. There are only two programs (psensor and jupiter) that start automatically after login. boot.log:

        fsck from util-linux 2.20.1
        /dev/sda6: clean, 254544/3325952 files, 2133831/13285632 blocks
         * Stopping Userspace bootsplash [ OK ]
         * Stopping Flush boot log to disk [ OK ]
         * Starting mDNS/DNS-SD daemon [ OK ]
        Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
        Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
         * Starting bluetooth daemon [ OK ]
         * Starting network connection manager [ OK ]
         * Starting AppArmor profiles [ OK ]
         * Stopping System V initialisation compatibility [ OK ]
         * Starting CUPS printing spooler/server [ OK ]
         * Starting System V runlevel compatibility [ OK ]
         * Starting Bumblebee supporting nVidia Optimus cards [ OK ]
         * Starting LightDM Display Manager [ OK ]
         * Starting save kernel messages [ OK ]
         * Starting anac(h)ronistic cron [ OK ]
         * Starting ACPI daemon [ OK ]
         * Starting regular background program processing daemon [ OK ]
         * Starting deferred execution scheduler [ OK ]
        speech-dispatcher disabled; edit /etc/default/speech-dispatcher
         * Starting CPU interrupts balancing daemon [ OK ]

    Read the article

  • Is there an audio recording application/tool that has Tivo-like functionality?

    - by Bob
    I do a lot of live speech recording that requires me to quickly jump back, transcribe a particular piece of the audio, then go back to recording again, while still maintaining the full audio file. So far I've done this by splitting the audio and running one line to a recorder (for the whole audio) and one to my computer. Then I use something like Audacity to record, and stop/go back whenever I hear something worth transcribing. This requires me to stop the recording, then start it up again, and I end up missing chunks of the speech I'm listening to. Is there a tool that would let me rewind, listen again, and then continue listening at a buffered distance behind the live audio, the way TiVo does with television shows?
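    Not an existing tool, but to make the requested behaviour concrete, here is a toy Java sketch (my own illustration, with an arbitrary 16 kHz mono format and hypothetical names) of the core idea: a capture thread keeps appending microphone audio to a buffer, so earlier regions can be re-read at any time without the recording ever stopping:

        import javax.sound.sampled.AudioFormat;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.TargetDataLine;
        import java.io.ByteArrayOutputStream;

        public class BufferedRecorder {
            public static void main(String[] args) throws Exception {
                AudioFormat fmt = new AudioFormat(16000f, 16, 1, true, false); // 16 kHz mono PCM
                TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);
                mic.open(fmt);
                mic.start();

                final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                final byte[] chunk = new byte[3200]; // 100 ms of audio in this format

                // Capture thread: never stops, so no speech is ever missed.
                Thread capture = new Thread(() -> {
                    while (true) {
                        int n = mic.read(chunk, 0, chunk.length);
                        synchronized (buffer) { buffer.write(chunk, 0, n); }
                    }
                });
                capture.setDaemon(true);
                capture.start();

                Thread.sleep(5000); // ...meanwhile, the talk goes on...

                // "Rewind": grab the last 3 seconds without interrupting capture.
                byte[] all;
                synchronized (buffer) { all = buffer.toByteArray(); }
                int bytesPerSecond = 16000 * 2; // sample rate times 2 bytes per sample
                int start = Math.max(0, all.length - 3 * bytesPerSecond);
                System.out.printf("replaying %d bytes from offset %d%n", all.length - start, start);
                // (Playback of the all[start..] region through a SourceDataLine is omitted for brevity.)
            }
        }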

    Read the article

  • How can I compare audio, and what programming language should I use?

    - by Pimmetje
    I have 2 audio files that are from almost the same source, but at some points they are shifted a bit. Also, the codecs do not match. I would like to make a program that takes a sample of 2-4 seconds and looks for it in the other file (most of the time it's not shifted more than 30 seconds), then takes the time and stores it, goes ahead a few seconds, takes another sample and finds it again. This way I want to create a file where I can see at what points the file is shifted. For people who are more interested in what I want: I have an audio/video file of speech, and subtitles. But I have the same speech from different sources, which differs a bit in time, and I would like to make a program that can correct the subtitle time for me. Enough about the problem. I looked on the Internet for ways to compare audio files. Based on what I read, comparing 2 audio files isn't as easy as I had hoped. Some talk about algorithms: http://www.perlmonks.org/?node_id=169641 Some audio libraries: portaudio.com aubio.org sourceforge.net/projects/ccaudio/ ambiera.com/irrklang/ The biggest problem I have is that I can't find something I can generate from the audio that I can use to compare with. I hope someone here can point me in the right direction.
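    One thing you can generate and compare is a plain cross-correlation score between a short query window and the longer track, computed on decoded PCM. A minimal brute-force Java sketch of that matching step (my own illustration, with hypothetical names; both signals are assumed to be already decoded to mono float PCM at the same sample rate):

        public class AudioAligner {

            /** Returns the sample offset in 'track' where 'query' lines up best. */
            public static int bestOffset(float[] track, float[] query) {
                int bestLag = 0;
                double bestScore = Double.NEGATIVE_INFINITY;
                for (int lag = 0; lag + query.length <= track.length; lag++) {
                    double score = 0;
                    for (int i = 0; i < query.length; i++) {
                        score += track[lag + i] * query[i]; // dot product at this lag
                    }
                    if (score > bestScore) {
                        bestScore = score;
                        bestLag = lag;
                    }
                }
                return bestLag;
            }

            public static void main(String[] args) {
                // Toy signals: the query occurs in the track at offset 3.
                float[] track = {0, 0, 0, 1, -1, 2, 0.5f, 0};
                float[] query = {1, -1, 2};
                System.out.println(bestOffset(track, query)); // prints 3
            }
        }

    At real file sizes the same search is usually done with FFT-based correlation, or on coarser per-frame features such as energy, but the principle is the one above: the lag with the highest score is the time shift, and since the shift is expected to stay under about 30 seconds, the lag range can be restricted accordingly.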

    Read the article

  • App not working

    - by pranay
    Hi, I have written a simple app which should speak out to the user any incoming message. Both programmes seem to work perfectly when I launch them as two separate programs, but on keeping them in the same project/package only the speaker programme's screen is seen and the receiver programme doesn't seem to work. Can someone please help me out? The speaker programme is:

        package com.example.TextSpeaker;

        import java.util.Locale;
        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.speech.tts.TextToSpeech;
        import android.speech.tts.TextToSpeech.OnInitListener;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.Toast;

        // The following programme converts the message to speech.
        public class TextSpeaker extends Activity implements OnInitListener {
            int MY_DATA_CHECK_CODE = 0;
            public TextToSpeech mtts;
            public Button button;
            //public EditText edittext;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                button = (Button) findViewById(R.id.button);
                //edittext = (EditText) findViewById(R.id.edittext);
                button.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        //mtts.speak(edittext.getText().toString(), TextToSpeech.QUEUE_FLUSH, null);
                        Toast.makeText(getApplicationContext(),
                                "The service has been started\nEvery new message will now be read out",
                                Toast.LENGTH_LONG).show();
                    }
                });
                // Check that TTS voice data is available before creating the engine.
                Intent myintent = new Intent();
                myintent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
                startActivityForResult(myintent, MY_DATA_CHECK_CODE);
            }

            @Override
            protected void onActivityResult(int requestcode, int resultcode, Intent data) {
                if (requestcode == MY_DATA_CHECK_CODE) {
                    if (resultcode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
                        // Success, so create the TTS engine.
                        mtts = new TextToSpeech(this, this);
                        mtts.setLanguage(Locale.ENGLISH);
                    } else {
                        // Install the engine.
                        Intent install = new Intent();
                        install.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
                        startActivity(install);
                    }
                }
            }

            @Override
            public void onDestroy() {
                if (mtts != null) mtts.shutdown();
                super.onDestroy();
            }

            @Override
            public void onPause() {
                super.onPause();
                // Our app has lost focus, so stop speaking.
                if (mtts != null) mtts.stop();
            }

            @Override
            public void onInit(int status) {
                if (status == TextToSpeech.SUCCESS) button.setEnabled(true);
            }
        }

    and the Receiver programme is:

        package com.example.TextSpeaker;

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.os.Bundle;
        import android.telephony.SmsMessage; // supports both GSM and CDMA

        public class Receiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                Bundle bundle = intent.getExtras();
                SmsMessage[] msgs = null;
                String str = "";
                if (bundle != null) {
                    // Retrieve the received SMS. (The loop body below is the standard
                    // PDU-decoding pattern, filled in where the original post was cut off.)
                    Object[] pdus = (Object[]) bundle.get("pdus");
                    msgs = new SmsMessage[pdus.length];
                    for (int i = 0; i < pdus.length; i++) {
                        msgs[i] = SmsMessage.createFromPdu((byte[]) pdus[i]);
                        str += msgs[i].getMessageBody();
                    }
                }
            }
        }

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied. Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.)

    What’s the point of that story? Simply this… if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time. But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points.

    I have attended seven of the last eight North American Summit events, and every one of them has been fantastic. The speakers are great, the material is timely and relevant, and the networking opportunities are awesome. And every year, there is one little thing that just bugs me… speakers going over their allotted time. Why does it bother me so? Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another. If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical. All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center. It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks.

    And frankly, I think it is just rude. Yes, the speakers are the function; after all, they are bringing the content that the rest of us are paying to learn. But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him. Speakers know when they submit their abstract, long before the conference, how much time they will have. It has been the same pattern at the Summit for at least the last eight years. Program Sessions are 75 minutes long. Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase). So there really is no excuse. It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes. In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75. As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly.

    Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions. Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.” To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.” Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight. I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session).

    So here is my recommendation… my plea, even. TRIM THE FAT! Now. Before it’s too late. Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic. Delete a few slides. Test your demos and have them already scripted rather than typing them during your talk. It is better to cut out too much and end up with plenty of time at the end for Questions & Answers. And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands. But I don’t think you will. And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets. And they will thank you for that.

    Read the article

  • Voice Recognition Connection problem

    - by user244190
    I'm trying to work through and test a voice recognition example based on the VoiceRecognition.java sample at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html but when I click on the button to create the activity, I get a dialog that says "Connection problem". My manifest is using the Internet permission, and I understand the speech gets passed to the Google servers. Do I need to do anything else to use this? Code below.

    UPDATE 2: Thanks to Steve, I have been able to install the USB driver and debug the app directly on my Droid. Here is the LogCat output from clicking on my mic button:

        03-08 18:36:45.686: INFO/ActivityManager(1017): Starting activity: Intent { act=android.speech.action.RECOGNIZE_SPEECH cmp=com.google.android.voicesearch/.IntentApiActivity (has extras) }
        03-08 18:36:45.686: WARN/ActivityManager(1017): Activity is launching as a new task, so cancelling activity result.
        03-08 18:36:45.787: DEBUG/NetworkLocationProvider(1017): setMinTime: 120000
        03-08 18:36:45.889: INFO/ActivityManager(1017): Displayed activity com.google.android.voicesearch/.IntentApiActivity: 135 ms (total 135 ms)
        03-08 18:36:45.905: DEBUG/NetworkLocationProvider(1017): onCellLocationChanged [802,0,0,4192,3]
        03-08 18:36:45.951: INFO/MicrophoneInputStream(1429): Starting voice recognition with audio source VOICE_RECOGNITION
        03-08 18:36:45.998: DEBUG/AudioHardwareMot(990): Codec sampling rate already 16000
        03-08 18:36:46.092: INFO/RecognitionService(1429): ssfe url=http://www.google.com/m/voice-search
        03-08 18:36:46.092: WARN/RecognitionService(1429): required parameter 'calling_package' is missing in IntentAPI request
        03-08 18:36:46.115: DEBUG/AudioHardwareMot(990): Codec sampling rate already 16000
        03-08 18:36:46.131: WARN/InputManagerService(1017): Starting input on non-focused client com.android.internal.view.IInputMethodClient$Stub$Proxy@4487d240 (uid=10090 pid=3132)
        03-08 18:36:46.131: WARN/IInputConnectionWrapper(3132): showStatusIcon on inactive InputConnection
        03-08 18:36:46.248: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.334: DEBUG/dalvikvm(3206): GC freed 3682 objects / 369416 bytes in 293ms
        03-08 18:36:46.358: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.412: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.444: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.475: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.506: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)

    The line that concerns me is the warning that the required parameter 'calling_package' is missing.

    UPDATE: OK, I was able to replace my emulator image with one from HTC that appears to come with Google Voice Search; however, now when I run from the emulator I get an "Audio Problem" message with "Speak Again" or "Cancel" buttons. It appears to make it back to onActivityResult(), but the resultCode is 0. Here is the LogCat output:

        03-07 20:21:25.396: INFO/ActivityManager(578): Starting activity: Intent { action=android.speech.action.RECOGNIZE_SPEECH comp={com.google.android.voicesearch/com.google.android.voicesearch.RecognitionActivity} (has extras) }
        03-07 20:21:25.406: WARN/ActivityManager(578): Activity is launching as a new task, so cancelling activity result.
        03-07 20:21:25.968: WARN/ActivityManager(578): Activity pause timeout for HistoryRecord{434f7850 {com.ikonicsoft.mileagegenie/com.ikonicsoft.mileagegenie.MileageGenie}}
        03-07 20:21:26.206: WARN/AudioHardwareInterface(554): getInputBufferSize bad sampling rate: 16000
        03-07 20:21:26.256: ERROR/AudioRecord(819): Recording parameters are not supported: sampleRate 16000, channelCount 1, format 1
        03-07 20:21:26.696: INFO/ActivityManager(578): Displayed activity com.google.android.voicesearch/.RecognitionActivity: 1295 ms
        03-07 20:21:29.890: DEBUG/dalvikvm(806): threadid=3: still suspended after undo (s=1 d=1)
        03-07 20:21:29.896: INFO/dalvikvm(806): Uncaught exception thrown by finalizer (will be discarded):
        03-07 20:21:29.896: INFO/dalvikvm(806): Ljava/lang/IllegalStateException;: Finalizing cursor android.database.sqlite.SQLiteCursor@435d3c50 on ml_trackdata that has not been deactivated or closed
        03-07 20:21:29.896: INFO/dalvikvm(806): at android.database.sqlite.SQLiteCursor.finalize(SQLiteCursor.java:596)
        03-07 20:21:29.896: INFO/dalvikvm(806): at dalvik.system.NativeStart.run(Native Method)
        03-07 20:21:31.468: DEBUG/dalvikvm(806): threadid=5: still suspended after undo (s=1 d=1)
        03-07 20:21:32.436: WARN/IInputConnectionWrapper(806): showStatusIcon on inactive InputConnection

    I'm still not sure why I'm getting the connection problem on the Droid. I can use Voice Search OK. I also tried clearing the cache and data as described in some posts, but it's still not working.

        /**
         * Fire an intent to start the speech recognition activity.
         */
        private void startVoiceRecognitionActivity() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo");
            startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
        }

        /**
         * Handle the results from the recognition activity.
         */
        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
                // Fill the list view with the strings the recognizer thought it could have heard.
                ArrayList<String> matches = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                mList.setAdapter(new ArrayAdapter<String>(this,
                        android.R.layout.simple_list_item_1, matches));
            }
            super.onActivityResult(requestCode, resultCode, data);
        }

    Read the article

  • Having Problems Getting FreeTTS and JSAPI Working

    - by Travis
    I have a simple project idea based on FreeTTS and the JSAPI (Java Speech API). I've downloaded and unpacked FreeTTS and run their build script, then tried compiling my code, linking the lib directory into the classpath like this:

        javac -cp /home/travis/Desktop/freetts-1.2/lib HelloUnleashedReader.java

    which compiles to Java bytecode just fine. However, when I run:

        java HelloUnleashedReader

    I get the following error:

        Exception in thread "main" java.lang.NoClassDefFoundError: javax/speech/EngineModeDesc

    Any help on this issue would be greatly appreciated, as there are many sites around the net discussing problems with getting it to work, but not many that discuss their solution.
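    A guess rather than a verified fix: a classpath entry that names a directory only picks up loose .class files, not the .jar files inside it, and the javax.speech classes live in jsapi.jar, which FreeTTS 1.2 ships separately (it is unpacked by running the JSAPI license script in the lib directory). Under that assumption, naming the jars explicitly would look like:

        javac -cp /home/travis/Desktop/freetts-1.2/lib/freetts.jar:/home/travis/Desktop/freetts-1.2/lib/jsapi.jar HelloUnleashedReader.java
        java -cp .:/home/travis/Desktop/freetts-1.2/lib/freetts.jar:/home/travis/Desktop/freetts-1.2/lib/jsapi.jar HelloUnleashedReader

    (On Windows the path separator is ';' rather than ':'.)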

    Read the article

  • SpeechSynthesizer in C# creates wav that has 22kHz... needs to be 16kHz

    - by Adrian
    My C# application needs to convert text to a wav file and inject it into a Skype call. The code that creates the wav file is below. The problem is that the file has a 22 kHz sample rate and Skype accepts only 16 kHz. Is there any way to adjust this setting?

        using (System.IO.FileStream stream = System.IO.File.Create("message.wav"))
        {
            System.Speech.Synthesis.SpeechSynthesizer speechEngine =
                new System.Speech.Synthesis.SpeechSynthesizer();
            speechEngine.SetOutputToWaveStream(stream);
            speechEngine.Speak(number);
            stream.Flush();
        }

    Read the article

  • How to decide on going into management?

    - by Rob Wells
    I just read the transcript of a speech by Richard Hamming, included as a part of this SO question, and the speech had a quote that got me thinking about when someone should move into management:

        "When your vision of what you want to do is what you can do single-handedly, then you should pursue it. The day your vision, what you think needs to be done, is bigger than what you can do single-handedly, then you have to move toward management. And the bigger the vision is, the farther in management you have to go."

    Any other suggestions as to how you can decide if you want to move away from the coal face and into management?

    Read the article
