Search Results

Search found 1303 results on 53 pages for 'voice recognition'.

Page 16 of 53

  • What software or service can I use to make phone calls programmatically?

    - by Jason
    I'm looking to programmatically make phone-call reminders to customers based upon their opt-in requests. I am NOT a telemarketer. I need to make a phone call and play a message. I need to leave a message after the beep if an answering machine or voicemail is detected. I need to know if the message was successfully delivered. Ideally, I could let the user give feedback by pressing a button and record their selection. I prefer Windows and .NET but would consider anything. What do you suggest?
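
    One route that would cover these requirements is a hosted telephony API. Below is a minimal sketch using Twilio's C# helper library; Twilio is my assumption (the question names no vendor), and the credentials, numbers, and URLs are placeholders.

        using System;
        using Twilio;
        using Twilio.Rest.Api.V2010.Account;
        using Twilio.Types;

        class ReminderCall
        {
            static void Main()
            {
                // Placeholder credentials from the provider's console.
                TwilioClient.Init("ACCOUNT_SID", "AUTH_TOKEN");

                var call = CallResource.Create(
                    to: new PhoneNumber("+15005550006"),               // opted-in customer (placeholder)
                    from: new PhoneNumber("+15005550001"),             // your provisioned number (placeholder)
                    url: new Uri("https://example.com/reminder.xml"),  // TwiML: play the message, <Gather> a key press
                    machineDetection: "DetectMessageEnd",              // wait out a voicemail greeting before playing
                    statusCallback: new Uri("https://example.com/status")); // webhook that reports delivery outcome

                Console.WriteLine("Queued call " + call.Sid);
            }
        }

    The statusCallback webhook answers the delivery-confirmation requirement, and a <Gather> verb in the TwiML records which button the customer pressed.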

    Read the article

  • Lync Server 2010

    - by ManojDhobale
    Microsoft Lync Server 2010 communications software and its client software, such as Microsoft Lync 2010, enable your users to connect in new ways and to stay connected, regardless of their physical location. Lync 2010 and Lync Server 2010 bring together the different ways that people communicate in a single client interface, are deployed as a unified platform, and are administered through a single management infrastructure. The main workloads are described below.

    IM and presence: Instant messaging (IM) and presence help your users find and communicate with one another efficiently and effectively. IM provides an instant messaging platform with conversation history, and supports public IM connectivity with users of public IM networks such as MSN/Windows Live, Yahoo!, and AOL. Presence establishes and displays a user's personal availability and willingness to communicate through the use of common states such as Available or Busy. This rich presence information enables other users to immediately make effective communication choices.

    Conferencing: Lync Server includes support for IM conferencing, audio conferencing, web conferencing, video conferencing, and application sharing, for both scheduled and impromptu meetings. All these meeting types are supported with a single client. Lync Server also supports dial-in conferencing so that users of public switched telephone network (PSTN) phones can participate in the audio portion of conferences. Conferences can seamlessly change and grow in real time. For example, a single conference can start as just instant messages between a few users, and escalate to an audio conference with desktop sharing and a larger audience instantly, easily, and without interrupting the conversation flow.

    Enterprise Voice: Enterprise Voice is the Voice over Internet Protocol (VoIP) offering in Lync Server 2010. It delivers a voice option to enhance or replace traditional private branch exchange (PBX) systems. In addition to the complete telephony capabilities of an IP PBX, Enterprise Voice is integrated with rich presence, IM, collaboration, and meetings. Features such as call answer, hold, resume, transfer, forward, and divert are supported directly, while personalized speed-dialing keys are replaced by Contacts lists, and automatic intercom is replaced with IM. Enterprise Voice supports high availability through call admission control (CAC), branch office survivability, and extended options for data resiliency.

    Support for remote users: You can provide full Lync Server functionality for users who are currently outside your organization's firewalls by deploying servers called Edge Servers to provide a connection for these remote users. These remote users can connect to conferences by using a personal computer with Lync 2010 installed, the phone, or a web interface. Deploying Edge Servers also enables you to federate with partner or vendor organizations. A federated relationship enables your users to put federated users on their Contacts lists, exchange presence information and instant messages with these users, and invite them to audio calls, video calls, and conferences.

    Integration with other products: Lync Server integrates with several other products to provide additional benefits to your users and administrators. Meeting tools are integrated into Outlook 2010 to enable organizers to schedule a meeting or start an impromptu conference with a single click, and to make it just as easy for attendees to join. Presence information is integrated into Outlook 2010 and SharePoint 2010. Exchange Unified Messaging (UM) provides several integration features: users can see whether they have new voice mail within Lync 2010, click a play button in the Outlook message to hear the audio voice mail, or view a transcription of the voice mail in the notification message.

    Simple deployment: To help you plan and deploy your servers and clients, Lync Server provides the Microsoft Lync Server 2010 Planning Tool and the Topology Builder. The Planning Tool is a wizard that interactively asks you a series of questions about your organization, the Lync Server features you want to enable, and your capacity planning needs. It then creates a recommended deployment topology based on your answers, and produces several forms of output to aid your planning and installation. Topology Builder is an installation component of Lync Server 2010; you use it to create, adjust, and publish your planned topology, and it validates your topology before you begin server installations. When you install Lync Server on individual servers, the installation program deploys each server as directed in the topology.

    Simple management: After you deploy Lync Server, it offers the following streamlined management tools. It uses Active Directory for its user information, which eliminates the need for separate user and policy databases. Microsoft Lync Server 2010 Control Panel is a new web-based graphical user interface for administrators; with it, Lync Server administrators can manage their systems from anywhere on the corporate network, without needing specialized management software installed on their computers. The Lync Server Management Shell command-line management tool is based on the Windows PowerShell command-line interface; it provides a rich command set for administration of all aspects of the product, and enables Lync Server administrators to automate repetitive tasks using a familiar tool.

    While the IM and presence features are automatically installed in every Lync Server deployment, you can choose whether to deploy conferencing, Enterprise Voice, and remote user access, to tailor your deployment to your organization's needs.

    Read the article

  • Home Security + Voice Communication Setup – How to do this?

    - by RobDude
    I have several webcams, and I have software that provides a website where I can view all of the webcams. I also want to add voice communication. It can be one-way, but from a remote location I need to be able to talk and have it come out of my speakers at home. I don't know of any software that does this in an easy fashion. Can someone recommend something for me? I'm running Windows 7.

    Read the article

  • What's the difference between General Ledger Transfer Program, Create Accounting and Submit Accounting?

    - by Oracle_EBS
    In Release 12, the General Ledger Transfer Program is no longer used; use Create Accounting or Submit Accounting instead. Submit Accounting spawns the Revenue Recognition process; the Create Accounting program does not. So if you create transactions with rules, you would run the Submit Accounting process to spawn Revenue Recognition and create the distribution rows, after which Create Accounting is spawned to process them to the GL.

                                                        Create Accounting   Submit Accounting
    Short name for concurrent program                   XLAACCPB            ARACCPB
    Specific to Receivables                             No                  Yes
    Runs Revenue Recognition automatically              No                  Yes
    Can be run real-time for one Transaction/Receipt    Yes                 No

    Programs spawned by Create Accounting:
    1) XLAACCPB module: Create Accounting
    2) XLAACCUP module: Accounting Program
    3) GLLEZL module: Journal Import

    Programs spawned by Submit Accounting:
    1) ARTERRPM module: Revenue Recognition Master Program
    2) ARTERRPW module: Revenue Recognition with parallel workers (could be numerous)
    3) ARREVSWP module: Revenue Contingency Analyzer
    4) XLAACCPB module: Create Accounting
    5) XLAACCUP module: Accounting Program
    6) GLLEZL module: Journal Import

    Keep in mind that reports owned by the application 'Subledger Accounting' cannot be seen when running reports from a Receivables responsibility. You may want to ask your sysadmin to attach the following SLA reports/programs to your AR responsibility, as you will need them for your AR closing process:

    XLAPEXRPT: Subledger Period Close Exception Report - shows transactions in status final, incomplete, and unprocessed.
    XLAGLTRN: Transfer Journal Entries to GL - transfers transactions in final status, and manually created transactions, to GL.

    To add reports/programs owned by the application 'Subledger Accounting' (such as the Subledger Period Close Exception Report and Transfer Journal Entries to GL) to the request group, proceed as follows, using the Subledger Accounting report XLATBRPT: Open Account Balances Listing as an example.

    Responsibility: System Administrator
    Navigation: Security > Responsibility > Define
    Query the name of your Receivables responsibility and note the Request Group (e.g. Receivables All).
    Navigation: Security > Responsibility > Request
    Query the Request Group, go to the Request zone, and click Add Record.
    Enter the following: Type: Program; Name: Open Account Balances Listing. Save.
    Responsibility: Receivables Manager
    Navigation: Control > Requests > Run
    In the list of values you should now see the 'Open Account Balances Listing' report.

    References:
    Note 748999.1 - How to Add Reports for Application Subledger Accounting to Receivables Responsibility
    Note 759534.1 - R12 ARGLTP General Ledger Transfer Program Errors Out
    Note 1121944.1 - Understanding and Troubleshooting Revenue Recognition in Oracle Receivables

    Read the article

  • Dallas First Regionals 2012 – For Inspiration and Recognition of Science and Technology

    - by T
    Wow! That is all I have to say after the last 3 days. Three full, fun-filled days in a world that fed the geek, sparked the competitor, inspired the humanitarian, encouraged the inventor, and continuously warmed my heart. As part of the Dallas First Regionals, I was awed by incredible students who teach as much as they learn, inventive and truly caring mentors who make mentoring look easy, and completely passionate and dedicated volunteers who bring meaning to giving all that you have and making events fun and safe for all. If you have any interest in innovation, robotics, or highly motivated students, I can't recommend anything more highly than visiting a First Robotics event.

    This is my third year with First, and I was honored to serve the Dallas First Regionals as both a web site evaluator and as the East Field Volunteer Coordinator. This was also the first year my daughter volunteered with me. My daughter and I both recognize how different the First program is from other team events. The difference with First is that everyone is a first-class citizen. It is a difference we can feel through experiencing and observing interactions between executives, respected engineers, students, and event staff. Even with a fierce competition, you still find a lot of cooperation, and we never witnessed any belittling between teams or individuals.

    First Robotics coined the term "Gracious Professionalism": a way of doing things that encourages high-quality work, emphasizes the value of others, and respects individuals and the community. [1] I was introduced to this term as the Volunteer Coordinator when I was preparing the volunteer instructional speech. Through the next few days, I discovered it is truly core to everything that First does. One of the ways First accomplishes Gracious Professionalism is through another term they came up with, "Coopertition™". At FIRST, Coopertition™ is displaying unqualified kindness and respect in the face of fierce competition. [1] One of the things I never liked about sports was the need to "destroy" the other team. First has really found a way to integrate Coopertition™ into the rules, so teams can be competitive while part of the competition rewards cooperation.

    Oh, and did I mention it has ROBOTS!!! This year it had basketball-playing, Kinect-connected, remote-controlled robots! There are no words for how exciting these games are. You HAVE to check out this year's game, Rebound Rumble, on YouTube, or you can view live action on Dallas First Video (as of this posting, the recordings haven't been posted yet but should be there soon). There are also some images below.

    Whatever it is, these students get it and exemplify it, and these mentors ooze with it. I am glad that First supports these events so we can all learn and be inspired by these exceptional students and mentors. I know that no matter how much I give, it will never compare to what I gain volunteering at First. Even if you don't have time to volunteer, you owe it to yourself to go check out one of these events. See what the future holds, be inspired, be encouraged, take some knowledge, leave some knowledge, and most of all, have FUN. That is what First is all about, and thanks to First, I get it.

    First Dallas Regionals 2012 (slide show)

    [1] USFirst: http://www.usfirst.org/aboutus/gracious-professionalism

    Read the article

  • Can someone explain how this iOS pan gesture recognizer works? [on hold]

    - by user79894
    This is an iOS app using a UIPanGestureRecognizer. It works great, but I don't fully understand it. I want to make some changes so that if the dragged UIView reaches a specific position, another method is called. Any comments are appreciated.

        - (IBAction)handlePan1:(UIPanGestureRecognizer *)recognizer {
            // Translation accumulated since the last reset below.
            CGPoint translation = [recognizer translationInView:self.view];

            // Move the dragged view by that delta...
            recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
                                                 recognizer.view.center.y + translation.y);

            // ...then zero the translation so the next callback reports only the new delta.
            [recognizer setTranslation:CGPointMake(0, 0) inView:self.view];

            // A position check could go here, e.g.
            //   if (recognizer.view.center.x > someThreshold) [self didReachTarget];
            // (someThreshold and didReachTarget are hypothetical names.)

            /* [x1 setText:[NSString stringWithFormat: @"%.2f", recognizer.view.center.x]];
               [y1 setText:[NSString stringWithFormat: @"%.2f", recognizer.view.center.y]];
               [x2 setText:[NSString stringWithFormat: @"%.2f", translation.x]];
               [y2 setText:[NSString stringWithFormat: @"%.2f", translation.y]]; */
        }

    Read the article

  • How to document and teach others "optimized beyond recognition" computationally intensive code?

    - by rwong
    Occasionally there is the 1% of code that is computationally intensive enough that it needs the heaviest kind of low-level optimization. Examples are video processing, image processing, and all kinds of signal processing in general. The goals are to document and teach the optimization techniques, so that the code does not become unmaintainable and prone to removal by newer developers. (*)

    (*) Notwithstanding the possibility that the particular optimization is completely useless on some unforeseeable future CPUs, such that the code will be deleted anyway.

    Considering that software offerings (commercial or open-source) retain their competitive advantage by having the fastest code and making use of the newest CPU architecture, software writers often need to tweak their code to make it run faster while getting the same output for a certain task, whilst tolerating a small amount of rounding error. Typically, a software writer can keep many versions of a function as documentation of each optimization / algorithm rewrite that takes place. How does one make these versions available for others to study their optimization techniques?
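
    One concrete pattern (a sketch of my own, not from the post): keep the naive version in the tree as executable documentation, and pin every optimized rewrite to it with an equivalence test that tolerates the small rounding differences mentioned above. The dot-product below is an illustrative stand-in; real optimized code might use SIMD intrinsics instead of manual unrolling.

        using System;

        static class DotProduct
        {
            // Reference implementation: obviously correct, kept as documentation.
            public static double Naive(double[] a, double[] b)
            {
                double sum = 0.0;
                for (int i = 0; i < a.Length; i++)
                    sum += a[i] * b[i];
                return sum;
            }

            // "Optimized" version: 4-way unrolled with independent accumulators,
            // which reorders the additions and therefore rounds differently.
            public static double Unrolled(double[] a, double[] b)
            {
                double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
                int i = 0;
                for (; i + 4 <= a.Length; i += 4)
                {
                    s0 += a[i] * b[i];
                    s1 += a[i + 1] * b[i + 1];
                    s2 += a[i + 2] * b[i + 2];
                    s3 += a[i + 3] * b[i + 3];
                }
                for (; i < a.Length; i++)       // leftover elements
                    s0 += a[i] * b[i];
                return (s0 + s1) + (s2 + s3);
            }

            static void Main()
            {
                var rng = new Random(42);
                var a = new double[1001];
                var b = new double[1001];
                for (int i = 0; i < a.Length; i++) { a[i] = rng.NextDouble(); b[i] = rng.NextDouble(); }

                // The equivalence test doubles as documentation of intent:
                // same task, same output, up to a small relative tolerance.
                double expected = Naive(a, b);
                double actual = Unrolled(a, b);
                double tol = 1e-12 * Math.Abs(expected);
                Console.WriteLine(Math.Abs(expected - actual) <= tol ? "OK" : "MISMATCH");
            }
        }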

    Read the article

  • How does one capture H.323 voice traffic on a VoIP network?

    - by Chris Holmes
    What I am trying to do is capture the WAV data of a phone conversation on a VoIP network using SharpPcap/Pcap.Net. We are using the H.323 recommendation, and my understanding is that voice data is located in the RTP packets. However, there is no way to heuristically determine if a UDP packet is an RTP packet, so we have to do more work before we can capture the data. The H.323 recommendation apparently uses a lot of traffic on specific TCP ports to negotiate the call before the WAV data is sent via RTP. However, I am having very little luck determining what data is actually sent on those TCP ports, when it is sent, what the packets look like, how to handle it, etc. If anyone has any information on how to go about this, I'd really appreciate it. My Google-Fu seems to be failing me on this one.
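
    For what it's worth, the signaling side is where the port numbers live: H.225.0 call setup typically runs over TCP port 1720, and the H.245 OpenLogicalChannel exchange names the UDP ports that RTP will use, so parsing the signaling is the reliable route. A cruder fallback is a payload heuristic that flags probable RTP packets; the sketch below is my own assumption, not a guaranteed classifier.

        using System;

        static class RtpHeuristic
        {
            // Sanity-checks the fixed 12-byte RTP header at the start of a UDP payload.
            // Short non-RTP packets can still slip through; this does not replace
            // parsing the H.245 negotiation.
            public static bool LooksLikeRtp(byte[] payload)
            {
                if (payload == null || payload.Length < 12)
                    return false;

                int version = payload[0] >> 6;          // top two bits: RTP version, must be 2
                if (version != 2)
                    return false;

                int payloadType = payload[1] & 0x7F;    // 7-bit payload type
                // Audio is commonly a static type (0 = PCMU, 8 = PCMA, etc.)
                // or a dynamic type 96-127 negotiated in signaling.
                return payloadType <= 34 || payloadType >= 96;
            }

            static void Main()
            {
                byte[] fake = new byte[12];
                fake[0] = 0x80;                          // version 2, no padding/extension/CSRC
                fake[1] = 0x00;                          // payload type 0 (PCMU)
                Console.WriteLine(LooksLikeRtp(fake));   // True
            }
        }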

    Read the article

  • How can I tag words when creating grammar rules to convert voice to text using XML?

    - by jhone
    Hi friends, I am doing a project in C# that converts voice to text, using the Speech SDK. I want to tag words according to their category using a grammar file written as an XML sheet, and display the result in a text box. E.g.: if the word is "eat", it should be displayed as "eat/verb". Following is the XML code I have written, but the part I tagged is not displayed in the text box; only the converted word is there.

        <rule id="Verbs">
          <item>eat<tag>$._value = "/verb";</tag></item>
        </rule>
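
    One likely explanation, offered as an assumption from the SAPI 5 automation model rather than anything in the question: GetText() returns only the spoken words, while <tag> values surface in the result's semantic properties, so the "/verb" suffix has to be appended by hand in the recognition handler. A sketch:

        using SpeechLib;

        static class TagReader
        {
            // Builds "eat/verb" by appending any semantic tag values
            // to the recognized text.
            public static string TaggedText(ISpeechRecoResult result)
            {
                string text = result.PhraseInfo.GetText(0, -1, true);
                ISpeechPhraseProperties props = result.PhraseInfo.Properties;
                if (props != null)
                {
                    for (int i = 0; i < props.Count; i++)
                    {
                        ISpeechPhraseProperty p = props.Item(i);
                        if (p.Value != null)
                            text += p.Value.ToString();   // e.g. "/verb" from the <tag>
                    }
                }
                return text;
            }
        }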

    Read the article

  • Is it possible to execute keyboard input programmatically in Linux?

    - by Taylor Hawkes
    For example, is there a Linux command, or some other way, by which I could, from a program (C++, Python, or other), enter a series of keyboard inputs that are interpreted as though they were typed at the keyboard? I have a bad case of Repetitive Stress Injury (RSI) from typing. To ease my pain I developed a voice-controlled interface using PocketSphinx and a custom grammar to run a number of very common commands, e.g. "open chrome", "open vim". Basically what is shown here, but with slightly different tools: http://bloc.eurion.net/archives/2008/writing-a-command-and-control-application-with-voice-recognition/ I have run into a limitation in that I can only execute a command-line command for a given voice command. Rather than a "voice command" -> "command line command" mapping, I would like a "voice command" -> "keyboard input" mapping. So when my active window is a browser and I send Ctrl + n, a new tab opens; if I'm in Vim, a new Vim tab opens. Any suggestions, ideas, tools, or approaches to this problem would be much appreciated. I understand the answer may not be simple, but I would like to develop it nonetheless.
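
    One approach, offered as an assumption rather than anything from the post: synthesize X11 key events with the xdotool utility (which drives the XTEST extension), invoked from the voice handler once per recognized command. A minimal C# sketch, assuming X11 and xdotool are installed:

        using System.Diagnostics;

        class KeySender
        {
            // Sends a key chord to the currently focused window via xdotool.
            static void Send(string chord)
            {
                var psi = new ProcessStartInfo("xdotool", "key " + chord)
                {
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
            }

            static void Main()
            {
                // e.g. map the voice command "new tab" to Ctrl+T:
                Send("ctrl+t");
            }
        }

    Because the keystroke goes to whatever window has focus, the browser and Vim each interpret it natively, which is exactly the per-application behavior described above.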

    Read the article

  • How to Build Your Own Siri App In a Browser

    - by ultan o'broin
    This post from Applications User Experience team co-worker Mark Vilrokx (@mvilrokx) about building your own Siri-style voice app in a browser using Rails, Chrome, and WolframAlpha is just so good that you've now got it thrice! I love these kinds of How To posts. They not only show off innovation but inspire others to try it out too. I love the sharing of the code snippets as well. Hat tip to Jake at the AppsLab (now on board with the Applications UX team too) for picking up the original All Things Rails blog post.

    Oracle Voice & Nuance demo on the Oracle Applications User Experience Usable Apps YouTube Channel

    Mark recently presented to the Oracle Usability Advisory Board on Oracle Voice and Oracle Fusion Applications, and opened customers' and partners' eyes to how this technology can work for their users in the workplace and what's coming down the line! Great job, Mark.

    Read the article

  • How do I get the best quality screenshot for OCR (Optical Character Recognition) and what tool would

    - by GiH
    I'm trying to get some data into a text file from screenshots. Apparently screenshots don't work very well with OCR because they are 75 dpi, and the minimum for good-quality OCR is 150 dpi. Does anyone know what the best way to take screenshots for OCR would be? And also, what would be the best OCR software for doing this? Right now I'm getting pretty good results with free online tools such as new-ocr, but it does make mistakes that I have to correct every now and then, so I'd like some tips. UPDATE: I tested out ABBYY Screenshot Reader and it was pretty bad... the online tools are better.
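
    Since a screenshot carries no real resolution, one workaround (my assumption, not from the question) is to upscale the image and stamp a higher DPI on it before handing it to the OCR engine. A sketch using System.Drawing; the file names are placeholders:

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        class UpscaleForOcr
        {
            static void Main()
            {
                using (var src = new Bitmap("screenshot.png"))            // typically ~75-96 dpi
                using (var dst = new Bitmap(src.Width * 3, src.Height * 3))
                {
                    dst.SetResolution(300f, 300f);                        // report 300 dpi to the OCR engine
                    using (var g = Graphics.FromImage(dst))
                    {
                        // Smooth interpolation keeps glyph edges readable after scaling.
                        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                        g.DrawImage(src, 0, 0, dst.Width, dst.Height);
                    }
                    dst.Save("screenshot-300dpi.png", ImageFormat.Png);
                }
            }
        }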

    Read the article

  • Doubt with C# handlers?

    - by aF
    I have this code in C#:

        public void startRecognition(string pName)
        {
            presentationName = pName;
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                string grammar = System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Presentations\\" + presentationName + "\\SpeechRecognition\\soundlog.cfg";
                /* if (File.Exists(grammar)) { File.Delete(grammar); } executeCommand(); */
                recContext = new SpSharedRecoContextClass();
                recContext.CreateGrammar(0, out recGrammar);
                if (File.Exists(grammar))
                {
                    recGrammar.LoadCmdFromFile(grammar, SPLOADOPTIONS.SPLO_STATIC);
                    recGrammar.SetGrammarState(SPGRAMMARSTATE.SPGS_ENABLED);
                    recGrammar.SetRuleIdState(0, SPRULESTATE.SPRS_ACTIVE);
                }
                recContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);
                //recContext.RecognitionForOtherContext += new _ISpeechRecoContextEvents_RecognitionForOtherContextEventHandler(handleRecognition);
                //System.Windows.Forms.MessageBox.Show("olari");
            }
        }

        private void handleRecognition(int StreamNumber, object StreamPosition,
            SpeechLib.SpeechRecognitionType RecognitionType, SpeechLib.ISpeechRecoResult Result)
        {
            System.Windows.Forms.MessageBox.Show("entrei");
            string temp = Result.PhraseInfo.GetText(0, -1, true);
            _recognizedText = "";
            foreach (string word in recognizedWords)
            {
                if (temp.Contains(word))
                {
                    _recognizedText = word;
                }
            }
        }

        public void run()
        {
            if (File.Exists(System.Environment.GetEnvironmentVariable("PUBLIC") +
                "\\SoundLog\\Serialization\\Voices\\identifiedVoicesDLL.txt"))
            {
                deserializer = new XmlSerializer(_identifiedVoices.GetType());
                FileStream fs = new FileStream(System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Serialization\\Voices\\identifiedVoicesDLL.txt", FileMode.Open);
                Object o = deserializer.Deserialize(fs);
                fs.Close();
                _identifiedVoices = (double[])o;
            }
            if (File.Exists(System.Environment.GetEnvironmentVariable("PUBLIC") +
                "\\SoundLog\\Serialization\\Voices\\deletedVoicesDLL.txt"))
            {
                deserializer = new XmlSerializer(_deletedVoices.GetType());
                FileStream fs = new FileStream(System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Serialization\\Voices\\deletedVoicesDLL.txt", FileMode.Open);
                Object o = deserializer.Deserialize(fs);
                fs.Close();
                _deletedVoices = (ArrayList)o;
            }
            myTimer.Interval = 5000;
            myTimer.Tick += new EventHandler(clearData);
            myTimer.Start();
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                _waveFormat = new WaveFormat(_samples, 16, 2);
                _recorder = new WaveInRecorder(-1, _waveFormat, 8192 * 2, 3, new BufferDoneEventHandler(DataArrived));
                _scaleHz = (double)_samples / _fftLength;
                _limit = (int)((double)_limitVoice / _scaleHz);
                SoundLogDLL.MelFrequencyCepstrumCoefficients.calculateFrequencies(_samples, _fftLength);
            }
        }

    startRecognition is a speech-recognition method that loads a grammar and wires up the recognition handler here:

        recContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);

    Now I have a problem. When I call startRecognition before run, both handlers (the recognition one and the timer's Tick handler) work well: if a word is recognized, handleRecognition is called. But when I call run before startRecognition, both methods seem to execute fine, yet the recognition handler is never executed, even though I can see that words are being recognized (because they appear in the Windows Speech Recognition app). What can I do so that the recognition handler is always called?

    Read the article

  • GAE datastore - count records between one minute ago and two minutes ago?

    - by Arthur Wulf White
    I am using the GAE datastore with Python, and I want to count and display the number of records between two recent dates. For example: how many records exist with a time signature between two minutes ago and three minutes ago? Thank you.

        #!/usr/bin/env python
        import wsgiref.handlers

        from google.appengine.ext import db
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp import template
        from datetime import datetime

        class Voice(db.Model):
            when = db.DateTimeProperty(auto_now_add=True)

        class MyHandler(webapp.RequestHandler):
            def get(self):
                voices = db.GqlQuery(
                    'SELECT * FROM Voice '
                    'ORDER BY when DESC')
                values = {'voices': voices}
                self.response.out.write(template.render('main.html', values))

            def post(self):
                voice = Voice()
                voice.put()
                self.redirect('/')
                self.response.out.write('posted!')

        def main():
            app = webapp.WSGIApplication([(r'.*', MyHandler)], debug=True)
            wsgiref.handlers.CGIHandler().run(app)

        if __name__ == "__main__":
            main()

    Read the article

  • Building a Universal iPad App - Where is the device recognition code?

    - by JustinXXVII
    I noticed that when I create a new project in Xcode for a universal iPad/iPhone application, the template comes with two separate app delegate files, one for each device. I can't seem to locate the place in code where it decides which app delegate to use. I have an existing iPhone project I'd like to port to the iPad. My thinking was that if I went ahead and designed the iPad project, I could just import my iPhone classes and nibs, and then use the app delegate and UIDevice to decide which MainWindow.xib to load. The process went like this:

    1. Create an iPad project coded as a split view.
    2. Create brand-new classes and nibs for the iPad.
    3. Import the iPhone classes and nibs.
    4. Change the build/target settings in accordance with universal apps.
    5. Use [[UIDevice currentDevice] model] in the app delegate to decide which MainWindow to load.

    Will this work, or does the app just automatically know which device it's being deployed on? Thanks for any insight you can offer.

    Read the article
