Search Results

Search found 5105 results on 205 pages for 'words'.


  • How to mix Grammar (Rules) & Dictation (Free speech) with SpeechRecognizer in C#

    - by Lee Englestone
    I really like Microsoft's latest speech recognition (and SpeechSynthesis) offerings. http://msdn.microsoft.com/en-us/library/ms554855.aspx http://estellasays.blogspot.com/2009/04/speech-recognition-in-cnet.html

    However, I feel somewhat limited when using grammars. Don't get me wrong, grammars are great for telling the speech recognition exactly what words / phrases to look out for, but what if I want it to recognise something I've not given it a heads-up about? Or I want to parse a phrase which is half pre-determined command name and half random words? For example:

    Scenario A - I say "Google [Oil Spill]" and I want it to open Google with search results for the term in brackets, which could be anything.
    Scenario B - I say "Locate [Manchester]" and I want it to search for Manchester in Google Maps, or anything else non pre-determined.

    I want it to know that 'Google' and 'Locate' are commands and that what comes after them are parameters (and could be anything).

    Question: Does anyone know how to mix the use of pre-determined grammars (words the speech recognition should recognise) and words not in its pre-determined grammar?

    Code fragments:

        using System.Speech.Recognition;
        ...
        SpeechRecognizer rec = new SpeechRecognizer();
        rec.SpeechRecognized += rec_SpeechRecognized;

        var c = new Choices();
        c.Add("search");
        var gb = new GrammarBuilder(c);
        var g = new Grammar(gb);

        rec.LoadGrammar(g);
        rec.Enabled = true;
        ...
        void rec_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "search")
            {
                string query = "How can I get a word not defined in Grammar recognised and passed into here!";
                launchGoogle(query);
            }
        }
        ...
        private void launchGoogle(string term)
        {
            Process.Start("IEXPLORE", "google.com?q=" + term);
        }

    Read the article

  • How to classify NN/NNP/NNS obtained from POS tagged document as a product feature

    - by Shweta .......
    I'm planning to perform sentiment analysis on reviews of product features (collected from the Amazon dataset). I have extracted the review text from the dataset and performed POS tagging on it, and I'm able to extract NN/NNP tokens as well. But my doubt is: how do I come to know whether an extracted word qualifies as a feature of the product? I know there are classifiers in NLTK, but I don't know how I should use them for my project. I'm assuming there are two ways of finding whether an extracted word is a product feature or not. One is to compare it against a bag of words and find out if my word exists in it. Doubt: how do I create/get that bag of words? The second way is to implement some kind of apriori algorithm to find frequently occurring words as features. I would like to know which method is good and how to go about implementing it. Some pointers to available software or code snippets would be helpful! Thanks!
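
    A minimal sketch of the frequency-based route, assuming NLTK's default tokenizer and tagger; the support threshold is an arbitrary placeholder, not part of the question:

        import nltk
        from collections import Counter

        def candidate_features(reviews, min_support=10):
            """Count nouns across all reviews; frequent ones are feature candidates."""
            # assumes the punkt and averaged_perceptron_tagger models were fetched via nltk.download()
            counts = Counter()
            for text in reviews:
                tags = nltk.pos_tag(nltk.word_tokenize(text))
                counts.update(tok.lower() for tok, tag in tags if tag in ("NN", "NNS", "NNP"))
            return [(word, n) for word, n in counts.most_common() if n >= min_support]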

    Read the article

  • Implementing a scrabble trainer

    - by bstullkid
    Hello, I've recently been playing a lot of online Scrabble, so I decided to make a program that quickly searches through a dictionary of 200,000+ words given an input of up to 26 letters. My first attempt failed because it took a while once you input 8 or more letters (it was just a basic walk through the dictionary, crossing off a letter whenever it was found). So I made a tree-like structure where each node contains only an array of 26 of the same structure and a flag to indicate the end of a word; with that it can output all possible words in under a second, even with an input of 26 characters.

    But it seems that when I input 12 or more letters with some of the same characters repeated, I get duplicates. Can anyone see why I would be getting duplicates with this code? (I'll post my program at the bottom.)

    Also, the next step once the duplicates are weeded out is to actually be able to input the letters on the game board and then have it calculate the best word you can make on a given board. I am having trouble figuring out a good algorithm that can analyse a Scrabble board and an input of letters and output a result; the possible words that could be made are no problem, but actually checking a board efficiently (i.e. can this word fit here, or here, etc., without creating a non-dictionary word in the process on some other string of letters) is. Does anyone have an idea for an approach to that? (Given a Scrabble board and an input of 7 letters, find all possible valid words or word sets that you can make.)

    I forgot to email myself the code from my other computer, which is in another state, so I'll post it on Monday when I get back there. By the way, the dictionary I'm using is SOWPODS (http://www.calvin.edu/~rpruim/scrabble/ospd3.txt).
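
    Since the poster's code isn't included yet, here is only a minimal sketch of the rack-to-words step, assuming a plain word-list file; it sidesteps the duplicate problem by collecting results in a set and checking letter counts with a multiset instead of walking a trie:

        from collections import Counter

        def load_words(path):
            with open(path) as f:
                return [w.strip().lower() for w in f if w.strip()]

        def playable(rack, words):
            rack_counts = Counter(rack.lower())
            found = set()  # a set silently drops the duplicate hits a repeated-letter walk produces
            for word in words:
                needed = Counter(word)
                if all(rack_counts[ch] >= n for ch, n in needed.items()):
                    found.add(word)
            return sorted(found, key=len, reverse=True)

        # e.g. playable("aetrsnl", load_words("ospd3.txt"))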

    Read the article

  • Full-text search in C++

    - by Jen
    I have a database of many (though relatively short) HTML documents. I want users to be able to search this database by entering one or more search words in a C++ desktop application. Hence, I'm looking for a fast full-text search solution. Ideally, it should:

    - Skip common words, such as the, of, and, etc.
    - Support stemming, i.e. a search for run also finds documents containing runner, running and ran.
    - Be able to update its index in the background as new documents are added to the database.
    - Be able to provide search word suggestions (like Google Suggest).

    To illustrate, assume the database has just two documents:

    Document 1: This is a test of text search.
    Document 2: Testing is fun.

    The following words should be in the index: fun, search, test, testing, text. If the user types t in the search box, I want the application to be able to suggest test, testing and text (ideally, the application should be able to query the search engine for the 10 most common search words starting with t). A search for testing should return both documents.

    Can you suggest a C or C++ based solution? (I've briefly reviewed CLucene and Xapian, but I'm not sure if either will address my needs, especially querying the search word indexes for the suggest feature.)
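
    Not a C++ answer, but the suggest requirement is essentially a prefix lookup over the indexed (stemmed) terms; a toy Python sketch of that one piece, using the five-term index from the example (both CLucene and Xapian expose some form of term enumeration that could serve the same purpose):

        import bisect

        terms = sorted(["fun", "search", "test", "testing", "text"])  # the toy index above

        def suggest(prefix, k=10):
            """Return up to k indexed terms starting with prefix."""
            i = bisect.bisect_left(terms, prefix)
            out = []
            while i < len(terms) and terms[i].startswith(prefix) and len(out) < k:
                out.append(terms[i])
                i += 1
            return out

        print(suggest("t"))  # ['test', 'testing', 'text']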

    Read the article

  • Paragraph with normal opacity within greyed-out div

    - by dmr
    I am greying out a web page when a user doesn't have permission to access it. In order to do that, I am placing a div with background-color white and a lowered opacity on top of the web page. I want to write some words in that div with the words having normal opacity. As of now, the greyed-out background is showing correctly. However, I can't seem to get the words to be a regular opacity. The derived styles in Firebug show the opacity on the words as normal, but it clearly isn't. What am I doing wrong?

    The HTML:

        <div class="noPermission">
            <p>I'm sorry. You do not have permission to access this page.</p>
        </div>

    The CSS:

        div.noPermission {
            background-color: white;
            filter: alpha(opacity=50); /* IE */
            opacity: 0.5;              /* Safari, Opera */
            -moz-opacity: 0.50;        /* FireFox */
            z-index: 20;
            height: 100%;
            width: 100%;
            background-repeat: no-repeat;
            background-position: center;
            position: absolute;
            top: 0px;
            left: 0px;
        }

        div.noPermission p {
            color: black;
            margin: 300px auto auto 50px;
            text-align: left;
            font-weight: bold;
            font-size: 18px;
            display: block;
            width: 250px;
        }

    Read the article

  • Refactoring code/consolidating functions (e.g. nested for-loop order)

    - by bmay2
    Just a little background: I'm making a program where a user inputs a skeleton text, two numbers (lower and upper limit), and a list of words. The outputs are a series of modifications of the skeleton text. Sample inputs:

        text = "Player # likes @."   (replace # with inputted integers and @ with words in the list)
        lower = 1
        upper = 3
        list = "apples, bananas, oranges"

    The user can choose to iterate over numbers first:

        Player 1 likes apples.
        Player 2 likes apples.
        Player 3 likes apples.

    Or words first:

        Player 1 likes apples.
        Player 1 likes bananas.
        Player 1 likes oranges.

    I chose to split these two methods of output by creating a different type of dictionary based on either number keys (integers inputted by the user) or word keys (from words in the inputted list) and then later iterating over the values in the dictionary. Here are the two types of dictionary creation:

        def numkey(dict):  # {1: ['Player 1 likes apples', 'Player 1 likes...' ] }
            text, lower, upper, list = input_sort(dict)
            d = {}
            for num in range(lower, upper+1):
                l = []
                for i in list:
                    l.append(text.replace('#', str(num)).replace('@', i))
                d[num] = l
            return d

        def wordkey(dict):  # {'apples': ['Player 1 likes apples', 'Player 2 likes apples'..] }
            text, lower, upper, list = input_sort(dict)
            d = {}
            for i in list:
                l = []
                for num in range(lower, upper+1):
                    l.append(text.replace('#', str(num)).replace('@', i))
                d[i] = l
            return d

    It's fine that I have two separate functions for creating different types of dictionaries, but I see a lot of repetition between the two. Is there any way I could make one dictionary function and pass in different values that would change the order of the nested for loops, to create the specific {key : value} pairs I'm looking for? I'm not sure how this would be done. Is there anything related to functional programming or other paradigms that might help with this? The question is a little abstract and more stylistic/design-oriented than anything.
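
    One possible consolidation, sketched under the assumption that input_sort behaves as in the two functions above; the key_on parameter and the name build_dict are mine, not part of the question:

        def build_dict(dict_in, key_on="number"):
            text, lower, upper, words = input_sort(dict_in)
            nums = list(range(lower, upper + 1))
            # choose which sequence supplies the dictionary keys and which the inner loop
            outer, inner = (nums, words) if key_on == "number" else (words, nums)
            d = {}
            for o in outer:
                lines = []
                for i in inner:
                    num, word = (o, i) if key_on == "number" else (i, o)
                    lines.append(text.replace('#', str(num)).replace('@', word))
                d[o] = lines
            return d

        # build_dict(data, key_on="number") behaves like numkey(data)
        # build_dict(data, key_on="word")   behaves like wordkey(data)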

    Read the article

  • How can I modify or add new items to a generic list of strings?

    - by Sadegh
    Hi, I want to remove some noise words from a list of words.

        public System.String CleanNoiseWord(System.String word)
        {
            string key = word;
            if (word.Length <= 2)
                key = System.String.Empty;
            else
                key = word;
            //other validation here
            return key;
        }

        public IList<System.String> Clean(IList<System.String> words)
        {
            var oldWords = words;
            IList<System.String> newWords = new string[oldWords.Count()];
            string key;
            var i = 0;
            foreach (System.String word in oldWords)
            {
                key = this.CleanNoiseWord(word);
                if (!string.IsNullOrEmpty(key))
                {
                    newWords.RemoveAt(i);
                    newWords.Insert(i++, key);
                }
            }
            return newWords.Distinct().ToList();
        }

    But I can't add, remove or insert anything in the list! A NotSupportedException occurs: "Collection was of a fixed size." How can I modify or add new items to a generic list of strings?

    Read the article

  • Replace without the replace function

    - by Molly Potter
    Assignment: Let X and Y be two words. Find/Replace is a common word-processing operation that finds each occurrence of word X and replaces it with word Y in a given document. Your task is to write a program that performs the Find/Replace operation. Your program will prompt the user for the word to be replaced (X), then the substitute word (Y). Assume that the input document is named input.txt. You must write the result of this Find/Replace operation to a file named output.txt. Lastly, you cannot use the replace() string function built into Python (it would make the assignment much too easy). To test your code, you should modify input.txt using a text editor such as Notepad or IDLE to contain different lines of text. Again, the output of your code must look exactly like the sample output.

    This is my code:

        input_data = open('input.txt','r')     #this opens the file to read it.
        output_data = open('output.txt','w')   #this opens a file to write to.

        userStr = (raw_input('Enter the word to be replaced:'))
        #this prompts the user for a word
        userReplace = (raw_input('What should I replace all occurences of ' + userStr + ' with?'))
        #this prompts the user for the replacement word

        for line in input_data:
            words = line.split()
            if userStr in words:
                output_data.write(line + userReplace)
            else:
                output_data.write(line)

        print 'All occurences of '+userStr+' in input.txt have been replaced by '+userReplace+' in output.txt'
        #this tells the user that we have replaced the words they gave us

        input_data.close()   #this closes the documents we opened before
        output_data.close()

    It won't replace anything in the output file. Help!
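
    A minimal sketch of one way to do the substitution without replace(): rebuild each line word by word. It assumes userStr and userReplace have already been read as in the code above, and it collapses runs of whitespace to single spaces, which may or may not match the expected sample output exactly:

        # word-by-word rebuild; no str.replace() involved
        input_data = open('input.txt', 'r')
        output_data = open('output.txt', 'w')
        for line in input_data:
            rebuilt = []
            for w in line.split():
                rebuilt.append(userReplace if w == userStr else w)
            output_data.write(' '.join(rebuilt) + '\n')
        input_data.close()
        output_data.close()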

    Read the article

  • Extending / changing how Zend_Search_Lucene searches

    - by Grant Collins
    Hi, I am currently using Zend_Search_Lucene to index and search a number of documents, currently around 1000 or so. What I would like to do is change how the engine scores hits on a document from the current default. Zend_Search_Lucene scores on the frequency of hits within a document, so a document that has 10 matches of the word PHP will score higher than a document with only 3 matches of PHP. What I am trying to do is pass a number of keywords and score depending on how many of those keywords hit. E.g. I pass 5 keywords, say PHP, MySQL, Javascript, HTML and CSS, that I search against the index. One document matches 3 of those keywords and another document matches all of them; the document with the most keyword matches scores highest. The number of instances of those words in the document does not concern me. Now I've had a quick look at Zend_Search_Lucene_Search_Similarity, but I have to confess that I am not sure (or that bright) how to use it to achieve what I am after. Is what I want to do possible using Lucene, or is there a better solution out there?

    Read the article

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose--it stores my text as I write fiction. (I know, I know...geeky.) I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use straight word count as it ignores editing and compression, both vital parts of writing. I think I want to track: (words added)+(words removed) which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable). Is there a flag to git-diff to give diffs on a word-by-word basis? Alternately, is there a solution using standard command-line tools to compute the metric above?
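
    For what it's worth, git has word-level diff output built in (git diff --word-diff, with --word-diff-regex to define what counts as a word). A hedged Python sketch that totals added plus removed words from the porcelain output; treat it as a starting point rather than a polished tool:

        import subprocess

        def word_churn(rev_a, rev_b, path=None):
            """Rough (words added + words removed) between two revisions,
            using git's word-diff porcelain output (git >= 1.7.2)."""
            cmd = ["git", "diff", "--word-diff=porcelain", rev_a, rev_b]
            if path:
                cmd += ["--", path]
            out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
            added = removed = 0
            for line in out.splitlines():
                if line.startswith("+") and not line.startswith("+++"):
                    added += len(line[1:].split())
                elif line.startswith("-") and not line.startswith("---"):
                    removed += len(line[1:].split())
            return added + removed

        # e.g. word_churn("HEAD~1", "HEAD", "chapter3.txt")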

    Read the article

  • Is it better to query the database or grab from a file? PHP & MySQL

    - by pfunc
    I am keeping a large number of words in a database that I want to match articles against. I was thinking it would be better to keep these words in an array and grab that array whenever needed, instead of querying the database every time (since the words won't be changing that much). Is there much performance difference in doing this? And if I were to do this, how do I write a script that writes the array to a new PHP file? I tried writing the array like so:

        while ($row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }
        $fp = fopen('noWordsArr.php', 'w');
        fwrite($fp, $newArray);
        fclose($fp);

    But all I get in the other file is "Array". So I figured I could write this and then have a cron job hit the file every few days or so in case things have changed. But I guess if there is no performance advantage then it probably won't be necessary and I can just query the database every time I need to access the words.

    Read the article

  • Facebook Hacker Cup: Studious Student problem

    - by smartmuki
    During the qualification round, the following question was asked:

    You've been given a list of words to study and memorize. Being a diligent student of language and the arts, you've decided to not study them at all and instead make up pointless games based on them. One game you've come up with is to see how you can concatenate the words to generate the lexicographically lowest possible string.

    Input: As input for playing this game you will receive a text file containing an integer N, the number of word sets you need to play your game against. This will be followed by N word sets, each starting with an integer M, the number of words in the set, followed by M words. All tokens in the input will be separated by some whitespace and, aside from N and M, will consist entirely of lowercase letters.

    Output: Your submission should contain the lexicographically shortest strings for each corresponding word set, one per line and in order.

    Constraints: 1 <= N <= 100; 1 <= M <= 9; 1 <= all word lengths <= 10

    Example input:

        5
        6 facebook hacker cup for studious students
        5 k duz q rc lvraw
        5 mybea zdr yubx xe dyroiy
        5 jibw ji jp bw jibw
        5 uiuy hopji li j dcyi

    Example output:

        cupfacebookforhackerstudentsstudious
        duzklvrawqrc
        dyroiymybeaxeyubxzdr
        bwjibwjibwjijp
        dcyihopjijliuiuy

    The program I wrote goes as:

        chomp($numberElements=<STDIN>);
        for (my $i=0; $i < $numberElements; $i++) {
            my $string;
            chomp ($string = <STDIN>);
            my @array = split(/\s+/, $string);
            my $number = shift @array;
            @sorted = sort @array;
            $sortedStr = join("", @sorted);
            push(@data, $sortedStr);
        }
        foreach (@data) {
            print "$_\n";
        }

    The program gives the correct output for the given test cases, but Facebook still shows it to be incorrect. Is there something wrong with the program?
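
    For what it's worth, a plain lexicographic sort of the words is not enough for sets like "jibw ji jp bw jibw"; the usual fix is to order the words by comparing the two possible concatenations of each pair. A Python sketch of that idea (not the poster's Perl):

        from functools import cmp_to_key

        def lowest_concat(words):
            # order so that a + b <= b + a for every adjacent pair
            def cmp(a, b):
                return -1 if a + b < b + a else (1 if a + b > b + a else 0)
            return ''.join(sorted(words, key=cmp_to_key(cmp)))

        print(lowest_concat("jibw ji jp bw jibw".split()))  # bwjibwjibwjijp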

    Read the article

  • mysql select update

    - by Tillebeck
    Hi, I have read quite a few select+update questions on here but cannot understand how to do it, so I will have to ask from the beginning. I would like to update a table based on data in another table. The setup is like this:

        TABLE a (int, string)
        ID   WORD
        1    banana
        2    orange
        3    apple

        TABLE b ("comma separated" string, string)
        WORDS   TEXTAREA
        0       banana                -> 0,1
        0       orange apple apple    -> BEST: 0,2,3  ELSE 0,2,3,3
        0       banana orange apple   -> 0,1,2,3

    Now I would like, for each word in TABLE a, to append ",a.ID" to b.WORDS, something like:

        SELECT id, word FROM a
        (for each) -> UPDATE b SET words = CONCAT(words, ',', a.id) WHERE b.textarea LIKE '%a.word%'

    Or even better: replace the word found in b.textarea with ",a.id", so that it is the b.textarea that ends up being a comma-separated string of IDs... But I do not know if that is possible.

    Read the article

  • Why are my bound parameters all identical (using Linq)?

    - by Scott Stafford
    When I run this snippet of code:

        string[] words = new string[] { "foo", "bar" };
        var results = from row in Assets select row;
        foreach (string word in words)
        {
            results = results.Where(row => row.Name.Contains(word));
        }

    I get this SQL:

        -- Region Parameters
        DECLARE @p0 VarChar(5) = '%bar%'
        DECLARE @p1 VarChar(5) = '%bar%'
        -- EndRegion
        SELECT ...
        FROM [Assets] AS [t0]
        WHERE ([t0].[Name] LIKE @p0) AND ([t0].[Name] LIKE @p1)

    Note that @p0 and @p1 are both bar, when I wanted them to be foo and bar. I guess LINQ is somehow binding a reference to the variable word rather than a reference to the string currently referenced by word? What is the best way to avoid this problem? (Also, if you have any suggestions for a better title for this question, please put it in the comments.)

    Note that I tried this with regular LINQ also, with the same results (you can paste this right into LINQPad):

        string[] words = new string[] { "f", "a" };
        string[] dictionary = new string[] { "foo", "bar", "jack", "splat" };
        var results = from row in dictionary select row;
        foreach (string word in words)
        {
            results = results.Where(row => row.Contains(word));
        }
        results.Dump();

    Dumps:

        bar
        jack
        splat
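
    This is the classic closure-over-the-loop-variable behaviour: each lambda captures the variable word, not the value it held on that iteration, and the query is only evaluated after the loop has finished. The commonly cited C# fix is to copy word into a fresh local variable inside the loop and use that in the lambda. The same effect is easy to reproduce outside LINQ; a small Python illustration of the pitfall and the per-iteration-binding fix:

        words = ["foo", "bar"]

        filters = [lambda row: word in row for word in words]       # every lambda closes over the same 'word'
        print([f("foo") for f in filters])                           # [False, False] -- both see 'bar'

        filters = [lambda row, w=word: w in row for word in words]  # bind the current value per iteration
        print([f("foo") for f in filters])                           # [True, False]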

    Read the article

  • Password best practices

    - by pcampbell
    Given the recent events with a 'hacker' learning and retrying passwords from website administrators, what can we suggest to everyone about best practices when it comes to passwords?

    - Use unique passwords between sites (i.e. never re-use a password).
    - Words found in the dictionary are to be avoided.
    - Consider using words or phrases from a non-English language.
    - Use pass phrases, and use the first letter of each word.
    - l33tifying doesn't help very much.

    Please suggest more!
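
    On the pass-phrase suggestion, a small hedged sketch of diceware-style generation (the word-list path is a placeholder; strength depends on the list size and the number of words drawn):

        import secrets

        def passphrase(wordlist, n_words=5):
            """Pick n_words uniformly at random with a CSPRNG and join them."""
            return ' '.join(secrets.choice(wordlist) for _ in range(n_words))

        with open('/usr/share/dict/words') as f:   # placeholder word list
            words = [w.strip() for w in f if w.strip().isalpha()]
        print(passphrase(words))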

    Read the article

  • What are your favourite less well known keyboard shortcuts in Windows?

    - by Richard Szalay
    The title says it all. Here are a few of my favourites to get things started:

    - WINDOWS + BREAK - System Properties
    - CTRL + SHIFT + ESC - Task Manager
    - SHIFT + F10 - Context Menu (on keyboards without a CM key)

    And general text selection:

    - HOME/END - Go to the start/end of the current line
    - CTRL + HOME/END - Go to the start/end of the document
    - CTRL + LEFT/RIGHT - Skip entire words
    - SHIFT + CTRL + LEFT/RIGHT - Select entire words

    Edit: Just remembered this one: SHIFT + DEL - Delete file (skip recycle bin)

    Read the article

  • Moving Data from One Column into Six Columns

    - by Alex Rudd
    I have an Excel sheet with six columns that are currently all combined into one column. I need to separate them out, but the issue is that the first column contains words that are sometimes one word and sometimes two. Here is an example:

        Twin            70   442  186  310  221
        Twin Futon      70   389  160  272  195
        XL twin         70   463  196  324  231
        XL Twin Futon   70   418  174  293  209
        Double          100  590  245  413  295

    How can I separate these data sets while keeping the words all in the same columns?
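
    If a quick script outside Excel is acceptable (e.g. on a text export of the combined column), one hedged approach is to split from the right, on the assumption that the last five fields are always numeric and everything before them is the name:

        rows = []
        with open('export.txt') as f:            # placeholder: the combined column saved as text
            for line in f:
                parts = line.split()
                if len(parts) < 6:
                    continue                     # skip blank or malformed lines
                name = ' '.join(parts[:-5])      # one- or two-word (or longer) name
                rows.append([name] + parts[-5:])

        for r in rows:
            print('\t'.join(r))                  # tab-separated, ready to paste back into Excel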

    Read the article

  • What is an efficient way to write a password-cracking algorithm (Python)?

    - by Luminance
    This problem might be relatively simple, but I'm given two text files. One text file contains passwords encrypted via crypt.crypt in Python. The other contains over 400k+ normal dictionary words. I'm given three different functions which transform a string: one produces all the different permutations of capitalization, one transforms a letter to a number if it looks alike (e.g. G -> 6, B -> 8), and one reverses a string. Given the 10-20 encrypted passwords in the password file, what is the most efficient way, in Python, to run those functions over every dictionary word in the words file? It is given that each of those passwords is the encryption of some dictionary word transformed in one of these ways. Here is the function which checks whether a given string, when encrypted, is the same as the encrypted password passed in:

        def check_pass(plaintext, encrypted):
            crypted_pass = crypt.crypt(plaintext, encrypted)
            if crypted_pass == encrypted:
                return True
            else:
                return False

    Thanks in advance.
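
    A hedged sketch of one optimisation that usually matters here: crypt each candidate only once per distinct salt and look the result up in a set of target hashes, rather than calling check_pass once per password. It assumes classic DES crypt (salt = first two characters, Unix-only crypt module) and, for brevity, that each transform returns a single string; the capitalisation transform would really yield many candidates per word:

        import crypt                      # Unix-only standard-library module
        from itertools import product

        def crack(words, encrypted_passwords, transforms):
            remaining = set(encrypted_passwords)
            salts = {e[:2] for e in remaining}        # DES crypt: first two chars are the salt
            cracked = {}
            for word in words:
                candidates = {word} | {t(word) for t in transforms}
                for cand, salt in product(candidates, salts):
                    h = crypt.crypt(cand, salt)
                    if h in remaining:
                        cracked[h] = cand
                        remaining.discard(h)
                if not remaining:
                    break
            return cracked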

    Read the article

  • word frequency problem

    - by davit-datuashvili
    Hi, in Programming Pearls I have met the following problem. The question is this: print words in decreasing frequency. Or, as I understand the problem: suppose there is a given string array, let's call it s (the words I have taken randomly, it does not matter):

        String s[] = {"cat","cat","dog","fox","cat","fox","dog","cat","fox"};

    and we see that the string cat occurs 4 times, fox 3 times and dog 2 times, so the result will be: cat fox dog. I have the following code in Java:

        import java.util.*;
        public class string {
            public static void main(String[] args) {
                String s[] = {"fox","cat","cat","fox","dog","cat","fox","dog","cat"};
                Arrays.sort(s);
                int counts;
                int count[] = new int[s.length];
                for (int i=0; i
            }
        }

    I have sorted the array and created a count array where I write the number of occurrences of each word. The problem is that somehow the index of the integer array element and of the string array element are not the same. How do I do it such that it prints words according to the maximum elements of the integer array? Please help me.
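
    The poster is working in Java, so only as a language-neutral illustration of the counting-and-sorting step (a hedged Python sketch, not the book's solution):

        from collections import Counter

        words = ["fox", "cat", "cat", "fox", "dog", "cat", "fox", "dog", "cat"]
        for word, count in Counter(words).most_common():
            print(word, count)
        # cat 4
        # fox 3
        # dog 2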

    Read the article

  • How to increase detail band height dynamically

    - by Chandu
    Hi all, I am using iReport. My requirement is to increase the detail band height dynamically when a text field has more data; are there any settings for it? I am using one text field in the detail band, and when it has more information (more words), only some of the information is displayed, i.e. the words are cut off. How much of the text shows depends on the detail band height, so I would like the band height to grow dynamically when the text field has more data. Please advise me on this. Regards, Chandu

    Read the article

  • emacs & icicles: auto-completion from a dictionary?

    - by Mica
    I just installed Icicles with Emacs, and so far I am liking it a lot. I'm not entirely sure if this is possible, but I would like to implement (or use, if it already exists) a feature in Icicles that would auto-complete words from an English dictionary. So, if I'm writing something and need a word that rhymes with floor, I can type in *or; or even better, for alliteration, I can type in flo* and have it return all the words from the dictionary that start with flo.

    Questions: Does something like this exist? If it does not exist, what would be the best way to go about it? Should I somehow hook into aspell? Or just index a long file of words?

    Read the article
