Search Results

Search found 6107 results on 245 pages for 'reserved words'.

  • Code Golf: Quickly Build List of Keywords from Text, Including # of Instances

    - by Jonathan Sampson
    I've already worked out this solution for myself with PHP, but I'm curious how it could be done differently, or even better. The two languages I'm primarily interested in are PHP and JavaScript, but I'd be interested in seeing how quickly this could be done in any other major language as well (mostly C#, Java, etc.). Requirements:

      - Return only words with an occurrence greater than X.
      - Return only words with a length greater than Y.
      - Ignore common terms like "and, is, the, etc."
      - Feel free to strip punctuation prior to processing (i.e. "John's" becomes "John").
      - Return results in a collection/array.

    Extra credit: keep quoted statements together (i.e. "They were 'too good to be true' apparently"), where 'too good to be true' would be the actual statement.

    Extra-extra credit: can your script determine words that should be kept together based upon their frequency of being found together, without knowing the words beforehand? Example: "The fruit fly is a great thing when it comes to medical research. Much study has been done on the fruit fly in the past, and has led to many breakthroughs. In the future, the fruit fly will continue to be studied, but our methods may change." Clearly the term here is "fruit fly", which is easy for us to find. Can your search-and-scrape script determine this too?

    Source text: http://sampsonresume.com/labs/c.txt

    Answer format: it would be great to see the results of your code (the output), in addition to how long the operation took.
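    A rough sketch of the core requirements (minimum occurrence X, minimum length Y, stop-word filtering, punctuation stripping) is shown below in Python rather than PHP or JavaScript; the stop-word list, thresholds and sample text are placeholders, not part of the original question.

    ```python
    import re
    from collections import Counter

    STOP_WORDS = {"and", "is", "the", "a", "an", "of", "to", "in", "it"}  # placeholder list

    def keywords(text, min_count=2, min_length=3):
        # Lowercase, keep only letter runs (so "John's" collapses to "john"),
        # drop stop words and short words, then count what is left.
        tokens = [t.split("'")[0] for t in re.findall(r"[a-z']+", text.lower())]
        counts = Counter(t for t in tokens
                         if t and t not in STOP_WORDS and len(t) > min_length)
        return {word: n for word, n in counts.items() if n > min_count}

    if __name__ == "__main__":
        sample = "The fruit fly is a great thing. The fruit fly is studied often."
        print(keywords(sample, min_count=1, min_length=2))  # -> {'fruit': 2, 'fly': 2}
    ```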

  • Problem: Sorting for GridView/ObjectDataSource changes depending on page

    - by user148298
    I have a GridView tied to an ObjectDataSource using paging. The paging works fine, except that the sort order changes depending on which page of the results is being viewed. This causes items to reappear on subsequent pages, among other issues. I traced the problem to my DAL, which reads a page at a time and then sorts it. Obviously the sorting is going to change as the result set size changes. Is there an improvement to this algorithm? I would like to use a data reader if possible:

        [System.ComponentModel.DataObjectMethod(System.ComponentModel.DataObjectMethodType.Select)]
        public static WordsCollection LoadForCriteria(string sqlCriteria, int maximumRows, int startRowIndex, string sortExpression)
        {
            //DEFAULT SORT EXPRESSION
            if (string.IsNullOrEmpty(sortExpression)) sortExpression = "OrderBy";

            //CREATE THE DYNAMIC SQL TO LOAD OBJECT
            StringBuilder selectQuery = new StringBuilder();
            selectQuery.Append("SELECT");
            if (maximumRows > 0) selectQuery.Append(" TOP " + (startRowIndex + maximumRows).ToString());
            selectQuery.Append(" " + Words.GetColumnNames(string.Empty));
            selectQuery.Append(" FROM sw_Words");
            string whereClause = string.IsNullOrEmpty(sqlCriteria) ? string.Empty : " WHERE " + sqlCriteria;
            selectQuery.Append(whereClause);
            selectQuery.Append(" ORDER BY " + sortExpression);
            Database database = Token.Instance.Database;
            DbCommand selectCommand = database.GetSqlStringCommand(selectQuery.ToString());

            //EXECUTE THE COMMAND
            WordsCollection results = new WordsCollection();
            int thisIndex = 0;
            int rowCount = 0;
            using (IDataReader dr = database.ExecuteReader(selectCommand))
            {
                while (dr.Read() && ((maximumRows < 1) || (rowCount < maximumRows)))
                {
                    if (thisIndex >= startRowIndex)
                    {
                        Words varWords = new Words();
                        Words.LoadDataReader(varWords, dr);
                        results.Add(varWords);
                        rowCount++;
                    }
                    thisIndex++;
                }
                dr.Close();
            }
            return results;
        }

  • "<" operator error

    - by Nona Urbiz
    Why is the (i < UniqueWords.Count) expression valid in the for loop, but gives a "CS0019: Operator '<' cannot be applied to operands of type 'int' and 'method group'" error when placed in my if? They are both string arrays, previously declared.

        for (int i = 0; i < UniqueWords.Count; i++) {
            Occurrences[i] = Words.Where(x => x.Equals(UniqueWords[i])).Count();
            Keywords[i] = UniqueWords[i];
            if (i < UniqueURLs.Count) { rURLs[i] = UniqueURLs[i]; }
        }

    EDITED to add declarations:

        List<string> Words = new List<string>();
        List<string> URLs = new List<string>();
        //elements added like so...
        Words.Add(referringWords); //these are strings
        URLs.Add(referringURL);
        UniqueWords = Words.Distinct().ToList();
        UniqueURLs = URLs.Distinct().ToList();

    SOLVED. Thank you, parentheses were needed for the method .Count(); I still do not fully understand why they are not always necessary. Jon Skeet, thanks, I guess I don't understand what exactly the declarations are either, then? You wanted the actual values assigned? They are pulled from an external source, but are strings. I get it! Thanks (the ()'s at least).

  • Printing Arrays from Structs

    - by Carlll
    I've been stumped for a few hours on an exercise where I must use functions to build up an array inside a struct and print it. In my current program, it compiles but crashes upon running.

        #define LIM 10

        typedef char letters[LIM];

        typedef struct {
            int counter;
            letters words[LIM];
        } foo;

        int main(int argc, char **argv) {
            foo apara;
            structtest(apara, LIM);
            print_struct(apara);
        }

        int structtest(foo *p, int limit) {
            p->counter = 0;
            int i = 0;
            for (i; i < limit; i++) {
                strcpy(p->words[p->counter], "x"); //only filling arrays with 'x' as an example
                p->counter++;
            }
            return;
        }

    I do believe it's due to my incorrect usage/combination of pointers. I've tried adjusting them, but either an 'incompatible types' error is produced, or the array is seemingly blank.

        void print_struct(foo p) {
            printf(p.words);
        }

    I haven't made it successfully up to the print_struct stage, but I'm unsure whether p.words is the correct item to be calling. In the output, I would expect the function to return an array of x's. I apologize in advance if I've made some sort of grievous "I should already know this" C mistake. Thanks for your help.

  • permutations gone wrong

    - by vbNewbie
    I have written code to implement an algorithm I found on string permutations. What I have is an ArrayList of words (up to 200), and I need to permute the list in levels of 5: basically, group the words in fives and permute each group. What I have takes the first 5 words, generates the permutations, and ignores the rest of the ArrayList. Any ideas appreciated.

        Private Function permute(ByVal chunks As ArrayList, ByVal k As Long) As ArrayList
            ReDim ItemUsed(k)
            pno = 0
            Permutate(k, 1)
            Return chunks
        End Function

        Private Shared Sub Permutate(ByVal K As Long, ByVal pLevel As Long)
            Dim i As Long, Perm As String
            Perm = pString ' Save the current Perm
            ' for each value currently available
            For i = 1 To K
                If Not ItemUsed(i) Then
                    If pLevel = 1 Then
                        pString = chunks.Item(i)
                        'pString = inChars(i)
                    Else
                        pString = pString & chunks.Item(i)
                        'pString += inChars(i)
                    End If
                    If pLevel = K Then
                        'got next Perm
                        pno = pno + 1
                        SyncLock outfile
                            outfile.WriteLine(pno & " = " & pString & vbCrLf)
                        End SyncLock
                        outfile.Flush()
                        Exit Sub
                    End If
                    ' Mark this item unavailable
                    ItemUsed(i) = True
                    ' gen all Perms at next level
                    Permutate(K, pLevel + 1)
                    ' Mark this item free again
                    ItemUsed(i) = False
                    ' Restore the current Perm
                    pString = Perm
                End If
            Next

    K above is equal to 5, for the number of words in one permutation, but when I change the For loop to the ArrayList size I get an index-out-of-bounds error.
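    If the goal is to split the word list into groups of five and write out every permutation of each group, the overall shape can be sketched quickly; the sketch below is in Python rather than VB.NET, and the chunk size and sample words are assumptions for illustration.

    ```python
    from itertools import permutations

    def chunked_permutations(words, size=5):
        # Walk the list in fixed-size chunks; the last chunk may be shorter.
        for start in range(0, len(words), size):
            chunk = words[start:start + size]
            # Emit every ordering of this chunk as one space-joined line.
            for perm in permutations(chunk):
                yield " ".join(perm)

    if __name__ == "__main__":
        sample = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta"]
        for number, line in enumerate(chunked_permutations(sample), 1):
            print(number, "=", line)
    ```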

  • SQL with Regular Expressions vs Indexes with Logical Merging Functions

    - by geeko
    Hello lads, I am trying to develop a complex textual search engine. I have thousands of text pages from many books. I need to find pages that satisfy complex logical criteria. These criteria can contain virtually any combination of the following:

      A: Full words.
      B: Word roots (similar to stems, i.e. all words with certain key letters).
      C: Word templates (in some languages, words are filled into certain templates to form various parts of speech, such as adjectives or past/present verbs).
      D: Logical connectives: AND/OR/XOR/NOT/IF/IFF, and parentheses to state priorities.

    Now, would it be faster to keep the pages' full text in the database (not indexed) and search through them all using SQL and regular expressions? Or would it be better to construct indexes of word/root/template-page-location tuples, so that searching for individual words/roots/templates is fast? However, the index approach gets tricky as logical connectives are introduced into the query. I thought of doing the following steps in such cases:

      1: Separately search for each individual word/root/template in the specified query.
      2: On a priority basis, merge two result lists (from step 1) at a time, depending on the logical connective.

    For example, if we are searching for "he AND (is OR was)":

      1: Search for "he", "is" and "was" separately, getting a result list for each word.
      2: Merge the result lists of "is" and "was" using the merging function OR-MERGE.
      3: Merge the merged result list from the OR-MERGE function with the one for "he" using the merging function AND-MERGE.

    The result of step 3 is then returned as the result of the specified query. What do you think, gurus? Which is faster? Any better ideas? Thank you all in advance.
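    The index-plus-merge idea can be prototyped with an inverted index that maps each term to the set of page ids containing it; OR-MERGE is then a set union and AND-MERGE an intersection. A minimal Python sketch of that scheme; the whitespace tokenizer and example pages are placeholders, and roots/templates would need their own normalisation step before indexing.

    ```python
    from collections import defaultdict

    def build_index(pages):
        # pages: {page_id: text}; the index maps each term to the pages containing it.
        index = defaultdict(set)
        for page_id, text in pages.items():
            for term in text.lower().split():
                index[term].add(page_id)
        return index

    def or_merge(a, b):
        return a | b    # pages matching either operand

    def and_merge(a, b):
        return a & b    # pages matching both operands

    if __name__ == "__main__":
        pages = {1: "he was here", 2: "she is here", 3: "he is away"}
        idx = build_index(pages)
        # "he AND (is OR was)"
        result = and_merge(idx["he"], or_merge(idx["is"], idx["was"]))
        print(sorted(result))   # -> [1, 3]
    ```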

  • dynamically created arrays

    - by DevAno1
    My task consists of two parts. First I have to create a global char array of 100 elements and insert some text into it using cin. Afterwards, I calculate the number of chars and create a dedicated array with the length of the inputted text. I was thinking about the following solution:

        char[100]inputData;

        int main() {
            cin >> inputData >> endl;
            int length = 0;
            for (int i = 0; i < 100; i++) {
                while (inputData[i] == "\0") {
                    ++count;
                }
            }
            char c = new char[count];

    Am I thinking along the right lines? The second part of the task is to introduce into the first program a dynamically created array of pointers to all the inserted words. Adding a new word should print all the previous words, and if there is no space for the next words, the size of the inputData array should be doubled. To be honest, this is a bit too much for me. How can I create pointers to the words specifically? And how can I increase the size of the global array without losing its content? With some temporary array?

  • How do I locate a particular word in a text file using .NET

    - by cmrhema
    I am sending mails (in ASP.NET, C#), using a template in a text file (.txt) like the one below:

        User Name : <User Name>
        Address : <Address>

    I used to replace the words within the angle brackets in the text file using the code below:

        StreamReader sr;
        sr = File.OpenText(HttpContext.Current.Server.MapPath(txt));
        copy = sr.ReadToEnd();
        sr.Close(); //close the reader
        copy = copy.Replace(word.ToUpper(), "#" + word.ToUpper()); //remove the word specified UC
        //save new copy into existing text file
        FileInfo newText = new FileInfo(HttpContext.Current.Server.MapPath(txt));
        StreamWriter newCopy = newText.CreateText();
        newCopy.WriteLine(copy);
        newCopy.Write(newCopy.NewLine);
        newCopy.Close();

    Now I have a new problem: the user will be adding new words within angle brackets, say for example <Salary>. In that case I have to read the file and find the word <Salary>. In other words, I have to find all the words that are located within angle brackets (<...>). How do I do that?
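    Finding every <...> placeholder is a job for a small regular expression; the pattern below is shown in Python for illustration, but the same pattern can be used with .NET's Regex class. The sample template string is made up.

    ```python
    import re

    template = "User Name : <User Name>  Address : <Address>  Pay : <Salary>"

    # Capture everything between an opening '<' and the next '>',
    # i.e. every user-defined placeholder in the template.
    placeholders = re.findall(r"<([^>]+)>", template)
    print(placeholders)   # -> ['User Name', 'Address', 'Salary']
    ```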

  • Can you recommend a full-text search engine?

    - by Jen
    Can you recommend a full-text search engine? (Preferably open source.) I have a database of many (though relatively short) HTML documents. I want users to be able to search this database by entering one or more search words in my C++ desktop application. Hence, I'm looking for a fast full-text search solution to integrate with my app. Ideally, it should:

      - Skip common words, such as the, of, and, etc.
      - Support stemming, i.e. a search for run also finds documents containing runner, running and ran.
      - Be able to update its index in the background as new documents are added to the database.
      - Be able to provide search word suggestions (like Google Suggest).
      - Have a well-documented API.

    To illustrate, assume the database has just two documents:

        Document 1: This is a test of text search.
        Document 2: Testing is fun.

    The following words should be in the index: fun, search, test, testing, text. If the user types t in the search box, I want the application to be able to suggest test, testing and text (ideally, the application should be able to query the search engine for the 10 most common search words starting with t). A search for testing should return both documents. Other points:

      - I don't need multi-user support.
      - I don't need support for complex queries.
      - The database resides on the user's computer, so the indexing should be performed locally.

    Can you suggest a C or C++ based solution? (I've briefly reviewed CLucene and Xapian, but I'm not sure if either will address my needs, especially querying the search word indexes for the suggest feature.)
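    The suggest requirement on its own is easy to prototype: keep the index terms sorted, binary-search for the typed prefix, and rank the matches by how often each term occurs. A rough Python sketch of that idea, using invented counts for the two-document example above:

    ```python
    import bisect

    def suggest(term_counts, prefix, limit=10):
        # term_counts maps each index term to how many documents contain it.
        terms = sorted(term_counts)
        start = bisect.bisect_left(terms, prefix)
        matches = []
        for term in terms[start:]:
            if not term.startswith(prefix):
                break
            matches.append(term)
        # Most frequent suggestions first, then alphabetical.
        matches.sort(key=lambda t: (-term_counts[t], t))
        return matches[:limit]

    if __name__ == "__main__":
        counts = {"fun": 1, "search": 1, "test": 1, "testing": 1, "text": 1}
        print(suggest(counts, "t"))   # -> ['test', 'testing', 'text']
    ```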

  • adapting combination code for larger list

    - by vbNewbie
    I have the following code to generate combinations of strings for a small list, and would like to adapt it for a large list of over 300 words. Can anyone suggest how to alter this code, or a different method to use?

        Public Class combinations
            Public Shared Sub main()
                Dim myAnimals As String = "cat dog horse ape hen mouse"
                Dim myAnimalCombinations As String() = BuildCombinations(myAnimals)
                For Each combination As String In myAnimalCombinations
                    'Look on the Output Tab for the results!
                    Console.WriteLine("(" & combination & ")")
                Next combination
                Console.ReadLine()
            End Sub

            Public Shared Function BuildCombinations(ByVal inputString As String) As String()
                'Separate the sentence into useable words.
                Dim wordsArray As String() = inputString.Split(" ".ToCharArray)
                'A place to store the results as we build them
                Dim returnArray() As String = New String() {""}
                'The 'combination level' that we're up to
                Dim wordDistance As Integer = 1
                'Go through all the combination levels...
                For wordDistance = 1 To wordsArray.GetUpperBound(0)
                    'Go through all the words at this combination level...
                    For wordIndex As Integer = 0 To wordsArray.GetUpperBound(0) - wordDistance
                        'Get the first word of this combination level
                        Dim combination As New System.Text.StringBuilder(wordsArray(wordIndex))
                        'And add all the remaining words at this combination level
                        For combinationIndex As Integer = 1 To wordDistance
                            combination.Append(" " & wordsArray(wordIndex + combinationIndex))
                        Next combinationIndex
                        'Add this combination to the results
                        returnArray(returnArray.GetUpperBound(0)) = combination.ToString
                        'Add a new row to the results, ready for the next combination
                        ReDim Preserve returnArray(returnArray.GetUpperBound(0) + 1)
                    Next wordIndex
                Next wordDistance
                'Get rid of the last, blank row.
                ReDim Preserve returnArray(returnArray.GetUpperBound(0) - 1)
                'Return combinations to the calling method.
                Return returnArray
            End Function
        End Class
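    With over 300 words the number of combinations grows explosively, so whatever the language, the usual adaptation is to stream results lazily instead of accumulating them in an ever-resized array. A sketch of that shape in Python, where itertools does the combining; the max_size cap and sample words are assumptions, not something from the original post.

    ```python
    from itertools import combinations

    def all_combinations(words, max_size=None):
        # Lazily yield every combination of the given words, smallest first.
        # For 300 words the full set is astronomically large, so a generator
        # (rather than a growing in-memory array) is the only practical shape.
        top = max_size if max_size is not None else len(words)
        for size in range(1, top + 1):
            for combo in combinations(words, size):
                yield " ".join(combo)

    if __name__ == "__main__":
        animals = "cat dog horse ape hen mouse".split()
        for combo in all_combinations(animals, max_size=2):
            print("(" + combo + ")")
    ```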

  • Is this a variation of the traveling salesman problem?

    - by Ville Koskinen
    I'm interested in a function of two word lists which would return an order-agnostic edit distance between them. That is, the arguments would be two lists of (let's say space-delimited) words, and the return value would be the minimum sum of the edit (or Levenshtein) distances of the words in the lists. The distance between "cat rat bat" and "rat bat cat" would be 0. The distance between "cat rat bat" and "fat had bad" would be the same as the distance between "rat bat cat" and "had fat bad": 4. In the case where the numbers of words in the lists are not the same, the shorter list would be padded with 0-length words. My intuition (which hasn't been nurtured with computer science classes) does not find any other solution than to use brute force:

           |had|fat|bad|         a solution
        ---+---+---+---+        +---+---+---+
        cat| 2 | 1 | 2 |        |   | 1 |   |
        ---+---+---+---+        +---+---+---+
        rat| 2 | 1 | 2 |        | 3 |   |   |
        ---+---+---+---+        +---+---+---+
        bat| 2 | 1 | 1 |        |   |   | 4 |
        ---+---+---+---+        +---+---+---+

    Starting from the first row, pick a column and go to the next rows without ever revisiting a column you have already visited. Do this over and over again until you've tried all combinations. To me this sounds a bit like the traveling salesman problem. Is it, and how would you solve my particular problem?
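    What is being described is the assignment problem rather than the travelling salesman problem: each word in one list has to be matched to a distinct word in the other so that the summed edit distances are minimal. A brute-force Python sketch over permutations is below; it is fine for short lists like the examples, and the permutation loop would be replaced by the Hungarian algorithm for longer ones.

    ```python
    from itertools import permutations

    def edit_distance(a, b):
        # Classic Levenshtein dynamic programming, two rows at a time.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def list_distance(words_a, words_b):
        # Pad the shorter list with empty words, then try every assignment.
        if len(words_a) < len(words_b):
            words_a, words_b = words_b, words_a
        words_b = list(words_b) + [""] * (len(words_a) - len(words_b))
        return min(sum(edit_distance(a, b) for a, b in zip(words_a, perm))
                   for perm in permutations(words_b))

    if __name__ == "__main__":
        print(list_distance("cat rat bat".split(), "rat bat cat".split()))  # -> 0
        print(list_distance("cat rat bat".split(), "fat had bad".split()))  # -> 4
    ```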

  • How to mix Grammar (Rules) & Dictation (Free speech) with SpeechRecognizer in C#

    - by Lee Englestone
    I really like Microsoft's latest speech recognition (and SpeechSynthesis) offerings:

        http://msdn.microsoft.com/en-us/library/ms554855.aspx
        http://estellasays.blogspot.com/2009/04/speech-recognition-in-cnet.html

    However, I feel somewhat limited when using grammars. Don't get me wrong, grammars are great for telling the speech recognition exactly which words/phrases to look out for, but what if I want it to recognise something I've not given it a heads-up about? Or I want to parse a phrase which is half pre-determined command name and half random words? For example:

    Scenario A - I say "Google [Oil Spill]" and I want it to open Google with search results for the term in brackets, which could be anything.

    Scenario B - I say "Locate [Manchester]" and I want it to search for Manchester in Google Maps, or anything else non-pre-determined.

    I want it to know that 'Google' and 'Locate' are commands and that what comes after them are parameters (and could be anything).

    Question: does anyone know how to mix the use of pre-determined grammars (words the speech recognition should recognise) and words not in its pre-determined grammar? Code fragments:

        using System.Speech.Recognition;
        ...
        SpeechRecognizer rec = new SpeechRecognizer();
        rec.SpeechRecognized += rec_SpeechRecognized;

        var c = new Choices();
        c.Add("search");
        var gb = new GrammarBuilder(c);
        var g = new Grammar(gb);

        rec.LoadGrammar(g);
        rec.Enabled = true;
        ...
        void rec_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "search")
            {
                string query = "How can I get a word not defined in Grammar recognised and passed into here!";
                launchGoogle(query);
            }
        }
        ...
        private void launchGoogle(string term)
        {
            Process.Start("IEXPLORE", "google.com?q=" + term);
        }

  • How to classify NN/NNP/NNS obtained from POS tagged document as a product feature

    - by Shweta .......
    I'm planning to perform sentiment analysis on reviews of product features (collected from an Amazon dataset). I have extracted the review text from the dataset and performed POS tagging on it. I'm able to extract NN/NNP as well. But my doubt is: how do I come to know whether the extracted words qualify as features of the products? I know there are classifiers in NLTK, but I don't know how I should use them for my project. I'm assuming there are two ways of finding out whether an extracted word is a product feature or not. One is to compare it against a bag of words and find out if my word exists in it. Doubt: how do I create/get a bag of words? The second way is to implement some kind of Apriori algorithm to find frequently occurring words as features. I would like to know which method is good and how to go about implementing it. Some pointers to available software or code snippets would be helpful! Thanks!
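    The second idea (treat frequently mentioned nouns as candidate features) can be prototyped with nothing more than a counter over the POS-tagged reviews; the tagged sentences and the support threshold below are invented for illustration.

    ```python
    from collections import Counter

    def candidate_features(tagged_reviews, min_support=2):
        # tagged_reviews: list of reviews, each a list of (word, pos) pairs,
        # e.g. the output of a POS tagger. Keep nouns only and count them.
        counts = Counter(word.lower()
                         for review in tagged_reviews
                         for word, pos in review
                         if pos in ("NN", "NNS", "NNP"))
        return [w for w, n in counts.items() if n >= min_support]

    if __name__ == "__main__":
        tagged = [
            [("the", "DT"), ("battery", "NN"), ("life", "NN"), ("is", "VBZ"), ("great", "JJ")],
            [("battery", "NN"), ("drains", "VBZ"), ("fast", "RB")],
            [("nice", "JJ"), ("screen", "NN")],
        ]
        print(candidate_features(tagged, min_support=2))   # -> ['battery']
    ```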

  • Implementing a scrabble trainer

    - by bstullkid
    Hello, I've recently been playing a lot of online Scrabble, so I decided to make a program that quickly searches through a dictionary of 200,000+ words with an input of up to any 26 letters. My first attempt failed because it took a while when you input 8 or more letters (just a basic look through the dictionary, cancelling out a letter if it's found, kind of thing), so I made a tree-like structure in which each node contains only an array of 26 of the same structure and a flag to indicate the end of a word. With that it can output all possible words in under a second, even with an input of 26 characters. But it seems that when I input 12 or more letters, with some of the same characters repeated, I get duplicates; can anyone see why I would be getting duplicates with this code? (I'll post my program at the bottom.) Also, the next step once the duplicates are weeded out is to actually be able to input the letters on the game board and then have it calculate the best word you can make on a given board. I am having trouble trying to figure out a good algorithm that can analyze a scrabble board and an input of letters and output a result; the possible words that could be made I have no problem with, but actually checking a board efficiently (i.e. can this word fit here, or here, etc., without creating a non-dictionary word in the process on some other string of letters) is the hard part. Anyone have an idea for an approach to that? (Given a scrabble board and an input of 7 letters, find all possible valid words or word sets that you can make.) I forgot to email myself the code from my other computer, which is in another state... I'll post it on Monday when I get back there! By the way, the dictionary I'm using is SOWPODS (http://www.calvin.edu/~rpruim/scrabble/ospd3.txt).
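    One common way to avoid duplicate results with a tree of this kind is to walk it against a count of the remaining rack letters instead of the letters themselves, so repeated tiles can never drive the same branch twice. A rough Python sketch of that idea, where the tiny word list stands in for SOWPODS:

    ```python
    from collections import Counter

    def build_trie(words):
        root = {}
        for word in words:
            node = root
            for ch in word:
                node = node.setdefault(ch, {})
            node["$"] = True            # marks the end of a valid word
        return root

    def words_from_rack(trie, rack):
        results = set()

        def walk(node, remaining, prefix):
            if "$" in node:
                results.add(prefix)
            for ch, child in node.items():
                if ch != "$" and remaining[ch] > 0:
                    remaining[ch] -= 1   # consume one tile of this letter
                    walk(child, remaining, prefix + ch)
                    remaining[ch] += 1   # put the tile back

        walk(trie, Counter(rack), "")
        return sorted(results)

    if __name__ == "__main__":
        dictionary = ["at", "rat", "tar", "art", "tart", "tat"]
        print(words_from_rack(build_trie(dictionary), "ratt"))
        # -> ['art', 'at', 'rat', 'tar', 'tart', 'tat']
    ```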

  • Paragraph with normal opacity within greyed-out div

    - by dmr
    I am greying out a web page when a user doesn't have permission to access it. In order to do that, I am placing a div with a white background-color and a lowered opacity on top of the web page. I want to write some words in that div, with the words having normal opacity. As of now, the greyed-out background is showing correctly. However, I can't seem to get the words to be at regular opacity. The derived styles in Firebug show the opacity on the words as normal, but it clearly isn't. What am I doing wrong?

    The HTML:

        <div class="noPermission">
            <p>I'm sorry. You do not have permission to access this page.</p>
        </div>

    The CSS:

        div.noPermission {
            background-color: white;
            filter: alpha(opacity=50); /* IE */
            opacity: 0.5; /* Safari, Opera */
            -moz-opacity: 0.50; /* FireFox */
            z-index: 20;
            height: 100%;
            width: 100%;
            background-repeat: no-repeat;
            background-position: center;
            position: absolute;
            top: 0px;
            left: 0px;
        }

        div.noPermission p {
            color: black;
            margin: 300px auto auto 50px;
            text-align: left;
            font-weight: bold;
            font-size: 18px;
            display: block;
            width: 250px;
        }

  • Refactoring code/consolidating functions (e.g. nested for-loop order)

    - by bmay2
    Just a little background: I'm making a program where a user inputs a skeleton text, two numbers (lower and upper limit), and a list of words. The outputs are a series of modifications of the skeleton text. Sample inputs (replace # with the inputted integers and @ with words in the list):

        text = "Player # likes @."
        lower = 1
        upper = 3
        list = "apples, bananas, oranges"

    The user can choose to iterate over numbers first:

        Player 1 likes apples.
        Player 2 likes apples.
        Player 3 likes apples.

    Or words first:

        Player 1 likes apples.
        Player 1 likes bananas.
        Player 1 likes oranges.

    I chose to split these two methods of output by creating a different type of dictionary, based on either number keys (integers inputted by the user) or word keys (from words in the inputted list), and then later iterating over the values in the dictionary. Here are the two types of dictionary creation:

        def numkey(dict):  # {1: ['Player 1 likes apples', 'Player 1 likes...' ] }
            text, lower, upper, list = input_sort(dict)
            d = {}
            for num in range(lower, upper+1):
                l = []
                for i in list:
                    l.append(text.replace('#', str(num)).replace('@', i))
                d[num] = l
            return d

        def wordkey(dict):  # {'apples': ['Player 1 likes apples', 'Player 2 likes apples'..] }
            text, lower, upper, list = input_sort(dict)
            d = {}
            for i in list:
                l = []
                for num in range(lower, upper+1):
                    l.append(text.replace('#', str(num)).replace('@', i))
                d[i] = l
            return d

    It's fine that I have two separate functions for creating different types of dictionaries, but I see a lot of repetition between the two. Is there any way I could make one dictionary function and pass in different values to it that would change the order of the nested for loops to create the specific {key : value} pairs I'm looking for? I'm not sure how this would be done. Is there anything related to functional programming or other paradigms that might help with this? The question is a little abstract and more stylistic/design-oriented than anything.
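    One way to collapse the two functions into one is to pass in both iterables plus a flag that says which of them supplies the dictionary keys. A sketch of that refactor in the same spirit; note the signature takes the unpacked values directly rather than going through the input_sort helper used above.

    ```python
    def build_dict(text, lower, upper, words, key_on="number"):
        # key_on="number" groups lines under each integer;
        # key_on="word" groups lines under each word.
        numbers = range(lower, upper + 1)
        outer, inner = (numbers, words) if key_on == "number" else (words, numbers)
        result = {}
        for key in outer:
            lines = []
            for other in inner:
                num, word = (key, other) if key_on == "number" else (other, key)
                lines.append(text.replace("#", str(num)).replace("@", word))
            result[key] = lines
        return result

    if __name__ == "__main__":
        print(build_dict("Player # likes @.", 1, 3, ["apples", "bananas"], key_on="word"))
    ```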

  • How can I modify or add a new item to a generic list of strings?

    - by Sadegh
    Hi, I want to remove some noise words from a list of words.

        public System.String CleanNoiseWord(System.String word)
        {
            string key = word;
            if (word.Length <= 2)
                key = System.String.Empty;
            else
                key = word;
            //other validation here
            return key;
        }

        public IList<System.String> Clean(IList<System.String> words)
        {
            var oldWords = words;
            IList<System.String> newWords = new string[oldWords.Count()];
            string key;
            var i = 0;
            foreach (System.String word in oldWords)
            {
                key = this.CleanNoiseWord(word);
                if (!string.IsNullOrEmpty(key))
                {
                    newWords.RemoveAt(i);
                    newWords.Insert(i++, key);
                }
            }
            return newWords.Distinct().ToList();
        }

    But I can't add, remove or insert anything in the list! A NotSupportedException occurs: "Collection was of a fixed size." How can I modify or add new items to a generic list of strings?

  • Replace without the replace function

    - by Molly Potter
    Assignment: Let X and Y be two words. Find/Replace is a common word processing operation that finds each occurrence of word X and replaces it with word Y in a given document. Your task is to write a program that performs the Find/Replace operation. Your program will prompt the user for the word to be replaced (X), then the substitute word (Y). Assume that the input document is named input.txt. You must write the result of this Find/Replace operation to a file named output.txt. Lastly, you cannot use the replace() string function built into Python (it would make the assignment much too easy). To test your code, you should modify input.txt using a text editor such as Notepad or IDLE to contain different lines of text. Again, the output of your code must look exactly like the sample output.

    This is my code:

        input_data = open('input.txt','r')    #this opens the file to read it.
        output_data = open('output.txt','w')  #this opens a file to write to.
        userStr = (raw_input('Enter the word to be replaced:'))  #this prompts the user for a word
        userReplace = (raw_input('What should I replace all occurences of ' + userStr + ' with?'))  #this prompts the user for the replacement word
        for line in input_data:
            words = line.split()
            if userStr in words:
                output_data.write(line + userReplace)
            else:
                output_data.write(line)
        print 'All occurences of '+userStr+' in input.txt have been replaced by '+userReplace+' in output.txt'  #this tells the user that we have replaced the words they gave us
        input_data.close()   #this closes the documents we opened before
        output_data.close()

    It won't replace anything in the output file. Help!
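    A word-by-word rebuild of each line is usually what this kind of assignment is after: split the line, swap matching words, and join the pieces back together, never touching str.replace(). A sketch of that idea, written for Python 3 (so input() rather than raw_input()) and normalising whitespace within each line; the file names follow the assignment.

    ```python
    def replace_word(line, old, new):
        # Rebuild the line one word at a time instead of using str.replace().
        return " ".join(new if word == old else word for word in line.split())

    def main():
        old = input("Enter the word to be replaced: ")
        new = input("What should I replace all occurrences of " + old + " with? ")
        with open("input.txt") as src, open("output.txt", "w") as dst:
            for line in src:
                dst.write(replace_word(line.rstrip("\n"), old, new) + "\n")
        print("All occurrences of", old, "in input.txt have been replaced by",
              new, "in output.txt")

    if __name__ == "__main__":
        main()
    ```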

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose: it stores my text as I write fiction. (I know, I know... geeky.) I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use a straight word count, as it ignores editing and compression, both vital parts of writing. I think I want to track (words added) + (words removed), which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable). Is there a flag to git-diff to give diffs on a word-by-word basis? Alternately, is there a solution using standard command-line tools to compute the metric above?
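    One way to get the (words added) + (words removed) figure without depending on any particular diff format is to pull both versions of a file out of git and diff their word lists directly. A sketch of that in Python; the revision names and file path are placeholders. (Recent versions of git also have a word-level diff mode, git diff --word-diff, which may already be close to the magic incantation.)

    ```python
    import re
    import subprocess
    from difflib import SequenceMatcher

    def words_at(revision, path):
        # Ask git for the file contents as of the given revision.
        text = subprocess.run(["git", "show", f"{revision}:{path}"],
                              capture_output=True, text=True, check=True).stdout
        return re.findall(r"\w+", text)   # "word" = runs of \w, per the question

    def word_churn(rev_a, rev_b, path):
        old, new = words_at(rev_a, path), words_at(rev_b, path)
        added = removed = 0
        for op, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
            if op in ("delete", "replace"):
                removed += i2 - i1
            if op in ("insert", "replace"):
                added += j2 - j1
        return added + removed

    if __name__ == "__main__":
        # Compare the last two commits of a hypothetical chapter1.txt
        print(word_churn("HEAD~1", "HEAD", "chapter1.txt"))
    ```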

  • Extending / changing how Zend_Search_Lucene searches

    - by Grant Collins
    Hi, I am currently using Zend_Search_Lucene to index and search a number of documents, currently around 1000 or so. What I would like to do is change how the engine scores hits on a document from the current default. Zend_Search_Lucene scores on the frequency of hits within a document, so a document that has 10 matches of the word PHP will score higher than a document with only 3 matches of PHP. What I am trying to do is pass a number of keywords and score depending on how many of those keywords hit. E.g. I pass 5 keywords, say PHP, MySQL, Javascript, HTML and CSS, that I search against the index. One document matches 3 of those keywords and another matches all 5; the one matching all 5 scores the highest. The number of instances of those words in the document does not concern me. Now, I've had a quick look at Zend_Search_Lucene_Search_Similarity, however I have to confess that I am not sure (or bright enough) how to use it to achieve what I am after. Is what I want to do possible using Lucene, or is there a better solution out there?

  • is it better to query database or grab from file? php & mysql

    - by pfunc
    I am keeping a large number of words in a database that I want to match articles against. I was thinking that it would be better to keep these words in an array and grab that array whenever needed, instead of querying the database every time (since the words won't be changing that much). Is there much performance difference in doing this? And if I were to do this, how do I write a script that writes the array to a new PHP file? I tried writing the array like so:

        while( $row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }
        $fp = fopen('noWordsArr.php', 'w');
        fwrite($fp, $newArray);
        fclose($fp);

    But all I get in the other file is "Array". So I figured I could write this and then have a cron job hit the file every few days or so in case things have changed. But I guess if there is no performance advantage then it probably won't be necessary, and I can just query the database every time I need to access the words.

  • Facebook Hacker Cup: Studious Student problem

    - by smartmuki
    During the qualification round, the following question was asked:

    You've been given a list of words to study and memorize. Being a diligent student of language and the arts, you've decided to not study them at all and instead make up pointless games based on them. One game you've come up with is to see how you can concatenate the words to generate the lexicographically lowest possible string.

    Input: As input for playing this game you will receive a text file containing an integer N, the number of word sets you need to play your game against. This will be followed by N word sets, each starting with an integer M, the number of words in the set, followed by M words. All tokens in the input will be separated by some whitespace and, aside from N and M, will consist entirely of lowercase letters.

    Output: Your submission should contain the lexicographically shortest strings for each corresponding word set, one per line and in order.

    Constraints: 1 <= N <= 100; 1 <= M <= 9; 1 <= all word lengths <= 10.

    Example input:

        5
        6 facebook hacker cup for studious students
        5 k duz q rc lvraw
        5 mybea zdr yubx xe dyroiy
        5 jibw ji jp bw jibw
        5 uiuy hopji li j dcyi

    Example output:

        cupfacebookforhackerstudentsstudious
        duzklvrawqrc
        dyroiymybeaxeyubxzdr
        bwjibwjibwjijp
        dcyihopjijliuiuy

    The program I wrote goes as follows:

        chomp($numberElements = <STDIN>);
        for (my $i = 0; $i < $numberElements; $i++) {
            my $string;
            chomp($string = <STDIN>);
            my @array = split(/\s+/, $string);
            my $number = shift @array;
            @sorted = sort @array;
            $sortedStr = join("", @sorted);
            push(@data, $sortedStr);
        }
        foreach (@data) {
            print "$_\n";
        }

    The program gives the correct output for the given test cases, but Facebook still shows it to be incorrect. Is there something wrong with the program?
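    The usual catch in this problem is that sorting the words plainly is not enough: in the fourth sample set, "jibw" has to come before "ji" even though "ji" sorts first. The standard trick is to order two words a and b by comparing the concatenations a+b and b+a. A Python sketch of that approach, reading the same token-based input format:

    ```python
    import sys
    from functools import cmp_to_key

    def smallest_concatenation(words):
        # a belongs before b whenever a+b < b+a; plain sorting can get
        # this wrong when one word is a prefix of another.
        def compare(a, b):
            return -1 if a + b < b + a else (1 if a + b > b + a else 0)
        return "".join(sorted(words, key=cmp_to_key(compare)))

    def main():
        tokens = sys.stdin.read().split()
        pos = 0
        n = int(tokens[pos]); pos += 1
        for _ in range(n):
            m = int(tokens[pos]); pos += 1
            words = tokens[pos:pos + m]; pos += m
            print(smallest_concatenation(words))

    if __name__ == "__main__":
        main()
    ```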
