Search Results

Search found 7755 results on 311 pages for 'word wrap'.

Page 98 of 311

  • Full Text Search in MSSQL2008 shows wrong display_item for Thai language

    - by ensecoz
    I am working with MSSQL 2008. My task is to investigate an issue where full-text search cannot find the right results for Thai. I have a table with FTS enabled on the column 'ItemName', which is nvarchar, and the catalog is created with the Thai language. Note that Thai is one of the languages that doesn't separate words with spaces, so '????', '???' and '????' are written together in a sentence as '???????????'. The table has many rows that include the word '????'; for example row #1 (ItemName: '???????????'). On the webpage I try to search for '????' but SQL Server cannot find it, so I investigate by running the following query in SQL Server: select * from sys.dm_fts_parser(N'"???????????"', 1054, 0, 0) ...to see how the words are broken. The first argument is the text to be broken; the second specifies that we're using the Thai word breaker (LCID 1054). Here is the result: row #1 (display_item: '????', source_item: '???????????'), row #2 (display_item: '????', source_item: '???????????'), row #3 (display_item: '??', source_item: '???????????'). Notice that the first and second rows display the wrong display_item: the characters shown are not even Thai characters. So where did those alien characters come from? I guess this is why I cannot search for '????': the word breaker is broken and is storing the wrong characters in the index. Please help!

    Read the article

  • Flex wordwrap issue with multiple text instances

    - by Craig Myles
    Hi, I have a scenario where I want to dynamically add words of text to a container so that it forms a paragraph of text which is wrapped neatly according to the size of the parent container. Each text element will have differing formatting, and will have differing user interaction options. For example, imagine the text " has just spoken out about ". Each word will be added to the container one at a time, at run time. The username in this case would be bold, and if clicked on will trigger an event. Same with the news article. The rest of the text is just plain text which, when clicked on, would do nothing. Now, I'm using Flex 3 so I don't have access to the fancy new text formatting tools. I've implemented a solution where the words are plotted onto a canvas, but this means that the words are wrapped at a particular y position (an arbitrary value I've chosen). When the container is resized, the words still wrap at that position which leaves lots of space. I thought about adding each text element to an Array Collection and using this as a datasource for a Tile List, but Tile Lists don't support variable column widths (in my limited knowledge) so each word would use the same amount of space which isn't ideal. Does anyone know how I can plot words onto a container so that I can retain formatting, events and word wrapping at paragraph level, even if the container is resized?

    Read the article

  • Java Bucket Sort on Strings

    - by Michael
    I can't figure out the best way to use bucket sort to sort a list of strings that will always be the same length. The algorithm would look like this:

        For the last character position down to the first:
            For each word in the list:
                Place the word into the appropriate bucket by its current character
            For each of the 26 buckets (arraylists):
                Copy every word back to the list

    I'm writing in Java and I'm using an ArrayList for the main list that stores the unsorted strings. The strings will be five characters long each. This is what I started; it just abruptly stops within the second for loop because I don't know what to do next, or whether I did the first part right.

        ArrayList<String> count = new ArrayList<String>(26);
        for (int i = wordlen; i > 0; i--) {
            for (int j = 0; i < myList.size(); i++)
                myList.get(j).charAt(i)
        }

    Thanks in advance.
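
    For reference, a minimal Java sketch of the LSD bucket/radix sort described above; the names (bucketSort, wordLen) and the sample words are illustrative, not from the question:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class BucketSortStrings {

            // Sorts same-length, lowercase words by bucketing on each character
            // position, from the last position down to the first.
            static void bucketSort(List<String> words, int wordLen) {
                for (int pos = wordLen - 1; pos >= 0; pos--) {
                    // One bucket per letter 'a'..'z'
                    List<List<String>> buckets = new ArrayList<>();
                    for (int b = 0; b < 26; b++) {
                        buckets.add(new ArrayList<>());
                    }
                    // Place each word into the bucket for its character at 'pos'
                    for (String w : words) {
                        buckets.get(w.charAt(pos) - 'a').add(w);
                    }
                    // Copy every word back to the list, bucket by bucket
                    words.clear();
                    for (List<String> bucket : buckets) {
                        words.addAll(bucket);
                    }
                }
            }

            public static void main(String[] args) {
                List<String> list = new ArrayList<>(Arrays.asList("melon", "apple", "grape"));
                bucketSort(list, 5);
                System.out.println(list); // [apple, grape, melon]
            }
        }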

    Read the article

  • Custom UILabel does not show text.

    - by Oscar
    Hi! I've made a custom UILabel class in which I draw a red line at the bottom of the frame. The red line is showing but I can't get the text to show.

        #import <UIKit/UIKit.h>

        @interface LetterLabel : UILabel {
        }
        @end

        #import "LetterLabel.h"

        @implementation LetterLabel

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
            }
            return self;
        }

        - (void)drawRect:(CGRect)rect {
            CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 2.0);
            CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 0.0, 0.0, 1.0);
            CGContextBeginPath(UIGraphicsGetCurrentContext());
            CGContextMoveToPoint(UIGraphicsGetCurrentContext(), 0.0, self.frame.size.height);
            CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), self.frame.size.width, self.frame.size.height);
            CGContextStrokePath(UIGraphicsGetCurrentContext());
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

        #import <UIKit/UIKit.h>
        #import "Word.h"

        @interface WordView : UIView {
            Word *gameWord;
        }
        @property (nonatomic, retain) Word *gameWord;
        @end

        @implementation WordView
        @synthesize gameWord;

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                self.backgroundColor = [UIColor whiteColor];
                LetterLabel *label = [[LetterLabel alloc] initWithFrame:CGRectMake(0, 0, 20, 30)];
                label.backgroundColor = [UIColor cyanColor];
                label.textAlignment = UITextAlignmentCenter;
                label.textColor = [UIColor blackColor];
                [label setText:@"t"];
                [self addSubview:label];
            }
            return self;
        }

        - (void)drawRect:(CGRect)rect {
            // Drawing code
        }

        - (void)dealloc {
            [gameWord release];
            [super dealloc];
        }

        @end

    Read the article

  • Which is better? OpenCyc or ConceptNet?

    - by Daniel Loureiro
    Hi, I'm doing an NLP project where I need to recognise concepts in sentences in order to find other similar concepts. I do this to infer word valences from a list I already have. I started with WordNet, but it gave many contradictory results; by contradictory results I mean word expansions with contradictory valences. So now I'm looking into ConceptNet and OpenCyc. I've already implemented ConceptNet and it was all very easy and I love it. The problem is that OpenCyc appears to have a much larger and more logically rigid database, which matters given how many "contradictions" I found in WordNet... but I wouldn't know, because I haven't tried it. Could someone tell me whether it's worth going through the (for me, considerable) effort of implementing OpenCyc, or is ConceptNet good enough to infer word valences? Are they that different? I'll be happy to explain further if needed; trying to keep it short for now. Thanks!

    Read the article

  • Conventions for the behavior of double or triple "click to select text" features?

    - by John Sullivan
    Almost any mature program that involves text implements "double click to select the word" and, in some cases, "triple click to select additional stuff like an entire line" as a feature. I find these features useful but they are often inconsistent between programs. Example - some programs' double clicks do not select the ending space after a word, but most do. Some recognize the - character as the end of a word, others do not. SO likes to select the entire paragraph as I write this post when I triple click it, VS web developer 2005 has no triple click support, and ultra-edit 32 will select one line upon triple clicking. We could come up with innumerable inconsistencies about how double and triple click pattern matching is implemented across programs. I am concerned about how to implement this behavior in my program if nobody else has achieved a convention about how the pattern matching should work. My question is, does a convention (conventions? maybe an MS or Linux convention?) exist that dictates how these features are supposed to behave to the end user? What, if any, are they?

    Read the article

  • JUnit confusion: use 'extend Testcase' or '@Test' ?

    - by Rabarberski
    I've found the proper use (or at least the documentation) of JUnit very confusing. This question serves both as a future reference and as a real question. If I've understood correctly, there are two main approaches to creating and running a JUnit test.

    Approach A: create a class that extends TestCase and start test methods with the word test. When running the class as a JUnit Test (in Eclipse), all methods starting with the word test are run automatically.

        import junit.framework.TestCase;

        public class DummyTestA extends TestCase {
            public void testSum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Approach B: create a 'normal' class and prepend a @Test annotation to the method. Note that you do NOT have to start the method name with the word test.

        import org.junit.*;
        import static org.junit.Assert.*;

        public class DummyTestB {
            @Test
            public void Sum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Mixing the two seems not to be a good idea; see e.g. this Stack Overflow question. Now, my question(s): What is the preferred approach, and when would you use one instead of the other? Approach B allows testing for exceptions by extending the @Test annotation, as in @Test(expected = ArithmeticException.class), but how do you test for exceptions when using approach A? When using approach A, you can group a number of test classes in a test suite:

        TestSuite suite = new TestSuite("All tests");
        suite.addTestSuite(DummyTestA.class);
        suite.addTestSuite(DummyTestAbis.class);

    But this can't be used with approach B (since each test class should subclass TestCase). What is the proper way to group tests for approach B?
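
    As a quick illustration of the two points the question raises: with approach A an expected exception is usually asserted with a try/fail/catch block, and approach B tests are usually grouped with JUnit 4's Suite runner. A rough sketch follows; DummyTestBbis is an invented class name, and the two classes would normally live in separate files:

        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        // Approach A: assert that an exception is thrown, without @Test(expected = ...)
        public class DummyTestAException extends junit.framework.TestCase {
            public void testDivisionByZeroThrows() {
                try {
                    int a = 5;
                    int b = 0;
                    int result = a / b;
                    fail("Expected an ArithmeticException, got " + result);
                } catch (ArithmeticException expected) {
                    // test passes
                }
            }
        }

        // Approach B: group annotation-based test classes with the Suite runner
        @RunWith(Suite.class)
        @Suite.SuiteClasses({ DummyTestB.class, DummyTestBbis.class })
        public class AllAnnotationStyleTests {
            // intentionally empty; the annotations do the work
        }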

    Read the article

  • Creating a smart text generator

    - by royrules22
    I'm doing this for fun (or, as 4chan says, "for teh lolz"), and if I learn something on the way, all the better. I took an AI course almost two years ago; I really enjoyed it but have managed to forget everything, so this is a way to refresh that. Anyway, I want to be able to generate text given a set of inputs. Basically this will read forum posts (or maybe Twitter tweets) and then generate a comment based on what it has learned. Now, the simplest way would be to use a Markov chain text generator, but I want something a little more complex than that, since a Markov chain basically only learns word order (which word is most likely to appear after word x, given the input text). I'm trying to see if there's something I can do to make it a little smarter. For example, I want it to do something like this:

        Learn from a large selection of posts in a message board, but don't weight it too much
        For each post:
            Learn from the other comments in that post and weigh these inputs higher
            Generate a comment and post it
            See what other users' reactions to your post were; if good, weight it positively so you make more
            posts similar to the one made, and vice versa if negative

    It's the weighting and learning-from-mistakes part that I'm not sure how to implement. I thought about artificial neural networks (mainly because I remember enjoying that chapter), but as far as I can tell they are mainly used to classify things (i.e. given a finite set of choices [x1...xn], which x is this given input?), not really to generate anything. I'm not even sure if this is possible, and if it is, what I should go about learning or figuring out. What algorithm is best suited for this? To those worried that I will use this as a bot to spam or provide bad answers to SO: I promise that I will not use this to provide (bad) advice or to spam for profit, and I definitely will not post its nonsensical thoughts on SO. I plan to use it for my own amusement. Thanks!
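
    For context, here is a bare-bones first-order Markov chain generator of the kind dismissed above, sketched in Java for illustration only; the weighting and feedback steps the poster asks about are not part of it, and the corpus and names are made up:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Random;

        public class MarkovSketch {
            public static void main(String[] args) {
                String corpus = "the cat sat on the mat the cat ate the rat";

                // Map each word to the list of words observed to follow it
                Map<String, List<String>> successors = new HashMap<>();
                String[] tokens = corpus.split("\\s+");
                for (int i = 0; i < tokens.length - 1; i++) {
                    successors.computeIfAbsent(tokens[i], k -> new ArrayList<>()).add(tokens[i + 1]);
                }

                // Generate: start at a word and repeatedly pick a random successor
                Random rng = new Random();
                StringBuilder out = new StringBuilder("the");
                String current = "the";
                for (int i = 0; i < 8; i++) {
                    List<String> next = successors.get(current);
                    if (next == null || next.isEmpty()) break;
                    current = next.get(rng.nextInt(next.size()));
                    out.append(' ').append(current);
                }
                System.out.println(out);
            }
        }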

    Read the article

  • Powerpoint displays a "can't start the application" error when an Excel Chart object is embedded in

    - by A9S6
    This is a very common problem when an Excel worksheet or chart is embedded into Word or PowerPoint. I am seeing it in both Word and PowerPoint, and the cause seems to be a COM add-in attached to Excel. The COM add-in is written in C# (.NET); see the attached images for the error dialogs. I debugged the add-in and found very strange behaviour: the OnConnection(...), OnDisconnection(...) etc. methods in the COM add-in work fine until I add an event handler to the code, i.e. handle Worksheet_SheetChange, SelectionChange or any similar event available in Excel. As soon as I add even a single event handler (though my code has several), Word and PowerPoint start complaining and do not activate the embedded object. In some posts on the internet, people have been asked to remove anti-virus add-ins for Office (none in my case), so this makes me believe the problem is somehow related to COM add-ins that are loaded when the host application activates the object. Does anyone have any idea what's happening here?

    Read the article

  • I'm looking for a way to evaluate reading rate in several languages

    - by i30817
    I have software that is page oriented instead of scrollbar oriented, so I can easily count the words, but I'd like a way to filter outliers and some default value for the text's language (which is known). The goal is to calculate the remaining reading time from the remaining text. I'm not sure what the best unit is to use: WPM (words per minute), from http://www.sfsu.edu/~testing/CalReadRate.htm, seems very fuzzy and human oriented, and besides, I don't know how many "words" remain in the text. So I came up with this: the user is reading the text; the total text size in characters is known; the user's position in the text is known; so the remaining characters to read are also known. If a language has a median word length of, say, 5 characters, then given a WPM speed for the user I could calculate the remaining time. Three things are needed for this:

        1) A table of the median word length per language.
        2) A table of the median WPM of a median user per language.
        3) A way to update the WPM to fit the user as data becomes available, filtering outliers.

    However, I can't find these tables, and I'm not sure how precise it is to assume a median word length.
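
    As a sanity check on the arithmetic, here is a small Java sketch of the estimate described above. The per-language constants are made-up placeholders (they are assumptions, not the real tables the poster is looking for), and a simple exponential moving average stands in for the outlier filtering:

        public class ReadingTimeEstimate {

            // Assumed defaults; real values would come from per-language tables.
            static final double MEDIAN_WORD_LENGTH = 5.0;   // characters per word (illustrative)
            static double wordsPerMinute = 250.0;            // starting WPM guess (illustrative)

            // Remaining time in minutes, given remaining characters in the text.
            static double remainingMinutes(long remainingChars) {
                double remainingWords = remainingChars / MEDIAN_WORD_LENGTH;
                return remainingWords / wordsPerMinute;
            }

            // Update the WPM estimate from one observation: charsRead read in 'minutes' minutes.
            // The exponential moving average damps outliers (pauses, skimming) somewhat.
            static void observe(long charsRead, double minutes) {
                double observedWpm = (charsRead / MEDIAN_WORD_LENGTH) / minutes;
                wordsPerMinute = 0.8 * wordsPerMinute + 0.2 * observedWpm;
            }

            public static void main(String[] args) {
                observe(1500, 1.2);                          // one page took 1.2 minutes
                System.out.printf("%.1f minutes left%n", remainingMinutes(42_000));
            }
        }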

    Read the article

  • Something confusing about FormsOf (Sql server Full-Text searching)

    - by AspOnMyNet
    Hi, I'm using SQL Server 2008.

    1) "A given <simple_term> within a <generation_term> will not match both nouns and verbs." If I understand that text correctly, then the query

        SELECT * FROM someTable WHERE CONTAINS ( *, ' FORMSOF ( INFLECTIONAL, park ) ' )

    should search for either nouns or verbs derived from the root word "park", but not both? Thus, out of two rows, one containing the noun parks and the other the verb parking, the above query should return just one of them? As it turns out, the query returns both rows, so are my assumptions a bit off, or is the quote wrong?

    2) From MSDN: "If freetext_string is enclosed in double quotation marks, a phrase match is instead performed; stemming and thesaurus are not performed." According to that quote, the following query shouldn't return rows containing the strings surfing (since the query doesn't perform stemming), surf (since it performs phrase matching rather than individual word matching) or surfing with suzy's sister (for both reasons), but it does. So it appears that even when freetext_string is enclosed in double quotation marks, stemming is still performed, while phrase matching is not:

        SELECT * FROM someTable WHERE FREETEXT( *, ' "surf sister" ' )

    So is the quote wrong, or...? Thanks

    Read the article

  • Why is my program printing out the null termination character?

    - by Tyler Pfaff
    When I run this, it will SOMETIMES print out a null termination character. Most of the time it will, and probably 1/5 times it will print just the characters. void cryptogram::Encrypt(){ cout<<"encrypt"<tempS){ len=tempS.length(); int a=0; for(int j=0;j if(j!=len){ //if the word still has more characters j++; a=0; }else{ //if the word is done being scanned cout<<" "; } } } } } } } So that's it and this is the corresponding EXPECTED output that is printed SOMETIMES xvk bkikhxlr wggbtfkj wiylekgbdhx wjjm hko wigbtubxt xvk iwhj uedjkm glctb gvrmdiwhj iebbdielmeggtbx ctb xvtmk gbtubxvk wjjdxdthgbtubodll khvxvk imkbfdik xt xvk bkudth whj gbtfdjk hko tgxdthm whj tggtbxehdxdkm ctb mxejkhxmibdzdhtltur whj pemxdik mxejdkm mxdh cok wbk wlmt gbkgctb cteb hko zdh cgvrmdikjeiwhj qdhkmdtlturzzkjdydtivkzdmxbrw zdh zdjjlkkjeiwhj w jtixtbdh kjeiwjzdhdmxbittgkbodxv mjme whj eimj This is what normally prints though xvkÈ bkikhxlrÈ wggbtfkjÈ wiylekgbdhxÈ wjjmÈ hkoÈ wigbtubxtÈ xvkÈ iwhjÈ uedjkmÈ glctbÈ gvrmdiwhjÈ iebbdielmeggtbxÈ ctbÈ xvtmkÈ gbtubxvkÈ wjjdxdthgbtubodllÈ khvxvkÈ imkbfdikÈ xtÈ xvkÈ bkudthÈ whjÈ gbtfdjkÈ hkoÈ tgxdthmÈ whjÈ tggtbxehdxdkmÈ ctbÈ mxejkhxmibdzdhtlturÈ whjÈ pemxdikÈ mxejdkmÈ mxdhÈ cokÈ wbkÈ wlmtÈ gbkgctbÈ ctebÈ hkoÈ zdhÈ cgvrmdikjeiwhjÈ qdhkmdtlturzzkjdydtivkzdmxbrwÈ zdhÈ zdjjlkkjeiwhjÈ wÈ jtixtbdhÈ kjeiwjzdhdmxbittgkbodxvÈ mjmeÈ whjÈ eimj or some variation of an odd character at the end of each word This is what the cryptogram array is filled with by the way wyijkcuvdpqlzhtgabmxefonrs

    Read the article

  • ANTLR : How to replace all characters defined as space with actual space

    - by Puneet Pawaia
    Hi all, my ANTLR grammar is as follows:

        LPARENTHESIS : ('(');
        RPARENTHESIS : (')');
        fragment CHARACTER : ('a'..'z'|'0'..'9'|);
        fragment QUOTE : ('"');
        fragment WILDCARD : ('*');
        fragment SPACE : (' '|'\n'|'\r'|'\t'|'\u000C'|';'|':'|',');
        WILD_STRING : (CHARACTER)* ( ('?') (CHARACTER)* )+ ;
        PREFIX_STRING : (CHARACTER)+ ( ('*') )+ ;
        WS : (SPACE) { $channel=HIDDEN; };
        PHRASE : (QUOTE)(LPARENTHESIS)?(WORD)(WILDCARD)?(RPARENTHESIS)?((SPACE)+(LPARENTHESIS)?(WORD)(WILDCARD)?(RPARENTHESIS)?)*(SPACE)+(QUOTE);
        WORD : (CHARACTER)+;

    What I would like to do is replace every character defined as SPACE with an actual space character inside PHRASE. Also, if possible, I would then like all consecutive spaces to be collapsed into a single space. Any help would be much appreciated. For some reason I am finding ANTLR hard to understand; are there any good tutorials out there?

    Read the article

  • Best practice for inserting large chunks of HTML into elements with Javascript?

    - by hamstar
    Hey guys. I'm building a web application (using Prototype) that requires adding large chunks of HTML to the DOM. Most of these are rows containing elements with all manner of attributes. Currently I keep a blank row of HTML in a variable:

        var blankRow = '<tr><td>'
            +'<a href="{LINK}" onclick="someFunc(\'{STRING}\');">{WORD}</a>'
            +'</td></tr>';

        function insertRow(o) {
            newRow = blankRow
                .sub('{LINK}',o.link)
                .sub('{STRING}',o.string)
                .sub('{WORD}',o.word);
            $('tbodyElem').insert( newRow );
        }

    Now that works all well and dandy, but is it the best practice? I have to update the code in blankRow whenever I update code on the page, so that the new elements being inserted match. It gets painful when I have 40-odd lines of HTML to go into blankRow, and then I have to escape it too. Is there an easier way? I was thinking of URL-encoding it and decoding it before insertion, but that would still mean a blankRow and lots of escaping. What would be nice would be a heredoc syntax a la PHP et al.:

        $blankRow = <<<EOF
        text
        text
        EOF;

    That would mean no escaping, but it would still need a blankRow. What do you do in this situation?

    Read the article

  • C# string syntax error

    - by Mesa
    I'm reading in data from a file and trying to write only the word immediately before 'back' in red text. For some reason it is displaying the word and then the word again backwards. Please help. Thank you.

        private void Form1_Load(object sender, EventArgs e)
        {
            Regex r = new Regex(" ");
            StreamReader sr = new StreamReader("KeyLogger.txt");
            string[] tokens = r.Split(sr.ReadToEnd());
            int index = 0;
            for(int i = 0; i <= tokens.Length; i++)
            {
                if (tokens[i].Equals("back"))
                {
                    //richTextBox1.Text+="TRUE";
                    richTextBox1.SelectionColor = Color.Red;
                    string myText;
                    if (tokens[i - 1].Equals("back"))
                        myText = "";
                    else
                        myText = tokens[i - 1];
                    richTextBox1.SelectedText = myText;
                    richTextBox1.Text += myText;
                }
                else
                {
                    //richTextBox1.Text += "NOOOO";
                }
                //index++;
                //richTextBox1.Text += index;
            }
        }

    Read the article

  • Navigating through code with keyboard shortcuts

    - by MarceloRamires
    I'm starting to feel the need to move quickly through code with keyboard shortcuts, to get faster to where I want to make changes (avoiding the mouse or long stretches of holding [up], [left], [right] and [down]). I'm already using some:

        [home] - first position in current line
        [end] - last position in current line
        [ctrl] + [home] - first line of the entire code
        [ctrl] + [end] - last line of the entire code
        [pageup] - same vertical position, one screen above
        [pagedown] - same vertical position, one screen below
        [ctrl] + [pageup] - first line in current screen
        [ctrl] + [pagedown] - last line in current screen
        [ctrl] + [left/right] - skip word by word

    What have you got? I use Visual Studio (but I'm open to any answer, as I may be using other editors soon). Note: I've searched Stack Overflow and didn't find a question with this content, nor a list like this for keyboard-driven code navigation. If it's a duplicate, I'm sorry for not finding it; I'm here with the best intentions. This question is NOT about just any shortcuts, and not only about Visual Studio; it's about moving through code with shortcuts. Answers that suit the question so far:

        [Ctrl] + [-] - jumps to last cursor position
        [Ctrl] + [F3] - jumps to next occurrence of the word the cursor is in
        [Shift] + [F3] - same as the above, backwards
        [F12] - goes to definition of the method/variable the cursor is in
        [Ctrl] + [ ] ] - jumps to matching brace and selects

    I'll add more as answers come in.

    Read the article

  • Python unicode search not giving correct answer

    - by user1318912
    I am trying to search for Hindi words, one per line in file-1, and find them in the lines of file-2. I have to print the line numbers along with the number of words found. This is the code:

        import codecs
        hypernyms = codecs.open("hindi_hypernym.txt", "r", "utf-8").readlines()
        words = codecs.open("hypernyms_en2hi.txt", "r", "utf-8").readlines()
        count_arr = []
        for counter, line in enumerate(hypernyms):
            count_arr.append(0)
            for word in words:
                if line.find(word) >= 0:
                    count_arr[counter] += 1
        for iterator, count in enumerate(count_arr):
            if count > 0:
                print iterator, ' ', count

    This is finding some words but ignoring others. The input files are:

    File-1: ???? ???????

    File-2: ???????, ????-???? ?????-???, ?????-???, ?????_???, ?????_??? ????_????, ????-????, ???????_???? ????-????

    This gives the output:

        0 1
        3 1

    Clearly, it is ignoring ??????? and searching for ???? only. I have tried other inputs as well; it only searches for one word. Any idea how to correct this?

    Read the article

  • Help thinking "Pythony"

    - by Josh
    I'm brand new to Python and trying to learn it by replicating the following C++ function in Python:

        // determines which words in a vector consist of the same letters
        // outputs the words with the same letters on the same line
        void equivalentWords(vector <string> words, ofstream & outFile) {
            outFile << "Equivalent words\n";

            // checkedWord is parallel to the words vector. It is
            // used to make sure each word is only displayed once.
            vector <bool> checkedWord (words.size(), false);

            for(int i = 0; i < words.size(); i++) {
                if (!checkedWord[i]){
                    outFile << " ";
                    for(int j = i; j < words.size(); j++){
                        if(equivalentWords(words[i], words[j], outFile)) {
                            outFile << words[j] << " ";
                            checkedWord[j] = true;
                        }
                    }
                    outFile << "\n";
                }
            }
        }

    In my Python code (below), rather than having a second vector, I have a list ("words") of lists, each holding a string, a sorted list of the chars in that string (because strings are immutable), and a bool (that tells whether the word has been checked yet). However, I can't figure out how to change a value as I iterate through the list.

        for word, s_word, checked in words:
            if not checked:
                for word1, s_word1, checked1 in words:
                    if s_word1 == s_word:
                        checked1 = True  # this doesn't work
                        print word1,
                print ""

    Any help on doing this, or on thinking more "Pythony", is appreciated.

    Read the article

  • How to write efficient code for extracting Noun phrases?

    - by Arun Abraham
    I am trying to extract phrases from POS-tagged text using rules such as the ones below (where - indicates "followed by"):

        1) NNP - NNP
        2) NNP - CC - NNP
        3) VP - NP
        etc.

    I have written code in this manner; can someone tell me how I can do it better?

        List<String> nounPhrases = new ArrayList<String>();
        for (List<HasWord> sentence : documentPreprocessor) {
            //System.out.println(sentence.toString());
            System.out.println(Sentence.listToString(sentence, false));
            List<TaggedWord> tSentence = tagger.tagSentence(sentence);
            String lastTag = null, lastWord = null;
            for (TaggedWord taggedWord : tSentence) {
                if (lastTag != null && taggedWord.tag().equalsIgnoreCase("NNP")
                        && lastTag.equalsIgnoreCase("NNP")) {
                    nounPhrases.add(taggedWord.word() + " " + lastWord);
                    //System.out.println(taggedWord.word() + " " + lastWord);
                }
                lastTag = taggedWord.tag();
                lastWord = taggedWord.word();
            }
        }

    In the above code I have only handled NNP followed by NNP; how can I generalise it so that I can add other rules too? I know there are libraries available for doing this, but I wanted to do it manually.
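
    One way to generalise beyond a hard-coded NNP-NNP check is to represent each rule as a tag sequence and slide it over the tagged sentence. Here is a self-contained sketch that uses plain arrays instead of the tagger's objects, so the names and data are illustrative rather than the tagging library's API:

        import java.util.ArrayList;
        import java.util.List;

        public class PhraseRules {

            // Returns the phrases whose tag sequence matches one of the rules.
            static List<String> extract(String[] words, String[] tags, List<String[]> rules) {
                List<String> phrases = new ArrayList<>();
                for (String[] rule : rules) {
                    for (int start = 0; start + rule.length <= tags.length; start++) {
                        boolean match = true;
                        for (int k = 0; k < rule.length; k++) {
                            if (!tags[start + k].equalsIgnoreCase(rule[k])) {
                                match = false;
                                break;
                            }
                        }
                        if (match) {
                            StringBuilder phrase = new StringBuilder(words[start]);
                            for (int k = 1; k < rule.length; k++) {
                                phrase.append(' ').append(words[start + k]);
                            }
                            phrases.add(phrase.toString());
                        }
                    }
                }
                return phrases;
            }

            public static void main(String[] args) {
                String[] words = { "John", "Smith", "and", "Mary", "Jones", "arrived" };
                String[] tags  = { "NNP", "NNP", "CC", "NNP", "NNP", "VBD" };
                List<String[]> rules = new ArrayList<>();
                rules.add(new String[] { "NNP", "NNP" });       // rule 1
                rules.add(new String[] { "NNP", "CC", "NNP" }); // rule 2
                System.out.println(extract(words, tags, rules));
                // [John Smith, Mary Jones, Smith and Mary]
            }
        }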

    Read the article

  • Converting contents of a byte array to wchar_t*

    - by Christopher MacKinnon
    I seem to be having an issue converting a byte array (containing the text from a Word document) to an LPTSTR (wchar_t *) object. Every time the code executes, I get a bunch of unwanted Unicode characters back. I figure it is because I am not making the proper calls somewhere, or not using the variables properly, but I'm not quite sure how to approach this. Hopefully someone here can guide me in the right direction. The first thing that happens is we call into C# code to open Microsoft Word and convert the text in the document into a byte array:

        byte document __gc[];
        document = word->ConvertToArray(filename);

    The contents of document are as follows: {84, 101, 115, 116, 32, 68, 111, 99, 117, 109, 101, 110, 116, 13, 10}, which ends up being the string "Test Document". Our next step is to allocate the memory to store the byte array in an LPTSTR variable:

        byte __pin * value;
        value = &document[0];
        LPTSTR image;
        image = (LPTSTR)malloc( document->Length + 1 );

    Once we execute the line where we start allocating the memory, our image variable gets filled with a bunch of unwanted Unicode characters: ????????????????? And then we do a memcpy to transfer over all of the data:

        memcpy(image,value,document->Length);

    Which just causes more unwanted Unicode characters to appear: ????????????????? I figure the issue we are having is either related to how we are storing the values in the byte array, or possibly to when we copy the data from the byte array to the LPTSTR variable. Any help explaining what I'm doing wrong, or anything to point me in the right direction, would be greatly appreciated.

    Read the article

  • Appending unique values only in a linked list in C

    - by LuckySlevin
        typedef struct child {
            int count;
            char word[100];
            inner_list *next;
        } child;

        typedef struct parent {
            char data[100];
            inner_list *head;
            int count;
            outer_list *next;
        } parent;

        void append(child **q, char num[100], int size)
        {
            child *temp, *r, *temp2, *temp3;
            parent *out = NULL;
            temp = *q;
            temp2 = *q;
            temp3 = *q;
            char *str;
            if (*q == NULL) {
                temp = (child *)malloc(sizeof(child));
                strcpy(temp->word, num);
                temp->count = size;
                temp->next = NULL;
                *q = temp;
            } else {
                temp = *q;
                while (temp->next != NULL) {
                    temp = temp->next;
                }
                r = (child *)malloc(sizeof(child));
                strcpy(r->word, num);
                r->count = size;
                r->next = NULL;
                temp->next = r;
            }
        }

    This is my append function, which I use to add an element to my child list. My problem is that it should only append unique values for each key string. That means, for the input:

        aaa bbb aaa ccc aaa bbb ccc aaa

    append should act like this: for the string aaa the list should be bbb->ccc (not bbb->ccc->bbb, since bbb is already there; if bbb comes more than once, only its count should be increased). For the string bbb the list should be aaa->ccc only, and for ccc the list should be aaa only. I hope I could make myself clear. Are there any ideas? Please ask for further info.
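
    The behaviour being described is essentially an insertion-ordered map from word to count. For comparison only (this is Java, not the C code being asked about, and the input data is taken from the example above), a sketch of that behaviour:

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class UniqueAppendSketch {
            public static void main(String[] args) {
                // Words seen after the key "aaa" in the input "aaa bbb aaa ccc aaa bbb ccc aaa"
                String[] followers = { "bbb", "ccc", "bbb" };

                // Insertion-ordered map: the first occurrence keeps its position,
                // repeats only bump the count - the behaviour wanted for the child list.
                Map<String, Integer> list = new LinkedHashMap<>();
                for (String w : followers) {
                    list.merge(w, 1, Integer::sum);
                }
                System.out.println(list); // {bbb=2, ccc=1}
            }
        }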

    Read the article

  • Optimization of Function with Dictionary and Zip()

    - by eWizardII
    Hello, I have the following function:

        def filetxt():
            word_freq = {}
            lvl1 = []
            lvl2 = []
            total_t = 0
            users = 0
            text = []
            for l in range(0,500):
                # Open File
                if os.path.exists("C:/Twitter/json/user_" + str(l) + ".json") == True:
                    with open("C:/Twitter/json/user_" + str(l) + ".json", "r") as f:
                        text_f = json.load(f)
                        users = users + 1
                        for i in range(len(text_f)):
                            text.append(text_f[str(i)]['text'])
                            total_t = total_t + 1
                else:
                    pass
            # Filter
            occ = 0
            import string
            for i in range(len(text)):
                s = text[i]  # Sample string
                a = re.findall(r'(RT)',s)
                b = re.findall(r'(@)',s)
                occ = len(a) + len(b) + occ
                s = s.encode('utf-8')
                out = s.translate(string.maketrans("",""), string.punctuation)
                # Create Wordlist/Dictionary
                word_list = text[i].lower().split(None)
                for word in word_list:
                    word_freq[word] = word_freq.get(word, 0) + 1
                keys = word_freq.keys()
                numbo = range(1,len(keys)+1)
                WList = ', '.join(keys)
                NList = str(numbo).strip('[]')
                WList = WList.split(", ")
                NList = NList.split(", ")
                W2N = dict(zip(WList, NList))
                for k in range (0,len(word_list)):
                    word_list[k] = W2N[word_list[k]]
                for i in range (0,len(word_list)-1):
                    lvl1.append(word_list[i])
                    lvl2.append(word_list[i+1])

    I have used the profiler and found that the greatest CPU time seems to be spent in the zip() function and in the join and split parts of the code. I'm looking to see whether there is anything I have overlooked that could clean the code up and make it more optimized, since the biggest lag seems to be in how I am working with the dictionaries and with zip(). Any help would be appreciated, thanks!

    Read the article

  • Pre-formatting text to prevent reflowing

    - by mattjn
    I've written a fairly simple script that will take elements (in this case, <p> elements are the main concern) and type out their contents like a typewriter, one character at a time. The problem is that as it types, when it reaches the edge of the container mid-word, it reflows the text and jumps to the next line (like word wrap in any text editor). This is, of course, expected behavior; however, I would like to pre-format the text so that this does not happen. I figure that inserting <br> before the word that will wrap would be the best solution, but I'm not quite sure of the best way to go about it that supports all font sizes and container widths while also keeping any HTML tags intact. I figure something involving a hidden <span> element, adding text to it gradually and checking its width against the container width, might be on the right track, but I'm not quite sure how to actually put this together. Any help or suggestions on better methods would be appreciated.

    Read the article

  • Php string handling tricks

    - by Dam
    Hi, my question: I need to get the 10 words before and the 10 words after a given keyword in a text, i.e. the extract should start 10 words before the keyword and end 10 words after it. Given keyword: "Twenty-three". The main trick: the content contains some HTML tags, and those tags need to be kept with their content. I need to display the words from 10 before to 10 after. The content is below:

        <div id="hpFeatureBoxInt"><h3><a href="/go/homepage/i/int/news/world/1/-/news/1/hi/world/europe/8592190.stm">Suicide bombings hit Moscow Metro</a></h3><p>Past suicide bombings in Moscow have been blamed on Islamist rebels At least 35 people have been killed after two female suicide bombers blew themselves up on Moscow Metro trains in the morning rush hour,<h2><span class="dy">Top News Story</span></h2> officials say.<img height="150" width="201" alt="Emergency services carry a body from a Metro station in Moscow (29 March 2010)" src="http://wwwimg.bbc.co.uk/feedengine/homepage/images/_47550689_moscowap203_201x150.jpg">Twenty-three died in the first blast at 0756 (0356 GMT) as a<a href="#"> train stood </a>at the central Lubyanka station, beneath the offices of the FSB intelligence agency.About 40 minutes later, a second explosion ripped through a train at Park Kultury, leaving another 12 dead.No-one has said they carried out the worst attack in the capital since 2004. </p><p id="fbilisten"><a href="/go/homepage/i/int/news/heading/-/news/">More from BBC News</a></p></div>

    Thank you

    Read the article

  • perl regular expressions substitution/replacement using variables with special characters

    - by user961627
    Okay, I've checked previous similar questions and I've been juggling different variations of quotemeta, but something's still not right. I have a line with a word ID and two words; the first is the wrong word, the second is the right one, and I'm using a regex to replace the wrong word with the right one.

        $line = "ANN20021015_0104_XML_16_21 A$xAS A$xASA";
        @splits = split("\t",$line);
        $wrong_word = quotemeta $splits[1];
        $right_word = quotemeta $splits[2];
        print $right_word."\n";
        print $wrong_word."\n";
        $line =~ s/$wrong_word\t/$right_word\t/g;
        print $line;

    What's wrong with what I'm doing?

    Edit: The problem is that I'm unable to retain the complete words; they get chopped off at the special characters. This code works perfectly fine for words without special characters. The output I need for the above example is

        ANN20021015_0104_XML_16_21 A$xASA A$xASA

    but what I get is

        ANN20021015_0104_XML_16_21 A A

    because of the $ character.

    Read the article
