Search Results

Search found 10026 results on 402 pages for 'word documents'.

Page 351 of 402

  • How to load JPG file into NSBitmapImageRep?

    - by Adam
    Objective-C / Cocoa: I need to load the image from a JPG file into a two-dimensional array so that I can access each pixel. I am trying (unsuccessfully) to load the image into an NSBitmapImageRep. I have tried several variations on the following two lines of code:

        NSString *filePath = [NSString stringWithFormat: @"%@%@", @"/Users/adam/Documents/phoneimages/", [outLabel stringValue]]; // this coming from a window control
        NSImageRep *controlBitmap = [[NSImageRep alloc] imageRepWithContentsOfFile:filePath];

    With the code shown, I get a runtime error: -[NSImageRep imageRepWithContentsOfFile:]: unrecognized selector sent to instance 0x100147070. I have tried replacing the second line of code with:

        NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
        NSBitmapImageRep *controlBitmap = [[NSBitmapImageRep alloc] initWithData:controlImage];

    But this yields a compiler error ('incompatible type') saying that initWithData: wants an NSData argument, not an NSImage. I have also tried various other ways to get this done, but all are unsuccessful due to either compiler or runtime errors. Can someone help me with this? I will eventually need to load some PNG files in the same way (so it would be nice to have a consistent technique for both). And if you know of an easier / simpler way to accomplish what I am trying to do (i.e., get the images into a two-dimensional array), rather than using NSBitmapImageRep, then please let me know! And by the way, I know the path is valid (confirmed with fileExistsAtPath) -- and the filename in outLabel is a file with a .jpg extension. Thanks for any help!

    Read the article

  • Is there a way to make changes to toggles in my .emacs file apply without re-starting Emacs?

    - by Vivi
    I want to be able to make changes to my .emacs file without having to reload Emacs. I found three questions which sort of answer what I am asking (you can find them here, here and here), but the problem is that the change I have just made is to a toggle, and as the comments to two of the answers (a1, a2) to those questions explain, the solutions given there (such as M-x reload-file or M-x eval-buffer) don't apply to toggles. I imagine there is a way of toggling the variable again with a command, but if there is a way to reload the whole .emacs and have all the toggles re-evaluated without having to specify them, I would prefer that. In any case, I would also appreciate it if someone told me how to toggle the value of a variable, so that if I have just changed one toggle I can do it with a command rather than restart Emacs just for that (I am new to Emacs). I don't know how useful this information is, but the change I applied was the following (which I got from this answer to another question):

        (setq skeleton-pair t)
        (setq skeleton-pair-on-word t)
        (global-set-key (kbd "[") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "(") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "{") 'skeleton-pair-insert-maybe)
        (global-set-key (kbd "<") 'skeleton-pair-insert-maybe)

    Edit: I included the above in .emacs and reloaded Emacs, so that the changes took effect. Then I commented all of it out and tried M-x load-file. This doesn't work. The suggestion below (C-x C-e, by PP) works if I am using it to evaluate the toggle the first time, but not when I want to undo it. I would like something that would evaluate the commenting out, if such a thing exists... Thanks :)

    Read the article

  • Finding Common Phrases in MS SQL TEXT Column

    - by regex
    Hello All,

    Short Desc: I'm curious to see if I can use SQL Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in an issue-tracking (ticketing) application. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two- to three-word phrases, so 9 - 20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application to do this, as my method will probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.
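    For orientation only (the asker prefers an out-of-the-box tool): the "two- to three-word phrase" counting being described is, at its core, n-gram frequency counting. A small illustrative sketch in Python, operating on rows already pulled out of the Notes column; the sample rows and the word-splitting regex are assumptions, not anything from the question:

        import re
        from collections import Counter

        def phrase_counts(notes_rows, n_values=(2, 3)):
            counts = Counter()
            for text in notes_rows:
                words = re.findall(r"[a-z']+", text.lower())
                for n in n_values:
                    for i in range(len(words) - n + 1):
                        counts[" ".join(words[i:i + n])] += 1
            return counts

        rows = [
            "reset the router and reboot the modem",
            "asked user to reset the router",
        ]
        print(phrase_counts(rows).most_common(3))  # [('reset the', 2), ('the router', 2), ...]

    The same idea is what a "terms"-style text-mining model computes at scale; the sketch is only meant to make the requested output concrete.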

    Read the article

  • Lucene (.NET) document structure and performance suggestions

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deeply into NumericField, but I don't think it's the right choice here. My problem is that query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms, so a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like:

        DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199)))

    Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-). Thanks, Josh. PS: I have already read over all the other Lucene performance discussions on here, on the Lucene wiki and at Lucid Imagination... I'm a bit further down the rabbit hole than that...

    Read the article

  • How do I overwrite a file currently being read by Python

    - by Brian
    Hi guys, I am not too sure of the best way to word this, but what I want to do is read a PDF file, make various modifications, and save the modified PDF over the original file. As of now, I am able to save the modified PDF to a separate file, but I am looking to replace the original, not create a new file. Here is my current code:

        from pyPdf import PdfFileWriter, PdfFileReader

        output = PdfFileWriter()
        input = PdfFileReader(file('input.pdf', 'rb'))
        blank = PdfFileReader(file('C:\\BLANK.pdf', 'rb'))

        # Copy the input pdf to the output.
        for page in range(int(input.getNumPages())):
            output.addPage(input.getPage(page))

        # Add a blank page if needed.
        if (input.getNumPages() % 2 != 0):
            output.addPage(blank.getPage(0))

        # Write the output to pdf.
        outputStream = file('input.pdf', 'wb')
        output.write(outputStream)
        outputStream.close()

    If I change outputStream to a different file name, it works fine; I just can't save over the input file because it is still being used. I have tried to .close() the stream, but it was giving me errors as well. I have a feeling this has a fairly simple solution, I just haven't had any luck finding it. Thanks!
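    A common way around the "file is still in use" problem, independent of pyPdf, is to write the modified PDF to a temporary file first and only swap it over the original once everything has been written and closed. A minimal sketch of that pattern (the helper name overwrite_safely and the write_func callback are made up for illustration; os.replace assumes Python 3.3+, on older versions os.rename plays a similar role):

        import os
        import tempfile

        def overwrite_safely(path, write_func):
            # Write to a temp file in the same directory so the final
            # replace stays on the same filesystem.
            fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
            try:
                with os.fdopen(fd, "wb") as tmp:
                    write_func(tmp)          # e.g. output.write(tmp) for a PdfFileWriter
                os.replace(tmp_path, path)   # swap the new file in over the original
            except Exception:
                os.remove(tmp_path)
                raise

    In the question's code, write_func would be something like output.write, called only after the reader has finished pulling pages out of input.pdf.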

    Read the article

  • Spam and dirty-word filtering of comment posts in Python (Django)

    - by sintaloo
    Hi All, my basic question is how to filter spam and dirty words in a comment post system under Python (Django). I have a collection of phrases (approximately 3000 phrases) to be filtered. Question (1): are there any existing open source Python (or Django) packages/modules/plugins which can handle this job? I know there is one called Akismet, but from what I understand it will not solve my problem: Akismet is just a web service and filters against the word dictionary defined by Akismet, whereas I have my own collection of phrases. Please correct me if I am wrong. Question (2): if there is no such open source package I can use, how should I create my own? The only thing I can think of is to use a regular expression and join all the phrases with 'or', but with 3000 phrases I don't think that will perform well enough to filter every comment post. Any suggestions on where I should start? Thank you very much for your help and time.
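    For what it's worth, the brute-force approach described in question (2) can be written compactly, and compiling the alternation once (rather than per comment) already helps; whether 3000 phrases is fast enough would have to be measured against real traffic. A minimal sketch in plain Python, with no Django-specific code, and a stand-in phrase list invented for illustration:

        import re

        PHRASES = ["buy now", "cheap pills", "free money"]  # stand-in for the real 3000-phrase list

        # One compiled pattern: each phrase escaped, joined with '|', matched
        # case-insensitively and only at word boundaries so e.g. "class" does
        # not trip a filter aimed at a shorter embedded word.
        _PATTERN = re.compile(
            r"\b(?:" + "|".join(re.escape(p) for p in PHRASES) + r")\b",
            re.IGNORECASE,
        )

        def contains_blocked_phrase(comment_text):
            return _PATTERN.search(comment_text) is not None

        print(contains_blocked_phrase("BUY NOW while stocks last"))  # True

    In a Django project this check would typically live in the comment form's clean() method or in a pre-save signal; that wiring is omitted here.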

    Read the article

  • .NET 4.0 Forms Authentication change?

    - by James Koch
    I'm seeing some new behavior in Forms Authentication after upgrading to .NET 4.0. This occurs only on IIS 6, not on 7. Background: in web.config, we configure Forms Authentication, and then use <authorization> rules to globally deny anonymous/unauthenticated users access. Then we explicitly allow access to a login.aspx page using a <location> element. Generally, this works fine, as it did when we were on .NET 2.0 (3.5). The issue only occurs when we visit the root path of the site, i.e. "http://myserver/". Our default document is configured in IIS to be login.aspx. Under .NET 4.0, upon visiting that URL, we're redirected to "http://myserver/login.aspx?ReturnUrl=/". If you log in from here, you're logged in and returned back to the login page (yuck). Just wanted to post this here to see if anyone else is experiencing this. It's not listed in any "breaking changes" documentation I've been able to find. Either I'm missing something, or the UrlAuthorization module has changed and is no longer "smart" about IIS default documents.

    Read the article

  • Oracle: why does creating a trigger fail when there is a field called timestamp?

    - by Omar Kooheji
    I've just wasted the past two hours of my life trying to create a table with an auto-incrementing primary key based on this tutorial. The tutorial is great; the issue I've been encountering is that the CREATE TRIGGER fails if I have a column of type TIMESTAMP and a column called timeStamp in the same table... Why doesn't Oracle flag this as an issue when I create the table? Here is the sequence of commands I enter.

    Creating the table:

        CREATE TABLE myTable (
            id NUMBER PRIMARY KEY,
            field1 TIMESTAMP(6),
            timeStamp NUMBER
        );

    Creating the sequence:

        CREATE SEQUENCE test_sequence START WITH 1 INCREMENT BY 1;

    Creating the trigger:

        CREATE OR REPLACE TRIGGER test_trigger
        BEFORE INSERT ON myTable
        REFERENCING NEW AS NEW
        FOR EACH ROW
        BEGIN
            SELECT test_sequence.nextval INTO :NEW.ID FROM dual;
        END;
        /

    Here is the error message I get:

        ORA-06552: PL/SQL: Compilation unit analysis terminated
        ORA-06553: PLS-320: the declaration of the type of this expression is incomplete or malformed

    Any combination that does not have the two lines with the word "timestamp" in them works fine. I would have thought the syntax would be enough to differentiate between the keyword and a column name. As I've said, I don't understand why the table is created fine but Oracle falls over when I try to create the trigger...

    CLARIFICATION: I know that the issue is that there is a column called timestamp, which may or may not be a keyword. My issue is why it barfed when I tried to create the trigger and not when I created the table; I would have at least expected a warning. That said, having used Oracle for a few hours, it seems a lot less verbose in its error reporting, though maybe that's just because I'm using the Express edition. If this is a bug in Oracle, how would someone who doesn't have a support contract go about reporting it? I'm just playing around with the Express edition because I have to migrate some code from MySQL to Oracle.

    Read the article

  • Compiling and using the NTL C++ library on Windows

    - by Martin Lauridsen
    Hi there, I have compiled NTL, the infinite-precision integer arithmetic library for C++, using Microsoft Visual Studio 2008. I did as explained on this site, using the Visual Studio interface rather than the command prompt. Actually I would rather do it from the command prompt, but I was not sure how to. Anyhow, I got the library compiled, and I now want to compile a program using the library, from the command prompt. The program I am trying to compile has been tested on a Linux system, where I compile it with the following:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include mpqs.cpp main.cpp -o main -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm

    Never mind the GMP stuff; I don't have that installed on Windows. It is purely an optional thing that makes NTL run faster. Anyhow, this works fine on Linux. Now on Windows I write the following:

        cl /EHsc /I D:\Downloads\WinNTL-5_5_2\include mpqs.cpp main.cpp /link /LIBPATH:"D:\Documents\Visual Studio 2008\Projects\ntl\Debug"

    But this results in the following errors:

        mpqs.cpp
        mpqs.cpp(38) : error C2039: 'find_smooth_vals' : is not a member of 'QS'
                d:\desktop\qs\mpqs.h(12) : see declaration of 'QS'
        mpqs.cpp(41) : error C2065: 'M' : undeclared identifier
        mpqs.cpp(41) : error C2065: 'n' : undeclared identifier
        mpqs.cpp(42) : error C2065: 'sieve_table' : undeclared identifier
        mpqs.cpp(42) : error C2228: left of '.size' must have class/struct/union type is ''unknown-type''
        mpqs.cpp(43) : error C2065: 'sieve_table' : undeclared identifier
        mpqs.cpp(44) : error C2065: 'qx_table' : undeclared identifier
        mpqs.cpp(44) : error C3861: 'test_smoothness': identifier not found
        mpqs.cpp(45) : error C2065: 'smooth_indices' : undeclared identifier
        mpqs.cpp(45) : error C2228: left of '.push_back' must have class/struct/union type is ''unknown-type''
        main.cpp
        Generating Code...

    It is as if my mpqs.h file is not included in the compilation process. Also, I don't understand why it complains about .push_back() on what should be a vector type. Help is much appreciated!

    Read the article

  • Template operator linker error

    - by Dani
    I have a linker error I've reduced to a simple example. The build output is:

        debug/main.o: In function `main':
        C:\Users\Dani\Documents\Projects\Test1/main.cpp:5: undefined reference to `log& log::operator<< (char const (&) [6])'
        collect2: ld returned 1 exit status

    It looks like the linker ignores the definition in log.cpp. I also can't put the definition in log.h, because I include the file a lot of times and it complains about redefinitions.

    main.cpp:

        #include "log.h"

        int main()
        {
            log() << "hello";
            return 0;
        }

    log.h:

        #ifndef LOG_H
        #define LOG_H

        class log
        {
        public:
            log();
            template<typename T> log &operator <<(T &t);
        };

        #endif // LOG_H

    log.cpp:

        #include "log.h"
        #include <iostream>

        log::log()
        {
        }

        template<typename T>
        log &log::operator <<(T &t)
        {
            std::cout << t << std::endl;
            return *this;
        }

    Read the article

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "premature optimization" is the buzzword of the day. For some reason, iPhone programmers in particular seem to think of avoiding premature optimization as a proactive goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate. For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness, and appalling to disciplined computer science. But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling and other optimization techniques that are now considered unnecessary. What do you think? Are we at the point now where deciding between an O(n^n) and an O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other people's opinions on the topic.
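    Purely as a concrete reference point for the O(n) vs O(n*n) part of the question, here is a small illustration (Python, invented data, not taken from any of the linked discussions): checking membership against a list inside a loop is quadratic, while building a set first makes the same work roughly linear.

        def common_items_quadratic(a, b):
            # O(len(a) * len(b)): every "in b" is a linear scan of the list
            return [x for x in a if x in b]

        def common_items_linear(a, b):
            # O(len(a) + len(b)): one pass to build the set, one to probe it
            b_set = set(b)
            return [x for x in a if x in b_set]

        a = list(range(2000))
        b = list(range(1000, 3000))
        assert common_items_quadratic(a, b) == common_items_linear(a, b)

    The two functions return the same result; only the growth rate differs, which is exactly the distinction the question argues should not be waved away as "premature optimization".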

    Read the article

  • LINQ to XML Query Help

    - by cw
    Hello, I am trying to get a "diff" of two XML documents and end up with a list of elements that are different. Below is the XML; I was wondering if anyone can assist. In the case below, I want the list to contain the "file2.xml" element and the "file3.xml" element, because they are both either changed or new relative to the first XML document. Thanks in advance!

        <?xml version="1.0" encoding="utf-8" ?>
        <versioninfo>
          <files>
            <file version="1.0">file1.xml</file>
            <file version="1.0">file2.xml</file>
          </files>
        </versioninfo>

        <?xml version="1.0" encoding="utf-8" ?>
        <versioninfo>
          <files>
            <file version="1.0">file1.xml</file>
            <file version="1.1">file2.xml</file>
            <file version="1.0">file3.xml</file>
          </files>
        </versioninfo>

    Read the article

  • How to search a PDF in Acrobat Reader AND jump to a certain page via parameter?

    - by agez
    Hi, we are using Lucene within a web application to search a great number of PDF documents. The workflow is like this:

      1. A user enters a search term.
      2. A list of search results is presented to the user. Each search result represents one PDF document and shows the user on which page the search term was found. Each of these pages is represented as a hyperlink.
      3. If the user now clicks on such a hyperlink, he jumps directly to that page. But now the user has the problem that the search term isn't highlighted on the page, so he has to find the search term on the page on his own.

    What we wanted is a way to highlight the search term on the specific page in the PDF. The open parameters for Acrobat Reader allow for either searching a PDF document (with hit highlighting) OR jumping to a specific page, but the combination of both parameters, which is what we would need, doesn't work. Does anyone have an idea how jumping to a page and highlighting a search term in a PDF document could work together? I had a look at the Acrobat SDK but don't see how we can use it (it's terribly documented). Cheers, Helmut

    Read the article

  • How to use Profile in ASP.NET?

    - by Phsika
    I am trying to learn ASP.NET profile management. I added the XML below (FirstName, LastName and the others), but I cannot write to Profile: if I try to use the Profile property, my editor flags Profile with this error:

        Error 1 The name 'Profile' does not exist in the current context C:\Documents and Settings\ykaratoprak\Desktop\Security\WebApp_profile\WebApp_profile\Default.aspx.cs 18 13 WebApp_profile

    How can I do that?

        <authentication mode="Windows"/>
        <profile>
          <properties>
            <add name="FirstName"/>
            <add name="LastName"/>
            <add name="Age"/>
            <add name="City"/>
          </properties>
        </profile>

        protected void Button1_Click(object sender, System.EventArgs e)
        {
            Profile.FirstName = TextBox1.Text;
            Profile.LastName = TextBox2.Text;
            Profile.Age = TextBox3.Text;
            Profile.City = TextBox4.Text;
            Label1.Text = "Profile stored successfully!<br />" +
                "<br />First Name: " + Profile.FirstName +
                "<br />Last Name: " + Profile.LastName +
                "<br />Age: " + Profile.Age +
                "<br />City: " + Profile.City;
        }

    Read the article

  • Where does IE store the ASP.NET_SessionId cookie?

    - by scherand
    I am a bit baffled here; using IE7, ASP.NET 2.0 and Cassini (the VS built-in web server; although the same thing seems to be true for "real" applications deployed in IIS), I am looking for the session id cookie. My test page shows a session id (by printing out Session.SessionId) and Response.Cookies.Keys contains ASP.NET_SessionId. So far so good. But I cannot find the cookie in IE's cookie store! Nor does "remove all cookies" reset the session (as it does in FF)... So where - I am tempted to write that four-letter word - does IE store that bloody cookie? Or am I missing something? By the way, there is no hidden field with a session id either, as far as I can see. If I check in FF there is a cookie called ASP.NET_SessionId, as I would expect. And as mentioned above, deleting that cookie does start a new session, as I would expect. Can anybody imagine what is happening here?

    Read the article

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF), audio (e.g. MP3) or video (e.g. AVI). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table:

        DataSource
            Id (PK)
            Filename

    For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages", an example for audio would be "bit rate", and an example for video would be "duration". Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way:

        DataSourceAttribute
            Id (PK)
            DataSourceId (FK)
            Name
            Value

    Thus, I would have records like these:

        DataSource->Id = 1
        DataSource->Filename = 'mydoc.pdf'

        DataSource->Id = 2
        DataSource->Filename = 'mysong.mp3'

        DataSource->Id = 3
        DataSource->Filename = 'myvideo.avi'

        DataSourceAttribute->Id = 1
        DataSourceAttribute->DataSourceId = 1
        DataSourceAttribute->Name = 'TotalPages'
        DataSourceAttribute->Value = '10'

        DataSourceAttribute->Id = 2
        DataSourceAttribute->DataSourceId = 2
        DataSourceAttribute->Name = 'BitRate'
        DataSourceAttribute->Value = '16'

        DataSourceAttribute->Id = 3
        DataSourceAttribute->DataSourceId = 3
        DataSourceAttribute->Name = 'Duration'
        DataSourceAttribute->Value = '1:32'

    My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages:

        Filename, TotalPages
        'mydoc.pdf', '10'
        'myotherdoc.pdf', '23'
        ...

    The JOINs needed to produce the above result are just too costly. How should I address this problem?

    Read the article

  • Which Java web application framework to use?

    - by frohiky
    One of the main products of my company is an Oracle Forms (and Reports) based application that "needs" to be re-written in another technology. Why? Users want a richer interface experience, and we want, preferably, to reduce costs with an open source application server. For this (HUGE) project, we intend to use a Java web application framework. Keep these points in mind:

      We have:
      - hundreds of tables in our database (the ORM must be as flexible as possible);
      - some logic which is (and will still be) based on PL/SQL procedures/functions/packages;
      - a lot of CRUDs (the application itself is of a considerable size);
      - a demand to work with/generate documents and workflows;
      - an intranet-based user environment.

      We want:
      - to offer a RIA interface experience;
      - to use (if possible) an open source app server;
      - a rapid (as possible) development framework;
      - a somewhat mature framework with a "wise" roadmap (and considerable community support);
      - an MVC approach combined with JS or GWT widgets (e.g. Vaadin or SmartGWT).

    Well, in the past weeks I've read a lot of posts and Q&As on stackoverflow, and much more: Wicket, JSF, Tapestry, Grails, GWT, Struts2, Play, Spring, Seam, Echo, ... the list goes on! I've even researched Alfresco..! The obvious question: which one to use? At this time, any insight, recommendation, shared experience or advice will be more than welcome!

    Read the article

  • Hierarchical object model with property inheritance and event bubbling?

    - by Winston Fassett
    I'm writing a document-based client application and I need a DOM- or WPF-like, but non-visual, model that:

      - is a tree composed of elements;
      - can accept an unlimited number of custom properties that get/set any CLR type, including collections;
      - lets properties inherit their values from their parent;
      - lets properties inherit their default values from an ancestor;
      - lets properties be derived/calculated from other properties, ancestors, or descendants;
      - supports event bubbling / tunneling.

    There will be a core set of properties, but other plugins may add their own or even create custom documents. It must support full inspection by the owning document in order to persist the tree and attributes in an XML format. I realize that's a tall order, but I was really hoping there would be something out there to help me get started. Unfortunately WPF DependencyObjects are too closed, proprietary, and coupled to WPF to be of any use as a document model. My needs also have a strong resemblance to the HTML DOM, but I haven't been able to find any clean DOM implementations that could be decoupled from HTML or ported to .NET. My current platform is .NET/C#, but if anyone knows of anything that might be useful for inspiration or embedding, regardless of the platform, I'd love to know.
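    The "inherit their values from their parent" requirement, independent of any particular framework, boils down to walking up the element tree until a value or default is found. A minimal, language-agnostic sketch of that lookup (written in Python purely for illustration; the class, field and property names are invented, not part of the question):

        class Element:
            def __init__(self, parent=None):
                self.parent = parent
                self._local = {}       # values set directly on this element
                self._defaults = {}    # defaults this element provides to descendants

            def set(self, name, value):
                self._local[name] = value

            def get(self, name):
                # 1) own or nearest ancestor's value, 2) nearest ancestor default
                node = self
                while node is not None:
                    if name in node._local:
                        return node._local[name]
                    node = node.parent
                node = self
                while node is not None:
                    if name in node._defaults:
                        return node._defaults[name]
                    node = node.parent
                raise KeyError(name)

        root = Element()
        root.set("FontSize", 12)
        child = Element(parent=root)
        print(child.get("FontSize"))  # 12, inherited from the parent

    This is roughly what WPF's dependency property value resolution does internally; derived/calculated values and event bubbling would layer on top of the same parent chain.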

    Read the article

  • Recognizing terminals in a CFG production that were not previously defined as tokens

    - by kmels
    I'm making a generator of LL(1) parsers; my input is a Coco/R language specification. I've already got a scanner generator for that input. Suppose I've got the following specification:

        COMPILER 1.
        CHARACTERS
          digit="0123456789".
        TOKENS
          number = digit{digit}.
          decnumber = digit{digit}"."digit{digit}.
        PRODUCTIONS
          Expression = Term{"+"Term|"-"Term}.
          Term = Factor{"*"Factor|"/"Factor}.
          Factor = ["-"](Number|"("Expression")").
          Number = (number|decnumber).
        END 1.

    So, if the parser generated by this grammar receives the word "1+1", it would be accepted, i.e. a parse tree would be found. My question is: the character "+" was never defined as a token, but it appears in the non-terminal "Expression". How should my generated scanner recognize it? It would not recognize it as a token. Is this a valid input then? Should I add this terminal to TOKENS and then have the scanner run an error routine to skip it? How do typical language specifications handle this?

    Read the article

  • What's your preferred pointer declaration style, and why?

    - by Owen
    I know this is about as bad as it gets for "religious" issues, as Jeff calls them. But I want to know why the people who disagree with me on this do so, and hear their justification for their horrific style. I googled for a while and couldn't find a style guide talking about this. So here's how I feel pointers (and references) should be declared:

        int* pointer = NULL;
        int& ref = *pointer;
        int*& pointer_ref = pointer;

    The asterisk or ampersand goes with the type, because it modifies the type of the variable being declared. EDIT: I hate to keep repeating the word, but when I say it modifies the type I'm speaking semantically. "int* something;" would translate into English as something like "I declare something, which is a pointer to an integer." The "pointer" goes along with the "integer" much more so than it does with the "something." In contrast, the other uses of the ampersand and asterisk, as address-of and dereferencing operators, act on a variable. Here are the other two styles (maybe there are more but I really hope not):

        int *ugly_but_common;
        int * uglier_but_fortunately_less_common;

    Why? Really, why? I can never think of a case where the second is appropriate, and the first only suitable perhaps with something like:

        int *hag, *beast;

    But come now... multiple variable declarations on one line is kind of an ugly form in itself already.

    Read the article

  • Truncate portions of a string to limit the whole string's length in Ruby

    - by Horace Loeb
    Suppose you want to generate dynamic page titles that look like this:

        "It was all a dream, I used to read word up magazine" from "Juicy" by The Notorious B.I.G

    i.e., "LYRICS" from "SONG_NAME" by ARTIST. However, your title can only be 69 characters total, and this template will sometimes generate titles that are longer. One strategy for solving this problem is to truncate the entire string to 69 characters. However, a better approach is to truncate the less important parts of the string first. I.e., your algorithm might look something like this:

      1. Truncate the lyrics until the entire string is <= 69 characters.
      2. If you still need to truncate, truncate the artist name until the entire string is <= 69 characters.
      3. If you still need to truncate, truncate the song name until the entire string is <= 69 characters.
      4. If all else fails, truncate the entire string to 69 characters.

    Ideally the algorithm would also limit the amount each part of the string could be truncated. E.g., step 1 would really be "Truncate the lyrics to a minimum of 10 characters until the entire string is <= 69 characters." Since this is such a common situation, I was wondering if someone has a library or code snippet that can take care of it.
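    The priority-ordered truncation described above is short enough to express directly even without a library. Here is a language-agnostic sketch of the idea (written in Python only for illustration, since the question asks about Ruby; the part order, the 10-character minimum for lyrics, and the 69-character limit come from the question, while the other minimums and the function name are assumptions):

        def build_title(lyrics, song, artist, limit=69, minimums=(10, 5, 5)):
            # Parts in order of least to most important, each with a floor it
            # may not be truncated below.
            parts = [lyrics, artist, song]
            template = '"{0}" from "{2}" by {1}'

            def render(p):
                return template.format(*p)

            for i, floor in enumerate(minimums):
                while len(render(parts)) > limit and len(parts[i]) > floor:
                    parts[i] = parts[i][:-1]
            return render(parts)[:limit]  # last resort: hard cut

        title = build_title(
            "It was all a dream, I used to read word up magazine",
            "Juicy",
            "The Notorious B.I.G",
        )
        print(len(title) <= 69, title)

    A production version would likely trim at word boundaries and append an ellipsis, but the shrink-in-priority-order loop is the core of it.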

    Read the article

  • Java: Getting a list of words from a Trie

    - by adam08
    I'm looking to use the following code not to check whether there is a matching word in the Trie, but to return a list of all words beginning with the prefix inputted by the user. Can someone point me in the right direction? I can't get it working at all.....

        public boolean search(String s)
        {
            Node current = root;
            System.out.println("\nSearching for string: " + s);

            while (current != null)
            {
                for (int i = 0; i < s.length(); i++)
                {
                    if (current.child[(int) (s.charAt(i) - 'a')] == null)
                    {
                        System.out.println("Cannot find string: " + s);
                        return false;
                    }
                    else
                    {
                        current = current.child[(int) (s.charAt(i) - 'a')];
                        System.out.println("Found character: " + current.content);
                    }
                }
                // If we are here, the string exists.
                // But to ensure unwanted substrings are not found:
                if (current.marker == true)
                {
                    System.out.println("Found string: " + s);
                    return true;
                }
                else
                {
                    System.out.println("Cannot find string: " + s + " (only present as a substring)");
                    return false;
                }
            }
            return false;
        }
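    For reference, the usual approach is in two steps: walk down the trie to the node that represents the prefix, then depth-first collect every word below that node. The question's code is Java; the sketch below shows the same idea in Python purely for illustration, using a dict-based node rather than the fixed child array above (all names here are invented):

        class TrieNode:
            def __init__(self):
                self.children = {}   # char -> TrieNode
                self.is_word = False

        def insert(root, word):
            node = root
            for ch in word:
                node = node.children.setdefault(ch, TrieNode())
            node.is_word = True

        def words_with_prefix(root, prefix):
            # Step 1: walk to the node that represents the prefix.
            node = root
            for ch in prefix:
                if ch not in node.children:
                    return []                # no word starts with this prefix
                node = node.children[ch]
            # Step 2: depth-first search below that node, collecting words.
            results = []
            def collect(n, so_far):
                if n.is_word:
                    results.append(so_far)
                for ch, child in n.children.items():
                    collect(child, so_far + ch)
            collect(node, prefix)
            return results

        root = TrieNode()
        for w in ["car", "cart", "carbon", "dog"]:
            insert(root, w)
        print(words_with_prefix(root, "car"))  # ['car', 'cart', 'carbon']

    In the Java version, the equivalent change is to stop returning a boolean from search, and instead recurse over current.child[] from the prefix node, accumulating strings in a List.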

    Read the article

  • How to store dynamic references to parts of a text

    - by Antoine L
    In fact, my question concerns an algorithm. I need to be able to attach annotations to certain parts of a text, like a word or a group of words. The first thing that came to me to do so is to store the position of this part (its indexes) in the text. For instance, in the text 'The quick brown fox jumps over the lazy dog', I'd like to attach an annotation to 'quick brown fox', so the indexes of the annotation would be 4 - 18. But since the text is editable (other annotations could provoke a modification from the text's author), the annotated part is likely to move (the indexes could change). In fact, I don't know how to update the indexes of the annotated part. What if the text becomes 'Everyday, the quick brown fox jumps over the lazy dog'? I guess I have to watch every change of the text in the front-end application? The front-end part of the application will be HTML with Javascript. I will be using PHP to develop the back-end part, and every text and annotation will be stored in a database.
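    The bookkeeping being described is usually done by listening for edit events and shifting every stored annotation range by the size of the change. A minimal sketch of that adjustment rule (Python, purely illustrative; in the application described it would live in the JavaScript front end or the PHP back end, and the function name and example numbers are assumptions):

        def adjust_span(start, end, edit_pos, delta):
            """Shift an annotation's inclusive (start, end) indexes after `delta`
            characters are inserted (delta > 0) or deleted (delta < 0) at `edit_pos`."""
            if edit_pos <= start:
                return start + delta, end + delta   # change before the span: slide it
            if edit_pos <= end:
                return start, end + delta           # change inside the span: resize it
            return start, end                       # change after the span: untouched

        # 'quick brown fox' sits at indexes 4-18 in the original sentence;
        # inserting 10 characters at the start of the text shifts it right.
        print(adjust_span(4, 18, 0, 10))   # (14, 28)
        print(adjust_span(4, 18, 30, -5))  # (4, 18)

    The alternative to index bookkeeping is to anchor annotations to content (e.g. store the annotated substring plus some surrounding context and re-locate it after each edit), which survives edits better but is more work to implement.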

    Read the article

  • 'Forward-Compatible' Program Design

    - by Jeffrey Kern
    The majority of the questions I've asked here on Stack Overflow so far have been about how to implement individual concepts and techniques towards developing a software-based NES clone in the XNA environment. The small samples that I've thrown together on my PC work relatively well, and everything. Except I've hit a brick wall: how do I merge all of these samples together? Having a proof of concept is amazing, except when you need it to go beyond just that. I now have samples strewn about that I'm trying to merge, some of them incomplete. And now I'm stuck in the chicken-and-egg situation where I would like to incorporate these samples together, to make sure they work, but I cannot without test data. And I don't have tools to create test data, because they'd need to be based on the individual pieces that need to be put together. In my mind, I'm having nightmares about circular references. For my sample data, I am hoping to save it in XML and write a specification, and then make sample data by hand, but I'm too paranoid about manually creating an XML file full of incorrect data and blaming it on my code, or vice versa. It doesn't help that the end result of my work is graphics-oriented, which makes it interesting to think about how a graphic on the screen can be represented as XML nodes. I guess my question is this: what design patterns and disciplines exist in the coding world that address this type of concern? I've always relied on brute-force coding and restarting a project with a whole new code base in an attempt to further my goals, but I doubt that would be the best way to do so. Within my college career, the majority of my programming was on simple projects that came out of a book, or with a given correct data set and a verifiable result. I don't have that, as my own design documents that I am going by could be terribly wrong.

    Read the article

  • Best and simplest way to handle JSON in Django

    - by primal
    Hi, as part of the application we are developing (with an Android client and a Django server), a JSON object containing a username and password is sent to the server from the Android client as follows:

        HttpPost post = new HttpPost(URL);
        /* Adding key value pairs */
        json.put("username", un);
        json.put("password", pwd);
        StringEntity se = new StringEntity(json.toString());
        post.setEntity(se);
        response = client.execute(post);

    The response is parsed like this:

        result = responsetoString(response.getEntity().getContent()); // Converts response to String
        jObject = new JSONObject(result);
        JSONObject post = jObject.getJSONObject("post");
        username = post.getString("username");
        message = post.getString("message");

    Hopefully everything up to this point is fine. The problem comes when parsing or sending JSON responses on the Django server. What's the best way to do this? We tried using SimpleJSON, and it turned out not to be so simple, as we didn't find any good tutorials or sample code for it. Are there any Python functions similar to get, put and opt in Java for JSON? Any help would be much appreciated.
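    For orientation, on the Python side the standard json module (which exposes essentially the same API as simplejson) covers the get/put role: json.loads turns the request body into a dict you read with [] or .get(), and json.dumps turns a dict back into a string for the response. A minimal Django view along those lines, as a sketch only; the view name, the "post"/"message" payload shape and the URL wiring are illustrative, and request.body assumes a reasonably recent Django version (older releases exposed request.raw_post_data instead):

        import json

        from django.http import HttpResponse

        def login(request):
            # Parse the JSON body sent by the Android client.
            data = json.loads(request.body)
            username = data.get("username")
            password = data.get("password")

            # ... authenticate here ...

            # Build the JSON reply the client-side code above expects under "post".
            payload = {"post": {"username": username, "message": "login ok"}}
            return HttpResponse(json.dumps(payload), content_type="application/json")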

    Read the article
