Search Results

Search found 3758 results on 151 pages for 'efficient'.


  • how to synchronize database table and directory with php

    - by twmulloy
    Hello, I have a directory of files and a database table that should list the same files. I would like to be able to synchronize the database table with the directory. What would be the most efficient way to do this, or would I realistically only be able to do it in a brute-force manner? Here's my approach:

    1. Retrieve all of the files in the directory as an array.
    2. Retrieve all of the filenames in the database table as an array.
    3. Loop through the directory array and use in_array() against the database array to verify each filename is present; if not, build up an array of the missing filenames and run a query to insert a row for each.
    4. Loop through the database array and use in_array() against the directory array; anything not found in the directory is deleted from the table.

    Is there a better way to go about this, or something better for this in PHP than in_array()?
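
    A minimal PHP sketch of the same diff-based idea, assuming a hypothetical files table with a filename column and an open PDO connection in $pdo; array_diff() replaces the per-item in_array() calls:

        <?php
        // Filenames on disk (drop the . and .. entries).
        $dirFiles = array_diff(scandir('/path/to/dir'), ['.', '..']);
        // Filenames currently recorded in the (hypothetical) files table.
        $dbFiles = $pdo->query('SELECT filename FROM files')->fetchAll(PDO::FETCH_COLUMN);

        // On disk but not in the table: insert them.
        $insert = $pdo->prepare('INSERT INTO files (filename) VALUES (?)');
        foreach (array_diff($dirFiles, $dbFiles) as $name) {
            $insert->execute([$name]);
        }

        // In the table but no longer on disk: delete them.
        $delete = $pdo->prepare('DELETE FROM files WHERE filename = ?');
        foreach (array_diff($dbFiles, $dirFiles) as $name) {
            $delete->execute([$name]);
        }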

    Read the article

  • mysql image disable print download

    - by Vish
    Hi, we use a Flex AIR client and a WAMP server. TIFF images are stored in MySQL. Currently I can download an image from the AIR client and it prompts with a download dialog; things are fine up to this point. We have a new requirement: only some users may print the image that gets downloaded, while other users should not be able to print the TIFF at all. I'm wondering how to accomplish this. One idea, though I'm not sure it's efficient, is to convert the requested image to PDF on the server side, disable the print option there (hopefully there are APIs available), and send back the PDF. Please let me know of better ideas. Also, is there a way to prevent the file download dialog from popping up every time the file is requested? Can we just stream the file to the client and open it with a particular viewer, or write it to PDF? Please help.

    Read the article

  • Refactoring - Speed increase

    - by Michael G
    How can I make this function more efficient? It's currently running at 6 to 45 seconds. I've run the dotTrace profiler on this specific method, and its total time is anywhere between 6,000 ms and 45,000 ms. The majority of the time is spent on the MoveNext and GetEnumerator calls. An example of the times:

        71.55% CreateTableFromReportDataColumns - 18,533 ms - 190 calls
        -- 55.71% MoveNext - 14,422 ms - 10,775 calls

    What can I do to speed this method up? It gets called a lot, and the seconds add up:

        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            DataTable table = new DataTable();
            HashSet<String> colsToAdd = new HashSet<String> { "DataStream" };
            foreach (ReportData reportData in report.ReportDatas)
            {
                IEnumerable<string> cols = reportData.ReportDataColumns
                    .Where(c => !String.IsNullOrEmpty(c.Name))
                    .Select(x => x.Name)
                    .Distinct();
                foreach (var s in cols)
                {
                    if (!String.IsNullOrEmpty(s))
                        colsToAdd.Add(s);
                }
            }
            foreach (string col in colsToAdd)
            {
                table.Columns.Add(col);
            }
            return table;
        }
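
    A minimal sketch of one possible speedup, assuming the same Report / ReportData / ReportDataColumn model: the HashSet already de-duplicates and the null/empty check is already done per item, so the per-ReportData Where/Select/Distinct pipeline (and the enumerators it allocates) can be dropped entirely:

        private static DataTable CreateTableFromReportDataColumns(Report report)
        {
            DataTable table = new DataTable();
            HashSet<string> colsToAdd = new HashSet<string> { "DataStream" };

            foreach (ReportData reportData in report.ReportDatas)
                foreach (ReportDataColumn c in reportData.ReportDataColumns)
                    if (!String.IsNullOrEmpty(c.Name))
                        colsToAdd.Add(c.Name); // HashSet silently ignores duplicates

            foreach (string col in colsToAdd)
                table.Columns.Add(col);

            return table;
        }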

    Read the article

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow. I am now using dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have a suggestion for a faster, more efficient implementation, keeping in mind that I have very tough CPU and memory constraints? [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be significant. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read. [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node. [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point. (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java)
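
    Given that as much time goes to createXMLReader as to the actual parse, one low-risk change is reusing a single SAXReader rather than constructing one per document. A minimal sketch, assuming dom4j as above (SAXReader is not thread-safe, so this assumes one reader per thread):

        import org.dom4j.Document;
        import org.dom4j.DocumentException;
        import org.dom4j.io.SAXReader;

        import java.io.InputStream;

        public class SmallDocParser {
            // Created once; the expensive XMLReaderFactory lookup happens here only.
            private final SAXReader reader = new SAXReader();

            public Document parse(InputStream in) throws DocumentException {
                return reader.read(in);
            }
        }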

    Read the article

  • How to Treat Race Conditions on Sessions in a Web Application?

    - by Morgan Cheng
    I'm working on an ASP.NET application with heavy AJAX traffic. Once a user logs in to our web application, a session is created to store that user's state. Currently, our solution for keeping session data consistent is quite simple and brutal: each request must acquire an exclusive lock before being processed. This works fine for a traditional web application, but once the application starts supporting AJAX it is no longer efficient. It is quite possible for multiple AJAX requests to be sent to the server at the same time without reloading the page. If all AJAX requests are serialized by the exclusive lock, responses are slow; worse, many AJAX requests that don't touch the same session variables are blocked as well. If we drop the exclusive lock per request, then we need to handle every race condition carefully to avoid deadlock, and I'm afraid that would make the code complex and buggy. So, is there a best practice for keeping session data consistent while keeping the code simple and clean?
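
    One widely used mitigation, sketched minimally below for an ASP.NET IHttpHandler: marking a handler with IReadOnlySessionState makes the runtime take a shared reader lock instead of the exclusive writer lock, so read-only AJAX requests stop serializing against each other, while requests that actually write session state keep the exclusive lock. The handler name and session key are hypothetical:

        using System.Web;
        using System.Web.SessionState;

        // Read-only AJAX endpoint: session is readable, but this request
        // no longer blocks other read-only requests for the same session.
        public class ReadOnlyAjaxHandler : IHttpHandler, IReadOnlySessionState
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string userState = (string)context.Session["state"];
                context.Response.Write(userState);
            }
        }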

    Read the article

  • Implementation of MVC with SQLite and NSURLConnection, use cases?

    - by user324723
    I'm interested in knowing how others have implemented/designed database and web services in their iPhone apps, and how they simplified it across the entire application. My application is dependent on these services, and I can't figure out an efficient way to use them together due to the (semi)complexity of my requirements. My past attempts at combining them haven't been completely successful, or at least not optimal in my mind. I'm building a database-driven iPhone app that uses a relational database in SQLite and consumes web services based on missing content or user interaction. Like this hasn't been done before... right? Since I am using a relational database, any web service consumed requires normalization, parsing the result, and persisting it to the database before it can be displayed in a table view controller. The application's UI consists of nested (nav controller) table views; a user selects a cell and is taken to the next table view, which attempts to populate its data source from the database. If nothing exists in the database, it sends a request via web services to download the content, thus download - parse - persist - query - display. Since the user can request a refresh of this data, a refresh requires the same process. Quickly describing what I've implemented and tried to run with: 1st attempt - used a singleton web service class that handled sending web service requests, parsing the result, and returning it to the table view controller via delegate protocols. Once the controller received the data, it was then responsible for persisting it to the database and re-returning the result. I didn't like that the only safeguard against a crash was checking whether the delegate selector still exists (the delegate may have been released). 2nd attempt - used NSNotificationCenter for easy access to both database and web services, but later realized it was more complex due to adding and removing observers per view (which isn't advised anyway).

    Read the article

  • help me reason about F# threads

    - by Kevin Cantu
    In goofing around with some F# (via MonoDevelop), I have written a routine which lists files in a directory with one thread:

        let rec loop (path: string) =
            Array.append
                (path |> Directory.GetFiles)
                (path
                 |> Directory.GetDirectories
                 |> Array.map loop
                 |> Array.concat)

    And then an asynchronous version of it:

        let rec loopPar (path: string) =
            Array.append
                (path |> Directory.GetFiles)
                (let paths = path |> Directory.GetDirectories
                 if paths <> [||] then
                     [| for p in paths -> async { return (loopPar p) } |]
                     |> Async.Parallel
                     |> Async.RunSynchronously
                     |> Array.concat
                 else
                     [||])

    On small directories, the asynchronous version works fine. On bigger directories (e.g. many thousands of directories and files), the asynchronous version seems to hang. What am I missing? I know that creating thousands of threads is never going to be the most efficient solution -- I only have 8 CPUs -- but I am baffled that for larger directories the asynchronous function just doesn't respond (even after a half hour). It doesn't visibly fail, though, which baffles me. Is there a thread pool which is exhausted? How do these threads actually work?
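
    A minimal sketch of one way to avoid the hang, assuming the cause is the nested Async.RunSynchronously: each recursion level blocks a thread-pool thread while waiting on its children, so a deep or wide tree can exhaust the pool, with every thread waiting on work that has no thread left to run on. Keeping the whole traversal inside async and blocking only once, at the top, avoids this (Async.Parallel also handles the empty-array case, so the [||] special case disappears):

        open System.IO

        let rec loopAsync (path: string) : Async<string[]> =
            async {
                let files = Directory.GetFiles path
                let! subResults =
                    Directory.GetDirectories path
                    |> Array.map loopAsync
                    |> Async.Parallel
                return Array.append files (Array.concat subResults)
            }

        // Block only once, at the outermost call.
        let allFiles = loopAsync "/some/dir" |> Async.RunSynchronously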

    Read the article

  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6 s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.

        Brute-force query: [execution plan image]
        Index-based exception query: [execution plan image]

    My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

    Read the article

  • Fast search of a 2-dimensional array

    - by Tim
    I need a method of quickly searching a large 2-dimensional array. I extract the array from Excel, so one dimension represents the rows and the second the columns. I wish to obtain a list of the rows where the columns match certain criteria, and I need to know the row number (or index into the array). For example, if I extract a range from Excel, I may need to find all rows where column A = “dog” and column B = 7 and column J “a”. I only know which columns and which values to find at run time, so I can't hard-code the column index. I could use a simple loop, but is this efficient? I need to run it several thousand times, searching for different criteria each time.

        For r As Integer = 0 To UBound(myArray, 0) - 1
            match = True
            For c = 0 To UBound(myArray, 1) - 1
                If Not doesValueMeetCriteria(myArray(r, c)) Then
                    match = False
                    Exit For
                End If
            Next
            If match Then addRowToMatchedRows(r)
        Next

    The doesValueMeetCriteria function is a simple function that checks the value of an array element against the query requirement, e.g. column A = dog, etc. Is it more efficient to create a DataTable from the array and use the .Select method? Can I use LINQ in some way? Perhaps some form of dictionary or hashtable? Or is the simple loop the most efficient? Your suggestions are most welcome.
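
    A minimal VB.NET sketch of one LINQ phrasing, assuming myArray is the Object(,) range pulled from Excel and that the run-time criteria can be expressed as (column index, predicate) pairs; it is still an O(rows × criteria) scan, but it only visits the columns actually being queried rather than every column (requires Imports System.Linq):

        ' Hypothetical stand-ins for the criteria known only at run time.
        Dim isDog As Func(Of Object, Boolean) = Function(v) CStr(v) = "dog"
        Dim isSeven As Func(Of Object, Boolean) = Function(v) CInt(v) = 7

        Dim criteria As New List(Of Tuple(Of Integer, Func(Of Object, Boolean)))
        criteria.Add(Tuple.Create(0, isDog))   ' column A = "dog"
        criteria.Add(Tuple.Create(1, isSeven)) ' column B = 7

        Dim rowCount As Integer = UBound(myArray, 0) + 1
        Dim matchedRows = Enumerable.Range(0, rowCount).Where(
            Function(r) criteria.All(
                Function(c) c.Item2(myArray(r, c.Item1)))).ToList()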

    Read the article

  • What's the difference between these LINQ queries?

    - by SnAzBaZ
    I use LINQ to SQL as my DAL, and I then have a project called DB which acts as my BLL. Various applications then access the BLL to read/write data from the SQL database. I have these methods in my BLL for one particular table:

        public IEnumerable<SystemSalesTaxList> Get_SystemSalesTaxList()
        {
            return from s in db.SystemSalesTaxLists
                   select s;
        }

        public SystemSalesTaxList Get_SystemSalesTaxList(string strSalesTaxID)
        {
            return Get_SystemSalesTaxList().Where(s => s.SalesTaxID == strSalesTaxID).FirstOrDefault();
        }

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            return Get_SystemSalesTaxList().Where(s => s.ZipCode == strZipCode).FirstOrDefault();
        }

    All pretty straightforward, I thought. Get_SystemSalesTaxListByZipCode is always returning null, though, even when given a ZIP code that exists in that table. If I write the method like this, it returns the row I want:

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            var salesTax = from s in db.SystemSalesTaxLists
                           where s.ZipCode == strZipCode
                           select s;
            return salesTax.FirstOrDefault();
        }

    Why does the other method not return the same, when the query should be identical? Note that the overloaded Get_SystemSalesTaxList(string strSalesTaxID) returns a record just fine when given a valid SalesTaxID. Is there a more efficient way to write these "helper" type classes? Thanks!
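
    A minimal sketch of one common refactoring for helpers like these, assuming the same LINQ to SQL context: returning IQueryable<T> instead of IEnumerable<T> lets the Where() compose into the generated SQL, so the filter runs in the database rather than in memory after pulling the whole table:

        public IQueryable<SystemSalesTaxList> Get_SystemSalesTaxList()
        {
            // Table<T> already implements IQueryable<T>; no query syntax needed.
            return db.SystemSalesTaxLists;
        }

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            // The predicate is translated into the SQL WHERE clause.
            return Get_SystemSalesTaxList()
                .FirstOrDefault(s => s.ZipCode == strZipCode);
        }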

    Read the article

  • What is the proper way to use a Logger in a Serializable Java class?

    - by Tim Visher
    I have the following (doctored) class in a system I'm working on, and FindBugs is generating an SE_BAD_FIELD warning. I'm trying to understand why it says that before I fix it in the way I had planned. The reason I'm confused is that the description seems to indicate I had used no other non-serializable instance fields in the class, but bar.model.Foo is also not serializable and is used in the exact same way (as far as I can tell), yet FindBugs generates no warning for it.

        import bar.model.Foo;

        import java.io.File;
        import java.io.Serializable;
        import java.util.List;

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        public class Demo implements Serializable {
            private final Logger logger = LoggerFactory.getLogger(this.getClass());

            private final File file;
            private final List<Foo> originalFoos;

            private Integer count;
            private int primitive = 0;

            public Demo() {
                for (Foo foo : originalFoos) {
                    this.logger.debug(...);
                }
            }

            ...
        }

    My initial instinct for a solution is to get a logger reference from the factory right where I use it:

        public DispositionFile() {
            Logger logger = LoggerFactory.getLogger(this.getClass());
            for (Foo foo : originalFoos) {
                logger.debug(...);
            }
        }

    That doesn't seem particularly efficient, though. Thoughts?
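
    A minimal sketch of the usual fix, assuming SLF4J as above: making the logger static removes it from every instance's serialized state (which is what SE_BAD_FIELD complains about) and performs the factory lookup once per class rather than once per instance:

        // Class-level, not instance state: serialization and FindBugs both ignore it.
        private static final Logger logger = LoggerFactory.getLogger(Demo.class);

    Marking the field transient would also silence the warning, but a deserialized instance would then come back with a null logger; the static field avoids that problem entirely.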

    Read the article

  • How good is the memory mapped Circular Buffer on Wikipedia?

    - by abroun
    I'm trying to implement a circular buffer in C, and have come across this example on Wikipedia. It looks as if it would provide a really nice interface for anyone reading from the buffer, as reads which wrap around from the end to the beginning of the buffer are handled automatically. So all reads are contiguous. However, I'm a bit unsure about using it straight away as I don't really have much experience with memory mapping or virtual memory and I'm not sure that I fully understand what it's doing. What I think I understand is that it's mapping a shared memory file the size of the buffer into memory twice. Then, whenever data is written into the buffer it appears in memory in 2 places at once. This allows all reads to be contiguous. What would be really great is if someone with more experience of POSIX memory mapping could have a quick look at the code and tell me if the underlying mechanism used is really that efficient. Am I right in thinking for example that the file in /dev/shm used for the shared memory always stays in RAM or could it get written to the hard drive (performance hit) at some point? Are there any gotchas I should be aware of? As it stands, I'm probably going to use a simpler method for my current project, but it'd be good to understand this to have it in my toolbox for the future. Thanks in advance for your time.
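
    For reference, a minimal C sketch of the double-mapping trick that example relies on, assuming POSIX shm_open/mmap and a page-aligned size; error handling is trimmed, and the name "/ringbuf" is a hypothetical placeholder:

        #include <sys/mman.h>
        #include <fcntl.h>
        #include <unistd.h>

        void *create_mirrored_buffer(size_t size) /* size must be page-aligned */
        {
            int fd = shm_open("/ringbuf", O_CREAT | O_RDWR, 0600);
            ftruncate(fd, size);

            /* Reserve 2*size of address space, then map the file into both halves. */
            void *base = mmap(NULL, 2 * size, PROT_NONE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            mmap(base, size, PROT_READ | PROT_WRITE,
                 MAP_SHARED | MAP_FIXED, fd, 0);
            mmap((char *)base + size, size, PROT_READ | PROT_WRITE,
                 MAP_SHARED | MAP_FIXED, fd, 0);

            shm_unlink("/ringbuf"); /* mappings survive the unlink */
            close(fd);
            return base; /* base[i] and base[i + size] alias the same byte */
        }

    As for the /dev/shm question: on Linux it is a tmpfs, so the pages live in RAM but can be swapped out under memory pressure like any anonymous memory; they are never written back to a regular disk file.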

    Read the article

  • Are there any libraries for parsing "number expressions" like 1,2-9,33- in Java

    - by mihi
    Hi, I don't think this is hard, just tedious to write: some small, free (as in beer) library where I can put in a String like 1,2-9,33- and it can tell me whether a given number matches that expression, just like the print-range dialogs in most programs. Special functions for matching only odd or even numbers, or matching every number that is 2 mod 5 (or something like that), would be nice but are not needed. The only operation I have to perform on this list is checking whether the range contains a given (non-negative) integer value; more operations, like max/min value (if they exist) or an iterator, would of course be nice. What is needed is that it does not occupy lots of RAM if anyone enters 1-10000000 but the only number I will ever query is 12345 :-) (To implement it, I would parse the list into several (min/max/value/mod) tuples, like 1,10,0,1 for 1-10, or 11,33,1,2 for 11-33 odd, or 12,62,2,10 for 12-62/10 (i.e. 12, 22, 32, ..., 62), and then check each number against all the intervals. Open intervals would use Integer.MAX_VALUE etc. If there are no libraries, any ideas for doing it better/more efficiently?)
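
    A minimal Java sketch of the (min/max/value/mod) representation proposed above, using Java 16+ records for brevity; memory use is one tuple per term, so 1-10000000 costs the same as 1-10. Names are hypothetical and parsing assumes well-formed input:

        import java.util.ArrayList;
        import java.util.List;

        final class NumberExpression {
            private record Term(int min, int max, int residue, int mod) {}

            private final List<Term> terms = new ArrayList<>();

            NumberExpression(String expr) {
                for (String raw : expr.split(",")) {
                    String term = raw.trim();
                    if (term.endsWith("-")) {          // open interval, e.g. "33-"
                        int min = Integer.parseInt(term.substring(0, term.length() - 1));
                        terms.add(new Term(min, Integer.MAX_VALUE, 0, 1));
                    } else if (term.contains("-")) {   // closed interval, e.g. "2-9"
                        String[] p = term.split("-");
                        terms.add(new Term(Integer.parseInt(p[0]),
                                           Integer.parseInt(p[1]), 0, 1));
                    } else {                           // single number
                        int n = Integer.parseInt(term);
                        terms.add(new Term(n, n, 0, 1));
                    }
                }
            }

            boolean contains(int n) {
                for (Term t : terms)
                    if (n >= t.min() && n <= t.max() && n % t.mod() == t.residue())
                        return true;
                return false;
            }
        }

    Usage: new NumberExpression("1,2-9,33-").contains(12345) returns true. The odd/even and mod-5 variants would just set mod and residue while parsing a suffix instead of the fixed 0,1.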

    Read the article

  • Parsing multiple files at a time in Perl

    - by sfactor
    I have a large data set (around 90 GB) to work with. There are data files (tab-delimited) for each hour of each day, and I need to perform operations over the entire data set; for example, getting the share of OSes listed in one of the columns. I tried merging all the files into one huge file and performing a simple count operation, but it was simply too huge for the server memory. So I guess I need to perform the operation one file at a time and then add up the results at the end. I am new to Perl and especially naive about performance issues. How do I do such operations in a case like this? As an example, two columns of the file are:

        ID  OS
        1   Windows
        2   Linux
        3   Windows
        4   Windows

    Let's do something simple: counting the share of the OSes in the data set. Each .txt file has millions of these lines, and there are many such files. What would be the most efficient way to operate on the entire set of files?
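
    A minimal Perl sketch of the file-at-a-time tally, assuming tab-delimited files with the OS in the second column (the *.txt glob is a hypothetical stand-in for however the hourly files are located); only one line is in memory at any moment, so file size doesn't matter:

        use strict;
        use warnings;

        my %os_count;
        for my $file (glob '*.txt') {
            open my $fh, '<', $file or die "Can't open $file: $!";
            while (my $line = <$fh>) {
                chomp $line;
                my (undef, $os) = split /\t/, $line;  # column 1 = ID, column 2 = OS
                $os_count{$os}++ if defined $os;
            }
            close $fh;
        }
        printf "%s: %d\n", $_, $os_count{$_} for sort keys %os_count;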

    Read the article

  • How do I detect proximity of the mouse pointer to a line in Flex?

    - by Hanno Fietz
    I'm working on a charting UI in Flex. One of the features I want to implement is "snapping" of the mouse pointer to the data points in the diagram, i.e., if the user hovers the mouse pointer over a line diagram and gets close to a data point, I want the pointer to move to the exact coordinates and show a marker, like this: [screenshot] Currently, the lines are drawn on a Shape, using the Graphics API. The Shape is a child DisplayObject of a custom UIComponent subclass with the exact same dimensions. This means I already get mouseOver events on the parent of the diagram's canvas. Now I need a way to detect whether the pointer is close to one of the data points, i.e., I need an answer to the question "Which data points lie within a radius of x pixels from my current position, and which of them is closest?" upon each move of the mouse. I can think of the following possibilities:

    - Draw the lines not as simple lines via the Graphics API, but as more advanced objects that can have their own mouseOver events. However, I want the snapping to trigger before the mouse is actually over the line.
    - Check the original data for possible candidates upon each mouse movement. Using binary search, I might be able to reduce the number of items I have to compare sufficiently.
    - Prepare some kind of new data structure from the raw data that makes the above search more efficient. I don't know what that would look like.

    I'm guessing this is a pretty standard problem for a number of applications, but the actual code is probably buried inside some framework. Is there anything I can read about this topic?
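
    A minimal ActionScript sketch of the binary-search option, assuming the data points are already sorted by x (true of most line charts) and expressed in the same pixel space as the mouse; all names are hypothetical. After locating the insertion position of the mouse x, only points whose x-distance is within the radius need a full distance check:

        import flash.geom.Point;

        function nearestPointWithin(points:Vector.<Point>, mx:Number, my:Number,
                                    radius:Number):Point {
            // Binary search for the first point with x >= mx.
            var lo:int = 0;
            var hi:int = points.length;
            while (lo < hi) {
                var mid:int = (lo + hi) >> 1;
                if (points[mid].x < mx) lo = mid + 1; else hi = mid;
            }
            var best:Point = null;
            var bestD2:Number = radius * radius; // compare squared distances
            // Walk left, then right, but only while |dx| stays within the radius.
            for (var i:int = lo - 1; i >= 0 && mx - points[i].x <= radius; i--) {
                var dl:Number = (points[i].x - mx) * (points[i].x - mx)
                              + (points[i].y - my) * (points[i].y - my);
                if (dl < bestD2) { bestD2 = dl; best = points[i]; }
            }
            for (var j:int = lo; j < points.length && points[j].x - mx <= radius; j++) {
                var dr:Number = (points[j].x - mx) * (points[j].x - mx)
                              + (points[j].y - my) * (points[j].y - my);
                if (dr < bestD2) { bestD2 = dr; best = points[j]; }
            }
            return best; // null if nothing lies within the radius
        }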

    Read the article

  • Java-Eclipse-Spring 3.1 - the fastest way to get familiar with this set

    - by Leron
    I know almost all of you, at some point in your life as a programmer, get to the point where you know (more or less) several technologies, languages, and IDEs, and the time comes when you want to bring them together and start using them at once -- first, more efficiently, and second, closer to the real-life situation where just knowing Java, or having some experience with Eclipse, means nothing on its own; what makes you a programmer worth something is the ability to work with a combination of two or more of them. With this in mind, here is my question: what do you think is the optimal way of getting into the Java + Eclipse + Spring 3.1 world? I've read, and I've read a lot. I started writing real code, but almost every step is reinventing the wheel again and again, wondering how to do things I know are somewhat trivial, because I missed that one article where the topic was discussed, and so on. I don't mind paying for a good tutorial. For example, after a bit of research I decided that instead of losing a lot of time putting the different pieces together myself, I'd rather pay for the videos at http://knpuniversity.com/screencast/starting-in-symfony2-tutorial and save myself a lot of time (I hope), getting as fast as possible to writing real code. But I find it much more difficult to find such sources for something more specific like this, which is the reason I'm asking. I know a lot of you went through it the hard way, and I won't give up if I have to do the same, but to be honest I really hope for posts with good tutorials on the subject (paid or not), because in my situation time is literally money. Thanks, Leron

    Read the article

  • [MySQL] Efficiently store last X records per item

    - by Saif Bechan
    I want to store the last X records per item in a MySQL database in an efficient way, so that when the 4th record is stored the 1st is deleted. The way I do this now is to first run a query getting the items, then check what I should do, and then insert/delete. There has to be a better way to do this. Any suggestions? Edit: I should add that the records stored do not have a unique number. They have a composite pair, for example article_id and user_id. I want to make a table with the last X items for user_x. Just selecting the articles from the main table, grouped by user and sorted by time, is not an option for me: the table where I would do that sort and group has millions of records and gets hit a lot for no reason, so a table in between holding just the last X records is far more efficient. P.S. I am not actually using this for articles and users.
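
    A minimal MySQL sketch of the insert-then-trim idea, assuming a hypothetical last_articles(user_id, article_id, created_at) table and X = 3; the derived-table wrapper is needed because MySQL will not otherwise delete from a table it is selecting from, nor allow LIMIT directly inside an IN subquery:

        INSERT INTO last_articles (user_id, article_id, created_at)
        VALUES (42, 1001, NOW());

        DELETE FROM last_articles
        WHERE user_id = 42
          AND article_id NOT IN (
              SELECT article_id FROM (
                  SELECT article_id
                  FROM last_articles
                  WHERE user_id = 42
                  ORDER BY created_at DESC
                  LIMIT 3
              ) AS keepers
          );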

    Read the article

  • SQL Databases and table design/organization

    - by John McMullen
    (NOOB disclaimer) I'm working on a system (a type of map) that is accessed mostly via 3 fields: ID (auto-incremented), X coordinate, and Y coordinate. As it is right now, I have all the map's data stored in one table. Whenever the map display is loaded, it simply queries the database for contents at x and y, and the DB returns the data (the other fields in the same entry). If an item on the map is doing something, it has a flag saying so, plus the ID of the action in another table holding that type of 'actions'. Essentially, all map data is stored in one table, and all actions of a certain type are stored in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure is for such a design (a map that has items, and each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there are only X number of entries per table (coordinate range limits)? Or should it just keep growing and growing? There are a lot of queries against the table... so I'm just trying to see what is best :/
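
    For what it's worth, the standard answer to coordinate lookups is indexing rather than splitting tables. A minimal SQL sketch, assuming a hypothetical map_items table shaped like the one described: a composite index on (x, y) lets the display query seek directly to the requested region instead of scanning, so a single growing table usually stays fast:

        CREATE INDEX idx_map_xy ON map_items (x, y);

        -- Typical display query: fetch everything in the visible window.
        SELECT id, x, y, action_id
        FROM map_items
        WHERE x BETWEEN 100 AND 120
          AND y BETWEEN 50 AND 70;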

    Read the article

  • Saving multiple items per single database cell...

    - by eugeneK
    Hi, I have a countries list. Each user can check multiple countries. Once saved, this "user country list" will be used to decide whether other users fit into the countries a given user chose. The question is what would be the most efficient approach to this problem. One option I have is to save the user's selection as a delimited list, like Canada,USA,France ..., in a single varchar(max) field. The problem is that when, say, a user from Germany enters a page I perform this check on, searching for Germany would require fetching every row and splitting each field to check against the value, or using SQL LIKE, which again is pretty damn slow. If you have a better solution or some tips, I would be glad to hear them. Just to be clear: many users will each have their own selection of countries from which (and only from which) they want users to land on their page, while millions of users will reach those pages. So the faster the approach, the better. Technology: MSSQL and ASP.NET. Thanks.
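
    A minimal SQL sketch of the normalized alternative to the delimited varchar, assuming hypothetical users and countries tables: a junction table turns "which users accept Germany" into a single indexed lookup instead of a LIKE scan over every row:

        CREATE TABLE user_countries (
            user_id    INT NOT NULL,
            country_id INT NOT NULL,
            PRIMARY KEY (user_id, country_id)
        );

        -- All users who selected Germany, answered from the primary key index.
        SELECT uc.user_id
        FROM user_countries uc
        JOIN countries c ON c.id = uc.country_id
        WHERE c.name = 'Germany';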

    Read the article

  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master DB every so often. I'm using UUIDs on both the client and server side. When syncing up to the server, the server will return a map of local UUIDs (luids) to server UUIDs (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values. However, say a client record, e.g. a todo, has an attribute list_id which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. When that attribute is sent over to the server, it would dirty the server DB with luids rather than the suids the server is using. My current solution is for the master server to keep a record of the mappings of luids to suids (per client ID), and for each foreign key in a command, look up the suid for that particular client and use it instead. I'm wondering whether others have come across this problem and, if so, how they have solved it. Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys (5)", where someone seemed to suggest my current solution as one option, along with composite keys using suids and auto-incrementing sequences, and another option using negative IDs for client IDs and then updating all negative IDs with the suids. Both of these other options seem like a lot more work. Thanks, Saimon
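
    A minimal SQL sketch of the mapping lookup described, assuming a hypothetical id_map(client_id, luid, suid) table on the master: before applying a client command, each foreign-key luid is rewritten to its suid for that client (the named parameters are placeholders):

        SELECT suid
        FROM id_map
        WHERE client_id = :client_id
          AND luid = :incoming_list_id;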

    Read the article

  • Noob - Cycle through stored names and skip blanks

    - by ActiveJimBob
    NOOB trying to make my code more efficient. On a scroll-button push, the function SetName stores a number in the integer iName, which indexes one of 5 names stored in memory. If a name is not set in memory, it skips to the next. The code works, but takes up a lot of room. Any advice appreciated. Code:

        #include <string.h>

        int iName = 0;
        int iNewName = 0;

        BYTE GetName()
        {
            return iName;
        }

        void SetName(int iNewName)
        {
            while (iName != iNewName) {
                switch (iNewName) {
                case 1:
                    if (strlen(memory.m_nameA) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                case 2:
                    if (strlen(memory.m_nameB) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                case 3:
                    if (strlen(memory.m_nameC) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                case 4:
                    if (strlen(memory.m_nameD) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                case 5:
                    if (strlen(memory.m_nameE) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                default:
                    iNewName = 1;
                    break;
                }  // end of switch
            }  // end of loop
        }  // end of SetName function

        void main()
        {
            while (1) {
                if (Button_pushed)
                    SetName(GetName() + 1);
            }  // end of infinite loop
        }  // end of main
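
    A minimal sketch of a table-driven rewrite, assuming the five name fields can be reached through an array of pointers (the names array below is a hypothetical stand-in for memory.m_nameA through m_nameE): the switch collapses into a loop, and adding a sixth name becomes one more array entry rather than a new case:

        #include <string.h>

        #define NAME_COUNT 5

        /* Hypothetical: point these at memory.m_nameA .. memory.m_nameE. */
        static const char *names[NAME_COUNT];

        static int iName = 0;

        void SetName(int iNewName)
        {
            int i;
            for (i = 0; i < NAME_COUNT; i++) {
                /* Try iNewName first, then wrap around: iNewName+1, ... */
                int candidate = ((iNewName - 1 + i) % NAME_COUNT) + 1;
                if (names[candidate - 1] != NULL && strlen(names[candidate - 1]) > 0) {
                    iName = candidate;
                    return;
                }
            }
            /* No name is set at all: leave iName unchanged. */
        }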

    Read the article

  • Validating JSPs and HTML Forms: Server-side, Client-side, or Both?

    - by CitadelCSAlum
    I am aware that I can Google "HTML form validation" and get a billion tutorials. I am well aware that I can use simple JavaScript to validate form input, but I have been told that this is not necessarily a sufficient method on its own. I have also heard that it is a best practice to validate both client-side and server-side. OK! What exactly does this mean besides writing code in both places? Does it mean I do some validation with JavaScript and the rest with servlets, or does it mean I write identical validation logic in both? My real question is: can anybody give me insight and direction on how to go about validating my HTML forms? I am using JSPs and servlets, and I have tons of form validation to do. I have already done minor form validation with regex in Java, but I want to figure out whether I'm heading down the right track before I write any more code. Only productive answers please; if I wanted negative feedback on how inexperienced I am, I would have gone to Reddit. Thanks!

    Read the article

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

    Read the article

  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast to hash (~40 per second for 800x600 images), and an unoptimised search algorithm can go through 3,000 images in 22 minutes, comparing each one against the others (3/sec). The basic overview: you take an image, rescale it to 8x8, and then convert those pixels to HSV. The hue, saturation and value are then truncated to 4 bits each, and the whole thing becomes one big hex string. Comparing images basically walks along the two strings, adding up the differences it finds. If the total is below 64, it's the same image; different images usually score around 600-800, and below 20 is extremely similar. Are there any improvements I can make to this model? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but the others? To speed up searches I could probably split the 4 bits from each part in half and put the most significant bits first, so if they fail the check then the LSBs don't need to be checked at all. I don't know an efficient way to store bits like that yet still allow them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizes and fairly resistant to brightness and contrast changes.
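
    For concreteness, a minimal PHP sketch of a hash along the lines described, assuming the GD extension; the RGB-to-HSV conversion and 4-bit truncation follow the standard formulas, producing 3 hex digits per pixel (192 in total):

        <?php
        function similarity_hash(string $path): string
        {
            $src  = imagecreatefromstring(file_get_contents($path));
            $tiny = imagecreatetruecolor(8, 8);
            imagecopyresampled($tiny, $src, 0, 0, 0, 0, 8, 8,
                               imagesx($src), imagesy($src));

            $hash = '';
            for ($y = 0; $y < 8; $y++) {
                for ($x = 0; $x < 8; $x++) {
                    $rgb = imagecolorat($tiny, $x, $y);
                    $r = (($rgb >> 16) & 0xFF) / 255;
                    $g = (($rgb >> 8) & 0xFF) / 255;
                    $b = ($rgb & 0xFF) / 255;

                    // RGB -> HSV, all channels normalized to [0, 1].
                    $max = max($r, $g, $b);
                    $min = min($r, $g, $b);
                    $v = $max;
                    $d = $max - $min;
                    $s = ($max == 0) ? 0 : $d / $max;
                    if ($d == 0)        $h = 0;
                    elseif ($max == $r) $h = fmod(($g - $b) / $d, 6) / 6;
                    elseif ($max == $g) $h = (($b - $r) / $d + 2) / 6;
                    else                $h = (($r - $g) / $d + 4) / 6;
                    if ($h < 0) $h += 1;

                    // Truncate each channel to 4 bits (one hex digit each).
                    $hash .= dechex((int)($h * 15))
                           . dechex((int)($s * 15))
                           . dechex((int)($v * 15));
                }
            }
            return $hash;
        }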

    Read the article

  • VB.NET Syntax Coding

    - by Yiu Korochko
    I know many people ask how some of these things are done, but I do not understand the context in which to use the answers, so... I'm building a code editor for a subversion of the Python language, and I found a very decent way of highlighting keywords in the RichTextBox through this:

        bluwords.Add(KEYWORDS GO HERE)

        If scriptt.Text.Length > 0 Then
            Dim selectStart2 As Integer = scriptt.SelectionStart
            scriptt.Select(0, scriptt.Text.Length)
            scriptt.SelectionColor = Color.Black
            scriptt.DeselectAll()
            For Each oneWord As String In bluwords
                Dim pos As Integer = 0
                Do While scriptt.Text.ToUpper.IndexOf(oneWord.ToUpper, pos) >= 0
                    pos = scriptt.Text.ToUpper.IndexOf(oneWord.ToUpper, pos)
                    scriptt.Select(pos, oneWord.Length)
                    scriptt.SelectionColor = Color.Blue
                    pos += 1
                Loop
            Next
            scriptt.SelectionStart = selectStart2
        End If

    (scriptt is the RichTextBox.) But when any decent amount of code is typed (or loaded via an OpenFileDialog), chunks of the code go missing, the syntax highlighting falls apart, and it just plain ruins it. I'm looking for a more efficient way of doing this, maybe something more like Visual Studio itself... because there is NO NEED to highlight all the text, set it black, and then redo all of the highlighting, and the text begins to overwrite if you go back to insert characters between existing text. Also, in this version of Python, hash (#) is used for comments on comment-only lines and double hash (##) is used for comments on the same line as code. Now, I saw that someone had asked about this exact thing, and the working answer to select to the end of the line was something like:

        ^\'[^\r\n]+$|''[^\r\n]+$

    which I cannot seem to put into practice. I also want to select text between quotes and turn it turquoise, such that between the first quotation mark and the second the text is turquoise, and the same between the 3rd and 4th, etcetera... Any help is appreciated!
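
    A minimal VB.NET sketch of the incremental approach editors usually take, assuming the same scriptt RichTextBox and bluwords list and word wrap turned off: on each change, recolor only the line being edited instead of the whole document, which avoids both the flicker and the full-document rescan:

        Private Sub HighlightCurrentLine()
            Dim lineStart As Integer = scriptt.GetFirstCharIndexOfCurrentLine()
            Dim lineIndex As Integer = scriptt.GetLineFromCharIndex(lineStart)
            Dim lineText As String = scriptt.Lines(lineIndex)
            Dim savedStart As Integer = scriptt.SelectionStart

            ' Reset only this line to black, then recolor its keywords.
            scriptt.Select(lineStart, lineText.Length)
            scriptt.SelectionColor = Color.Black

            For Each oneWord As String In bluwords
                Dim pos As Integer = lineText.ToUpper().IndexOf(oneWord.ToUpper())
                While pos >= 0
                    scriptt.Select(lineStart + pos, oneWord.Length)
                    scriptt.SelectionColor = Color.Blue
                    pos = lineText.ToUpper().IndexOf(oneWord.ToUpper(), pos + 1)
                End While
            Next

            ' Restore the caret and make sure newly typed text stays black.
            scriptt.Select(savedStart, 0)
            scriptt.SelectionColor = Color.Black
        End Sub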

    Read the article
