Search Results

Search found 1091 results on 44 pages for 'efficiency'.

Page 30/44 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37 | Next Page >

  • How do we achieve "substring-match" under O(n) time?

    - by Pacerier
    I have an assignment that requires reading a huge file of random inputs, for example: Adana Izmir Adnan Menderes Apt Addis Ababa Aden ADIYAMAN ALDAN Amman Marka Intl Airport Adak Island Adelaide Airport ANURADHAPURA Kodiak Apt DALLAS/ADDISON Ardabil ANDREWS AFB etc. If I specify a search term, the program is supposed to find the lines in which that substring occurs. For example, if the search term is "uradha", the program is supposed to show ANURADHAPURA; if the search term is "airport", it is supposed to show Amman Marka Intl Airport and Adelaide Airport. A quote from the assignment specs: "You are to program this application taking efficiency into account as though large amounts of data and processing is involved." I could easily achieve this functionality using a loop, but then every query costs O(n). I was thinking of using a trie, but it only seems to work when the substring starts at index 0. I was wondering what solutions there are that give performance better than O(n).
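
    One standard way to beat a per-query linear scan is to index the file once, for example with a suffix array over the concatenated lines: each lookup is then a binary search over suffixes, roughly O(m log n) for a term of length m, instead of a full scan. A minimal Python sketch of the idea (naive index construction, Python 3.10+ for bisect's key= argument; the sample data and names here are mine, not from the assignment):

        import bisect

        lines = ["Adana", "ANURADHAPURA", "Amman Marka Intl Airport", "Adelaide Airport"]

        # Concatenate the lines (lowercased for case-insensitive search),
        # remembering where each line starts.
        text = "\n".join(s.lower() for s in lines) + "\n"
        starts, pos = [], 0
        for s in lines:
            starts.append(pos)
            pos += len(s) + 1

        # Naive suffix array: all suffix offsets, sorted by suffix text.
        # Fine for a sketch; real builds use O(n log n) or SA-IS algorithms.
        suffixes = sorted(range(len(text)), key=lambda i: text[i:])

        def find_lines(term):
            term = term.lower()
            # Binary search for the block of suffixes that start with `term`.
            key = lambda i: text[i:i + len(term)]
            lo = bisect.bisect_left(suffixes, term, key=key)
            hi = bisect.bisect_right(suffixes, term, key=key)
            # Map each match offset back to the line that contains it.
            return sorted({lines[bisect.bisect_right(starts, i) - 1]
                           for i in suffixes[lo:hi]})

        print(find_lines("uradha"))   # ['ANURADHAPURA']
        print(find_lines("airport"))  # ['Adelaide Airport', 'Amman Marka Intl Airport']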

    Read the article

  • HTML 5 Canvas - Get pixel data for a path

    - by Mikey S.
    I wonder if there is any way I can get pixel data for the currently drawn path in the canvas tag. I can calculate the pixel data on my own when drawing simple shapes like a square or a line, but things get messy with more complicated shapes like an ellipse or even a simple circle. The reason I'm asking is that I'm working on a web application which involves sending canvas pixel data to the server whenever I add a path to the canvas. The server needs to keep its own copy of the entire canvas, and I really don't want to send the ENTIRE canvas image on every single change, only the delta, for efficiency reasons... Thanks.

    Read the article

  • code throws std::bad_alloc, not enough memory or can it be a bug?

    - by Andreas
    I am parsing using a pretty large grammar (1.1 GB, it's data-oriented parsing). The parser I use (bitpar) is said to be optimized for highly ambiguous grammars. I'm getting this error:

        terminate called after throwing an instance of 'std::bad_alloc'
          what():  St9bad_alloc
        dotest.sh: line 11: 16686 Aborted  bitpar -p -b 1 -s top -u unknownwordsm -w pos.dfsa /tmp/gsyntax.pcfg /tmp/gsyntax.lex arbobanko.test arbobanko.results

    Is there hope? Does it mean that it has run out of memory? It uses about 15 GB before it crashes. The machine I'm using has 32 GB of RAM, plus swap as well. It crashes before outputting a single parse tree. The parser is an efficient CYK chart parser using bit vector representations; I presume it is already near the limit of memory efficiency. If it really requires too much memory I could sample from the grammar rules, but this would of course decrease parse accuracy.

    Read the article

  • Is Work Stealing always the most appropriate user-level thread scheduling algorithm?

    - by Il-Bhima
    I've been investigating different scheduling algorithms for a thread pool I am implementing. Due to the nature of the problem I am solving I can assume that the tasks being run in parallel are independent and do not spawn any new tasks. The tasks can be of varying sizes. I went immediately for the most popular scheduling algorithm "work stealing" using lock-free deques for the local job queues, and I am relatively happy with this approach. However I'm wondering whether there are any common cases where work-stealing is not the best approach. For this particular problem I have a good estimate of the size of each individual task. Work-stealing does not make use of this information and I'm wondering if there is any scheduler which will give better load-balancing than work-stealing with this information (obviously with the same efficiency). NB. This question ties up with a previous question.
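
    When per-task size estimates are available, one classic size-aware alternative is greedy LPT (longest processing time first): sort tasks by estimated size, descending, and always hand the next task to the currently least-loaded worker; for independent, non-spawning tasks this carries a well-known approximation guarantee on makespan. A Python sketch of the assignment step (the names and the assumption that all sizes are known up front are mine):

        import heapq

        def lpt_schedule(task_sizes, n_workers):
            """Greedy LPT: give each task, largest first, to the least-loaded worker."""
            loads = [(0, w) for w in range(n_workers)]  # heap of (load, worker)
            heapq.heapify(loads)
            assignment = [[] for _ in range(n_workers)]
            for task, size in sorted(enumerate(task_sizes), key=lambda t: -t[1]):
                load, w = heapq.heappop(loads)
                assignment[w].append(task)
                heapq.heappush(loads, (load + size, w))
            return assignment

        print(lpt_schedule([7, 3, 3, 2, 2, 2], 2))  # [[0, 4], [1, 2, 3, 5]]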

    Read the article

  • Complicated parsing in python

    - by Quazi Farhan
    I have a weird parsing problem with Python. I need to parse the following text. Here I need only the section between (not including) the <pre> tag and the column of numbers (starting with 205 4 164). I have several pages in this format.

        <html>
        <pre>
        A Short Study of Notation Efficiency
        CACM August, 1960
        Smith Jr., H. J.
        CA600802 JB March 20, 1978 9:02 PM
        205 4 164
        210 4 164
        214 4 164
        642 4 164
        1 5 164
        </pre>
        </html>
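
    For the record layout shown above, a lazy regex with a lookahead for the first all-numeric line is one workable approach; a hedged sketch using Python's re module (the sample text is abbreviated):

        import re

        page = """<html>
        <pre>
        A Short Study of Notation Efficiency
        CACM August, 1960
        Smith Jr., H. J.
        CA600802 JB March 20, 1978 9:02 PM
        205 4 164
        210 4 164
        </pre>
        </html>"""

        # Capture everything after <pre> up to (not including) the first line
        # that consists only of whitespace-separated numbers.
        m = re.search(r"<pre>\s*(.*?)\s*(?=^\s*\d+(?:\s+\d+)+\s*$)",
                      page, re.DOTALL | re.MULTILINE)
        if m:
            print(m.group(1))
        # A Short Study of Notation Efficiency
        # CACM August, 1960
        # Smith Jr., H. J.
        # CA600802 JB March 20, 1978 9:02 PM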

    Read the article

  • How do I know if I'm being truly clever and not just "clever"?

    - by Covar
    If there's one thing I've learned from programming, it's that there are clever solutions to problems, and then there are "clever" solutions to problems. One is an intelligent solution to a difficult problem that results in improved efficiency and a better way to do something; the other will wind up on The Daily WTF and result in headaches and pain for anyone else involved. My question is: how do you distinguish between one and the other? How do you figure out if you've overthought the solution? How do you stop yourself from throwing away truly clever solutions, thinking they were merely "clever"?

    Read the article

  • DDD: Trying to code sorting and filtering as it pertains to Poco, Repository, DTO, and DAO using C#?

    - by Dr. Zim
    I get a list of items from my Repository. Now I need to sort and filter them, which I believe would be done in the Repository for efficiency. I think there would be two ways of doing this in a DDD way:

    1. Send a filter and a sort object full of conditions to the Repository (what is this called?).
    2. Have the Repository result produce an object with .filter and .sort methods (this wouldn't be a POJO/POCO because it contains more than one object?).

    So is the answer 1, 2, or other? Could you explain why? I am leaning toward #1 because the Repository would be able to send only the data I want (or would #2 be able to delay accessing data, like a LazyList?). A code example (or website link) would be very helpful. Example:

        Product product = repo.GetProducts(mySortObject, myFilterObject); // List of Poco
        product.AddFilter("price", "lessThan", "3.99");
        product.AddSort("price", "descending");

    Read the article

  • File Explorer using Java - how to go about it?

    - by user299988
    Hi, I am set to create a file explorer using Java. The aim is to emulate the behavior of the default explorer as closely as possible, whatever the underlying OS may be. I have done NO GUI programming in Java. I have looked up Swing, SWT and JFace, and I am beginning my project with this tutorial: http://www.ibm.com/developerworks/opensource/library/os-ecgui1/ I would like to know your opinions about the best approach to tackle this problem. If you could comment on complexity of coding, portability and OS-independence, and efficiency, it would be great. Is there anything else I should know? Do other approaches exist? Thanks a lot!

    Read the article

  • Rejigging a floating point equation ...

    - by Jamie
    I'd like to know if there is a way to improve the accuracy of calculating a slope. (This came up a few months back here.) It seems that changing:

        float get_slope(float dXa, float dXb, float dYa, float dYb)
        {
            return (dXa - dXb) / (dYa - dYb);
        }

    to

        float get_slope(float dXa, float dXb, float dYa, float dYb)
        {
            return dXa / (dYa - dYb) - dXb / (dYa - dYb);
        }

    might be an improvement. Suggestions? Edit: It's precision I'm after, not efficiency.

    Read the article

  • How do you write an idiomatic Scala Quicksort function?

    - by Don Mackenzie
    I recently answered a question with an attempt at writing a quicksort function in Scala; I'd seen something like the code below written somewhere.

        def qsort(l: List[Int]): List[Int] = {
          l match {
            case Nil => Nil
            case pivot :: tail =>
              qsort(tail.filter(_ < pivot)) ::: pivot :: qsort(tail.filter(_ >= pivot))
          }
        }

    My answer received some constructive criticism pointing out that List was a poor choice of collection for quicksort and, secondly, that the above wasn't tail recursive. I tried to re-write the above in a tail recursive manner but didn't have much luck. Is it possible to write a tail recursive quicksort? Or, if not, how can it be done in a functional style? Also, what can be done to maximise the efficiency of the implementation? Thanks in advance.

    Read the article

  • How efficient is an if statement compared to a test that doesn't use an if? (C++)

    - by Keand64
    I need a program to get the smaller of two numbers, and I'm wondering if using a standard "if x is less than y":

        int a, b, low;
        if (a < b)
            low = a;
        else
            low = b;

    is more or less efficient than this:

        int a, b, low;
        low = b + ((a - b) & ((a - b) >> 31));

    (or the variation of putting int delta = a - b at the top and replacing instances of a - b with that). I'm just wondering which one of these would be more efficient (or if the difference is too minuscule to be relevant), and about the efficiency of if-else statements versus alternatives in general.
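
    For what it's worth, the trick itself is easy to sanity-check. Python's integers make >> an arithmetic shift, so the 32-bit idiom carries straight over for values in range; a quick brute-force check (my sketch, not a benchmark):

        import random

        def branchless_min(a: int, b: int) -> int:
            # (a - b) >> 31 is all ones when a < b, all zeros otherwise
            # (valid while |a - b| < 2**31), so the mask keeps (a - b)
            # exactly when a is the smaller value.
            return b + ((a - b) & ((a - b) >> 31))

        for _ in range(100_000):
            a = random.randint(-2**30, 2**30)
            b = random.randint(-2**30, 2**30)
            assert branchless_min(a, b) == min(a, b)
        print("ok")

    Whether it beats the if on real hardware is a separate question: modern compilers typically turn the plain if/else into a branchless conditional move anyway, so measuring is the only honest answer.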

    Read the article

  • What's the proper way to use sqlite on the iPhone?

    - by Elliot Chen
    Hi, experts: can you please give some suggestions on using sqlite on the iPhone? Within my application, I use a sqlite DB to store all local data. Two methods can be used to retrieve those data at run time:

    1. Load all the data into memory at the initialization stage (more memory used, fewer DB open/close operations needed).
    2. Read the corresponding records when necessary and free the occupied memory after use (good memory habits, but many DB open/close operations needed).

    I prefer method 2, but I'm not sure whether too many DB opening/closing operations could affect the app's efficiency. Or do you think I can 'upgrade' method 2 by opening the DB when the app launches and closing it when the app quits? Thanks for your suggestions very much!
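
    The 'upgrade' in the question (one connection opened at launch, closed at quit, with lightweight per-query work in between) is the usual pattern. A rough analogy in Python's sqlite3 module, not iPhone code, with a made-up schema:

        import sqlite3

        # Open once at launch: repeatedly opening/closing per query mostly
        # buys you extra file I/O and cold page caches.
        conn = sqlite3.connect("local_data.db")
        conn.execute("CREATE TABLE IF NOT EXISTS cities (id INTEGER PRIMARY KEY, name TEXT)")

        def lookup(city_id):
            # Per-query cursors are cheap; the connection stays open.
            row = conn.execute("SELECT name FROM cities WHERE id = ?",
                               (city_id,)).fetchone()
            return row[0] if row else None

        # ... app runs, calling lookup() as needed ...

        conn.close()  # close once at quit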

    Read the article

  • Aggregate functions in ANSI SQL

    - by morpheous
    I want to use multiple aggregate functions in a query. All the examples I have seen of aggregate functions, however, are trivial. Typically, they are of the form:

        SELECT field1, agg_func1, agg_func2
        FROM SOME_TABLE
        GROUP BY SOME_COLUMNS
        HAVING agg_func1 OP SOME_SCALAR

    Where: OP is a boolean operator (e.g. <, = etc.) and SOME_SCALAR is a scalar (i.e. a constant number). What I want to know is if it is possible to write (in ANSI SQL) queries like:

        SELECT field1, agg_func1, agg_func2, agg_func3
        FROM SOME_TABLE
        GROUP BY SOME_COLUMNS
        HAVING (agg_func1 OP1 agg_func2) OP2 (agg_func2 OP3 agg_func3)

    Where: OP[N] are boolean operators or ANSI SQL clause operators like BETWEEN, LIKE, IN etc. Also, assuming this is possible (I have not seen any documentation saying otherwise), are there any efficiency/performance considerations (i.e. penalties) when the HAVING clause consists of a boolean expression combining the output of the aggregate functions, instead of the normal comparison of the output of an aggregate with a constant number (e.g. min(salary) > 100), which is used in the most banal examples involving aggregate functions?
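
    Aggregate-to-aggregate comparisons in HAVING are indeed legal SQL; here is a quick check against SQLite via Python (table and data invented for the demo):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE emp (dept TEXT, salary REAL);
            INSERT INTO emp VALUES
                ('eng', 50), ('eng', 120),
                ('ops', 60), ('ops', 65);
        """)

        # HAVING may combine several aggregates in one boolean expression.
        rows = conn.execute("""
            SELECT dept, MIN(salary), MAX(salary)
            FROM emp
            GROUP BY dept
            HAVING MAX(salary) > 2 * MIN(salary)
        """).fetchall()
        print(rows)  # [('eng', 50.0, 120.0)]

    As for performance, the aggregates are computed per group anyway, so comparing two of them in HAVING costs essentially nothing extra over comparing one against a constant.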

    Read the article

  • [Ruby] How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

        (0..9).sort_by{rand}.map{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle could be used to increase efficiency, but this code is sufficient for many purposes. My problem is that sort_by will transform the range into an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?
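
    One classic trick that fits "astronomically large" ranges is a full-period linear congruential generator: take the smallest power of two m >= the range size, pick constants meeting the Hull-Dobell conditions (c odd, a % 4 == 1 for a power-of-two modulus), step x = (a*x + c) % m, and discard outputs outside the range. Every value is then visited exactly once per period, in O(1) memory. A Python sketch (the constants are illustrative, and the order is only weakly scrambled, not a strong shuffle):

        def permuted_range(n, seed=0):
            """Yield 0..n-1 exactly once each, in scrambled order, O(1) memory."""
            m = 1
            while m < n:            # smallest power of two >= n
                m <<= 1
            a, c = 5, 2 * seed + 1  # Hull-Dobell => full period mod m
            x = seed % m
            for _ in range(m):
                if x < n:           # cycle-walk: skip values outside the range
                    yield x
                x = (a * x + c) % m

        print(list(permuted_range(10, seed=7)))
        # [7, 2, 9, 6, 0, 1, 4, 3, 5, 8]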

    Read the article

  • Base64 Encoded Data - DB or Filesystem

    - by Marty
    I have a new program that will be generating a lot of Base64-encoded audio and image data. This data will be served via HTTP in the form of XML, with the Base64 data inline. These files will most likely break 20MB and higher. Would it be more efficient to serve these files directly from the filesystem, or would it be feasible to store the data in a MySQL database? Caching will be set up but is largely unnecessary, because it is likely that this data will be purged shortly after it is created and served. I know that storing binary data in the DB is frowned upon in most circumstances, but since this will all be character data I want to see what the consensus is. As of now, I am leaning toward storing them in the filesystem for efficiency reasons, but if it is feasible to store them in a database it would be much easier to manage the data.

    Read the article

  • Where can I find soft-multiply and divide algorithms?

    - by srking
    I'm working on a micro-controller without hardware multiply and divide. I need to cook up software algorithms for these basic operations that strike a nice balance between compact size and efficiency. My C compiler port will employ these algos, not the C developers themselves. My google-fu is so far turning up mostly noise on this topic. Can anyone point me to something informative? I can use add/sub and shift instructions. Table-lookup-based algos might also work for me, but I'm a bit worried about cramming so much into the compiler's back-end... um, so to speak. Thanks!
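
    The canonical routines are shift-and-add multiply and restoring (shift-and-subtract) division; both need only add/sub and shifts. A Python sketch of the unsigned versions (sign handling, fixed word width, and any table-lookup speedups are left out):

        def soft_mul(a: int, b: int) -> int:
            """Unsigned multiply by shift-and-add."""
            result = 0
            while b:
                if b & 1:       # low bit of b set: add the shifted multiplicand
                    result += a
                a <<= 1         # next power-of-two multiple of a
                b >>= 1         # consume one bit of b
            return result

        def soft_divmod(n: int, d: int) -> tuple:
            """Unsigned restoring division: returns (quotient, remainder)."""
            assert d != 0
            q = r = 0
            for i in reversed(range(n.bit_length())):
                r = (r << 1) | ((n >> i) & 1)  # bring down the next bit of n
                q <<= 1
                if r >= d:
                    r -= d
                    q |= 1
            return q, r

        assert soft_mul(37, 19) == 703
        assert soft_divmod(703, 19) == (37, 0)
        assert soft_divmod(100, 7) == (14, 2)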

    Read the article

  • Make a usable Join relationship with LINQ on top of a database CSV design error

    - by jdk
    I'm looking for a way to fix and/or abstract away a comma-separated values (CSV) list in a database field in order to reconstruct a usable relationship, such that I can properly join the two tables below and query them using LINQ and its Join method. Following is a sample showing the Person table, whose CsvArticleIds field holds a CSV value representing a one-to-many association with Article records.

        TABLE [dbo].[Person]

        Id   Name    CsvArticleIds
        --   -----   -------------
        1    Joe     "15,22"
        5    Ed      "22"
        10   Arnie   "8,15,22"

    ^^^ (Of course a link table should have been created; nonetheless the relationship with articles is trapped inside that list of CSV values.)

        TABLE [dbo].[Article]

        Id   Title
        --   ----------
        8    Beginning C#
        15   A Historic look at Programming in the 90s
        22   Gardening in January

    Additional info:

    - The fix can be at any level: C#.NET or SQL Server.
    - Something easy, because I will be repeating the solution for many other CSV values in other tables. Elegant is nice too.
    - Not looking for efficiency, because this is part of a one-time data migration task and can take as long as it wants to run.
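
    Whatever layer the fix lands in, its shape is the same: explode the CSV into one (person, article) pair per id (the link table that should have existed) and then join normally; in LINQ that is a SelectMany followed by Join. A Python sketch of the idea, using the sample rows above:

        people = [(1, "Joe", "15,22"), (5, "Ed", "22"), (10, "Arnie", "8,15,22")]
        articles = {8: "Beginning C#",
                    15: "A Historic look at Programming in the 90s",
                    22: "Gardening in January"}

        # Explode CsvArticleIds into one (person, article_id) pair per id.
        pairs = [(pid, name, int(aid))
                 for pid, name, csv in people
                 for aid in csv.split(",")]

        # With the pairs in hand, an ordinary join works.
        for pid, name, aid in pairs:
            print(pid, name, "->", articles[aid])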

    Read the article

  • smallest perimeter rectangle with given integer area and integer sides

    - by remuladgryta
    Given an integer area A, how can one find integer sides w and h of a rectangle such that w*h = A and w+h is as small as possible? I'd rather the algorithm be simple than efficient (although within reasonable efficiency). What would be the best way to accomplish this? Finding the prime factors of A, then combining them in some way that tries to balance w and h? Finding the two squares with integer sides whose areas are closest to A and then somehow interpolating between them? Any other method I'm not thinking of?
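
    The simple route is a divisor scan: since w + A/w shrinks as w climbs toward sqrt(A), the first divisor found at or below the square root gives the minimal perimeter. A short sketch:

        import math

        def best_rectangle(area: int):
            """Integer sides (w, h), w <= h, with w * h == area and w + h minimal."""
            # Largest divisor of `area` not exceeding its square root.
            for w in range(math.isqrt(area), 0, -1):
                if area % w == 0:
                    return w, area // w

        print(best_rectangle(12))  # (3, 4)
        print(best_rectangle(17))  # (1, 17): a prime forces a thin rectangle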

    Read the article

  • FLEX: UIComponent Click Handler??

    - by davidemm
    Hello, thanks for reading. I'm trying to put a label and an image on a UIComponent, side by side. I want to use UIComponent for efficiency reasons, and the entire component is more complicated than just a button with an icon. I'm trying to get a click handler on the entire UIComponent so that when a click happens on either the label or the image, the same click event handler is called. If I add a click handler on the root component, it works, but when I introduce space between the label and the image, the space between them becomes un-clickable. I produce the space by using the move(x,y) method for the label and the image, with a small gap between the x/y coordinates. Any ideas how to remedy this? Thanks.

    Read the article

  • Possible to see what actual SQL queries Rails invokes when using console script?

    - by randombits
    Sometimes I like to pop open the console script that comes with Rails to test small excerpts of code. That code normally involves somewhat involved ActiveRecord queries. Although not an expert in ActiveRecord, I'm proficient with SQL and want to see what it translates to under the hood, for efficiency purposes. This will help me refactor or rethink how I'm writing my app if it looks inefficient. Now, when the query is in the actual application itself, it all shows up in the logs. Ad-hoc ActiveRecord queries in the console do not, though. Is there any way to change that behavior?

    Read the article

  • What's the proper way to use sqlite in Xcode?

    - by Elliot Chen
    Hi, experts: can you please give some suggestions on using sqlite in Xcode? Within my application, I use a sqlite DB to store all local data. Two methods can be used to retrieve those data at run time:

    1. Load all the data into memory at the initialization stage (more memory used, fewer DB open/close operations needed).
    2. Read the corresponding records when necessary and free the occupied memory after use (good memory habits, but many DB open/close operations needed).

    I prefer method 2, but I'm not sure whether too many DB opening/closing operations could affect the app's efficiency. Or do you think I can 'upgrade' method 2 by opening the DB when the app launches and closing it when the app quits? Thanks for your suggestions very much!

    Read the article

  • In what circumstances can large pages produce a speedup ?

    - by timday
    Modern x86 CPUs can support larger page sizes than the legacy 4K (i.e. 2MB or 4MB), and there are OS facilities (Linux, Windows) to access this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to use huge pages has resulted in some performance improvement. Anyone got any success stories? There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably because the number of TLB records needing copying is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also benefit more mundane applications, though.

    Read the article

  • C++ variable declaration

    - by swan
    Hello, I'm wondering if this code:

        int main() {
            int p;
            for (int i = 0; i < 10; i++) {
                p = ...;
            }
            return 0;
        }

    is exactly the same as this one:

        int main() {
            for (int i = 0; i < 10; i++) {
                int p = ...;
            }
            return 0;
        }

    in terms of efficiency? I mean, will the p variable be recreated 10 times in the second example?

    Read the article

  • Finding mySQL duplicates, then merging data

    - by Michael Pasqualone
    I have a mySQL database with a tad under 2 million rows. The database is non-interactive, so efficiency isn't key. The (simplified) structure I have is:

        `id` int(11) NOT NULL auto_increment
        `category` varchar(64) NOT NULL
        `productListing` varchar(256) NOT NULL

    Now the problem I would like to solve is: I want to find duplicates on the productListing field and merge the data in the category field into a single result, deleting the duplicates. So given the following data:

        +----+-----------+----------------+
        | id | category  | productListing |
        +----+-----------+----------------+
        | 1  | Category1 | productGroup1  |
        | 2  | Category2 | productGroup1  |
        | 3  | Category3 | anotherGroup9  |
        +----+-----------+----------------+

    What I want to end up with is:

        +----+---------------------+----------------+
        | id | category            | productListing |
        +----+---------------------+----------------+
        | 1  | Category1,Category2 | productGroup1  |
        | 3  | Category3           | anotherGroup9  |
        +----+---------------------+----------------+

    What's the most efficient way to do this, either in a pure mySQL query or in PHP?
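
    In MySQL itself this is usually a GROUP_CONCAT job: group on productListing, concatenate the categories, keep one id, then rebuild or update the table from that result. The same idea, demoed against SQLite from Python since SQLite's group_concat behaves the same way here:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE products (id INTEGER PRIMARY KEY,
                                   category TEXT, productListing TEXT);
            INSERT INTO products VALUES
                (1, 'Category1', 'productGroup1'),
                (2, 'Category2', 'productGroup1'),
                (3, 'Category3', 'anotherGroup9');
        """)

        # Collapse duplicates on productListing, merging the categories.
        merged = conn.execute("""
            SELECT MIN(id), GROUP_CONCAT(category), productListing
            FROM products
            GROUP BY productListing
        """).fetchall()
        print(merged)
        # e.g. [(3, 'Category3', 'anotherGroup9'),
        #       (1, 'Category1,Category2', 'productGroup1')]

    Writing the merged rows into a fresh table (INSERT ... SELECT) and swapping it in is usually simpler than deleting duplicates in place.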

    Read the article

  • Function chaining depending on boolean result

    - by Markive
    This is just an efficiency question really... I'm interested to know if there is a more efficient or logical way that people use to handle this sort of scenario. In my asp.net application I am running a script to generate a new project; my code at the top level looks like this:

        Dim ok As Boolean = True
        ok = createFolderStructure()
        If ok Then ok = createMDB()
        If ok Then ok = createProjectConfig()
        If ok Then ok = updateCompanyConfig()

    I create a boolean, and each function returns a boolean result; the next function in the chain will only run if the previous one was successful. I do this because an asp.net application will continue to run through the page life cycle unless there is an unhandled exception, and I don't want my whole application to be screwed up if something in the chain goes wrong (there is a lot of copying and deleting of files etc. in this example). I was just wondering how other people handle this scenario? The vb.net single-line if statement is quite succinct, but I'm wondering if there is a better way.
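
    One way to tidy the chain, sketched here in Python since the pattern is language-agnostic: put the steps in a list and stop at the first failure (the stub functions stand in for the real ones):

        def create_folder_structure(): return True   # stand-ins for the real steps
        def create_mdb(): return True
        def create_project_config(): return False    # simulate a failure here
        def update_company_config(): return True

        def run_steps(steps):
            """Run boolean-returning steps in order; stop at the first failure."""
            for step in steps:
                if not step():
                    return False  # later steps never run
            return True

        ok = run_steps([create_folder_structure, create_mdb,
                        create_project_config, update_company_config])
        print(ok)  # False: update_company_config was never called

    In VB.NET the equivalent one-liner is short-circuit AndAlso across the calls, which keeps the same "stop at first failure" semantics.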

    Read the article
