Search Results

Search found 9992 results on 400 pages for 'space efficiency'.


  • How do you write an idiomatic Scala Quicksort function?

    - by Don Mackenzie
    I recently answered a question with an attempt at writing a quicksort function in Scala; I'd seen something like the code below written somewhere.

        def qsort(l: List[Int]): List[Int] = l match {
          case Nil => Nil
          case pivot :: tail =>
            qsort(tail.filter(_ < pivot)) ::: pivot :: qsort(tail.filter(_ >= pivot))
        }

    My answer received some constructive criticism, pointing out first that List was a poor choice of collection for quicksort, and secondly that the above wasn't tail recursive. I tried to rewrite it in a tail-recursive manner but didn't have much luck. Is it possible to write a tail-recursive quicksort? Or, if not, how can it be done in a functional style? Also, what can be done to maximise the efficiency of the implementation? Thanks in advance.
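
    For what it's worth, here is one way to get the stack safety being asked about, sketched against the question's own code: quicksort makes two recursive calls, so it cannot be tail recursive in the direct sense, but the recursion can be replaced by an explicit worklist of pending sublists and pivots, leaving a single self-call in tail position. A sketch, not necessarily the idiomatic answer:

        import scala.annotation.tailrec

        def qsortTR(xs: List[Int]): List[Int] = {
          // work items: Right(sublist still to sort) or Left(pivot ready to emit)
          @tailrec
          def go(work: List[Either[Int, List[Int]]], acc: List[Int]): List[Int] =
            work match {
              case Nil                          => acc
              case Left(pivot) :: rest          => go(rest, pivot :: acc)
              case Right(Nil) :: rest           => go(rest, acc)
              case Right(pivot :: tail) :: rest =>
                val (smaller, larger) = tail.partition(_ < pivot)
                // 'larger' is scheduled first because 'acc' is built back to front
                go(Right(larger) :: Left(pivot) :: Right(smaller) :: rest, acc)
            }
          go(List(Right(xs)), Nil)
        }

    On the collection point, the usual advice stands: for raw speed, sort an Array in place (or call xs.sorted) rather than rebuilding immutable Lists.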


  • What's the proper way to use sqlite on the iPhone?

    - by Elliot Chen
    Hi, experts: can you please give some suggestions on using SQLite on the iPhone? Within my application I use a SQLite DB to store all local data. Two methods can be used to retrieve that data at run time:

    1. Load all the data into memory at the initialization stage (more memory used, fewer DB open/close operations needed).
    2. Read the corresponding records when necessary, and free the occupied memory after use (good memory habits, but many DB open/close operations needed).

    I prefer method 2, but I'm not sure whether that many DB opening/closing operations could affect the app's efficiency. Or do you think I can 'upgrade' method 2 by opening the DB when the app launches and closing it when the app quits? Thanks very much for your suggestions!


  • Aggregate functions in ANSI SQL

    - by morpheous
    I want to use multiple aggregate functions in a query. All the examples I have seen of aggregate functions, however, are trivial. Typically they are of the form:

        SELECT field1, agg_func1, agg_func2
        GROUP BY SOME_COLUMNS
        HAVING agg_func1 OP SOME_SCALAR

    where OP is a boolean operator (e.g. <, =, etc.) and SOME_SCALAR is a scalar (i.e. a constant number). What I want to know is whether it is possible to write (in ANSI SQL) queries like:

        SELECT field1, agg_func1, agg_func2, agg_func3
        GROUP BY SOME_COLUMNS
        HAVING (agg_func1 OP1 agg_func2) OP2 (agg_func2 OP3 agg_func3)

    where the OP[N] are boolean operators or ANSI SQL clause operators like BETWEEN, LIKE, IN, etc. Also, assuming this is possible (I have not seen any documentation saying otherwise), are there any efficiency/performance considerations (i.e. penalties) when the HAVING clause is a boolean expression combining the outputs of the aggregate functions, instead of the usual comparison of an aggregate's output with a constant (e.g. min(salary) > 100) found in the most banal examples involving aggregate functions?


  • [Ruby] How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range: each value is visited only once, and all values are eventually visited. For example:

        (0..9).sort_by{rand}.map{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle could be used to increase efficiency, but this code is sufficient for many purposes. My problem is that sort_by will transform the range into an array, which is not cool because I am working with astronomically large numbers; Ruby will quickly consume a large amount of RAM trying to create a monstrous array. This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried gains more entries. What sort of algorithm can accomplish what I am trying to do?
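
    One standard trick that fits these constraints is a full-period pseudo-random permutation: run a linear congruential generator whose modulus is the smallest power of two at or above the range size (the Hull-Dobell theorem guarantees a full period when the increment is odd and the multiplier is 1 mod 4), and skip any outputs that fall beyond the range ("cycle walking"). Every value then appears exactly once per cycle and memory stays O(1). A sketch in Scala; the constants are Knuth's MMIX pair, but any Hull-Dobell-satisfying pair works, and for ranges beyond 64 bits the same idea carries over to BigInt:

        def permutedRange(n: Long): Iterator[Long] = {
          require(n > 0 && n <= (1L << 62))  // avoid Long overflow of the modulus
          var m = 1L
          while (m < n) m <<= 1              // smallest power of two >= n
          val mask = m - 1
          val a = 6364136223846793005L       // multiplier: a % 4 == 1
          val c = 1442695040888963407L       // increment: odd
          var x = 0L                         // could be seeded anywhere in [0, m)
          var emitted = 0L
          val fullCycle = new Iterator[Long] {
            def hasNext: Boolean = emitted < m  // one period visits each residue once
            def next(): Long = { val v = x; x = (a * x + c) & mask; emitted += 1; v }
          }
          fullCycle.filter(_ < n)            // drop values outside the target range
        }

        // permutedRange(10L).foreach { x => f(x) }  for some f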


  • Base64 Encoded Data - DB or Filesystem

    - by Marty
    I have a new program that will generate a lot of Base64-encoded audio and image data. This data will be served via HTTP in the form of XML, with the Base64 data inline. These files will most likely exceed 20MB. Would it be more efficient to serve these files directly from the filesystem, or would it be feasible to store the data in a MySQL database? Caching will be set up, but it is largely unnecessary because this data will likely be purged shortly after it is created and served. I know that storing binary data in the DB is frowned upon in most circumstances, but since this will all be character data I want to see what the consensus is. As of now I am leaning toward the filesystem for efficiency reasons, but if storing the data in a database is feasible it would be much easier to manage.


  • Where can I find soft-multiply and divide algorithms?

    - by srking
    I'm working on a micro-controller without hardware multiply and divide. I need to cook up software algorithms for these basic operations that strike a nice balance between compact size and efficiency. These algorithms will be employed by my C compiler port, not by the C developers themselves. My google-fu is so far turning up mostly noise on this topic. Can anyone point me to something informative? I can use add/sub and shift instructions. Table-lookup-based algorithms might also work for me, but I'm a bit worried about cramming so much into the compiler's back end... um, so to speak. Thanks!
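
    The two textbook routines here are shift-and-add multiplication and shift-and-subtract (restoring) division, both built from nothing beyond add/sub, shifts, and compares. A sketch of both, written in Scala purely for illustration and assuming non-negative 32-bit operands (sign handling would be layered on top):

        // multiply: add the shifted multiplicand for each set bit of the multiplier
        def softMul(a: Int, b: Int): Int = {
          var acc = 0; var x = a; var y = b
          while (y != 0) {
            if ((y & 1) != 0) acc += x
            x <<= 1        // multiplicand doubles each round
            y >>>= 1       // inspect the next multiplier bit
          }
          acc
        }

        // restoring division: shift dividend bits into a running remainder,
        // subtracting the divisor whenever it fits; quotient bits fall out
        def softDivMod(n: Int, d: Int): (Int, Int) = {
          require(d > 0 && n >= 0)
          var q = 0; var r = 0
          var i = 31
          while (i >= 0) {
            r = (r << 1) | ((n >>> i) & 1)   // bring down the next dividend bit
            q <<= 1
            if (r >= d) { r -= d; q |= 1 }   // divisor fits: emit a 1 bit
            i -= 1
          }
          (q, r)                             // quotient and remainder
        }

    The multiply loop runs once per multiplier bit, so at most 32 add-and-shift steps; table-based variants trade ROM space for fewer iterations.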


  • smallest perimeter rectangle with given integer area and integer sides

    - by remuladgryta
    Given an integer area A, how can one find integer sides w and h of a rectangle such that w*h = A and w+h is as small as possible? I'd rather the algorithm be simple than efficient (although within reasonable efficiency). What would be the best way to accomplish this? Finding the prime factors of A, then combining them in some way that tries to balance w and h? Finding the two squares with integer sides whose areas are closest to A and then somehow interpolating between them? Any other method I'm not thinking of?
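
    A note on the simple route: since w + A/w decreases as w grows toward sqrt(A), the divisor pair closest to the square root has the smallest sum, so no factoring is needed; just scan down from floor(sqrt(A)) to the first divisor. A minimal sketch in Scala:

        def bestRectangle(area: Int): (Int, Int) = {
          require(area > 0)
          var w = math.sqrt(area.toDouble).toInt
          while (area % w != 0) w -= 1   // first divisor at or below sqrt(area)
          (w, area / w)                  // the balanced pair: w + h is minimal
        }

        // bestRectangle(12) == (3, 4); bestRectangle(17) == (1, 17)

    This costs O(sqrt(A)) divisions, which is fine for anything that fits in a machine word.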


  • Make a usable Join relationship with LINQ on top of a database CSV design error

    - by jdk
    I'm looking for a way to fix and/or abstract away a comma-separated values (CSV) list in a database field, in order to reconstruct a usable relationship so that I can properly join the two tables below and query them using LINQ and its Join method. The following sample shows the Person table, whose CsvArticleIds field holds a CSV value representing a one-to-many association with Article records.

        TABLE [dbo].[Person]
        Id  Name    CsvArticleIds
        --  ------  -------------
        1   Joe     "15,22"
        5   Ed      "22"
        10  Arnie   "8,15,22"

    (Of course a link table should have been created; nonetheless the relationship with articles is trapped inside that list of CSV values.)

        TABLE [dbo].[Article]
        Id  Title
        --  -----------------------------------------
        8   Beginning C#
        15  A Historic look at Programming in the 90s
        22  Gardening in January

    Additional info:

    • The fix can be at any level: C#/.NET or SQL Server.
    • Something easy, because I will be repeating the solution for many other CSV values in other tables. Elegant is nice too.
    • Efficiency does not matter, because this is part of a one-time data migration task and can take as long as it wants to run.
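
    One shape that works once the rows are in memory (in LINQ terms: SelectMany to explode the CSV column into id pairs, then Join on the article id): derive the missing link collection first, then join through it. The idea sketched in Scala, using the sample rows above:

        case class Person(id: Int, name: String, csvArticleIds: String)
        case class Article(id: Int, title: String)

        val people = List(
          Person(1, "Joe", "15,22"),
          Person(5, "Ed", "22"),
          Person(10, "Arnie", "8,15,22"))
        val articles = List(
          Article(8, "Beginning C#"),
          Article(15, "A Historic look at Programming in the 90s"),
          Article(22, "Gardening in January"))

        // reconstruct the link table: one (personId, articleId) pair per CSV entry
        val links: List[(Int, Int)] =
          people.flatMap(p => p.csvArticleIds.split(',').map(s => p.id -> s.trim.toInt))

        // join Person to Article through the derived pairs
        val byArticleId = articles.map(a => a.id -> a).toMap
        val personArticles: List[(Int, String)] =
          links.flatMap { case (pid, aid) => byArticleId.get(aid).map(a => pid -> a.title) }

    For the one-time migration, the same explode step can just as well populate a real link table and retire the CSV column for good.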


  • Possible to see what actual SQL queries Rails invokes when using console script?

    - by randombits
    Sometimes I like to pop open the console script that comes with Rails to test small excerpts of code. That code normally involves some of the more involved ActiveRecord queries. Although not an expert in ActiveRecord, I'm proficient with SQL and want to see what it translates to under the hood, for efficiency purposes. This will help me refactor or rethink how I'm writing my app if it looks inefficient. When the query is in the actual application itself, it all shows up in the logs; ad-hoc ActiveRecord queries in the console do not, though. Any way to change that behavior?



  • In what circumstances can large pages produce a speedup?

    - by timday
    Modern x86 CPUs can support larger page sizes than the legacy 4K (i.e. 2MB or 4MB), and there are OS facilities (Linux, Windows) for accessing this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to use huge pages has resulted in some performance improvement. Anyone got any success stories? There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably because the number of page table records needing to be copied is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also benefit more mundane applications, though.


  • c++ variable declaration

    - by swan
    Hello, I'm wondering if this code:

        int main() {
          int p;
          for (int i = 0; i < 10; i++) {
            p = ...;
          }
          return 0;
        }

    is exactly the same, in terms of efficiency, as this one:

        int main() {
          for (int i = 0; i < 10; i++) {
            int p = ...;
          }
          return 0;
        }

    I mean, will the p variable be recreated 10 times in the second example?


  • Finding mySQL duplicates, then merging data

    - by Michael Pasqualone
    I have a MySQL database with a tad under 2 million rows. The database is non-interactive, so efficiency isn't key. The (simplified) structure I have is:

        `id` int(11) NOT NULL auto_increment
        `category` varchar(64) NOT NULL
        `productListing` varchar(256) NOT NULL

    The problem I would like to solve is: find duplicates on the productListing field, merge the data in the category field into a single result, and delete the duplicates. So given the following data:

        +----+-----------+----------------+
        | id | category  | productListing |
        +----+-----------+----------------+
        |  1 | Category1 | productGroup1  |
        |  2 | Category2 | productGroup1  |
        |  3 | Category3 | anotherGroup9  |
        +----+-----------+----------------+

    what I want to end up with is:

        +----+---------------------+----------------+
        | id | category            | productListing |
        +----+---------------------+----------------+
        |  1 | Category1,Category2 | productGroup1  |
        |  3 | Category3           | anotherGroup9  |
        +----+---------------------+----------------+

    What's the most efficient way to do this, either in a pure MySQL query or in PHP?


  • Function chaining depending on boolean result

    - by Markive
    This is just an efficiency question, really. I'm interested to know if there is a more efficient or logical way people handle this sort of scenario. In my ASP.NET application I am running a script to generate a new project; my code at the top level looks like this:

        Dim ok As Boolean = True
        ok = createFolderStructure()
        If ok Then ok = createMDB()
        If ok Then ok = createProjectConfig()
        If ok Then ok = updateCompanyConfig()

    I create a boolean, each function returns a boolean result, and the next function in the chain runs only if the previous one was successful. I do this because an ASP.NET application will continue to run through the page life cycle unless there is an unhandled exception, and I don't want my whole application to be screwed up if something in the chain goes wrong (there is a lot of copying and deleting of files, etc. in this example). I was just wondering how other people handle this scenario? The VB.NET single-line If statement is quite succinct, but I'm wondering if there is a better way.
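
    For comparison, the short-circuit boolean operator already encodes "run the next step only if everything so far succeeded": AndAlso in VB.NET, && in most other languages. The same chain in that style, sketched in Scala with hypothetical stubs standing in for the question's functions:

        // stubs with the question's names, assumed to return success flags
        def createFolderStructure(): Boolean = true
        def createMDB(): Boolean = true
        def createProjectConfig(): Boolean = true
        def updateCompanyConfig(): Boolean = true

        val ok = createFolderStructure() &&
          createMDB() &&                 // runs only if the folders were created
          createProjectConfig() &&
          updateCompanyConfig()

    The usual caveat: a boolean chain reads well while the steps are a straight line, but once a step needs to report why it failed, exceptions or a result type scale better.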


  • C#. Struct design. Why is 16 bytes the recommended size?

    - by maxima120
    I read the Cwalina book (recommendations on the development and design of .NET apps). He says that a well-designed struct has to be less than 16 bytes in size (for performance purposes). My question is: why exactly is this? And (more importantly) can I have a larger struct with the same efficiency if I run my .NET 3.5 (soon to be .NET 4.0) 64-bit application on an i7 under Win7 x64 (is this limitation CPU/OS based)? Just to stress again: I need the most efficient struct possible. I try to keep it on the stack all the time; the application is heavily multi-threaded and runs at sub-millisecond intervals, and the current size of the struct is 64 bytes.


  • how to get a sub list from a list in ocaml

    - by romerun
    Hi, I'm looking at the List documentation. It seems the library does not provide a sublist function. I'm trying to get the list of elements from i to j. For now I write it as:

        let rec sublist list i j =
          if i > j then []
          else (List.nth list i) :: (sublist list (i+1) j)

    which is quite concise, but I'm questioning the efficiency of List.nth: if it's O(n), I would rather write it in a less concise way. I'm wondering why they didn't provide a List.sublist function, if List.nth is not O(1), because it's such a common operation.
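
    On the complexity point: OCaml's List.nth walks from the head, so it is indeed O(n), and the recursion above is O((j - i) * n) overall. The one-pass alternative is to drop the first i elements and then take the next j - i + 1, visiting each cell once. That shape, sketched in Scala (whose List mirrors OCaml's; the same recursion is a few lines of OCaml):

        def sublist[A](l: List[A], i: Int, j: Int): List[A] =
          l.drop(i).take(j - i + 1)   // a single left-to-right walk: O(j)

        // the same thing with the recursion written out
        def sublistRec[A](l: List[A], i: Int, j: Int): List[A] =
          if (j < i) Nil
          else l match {
            case Nil    => Nil
            case h :: t =>
              if (i > 0) sublistRec(t, i - 1, j - 1)  // still skipping the prefix
              else h :: sublistRec(t, 0, j - 1)       // inside [i, j]: keep the head
          }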


  • How to decrease front end development time in a company/team environment?

    - by metal-gear-solid
    How can we decrease front-end development time in a company/team environment? My company is asking me to suggest ideas for making the front-end development process faster. Some points I have noticed:

    • The main problem is that the client never provides the right information the first time, and many front-end developers work on the same project and the same CSS, so everyone sometimes uses his own method. This increases the time the process takes.
    • Graceful degradation and progressive enhancement both take time to think about and to develop. Should we plan for them, given that they increase the project cost?
    • How do you estimate, just from seeing a PSD, the time it will take to turn that PSD into cross-browser-compatible XHTML/CSS? Most of the time I quote less time than it actually takes.
    • Any other suggestions to improve work efficiency in a team (50 people) environment?


  • Partially parse C++ for a domain-specific language

    - by PierreBdR
    I would like to create a domain-specific language as an augmented C++ language. I will mostly need two types of constructs:

    • Top-level constructs for specialized types or declarations
    • In-code constructs, i.e. added primitives to make function calls or idioms easier

    The language will be used for scientific computing purposes and will ultimately be translated into plain C++. C++ has been chosen because it seems to offer a good compromise between ease of use, efficiency, and the availability of a wide range of libraries. A previous attempt using flex and bison failed due to the complexity of the C++ syntax; the existing parser can still fail on some constructs. So we want to start over on a better foundation. Do you know of similar projects? If you have attempted something like this, what tools would you use? What would be the main pitfalls? Do you have recommendations in terms of syntax?


  • How to store data to Excel from DataSet without going cell-by-cell?

    - by Jason Barnwell
    Duplicate of: What's the simplest way to import a System.Data.DataSet into Excel?

    Using C# under VS2008, we can create an Excel app, workbook, and worksheet fine by doing this:

        Application excelApp = new Application();
        Workbook excelWb = excelApp.Workbooks.Add(template);
        Worksheet excelWs = (Worksheet)this.Application.ActiveSheet;

    Then we can access each cell with excelWs.Cells[i,j] and write/save without problems. However, with large numbers of rows/columns, we expect a loss of efficiency. Is there a way to "data bind" from a DataSet object into the worksheet without using the cell-by-cell approach? Most of the methods we have seen revert at some point to the cell-by-cell approach. Thanks for any suggestions.


  • Looking for an array (vs linked list) hashtable implementation in C

    - by kingusiu
    Hi, I'm looking for a hashtable implementation in C that stores its objects in (two-dimensional) arrays rather than linked lists, i.e. if a collision happens, the object causing the collision is stored at the next free row index rather than pushed to the head of a linked list. In addition, the objects themselves must be copied into the hashtable, rather than referenced by pointers (the objects do not live for the whole lifetime of the program, but the table does). I know that such an implementation might have serious efficiency drawbacks and is not the "standard way of hashing", but as I work on a very special system architecture I need these characteristics.
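
    What's being described is open addressing with linear probing over flat storage. A minimal sketch of that scheme, written in Scala purely to show the shape (fixed capacity, fixed-size values copied into a flat array, and no deletion or resizing, all of which a real C implementation would add):

        final class FlatHashTable(capacity: Int, valueSize: Int) {
          private val keys = new Array[Long](capacity)
          private val used = new Array[Boolean](capacity)
          private val rows = new Array[Byte](capacity * valueSize) // one row per slot

          private def home(key: Long): Int = (key.hashCode & 0x7fffffff) % capacity

          // copies the value into the table; returns false when the table is full
          def put(key: Long, value: Array[Byte]): Boolean = {
            var i = home(key); var probes = 0
            while (probes < capacity) {
              if (!used(i) || keys(i) == key) {
                used(i) = true; keys(i) = key
                Array.copy(value, 0, rows, i * valueSize, valueSize) // copy, not pointer
                return true
              }
              i = (i + 1) % capacity  // collision: try the next row index
              probes += 1
            }
            false
          }

          def get(key: Long): Option[Array[Byte]] = {
            var i = home(key); var probes = 0
            while (probes < capacity && used(i)) {
              if (keys(i) == key)
                return Some(rows.slice(i * valueSize, (i + 1) * valueSize))
              i = (i + 1) % capacity
              probes += 1
            }
            None
          }
        }

    In C the same layout is a pair of parallel arrays (or one array of structs) plus a memcpy into the chosen row; the "next free row" rule is exactly linear probing.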


  • Insert data into table efficiently, postgresql

    - by Rowan_Gaffney
    I am new to PostgreSQL (and databases in general) and was hoping to get some pointers on improving the efficiency of the following statement. I am inserting data from one table into another and do not want to insert duplicate values. Each table has a rid (a unique identifier) that is indexed and is the primary key. I am currently using the following statement:

        INSERT INTO table1
        SELECT * FROM table2
        WHERE rid NOT IN (SELECT rid FROM table1);

    As of now, table1 is 200,000 records and table2 is 20,000 records. table1 is going to keep growing (probably to around 2,000,000), while table2 will stay at around 20,000 records. The statement currently takes about 15 minutes to run, and I am concerned that as table1 grows it is going to take far too long. Any suggestions?


  • Efficient retrieval of lists over WebServices

    - by Chris
    I have a WCF web service that uses LINQ and EF to connect to a SQL database, and an ASP.NET MVC front end that collects its data from the web service. The service currently has functions such as:

        List<Customer> GetCustomers();

    As the number of customers increases massively, the amount of data being passed increases as well, reducing efficiency. What is the best way to page data across web services and the like? My current idea is to implement a crude paging system such as:

        List<Customer> GetCustomers(int start, int length);

    This, however, means I would have to replicate such code for all functions returning List types. It is unfortunate that I cannot use LINQ here, as it would be much nicer. Does anyone have advice, or ideas for patterns to implement, that would be "as nice as possible"? Thanks
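
    One way to avoid replicating the (start, length) plumbing for every entity is to write the paging loop once as a higher-order helper and hand each service call to it. A sketch of that shape in Scala (fetch stands for any service method taking start and length; the C# analogue would be a Func<int, int, List<T>>):

        // lazily walks a paged data source, stopping at the first empty page
        def paged[T](fetch: (Int, Int) => List[T], pageSize: Int = 100): Iterator[T] =
          Iterator.from(0)
            .map(page => fetch(page * pageSize, pageSize))
            .takeWhile(_.nonEmpty)
            .flatten

        // usage, given  def getCustomers(start: Int, length: Int): List[Customer]
        //   paged(getCustomers).foreach(handleCustomer)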


  • Performance implications of using a variable versus a magic number

    - by Nathan
    I'm often confused by this. I've always been taught to name numbers I use often with variables or constants, but if doing so reduces the efficiency of the program, should I still do it? Here's an example:

        private int CenterText(Font font, PrintPageEventArgs e, string text)
        {
            int recieptCenter = 125;
            int stringLength = Convert.ToInt32(e.Graphics.MeasureString(text, font));
            return recieptCenter - stringLength / 2;
        }

    The above code uses named variables but runs slower than this code:

        private int CenterText(Font font, PrintPageEventArgs e, string text)
        {
            return 125 - Convert.ToInt32(e.Graphics.MeasureString(text, font) / 2);
        }

    In this example the difference in execution time is minimal, but what about in larger blocks of code?


  • Making simple tabs in android

    - by user2910566
    I am new to Android. I am making a tab activity that has 3 tabs in it. Reading some interesting articles, I came across three ways tabs can be made:

    1. A regular TabHost
    2. Simple Fragments
    3. ActionBarSherlock

    I have a set of questions:

    • Which is the better choice, and why?
    • Which gives more flexibility, efficiency, and performance?
    • Which would be the preferred choice if requirements change in the future?

    My research indicates that ActionBarSherlock is better. Is there something better than this? If so, what is it?


  • Which DB Server should I use?

    - by Alex
    I have to develop a new (desktop) app for a small business. This business currently has an Access database with millions of records; the file size is about 1.5 GB. The boss told me that searching this DB is very slow. The DB consists of a single table with about 20 fields, and I also think the overall DB design isn't great. I thought of using another DB server with a new design to improve both performance and efficiency. Considering this is a relatively small business, I don't want to spend much on a DB license, so I want to ask what you would do:

    1. Continue to use Access, maybe improving and optimizing the DB in some way
    2. Buy a DB server license (in this case, which one?)
    3. ? (any idea?)

