Search Results

Search found 9992 results on 400 pages for 'space efficiency'.


  • more efficient way to pickle a string

    - by gatoatigrado
    The pickle module seems to use string escape characters when pickling; this becomes inefficient, e.g. on numpy arrays. Consider the following:

        z = numpy.zeros(1000, numpy.uint8)
        len(z.dumps())                  # 1133
        len(cPickle.dumps(z.dumps()))   # 4249

    The lengths are 1133 and 4249 characters respectively. z.dumps() reveals something like "\x00\x00" (actual zero bytes in the string), but pickle seems to be using the string's repr() function, yielding "'\x00\x00'" (the zeros becoming ASCII escape text). That is, ("0" in z.dumps()) == False while ("0" in cPickle.dumps(z.dumps())) == True.
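
    A binary pickle protocol avoids the repr-style escaping entirely. A minimal sketch, assuming numpy and the standard pickle module:

        import pickle
        import numpy

        z = numpy.zeros(1000, numpy.uint8)

        # Protocol 0 (the historical default) writes printable ASCII and escapes
        # raw bytes; protocol 2 is binary and stores the buffer nearly verbatim.
        text_pickle = pickle.dumps(z, protocol=0)
        binary_pickle = pickle.dumps(z, protocol=2)

        print(len(text_pickle), len(binary_pickle))  # the binary form is far smaller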


  • Most efficient way to fetch and output Content with 2-Level Comments?

    - by awegawef
    I have some content with up to two levels of replies, and I am wondering about the most efficient way to fetch and output those replies. I should note that I am planning on storing the comments with fields content_id and reply_to, where reply_to refers to the comment being replied to (if any). Any criticism of this design is welcome. In pseudo-code(-ish), my first attempt would be:

        # in outputting content CONTENT_ID
        all_comments = fetch all comments where content_id == CONTENT_ID
        root_comments = filter all_comments with reply_to == None
        children_comments = filter all_comments with reply_to != None
        output_comments = list()
        for each root_comment:
            children = filter children_comments with reply_to == root_comment.id
            output_comments.append( (root_comment, children) )
        send output_comments to template

    Is this the best way to do this? Thanks in advance. Edit: On second thought, I'll want to preserve date order on the comments, so I'll have to do this a bit differently, or at least sort the comments afterward.
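
    A dictionary keyed on reply_to turns the per-root filtering into a single O(n) pass. A sketch in Python (the field names are hypothetical, and comments are assumed to arrive already sorted by date):

        def group_comments(all_comments):
            # all_comments: objects with .id and .reply_to (None for roots),
            # already sorted by date
            children_by_parent = {}
            roots = []
            for c in all_comments:
                if c.reply_to is None:
                    roots.append(c)
                else:
                    children_by_parent.setdefault(c.reply_to, []).append(c)
            # pair each root with its (date-ordered) replies
            return [(root, children_by_parent.get(root.id, [])) for root in roots]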


  • Algorithm to pick values from set to match target value?

    - by CSharperWithJava
    I have a fixed array of constant integer values about 300 items long (set A). The goal of the algorithm is to pick two numbers (X and Y) from this array that fit several criteria based on an input R. Formal requirement: pick values X and Y from set A such that the expression X*Y/(X+Y) is as close as possible to R. That's all there is to it. I need a simple algorithm that will do that. Additional info: set A can be ordered or stored in any way; it will be hard-coded eventually. Also, with a little bit of math, it can be shown that the best Y for a given X is the value in set A closest to X*R/(X-R). Also, X and Y will always be greater than R. From this, I get a simple iterative algorithm that works OK:

        int minX = 100000000;
        int minY = 100000000;
        foreach X in A
            if (X <= R) continue;
            Y = X*R/(X-R)
            Y = FindNearestIn(A, Y)  // search for the closest usable Y value in A
            // compare distance to R, not the raw expression values
            if ( abs(X*Y/(X+Y) - R) < abs(minX*minY/(minX+minY) - R) ) then
                minX = X
                minY = Y
            end
        end

    I'm looking for a slightly more elegant approach than this brute-force method. Suggestions?
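
    If set A is kept sorted, the closest Y can be found by binary search instead of a linear scan, giving O(n log n) overall. A sketch of that refinement in Python (bisect gives the insertion point; the true nearest value is one of its two neighbours):

        import bisect

        def best_pair(A, R):
            A = sorted(A)
            best = None  # (error, X, Y)
            for x in A:
                if x <= R:
                    continue  # a valid Y only exists for X > R
                ideal_y = x * R / (x - R)  # best Y for this X, from the math above
                pos = bisect.bisect_left(A, ideal_y)
                for y in A[max(pos - 1, 0):pos + 1]:  # neighbours of the insertion point
                    err = abs(x * y / (x + y) - R)
                    if best is None or err < best[0]:
                        best = (err, x, y)
            return None if best is None else (best[1], best[2])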


  • Set aside space for an ADT when reading integers from a file

    - by That Guy
    I'm using C to make a maze solver. The program should be able to read a maze from a text file containing a grid of 1s and 0s representing walls and paths. This file could be of any size, as the user selects which maze to use. The program should then show the maze being solved; as it is solved, it should show where has been walked and how many steps have been taken. I have made an ADT called Cell containing a bool (wall or path) and an integer (steps taken). I now need to populate a 2D array of Cells, which means I need to set aside enough space to store a Cell for every integer in the maze file. What would be the best way to do this?
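
    In C the usual answer is two passes: scan the file once to count rows and columns, then malloc rows * cols Cells and fill them on the second pass. As an illustration of the same data flow in Python, where lists grow dynamically and the sizing problem disappears (Cell modelled here as a dict with hypothetical field names):

        def load_maze(path):
            grid = []
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        continue
                    # one Cell per character: a wall flag plus a step counter
                    grid.append([{"is_wall": ch == "1", "steps": 0} for ch in line])
            return grid  # grid[row][col] is one Cell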


  • How to reduce virtual memory by optimising my PHP code?

    - by iCeR
    My current code (see below) uses 147MB of virtual memory! My provider has allocated 100MB by default and the process is killed once run, causing an internal error. The code uses curl_multi and must be able to loop over more than 150 iterations while still minimizing virtual memory. The code below is set at only 150 iterations and still causes the internal server error; at 90 iterations the issue does not occur. How can I adjust my code to lower the resource use / virtual memory? Thanks!

        <?php
        function udate($format, $utimestamp = null) {
            if ($utimestamp === null)
                $utimestamp = microtime(true);
            $timestamp = floor($utimestamp);
            $milliseconds = round(($utimestamp - $timestamp) * 1000);
            return date(preg_replace('`(?<!\\\\)u`', $milliseconds, $format), $timestamp);
        }

        $url = 'https://www.testdomain.com/';
        $curl_arr = array();
        $master = curl_multi_init();

        for ($i = 0; $i < 150; $i++) {
            $curl_arr[$i] = curl_init();
            curl_setopt($curl_arr[$i], CURLOPT_URL, $url);
            curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYHOST, FALSE);
            curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYPEER, FALSE);
            curl_multi_add_handle($master, $curl_arr[$i]);
        }

        do {
            curl_multi_exec($master, $running);
        } while ($running > 0);

        for ($i = 0; $i < 150; $i++) {
            $results = curl_multi_getcontent($curl_arr[$i]);
            $results = explode("<br>", $results);
            echo $results[0];
            echo "<br>";
            echo $results[1];
            echo "<br>";
            echo udate('H:i:s:u');
            echo "<br><br>";
            usleep(100000);
        }
        ?>

    Server details:

        Processors: 8 x Intel(R) Xeon(R) CPU E5405 @ 2.00GHz (1995.120 MHz, 6144 KB cache)
        Memory: 8302344k/9175040k available (2176k kernel code, 80272k reserved, 901k data, 228k init, 7466304k highmem)
        System: Linux server3.server.com 2.6.18-194.17.1.el5PAE #1 SMP Wed Sep 29 13:31:51 EDT 2010 i686 i686 i386 GNU/Linux
        Disks: SCSI device sda: 1952448512 512-byte hdwr sectors (999654 MB), write-back cache; removable disk sdb

        Current memory usage:
                     total       used       free     shared    buffers     cached
        Mem:       8306672    7847384     459288          0     487912    6444548
        -/+ buffers/cache:     914924    7391748
        Swap:      4095992        496    4095496
        Total:    12402664    7847880    4554784

        Current disk usage:
        Filesystem                        Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00   898G  307G  546G  36% /
        /dev/sda1                          99M   19M   76M  20% /boot
        none                              4.0G     0  4.0G   0% /dev/shm
        /var/tmpMnt                       4.0G  1.8G  2.0G  48% /tmp
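
    A common fix is to run the requests in fixed-size batches, so only a bounded number of handles and response bodies is alive at any moment; peak memory then scales with the batch size rather than the total iteration count. A language-agnostic sketch of the idea in Python (fetch() is a hypothetical stand-in for the curl-multi machinery):

        def fetch_all(urls, batch_size=30):
            for start in range(0, len(urls), batch_size):
                batch = urls[start:start + batch_size]
                responses = [fetch(u) for u in batch]  # hypothetical fetch(url) -> body
                for body in responses:
                    print(body.split("<br>")[0])
                # this batch's responses become collectable here, so peak memory
                # is proportional to batch_size, not the total number of requests

    In the PHP above, the equivalent is to add and exec only a batch of handles at a time, then call curl_multi_remove_handle() and curl_close() on each one before starting the next batch, so their buffers are actually freed.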


  • What is the lightest way to make a huge chess-like grid?

    - by Sotkra
    Hey there. I'm working on a browser game and I can't help but wonder what's the lightest way to make the grid/board on which the game takes place. Right now, as a mere sample, I'll show you this: http://sotkra.com/game/ As the grid gets bigger and bigger, the table and its td's produce a very heavy page, which in turn sucks up more resources from the browser engine and computer. So, is a table with td's the most lightweight way to craft a huge grid-like board, or is there something lighter that you recommend? Cheers, Sotkra


  • Correct way to add objects to an ArrayList

    - by ninjasense
    I am trying to add objects to an ArrayList, but when I view the results it keeps adding the same object over and over. I was wondering what the correct way to implement this would be.

        public static ArrayList<Person> parsePeople(String responseData) {
            ArrayList<Person> people = new ArrayList<Person>();
            try {
                JSONArray jsonPeople = new JSONArray(responseData);
                if (!jsonPeople.isNull(0)) {
                    for (int i = 0; i < jsonPeople.length(); i++) {
                        // append to the list instance, not the Person class
                        people.add(new Person(jsonPeople.getJSONObject(i)));
                    }
                }
            } catch (JSONException e) {
                e.printStackTrace();
            } catch (Exception e) {
            }
            return people;
        }

    I have double-checked my JSONArray data and made sure there are no duplicates. It seems to keep adding the first object over and over.


  • What is the best file format to parse?

    - by anxiety
    Scenario: I'm working on a Rails app that will take data entry in the form of uploaded text-based files. I need to parse these files before importing the data. I can choose the file type uploaded to the app; the software used by those uploading has several export options regarding file type. While it may be insignificant, I was wondering if there is a specific file type that is most efficiently parsed. I believe this question can be viewed as language-independent. (While XML is commonly parsed, it is not a feasible file type for the sake of this project.)


  • How can I make Emacs start-up faster?

    - by Colin
    I use Emacs v. 22 (the console version, either remotely with PuTTY or locally with Konsole) as my primary text editor on Linux. It takes a while to load up each time I start it though, probably almost a second, although I never timed it. I tend to open and close Emacs a lot, because I'm more comfortable using the Bash command-line for file/directory manipulation and compiling. How can I speed up the start-up time?


  • efficiently trimming postgresql tables

    - by agilefall
    I have about 10 tables with over 2 million records and one with 30 million. I would like to efficiently remove older data from each of these tables. My general algorithm is:

        1. create a temp table for each large table and populate it with the newer data
        2. truncate the original tables
        3. copy the tmp data back to the original tables using:
           insert into originaltable (select * from tmp_table)

    However, the last step of copying the data back is taking longer than I'd like. I thought about deleting the original tables and making the temp tables "permanent", but I would lose constraint/foreign-key info. If I delete from the tables directly, it takes much longer. Given that I need to preserve all foreign keys and constraints, are there any faster ways of removing the older data? Thanks.
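
    Much of the copy-back cost is usually index maintenance during the bulk insert; a standard speed-up is to drop the indexes first, insert, then rebuild them once at the end. A sketch with psycopg2 (the connection string, table, index and column names are all hypothetical):

        import psycopg2

        conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
        cur = conn.cursor()
        # rebuild the index once after the bulk load instead of row by row
        cur.execute("DROP INDEX IF EXISTS events_created_idx")
        cur.execute("TRUNCATE events")
        cur.execute("INSERT INTO events SELECT * FROM tmp_events")
        cur.execute("CREATE INDEX events_created_idx ON events (created_at)")
        conn.commit()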


  • read files from directory and filter files from Java

    - by Adnan
    The following code goes through all directories and sub-directories and outputs just the .java files:

        import java.io.File;

        public class DirectoryReader {

            private static String extension = "none";
            private static String fileName;

            public static void main(String[] args) {
                String dir = "C:/tmp";
                File aFile = new File(dir);
                ReadDirectory(aFile);
            }

            private static void ReadDirectory(File aFile) {
                if (aFile.isDirectory()) {
                    File[] listOfFiles = aFile.listFiles();
                    if (listOfFiles != null) {
                        for (int i = 0; i < listOfFiles.length; i++) {
                            if (listOfFiles[i].isFile()) {
                                fileName = listOfFiles[i].toString();
                                int dotPos = fileName.lastIndexOf(".");
                                if (dotPos > 0) {
                                    extension = fileName.substring(dotPos);
                                }
                                if (extension.equals(".java")) {
                                    System.out.println("FILE:" + listOfFiles[i]);
                                }
                            }
                            if (listOfFiles[i].isDirectory()) {
                                ReadDirectory(listOfFiles[i]);
                            }
                        }
                    }
                }
            }
        }

    Is this efficient? What could be done to increase the speed? All ideas are welcome.
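
    For comparison, the traversal is a single library call in Python; os.walk does the recursion, and a suffix test replaces the manual lastIndexOf bookkeeping. On a directory tree, run time is dominated by disk I/O rather than this logic, so there is little to micro-optimize:

        import os

        def find_java_files(root):
            for dirpath, dirnames, filenames in os.walk(root):
                for name in filenames:
                    if name.endswith(".java"):
                        yield os.path.join(dirpath, name)

        for path in find_java_files("C:/tmp"):
            print("FILE:", path)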


  • What's the best way to write to more files than the kernel allows open at a time?

    - by Elpezmuerto
    I have a very large binary file and I need to create separate files based on the id within the input file. There are 146 output files and I am using cstdlib and fopen and fwrite. FOPEN_MAX is 20, so I can't keep all 146 output files open at the same time. I also want to minimize the number of times I open and close an output file. How can I write to the output files effectively? I also must use the cstdlib library due to legacy code.
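
    One workable pattern is a small pool of open handles with least-recently-used eviction: keep a few less than FOPEN_MAX files open, and when the pool is full, close the stalest one and reopen it later in append mode. A sketch of the idea in Python (the same structure ports to fopen/fclose over an array of slots in C):

        from collections import OrderedDict

        class FilePool:
            def __init__(self, max_open=16):   # stay safely under FOPEN_MAX
                self.max_open = max_open
                self.handles = OrderedDict()   # path -> file, oldest first

            def write(self, path, data):
                f = self.handles.pop(path, None)
                if f is None:
                    if len(self.handles) >= self.max_open:
                        _, oldest = self.handles.popitem(last=False)  # evict LRU
                        oldest.close()
                    f = open(path, "ab")       # append: earlier writes survive reopening
                self.handles[path] = f         # re-insert as most recently used
                f.write(data)

            def close_all(self):
                for f in self.handles.values():
                    f.close()
                self.handles.clear()

    Buffering records per id and flushing them in large chunks further reduces how often any file has to be reopened.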


  • Is there any way to make this JavaScript tab completion script more efficient?

    - by Saladin Akara
    This code is to be integrated into an AJAX chat system to enable tab auto-completion of user names:

        var usernames = new Array();
        usernames[0] = "Saladin";
        usernames[1] = "Jyllaby";
        usernames[2] = "CadaverKindler";
        usernames[3] = "qbsuperstar03";

        var text = "Text and something else q";

        // Start of the script to be imported
        var searchTerm = text.slice(text.lastIndexOf(" ") + 1);
        var i;
        for (i = 0; i < usernames.length && usernames[i].substr(0, searchTerm.length) != searchTerm; i++);
        // End of the script to be imported

        document.write(usernames[i]);

    A couple of notes: the array of usernames and the text variable would both be loaded from the chat itself via AJAX (which, unfortunately, I don't know how to do yet), and the final output will be handled by AJAX as well. Is there a more efficient way to do this? Also, any tips on how to handle multiple instances of the searchTerm being found?
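
    Keeping the username list sorted helps on both counts: a binary search finds the first candidate, and scanning forward from there collects every name sharing the prefix, so repeated Tab presses can cycle through the matches. A sketch of the search in Python (the chat integration is out of scope here):

        import bisect

        def complete(usernames, text):
            term = text.rsplit(" ", 1)[-1]           # word currently being typed
            names = sorted(usernames)
            start = bisect.bisect_left(names, term)  # first name >= term
            matches = []
            for name in names[start:]:
                if not name.startswith(term):
                    break                            # sorted: later names can't match
                matches.append(name)
            return matches

        print(complete(["Saladin", "Jyllaby", "qbsuperstar03"], "Text and q"))
        # -> ['qbsuperstar03']

    This is case-sensitive as written; normalize both sides with str.lower() if the chat ignores case.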


  • FacesMessages and rich:effect?

    - by user331747
    I'd like to be able to make an Ajax call using JSF/Seam/RichFaces and have the page update with the relevant h:messages component. That works with no problem. I'm able to perform the appropriate reRender. However, I'd also like to be able to make use of rich:effect to make it a bit prettier. Ideally, I'd like to be able to have the messages fade in and then disappear when the user clicks on them. However, I've been unable to get this working thus far. Has anyone gotten such a scenario working? Does anyone who knows JSF/Seam a bit better than me have any good advice? Thanks in advance!



  • What is the most efficient algorithm for reversing a String in Java?

    - by Hultner
    I am wondering which way of reversing a string in Java is the most efficient. Should I use some sort of xor method? The easy way would be to push all the chars onto a stack and pop them back into a string, but I doubt that's a very efficient way to do it. And please do not tell me to use some built-in function in Java; I am interested in learning how to do it, not in using an efficient function without knowing why it's efficient or how it's built up.
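
    The standard answer is an in-place two-pointer swap over a mutable character buffer: O(n) time, n/2 swaps, no extra stack. This is essentially what a char[]-based reversal (or StringBuilder.reverse()) does in Java; the algorithm itself, sketched in Python:

        def reverse_string(s):
            chars = list(s)        # strings are immutable; copy into a mutable buffer
            i, j = 0, len(chars) - 1
            while i < j:
                chars[i], chars[j] = chars[j], chars[i]  # swap the outermost pair
                i += 1
                j -= 1
            return "".join(chars)

        assert reverse_string("stack") == "kcats"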


  • when to use StringBuilder in java

    - by kostja
    It is supposed to be generally preferable to use a StringBuilder for string concatenation in Java. Is that always the case? What I mean is: is the overhead of creating a StringBuilder object, calling the append() method and finally toString() already smaller than concatenating existing strings with + for two strings, or is it only advisable for more strings? If there is such a threshold, what does it depend on (the string length, I suppose, but in which way)? And finally: would you trade the readability and conciseness of + concatenation for the performance of the StringBuilder in smaller cases like two, three or four strings?


  • Making an efficient algorithm

    - by James P.
    Here's my recent submission for the FB programming contest (the qualifying round only requires uploading program output, so source code doesn't matter). The objective is to find two squares that add up to a given value. I've left it as it is, as an example. It does the job but is too slow for my liking. Here are the points that are obviously eating up time:

        1. The list of squares is being recalculated for each call of getNumOfDoubleSquares(). It could be precalculated or extended when needed.
        2. Both squares are being checked for, when it is only necessary to check for one (complements).
        3. There might be a more efficient way than a double-nested loop to find pairs.

    Other suggestions? Besides this particular problem, what do you look for when optimizing an algorithm?

        public static int getNumOfDoubleSquares(Integer target) {
            int num = 0;
            ArrayList<Integer> squares = new ArrayList<Integer>();
            ArrayList<Integer> found = new ArrayList<Integer>();
            int squareValue = 0;
            for (int j = 0; squareValue <= target; j++) {
                squares.add(j, squareValue);
                squareValue = (int) Math.pow(j + 1, 2);
            }
            int squareSum = 0;
            System.out.println("Target=" + target);
            for (int i = 0; i < squares.size(); i++) {
                int square1 = squares.get(i);
                for (int j = 0; j < squares.size(); j++) {
                    int square2 = squares.get(j);
                    squareSum = square1 + square2;
                    if (squareSum == target && !found.contains(square1)
                            && !found.contains(square2)) {
                        found.add(square1);
                        found.add(square2);
                        System.out.println("Found !" + square1 + "+" + square2 + "=" + squareSum);
                        num++;
                    }
                }
            }
            return num;
        }
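
    Exploiting the complement observation removes the inner loop entirely: for each a with 2*a*a <= target, test whether target - a*a is itself a perfect square. A sketch in Python, O(sqrt(target)) per query (math.isqrt needs Python 3.8+):

        import math

        def count_double_squares(target):
            count = 0
            a = 0
            while 2 * a * a <= target:   # enforce a <= b so each pair is counted once
                rest = target - a * a
                b = math.isqrt(rest)
                if b * b == rest:
                    count += 1           # target == a*a + b*b
                a += 1
            return count

        print(count_double_squares(25))  # 2: 0*0 + 5*5 and 3*3 + 4*4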


  • Is it faster to count down than it is to count up?

    - by Bob
    Our computer science teacher once said that, for some reason, it is more efficient to count down than to count up. For example, if you need to use a FOR loop and the loop index is not used somewhere (like printing a line of N *'s to the screen), I mean that code like this:

        for (i = N; i >= 0; i--)
            putchar('*');

    is better than:

        for (i = 0; i < N; i++)
            putchar('*');

    Is it really true? And if so, does anyone know why?


  • Most Efficient Way to Write to Fixed Width File (Ruby)

    - by Ruby Novice
    I'm currently working with extremely large fixed-width files, sometimes well over a million lines. I have written a method that can write over the files based on a set of parameters, but I think there has to be a more efficient way to accomplish this. The current code I'm using is:

        def self.writefiles(file_name, positions, update_value)
          @file_name = file_name
          @positions = positions.to_i
          @update_value = update_value
          line_number = 0
          @file_contents = File.open(@file_name, 'r').readlines
          while line_number < @file_contents.length
            @read_file_contents = @file_contents[line_number]
            @read_file_contents[@positions] = @update_value
            @file_contents[line_number] = @read_file_contents
            line_number += 1
          end
          write_over_file = File.new(@file_name, 'w')
          line_number = 0
          while line_number < @file_contents.length
            write_over_file.write @file_contents[line_number]
            line_number += 1
          end
          write_over_file.close
        end

    For example, if position 25 in the file indicated that it is an original file, the value would be set to "O", and if I wanted to replace that value I would use ClassName.writefiles(filename, 140, "X") to change this position on each line. Any help on making this method more efficient would be greatly appreciated! Thanks
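
    Reading the whole file into memory is the main cost; streaming line by line into a second file and swapping it into place keeps memory flat no matter how large the input is. The same column-patch logic, sketched in Python (the file name in the usage line is hypothetical):

        import os

        def patch_column(file_name, position, update_value):
            tmp_name = file_name + ".tmp"
            with open(file_name) as src, open(tmp_name, "w") as dst:
                for line in src:                 # one line in memory at a time
                    if len(line) > position:
                        line = line[:position] + update_value + line[position + 1:]
                    dst.write(line)
            os.replace(tmp_name, file_name)      # swap the rewritten file into place

        patch_column("records.txt", 140, "X")

    In Ruby, File.foreach gives the same line-at-a-time streaming without ever calling readlines.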


  • C++: Is windows.h generally an efficient code library?

    - by Alerty
    I have heard some people complaining about including the Windows header file in a C++ application; they mentioned that it is inefficient. Is this just an urban legend, or are there really some hard facts behind it? In other words, if you believe it is efficient or inefficient, please explain how this can be, with facts. I am no C++ Windows programming guru, so detailed explanations would really be appreciated.


  • Is scala functional programming slower than traditional coding?

    - by Fred Haslam
    In one of my first attempts to create functional code, I ran into a performance issue. I started with a common task: multiply the elements of two arrays and sum up the results.

        var first: Array[Float] = ...
        var second: Array[Float] = ...
        var sum = 0f
        for (ix <- 0 until first.length)
          sum += first(ix) * second(ix)

    Here is how I reformed the work:

        sum = first.zip(second).map { case (a, b) => a * b }.reduceLeft(_ + _)

    When I benchmarked the two approaches, the second method took 40 times as long to complete! Why does the second method take so much longer? How can I reform the work to be both speed-efficient and in functional programming style?


  • Faking a Single Address Space

    - by dsimcha
    I have a large scientific computing task that parallelizes very well with SMP, but at too fine-grained a level to be easily parallelized via explicit message passing. I'd like to parallelize it across address spaces and physical machines. Is it feasible to create a scheduler that would parallelize already multithreaded code across multiple physical computers, under the following conditions:

        1. The code is already multithreaded and can scale pretty well on SMP configurations.
        2. The fact that not all of the threads are running in the same address space or on the same physical machine must be transparent to the program, even if this comes at a significant performance penalty in some use cases.
        3. You may assume that all of the physical machines involved are running operating systems and CPU architectures that are binary compatible.
        4. Things like locks and atomic operations may be slow (having network latency to deal with and all) but must "just work".

