Search Results

Search found 4580 results on 184 pages for 'faster'.

  • What is a faster way of merging the values of this Python structure into a single dictionary?

    - by jcoon
    I've refactored how the merged dictionary (all_classes) below is created, but I'm wondering if it can be more efficient. I have a dictionary of dictionaries, like this:

        groups_and_classes = {'group_1': {'class_A': [1, 2, 3],
                                          'class_B': [1, 3, 5, 7],
                                          'class_c': [1, 2],
                                          # ...many more items like this
                                          },
                              'group_2': {'class_A': [11, 12, 13],
                                          'class_C': [5, 6, 7, 8, 9]
                                          },
                              # ...and many more items like this
                              }

    A function creates a new object from groups_and_classes like this (the function to create this is called often):

        all_classes = {'class_A': [1, 2, 3, 11, 12, 13],
                       'class_B': [1, 3, 5, 7, 9],
                       'class_C': [1, 2, 5, 6, 7, 8, 9]
                       }

    Right now, there is a loop that does this:

        all_classes = {}
        for group in groups_and_classes.values():
            for c, vals in group.iteritems():
                for v in vals:
                    if all_classes.has_key(c):
                        if v not in all_classes[c]:
                            all_classes[c].append(v)
                    else:
                        all_classes[c] = [v]

    So far, I changed the code to use a set instead of a list, since the order doesn't matter and the values need to be unique:

        all_classes = {}
        for group in groups_and_classes.values():
            for c, vals in group.iteritems():
                try:
                    all_classes[c].update(set(vals))
                except KeyError:
                    all_classes[c] = set(vals)

    This is a little nicer, and I didn't have to convert the sets to lists because of how all_classes is used in the code. Question: is there a more efficient way of creating all_classes (aside from building it at the same time groups_and_classes is built, and changing everywhere this function is called)?
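
    One further simplification worth trying, sketched below rather than taken from the question: collections.defaultdict removes the try/except branch, and set.update accepts any iterable, so the intermediate set(vals) can be dropped (the wrapper name merge_classes is hypothetical):

        from collections import defaultdict

        def merge_classes(groups_and_classes):
            # One set per class name, created on first touch.
            all_classes = defaultdict(set)
            for group in groups_and_classes.values():
                for c, vals in group.iteritems():  # .items() on Python 3
                    all_classes[c].update(vals)    # update() accepts any iterable
            return all_classes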

  • Java object caching: which is faster, reading from a file or from a remote machine?

    - by Kumar225
    I am at a point where I need to make a decision about what to do when the object cache reaches its configured threshold. Should I store the objects in an indexed file (like the one provided by JCS) and read them back from disk (file I/O) when required, or store them in a distributed cache (network, serialization, deserialization)? We are using Solaris as the OS.

    Update: adding some more information. I'm asking so I can decide whether to switch to distributed caching. The remote server holding the cache would have more memory and a better disk, and would be used only for caching. One reason we cannot increase the number of locally cached objects is that they are stored in the JVM heap, which has limited memory (we use a 32-bit JVM).

    Update: thanks, we finally ended up choosing Coherence as our cache product. It provides many cache configuration topologies: in-process vs. remote vs. disk, etc.
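
    Before committing either way, a rough probe like the sketch below (class name, blob size, and paths are illustrative, not from the question) can put a number on local-disk rehydration, to be compared against a measured round trip to the remote cache host plus serialization cost:

        import java.io.*;

        public class OverflowProbe {
            public static void main(String[] args) throws IOException {
                byte[] payload = new byte[64 * 1024];      // stand-in for a serialized object

                File f = File.createTempFile("cache", ".bin");
                f.deleteOnExit();
                FileOutputStream out = new FileOutputStream(f);
                out.write(payload);
                out.close();

                long t0 = System.nanoTime();
                byte[] back = new byte[payload.length];
                DataInputStream in = new DataInputStream(new FileInputStream(f));
                in.readFully(back);                        // local-disk read under test
                in.close();
                System.out.printf("local file read: %.3f ms%n", (System.nanoTime() - t0) / 1e6);
            }
        }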

  • ADO.NET: Faster way to check if the database server is accessible?

    - by lotana
    At the moment I am using this code to check if the database is accessible (SqlConnection corrected from the original's SQLConnection):

        public bool IsDatabaseOnline(string con)
        {
            bool isConnected = false;
            SqlConnection connect = null;
            try
            {
                connect = new SqlConnection(con);
                connect.Open();
                isConnected = true;
            }
            catch (Exception e)
            {
                isConnected = false;
            }
            finally
            {
                if (connect != null)
                    connect.Close();
            }
            return isConnected;
        }

    While this code works fine, there is a disadvantage: if the server is not online, it spends about four full seconds trying to open the connection before deciding that it is not available. Is there a way to test the connection without actually opening it and waiting for the timeout? Something like a database equivalent of ping?
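
    A minimal sketch of one common workaround, not from the question: probe TCP reachability of the server's port with your own deadline before attempting a full Open(). Host, port (1433 is SQL Server's default), and timeout are assumptions; alternatively, a "Connect Timeout=1" entry in the connection string shortens Open() itself.

        using System.Net.Sockets;

        // Returns quickly whether anything listens on host:port; a positive
        // result does not guarantee the database itself is healthy.
        public bool IsServerReachable(string host, int port, int timeoutMs)
        {
            using (var client = new TcpClient())
            {
                var result = client.BeginConnect(host, port, null, null);
                bool completed = result.AsyncWaitHandle.WaitOne(timeoutMs);
                if (completed) client.EndConnect(result);
                return completed && client.Connected;
            }
        }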

  • What is faster? Drawing or Compositing?

    - by mystify
    I make extensive use of -drawRect: to do some nice animations. A timer tries to fire 30 times per second with -setNeedsDisplay, but it feels like only 20. I also can't use -setNeedsDisplayInRect: because the animation covers the entire view. Would it help to move some of those drawing operations out of -drawRect: and into a subview? -drawRect: would then have less to do, but the OS would have more compositing work. Is there a rule of thumb for which is worse? I remember an Apple document claiming that Core Animation doesn't redraw during animation. Is that their secret to speed: using subviews in animations?
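
    Orthogonal to the draw-versus-composite question, the uneven timer is often the first thing to fix. A minimal sketch, assuming iOS 3.1+ and a view with a tick: method (names hypothetical), that drives redraws from the display refresh instead of an NSTimer:

        #import <QuartzCore/QuartzCore.h>

        - (void)startAnimating {
            // Fires in lockstep with the screen; frameInterval 2 targets
            // ~30 fps on a 60 Hz display without NSTimer drift.
            CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                              selector:@selector(tick:)];
            link.frameInterval = 2;
            [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
        }

        - (void)tick:(CADisplayLink *)link {
            [self setNeedsDisplay];
        }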

  • Why does my .desktop file execute via double click but not from the menu?

    - by Insperatus
    I've installed FTL: Faster Than Light on my girlfriend's Lubuntu machine and created a .desktop file for it. Strangely, the program won't launch via its menu entry under 'Games'. If I navigate to /home/andi/.local/share/applications/ via pcmanfm and double-click on FTL Faster Than Light.desktop, the game launches without a problem. I know the menu entry is generated from the .desktop file, so why won't it launch from the menu? Here's the .desktop file I created: FTL Faster Than Light.desktop
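
    The file's contents are not reproduced above; for comparison, a minimal entry that launches correctly from menus generally looks like the sketch below (every path is hypothetical). A frequent cause of menu-only failures is a relative Exec path or a missing Path= key, since menus do not use the game's folder as the working directory:

        [Desktop Entry]
        Type=Application
        Name=FTL: Faster Than Light
        # Absolute paths: menu launchers resolve nothing relative to the game folder
        Exec=/home/andi/games/FTL/FTL
        Path=/home/andi/games/FTL
        Icon=/home/andi/games/FTL/FTL.png
        Categories=Game;
        Terminal=false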

  • How can I make the MODx manager UI faster?

    - by tambourine
    I am currently involved in developing projects on MODx Revolution. I like this system; it's fast and great, but what's really annoying is the manager interface. It works really slowly: every single action requires the ExtJS panels to refresh. Is there any way to change this behavior, or to roll back to the Evolution interface? Thank you!

  • Is there a faster way to parse through a large file with regex?

    - by Ray Eatmon
    Problem: a very, very large file that I need to parse line by line to get three values from each line. Everything works, but it takes a long time to parse through the whole file. Is it possible to do this within seconds? Typical time taken is between one and two minutes; an example file size is 148,208 KB. I am using regex to parse every line. Here is my C# code:

        private static void ReadTheLines(int max, Responder rp, string inputFile)
        {
            List<int> rate = new List<int>();
            double counter = 1;
            try
            {
                using (var sr = new StreamReader(inputFile, Encoding.UTF8, true, 1024))
                {
                    string line;
                    Console.WriteLine("Reading....");
                    while ((line = sr.ReadLine()) != null)
                    {
                        if (counter <= max)
                        {
                            counter++;
                            rate = rp.GetRateLine(line);
                        }
                        else if (max == 0)
                        {
                            counter++;
                            rate = rp.GetRateLine(line);
                        }
                    }
                    rp.GetRate(rate);
                    Console.ReadLine();
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }
        }

    Here is my regex:

        public List<int> GetRateLine(string justALine)
        {
            const string reg = @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$";
            Match match = Regex.Match(justALine, reg, RegexOptions.IgnoreCase);
            // Here we check the Match instance.
            if (match.Success)
            {
                // Finally, we get the Group value and display it.
                string theRate = match.Groups[3].Value;
                Ratestorage.Add(Convert.ToInt32(theRate));
            }
            else
            {
                Ratestorage.Add(0);
            }
            return Ratestorage;
        }

    Here is an example line to parse, out of usually around 200,000 lines:

        10.10.10.10 - - [27/Nov/2002:16:46:20 -0500] "GET /solr/ HTTP/1.1" 200 4926 789
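
    Two cheap things to try before restructuring, neither taken from the question itself: construct the Regex once with RegexOptions.Compiled rather than calling static Regex.Match per line, and, if the wanted value is always the last space-separated field (as in the sample line), skip the regex entirely. A sketch of the latter, with a hypothetical helper name:

        // Grabs the trailing integer field from a log line; returns 0 when the
        // line does not end in a number, mirroring the question's fallback.
        private static int LastField(string line)
        {
            int i = line.LastIndexOf(' ');
            int value;
            if (i >= 0 && int.TryParse(line.Substring(i + 1), out value))
                return value;
            return 0;
        }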

  • What's faster to parse lots of data (5 MB): eval or JSON?

    - by AlfaTeK
    I want to get, via AJAX, a collection of data objects and parse them into JS data. Currently I have two choices: either the server returns valid JavaScript code, which I then eval, or the server returns a JSON string, which I then eval into an object. Which of these is fastest in Firefox? (I only care about parsing performance, not the server or data transfer.)
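
    A minimal way to settle it empirically, not from the question (the generated payload stands in for the real response; native JSON.parse assumes Firefox 3.5+):

        // Build a multi-megabyte JSON string as a stand-in for the AJAX response.
        var items = [];
        for (var i = 0; i < 100000; i++) items.push({ id: i, name: 'item' + i });
        var payload = JSON.stringify(items);

        console.time('JSON.parse');
        var viaJson = JSON.parse(payload);
        console.timeEnd('JSON.parse');

        console.time('eval');
        var viaEval = eval('(' + payload + ')'); // parentheses force expression parsing
        console.timeEnd('eval');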

  • Directory file size calculation - how to make it faster?

    - by Xinxua
    Using C#, I am finding the total size of a directory. The logic is: get the files inside the folder and sum up their sizes; find any subdirectories and recurse into them. I also tried another way, using FSO (obj.GetFolder(path).Size); there's not much difference in time between the two approaches. Now the problem is, I have tens of thousands of files in a particular folder and it takes at least two minutes to find the folder size. Also, if I run the program again, it finishes very quickly (5 seconds); I think Windows is caching the file sizes. Is there any way I can bring down the time taken the first time the program runs?
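
    For reference, a compact form of the recursive approach (a sketch assuming .NET 4, where Directory.EnumerateFiles streams entries instead of building arrays first). The slow first run is dominated by the OS filling its filesystem metadata cache, which no user-mode rewrite fully avoids:

        using System.IO;
        using System.Linq;

        static long DirectorySize(string path)
        {
            // Streams every file under 'path' without materializing a list.
            return Directory.EnumerateFiles(path, "*", SearchOption.AllDirectories)
                            .Sum(file => new FileInfo(file).Length);
        }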

  • MySQL: Is it faster to use inserts and updates instead of insert on duplicate key update?

    - by Nir
    I have a cron job that updates a large number of rows in a database. Some of the rows are new and are therefore inserted, while some are updates of existing ones and are therefore updated. I use insert on duplicate key update for all the data and get it done in one call. But I actually know which rows are new and which are updated, so I could also do the inserts and updates separately. Would separating the inserts and updates give an advantage in terms of performance? What are the mechanics behind this? Thanks!
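
    For concreteness, the two shapes being compared look like this (table and column names are hypothetical); in either case, batching many rows per statement usually matters more than which statement is used:

        -- One statement handles mixed new/existing rows:
        INSERT INTO stats (id, hits) VALUES (1, 10), (2, 7)
        ON DUPLICATE KEY UPDATE hits = VALUES(hits);

        -- Versus routing rows yourself when you already know which are new:
        INSERT INTO stats (id, hits) VALUES (1, 10);   -- known-new rows
        UPDATE stats SET hits = 7 WHERE id = 2;        -- known-existing rows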

  • Oracle Exalogic Elastic Cloud Software 2.0

    - by Robert Baumgartner
    On Wednesday, 25 July 2012 at 19:00, the new version of the Oracle Exalogic Elastic Cloud Software 2.0, the engineered system for the Oracle WebLogic Server, will be presented. Learn how Oracle Exalogic Elastic Cloud Software 2.0 can help your company: close business up to 10x faster; protect sensitive data with complete application isolation; rapidly respond to market needs by provisioning applications 6x faster; maximize availability and productivity with 2x faster... For details, see: Register now

  • Google Caffeine - How Will it Affect Your Web Site?

    Google, one of the most used search engines, is rolling out a new indexing system that will make Google search faster for searchers and let it crawl the web faster, so rankings are updated sooner. Many people are concerned about the changes and how they will affect their search engine optimization.

  • Microbenchmark showing process-switching faster than thread-switching; what's wrong?

    - by Yang
    I have two simple microbenchmarks that try to measure thread- and process-switching overheads, but the process-switching overhead comes out lower than the thread-switching overhead, which is the opposite of what I expected. The code lives here, and r1667 is pasted below:

    https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/process_switch_bench.c

        // on zs, ~2.1-2.4us/switch
        #include <stdlib.h>
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <semaphore.h>
        #include <unistd.h>
        #include <signal.h>   /* added: kill() needs this */
        #include <sys/wait.h>
        #include <sys/types.h>
        #include <sys/time.h>
        #include <pthread.h>

        uint32_t COUNTER;
        pthread_mutex_t LOCK;
        pthread_mutex_t START;
        sem_t *s0, *s1, *s2;

        void *threads(void *unused)
        {
            // Wait till we may fire away
            sem_wait(s2);
            for (;;) {
                pthread_mutex_lock(&LOCK);
                pthread_mutex_unlock(&LOCK);
                COUNTER++;
                sem_post(s0);
                sem_wait(s1);
            }
            return 0;
        }

        int64_t timeInMS()
        {
            struct timeval t;
            gettimeofday(&t, NULL);
            return ((int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000);
        }

        int main(int argc, char **argv)
        {
            int64_t start;
            pthread_t t1;
            pthread_mutex_init(&LOCK, NULL);
            COUNTER = 0;
            s0 = sem_open("/s0", O_CREAT, 0022, 0);
            if (s0 == 0) { perror("sem_open"); exit(1); }
            s1 = sem_open("/s1", O_CREAT, 0022, 0);
            if (s1 == 0) { perror("sem_open"); exit(1); }
            s2 = sem_open("/s2", O_CREAT, 0022, 0);
            if (s2 == 0) { perror("sem_open"); exit(1); }
            int x, y, z;
            sem_getvalue(s0, &x);
            sem_getvalue(s1, &y);
            sem_getvalue(s2, &z);
            printf("%d %d %d\n", x, y, z);
            pid_t pid = fork();
            if (pid) {
                pthread_create(&t1, NULL, threads, NULL);
                pthread_detach(t1);
                // Get start time and fire away
                start = timeInMS();
                sem_post(s2);
                sem_post(s2);
                // Wait for about a second
                sleep(1);
                // Stop thread
                pthread_mutex_lock(&LOCK);
                // Find out how much time has really passed. sleep won't guarantee me that
                // I sleep exactly one second, I might sleep longer since even after being
                // woken up, it can take some time before I gain back CPU time. Further
                // some more time might have passed before I obtained the lock!
                int64_t time = timeInMS() - start;
                // Correct the number of thread switches accordingly
                COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time);
                printf("Number of process switches in about one second was %u\n", COUNTER);
                printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER);
                // clean up
                kill(pid, 9);
                wait(0);
                sem_close(s0);
                sem_close(s1);
                sem_unlink("/s0");
                sem_unlink("/s1");
                sem_unlink("/s2");
            } else {
                if (1) { sem_t *t = s0; s0 = s1; s1 = t; }
                threads(0); // never return
            }
            return 0;
        }

    https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/thread_switch_bench.c

        // From <http://stackoverflow.com/questions/304752/how-to-estimate-the-thread-context-switching-overhead>
        // on zs, ~4-5us/switch; tried making COUNTER updated only by one thread, but no difference
        #include <stdlib.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <pthread.h>
        #include <unistd.h>
        #include <sys/time.h>

        uint32_t COUNTER;
        pthread_mutex_t LOCK;
        pthread_mutex_t START;
        pthread_cond_t CONDITION;

        void *threads(void *unused)
        {
            // Wait till we may fire away
            pthread_mutex_lock(&START);
            pthread_mutex_unlock(&START);
            int first = 1;
            pthread_mutex_lock(&LOCK);
            // If I'm not the first thread, the other thread is already waiting on
            // the condition, thus I have to wake it up first, otherwise we'll deadlock
            if (COUNTER > 0) {
                pthread_cond_signal(&CONDITION);
                first = 0;
            }
            for (;;) {
                if (first) COUNTER++;
                pthread_cond_wait(&CONDITION, &LOCK);
                // Always wake up the other thread before processing. The other
                // thread will not be able to do anything as long as I don't go
                // back to sleep first.
                pthread_cond_signal(&CONDITION);
            }
            pthread_mutex_unlock(&LOCK);
            return 0;
        }

        int64_t timeInMS()
        {
            struct timeval t;
            gettimeofday(&t, NULL);
            return ((int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000);
        }

        int main(int argc, char **argv)
        {
            int64_t start;
            pthread_t t1;
            pthread_t t2;
            pthread_mutex_init(&LOCK, NULL);
            pthread_mutex_init(&START, NULL);
            pthread_cond_init(&CONDITION, NULL);
            pthread_mutex_lock(&START);
            COUNTER = 0;
            pthread_create(&t1, NULL, threads, NULL);
            pthread_create(&t2, NULL, threads, NULL);
            pthread_detach(t1);
            pthread_detach(t2);
            // Get start time and fire away
            start = timeInMS();
            pthread_mutex_unlock(&START);
            // Wait for about a second
            sleep(1);
            // Stop both threads
            pthread_mutex_lock(&LOCK);
            // Find out how much time has really passed. sleep won't guarantee me that
            // I sleep exactly one second, I might sleep longer since even after being
            // woken up, it can take some time before I gain back CPU time. Further
            // some more time might have passed before I obtained the lock!
            int64_t time = timeInMS() - start;
            // Correct the number of thread switches accordingly
            COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time);
            printf("Number of thread switches in about one second was %u\n", COUNTER);
            printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER);
            return 0;
        }

  • Is there a faster method than StringBuilder for a max 9-10 step string concatenation?

    - by Pentium10
    I have this code to concatenate some array elements (return type corrected to String; the original declared RatedMessage but returned sb.toString()):

        StringBuilder sb = new StringBuilder();

        private String joinMessage(int step, boolean isresult) {
            sb.delete(0, sb.length());
            for (int i = 0; i <= step; i++) {
                if (mStack[i] == null)
                    continue;
                rm = mStack[i].getCurrentMsg();
                if (rm == null || rm.msg.length() == 0)
                    continue;
                if (sb.length() != 0) {
                    sb.append(", ");
                }
                sb.append(rm.msg);
            }
            return sb.toString();
        }

    Importantly, the array holds at most 10 items, so it's not much data. My trace output tells me this method is called 18,864 times and that 16% of the runtime was spent in it. Can I optimize it more?
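
    A couple of micro-tweaks worth measuring, sketched against the question's own fields (not a guaranteed win; with at most 10 short items, the 18,864 calls likely dominate, so caching the joined result at call sites may help more than the loop body):

        private final StringBuilder sb = new StringBuilder(256); // pre-sized once

        private String joinMessage(int step) {
            sb.setLength(0);                 // cheaper reset than delete(0, length)
            for (int i = 0; i <= step; i++) {
                if (mStack[i] == null) continue;
                RatedMessage rm = mStack[i].getCurrentMsg();
                if (rm == null || rm.msg.length() == 0) continue;
                if (sb.length() != 0) sb.append(", ");
                sb.append(rm.msg);
            }
            return sb.toString();
        }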

  • Faster way to know the total number of rows in a MySQL database?

    - by Starx
    If I need to know the total number of rows in a table I do something like this:

        $query = "SELECT * FROM tablename WHERE link='1';";
        $result = mysql_query($query);
        $rows = mysql_fetch_array($result);
        $count = count($rows);

    So all of the data is retrieved and scanned through just to get a count. Is there a better way?
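
    A minimal sketch using the same (legacy) mysql_* API as the question: let the server count, so only a single number crosses the wire:

        $result = mysql_query("SELECT COUNT(*) FROM tablename WHERE link = '1'");
        $row = mysql_fetch_row($result);
        $count = (int) $row[0];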

  • Would using a MemoryMappedFile for IPC across AppDomains be faster than WCF/named pipes?

    - by Morten Mertner
    Context: I am loading and executing untrusted code in a separate AppDomain and am currently communicating between the two using WCF (using named pipes as the underlying transport). I am exchanging relatively simple object graphs using a reasonably coarse-grained API, but would like to use a more fine-grained API if it does not cost me performance-wise. I've noticed that 4.0 adds a MemoryMappedFile class (which doesn't need a physical file, so could be entirely memory based). What kind of performance gains could I expect to see (if any) by using this new class? I know that it would take some "infrastructure code" to get the request/response behavior of WCF, but for now I'm only interested in the performance difference.
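
    For scale, a minimal sketch of the .NET 4 API in question (map name and capacity are arbitrary; a non-persisted map lives purely in memory and is shared by name on one machine). Signaling, e.g. an EventWaitHandle, is still needed to build request/response semantics on top:

        using System.IO.MemoryMappedFiles;

        // Producer: create a 4 KB in-memory map and write a value at offset 0.
        var mmf = MemoryMappedFile.CreateNew("ipc-demo", 4096);
        using (var view = mmf.CreateViewAccessor())
        {
            view.Write(0, 42);
        }

        // Consumer (other AppDomain/process): open by name and read it back.
        using (var other = MemoryMappedFile.OpenExisting("ipc-demo"))
        using (var view2 = other.CreateViewAccessor())
        {
            int value = view2.ReadInt32(0);
        }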

  • How much faster is using an internal IP address instead of an external one?

    - by user349603
    I have a mailing list application that sends emails through several dedicated SMTP servers (running Linux Debian 5 and Postfix) in the same network at a hosting company. However, the application connects to them over SMTP using the servers' external IP addresses, and I was wondering what kind of improvement I would see if it used the internal IP addresses instead. Thank you in advance for your insight.

  • How to make CSS/HTML prototyping faster for engineers without strong CSS skills?

    - by rdeshpande
    I've been developing Ruby on Rails applications for some time, and have usually had help generating the HTML templates and accompanying CSS. However, I'd like to take a stab at doing this myself. Initial experiments leave me feeling like my process is really slow. I write all my Rails code in Vim, which, with accompanying aliases to run the test suite, is pretty fast for me. However, the back-and-forth between browser and Vim to see new changes seems cumbersome; I'm guessing an editor with an embedded browser that constantly reflects changes would be ideal for this (any suggestions?). So far I've experimented with Blueprint, which at the onset seems like it will save me a ton of time. What other tools have helped you do PSD-to-HTML/CSS conversion as fast as possible?

  • Is JDBC or LDAP faster for basic read operations?

    - by Brandon
    I have a set of user data which I am trying to access. Because of the way our company's employee data is set up, the information is available both through LDAP and through a table in our database. I was curious: for standard read operations, which would generally give higher-performance queries?

  • Faster way to compare two sets of points in N-dimensional space?

    - by Amit
    List1 contains a large number (~7^10) of N-dimensional points (N <= 10); List2 contains the same number or fewer. My task is this: for every point in List1, I want to find which point in List2 is closest (by Euclidean distance) and then perform some operation on it. I had been doing it the simple way, with nested loops, when I had no more than 50 points in List1, but with 7^10 points this obviously takes a lot of time. What is the fastest way to do this? Could any concepts from computational geometry help?
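
    The standard computational-geometry answer is a spatial index such as a k-d tree, which still works at N <= 10. A minimal sketch with SciPy (an external library not mentioned in the question; the random arrays are stand-ins for the real lists):

        import numpy as np
        from scipy.spatial import cKDTree

        list1 = np.random.rand(100000, 10)  # stand-in for List1
        list2 = np.random.rand(80000, 10)   # stand-in for List2

        tree = cKDTree(list2)               # build once
        dists, idx = tree.query(list1)      # nearest List2 index for every List1 point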

  • How much faster is a database running in RAM?

    - by orokusaki
    I"m looking to run PostgreSQL in RAM for performance enhancement. The database isn't more than 1GB and shouldn't ever grow to more than 5GB. Is it worth doing? Are there any benchmarks out there? Is it buggy? My second major concern is: How easy is it to back things up when it's running purely in RAM. Is this just like using RAM as tier 1 HD, or is it much more complicated?
