Search Results

Search found 42685 results on 1708 pages for 'page speed'.


  • Python's FTPLib too slow?

    - by PyNEwbie
    I have been playing around with Python's FTP library and am starting to think that it is too slow compared to using a script file in DOS. I run sessions where I download thousands of data files (I think I have over 8 million right now). My observation is that the download process seems to take five to ten times as long in Python as it does using the ftp commands in the DOS shell. Since I don't want anyone to fix my code, I have not included any. I am more interested in understanding whether my observation is valid or whether I need to tinker more with the arguments.
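
    As a point of comparison, here is a minimal ftplib sketch that keeps a single connection open for the whole batch and uses a larger transfer block; the host, credentials, directory, and file names are placeholders, not taken from the question.

        from ftplib import FTP

        # Placeholder connection details; reuse one session for the whole batch
        # instead of reconnecting for every file.
        ftp = FTP("ftp.example.com")
        ftp.login("user", "password")
        ftp.cwd("/data")

        for name in ["file0001.dat", "file0002.dat", "file0003.dat"]:
            with open(name, "wb") as fh:
                # retrbinary reads 8 KB blocks by default; a larger blocksize
                # often helps when pulling many files in a row.
                ftp.retrbinary("RETR " + name, fh.write, blocksize=65536)

        ftp.quit()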

    Read the article

  • Python if statement efficiency

    - by Dennis
    A friend (a fellow low-skill-level recreational Python scripter) asked me to look over some code. I noticed that he had 7 separate statements that basically said:

        if (a and b and c):
            do_something()

    The tests a, b, and c all checked whether variables were equal (or not equal) to set values. As I looked at it, I found that because of the nature of the tests, I could rewrite the whole logic block into 2 branches that never went more than 3 deep and rarely got past the first level (putting the rarest condition first), roughly:

        if a:
            if b:
                if c:
                    ...
                else:
                    if c:
                        ...
        else:
            if b:
                if c:
                    ...
            else:
                if c:
                    ...

    To me it seems like this should be faster, since it makes fewer, simpler tests that fail fast and move on. My real questions are: 1) When an if/else is evaluated and the if branch is taken, is the else branch skipped entirely? 2) In theory, would if (a and b and c) take as much time as the three separate nested if statements?
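
    On question 1, the untaken branch is skipped entirely; on question 2, a and b and c short-circuits, stopping at the first false operand, so it behaves much like the nested version. A tiny sketch (the check function is a stand-in) that makes the evaluation order visible:

        def check(name, result):
            # Print whenever a test is actually evaluated, so the
            # short-circuiting is visible in the output.
            print("evaluating", name)
            return result

        # Because the first test is False, the second and third are never
        # evaluated, and the if-body is skipped in favour of the else branch.
        if check("a", False) and check("b", True) and check("c", True):
            print("all three were true")
        else:
            print("took the else branch")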

    Read the article

  • Sort algorithm with the fewest operations

    - by luvieere
    What is the sorting algorithm with the fewest operations? I need to implement it in HLSL as part of a Pixel Shader 2.0 effect for WPF, so it needs to use a really small number of operations, given Pixel Shader 2.0's limitations. I need to sort 9 values, specifically the current pixel and its neighbors.
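
    The usual answer for a fixed, small input size is a sorting network: a hard-coded sequence of compare-exchange steps with no data-dependent branching. Below is a minimal Python sketch of the idea for 4 values (a 9-value network is the same pattern with more fixed pairs); it illustrates the technique only and is not shader code.

        def compare_exchange(v, i, j):
            # Branch-free building block: after this, v[i] <= v[j].
            lo, hi = min(v[i], v[j]), max(v[i], v[j])
            v[i], v[j] = lo, hi

        def sort4(v):
            # A fixed sequence of compare-exchanges that sorts any 4 values.
            for i, j in [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]:
                compare_exchange(v, i, j)
            return v

        print(sort4([42, 7, 19, 3]))   # [3, 7, 19, 42]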

    Read the article

  • speeding up website load using multiple servers/domains

    - by Mohammad
    The Yahoo! developer guide says, "Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective." As an explanation, I read somewhere that browsers will load up to 5 things simultaneously from the same domain. Would a subdomain, for example cdn.example.com, be considered a new domain in the previous statement?

    Read the article

  • What is faster: Java or C# (or good old C)?

    - by Rexsung
    I'm currently deciding on a platform to build a scientific computational product on, and am deciding between C#, Java, and plain C with Intel's compiler on Core 2 Quad CPUs. It's mostly integer arithmetic. My benchmarks so far show Java and C are about on par with each other, and .NET/C# trails by about 5%; however, a number of my coworkers are claiming that .NET with the right optimizations will beat both of these given enough time for the JIT to do its work. I always assumed that the JIT would have done its job within a few minutes of the app starting (probably a few seconds in my case, as it's mostly tight loops), so I'm not sure whether to believe them. Can anyone shed any light on the situation? Would .NET beat Java? (Or am I best just sticking with C at this point?) The code is highly multithreaded and the data sets are several terabytes in size. Haskell/Erlang etc. are not options in this case, as there is a significant quantity of existing legacy C code that will be ported to the new system, and porting C to Java/C# is a lot simpler than to Haskell or Erlang (unless, of course, those provide a significant speedup). Edit: We are considering moving to C# or Java because they may, in theory, be faster. Every percent we can shave off our processing time saves us tens of thousands of dollars per year. At this point we are just trying to evaluate whether C, Java, or C# would be faster.

    Read the article

  • Fastest way to iterate through an NSArray with objects and keys

    - by AppGolfer
    Hello, I have an NSArray called 'objects' below with arrayCount = 1000. It takes about 10 secs to iterate through this array. Does anyone have a faster method of iterating through it? Thanks!

        for (int i = 0; i < arrayCount; i++) {  // was <=, which would run one index past the end
            // Note: -valueForKey: on an NSArray builds a new array of values on every call,
            // so both of these calls redo that work on each pass through the loop.
            event.latitude  = [[[objects valueForKey:@"CLatitude"]  objectAtIndex:i] floatValue];
            event.longitude = [[[objects valueForKey:@"CLongitude"] objectAtIndex:i] floatValue];
        }

    Read the article

  • Fast iterating over the first n items of an iterable in Python

    - by martinthenext
    Hello! I'm looking for a pythonic way of iterating over the first n items of a list, and it's quite important to do this as fast as possible. This is how I do it now:

        count = 0
        for item in iterable:
            do_something(item)
            count += 1
            if count >= n:
                break

    That doesn't seem neat to me. Another way of doing it is:

        for item in itertools.islice(iterable, n):
            do_something(item)

    This looks good; the question is whether it is fast enough to use with some generator(s). For example:

        pair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2)
        for item in itertools.islice(pair_generator(iterable), n):
            do_something(item)

    Will it run fast enough compared to the first method? Is there some easier way to do it?
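
    A quick way to answer the "fast enough" question yourself is to time both variants with timeit; a minimal sketch, with do_something as a cheap stand-in for the real per-item work:

        import itertools
        import timeit

        def do_something(item):
            pass  # stand-in for the real per-item work

        def manual_break(seq, n):
            count = 0
            for item in seq:
                do_something(item)
                count += 1
                if count >= n:
                    break

        def with_islice(seq, n):
            for item in itertools.islice(seq, n):
                do_something(item)

        n = 1000
        print(timeit.timeit(lambda: manual_break(iter(range(10 * n)), n), number=1000))
        print(timeit.timeit(lambda: with_islice(iter(range(10 * n)), n), number=1000))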

    Read the article

  • Is it possible to write a shell script that is faster than the same script in Perl?

    - by JohnJohnGa
    Hi, I wrote multiple scripts in Perl and shell and compared their real execution times. In all cases the Perl script was more than 10 times faster than the shell script. So I wondered: is it possible to write a shell script that is faster than the same script in Perl? And why is Perl faster than shell, even though I use the 'system' function in the Perl script? Thanks for your help. Regards, JohnJohnGa

    Read the article

  • Is Matlab faster than Python?

    - by kame
    I want to compute the magnetic fields of some conductors using the Biot-Savart law, and I want to use a 1000x1000x1000 matrix. I used Matlab before, but now I want to use Python. Is Python slower than Matlab? How can I make Python faster? EDIT: Maybe the best way is to compute the big array with C/C++ and then transfer it to Python. I want to visualise it then with VPython. EDIT2: Could somebody give advice on which is better in my case: C or C++?
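
    If the field computation can be expressed as whole-array operations, NumPy usually closes most of the gap with Matlab. A minimal sketch of the vectorised style follows; the element-wise formula is made up for illustration (it is not the Biot-Savart integral), and the grid is kept small because a 1000x1000x1000 array of doubles needs roughly 8 GB per array.

        import numpy as np

        n = 100  # small stand-in for the 1000x1000x1000 grid
        x, y, z = np.meshgrid(np.linspace(-1, 1, n),
                              np.linspace(-1, 1, n),
                              np.linspace(-1, 1, n),
                              indexing="ij")

        # Instead of three nested Python loops touching one point at a time,
        # apply the same arithmetic to whole arrays at once; this is where
        # NumPy (like Matlab) gets its speed.
        field = 1.0 / (x**2 + y**2 + z**2 + 1e-9)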

    Read the article

  • Optimising Database Calls

    - by Dwaine Bailey
    I have a database that is filled with information about films, which is read into the database from an XML file on a webserver. What happens is the following:

        1. Gather/parse the XML and store the film info as objects
        2. Begin statement
        3. For every film object found:
           - Check whether a record for the film exists in the database
           - If there is no film record, write the data for the film
        4. Commit statement

    Currently I just test for the existence of a film using (the very basic):

        SELECT film_title FROM film WHERE film_id = ?

    If that returns a row, the film exists; if not, I need to add it. The only problem is that there are many hundreds of records in the database (lots of films!), and because it has to check for the existence of a film before it can write it, the whole process ends up taking quite a while (about 27 seconds for 210 films). Is there a more efficient method of doing this, or any suggestions in general? The programming language is Objective-C and the database is sqlite3. Thanks, Dwaine
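
    One common fix is to stop probing for each film individually and instead wrap the whole batch in a single transaction, letting the primary key reject duplicates. A minimal sketch in Python/sqlite3 (the schema and sample rows are assumed from the question; the same idea carries over to the Objective-C sqlite3 API):

        import sqlite3

        # Assumed schema based on the question: film(film_id PRIMARY KEY, film_title).
        conn = sqlite3.connect("films.db")
        conn.execute("CREATE TABLE IF NOT EXISTS film (film_id INTEGER PRIMARY KEY, film_title TEXT)")

        films = [(1, "Alien"), (2, "Blade Runner")]  # parsed from the XML feed in practice

        # One transaction for the whole batch, and let the primary key do the
        # existence check: INSERT OR IGNORE skips rows whose film_id already exists.
        with conn:
            conn.executemany(
                "INSERT OR IGNORE INTO film (film_id, film_title) VALUES (?, ?)",
                films,
            )
        conn.close()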

    Read the article

  • Faster jQuery selector for finding a number of TD elements

    - by Bernard Chen
    I have a table where each row has 13 TD elements. I want to show and hide the first 10 of them when I toggle a link. These 10 TD elements all have IDs with the prefix "foo" and a two-digit number for their position (e.g., "foo01"). What's the fastest way to select them across the entire table?

        $("td:nth-child(-n+10)")

    or

        $("td[id^=foo]")

    or is it worth concatenating all of the IDs?

        $("#foo01, #foo02, #foo03, #foo04, #foo05, #foo06, #foo07, #foo08, #foo09, #foo10")

    Is there another approach I should be considering as well?

    Read the article

  • Why is changing displays slow?

    - by Josh Bronson
    I've had many laptops over the course of many years, and while many things have sped up, one thing remains as slow today as it was years ago: (dis)connecting an external display. What's taking it so long to detect the new display and update the pixel buffers? I use Macs primarily, but I think this is equally slow on other platforms.

    Read the article

  • What's faster in JavaScript: a bunch of small setInterval loops, or one big one?

    - by RobertWHurst
    Just wondering if it's worth it to make one monolithic loop function or just add loops where they're needed. The big-loop option would just be a loop of callbacks that are added dynamically with an add function. Adding a function would look like this:

        setLoop(function(){
            alert('hahaha! I\'m a really annoying loop that bugs you every tenth of a second');
        });

    setLoop would add the function to the monolithic loop. So is this worth anything in performance, or should I just stick to lots of little loops using setInterval?

    Read the article

  • Fast iterating over the first n items of an iterable (not a list) in Python

    - by martinthenext
    Hello! I'm looking for a pythonic way of iterating over the first n items of an iterable (update: not a list in the common case; for lists this is trivial), and it's quite important to do this as fast as possible. This is how I do it now:

        count = 0
        for item in iterable:
            do_something(item)
            count += 1
            if count >= n:
                break

    That doesn't seem neat to me. Another way of doing it is:

        for item in itertools.islice(iterable, n):
            do_something(item)

    This looks good; the question is whether it is fast enough to use with some generator(s). For example:

        pair_generator = lambda iterable: itertools.izip(*[iter(iterable)]*2)
        for item in itertools.islice(pair_generator(iterable), n):
            do_something(item)

    Will it run fast enough compared to the first method? Is there some easier way to do it?

    Read the article

  • Fastest way to check array items' existence in a MySQL table

    - by Enrique
    The user writes a series of tags (comma-separated) and posts the form. I build an array containing the tags and delete duplicates with PHP's array_unique() function. I'm thinking of doing the following: go through the array with foreach ($newarray as $item) { ... }, check each $item for existence in the tags MySQL table, and if the item does not exist, insert it into the tags table. Is there a FASTER or MORE OPTIMAL way of doing this?
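
    One round trip to find the tags that already exist, plus one batched insert for the rest, is usually much cheaper than a query per tag. A minimal sketch in Python/sqlite3 (the tags table, column name, and sample values are assumed; the same SQL shape works against MySQL):

        import sqlite3

        conn = sqlite3.connect(":memory:")  # stand-in for the real database connection
        conn.execute("CREATE TABLE tags (name TEXT PRIMARY KEY)")
        conn.executemany("INSERT INTO tags (name) VALUES (?)", [("python",), ("mysql",)])

        submitted = ["python", "php", "regex"]  # already de-duplicated, as in the question

        # One round trip to find which submitted tags already exist...
        placeholders = ",".join("?" for _ in submitted)
        rows = conn.execute("SELECT name FROM tags WHERE name IN (" + placeholders + ")", submitted)
        existing = {name for (name,) in rows}

        # ...and one batched insert for the rest, instead of one query per tag.
        missing = [(t,) for t in submitted if t not in existing]
        conn.executemany("INSERT INTO tags (name) VALUES (?)", missing)
        conn.commit()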

    Read the article

  • .NET values lookup

    - by Maciej
    Hi, I have a feeling I'm missing something obvious. This is a UDP receiver application. It holds a collection of valid UDP sender IPs; only senders whose IP is on that list will be considered. Since that list must be checked on every packet and UDP is so volatile, the lookup must be as fast as possible. A Dictionary would be a good choice, but it is a key-value structure, and what I actually need here is a dictionary-like (hash lookup) key-only structure. Is there something like that? It's a small annoyance rather than a bug, since I can still use a Dictionary. Thanks, M.
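
    What is being described is a key-only hash container (HashSet<T> fills this role in .NET 3.5 and later). As a language-neutral illustration of the idea, here is a minimal Python sketch using the built-in set; the whitelist addresses are made up:

        # Key-only hash lookup: membership tests on a set are O(1) on average.
        allowed_senders = {"10.0.0.5", "10.0.0.7", "192.168.1.20"}   # hypothetical whitelist

        def should_accept(packet_source_ip):
            # One hash lookup per packet, no value payload needed.
            return packet_source_ip in allowed_senders

        print(should_accept("10.0.0.5"))   # True
        print(should_accept("10.0.0.99"))  # False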

    Read the article

  • Should I invest time in learning about ORMs or LINQ?

    - by Peter Smith
    I'm primarily a .NET web developer who occasionally writes console applications to mine data, run cleanup tasks, etc. Most of what I do winds up involving a database, which I currently design via SQL Server Management Studio, using stored procedures and Query Analyzer. I also create a lot of web services which are consumed by AJAX applications. Do these technologies really help you speed up development times? Do you still have to build the database or object code first?

    Read the article

  • Are any of these quad-tree libraries any good?

    - by Noctis Skytower
    It appears that a certain project of mine will require the use of quad-trees, something that I have never worked with before. From what I have read, they should allow substantial performance enhancements over a brute-force attempt at the problem. Are any of these Python modules any good?

        Quadtree 0.1.2  <= No: unable to execute in Python 3.1
        QuadTree        <= Yes: simple while working with rectangles
        quadtree.py     <= No: no support for needed operations

    EDIT: Does anyone know of a better implementation than the one presented in the pygame wiki article?
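
    For readers who just want to see the shape of the data structure, here is a minimal, illustrative point quadtree in Python (insert plus rectangular range query). It is a sketch of the technique only, not a substitute for the libraries above; the node capacity and sample points are arbitrary.

        class Quadtree:
            """Minimal point quadtree: stores (x, y) points, answers rectangular range queries."""

            def __init__(self, x, y, half, capacity=4):
                self.x, self.y, self.half = x, y, half   # centre and half-width of this node
                self.capacity = capacity
                self.points = []
                self.children = None                     # four sub-squares once subdivided

            def _contains(self, px, py):
                return (self.x - self.half <= px <= self.x + self.half and
                        self.y - self.half <= py <= self.y + self.half)

            def insert(self, px, py):
                if not self._contains(px, py):
                    return False
                if self.children is None:
                    if len(self.points) < self.capacity:
                        self.points.append((px, py))
                        return True
                    self._subdivide()
                return any(child.insert(px, py) for child in self.children)

            def _subdivide(self):
                h = self.half / 2
                self.children = [Quadtree(self.x + dx, self.y + dy, h, self.capacity)
                                 for dx in (-h, h) for dy in (-h, h)]
                old_points, self.points = self.points, []
                for px, py in old_points:
                    self.insert(px, py)   # re-route stored points into the new children

            def query(self, xmin, ymin, xmax, ymax, found=None):
                """Collect all stored points inside the axis-aligned rectangle."""
                if found is None:
                    found = []
                # Skip this node entirely if the rectangle misses its square.
                if (xmax < self.x - self.half or xmin > self.x + self.half or
                        ymax < self.y - self.half or ymin > self.y + self.half):
                    return found
                for px, py in self.points:
                    if xmin <= px <= xmax and ymin <= py <= ymax:
                        found.append((px, py))
                if self.children:
                    for child in self.children:
                        child.query(xmin, ymin, xmax, ymax, found)
                return found


        tree = Quadtree(0, 0, 100)            # square centred at the origin, 200 units wide
        for p in [(10, 10), (-20, 35), (60, -40), (5, 5), (7, 9)]:
            tree.insert(*p)
        print(tree.query(0, 0, 50, 50))       # [(10, 10), (5, 5), (7, 9)] (order may vary)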

    Read the article

  • Quickly retrieve the subset of properties used in a huge collection in C#

    - by ccornet
    I have a huge Collection (which I can cast as an enumerable using OfType<T>()) of objects. Each of these objects has a Category property, which is drawn from a list somewhere else in the application. This Collection can reach sizes of hundreds of items, but it is possible that only, say, 6 of the 30 possible Categories are actually used. What is the fastest method to find these 6 Categories? The size of the huge Collection discourages me from just iterating across the entire thing and returning all unique values, so is there a faster method of accomplishing this? Ideally I'd collect the categories into a List.
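
    There is no way around touching each item at least once, but a single pass that hashes each Category as it is seen (Select plus Distinct in LINQ terms) is about as cheap as it gets. A language-neutral Python sketch of that single-pass idea, with made-up object and property names:

        class Item:
            def __init__(self, category):
                self.category = category

        items = [Item("News"), Item("Sports"), Item("News"), Item("Weather")]

        # One pass, one hash lookup per element; duplicates collapse in the set.
        used_categories = list({item.category for item in items})
        print(used_categories)   # ['News', 'Sports', 'Weather'] in some order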

    Read the article

  • database vs flat file, which is a faster structure for regex matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching, using regex, against a static file/db. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/db, just reads. Is it possible to duplicate the file/db and write a logic switch to use the backup file/db if the main file is in use? Which language is best for this type of structure? Perl for the flat file and PHP for the db? TIA

    Read the article

  • database vs flat file, which is a faster structure for "regex" matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching against a static file/db. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/db, just reads. Is it possible to duplicate the file/db and write a logic switch to use the backup file/db if the main file is in use? Which language is best for this type of structure? Perl for the flat file and PHP for the db? Additional info: If I want to find all the cities that have the pattern "cis" in their names, which is better/faster: using regex or string functions? Please recommend a strategy. TIA
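
    On the follow-up about "cis": a fixed substring does not need a regular expression at all, and a plain containment test is usually the cheaper of the two. A rough Python timing sketch, assuming the 50,000 lines fit in memory (the city list below is made up):

        import re
        import timeit

        cities = ["San Francisco", "Cisco", "Boston", "Narcissa"] * 12500   # ~50,000 lines

        pattern = re.compile("cis", re.IGNORECASE)

        def with_regex():
            return [c for c in cities if pattern.search(c)]

        def with_substring():
            return [c for c in cities if "cis" in c.lower()]

        # Compare the two approaches over repeated scans of the whole list.
        print(timeit.timeit(with_regex, number=10))
        print(timeit.timeit(with_substring, number=10))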

    Read the article
