Search Results

Search found 13288 results on 532 pages for 'high speed'.

Page 7/532 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Which factors determine the speed of a processor? [closed]

    - by Deb
    I think the clock rate of a processor determines the speed of its cores; in my case it is 1.86 GHz. If I am not wrong, it also determines how much energy the processor consumes: the higher the frequency, the more power it draws. I chose the Power Saver scheme to increase my battery life, and it reduces my core speed to half the actual speed. I understand this happens because of SpeedStep, but I don't see any slowdown in my computer. So my question is: why do we have such high-frequency cores if they use so much power? We could use low-frequency cores instead. I also get confused between the two terms "speed of the processor" and "frequency of the processor". How important is the core frequency for any given processor?

    Read the article

  • How to get an ARM CPU clock speed in Linux?

    - by MiKy
    I have an ARM-based embedded machine built around an S3C2416 board. According to the specifications available to me it should have a 533 MHz ARM9 (ARM926EJ-S according to /proc/cpuinfo), but the software running on it "feels" slow compared with the same software on my Android phone, which has a 528 MHz ARM CPU. /proc/cpuinfo reports a BogoMIPS value of 266.24. I know I should not trust BogoMIPS as a performance figure ("Bogo" = bogus), but I would like a measurement of the actual CPU speed. On x86 I could read the time stamp counter with the rdtsc instruction, sleep(1) for a second, read the counter again and take the difference as an approximation of the CPU speed; in my experience that value was close enough to the real clock rate. How can I find the actual clock speed of a given ARM processor?

    Update: I found this simple Pi calculator, which I compiled for both my Android phone and the ARM board. The results are as follows.

    S3C2416:

        # cat /proc/cpuinfo
        Processor : ARM926EJ-S rev 5 (v5l)
        BogoMIPS  : 266.24
        Features  : swp half fastmult edsp java ...
        # ./pi_arm 10000
        Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave
        ... 8.50 sec. (real time)

    Android:

        # cat /proc/cpuinfo
        Processor : ARMv6-compatible processor rev 2 (v6l)
        BogoMIPS  : 527.56
        Features  : swp half thumb fastmult edsp java
        # ./pi_android 10000
        Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave
        ... 5.95 sec. (real time)

    So the ARM926EJ-S does appear to be slower than my Android phone, but not half as fast, as the BogoMIPS figures would suggest. I am still unsure about the clock speed of the ARM9 CPU.
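
    If the board's kernel was built with the cpufreq driver, the current core clock is exposed through sysfs and can simply be read from a file (for example cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq); whether that node exists depends on the kernel configuration of this particular S3C2416 build, so treat it as a hedged option rather than a guarantee. A minimal sketch of reading it, written in Java purely for illustration (a plain cat works just as well):

        import java.nio.file.Files;
        import java.nio.file.Path;

        public class CpuFreq {
            public static void main(String[] args) throws Exception {
                // cpufreq reports the current clock of cpu0 in kHz, if the driver is present.
                Path node = Path.of("/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq");
                if (Files.exists(node)) {
                    long khz = Long.parseLong(Files.readString(node).trim());
                    System.out.printf("cpu0 clock: %.0f MHz%n", khz / 1000.0);
                } else {
                    System.out.println("cpufreq not exposed by this kernel");
                }
            }
        }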

    Read the article

  • How much can distance and latency (ms) affect download speed?

    - by Prix
    Consider A (client) and B (server), where A downloads from B. How much can bad routing from A to B affect the download speed? For example, A runs a tracert to B and gets a 10-hop route where the average latency is around 300 ms with 10% packet loss at the 4th hop, whereas when the connection is normal the average latency from A to B is 10 to 30 ms. Could this sort of problem reduce A's download speed drastically, or should the speed stay the same as long as both sides and all routes in between have enough bandwidth for A's full speed from B and vice versa? Besides tracert and ping analysis from A to B, what else can be used to identify the problem? If you need extra information please let me know.
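
    As a rough rule of thumb, a single TCP connection cannot move data faster than its window size divided by the round-trip time, and sustained packet loss lowers that ceiling further, so a jump from ~30 ms to ~300 ms can cut throughput by an order of magnitude even when every link has spare capacity. A back-of-the-envelope sketch (the 64 KB window is an assumed value, not something measured between A and B):

        public class ThroughputBound {
            public static void main(String[] args) {
                double windowBytes = 64 * 1024;   // assumed receive window, no window scaling
                double rttNormal = 0.030;         // 30 ms round trip
                double rttBad = 0.300;            // 300 ms round trip

                // Upper bound on single-connection throughput: window / RTT.
                System.out.printf("30 ms RTT:  ~%.0f KB/s%n", windowBytes / rttNormal / 1024);
                System.out.printf("300 ms RTT: ~%.0f KB/s%n", windowBytes / rttBad / 1024);
            }
        }

    Tools such as mtr (a combined traceroute and ping) and iperf are commonly used alongside tracert and ping to pin down where the loss and latency are being introduced.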

    Read the article

  • Hosting a high-traffic Facebook app (game)

    - by z3cko
    We are currently developing a high-traffic Facebook application. All of the traffic will come within one month, during which we expect 500,000 to 1,000,000 users. After that month the game is over, we have a winner, and the app will be archived. We plan to build the application with Ruby on Rails and are looking for hosting options that can deal with the traffic. The problem is not so much the total number of users as the peaks: we will get around 500,000 requests daily within a short time frame (within 3 minutes in the worst case, which is roughly 2,800 requests per second at peak). We expect 500,000 to 1,000,000 users of the application, with the peak at 1:00 pm (GMT+1), when most (up to 80%) of the users will send most of their requests. The requests run from the 11th of June to the 11th of July; after that the app/game is closed. We are currently developing an aggressive caching mechanism, and we are thinking about 2 or 3 small apps/web services to handle the load. The load breaks down as follows: a) main application, cached data (11 screens, 200k each); b) voting: every day until 1:00 pm (GMT+1) every user votes, sending about 10 KB of data, with high concurrent peaks. Questions: is there any specific application setup you would recommend? Are there any hosting partners you can recommend? Thanks!

    Read the article

  • How to increase the execution speed of my program

    - by ratty
    I am creating a project in C#.NET and its execution is very slow. I have found the reason: in one method I copy the values from one list to another, and that list contains more than 3,000 values for every row. How can I speed up this process? Can anybody help me?
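
    If the slow part really is an element-by-element copy loop, the usual first fix is a single bulk copy; in C# that would be List<T>.AddRange or the List<T>(IEnumerable<T>) constructor, and if the destination never needs to be an independent copy, passing the existing list by reference avoids the copy altogether. A sketch of the loop-versus-bulk difference, written in Java only for illustration:

        import java.util.ArrayList;
        import java.util.List;

        public class CopyDemo {
            public static void main(String[] args) {
                List<Integer> source = new ArrayList<>();
                for (int i = 0; i < 3000; i++) source.add(i);

                // Slow pattern: one add() call (and repeated capacity growth) per element.
                List<Integer> perElement = new ArrayList<>();
                for (Integer v : source) perElement.add(v);

                // Usually faster: size the destination once and copy in one bulk operation.
                List<Integer> bulk = new ArrayList<>(source);

                System.out.println(perElement.size() + " " + bulk.size());
            }
        }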

    Read the article

  • Python vs. Java performance (runtime speed)

    - by Bijan
    Ignoring all the other characteristics of each language and focusing SOLELY on speed, which language performs better? You'd think this would be a rather simple question to answer, but I haven't found a decent answer. I'm aware that some types of operations may be faster in Python and vice versa, but I cannot find any detailed information on this. Can anyone shed some light on the performance differences?

    Read the article

  • Speed of NSScanner vs NSXMLParser?

    - by Chris
    I have an iPhone app that reads in an XML file and then pulls out the necessary data in a loop using an NSScanner. The XML is not particularly long. I am wondering whether it would be worth the work to implement NSXMLParser in place of NSScanner, and whether I would see any real improvement in speed.

    Read the article

  • Speed test (open source)

    - by Nimbuz
    Are there any good (preferably PHP) open source scripts that I can use to measure download/upload speeds? Something like speedtest.net, but without Flash. Basically, I want to put up a 'broadband speed test' website. I found this, but is there anything better already available? Thanks!

    Read the article

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this: I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien who has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attract quite a lot of users if it works out right. While I'm fully capable of developing the program, I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here are a few questions on it (feel free to turn this question into a resource thread as well).

    Databases: At the moment I plan to use the MySQLi features in PHP5. However, how should I set up the databases in relation to users and content? Do I actually need multiple databases? At the moment everything is jumbled into one database, although I've been considering splitting user data into one, actual content into another, and core site content (template masters etc.) into a third. My reasoning is that sending queries to different databases would ease the load on each of them, since one database currently takes all three kinds of load. Would this still be effective if they were all on the same server?

    Caching: I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database, and each time a template is called its cached copy (an HTML document) is used. At the moment I have two types of variable in these templates: static vars and dynamic vars. Static vars are usually things that don't change often, like page names and the name of the site; dynamic vars change on each page load. My question on this: say I have comments on different articles. Which is the better solution: store the simple comment template and render the comments (from a DB call) each time the page is loaded, or store a cached copy of the comments page as an HTML page and re-cache it each time a comment is added, edited or deleted?

    Finally: does anyone have any tips or pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use (Facebook and Yahoo! lend it plenty of credibility), but are there any pitfalls I should watch out for? Thanks, Ross
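
    On the comments question, the common pattern is the second option: cache the rendered HTML fragment and regenerate (or simply delete) it whenever a comment is added, edited or deleted, so reads are a cheap cache hit and only writes pay the rendering cost. A minimal sketch of that idea, written in Java for illustration (a PHP version would cache to a file or to something like memcached the same way; the cache directory and the render function here are made up for the example):

        import java.nio.file.*;

        public class CommentCache {
            private static final Path CACHE_DIR = Path.of("/tmp/comment-cache"); // assumed location

            // Read path: serve the cached fragment if present, otherwise render once and store it.
            static String commentsHtml(int articleId) throws Exception {
                Path cached = CACHE_DIR.resolve(articleId + ".html");
                if (Files.exists(cached)) return Files.readString(cached);
                String html = renderFromDatabase(articleId);
                Files.createDirectories(CACHE_DIR);
                Files.writeString(cached, html);
                return html;
            }

            // Write path: any change to the comments just invalidates the cached copy.
            static void onCommentChanged(int articleId) throws Exception {
                Files.deleteIfExists(CACHE_DIR.resolve(articleId + ".html"));
            }

            // Stand-in for the real template rendering plus DB query.
            static String renderFromDatabase(int articleId) {
                return "<div class=\"comments\">...</div>";
            }
        }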

    Read the article

  • Matlab - binary vector with high concentration of 1s (or 0s)

    - by JohnIdol
    What's the best way to generate X random binary vectors of size N whose concentration of 1s (or, symmetrically, of 0s) spans from very low to very high? Using randint or unidrnd (as in this question) generates binary vectors with a uniform distribution, which is not what I need in this case. Any help appreciated!
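
    One straightforward approach is to draw each vector from a Bernoulli distribution with its own probability p and let p sweep from near 0 to near 1; in MATLAB that is essentially rand(1, N) < p for each chosen p. A sketch of the same idea, written in Java only for illustration:

        import java.util.Arrays;
        import java.util.Random;

        public class BernoulliVectors {
            public static void main(String[] args) {
                int X = 5;    // number of vectors
                int N = 20;   // vector length
                Random rng = new Random();

                for (int k = 0; k < X; k++) {
                    // Density of 1s sweeps from low to high across the X vectors.
                    double p = (k + 1) / (double) (X + 1);
                    int[] v = new int[N];
                    for (int i = 0; i < N; i++) v[i] = rng.nextDouble() < p ? 1 : 0;
                    System.out.println("p=" + p + " -> " + Arrays.toString(v));
                }
            }
        }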

    Read the article

  • MySQL high I/O usage queries

    - by jack
    MySQL has a built-in slow query logger. Are there any options or third-party tools that can detect the queries causing high I/O usage, in the same way the slow query logger detects slow queries?

    Read the article

  • Programming Language Choices for High Integrity Systems

    - by Finbarr
    What programming languages are a good choice for high integrity systems? An example of a bad choice is Java, as there is a considerable amount of code that is inaccessible to the programmer. I am looking for examples of strongly typed, block-structured languages where the programmer is responsible for 100% of the code and there is as little interference as possible from things like a JVM. Compilers will obviously be an issue. The language must have a complete and unambiguous definition.

    Read the article

  • weighted RNG speed problem in C++

    - by supert
    I have a (fast) bit of C++ code that samples cards from a 52 card deck:

        void sample_allcards(int table[5], int holes[], int players) {
            int temp[5 + 2 * players];
            bool try_again;
            int c, n, i;

            for (i = 0; i < 5 + 2 * players; i++) {
                try_again = true;
                while (try_again == true) {
                    try_again = false;
                    c = fast_rand52();
                    // reject collisions
                    for (n = 0; n < i + 1; n++) {
                        try_again = (temp[n] == c) || try_again;
                    }
                    temp[i] = c;
                }
            }
            copy_cards(table, temp, 5);
            copy_cards(holes, temp + 5, 2 * players);
        }

    I am implementing code to sample the hole cards according to a known distribution (stored as a 2d table). My code for this looks like:

        void sample_allcards_weighted(double weights[][HOLE_CARDS], int table[5], int holes[], int players) {
            // weights are distribution over hole cards
            int temp[5 + 2 * players];
            int n, i;

            // table cards
            for (i = 0; i < 5; i++) {
                bool try_again = true;
                while (try_again == true) {
                    try_again = false;
                    int c = fast_rand52();
                    // reject collisions
                    for (n = 0; n < i + 1; n++) {
                        try_again = (temp[n] == c) || try_again;
                    }
                    temp[i] = c;
                }
            }

            for (int player = 0; player < players; player++) {
                // hole cards according to distribution
                i = 5 + 2 * player;
                bool try_again = true;
                while (try_again == true) {
                    try_again = false;
                    // weighted-sample c1 and c2 at once
                    double w[1326];
                    memcpy(w, weights[player], sizeof(w));
                    // h is a number < 1325
                    int h = weighted_randi(w, HOLE_CARDS);
                    // i2h uses h and sets temp[i] to the 2 cards implied by h
                    i2h(&temp[i], h);
                    // reject collisions
                    for (n = 0; n < i; n++) {
                        try_again = (temp[n] == temp[i]) || (temp[n] == temp[i+1]) || try_again;
                    }
                }
            }
            copy_cards(table, temp, 5);
            copy_cards(holes, temp + 5, 2 * players);
        }

    My problem? The weighted sampling algorithm is a factor of 10 slower, and speed is very important for my application. Is there a way to improve the speed of my algorithm to something more reasonable? Am I doing something wrong in my implementation? Thanks.
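
    The per-attempt cost is the likely culprit here: every retry copies 1,326 doubles with memcpy and, if weighted_randi scans the weights linearly, walks all of them as well. A common fix is to build a cumulative-weight table once per player and draw from it with a binary search, making each draw O(log n) after an O(n) setup; the memcpy can be dropped entirely if the weights are never modified. A sketch of that idea, written in Java for illustration (the same structure carries over to the C++ code):

        import java.util.Random;

        public class WeightedSampler {
            private final double[] cumulative;   // running sums of the weights
            private final Random rng = new Random();

            // O(n) setup, done once per weight table (e.g. once per player, not once per attempt).
            public WeightedSampler(double[] weights) {
                cumulative = new double[weights.length];
                double sum = 0;
                for (int i = 0; i < weights.length; i++) {
                    sum += weights[i];
                    cumulative[i] = sum;
                }
            }

            // O(log n) draw: first index whose cumulative weight exceeds a uniform sample.
            public int sample() {
                double r = rng.nextDouble() * cumulative[cumulative.length - 1];
                int lo = 0, hi = cumulative.length - 1;
                while (lo < hi) {
                    int mid = (lo + hi) >>> 1;
                    if (cumulative[mid] <= r) lo = mid + 1; else hi = mid;
                }
                return lo;
            }
        }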

    Read the article

  • Speed up the loop operation in R

    - by Kay
    Hi, I have a big performance problem in R. I wrote a function that iterates over a data.frame object; it simply adds a new column to the data.frame and accumulates values (a simple operation). The data.frame has roughly 850,000 rows. My PC has now been working on it for about 10 hours and I have no idea about the remaining runtime.

        dayloop2 <- function(temp) {
            for (i in 1:nrow(temp)) {
                temp[i, 10] <- i
                if (i > 1) {
                    if ((temp[i, 6] == temp[i - 1, 6]) & (temp[i, 3] == temp[i - 1, 3])) {
                        temp[i, 10] <- temp[i, 9] + temp[i - 1, 10]
                    } else {
                        temp[i, 10] <- temp[i, 9]
                    }
                } else {
                    temp[i, 10] <- temp[i, 9]
                }
            }
            names(temp)[names(temp) == "V10"] <- "Kumm."
            return(temp)
        }

    Any ideas how to speed up this operation?

    Read the article

  • MongoDB vs CouchDB (Speed optimization)

    - by Edward83
    Hi! I ran some speed tests to compare MongoDB and CouchDB. Only inserts were measured during testing, and MongoDB came out about 15x faster than CouchDB. I know this is partly because of sockets vs HTTP, but it is very interesting to me: how can I optimize inserts in CouchDB? Test platform: Windows XP SP3, 32-bit. I used the latest versions of MongoDB, the MongoDB C# driver, and the latest CouchDB installation package for Windows. Thanks!
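
    The usual lever for CouchDB insert throughput is batching: send many documents in one request to the _bulk_docs endpoint instead of paying one HTTP round trip per document. A rough sketch of such a call, using only the Java standard library for illustration (the host, database name and documents are placeholders, and a CouchDB client library would normally wrap this call for you):

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class CouchBulkInsert {
            public static void main(String[] args) throws Exception {
                // Many documents in a single request instead of one request per document.
                String payload = "{\"docs\": [{\"value\": 1}, {\"value\": 2}, {\"value\": 3}]}";

                URL url = new URL("http://localhost:5984/testdb/_bulk_docs"); // placeholder host/db
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(payload.getBytes(StandardCharsets.UTF_8));
                }
                System.out.println("HTTP " + conn.getResponseCode());
            }
        }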

    Read the article

  • Speed up multiple JDBC SQL queries?

    - by paddydub
    I'm working on a shortest-path A* algorithm in Java with a MySQL database. I'm executing the following SQL query approximately 300 times per run to find route connections in a table of 10,000 bus connections, and those 300 executions take about 6-7 seconds. Any suggestions on how I can speed this up, or ideas for a different method I could use? Thanks.

        ResultSet rs = stmt.executeQuery("select * from connections"
                + " where Connections.From_Station_stopID =" + StopID + ";");
        while (rs.next()) {
            int id = rs.getInt("To_Station_id");
            String routeID = rs.getString("To_Station_routeID");
            Double lat = rs.getDouble("To_Station_lat");
            Double lng = rs.getDouble("To_Station_lng");
            int time = rs.getInt("Time");
        }
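
    With only 10,000 rows, the biggest win is usually to eliminate the ~300 network round trips entirely: load the whole connections table once into a map keyed by From_Station_stopID and look up neighbours in memory during the A* search. (An index on From_Station_stopID plus a reused PreparedStatement also helps if the per-node queries stay.) A sketch of the load-once approach, reusing the column names from the question; the open java.sql.Connection and the small Edge holder are assumptions for the example:

        import java.sql.*;
        import java.util.*;

        public class ConnectionCache {
            // Assumed holder for one outgoing connection.
            record Edge(int toId, String routeId, double lat, double lng, int time) {}

            // One query up front instead of ~300 queries during the search.
            static Map<Integer, List<Edge>> loadAll(Connection db) throws SQLException {
                Map<Integer, List<Edge>> byStop = new HashMap<>();
                try (Statement stmt = db.createStatement();
                     ResultSet rs = stmt.executeQuery("select * from connections")) {
                    while (rs.next()) {
                        int from = rs.getInt("From_Station_stopID");
                        Edge e = new Edge(rs.getInt("To_Station_id"),
                                          rs.getString("To_Station_routeID"),
                                          rs.getDouble("To_Station_lat"),
                                          rs.getDouble("To_Station_lng"),
                                          rs.getInt("Time"));
                        byStop.computeIfAbsent(from, k -> new ArrayList<>()).add(e);
                    }
                }
                return byStop;
            }
        }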

    Read the article

  • Higher speed options for executing very large (20 GB) .sql file in MySQL

    - by Jonogan
    My firm was delivered a 20+ GB .sql file in response to a request for data from the government. I don't have many options for getting the data in a different format, so I need options for importing it in a reasonable amount of time. I'm running it on a high-end server (Windows 2008 64-bit, MySQL 5.1) using Navicat's batch execution tool. It has been running for 14 hours and shows no signs of being near completion. Does anyone know of any higher-speed options for such a transaction? Or is this what I should expect given the file size? Thanks

    Read the article

  • PHP speed optimisation.

    - by Petah
    Hi, I'm wondering about speed optimisation in PHP. I have a series of files that are requested on every page load; on average there are about 20 of them. Each file must be read and parsed if it has changed, and this is on top of the standard files required for a web page (HTML, CSS, images, etc.). E.g.: the client requests a page; the server outputs the HTML, CSS and images; the server outputs the dynamic content (roughly 20 files combined and minified). What would be the best way to serve these files as fast as possible?

    Read the article

  • Higher than high-level web frameworks or CMS's?

    - by Ben
    I'm looking for options that allow very high-level web site development, with these special characteristics:

    - not requiring the user to program in a complex programming language
    - not requiring the user to use GUI-like administration areas
    - allowing the user to "program" in a lightweight markup language

    The last point is not only about the look and structure of the output but also about creating simple dynamic output. For example:

    - listing pages
    - fetching the content of other pages
    - doing a site search and displaying its output dynamically
    - showing or hiding parts of the page depending on login status

    Of course, I am not expecting a solution that provides the same possibilities as a Ruby, Python or PHP web framework. Rather, I am looking for support of the "basics" that are common to web sites. Until now I have found only one piece of software that fulfills these requirements, but while it's free, it's not open source: BoltWire at http://www.boltwire.com/. Being open source is a requirement.

    Read the article

  • Process taking a long time on Itanium server

    - by Vishwa
    Hi, we have an application that was recently migrated from a PA-RISC server to an Itanium server. After the migration we noticed a significant increase in the time taken to complete one process. When we tracked the time taken by each part, we found that the system time is 7.590000 and the user time is 3.990000, but the elapsed time is 70.434882! Because of this huge elapsed time, the overall performance of the application has dropped sharply. I wrote some small programs to perform I/O operations on the Itanium, and those scripts run faster on the Itanium than on the PA-RISC. What could be the reason for the high elapsed time of this process?

    Read the article

  • Android ImageView scrolling to display a full high-quality image

    - by Javadid
    Hi friends, I am working on displaying an image and placing an icon on top of it; clicking the icon shows an enlarged version of the image. Putting the ImageView holding the image in a LinearLayout scales the image to the width of the dialog, but the problem is that I need to display the image in a dialog and the image is very high resolution, so it is far wider than the dialog. I need to show the actual image with scrolling in both directions so the whole image can be seen. But whenever I try putting the ImageView in a ScrollView, the top of my ImageView is blank, and although the image scrolls downwards, the width is still scaled to the width of the dialog. Help, guys!

    Read the article

  • Script Speed vs Memory Usage

    - by Doug Neiner
    I am working on an image generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory; the second is much faster but uses 6x the memory. There is no leakage in either script (as far as I can tell). In a limited benchmark, here is how they performed:

        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        One    |     65.626 |     540,036 |    200
        Two    |     20.207 |   3,269,600 |    200

    And here is the average per image (if you don't want to do your own math):

        METHOD | TOTAL TIME | PEAK MEMORY | IMAGES
        One    |      0.328 |     540,036 |      1
        Two    |      0.101 |   3,269,600 |      1

    Which method should I use, and why? I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit. I am leaning toward the faster method: although it uses more memory, it takes a third of the time and would reduce the number of concurrent requests.

    Read the article

  • Drupal CMS most Stable for High Traffic

    - by Aditi
    Drupal users report higher satisfaction with Drupal than Joomla users do with Joomla, for a number of reasons. If you are choosing a high-performance platform to run a high-traffic website, a Drupal installation is a strong candidate.

    Overload scenario: Drupal is a scalable, high-performance CMS and is stable under heavy load. If your server is pushed beyond its capacity, Drupal shuts off gracefully and doesn't crash; as soon as the server is back within its traffic capability, Drupal handles all requests smoothly again. For example, if your dedicated server can handle a maximum of 50,000 visits a day and on a lucky day your news creates a buzz in social media and traffic rises to 70,000 in one day, the server is overloaded and would usually crash, sometimes causing permanent damage to your database. A Drupal site instead closes down gracefully, and as soon as traffic drops back within the server's capacity it accepts all requests again.

    Extensibility: Drupal add-ons integrate well with the core, and the framework makes it easy to extend the CMS's capabilities, which keeps an extended installation stable; Joomla, by contrast, loses strength when many plugins and heavy customizations are running. In any CMS, a large number of plugins makes the content pipeline more complex and reduces the ability to handle high-traffic requests.

    Accessibility management (ACL): a high-traffic website typically has several users and content contributors. ACLs are group roles: assigning people to the various registered user levels and allocating different kinds of privileges, the most common example being the ability to see or edit a section or selected pages. This feature sets Drupal apart from other CMSs.

    Read the article

  • How do I implement deceleration for the player character?

    - by tesselode
    Using delta time with addition and subtraction is easy:

        player.speed += 100 * dt

    However, multiplication and division complicate things a bit. For example, let's say I want the player to double his speed every second:

        player.speed = player.speed * 2 * dt

    I can't do this, because it will actually slow the player down (unless delta time is really high). Division is the same, except it will speed things way up. How can I handle multiplication and division with delta time?

    Edit: it looks like my question has confused everyone. I really just wanted to implement deceleration without this horrible mass of code:

        else
            if speed > 0 then
                speed = speed - 20 * dt
                if speed < 0 then speed = 0 end
            end
            if speed < 0 then
                speed = speed + 20 * dt
                if speed > 0 then speed = 0 end
            end
        end

    because that's way bigger than it needs to be. So far a better solution seems to be:

        speed = speed - speed * whatever_number * dt
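
    One way to keep the damping frame-rate independent: the additive form speed = speed - speed * k * dt is only an approximation, while the exact form of "keep a fixed fraction of the speed per second" is exponential decay, speed = speed * perSecondFactor ^ dt. A small sketch of the linear-toward-zero and exponential variants side by side, written in Java for illustration (the constants are arbitrary):

        public class Deceleration {
            public static void main(String[] args) {
                double dt = 1.0 / 60.0;   // frame time in seconds (assumed 60 fps)
                double speed = 100.0;

                // Linear deceleration toward zero, clamped so it never overshoots;
                // the signum/min pair also handles negative speeds correctly.
                double decel = 20.0;      // units per second^2
                double linear = speed - Math.signum(speed) * Math.min(Math.abs(speed), decel * dt);

                // Exponential decay: keep 50% of the speed per second, at any frame rate.
                double keepPerSecond = 0.5;
                double exponential = speed * Math.pow(keepPerSecond, dt);

                System.out.printf("linear: %.3f  exponential: %.3f%n", linear, exponential);
            }
        }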

    Read the article
