Search Results

Search found 707 results on 29 pages for 'timing'.

Page 5/29 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Measuring execution time of a system call in C++

    - by jm1234567890
    I have found some code on measuring execution time here http://www.dreamincode.net/forums/index.php?showtopic=24685 However, it does not seem to work for system calls. I imagine this is because the execution jumps out of the current process. clock_t begin=clock(); system(something); clock_t end=clock(); cout<<"Execution time: "<<diffclock(end,begin)<<" s."<<endl; Then double diffclock(clock_t clock1,clock_t clock2) { double diffticks=clock1-clock2; double diffms=(diffticks)/(CLOCKS_PER_SEC); return diffms; } However this always returns 0 seconds... Is there another method that will work? Also, this is in Linux. Thanks!
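
    For what it's worth, clock() counts CPU time used by the calling process, and the child that system() spawns doesn't accumulate into it, which is likely why the difference comes out as 0. A minimal sketch using wall-clock time instead (Linux assumed; the "sleep 2" command is just a stand-in):

```cpp
#include <time.h>
#include <cstdlib>
#include <iostream>

int main() {
    struct timespec begin, end;

    clock_gettime(CLOCK_MONOTONIC, &begin);   // wall-clock start
    std::system("sleep 2");                   // placeholder for the real command
    clock_gettime(CLOCK_MONOTONIC, &end);     // wall-clock end

    // Elapsed wall-clock time in seconds, including time spent in the child.
    double elapsed = (end.tv_sec - begin.tv_sec)
                   + (end.tv_nsec - begin.tv_nsec) / 1e9;
    std::cout << "Execution time: " << elapsed << " s" << std::endl;
    return 0;
}
```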

    Read the article

  • Difference between arguments in setInterval calls

    - by Martin Janiczek
    What's the difference between these setInterval calls and which ones should be used? setInterval("myFunction()",1000) setInterval("myFunction",1000) setInterval(myFunction(),1000) setInterval(myFunction,1000) My guess is that JS uses eval() on the first two (strings) and calls the latter two directly. Also, I don't understand the difference between the calls with and without parentheses. The ones with parentheses call it directly and then periodically call its return value?
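
    For reference, a small sketch of the four forms side by side; the string forms are re-evaluated (eval-style) in the global scope on every tick, and the form with parentheses runs the function once, immediately, and hands its return value to setInterval:

```javascript
function myFunction() {
  console.log("tick");
}

setInterval(myFunction, 1000);      // preferred: pass the function reference; called every second
setInterval("myFunction()", 1000);  // string is eval'd in the global scope every second -- avoid
setInterval("myFunction", 1000);    // string eval's to the function object but never calls it -- does nothing visible
setInterval(myFunction(), 1000);    // calls myFunction once NOW and passes its return value (undefined) to setInterval
```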

    Read the article

  • Delay function from running for n seconds, then run it once (2-minute question)

    - by Ozaki
    TLDR I have a function that runs on the end of a pan in an OpenLayers map. Don't want it to fire continuously. I have a function that runs on the end of panning a map. I want it so that it will not fire the function until, say, 3 seconds after the pan has finished. However, I don't want to queue up the function to fire 10 or so times the way setTimeout is currently doing. How can I delay a function from running for n seconds, then run it only once no matter how many times it has been called?
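
    What's described is usually called debouncing: every call resets a single timer, and the function only fires once the calls stop. A sketch (the OpenLayers "moveend" registration and the onPanFinished handler are assumptions about your setup):

```javascript
// Debounce: run fn once, `delay` ms after the last call; later calls reset the timer.
function debounce(fn, delay) {
  var timerId = null;
  return function () {
    var args = arguments, self = this;
    clearTimeout(timerId);                  // cancel any queued run
    timerId = setTimeout(function () {
      fn.apply(self, args);                 // fire once, after the quiet period
    }, delay);
  };
}

// Hypothetical OpenLayers wiring: "moveend" fires when a pan finishes.
map.events.register("moveend", map, debounce(onPanFinished, 3000));
```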

    Read the article

  • Why are gettimeofday() intervals occasionally negative?

    - by Andres Jaan Tack
    I have an experimental library whose performance I'm trying to measure. To do this, I've written the following: struct timeval begin; gettimeofday(&begin, NULL); { // Experiment! } struct timeval end; gettimeofday(&end, NULL); // Print the time it took! std::cout << "Time: " << 100000 * (end.tv_sec - begin.tv_sec) + (end.tv_usec - begin.tv_usec) << std::endl; Occasionally, my results include negative timings, some of which are nonsensical. For instance: Time: 226762 Time: 220222 Time: 210883 Time: -688976 What's going on?
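
    One thing stands out: a second is 1,000,000 microseconds, but the expression multiplies the seconds difference by 100,000, so whenever tv_usec wraps around (end.tv_usec < begin.tv_usec) the sum can go negative. The -688976 sample works out to about 211,000 us with the correct factor, in line with the other readings. A corrected sketch (gettimeofday can also be stepped by NTP; CLOCK_MONOTONIC avoids that, but the factor is the immediate bug):

```cpp
#include <sys/time.h>
#include <iostream>

int main() {
    struct timeval begin, end;
    gettimeofday(&begin, NULL);
    // Experiment!
    gettimeofday(&end, NULL);

    // 1 s = 1,000,000 us; using 100,000 lets a wrapped tv_usec drive the total negative.
    long elapsed_us = 1000000L * (end.tv_sec - begin.tv_sec)
                    + (end.tv_usec - begin.tv_usec);
    std::cout << "Time: " << elapsed_us << std::endl;
    return 0;
}
```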

    Read the article

  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python, trying to find out how many times it could add one to an integer in one minute's time. Assuming two computers are the same except for the speed of the CPUs, this should give an estimate of how fast some CPU operations may be for the computer in question. The code below is an example of a test designed to fulfill the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone make any suggestions as to how to get the most additions in a minute's time span? Higher numbers are desirable. EDIT: This experiment is being written in Python 3.1 and is 15% faster than the fourth speed-up attempt. def start(seconds): import time, _thread def stop(seconds, signal): time.sleep(seconds) signal.pop() total, signal = 0, [None] _thread.start_new_thread(stop, (seconds, signal)) while signal: total += 1 return total if __name__ == '__main__': print('Testing the CPU speed ...') print('Relative speed:', start(60))
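
    One variant worth trying, sketched below (not a drop-in replacement for the threaded version): drop the signal list and the helper thread, and check the clock only once per chunk so the hot loop stays a bare total += 1. It can overshoot by up to one chunk of additions, but the per-iteration overhead is just the increment.

```python
import time

def count_additions(seconds=60, chunk=100000):
    """Count how many `total += 1` operations fit into `seconds` of wall-clock time.

    The clock is read only once per `chunk` iterations, so the inner loop
    body is nothing but the addition being measured.
    """
    total = 0
    deadline = time.time() + seconds
    while time.time() < deadline:
        for _ in range(chunk):   # tight inner loop, no per-iteration clock read
            total += 1
    return total

if __name__ == '__main__':
    print('Testing the CPU speed ...')
    print('Relative speed:', count_additions(60))
```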

    Read the article

  • Accurately and securely measure the time spent viewing a web page

    - by balpha
    Suppose the following: You have a web page that presents a simple game to a user (e.g. a quiz, a puzzle, etc.). The user solves the puzzle, submits the result, and you want to measure as precisely as possible how long they took to solve it. Assume it's quite simple, so we're talking seconds, not hours. Also assume JavaScript is required anyway, so there's no need to think of JS-disabled browsers. Finally, assume we don't want to use anything like Flash, Silverlight, or the like. I can think of several techniques:
    1. Simply take the time between the point when the data was sent from the server and the point when the submission arrives. Since this is exclusively server-side, there's no chance for cheating. However, issues like network latency and page rendering time might make this unfair for users with slow computers / browsers / internet connections.
    2. On the first request, just send the page without the actual game data. When everything is loaded so far, retrieve the game data through an AJAX call and populate it into the page. This is similar to 1., but reduces some of the caveats introduced through time spent on overhead.
    3. Have the time measured on the client side using JavaScript and submitted along with the solution. This would theoretically be the most accurate, but it introduces the possibility of cheating, because you're relying on client data.
    4. Use the request time headers of a "ready to play" AJAX call and the result submission request. Same caveat as 3., as it is still client data.
    5. A combination of server-side and client-side measuring with some kind of plausibility analysis. I can't think of a good way, but maybe you can.
    Thoughts? Other ideas?
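
    If it helps, here is a rough sketch of option 5 built on 2 and 3 (jQuery assumed; the endpoints and renderPuzzle are placeholders): the server measures the window between serving the game data and receiving the submission, the client measures its own elapsed time, and the server only trusts the client figure when it fits inside the window it measured itself.

```javascript
// Sketch: client reports its own elapsed time, server sanity-checks it.
var clientStart;

$.getJSON("/game-data", function (game) {    // server records t_served for this session
  renderPuzzle(game);                        // placeholder: populate the page with the puzzle
  clientStart = new Date().getTime();        // start counting once the puzzle is visible
});

function submitSolution(solution) {
  var clientElapsedMs = new Date().getTime() - clientStart;
  // Server side: compute serverElapsed = t_received - t_served and accept
  // clientElapsedMs only if it is <= serverElapsed plus a small tolerance.
  $.post("/submit", { solution: solution, clientElapsedMs: clientElapsedMs });
}
```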

    Read the article

  • mysql query timer for .net

    - by acidzombie24
    Is there something I can use to track how long my MySQL queries take? Perhaps log them if they take more than a certain amount of time? Or track all queries but only hold the longest query time? I'm using this with C# .NET and ASP.NET. I'd like to use this to occasionally check if my queries are getting slow.
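
    On the database side, MySQL's slow query log (slow_query_log plus long_query_time) already records queries over a threshold. On the application side, one sketch is to wrap command execution with a Stopwatch and log anything slow (assumes the MySQL Connector/NET MySqlCommand API; the helper name and threshold are made up for illustration):

```csharp
using System;
using System.Diagnostics;
using MySql.Data.MySqlClient;   // MySQL Connector/NET

public static class TimedQuery
{
    // Hypothetical helper: execute a reader query and log it if it exceeds a threshold.
    // Note: this times the execute call itself, not the time spent reading rows.
    public static MySqlDataReader ExecuteLogged(MySqlCommand cmd, TimeSpan threshold)
    {
        var sw = Stopwatch.StartNew();
        var reader = cmd.ExecuteReader();
        sw.Stop();

        if (sw.Elapsed > threshold)
        {
            Console.WriteLine("Slow query ({0} ms): {1}", sw.ElapsedMilliseconds, cmd.CommandText);
        }
        return reader;
    }
}
```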

    Read the article

  • Releasing an NSTimer iPhone?

    - by Conor Taylor
    I have an NSTimer declared in my .h, and in the viewDidLoad of the .m I have the code: timer = [NSTimer scheduledTimerWithTimeInterval:kComplexTimer target:self selector:@selector (main) userInfo:nil repeats:YES]; I also have [timer release]; in my dealloc. However, when I exit the view and return to it, the timer has not in fact been released; it has doubled in speed! How do I solve this, and what am I doing wrong? Thanks
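
    A scheduled NSTimer is retained by its run loop, so releasing your own reference doesn't stop it; each time viewDidLoad schedules another one, the selector fires twice as often. A sketch of the usual fix, assuming the timer is created in viewDidLoad as above (pick whichever lifecycle hook matches when the view goes away):

```objective-c
- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [timer invalidate];   // removes the timer from the run loop so it stops firing
    timer = nil;
}
```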

    Read the article

  • Starting an http request, but dropping out if no response after a certain time

    - by nbv4
    I'm trying to write a python script that does the following from within a minutely cronjob: try to fetch a URL; if there is no response after 10 seconds, abandon the request and immediately issue a command via os.system to restart the webserver. The problem is that when my server crashes, it doesn't return a response at all, so if I just let my script wait for the response it can hang for 10 minutes or more. I want it to issue the restart immediately once it detects a slow response. I know such a script could be written in probably less than 5 lines of code, but I have no idea how to go about it.
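
    For the timeout part, urlopen accepts a timeout argument (Python 2.6+ / 3.x), so a minimal sketch could look like this (the URL and restart command are placeholders for your setup):

```python
import os
import urllib.request  # on Python 2, use urllib2 instead

URL = "http://example.com/"                  # placeholder: page to probe
RESTART_CMD = "/etc/init.d/apache2 restart"  # placeholder: your restart command

def check_and_restart():
    try:
        urllib.request.urlopen(URL, timeout=10)  # give up after 10 seconds
    except Exception:
        os.system(RESTART_CMD)  # no (timely) response: restart the webserver

if __name__ == "__main__":
    check_and_restart()
```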

    Read the article

  • Measuring execution time of a call to system() in C++

    - by jm1234567890
    I have found some code on measuring execution time here http://www.dreamincode.net/forums/index.php?showtopic=24685 However, it does not seem to work for calls to system(). I imagine this is because the execution jumps out of the current process. clock_t begin=clock(); system(something); clock_t end=clock(); cout<<"Execution time: "<<diffclock(end,begin)<<" s."<<endl; Then double diffclock(clock_t clock1,clock_t clock2) { double diffticks=clock1-clock2; double diffms=(diffticks)/(CLOCKS_PER_SEC); return diffms; } However this always returns 0 seconds... Is there another method that will work? Also, this is in Linux. Thanks!

    Read the article

  • SAS disk performance drops a while after reboot.

    - by Flamewires
    So we have some workstations with identical hardware. The Fedora 14 box has a couple of weeks of uptime and still gets good performance. hdparm -tT /dev/sda /dev/sda: Timing cached reads: 21766 MB in 2.00 seconds = 10902.12 MB/sec Timing buffered disk reads: 586 MB in 3.00 seconds = 195.20 MB/sec The CentOS 5.5 boxes, however, seem to be okay after a reboot, /dev/sda: Timing cached reads: 34636 MB in 2.00 seconds = 17354.64 MB/sec Timing buffered disk reads: 498 MB in 3.01 seconds = 165.62 MB/sec but some time later (unsure exactly when; tested at approx. 1 day uptime) they drop to this: /dev/sda: Timing cached reads: 2132 MB in 2.00 seconds = 1064.96 MB/sec Timing buffered disk reads: 160 MB in 3.01 seconds = 53.16 MB/sec This is with very low load. I believe they all have the same BIOS settings. Any ideas what could cause this on CentOS? Ask for more info. It might also be worth noting that passing the --direct flag causes the slow boxes to perform similarly to the non-slow ones for buffered disk reads.

    Read the article

  • slow software raid

    - by Jure1873
    I've got software RAID 1 for / and /home and it seems I'm not getting the right speed out of it. Reading from md0 I get around 100 MB/sec. Reading from sda or sdb I get around 95-105 MB/sec. I thought I would get more speed (while reading data) from two drives. I don't know what the problem is. I'm using kernel 2.6.31-18. hdparm -tT /dev/md0 /dev/md0: Timing cached reads: 2078 MB in 2.00 seconds = 1039.72 MB/sec Timing buffered disk reads: 304 MB in 3.01 seconds = 100.96 MB/sec hdparm -tT /dev/sda /dev/sda: Timing cached reads: 2084 MB in 2.00 seconds = 1041.93 MB/sec Timing buffered disk reads: 316 MB in 3.02 seconds = 104.77 MB/sec hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 2150 MB in 2.00 seconds = 1075.94 MB/sec Timing buffered disk reads: 302 MB in 3.01 seconds = 100.47 MB/sec Edit: RAID 1

    Read the article

  • How to solve timing problems in automated UI tests with C# and Visual Studio?

    - by Lernkurve
    Question What is the standard approach to solve timing problems in automated UI tests? Concrete example I am using Visual Studio 2010 and Team Foundation Server 2010 to create automated UI tests and want to check whether my application has really stopped running: [TestMethod] public void MyTestMethod() { Assert.IsTrue(!IsMyAppRunning(), "App shouldn't be running, but is."); StartMyApp(); Assert.IsTrue(IsMyAppRunning(), "App should have been started and should be running now."); StopMyApp(); //Pause(500); Assert.IsTrue(!IsMyAppRunning(), "App was stopped and shouldn't be running anymore."); } private bool IsMyAppRunning() { foreach (Process runningProcesse in Process.GetProcesses()) { if (runningProcesse.ProcessName.Equals("Myapp")) { return true; } } return false; } private void Pause(int pauseTimeInMilliseconds) { System.Threading.Thread.Sleep(pauseTimeInMilliseconds); } StartMyApp() and StopMyApp() have been recorded with MS Test Manager 2010 and reside in UIMap.uitest. The last assert fails because the assertion is executed while my application is still in the process of shutting down. If I put a delay after StopApp() the test case passes. The above is just an example to explain my problem. What is the standard approach to solve these kinds of timing issues? One idea would be to wait with the assertion until I get some event notification that my app has been stopped.
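
    The usual approach is to replace fixed sleeps with a poll-until-condition-or-timeout helper, so the test waits exactly as long as the shutdown actually takes and fails cleanly if it never completes. A sketch (helper name and timeouts are illustrative):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class Wait
{
    // Poll a condition until it holds or a timeout expires, instead of a fixed sleep.
    public static void Until(Func<bool> condition, TimeSpan timeout, TimeSpan pollInterval)
    {
        var sw = Stopwatch.StartNew();
        while (!condition())
        {
            if (sw.Elapsed > timeout)
                throw new TimeoutException("Condition not met within " + timeout);
            Thread.Sleep(pollInterval);
        }
    }
}

// Usage in the test, replacing the fixed Pause(500):
// StopMyApp();
// Wait.Until(() => !IsMyAppRunning(), TimeSpan.FromSeconds(10), TimeSpan.FromMilliseconds(100));
// Assert.IsTrue(!IsMyAppRunning(), "App was stopped and shouldn't be running anymore.");
```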

    Read the article

  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a raid 5 (software) with 5x2TB drives. I encrypted the raid with cryptsetup and put an ext4-partition on top. In the beginning opening and mounting the raid took less than 10 seconds, now (for a few weeks) mounting alone takes 1:30 minutes and the cpu stays around 93% the whole time: The output of "time sudo mount /dev/mapper/8000 /media/8000" is: real 1m31.952s user 0m0.008s sys 1m25.229s At the same time only one line is added to /var/log/syslog: kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null) My Ubuntu-version is "12.04.1 LTS" and no updates are pending. I checked the partition with fsck, but it says that all is ok. The "cryptsetup luksOpen" command only takes a few seconds. I also tried changing the raid-bitmap (as it was suggested in some forum) but it did not change the behaviour. sudo mdadm --grow /dev/md0 -b internal and sudo mdadm --grow /dev/md0 -b none I had the idea that it might be the hardware being slow, but a read test with "sudo hdparm -t /dev/md0" spit out values between 62 and 159 MB/sec: Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec Timing buffered disk reads: 190 MB in 3.03 seconds = 62.65 MB/sec Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec Although I think it is strange that the read rate jumps by more than 100% - could that mean something? The speed test when reading from the mapped (decrypted) device shows similar behavior, although it is of course much slower. "sudo hdparm -t /dev/mapper/8000": Timing buffered disk reads: 56 MB in 3.02 seconds = 18.54 MB/sec Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec The output of a verbose mount "mount -vvv /dev/mapper/8000 /media/8000" does not help much: mount: fstab path: "/etc/fstab" mount: mtab path: "/etc/mtab" mount: lock path: "/etc/mtab~" mount: temp path: "/etc/mtab.tmp" mount: UID: 0 mount: eUID: 0 mount: spec: "/dev/mapper/8000" mount: node: "/media/8000" mount: types: "(null)" mount: opts: "(null)" mount: you didn't specify a filesystem type for /dev/mapper/8000 I will try type ext4 mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null) Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?

    Read the article

  • sql: Group by x,y,z; return grouped by x,y with lowest f(z)

    - by Sai Emrys
    This is for http://cssfingerprint.com I collect timing stats about how fast the different methods I use perform on different browsers, etc., so that I can optimize the scraping speed. Separately, I have a report about what each method returns for a handful of URLs with known-correct values, so that I can tell which methods are bogus on which browsers. (Each is different, alas.) The related tables look like this: CREATE TABLE `browser_tests` ( `id` int(11) NOT NULL AUTO_INCREMENT, `bogus` tinyint(1) DEFAULT NULL, `result` tinyint(1) DEFAULT NULL, `method` varchar(255) DEFAULT NULL, `url` varchar(255) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=33784 DEFAULT CHARSET=latin1 CREATE TABLE `method_timings` ( `id` int(11) NOT NULL AUTO_INCREMENT, `method` varchar(255) DEFAULT NULL, `batch_size` int(11) DEFAULT NULL, `timing` int(11) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=28849 DEFAULT CHARSET=latin1 (user_agent is broken down pre-insert into browser, version, and os from a small list of recognized values using regex; I keep the original user-agent string just in case.) I have a query like this that tells me the average timing for every non-bogus browser / version / method tuple: select c, avg(bogus) as bog, timing, method, browser, version from browser_tests as b inner join ( select count(*) as c, round(avg(timing)) as timing, method, browser, version from method_timings group by browser, version, method having c > 10 order by browser, version, timing ) as t using (browser, version, method) group by browser, version, method having bog < 1 order by browser, version, timing; Which returns something like: c bog tim method browser version 88 0.8333 184 reuse_insert Chrome 4.0.249.89 18 0.0000 238 mass_insert_width Chrome 4.0.249.89 70 0.0400 246 mass_insert Chrome 4.0.249.89 70 0.0400 327 mass_noinsert Chrome 4.0.249.89 88 0.0556 367 reuse_reinsert Chrome 4.0.249.89 88 0.0556 383 jquery Chrome 4.0.249.89 88 0.0556 863 full_reinsert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 187 0.8806 109 reuse_insert Chrome 5.0.307.11 123 0.0000 110 mass_insert_width Chrome 5.0.307.11 176 0.0000 231 mass_noinsert Chrome 5.0.307.11 176 0.0000 237 mass_insert Chrome 5.0.307.11 187 0.0000 314 reuse_reinsert Chrome 5.0.307.11 187 0.0000 372 full_reinsert Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 12 0.2500 102 jquery Chrome 5.0.335.0 [...] I want to modify this query to return only the browser/version/method with the lowest timing - i.e. something like: 88 0.8333 184 reuse_insert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 [...] How can I do this, while still returning the method that goes with that lowest timing? I could filter it app-side, but I'd rather do this in mysql since it'd work better with my caching.
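
    One common MySQL pattern for "the row with the lowest value per group" is to compute the per-group minimum and join it back. Sketched here on method_timings only, so the bogus filter from browser_tests would still need to be joined on top the same way as in the original query, and ties on the minimum timing will return more than one row:

```sql
SELECT t.c, t.timing, t.method, t.browser, t.version
FROM (
    SELECT COUNT(*) AS c, ROUND(AVG(timing)) AS timing, method, browser, version
    FROM method_timings
    GROUP BY browser, version, method
    HAVING c > 10
) AS t
INNER JOIN (
    -- Lowest average timing per browser/version, over the same per-method averages.
    SELECT browser, version, MIN(timing) AS best_timing
    FROM (
        SELECT ROUND(AVG(timing)) AS timing, method, browser, version
        FROM method_timings
        GROUP BY browser, version, method
        HAVING COUNT(*) > 10
    ) AS per_method
    GROUP BY browser, version
) AS best
  ON t.browser = best.browser
 AND t.version = best.version
 AND t.timing  = best.best_timing
ORDER BY t.browser, t.version;
```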

    Read the article

  • Is there a way for -webkit-animation-timing-function to apply to the entire animation instead of each keyframe?

    - by Ken Sykora
    I'm a bit new to animation, so forgive me if I'm missing a huge concept here. I need to animate an arrow that is pointing to a point on a curve, let's just say it's a cubic curve for the sake of this post. The arrow moves along the curve's line, always pointing a few pixels below it. So what I did, is I setup keyframes along the curve's line using CSS3: @-webkit-keyframes ftch { 0% { opacity: 0; left: -10px; bottom: 12px; } 25% { opacity: 0.25; left: 56.5px; bottom: -7px; } 50% { opacity: 0.5; left: 143px; bottom: -20px; } 75% { opacity: 0.75; left: 209.5px; bottom: -24.5px; } 100% { opacity: 1; left: 266px; bottom: -26px; } } However, when I run this animation using -webkit-animation-timing-function: ease-in, it applies that easing to each individual keyframe, which is definitely not what I want. I want the easing to apply to the entire animation. Is there a different way that I should be doing this? Is there some property to apply the easing to the entire sequence rather than each keyframe?
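
    The timing function is applied to each interval between consecutive keyframes rather than to the animation as a whole. One workaround is to set -webkit-animation-timing-function inside individual keyframes, which controls the segment from that keyframe to the next; a reduced sketch of the mechanism (the specific easing values per segment are illustrative, not a finished curve):

```css
@-webkit-keyframes ftch {
  0%   { opacity: 0;   left: -10px; bottom: 12px;  -webkit-animation-timing-function: ease-in; }
  50%  { opacity: 0.5; left: 143px; bottom: -20px; -webkit-animation-timing-function: linear; }
  100% { opacity: 1;   left: 266px; bottom: -26px; }
}
```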

    Read the article

  • Why is my SSH session timing out in less than a minute?

    - by John Smith
    Within a minute of connecting to my remote Linux server through SSH, my session times out and I cannot contact the server until a few seconds have passed. Meanwhile, I'm connected to other servers without interruption. This only happens when I connect from a hotel wireless AP. When I connect over my phone's Internet connection, the problem does not occur. Does anyone know what might be causing these unusual timeouts?
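
    A frequent culprit on hotel APs is a NAT or firewall that silently drops connections it considers idle; client-side keepalives usually work around it. A sketch for ~/.ssh/config (values are illustrative):

```
Host *
    # Send a keepalive every 30 seconds of inactivity;
    # give up only after 4 unanswered probes.
    ServerAliveInterval 30
    ServerAliveCountMax 4
```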

    Read the article

  • Threading models when talking to hardware devices

    - by Fuzz
    When writing an interface to hardware over a communication bus, communications timing can sometimes be critical to the operation of a device. As such, it is common for developers to spin up new threads to handle communications. It can also be a terrible idea to have a whole bunch of threads in your system, and in the case that you have multiple hardware devices you may have many, many threads that are outside the control of the main application. Certainly it can be common to have two threads per device, one for reading and one for writing. I am trying to determine the pros and cons of the two different models I can think of, and would love the help of the Programmers community.
    1. Each device instance handles its own threads (or shares a thread for a communication device). A thread may exist for writing, and one for reading. Requested writes to a device from the API are buffered and worked on by the writer thread. The read thread exists in the case of blocking communications, and uses callbacks to pass read data to the application. Timing of communications can be handled by the communications thread.
    2. Devices aren't given their own threads. Instead, read and write requests are queued/buffered. The application then calls a "DoWork" function on the interface and allows all reads and writes to take place and fire their callbacks. Timing is handled by the application, and the driver can request to be called at a given specific frequency.
    Pros for Item 1 include finer-grained control of timing at the communication level, at the expense of control over what's going on at the higher application level (which, for a real-time system, can be terrible). Pros for Item 2 include better control over the timing of the entire system for the application, at the expense of letting each driver handle its own business. If anyone has experience with these scenarios, I'd love to hear some ideas on the approaches used.

    Read the article

  • Jquery Ajax Call In For Loop Only Runs Once - Possible Issue with Timing & Exit Condition?

    - by Grumps
    Background I'm building a form that uses autocomplete against the EchoNest API. First the user picks an artist, using the Artist Suggest call. Next they select a song, but the Song and/or Artist song search doesn't provide a "wild card" search. It only returns exact matches. So based on the forums they suggest building an array of the songs and using autocomplete on the array. I can only get a maximum of 100 responses at a time. I do know based on the initial response the number of songs. My Plan: Wrap the ajax call in a for loop ('runonceloop'). Amend the loop exit condition after the first response with the total number of songs. Challenge I'm having: The 'runonceloop' only completes a single loop because, or at least that's what I believe, the exit condition is satisfied before the first response [1] is received. I've tried to adjust the 'exit condition' and 'counter' such that they are set and increased at the end of the success block. This seems to lock up my browser. Can someone provide some guidance on this situation?[2] I'd really appreciate it. I also don't think turning async off is a good idea because it locks the browser. Response[1]: { "response": { "status": { "code": "0", "message": "Success", "version": "4.2" }, "start": 0, "total": 121, //Used for "songs": [ { "id": "SOXZYYG127F3E1B7A2", "title": "Karma police" }, { "id": "SOXZABD127F3E1B7A2", "title" : "Creep" } ] } } } Code[2] var songsList = []; function getSongs() { var numsongs = 2; //at least 2 runs. var startindex = 0; runonceloop: //<~~~~Referenced in question for (var j = 0;j >= numsongs;) { console.log('numsongs' + numsongs); $.ajax({ url: "http://developer.echonest.com/api/v4/artist/songs", dataType: "jsonp", data: { results: 100, api_key: "XXXXXXXXXXX", format: "jsonp", name: $("#artist").val(), start: startindex }, success: function (data) { var songs = data.response.songs; numsongs = data.response.total; //modify exit condition for (var i = 0; i < songs.length; i++) { songsList.push(songs[i].title); } j +=100;// increase by 100 to match number of responses. } }); }};
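
    One way to avoid the loop racing the asynchronous calls entirely, sketched from the code above: issue the next request from inside the success callback, and use total from the response to decide when to stop. (Note also that as written the for condition j >= numsongs is false on the first pass, so the loop body never runs even once.)

```javascript
var songsList = [];

function fetchSongs(startIndex) {
  $.ajax({
    url: "http://developer.echonest.com/api/v4/artist/songs",
    dataType: "jsonp",
    data: {
      results: 100,
      api_key: "XXXXXXXXXXX",
      format: "jsonp",
      name: $("#artist").val(),
      start: startIndex
    },
    success: function (data) {
      var songs = data.response.songs;
      for (var i = 0; i < songs.length; i++) {
        songsList.push(songs[i].title);
      }
      // Issue the next page only after this one has returned.
      if (startIndex + 100 < data.response.total) {
        fetchSongs(startIndex + 100);
      }
    }
  });
}

fetchSongs(0);
```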

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >