Search Results

Search found 3489 results on 140 pages for 'clock rate'.

Page 16 of 140

  • Programmatically determining maximum transfer rate

    - by dauphic
    I have a problem that requires me to calculate the maximum upload and download bandwidth available, then limit my program's usage to a percentage of it. However, I can't think of a good way to find the maximums. At the moment, the only solution I can come up with is transferring a few megabytes between the client and server, then measuring how long the transfer took. This solution is very undesirable, however, because with 100,000 clients it could result in too large an increase in our server's bandwidth usage (which is already too high). Does anyone have any solutions to this problem?
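
    For reference, the timing side of that approach is simple to sketch; the hard part is the server-side cost the asker worries about. A minimal sketch (downloadSample() is a hypothetical placeholder for pulling a fixed-size payload from the server):

        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <thread>

        // Placeholder for the real transfer: here it just pretends a 2 MB payload
        // took 250 ms to arrive. Swap in the actual client/server download.
        std::size_t downloadSample() {
            std::this_thread::sleep_for(std::chrono::milliseconds(250));
            return 2 * 1024 * 1024;
        }

        // Rate = bytes transferred / elapsed wall-clock time, reported in Mbit/s.
        double measureDownloadMbps() {
            auto start = std::chrono::steady_clock::now();
            std::size_t bytes = downloadSample();
            double seconds = std::chrono::duration<double>(
                std::chrono::steady_clock::now() - start).count();
            return bytes * 8.0 / (seconds * 1e6);
        }

        int main() {
            std::printf("measured: %.2f Mbit/s\n", measureDownloadMbps());
        }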

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second, and the data gets uploaded bundled together in 1-minute JSON messages (containing 300 samples each). The data will be graphed using flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB and I'm currently storing the data in the 1-minute chunks that I receive it in. This works well at high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I've currently got is that every hour I can crawl through the data, collect together 300 samples for the last hour, and store them in an hour Domain (table, if you like). Does this sound like a reasonable solution? How have others implemented the same sort of systems?
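
    For scale, that hourly roll-up amounts to averaging the hour's 18,000 raw samples (5 Hz x 3,600 s) down to 300 stored values, i.e. one value per 60 raw samples. A minimal sketch of the bucket-averaging step (storage and retrieval are left out):

        #include <cstddef>
        #include <vector>

        // Downsample by averaging fixed-size buckets: with factor = 60, one hour of
        // 5 Hz data (18,000 samples) collapses to the 300 values kept per hour row.
        std::vector<double> bucketAverage(const std::vector<double>& samples,
                                          std::size_t factor) {
            std::vector<double> out;
            out.reserve(samples.size() / factor);
            for (std::size_t i = 0; i + factor <= samples.size(); i += factor) {
                double sum = 0.0;
                for (std::size_t j = 0; j < factor; ++j) sum += samples[i + j];
                out.push_back(sum / factor);
            }
            return out;
        }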

    Read the article

  • Twitter rate limit

    - by raulriera
    Hi, I am whitelisted on Twitter, and I have this "traffic heavy" application that just makes 2 requests to find out how many followers 2 people have.... the traffic is currently exhausting the 150-requests-per-hour limit. How do I authenticate my requests so that Twitter knows I am whitelisted? http://api.twitter.com/1/users/show.xml?screen_name=chavezcandanga http://api.twitter.com/1/users/show.xml?screen_name=luischataing I wish to authenticate those for this simple project http://250mil.com Thanks!

    Read the article

  • Calling a method at a specific interval rate in C++

    - by Aetius
    This is really annoying me, as I have done this before, about a year ago, and I cannot for the life of me remember what library it was. Basically, the problem is that I want to be able to call a method a certain number of times, or for a certain period of time, at a specified interval. One example: I would like to call a method "x", starting from now, 10 times, once every 0.5 seconds. Alternatively, call method "x", starting from now, 10 times, until 5 seconds have passed. I thought I used a Boost library for this functionality, but I can't seem to find it now, which is a bit annoying. Unfortunately I can't look at the code again as I'm not in possession of it any more; alternatively, I could have dreamt this all up and it could have been proprietary code. Assuming there is nothing out there that does what I would like, what is currently the best way of producing this behaviour? It would need to be high-resolution, up to a millisecond. It doesn't matter whether or not it blocks the thread it is executed from. Thanks!
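
    For what it's worth, the described behaviour (call "x" 10 times, once every 0.5 s, starting now) is straightforward with the standard library today; the half-remembered Boost library may have been Boost.Asio's deadline timers. A minimal blocking sketch using std::chrono, which on most platforms schedules to roughly millisecond resolution:

        #include <chrono>
        #include <cstdio>
        #include <functional>
        #include <thread>

        // Call fn `times` times, once every `interval`, starting immediately.
        // Blocks the calling thread; sleep_until keeps the schedule from drifting.
        void callAtInterval(const std::function<void()>& fn, int times,
                            std::chrono::milliseconds interval) {
            auto next = std::chrono::steady_clock::now();
            for (int i = 0; i < times; ++i) {
                fn();
                next += interval;
                std::this_thread::sleep_until(next);
            }
        }

        int main() {
            // Example: call "x" 10 times, once every 0.5 seconds.
            callAtInterval([] { std::printf("x called\n"); }, 10,
                           std::chrono::milliseconds(500));
        }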

    Read the article

  • Rate my C# code (~300 SLOC) using GDI+/BackgroundWorker

    - by sebastianlarsson
    Hi, I want to get some feedback on my code! Below is some background info. I am taking a pre-certification course in C# (Sweden, 15 ECTS). The focus of the course is theoretical, with only limited practical work. I don't find the assignments very hard at all, to tell you the truth, but since I only have very limited work experience as a developer (I have worked 15 h/week at Ericsson since November), I think I would benefit from having the certificate (70-536, and probably more). I am currently reading Martin Fowler's "Refactoring: Improving the Design of Existing Code" and I tried to apply his techniques to my latest lab in the course. I have been on the lookout for a website whose purpose is to provide feedback on code, but so far I have yet to discover any. Please take a look at my code and tell me what you think. It is only roughly 300 lines of code divided into a couple of classes. GDI+, BackgroundWorker and user controls are what the lab is about. I reckon you may only have to spend a couple of minutes looking at the solution. Link to solution: http://www.filefactory.com/file/b18h7d5/n/Lab4_Lab5_SebastianLarsson.zip Regards and thank you, Sebastian

    Read the article

  • Limiting the rate of emails using Python

    - by Ali
    I have a Python script which reads email addresses from a database for a particular date (for example, today) and sends out an email message to them one by one. It reads data from MySQL using the MySQLdb module, stores all results in a dictionary, and sends out emails using: rows = cursor.fetchall() # all email addresses that are supposed to go out on today's date; for row in rows: # send email. However, my hosting service only lets me send out 500 emails per hour. How can I make my script send only 500 emails in an hour, then check the database for whether any more emails are left for today, and send those in the next hour? The script is activated using a cron job.
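
    The pacing arithmetic is simple: 500 messages per hour is one message every 7.2 seconds (3600 s / 500), or one batch of 500 followed by a wait for the rest of the hour. A language-agnostic sketch of the spread-out variant, written here in C++ (sendEmail and the address list are hypothetical stand-ins for the script's MySQL and SMTP calls). Since the script is cron-driven, an alternative is to have each hourly run pull at most 500 unsent rows, send them, and mark them sent, letting the next run pick up the remainder.

        #include <chrono>
        #include <cstdio>
        #include <string>
        #include <thread>
        #include <vector>

        // Hypothetical stand-in for the real SMTP send.
        void sendEmail(const std::string& address) {
            std::printf("sent to %s\n", address.c_str());
        }

        // Spread at most `perHour` sends evenly across the hour:
        // 3600 s / 500 = 7.2 s between messages keeps the script under the cap.
        void sendPaced(const std::vector<std::string>& addresses, int perHour) {
            const std::chrono::duration<double> gap(3600.0 / perHour);
            for (const std::string& addr : addresses) {
                sendEmail(addr);
                std::this_thread::sleep_for(gap);
            }
        }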

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves out-of-the-box performance of polling and event-based applications, such as ticker applications, and even the Oracle RDBMS log writer. More on that in a moment.

    As a simple example, the poll() system call takes a timeout argument in units of msec:

        System Calls                                            poll(2)
        NAME
            poll - input/output multiplexing
        SYNOPSIS
            int poll(struct pollfd fds[], nfds_t nfds, int timeout);

    In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec.

    In specification-lawyer terms, the resolution of CLOCK_REALTIME, introduced by the POSIX.1b real-time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library calls whose man pages explicitly mention CLOCK_REALTIME, such as nanosleep(), are subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list:

        nanosleep, pthread_mutex_timedlock, pthread_mutex_reltimedlock_np,
        pthread_rwlock_timedrdlock, pthread_rwlock_reltimedrdlock_np,
        pthread_rwlock_timedwrlock, pthread_rwlock_reltimedwrlock_np,
        mq_timedreceive, mq_reltimedreceive_np, mq_timedsend, mq_reltimedsend_np,
        sem_timedwait, sem_reltimedwait_np, poll, select, pselect,
        _lwp_cond_timedwait, _lwp_cond_reltimedwait, semtimedop, sigtimedwait,
        aiowait, aio_waitn, aio_suspend, port_get, port_getn, cond_timedwait,
        cond_reltimedwait, setitimer (ITIMER_REAL), misc rpc calls, misc ldap calls

    This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back, when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread, which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse. Given enough CPUs posting enough timeouts, the clock thread could be a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from timeout resolution, and allowed us to improve the default timeout resolution without adding CPU overhead in the clock thread.

    Here are some exceptions for which the default resolution is still 10 msec. The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability); see for example dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick. The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec; it may also be changed using hires_tick, and these functions are only used by developers writing kernel modules. Finally, a few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec. They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick.

    Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.
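
    A quick way to see which resolution a given system actually provides is to time the poll(NULL,0,1) call from the example above and print clock_getres(CLOCK_REALTIME); a minimal sketch (link with -lrt on older releases):

        #include <poll.h>
        #include <stdio.h>
        #include <time.h>

        static double elapsed_ms(const timespec& a, const timespec& b) {
            return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
        }

        int main() {
            timespec res, t0, t1;
            clock_getres(CLOCK_REALTIME, &res);        // 1 msec on Solaris 11.1 per the post
            printf("CLOCK_REALTIME resolution: %ld.%09ld s\n",
                   (long)res.tv_sec, res.tv_nsec);

            clock_gettime(CLOCK_MONOTONIC, &t0);
            poll(NULL, 0, 1);                          // request a 1 msec timeout
            clock_gettime(CLOCK_MONOTONIC, &t1);
            // ~1 ms on Solaris 11.1, ~10 ms on earlier releases, per the post above.
            printf("poll(NULL,0,1) took %.3f ms\n", elapsed_ms(t0, t1));
            return 0;
        }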

    Read the article

  • How do you rate-limit access to your API using iptables?

    - by Cory
    How can you rate-limit access to an API using iptables? I tried setting a limit on port 80, but I don't want to limit web access entirely. Is there a way to specify a subdomain rather than a port? For example: rate-limit api.example.com but not example.com? If there is no way to rate-limit by subdomain, what limit is suggested for port 80 without risking blocking a legitimate web user? Would one connection per second be enough?

    Read the article

  • Is it safe to set FPS rate to a constant?

    - by Ozan
    I learned in my game class that, in the update function, every movement must be time-dependent so that motion stays linear. We made a simple game. Every move, like going left, going right, or jumping, is written to be time-dependent. But on some other computers our game behaves very differently. For example, our character jumps higher than it should. I guess this is because each computer reaches a different FPS depending on its specification. My question is: what should we do to make this game work the same way on every computer? Is setting the FPS to a constant a solution?
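
    The usual fix is to scale every per-frame change by the measured frame time (delta time) instead of assuming a fixed FPS, so the same speed in units per second results on any machine. A minimal sketch (the Player fields and the loop bound are placeholders):

        #include <chrono>

        // Frame-rate-independent update: scale movement by the elapsed time (dt)
        // measured each frame, so speed is expressed in units per second.
        struct Player { double x = 0.0; double speedUnitsPerSec = 120.0; };

        void update(Player& p, double dtSeconds) {
            p.x += p.speedUnitsPerSec * dtSeconds;   // distance = speed * time
        }

        int main() {
            using clock = std::chrono::steady_clock;
            Player player;
            auto previous = clock::now();
            for (int frame = 0; frame < 600; ++frame) {   // stand-in for the game loop
                auto now = clock::now();
                double dt = std::chrono::duration<double>(now - previous).count();
                previous = now;
                update(player, dt);
                // render(player); // drawing omitted in this sketch
            }
        }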

    Read the article

  • Calculating CPU frequency in C with RDTSC always returns 0

    - by Nazgulled
    Hi, The following piece of code was given to us from our instructor so we could measure some algorithms performance: #include <stdio.h> #include <unistd.h> static unsigned cyc_hi = 0, cyc_lo = 0; static void access_counter(unsigned *hi, unsigned *lo) { asm("rdtsc; movl %%edx,%0; movl %%eax,%1" : "=r" (*hi), "=r" (*lo) : /* No input */ : "%edx", "%eax"); } void start_counter() { access_counter(&cyc_hi, &cyc_lo); } double get_counter() { unsigned ncyc_hi, ncyc_lo, hi, lo, borrow; double result; access_counter(&ncyc_hi, &ncyc_lo); lo = ncyc_lo - cyc_lo; borrow = lo > ncyc_lo; hi = ncyc_hi - cyc_hi - borrow; result = (double) hi * (1 << 30) * 4 + lo; return result; } However, I need this code to be portable to machines with different CPU frequencies. For that, I'm trying to calculate the CPU frequency of the machine where the code is being run like this: int main(void) { double c1, c2; start_counter(); c1 = get_counter(); sleep(1); c2 = get_counter(); printf("CPU Frequency: %.1f MHz\n", (c2-c1)/1E6); printf("CPU Frequency: %.1f GHz\n", (c2-c1)/1E9); return 0; } The problem is that the result is always 0 and I can't understand why. I'm running Linux (Arch) as guest on VMware. On a friend's machine (MacBook) it is working to some extent; I mean, the result is bigger than 0 but it's variable because the CPU frequency is not fixed (we tried to fix it but for some reason we are not able to do it). He has a different machine which is running Linux (Ubuntu) as host and it also reports 0. This rules out the problem being on the virtual machine, which I thought it was the issue at first. Any ideas why this is happening and how can I fix it?

    Read the article

  • Verilog Posedge Problem

    - by Cenoc
    I'm having the following problem: I have a clk at 50 MHz, but I want something else to run on the posedge of a separate signal whenever it is ready. For some reason it always runs at the 50 MHz rate, although I explicitly write otherwise. Do you have any suggestions?

    Read the article

  • NSTimer to fire while device is locked

    - by edie
    Hi, I'm currently creating an alarm. I use NSTimer to schedule my alarms. My problem is that when the device is put into locked mode, my NSTimer doesn't fire. I think the NSTimer will not fire because my app goes into the suspended state when the device is locked. Can you help me find a solution to my problem? I've found some topics about UIBackgroundModes, but I don't know how that would help me. Thanks. The problem with UILocalNotification is that when the device is on silent, the sound will not be heard. My implementation uses NSTimer to fire an alarm when the app is in the foreground, or when the device is locked but the app is still running. When applicationDidEnterBackground: is called, I schedule the UILocalNotification as the alarm.

    Read the article

  • C#: how to obtain the current clock speed of an Intel i-series CPU when TurboBoost is activated

    - by shifuimam
    I know that it's possible to get this information - Intel's own TurboBoost sidebar gadget appears to use an ActiveX control to determine the current clock speed of an i3/i5/i7 CPU when TurboBoost is active. However, I want to do this programmatically in C# - obtaining the CurrentClockSpeed value from WMI tops out at the rated maximum clock speed of the CPU, so in TurboBoost mode it doesn't report the actual current clock speed.

    Read the article

  • Using client time to calculate timezone

    - by Mike TK
    Hi folks, instead of asking for the client's timezone in the registration form (to correctly format datetimes; all server dates are in UTC), I thought about fetching the time from the client's computer and calculating the time offset between the client and the server. Has anyone tried this? How often do clients have something insane on their system clocks? Cheers!
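
    A minimal sketch of the offset calculation, assuming the client posts its local wall-clock time as Unix-epoch seconds; rounding to 15-minute steps absorbs small clock skew, but a badly wrong client clock still gives a wrong answer:

        #include <chrono>
        #include <cmath>
        #include <cstdio>

        // Estimate the client's UTC offset from the difference between the local
        // time it reported and the server's UTC time, rounded to 15-minute steps.
        int estimateOffsetMinutes(long long clientLocalEpoch, long long serverUtcEpoch) {
            double diffMinutes = (clientLocalEpoch - serverUtcEpoch) / 60.0;
            return static_cast<int>(std::lround(diffMinutes / 15.0)) * 15;
        }

        int main() {
            using namespace std::chrono;
            long long serverNow =
                duration_cast<seconds>(system_clock::now().time_since_epoch()).count();
            // Pretend the client is at UTC+2 with 17 seconds of clock skew.
            long long clientReported = serverNow + 2 * 3600 + 17;
            std::printf("estimated offset: %+d minutes\n",
                        estimateOffsetMinutes(clientReported, serverNow));
        }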

    Read the article

  • C++ Program performs better when piped

    - by ET1 Nerd
    I haven't done any programming in a decade. I wanted to get back into it, so I made this little pointless program as practice. The easiest way to describe what it does is with output of my --help codeblock: ./prng_bench --help ./prng_bench: usage: ./prng_bench $N $B [$T] This program will generate an N digit base(B) random number until all N digits are the same. Once a repeating N digit base(B) number is found, the following statistics are displayed: -Decimal value of all N digits. -Time & number of tries taken to randomly find. Optionally, this process is repeated T times. When running multiple repititions, averages for all N digit base(B) numbers are displayed at the end, as well as total time and total tries. My "problem" is that when the problem is "easy", say a 3 digit base 10 number, and I have it do a large number of passes the "total time" is less when piped to grep. ie: command ; command |grep took : ./prng_bench 3 10 999999 ; ./prng_bench 3 10 999999|grep took .... Pass# 999999: All 3 base(10) digits = 3 base(10). Time: 0.00005 secs. Tries: 23 It took 191.86701 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers. An average of 0.00019 secs & 99 tries was needed to find each one. It took 159.32355 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers. If I run the same command many times w/o grep time is always VERY close. I'm using srand(1234) for now, to test. The code between my calls to clock_gettime() for start and stop do not involve any stream manipulation, which would obviously affect time. I realize this is an exercise in futility, but I'd like to know why it behaves this way. Below is heart of the program. Here's a link to the full source on DB if anybody wants to compile and test. https://www.dropbox.com/s/6olqnnjf3unkm2m/prng_bench.cpp clock_gettime() requires -lrt. for (int pass_num=1; pass_num<=passes; pass_num++) { //Executes $passes # of times. clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time); //get time start_time = timetodouble(temp_time); //convert time to double, store as start_time for(i=1, tries=0; i!=0; tries++) { //loops until 'comparison for' fully completes. counts reps as 'tries'. <------------ for (i=0; i<Ndigits; i++) //Move forward through array. | results[i]=(rand()%base); //assign random num of base to element (digit). | /*for (i=0; i<Ndigits; i++) //---Debug Lines--------------- | std::cout<<" "<<results[i]; //---a LOT of output.---------- | std::cout << "\n"; //---Comment/decoment to disable/enable.*/ // | for (i=Ndigits-1; i>0 && results[i]==results[0]; i--); //Move through array, != element breaks & i!=0, new digits drawn. -| } //If all are equal i will be 0, nested for condition satisfied. -| clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time); //get time draw_time = (timetodouble(temp_time) - start_time); //convert time to dbl, subtract start_time, set draw_time to diff. total_time += draw_time; //add time for this pass to total. total_tries += tries; //add tries for this pass to total. /*Formated output for each pass: Pass# ---: All -- base(--) digits = -- base(10) Time: ----.---- secs. Tries: ----- (LINE) */ std::cout<<"Pass# "<<std::setw(width_pass)<<pass_num<<": All "<<Ndigits<<" base("<<base<<") digits = " <<std::setw(width_base)<<results[0]<<" base(10). Time: "<<std::setw(width_time)<<draw_time <<" secs. Tries: "<<tries<<"\n"; } if(passes==1) return 0; //No need for totals and averages of 1 pass. /* It took ----.---- secs & ------ tries to find --- repeating -- digit base(--) numbers. 
(LINE) An average of ---.---- secs & ---- tries was needed to find each one. (LINE)(LINE) */ std::cout<<"It took "<<total_time<<" secs & "<<total_tries<<" tries to find " <<passes<<" repeating "<<Ndigits<<" digit base("<<base<<") numbers.\n" <<"An average of "<<total_time/passes<<" secs & "<<total_tries/passes <<" tries was needed to find each one. \n\n"; return 0;

    Read the article

  • Code refactoring homework?

    - by Hira
    This is the code that I have to refactor for my homework: if (state == TEXAS) { rate = TX_RATE; amt = base * TX_RATE; calc = 2 * basis(amt) + extra(amt) * 1.05; } else if ((state == OHIO) || (state == MAINE)) { rate = (state == OHIO) ? OH_RATE : MN_RATE; amt = base * rate; calc = 2 * basis(amt) + extra(amt) * 1.05; if (state == OHIO) points = 2; } else { rate = 1; amt = base; calc = 2 * basis(amt) + extra(amt) * 1.05; } I have done something like this: if (state == TEXAS) { rate = TX_RATE; calculation(rate); } else if ((state == OHIO) || (state == MAINE)) { rate = (state == OHIO) ? OH_RATE : MN_RATE; calculation(rate); if (state == OHIO) points = 2; } else { rate = 1; calculation(rate); } function calculation(rate) { amt = base * rate; calc = 2 * basis(amt) + extra(amt) * 1.05; } How could I have done better? Edit: I have made one code edit: amt = base * rate;
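
    One way to push the duplication out further is a single rate lookup plus one shared computation; a sketch under the assignment's assumptions (the state constants, rate values, and basis/extra bodies below are hypothetical placeholders, and the globals mirror the original snippet):

        // Hypothetical placeholders standing in for the assignment's definitions.
        enum State { TEXAS, OHIO, MAINE, OTHER };
        const double TX_RATE = 1.2, OH_RATE = 1.1, MN_RATE = 1.05;
        double basis(double amt) { return amt; }        // placeholder implementation
        double extra(double amt) { return amt * 0.1; }  // placeholder implementation

        double base = 100.0, amt = 0.0, calc = 0.0, rate = 1.0;  // globals, as in the original
        int points = 0;

        double rateFor(State state) {                   // the only place rates are chosen
            switch (state) {
                case TEXAS: return TX_RATE;
                case OHIO:  return OH_RATE;
                case MAINE: return MN_RATE;
                default:    return 1.0;
            }
        }

        void apply(State state) {
            rate = rateFor(state);
            amt  = base * rate;
            calc = 2 * basis(amt) + extra(amt) * 1.05;  // the duplicated formula, once
            if (state == OHIO) points = 2;              // the one state-specific side effect
        }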

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground, in Full Screen mode, any applications on my second monitor (such as youtube videos, videos, not app specific) drop their frame-rate to about 2-3 FPS. It seems like some sort of power management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280FPS when the framerate is uncapped. If i cap it at 60FPS using the in-game option, it has no affect on the performance of the background app. Summary Operating System Windows 8 Pro 64-bit CPU Intel Core i7 3820 @ 3.60GHz 42 °C Sandy Bridge-E 32nm Technology RAM 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20) Motherboard Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0) 37 °C Graphics DELL U2713HM (2560x1440@59Hz) DELL U2713HM (2560x1440@59Hz) 1280MB NVIDIA GeForce GTX 570 (Gigabyte) 58 °C Hard Drives 212GB Volume0 (RAID) 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 36 °C 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA) 34 °C Optical Drives No optical disk drives detected Audio ASUS Xonar Essence STX Audio Device Operating System Windows 8 Pro 64-bit Computer type: Desktop Graphics Monitor 1 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Secondary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY4\Monitor0 Monitor 2 Name DELL U2713HM on NVIDIA GeForce GTX 570 Current Resolution 2560x1440 pixels Work Resolution 2560x1400 pixels State Enabled, Output devices support Multiple displays Extended, Primary, Enabled Monitor Width 2560 Monitor Height 1440 Monitor BPP 32 bits per pixel Monitor Frequency 59 Hz Device \\.\DISPLAY5\Monitor0 NVIDIA GeForce GTX 570 Manufacturer NVIDIA Model GeForce GTX 570 GPU GF110 Device ID 10DE-1086 Revision A2 Subvendor Gigabyte (1458) Series GeForce GTX 500 Current Performance Level Level 3 Current GPU Clock 845 MHz Current Memory Clock 1900 MHz Current Shader Clock 1690 MHz Voltage 0.988 V Technology 40 nm Die Size 520 mm² Release Date Dec 07, 2010 DirectX Support 11.0 OpenGL Support 5.0 Bus Interface PCI Express x16 Temperature 57 °C Driver version 9.18.13.2018 BIOS Version 70.10.55.00.01 ROPs 40 Shaders 512 unified Memory Type GDDR5 Memory 1280 MB Bus Width 64x5 (320 bit) Filtering Modes 16x Anisotropic Noise Level Moderate Max Power Draw 219 Watts Count of performance levels : 3 Level 1 - "Default" GPU Clock 50 MHz Memory Clock 135 MHz Shader Clock 101 MHz Level 2 - "2D Desktop" GPU Clock 405 MHz Memory Clock 324 MHz Shader Clock 810 MHz Level 3 - "3D Applications" GPU Clock 845 MHz Memory Clock 1900 MHz Shader Clock 1690 MHz Things I've tried: 1) Updating the graphics driver 2) Setting windows power mode to High Performance 3) Reset Nvidia Global Performance settings to default

    Read the article

  • How do I get the Windows 8 Desktop to stop refreshing itself while I'm working?

    - by Nessa Morris
    I have an Asus touchscreen laptop with Windows 8, not RT. The best way that I can describe the problem is: when I am working on something in the desktop, the desktop/screen refreshes itself. It doesn't matter if I am using an IE window, or Word, etc. Basically, while I'm viewing the desktop, the icons disappear for a second or two and then come back. If I'm typing in Word, the screen essentially pauses and just stops typing. It won't start typing again until I touch the screen or click on something. In IE, the screen acts pretty similar, if I happen to be typing a URL, or in a form, etc. Why does it do this? And how can I make it stop? Thanks so much for any help you can give me, and please let me know if I can provide any other info that you think may be helpful.

    Read the article

  • 59 vs 60 hertz on external display via DisplayPort

    - by Shiki
    I don't know if this is too specific a question, but hopefully you can help me out. I just bought a DeLock DisplayPort cable for my Lenovo T500. I tried to use it with the built-in Intel MHD4500, with no luck. With the ATI HD3650 (you can switch between the two graphics adapters on this machine) I get a normal picture, but only at 59 Hz. At 60 Hz I get a smaller desktop size (strange). Any idea what can cause this? Will I have any problem if I use 59 Hz? (I was using the display with a VGA cable, but I get a blurry picture with that, no matter what device I attach; VGA is like this on this display, and that's why I wanted to change.)

    Read the article

  • Latitude E6410 - Screen flickering on Windows 8

    - by Martin
    On Windows 8, my Latitude E6410 screen (1440x900, 60 Hz, NVIDIA NVS 3100M) is flickering. If I turn off the laptop for 30 minutes and turn it back on, there's no flickering, but it eventually starts again after 10-15 minutes. The flickering seems more noticeable at the top and bottom of the screen. I have already updated the graphics card drivers, BIOS, etc., and nothing seems to fix the issue. As evidence that it isn't a hardware issue: when I reinstalled Windows 7, I couldn't reproduce it. Any idea what could cause this?

    Read the article

  • Per connection bandwidth limit

    - by Kyr
    Apparently, our server box running Windows Server 2008 R2 has a per-connection bandwidth limit of 0.2 MB/s. Meaning, while one TCP connection can pull at most 0.2 MB/s, 60 parallel connections can pull 12 MB/s. We first noticed this when trying to check out a large SVN repository from this server. I used a simple Java application to test this, transferring data from the server to a workstation using a variable number of threads (one connection per thread). The server part of the application simply writes a 1 MB memory buffer to the socket 100 times, so there is no disk involvement. Each connection topped out at 0.2 MB/s; the per-connection limit was the same for a single connection as for 60 parallel connections. The problem is that I have no idea where this limit comes from. I have very little experience administering Windows Server, so I was mostly trying to find something by googling. I have checked the following: Local Computer Policy > QoS Packet Scheduler > Limit reservable bandwidth: it's Not Configured; Group Policy Management Console: we have two GPOs, but neither has any Policy-based QoS defined. There isn't any bandwidth limiter program installed, as far as I can tell, and we're using the standard Windows Firewall. I can update this question with any additional information if needed.

    Read the article
