Search Results

Search found 1157 results on 47 pages for 'latency'.

Page 2 of 47

  • Terminal server performance over high latency links

    - by holz
    Our datacenter and head office are currently in Brisbane, Australia, and we have a branch office in the UK. We have a private WAN with a 768k link to the UK office, and the latency is about 350 ms. Terminal server performance over this link is really bad. Applications without much animation or imagery seem to be okay, but as soon as they have either, the session is almost unusable; PowerPoint and Internet Explorer are good examples of apps that run slowly. And if there is an image in your email signature, Outlook will hang for about 10 seconds each time a new line is inserted, while the image gets moved down a few pixels. We are currently running Server 2003. I have tried Server 2008 R2 RDS, and also a third-party solution called Blaze from a company called Ericom, but neither is much better. We currently have a five-level dynamic class of service with priority in the following order: VoIP, video, terminal services, printing, everything else. When testing terminal server performance we monitored the link using NetFlow and had plenty of bandwidth available, so I believe it is a latency issue rather than a bandwidth one. Is there anything that can be done to improve performance? Would Citrix help at all?

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22 kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things to solve this, gleaned from a few websites, e.g.:
    - making the first part of file names unique to aid 8.3 name generation
    - writing files to multiple directories
    - changing Intel Disk Write Caching
    - the Windows File/Printer Sharing options (Minimize memory used, Balance, Maximize data throughput for file sharing, Maximize data throughput for network applications)
    - the System > Advanced > Performance > Advanced settings
    - NtfsDisableLastAccessUpdate, via "fsutil behavior set disablelastaccess 1"
    - disabling 8.3 name generation, via "fsutil behavior set disable8dot3 1" plus a restart
    - enabling a large file system cache
    - disabling paging of the kernel code (IO Page Lock Limit)
    - turning the Indexing Service off (and on)
    But nothing seems to make much difference. There is a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not). We can reproduce the problem using IOMeter and a simple setup:
    1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
    2. Select the worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and enter 2000000 in Maximum Disk Size (note: you must have at least 1 GB of free space; the sector size is 512 bytes).
    3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22 kB, 100% sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% write.
    4. Change the Results Display Update Frequency to 5 seconds, the Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
    5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
    6. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine.
    Our machine isn't exactly slow (a Xeon 8-core rack server with 4 GB of RAM and onboard RAID), so I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see the problem on a RAM disk and it is not as severe on a RAID10 array. Any help is much appreciated. Richard (ProcMon result linked)
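
    As a rough illustration of the "own application logging" approach, the sketch below (Python, with a hypothetical path; the 22 kB block size is the one from the question) times each write-plus-flush and flags any spike:

      import os, time

      PATH = r"C:\temp\latency_test.dat"              # hypothetical test file
      BLOCK = b"\0" * (22 * 1024)                     # 22 kB, as in the question
      flags = os.O_WRONLY | os.O_CREAT | getattr(os, "O_BINARY", 0)

      fd = os.open(PATH, flags)
      try:
          for i in range(1000):
              start = time.perf_counter()
              os.write(fd, BLOCK)
              os.fsync(fd)                            # push through the write cache
              elapsed = time.perf_counter() - start
              if elapsed > 0.5:                       # flag anything over 500 ms
                  print(f"write {i}: {elapsed:.3f} s  <-- spike")
      finally:
          os.close(fd)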

    Read the article

  • Internet latency map

    - by David
    I would like to see a latency map showing the lowest latencies achieved between various destinations around the world. What, for example, are the lowest latencies achieved between Denmark and India? This could be used, for instance, when planning where to place a server farm for online games.
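
    One way to gather the raw data for such a map is to run probes in each region of interest and ping between them. A minimal sketch, assuming Linux ping output and placeholder probe hostnames:

      import re, subprocess

      PROBES = {"Denmark": "probe-dk.example.com",    # hypothetical endpoints
                "India": "probe-in.example.com"}

      def avg_rtt_ms(host, count=5):
          # Linux ping prints lines like "time=123 ms"
          out = subprocess.run(["ping", "-c", str(count), host],
                               capture_output=True, text=True).stdout
          times = [float(t) for t in re.findall(r"time=([\d.]+) ms", out)]
          return sum(times) / len(times) if times else None

      for region, host in PROBES.items():
          print(f"{region}: {avg_rtt_ms(host)} ms")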

    Read the article

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast (Corvallis, OR) to the east coast (NY, NY). However, I am seeing some disturbing latency numbers from my location (Berkeley, CA) to the NYC host. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes. Berkeley to NYC server: 215 ms latency, 46 ms transfer time, 261 ms total. Berkeley to Corvallis server: 114 ms latency, 41 ms transfer time, 155 ms total. Some URLs if you want to try it yourself: http://careers.stackoverflow.com/content/cso/img/logo.png (NY, NY) and http://serverfault.com/cache/logo.png (Corvallis, OR). It makes sense that Corvallis, OR is geographically closer to Berkeley, CA, so I expect the connection to be a bit faster, but I'm seeing an increase in latency of +100 ms when I perform the same test against the NYC server. That seems excessive to me, particularly since the time spent transferring the actual data only went up about 10%, while the latency nearly doubled. That feels wrong to me. I found a few links here that were helpful (through Google, no less): http://serverfault.com/questions/63531/does-routing-distance-affect-performance-significantly , http://serverfault.com/questions/61719/how-does-geography-affect-network-latency , and http://serverfault.com/questions/6210/latency-in-internet-connections-from-europe-to-usa , but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
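
    The latency/transfer split that Chrome's dev tools report can also be reproduced from a script, which makes the test easy to repeat from several locations. A standard-library sketch using one of the URLs from the question (time to the response headers approximates the latency figure; reading the body is the transfer time):

      import http.client, time

      conn = http.client.HTTPConnection("serverfault.com")
      t0 = time.perf_counter()
      conn.request("GET", "/cache/logo.png")
      resp = conn.getresponse()      # returns once the response headers arrive
      t1 = time.perf_counter()
      body = resp.read()             # pulls the remaining bytes
      t2 = time.perf_counter()
      print(f"latency {1000 * (t1 - t0):.0f} ms, "
            f"transfer {1000 * (t2 - t1):.0f} ms, {len(body)} bytes")
      conn.close()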

    Read the article

  • Latency, Ping and Other Questions

    - by Paulo Cassiano
    In a high-traffic application, like an online auction system, a few ms can determine whether you win or lose the 'battle'. I'm from Brazil. Here, when I ping local sites like UOL I get replies in ~11 ms; when I ping US sites like Rackspace I get replies in ~130 ms. The point is: I need very good infrastructure (like Rackspace [1]) to host my killer online auction application, but there are no Rackspace-like options in Brazil. Assuming all users are located here in Brazil, is hosting the application in Brazil a sine qua non condition? I think ~130 ms is a very high latency, but will every user really see that reply time? So, where should I host my application? [1] Feel free to point me to any other very good hosting option; I've cited Rackspace only because they are the guys I know.

    Read the article

  • High latency on a 10-system LAN with an ADSL modem as gateway

    - by itsoft3g
    I recently expanded the LAN in my office from 3 to 10 computers. The structure is a star topology: one ADSL modem connected to one switch, which in turn connects to the 10 computers; a Netgear Wi-Fi device also hangs off the switch. The ADSL modem acts as the DHCP server, and all systems use the modem's IP as their default gateway. Network latency has now become very high: chat services such as Google Talk and Skype disconnect often, and the internet becomes very slow when all the computers are turned on. We have a 4 Mbps download / 100 Kbps upload connection. It looks like the ADSL modem cannot handle all the connections. I tried to set up a system as the default gateway, which would then connect to the modem, but I am not sure how to do this. Please advise.

    Read the article

  • Real RAM latency

    - by user32569
    Hi, a very quick question. When I look up RAM timings, I find two different explanations of what CAS latency is. The first states that it is the time between the read command being issued by the CPU and the data being placed on the data bus. The second says it is the time from when a column in the memory layout is activated. So which is true? I mean, if I want to know the total RAM latency: in case 1 it would just be CAS multiplied by the clock period, while in case 2 it would be CAS plus other delays, such as RAS-to-CAS, all multiplied by the clock period. Thanks.
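
    As a worked example of how much the two readings differ, assume DDR3-1600 with 9-9-9 timings (illustrative numbers, not a specific module):

      clock_mhz = 800                  # DDR3-1600: 800 MHz I/O clock
      period_ns = 1000 / clock_mhz     # 1.25 ns per clock
      cas = 9                          # column access, row already open
      trcd = 9                         # RAS-to-CAS delay, row must be opened first

      print(cas * period_ns)           # open row:   9 * 1.25 = 11.25 ns
      print((trcd + cas) * period_ns)  # closed row: 18 * 1.25 = 22.50 ns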

    Read the article

  • Does low latency code sometimes have to be "ugly"?

    - by user997112
    (This is mainly aimed at those with specific knowledge of low-latency systems, to avoid people answering with unsubstantiated opinions.) Do you feel there is a trade-off between writing "nice" object-oriented code and writing very fast, low-latency code? For instance, avoiding virtual functions in C++ and the overhead of polymorphism, or rewriting code so that it looks nasty but is very fast? It stands to reason: who cares if it looks ugly, so long as it's maintainable, if you need speed? I would be interested to hear from people who have worked in such areas.

    Read the article

  • Low latency programming

    - by Sambatyon
    I've been reading a lot about low-latency financial systems (especially since the famous case of corporate espionage), and the idea of low-latency systems has been on my mind ever since. There are a million applications for what these guys are doing, so I would like to learn more. The thing is, I cannot find anything valuable on the topic. Can anybody recommend books, sites, or examples of low-latency systems?

    Read the article

  • Low Latency Serial Communications In .Net

    - by bvillersjr
    I have been researching various third-party libraries and approaches to low-latency serial communications in .Net. I've read enough that I have now come full circle and know as little as I did when I started, due to the variety of conflicting opinions. For example, the functionality in the Framework was ruled out by some convincing articles stating that the Microsoft-provided solution has not been stable across framework versions and is lacking in functionality. I have found articles bashing many of the older COM-based libraries, articles bashing the idea of a low-latency .Net app as a whole because of garbage collection, and articles demonstrating how P/Invoking Windows API functionality for low-latency communication is unacceptable. THIS RULES OUT JUST ABOUT ANY APPROACH I CAN THINK OF! I would really appreciate some words from those with been-there/done-that experience. Ideally, I could locate a solid library/partner and not have to build the communications library myself. I have the following simple objectives:
    - sustained low-latency serial communication in C# / VB.Net, 32/64-bit
    - well documented (if the solution is third-party)
    - relatively unimpacted (communication- and latency-wise) by garbage collection
    - flexible (I have no idea what I will have to interface with in the future!)
    The only firm requirement is that I need to interface with many different industrial devices, such as RS485-based linear actuators, serial/microcontroller-based gauges, and Modbus (also RS485) devices. Any comments, ideas, thoughts, or links to articles that may iron out my confusion are much appreciated!

    Read the article

  • Latency on mobile networks (Android)

    - by Meep3D
    I am planning to give mobile phone development a shot and was thinking about making some simple multiplayer games. I know latency over local Wi-Fi is probably fine, but what are the latency issues over GPRS/3G? I've searched, and the best I've found is someone saying it was 'high', without presenting any concrete numbers. I suppose latency fluctuation (jitter) is important as well; does anyone have any info on this?

    Read the article

  • Troubleshooting latency spikes on ESXi NFS datastores

    - by exo_cw
    I'm experiencing fsync latencies of around five seconds on NFS datastores in ESXi, triggered by certain VMs. I suspect this might be caused by VMs using NCQ/TCQ, as it does not happen with virtual IDE drives. It can be reproduced using fsync-tester (by Ted Ts'o) and ioping. For example, using a Grml live system (Linux 2.6.33-grml64) with an 8 GB disk:

      root@dynip211 /mnt/sda # ./fsync-tester
      fsync time: 5.0391
      fsync time: 5.0438
      fsync time: 5.0300
      fsync time: 0.0231
      fsync time: 0.0243
      fsync time: 5.0382
      fsync time: 5.0400
      [... goes on like this ...]

    That is 5 seconds, not milliseconds. This even creates I/O latencies on a different VM running on the same host and datastore:

      root@grml /mnt/sda/ioping-0.5 # ./ioping -i 0.3 -p 20 .
      4096 bytes from . (reiserfs /dev/sda): request=1 time=7.2 ms
      4096 bytes from . (reiserfs /dev/sda): request=2 time=0.9 ms
      4096 bytes from . (reiserfs /dev/sda): request=3 time=0.9 ms
      4096 bytes from . (reiserfs /dev/sda): request=4 time=0.9 ms
      4096 bytes from . (reiserfs /dev/sda): request=5 time=4809.0 ms
      4096 bytes from . (reiserfs /dev/sda): request=6 time=1.0 ms
      4096 bytes from . (reiserfs /dev/sda): request=7 time=1.2 ms
      4096 bytes from . (reiserfs /dev/sda): request=8 time=1.1 ms
      4096 bytes from . (reiserfs /dev/sda): request=9 time=1.3 ms
      4096 bytes from . (reiserfs /dev/sda): request=10 time=1.2 ms
      4096 bytes from . (reiserfs /dev/sda): request=11 time=1.0 ms
      4096 bytes from . (reiserfs /dev/sda): request=12 time=4950.0 ms

    When I move the first VM to local storage it looks perfectly normal:

      root@dynip211 /mnt/sda # ./fsync-tester
      fsync time: 0.0191
      fsync time: 0.0201
      fsync time: 0.0203
      fsync time: 0.0206
      fsync time: 0.0192
      fsync time: 0.0231
      fsync time: 0.0201
      [... tried that for one hour: no spike ...]

    Things I've tried that made no difference:
    - several ESXi builds: 381591, 348481, 260247
    - different hardware, different Intel and AMD boxes
    - different NFS servers, all showing the same behavior: OpenIndiana b147 (ZFS sync always or disabled: no difference), OpenIndiana b148 (ZFS sync always or disabled: no difference), Linux 2.6.32 (sync or async: no difference)
    - putting the NFS server on the same machine (as a virtual storage appliance) or on a different host: no difference
    Guest OSes tested that show the problem: Windows 7 64-bit (using CrystalDiskMark; latency spikes happen mostly during the preparing phase), Linux 2.6.32 (fsync-tester + ioping), and Linux 2.6.38 (fsync-tester + ioping). I could not reproduce the problem on Linux 2.6.18 VMs. Another workaround is to use virtual IDE disks (instead of SCSI/SAS), but that limits performance and the number of drives per VM.

    Update 2011-06-30: The latency spikes seem to happen more often if the application writes multiple small blocks before fsync. For example, fsync-tester does this (strace output):

      pwrite(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 1048576, 0) = 1048576
      fsync(3) = 0

    while ioping does this when preparing the file:

      [lots of pwrites]
      pwrite(3, "********************************"..., 4096, 1036288) = 4096
      pwrite(3, "********************************"..., 4096, 1040384) = 4096
      pwrite(3, "********************************"..., 4096, 1044480) = 4096
      fsync(3) = 0

    The setup phase of ioping almost always hangs, while fsync-tester sometimes works fine. Is someone capable of updating fsync-tester to write multiple small blocks? My C skills suck ;)

    Update 2011-07-02: This problem does not occur with iSCSI (tried with the OpenIndiana COMSTAR iSCSI server). But iSCSI does not give you easy access to the VMDK files, so you cannot move them between hosts with snapshots and rsync.

    Update 2011-07-06: This is part of a wireshark capture, taken by a third VM on the same vSwitch. Everything happens on the same host; no physical network is involved. I started ioping around time 20; no packets were sent until the five-second delay was over:

      No.  Time      Source          Destination     Protocol Info
      1082 16.164096 192.168.250.10  192.168.250.20  NFS V3 WRITE Call (Reply In 1085), FH:0x3eb56466 Offset:0 Len:84 FILE_SYNC
      1083 16.164112 192.168.250.10  192.168.250.20  NFS V3 WRITE Call (Reply In 1086), FH:0x3eb56f66 Offset:0 Len:84 FILE_SYNC
      1084 16.166060 192.168.250.20  192.168.250.10  TCP nfs > iclcnet-locate [ACK] Seq=445 Ack=1057 Win=32806 Len=0 TSV=432016 TSER=769110
      1085 16.167678 192.168.250.20  192.168.250.10  NFS V3 WRITE Reply (Call In 1082) Len:84 FILE_SYNC
      1086 16.168280 192.168.250.20  192.168.250.10  NFS V3 WRITE Reply (Call In 1083) Len:84 FILE_SYNC
      1087 16.168417 192.168.250.10  192.168.250.20  TCP iclcnet-locate > nfs [ACK] Seq=1057 Ack=773 Win=4163 Len=0 TSV=769110 TSER=432016
      1088 23.163028 192.168.250.10  192.168.250.20  NFS V3 GETATTR Call (Reply In 1089), FH:0x0bb04963
      1089 23.164541 192.168.250.20  192.168.250.10  NFS V3 GETATTR Reply (Call In 1088) Directory mode:0777 uid:0 gid:0
      1090 23.274252 192.168.250.10  192.168.250.20  TCP iclcnet-locate > nfs [ACK] Seq=1185 Ack=889 Win=4163 Len=0 TSV=769821 TSER=432716
      1091 24.924188 192.168.250.10  192.168.250.20  RPC Continuation
      1092 24.924210 192.168.250.10  192.168.250.20  RPC Continuation
      1093 24.924216 192.168.250.10  192.168.250.20  RPC Continuation
      1094 24.924225 192.168.250.10  192.168.250.20  RPC Continuation
      1095 24.924555 192.168.250.20  192.168.250.10  TCP nfs > iclcnet_svinfo [ACK] Seq=6893 Ack=1118613 Win=32625 Len=0 TSV=432892 TSER=769986
      1096 24.924626 192.168.250.10  192.168.250.20  RPC Continuation
      1097 24.924635 192.168.250.10  192.168.250.20  RPC Continuation
      1098 24.924643 192.168.250.10  192.168.250.20  RPC Continuation
      1099 24.924649 192.168.250.10  192.168.250.20  RPC Continuation
      1100 24.924653 192.168.250.10  192.168.250.20  RPC Continuation

    2nd update 2011-07-06: There seems to be some influence from TCP window sizes. I was not able to reproduce the problem using FreeNAS (based on FreeBSD) as the NFS server. The wireshark captures showed TCP window updates to 29127 bytes at regular intervals; I did not see them with OpenIndiana, which uses larger window sizes by default. I can no longer reproduce the problem if I set the following options in OpenIndiana and restart the NFS server:

      ndd -set /dev/tcp tcp_recv_hiwat 8192 # default is 128000
      ndd -set /dev/tcp tcp_max_buf 1048575 # default is 1048576

    But this kills performance: writing from /dev/zero to a file with dd_rescue drops from 170 MB/s to 80 MB/s.

    Update 2011-07-07: I've uploaded a tcpdump capture (can be analyzed with wireshark). In this case 192.168.250.2 is the NFS server (OpenIndiana b148) and 192.168.250.10 is the ESXi host. Things I tested during this capture:
    - started "ioping -w 5 -i 0.2 ." at time 30; 5-second hang in setup, completed at time 40
    - started "ioping -w 5 -i 0.2 ." at time 60; 5-second hang in setup, completed at time 70
    - started "fsync-tester" at time 90 and stopped it at time 120, with the following output:

      fsync time: 0.0248
      fsync time: 5.0197
      fsync time: 5.0287
      fsync time: 5.0242
      fsync time: 5.0225
      fsync time: 0.0209

    2nd update 2011-07-07: I tested another NFS server VM, this time NexentaStor 3.0.5 Community Edition: it shows the same problems.

    Update 2011-07-31: I can also reproduce the problem on the new ESXi build 4.1.0.433742.

    Read the article

  • Website latency and bad tcp packets

    - by Mistero Lupo
    I have multiple websites hosted on a Linode VPS and I'm having an issue with one of them: every page I try to load has about 10 seconds of latency. The Apache logs are clean, and the other websites on the same machine run fine. At first glance I thought it was a memory problem, since the VPS has only 512 MB, but the Linode dashboard shows normal CPU and disk I/O. Anyway, here is the RAM status:

      $ free -m
                   total   used   free   shared  buffers  cached
      Mem:           487    463     23        0        2       55
      -/+ buffers/cache:    404     82
      Swap:          255    155    100

    Only 23 MB free, but if it were a memory problem, why would the other websites behave as usual? I took a live capture with wireshark, and there are some duplicate SYN-ACK packets just before the 10-second gap. I'm out of ideas and looking for clues (wireshark live capture screenshot attached). As you can see from the image, the gap comes after the last bad TCP packet. Thank you in advance.

    UPDATE: I've checked the Apache2 logs at debug error level, and this is where something is happening:

      151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'index.php'
      151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (1) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] pass through /home/fmaisi/sites/www.fmaisi.it/public_html/index.php
      151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] strip per-dir prefix: /home/fmaisi/sites/www.fmaisi.it/public_html/wp-content/plugins/wp-filebase/wp-filebase_css.php -> wp-content/plugins/wp-filebase/wp-filebase_css.php
      151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'wp-content/plugins/wp-filebase/wp-filebase_css.php'

    As you can see, there is a gap of 14 seconds after the pass-through of index.php. Any suggestions? I'm out of ideas again.
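
    To spot such gaps systematically rather than by eye, the log timestamps can be diffed; a small sketch with a placeholder log path:

      import re
      from datetime import datetime

      TS = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2})")
      prev = None
      with open("/var/log/apache2/rewrite.log") as log:   # hypothetical path
          for line in log:
              m = TS.search(line)
              if not m:
                  continue
              ts = datetime.strptime(m.group(1), "%d/%b/%Y:%H:%M:%S")
              if prev and (ts - prev).total_seconds() > 5:
                  print(f"{(ts - prev).total_seconds():.0f}s gap before: "
                        f"{line.strip()[:80]}")
              prev = ts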

    Read the article

  • Database Network Latency

    - by Karl
    Hi all, I am currently working on an n-tier system and battling some database performance issues. One area we have been investigating is the latency between the database server and the application server. In our test environment the average ping time between the two boxes is in the region of 0.2 ms; however, at the client's site it is more in the region of 8.2 ms. Is that something we should be worried about? For your average system, what do you consider a reasonable latency, and how would you go about testing/measuring it? Karl
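
    One low-tech way to measure it from the application server is to time raw TCP connects to the database port (one connect is roughly one round trip) and to look at the distribution rather than just the average. Host and port below are placeholders:

      import socket, statistics, time

      samples = []
      for _ in range(100):
          t0 = time.perf_counter()
          s = socket.create_connection(("db.example.internal", 1433), timeout=5)
          samples.append((time.perf_counter() - t0) * 1000)
          s.close()
          time.sleep(0.05)

      samples.sort()
      print(f"median {statistics.median(samples):.2f} ms, "
            f"p95 {samples[int(len(samples) * 0.95)]:.2f} ms")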

    Read the article

  • Low-latency data link from PC to Android

    - by steveh
    Can anyone recommend a method for a low-latency, bidirectional comms link between my PC app and a slave Android app? The app I have works now via Wi-Fi, but the latency is too high (about 300 ms); I'm looking to get it down to 10 ms or so. The Android device is acting like a glorified remote control for the game on the PC: the APK displays a low-res image and sends button presses back to the game, and the round trip needs to be quick. I'm thinking the only option besides the network is to connect a USB cable, but I don't see much support for that path, and I'm not even sure it would have lower latency than Wi-Fi. Any ideas, please?
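
    For the PC side, a UDP round-trip test is the usual first step toward a ~10 ms budget, since UDP avoids TCP's retransmission and Nagle-induced delays; the Android side would just echo each datagram back. A sketch with a placeholder address:

      import socket, time

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.settimeout(1.0)
      addr = ("192.168.1.50", 9999)    # hypothetical phone address and port

      for i in range(20):
          t0 = time.perf_counter()
          sock.sendto(b"ping", addr)
          sock.recvfrom(64)            # wait for the echo
          print(f"rtt {1000 * (time.perf_counter() - t0):.1f} ms")
          time.sleep(0.1)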

    Read the article

  • Low latency technologies for C++, C# and Java?

    - by James
    I've been reading job descriptions, and many mention 'low latency'. However, I wondered if someone could clarify what types of technologies this refers to. One of the adverts mentioned 'ACE', which I googled and found to be a Cisco telephony technology. If you were hiring someone for a low-latency role, what would you use as a checklist to ensure they knew about low-latency programming? I'm using this to learn more about low-latency programming myself.

    Read the article

  • Beware of SQL Server and PerfMon differences in disk latency calculation

    - by Michael Zilberstein
    Recently, the sp_blitz procedure on one of my OLTP servers returned an alarming notification about high latency on one of the disks (more than 100 ms per I/O). Our chief storage guy didn't understand what I was talking about; according to his measurements, average latency is only about 15 ms. In order to investigate the issue, I recorded two snapshots of sys.dm_io_virtual_file_stats and calculated latency per read and per write separately. The results appeared to be even more alarming: while for read average latency...(read more)
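
    The arithmetic behind the per-read figure is simply the delta of the stall counter divided by the delta of the operation counter between the two snapshots; sketched here with made-up values:

      snap1 = {"io_stall_read_ms": 120_000, "num_of_reads": 1_000}
      snap2 = {"io_stall_read_ms": 480_000, "num_of_reads": 4_000}

      reads = snap2["num_of_reads"] - snap1["num_of_reads"]
      stall = snap2["io_stall_read_ms"] - snap1["io_stall_read_ms"]
      print(f"avg read latency: {stall / reads:.1f} ms")   # 120.0 ms per read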

    Read the article

  • Latency Matters

    - by Frederic P
    A lot of interest in low latencies has been expressed within the financial services segment, most especially in stock trading applications, where every millisecond directly influences the profitability of the trader. These days, much of the trading is executed by software applications that are trained to respond to each other almost instantaneously. In fact, you could say that we are in an arms race where traders use any and all options to cut down the delay in executing transactions, even moving physically closer to the trading venue. The Solaris OS network stack has traditionally been engineered for high throughput, at the expense of higher latencies. Knowledge of the tuning parameters to redress the imbalance is critical for latency-sensitive applications. In this blog we present how to further configure a default Oracle Solaris 10 installation to reduce network latency. Many parameters can in fact be altered, but the most effective ones are intr_blank_time and intr_blank_packets. These parameters affect on-board network throughput and latency on Solaris systems. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive, resulting in higher network throughput and lower latency, but with higher CPU utilization; processor utilization can be as high as 80-90% in some high-load web server environments. If interrupt blanking is enabled, packets are processed when the interrupt is issued, which can reduce processor utilization and network throughput at the cost of higher network latency. Both parameters should be set at the same time. You can set them using the ndd command as follows:

      # ndd -set /dev/eri intr_blank_time 0
      # ndd -set /dev/eri intr_blank_packets 0

    You can also add them to the /etc/system file as follows:

      set eri:intr_blank_time 0
      set eri:intr_blank_packets 0

    The value of the interrupt blanking parameter is a trade-off between network throughput and processor utilization. If higher processor utilization is acceptable in exchange for higher network throughput, disable interrupt blanking. If lower processor utilization is preferred and higher network latency is the acceptable penalty, enable interrupt blanking. Our experience at ISV Engineering is that under controlled experiments the above settings reduce network latency by at least 50%; on a two-socket 3 GHz Sun Fire X4170 M2 running Solaris 10 Update 9, they improved ping-pong latency from 60 µs to 25-30 µs with the on-board NIC.

    Read the article

  • How do you measure latency in low-latency environments?

    - by Ajaxx
    Here's the setup: your system receives a stream of data containing discrete messages (usually between 32 and 128 bytes per message). As part of the processing pipeline, each message passes through two physically separate applications, which exchange the data using a low-latency approach (such as messaging over UDP) or RDMA, and is finally delivered to a client via the same mechanism. Assuming you can inject yourself at any level, including wire-protocol analysis, what tools and/or techniques would you use to measure the latency of the system? As part of this, I'm assuming that every message delivered to the system results in a corresponding (though not equivalent) message being pushed through the system and delivered to the client. The only tool I've seen on the market like this is TS-Associates TipOff. I'm sure that with the right access you could probably measure the same information using a wire analysis tool (a la Wireshark) and the right dissectors, but is this the right approach, or are there commodity solutions I can use?
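
    One common technique is to stamp each message at every hop and compute the deltas at the client; for one-way numbers across machines the hard part is clock synchronization (PTP or similar), which round-trip measurements avoid. A toy single-process illustration of the stamping idea, not a production wire format:

      import struct, time

      def add_stamp(msg: bytes) -> bytes:
          # append a nanosecond timestamp as this hop handles the message
          return msg + struct.pack("<q", time.perf_counter_ns())

      msg = add_stamp(b"")        # hop 1: ingress
      time.sleep(0.001)           # stand-in for the first application
      msg = add_stamp(msg)        # hop 2: second application
      time.sleep(0.001)           # stand-in for delivery
      msg = add_stamp(msg)        # hop 3: client receipt

      stamps = struct.unpack("<3q", msg)
      for i, (a, b) in enumerate(zip(stamps, stamps[1:]), 1):
          print(f"hop {i} -> hop {i + 1}: {(b - a) / 1000:.0f} us")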

    Read the article

  • SoundManager2 has irregular latency

    - by Stefan Monov
    I'm playing some notes at regular intervals. Each one is delayed by a random number of milliseconds, creating a jarring, irregular effect. How do I fix it? Note: I'm OK with some latency, just as long as it's consistent. Answers of the type "implement your own small SoundManager2 replacement, optimized for timing-sensitive playback" are OK, if you know how to do that :) but I'm trying to avoid rewriting my whole app in Flash for now. For an example of an app with zero audible latency, see the Flash-based ToneMatrix. Testcase (see it live or get it as a zip):

      <html>
      <head>
        <title></title>
        <script type="text/javascript"
                src="http://www.schillmania.com/projects/soundmanager2/script/soundmanager2.js">
        </script>
        <script type="text/javascript">
          soundManager.url = '.'
          soundManager.flashVersion = 9
          soundManager.useHighPerformance = true
          soundManager.useFastPolling = true
          soundManager.autoLoad = true

          // Re-arms setTimeout on every tick; note that setTimeout only
          // guarantees a minimum delay, so each tick inherits the browser's
          // timer jitter on top of any playback latency.
          function recur(func, delay) {
            window.setTimeout(function() {
              recur(func, delay)
              func()
            }, delay)
          }

          soundManager.onload = function() {
            var sound = soundManager.createSound("test", "test.mp3")
            recur(function() { sound.play() }, 300)
          }
        </script>
      </head>
      <body>
      </body>
      </html>

    Read the article

  • Which hosts have low latency across United States and Europe?

    - by Joost van Doorn
    I'm looking for some information on web hosts that have low latency (<100 ms) to both the United States and Europe; the host can be located in either. Latency matters most to the United States, the United Kingdom, the Netherlands, Sweden, and Norway. The host should be able to provide managed hosting; hosting at multiple locations is not what I'm looking for. Answers should contain at least some latency measurements from multiple locations, preferably Los Angeles, New York, London, Amsterdam, and Oslo. Some information on your experience with the host is also preferred; do not rant, but do provide details of your package (with or without an SLA, dedicated or VPS, etc.). From my own little research I found that New York-based hosts can probably offer low latency to all these locations, but I do not have much data to back that up other than my own ping of about 85 ms to New York from the Netherlands.

    Read the article

  • Networking with extremely high latency

    - by BCS
    Are there any protocols, systems, etc., experimental or otherwise, designed to allow normal (or as normal as possible) network operations (email, DNS, HTML, etc.) over very-high-latency links? I'm thinking of minutes to an hour, or maybe two. Think light-speed lag at solar-system scale.

    Read the article

  • lowest latency, least overhead app server?

    - by Mark Harrison
    I'm designing an application that will have a network interface serving large numbers of very small metadata requests. The application code itself is very fast, basically looking up data cached in memory and sending it to the client. What's the absolute lowest latency I can get from a network application server running on a Linux box? This will be an internal app running on gigabit Ethernet with no authentication. Any language/framework will be considered, with a preference for C, C++, or Python. Likewise for the protocol, although HTTP would be nice.
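
    At the bare-metal end of the spectrum, a raw TCP loop with TCP_NODELAY and no framing beyond one key per request is about as little overhead as a portable sketch gets; the protocol and port here are invented for illustration, and a real server would add concurrency (threads, epoll, or asyncio):

      import socket

      CACHE = {b"user:1": b"alice", b"user:2": b"bob"}   # the in-memory data

      srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      srv.bind(("0.0.0.0", 7000))
      srv.listen(128)

      while True:
          conn, _ = srv.accept()
          conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle
          with conn:
              key = conn.recv(128).strip()           # request: a single key
              conn.sendall(CACHE.get(key, b"") + b"\n")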

    Read the article
