Search Results

Search found 5290 results on 212 pages for 'refresh rate'.

  • Packet loss rate with iperf and tcpdump

    - by stefita
    I tested a line for its link quality with iperf. The measured speed (UDP, port 9005) was 96 Mbps, which is fine, because both servers are connected to the internet at 100 Mbps. On the other hand, the datagram loss rate was reported as 3.3-3.7%, which I found a little too high. During a high-speed transfer I recorded the packets on both sides with tcpdump and then calculated the packet loss myself: 0.25% on average. Does anyone have an explanation for where this big difference may be coming from? And what is an acceptable packet loss rate, in your opinion?
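
    For reference, a minimal sketch of the kind of test described here, assuming iperf2 syntax; the hostname and interface are placeholders:

        # server side: listen for UDP on port 9005
        iperf -s -u -p 9005
        # client side: push ~100 Mbps of UDP traffic for 30 seconds
        iperf -c server.example.com -u -p 9005 -b 100M -t 30
        # on both ends, capture the test traffic for an independent loss count
        tcpdump -i eth0 -w iperf-test.pcap udp port 9005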

    Read the article

  • Network Transfer Rate on SMB/FTP

    - by Jack Sleight
    Hi, we're trying to optimise our network transfer rate from the client PCs to the file server, but are having no luck. If I run an iperf test between a client PC (Windows XP) and our server (Linux) with a large TCP window size (the default is 8 KB), I can get 60 Mbps. But when I run an SMB transfer speed test, all I get is around 35 Mbps, and with an FTP transfer speed test around 32 Mbps. I thought this was down to the TCP window size, but I have now increased that to 256960 and it made no difference at all. Any ideas what I'm doing wrong? Or is 35 Mbps all I should expect? Cheers, Jack
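
    For comparison, this is roughly the iperf test being described, assuming iperf2's -w flag for the TCP window size; the hostname is a placeholder:

        # server side (Linux), advertise a 256 KB TCP window
        iperf -s -w 256K
        # client side (Windows XP), matching window size
        iperf -c fileserver.example.com -w 256K -t 30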

    Read the article

  • NTFS file size, how do you guys refresh it to view its current correct size

    - by Michael Goldshteyn
    I work in a command prompt quite often, and around many large (remote) log files. Unfortunately, the sizes of these files do not update as the logs grow unless, it would appear, the files are touched. I usually use hacks like the following from Cygwin to "touch" the file so that its file size updates:

        stat file.txt

    or

        head -c0 file.txt

    Are there any native Windows constructs that can refresh the file size from the command prompt, as unintrusively as possible and preferably without transferring any (remote) data? I often need to refresh the sizes of very large files remotely, to see how large they have grown.
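
    A minimal PowerShell sketch of the same trick, assuming (as with the Cygwin hacks) that merely opening and closing a read handle is enough to make the size update; the UNC path is a placeholder, and no file data is read:

        # open and immediately close a handle, then re-read the metadata
        $f = [System.IO.File]::Open('\\server\share\app.log', 'Open', 'Read', 'ReadWrite')
        $f.Close()
        (Get-Item '\\server\share\app.log').Length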

    Read the article

  • Postfix Stagger/Rate Limit Outbound Mail

    - by GruffTech
    We have a server that sends our weekly newsletter to subscribers. To prevent providers like Hotmail or Yahoo from blocking us for sending too many simultaneous emails to them, is there a way we can stagger, or rate-limit, outbound emails from Postfix? Keep in mind, I don't want the mail server to stop queueing mail or accepting new messages; simply defer delivery if there are more than 3-4 messages in flight per destination domain/IP address, or something similar. Note: I don't want a sender throttle, as described in a similar question here. I'm looking more for a recipient throttle, but I haven't had any luck finding out how to do so with the PolicyD or Anvil services, and was wondering if anyone else has accomplished such a task.
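
    A hedged main.cf sketch of the built-in knobs that come closest, assuming a Postfix release (2.5 or later) that supports per-destination rate delays; the values are illustrative, not recommendations:

        # at most 3 parallel deliveries to any one destination domain
        default_destination_concurrency_limit = 3
        # pause between successive deliveries to the same destination
        default_destination_rate_delay = 2s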

    Read the article

  • Rate limiting an internet connection per user

    - by Alister
    I've got a friend who has a "rent-by-room" property and includes internet access as part of this. However, some tenants are somewhat hogging the internet (i.e. constantly downloading). I was wondering if anyone knows of a fairly easy way of rate limiting each connection to make the system more equitable. A preferred solution would be a cheap piece of hardware or some sort of Linux "appliance". I would rather not have to get an iptables headache if this is avoidable.
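
    For the Linux-appliance route, a minimal sketch of bandwidth shaping with tc, assuming an HTB qdisc and that eth0 faces the tenants; the rates are illustrative:

        # cap the default class that all tenant traffic falls into
        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1: classid 1:10 htb rate 2mbit ceil 4mbit
        # give individual flows within the class a fair share
        tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10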

    Read the article

  • Windows XP to Ubuntu 10.04 via VNC does not refresh

    - by hughdbrown
    I've tried tightvncserver and vnc4server on Ubuntu, and the TightVNC and UltraVNC viewers on Windows XP. I can connect from Windows to Ubuntu with any combination, but there is no screen refresh: I can drag a window on Ubuntu using my mouse in Windows, or type into a terminal in Ubuntu from my keyboard in Windows, but the image does not change on the Windows side. I can request a screen refresh from Windows, but the screen does not update. I am running the ATI driver on Ubuntu. I've tried stepping System|Preferences|Appearance|Visual Effects down from Extra to Normal, with no effect.

    Read the article

  • WinXP keyboard input repeat rate problem

    - by Victor Sorokin
    I have a problem similar to http://superuser.com/questions/33981/problems-with-kvm-switch-and-keyboard-repeat-rate-on-windows-xp: when I press and hold a key, it repeats a random number of times, then the repeat stops and I need to release the key and press it again to make the repeat continue. If there is simultaneous sound playback when the repeat stops, the sound gets stuck, with unpleasant drumming, as if there were an audio problem. The issue is reproducible in WinXP Safe Mode. On the same configuration under Linux there is no issue. My config:
      - PS/2 Logitech keyboard
      - USB mouse
      - M3A/H-HDMI motherboard
      - WinXP Pro English SP2 with an added Russian layout (WinXP loads via GRUB2)
      - Realtek audio drivers for AC97
    Thanks for your suggestions.

    Read the article

  • Will this increase my VPS failure rate?

    - by Spencer Lim
    Will this increase my virtual private server's failure rate if I:
      - install Microsoft Windows Server 2008 Enterprise
      - install SQL Server 2008 Enterprise
      - install IIS 7.5
      - install ASP.NET MVC 2
      - install Microsoft Exchange
      - install Team Foundation Server
    on one mini VPS? The specification: a shared plan on a DELL PowerEdge R710 with DDR3 ECC RAM (16 GB total, 1 GB for this VPS), a DELL PERC 6i RAID controller (this thing alone costs about 1.5k-2k) and 15K RPM SAS HDDs (146 GB each, 33 GB for this VPS); each HDD is freaking fast, over 300 MB/s read/write possible with proper tuning. The motherboard is a DELL and it has twin redundant PSUs (870 W, 85% efficiency). It's running on two Intel Xeon 5502s (quad core), so about 8 physical processors (fairly shared). Is there any rule of thumb about what services one VPS should be limited to installing? Thanks for any reply.

    Read the article

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    Hi all. We've decided that we want to experiment and limit requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time that you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (being equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s allowing him to regain access to the site. Assuming that I have the above correct, my question is what happens when I start regulating access per minute? If we chose 60 requests per minute, at what rate does the token bucket replenish?
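
    For concreteness, a hedged sketch of per-minute limiting with the stock limit_req module; the zone name and sizes are illustrative:

        # http context: track clients by IP, at 60 requests/minute
        limit_req_zone $binary_remote_addr zone=perminute:10m rate=60r/m;

        server {
            location / {
                # tolerate a spike of 10 extra requests, reject beyond that
                limit_req zone=perminute burst=10 nodelay;
            }
        }

    As far as the documented behaviour goes, 60r/m is tracked internally as one request per second rather than as a bucket of 60 tokens refilled once a minute, which seems to be the crux of the question.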

    Read the article

  • Normalize Accept-Encoding via HAProxy for optimized Squid hit rate

    - by Matt Beckman
    Our website infrastructure uses HAProxy for load balancing and a Squid cluster for caching, with the application data on an IIS cluster. We load balance in HAProxy by URI to optimize the Squid hit rate, but we know that Squid is holding different copies of each page based on the Accept-Encoding header passed to it by the browser, and so IE (gzip, deflate) will have a different copy of a cached page than Firefox (gzip,deflate) or Chrome (gzip,deflate,sdch). We want to normalize the Accept-Encoding headers, and I think the best place to do so would be in HAProxy. I'd appreciate it if someone could offer some ideas on how to accomplish this without breaking support for clients without gzip or deflate support.
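
    A hedged sketch of one way to do this in the HAProxy frontend, assuming a 1.4/1.5-era configuration where the reqidel/reqadd directives are available; it collapses every gzip-capable client onto one canonical header value:

        # frontend/listen section
        acl accepts_gzip hdr_sub(Accept-Encoding) gzip
        reqidel ^Accept-Encoding:.*
        reqadd Accept-Encoding:\ gzip if accepts_gzip

    Clients that never advertised gzip end up with no Accept-Encoding header at all, so they still receive an uncompressed copy from Squid.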

    Read the article

  • Best practices for setting lm-factor in Squid refresh patterns

    - by Mpentecost
    I am running a Squid (3.1) cache in front of Django. The content of the site does not change very often, so Squid gives our backend much-needed breathing room. Currently, this is the refresh pattern that we are using to cache the content:

        refresh_pattern . 60 100% 60

    We basically want to cache everything for at least an hour (and only an hour) before Squid re-validates the content. My question is about the "100%" parameter, which sets the lm-factor. I'm not sure that setting it to 100% is doing what we want it to. The assumption was that setting it to 100% would ensure that objects stay in the cache for the max cache time. Is this an incorrect assumption? What are the best practices one should follow when setting up a refresh pattern like this?
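
    For reference, the fields of that directive (min and max are in minutes; percent is the lm-factor, applied to the object's age since its Last-Modified time):

        # refresh_pattern [-i] <regex> <min> <percent> <max>
        refresh_pattern . 60 100% 60

    Note that with min equal to max, objects younger than 60 minutes are always fresh and older ones always stale, so the lm-factor never actually gets consulted by this rule.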

    Read the article

  • LAME: Switch sample rate of file without reencoding?

    - by TK Kocheran
    Is it possible to resample an MP3 file to a different rate (44.1 kHz) without doing a re-encode? I have a few MP3 files that are at 48 kHz and I need to switch 'em to 44.1 kHz, and I don't want to have to re-encode them to do so, as I'll lose quality. The source files are CBR 320 kbps at 48 kHz. Can this be done? The current way I'm doing it is with the following command:

        lame -b 320 -q 0 --resample 44.1 input.mp3 output.mp3

    Is there a better way to do this?

    Read the article

  • Converting higher bit rate songs to 128 kbps AAC

    - by danny wilson
    I was updating my iPod classic tonight when, for some stupid reason, I checked the box that says "convert higher bit rate songs to 128 kbps AAC". Straight away it changed my audio amount from 67 GB to 79 GB and then started to sync. I stopped it right away, as I thought the whole point was to reduce the space used, not add to it. Every time I try to sync now, it keeps going back to this same task, and I don't know how to stop it, apart from the obvious cancelling of the sync; but then I can't update my iPod. Anybody got any ideas for me, please? Thanks in advance.

    Read the article

  • Need help translating rate limiting iptables rules to Puppet format

    - by geoffroy
    I use the Puppet iptables module to manage iptables rules on my machine. I'd like to rate-limit failed SSH connections as described here: Hundreds of failed ssh logins

        iptables -A INPUT -p tcp --dport 22 -m recent --update --seconds 60 --hitcount 5 --name SSH --rsource -j DROP
        iptables -A INPUT -p tcp --dport 22 -m recent --set --name SSH --rsource -j ACCEPT

    Is it possible to translate this to Puppet syntax, something like:

        firewall { '015 drop 5 failed attempts to connect to SSH in a minute':
          proto  => 'tcp',
          port   => 22,
          action => 'drop',
          # what are the other parameters?
        }

    Any help welcome. Best regards, Geoffroy
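
    A hedged sketch of how this might look, assuming a puppetlabs-firewall version that exposes the iptables 'recent' match through the recent, rseconds, rhitcount, rname and rsource parameters (check your module's documentation before relying on these names):

        firewall { '015 drop repeated SSH connection attempts':
          proto     => 'tcp',
          dport     => 22,
          recent    => 'update',
          rseconds  => 60,
          rhitcount => 5,
          rname     => 'SSH',
          rsource   => true,
          action    => 'drop',
        }

        firewall { '016 track new SSH connections':
          proto   => 'tcp',
          dport   => 22,
          recent  => 'set',
          rname   => 'SSH',
          rsource => true,
          action  => 'accept',
        }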

    Read the article

  • How'd they do it: Millions of tiles in Terraria

    - by William 'MindWorX' Mariager
    I've been working up a game engine similar to Terraria, mostly as a challenge, and while I've figured out most of it, I can't really seem to wrap my head around how they handle the millions of interactable/harvestable tiles the game has at one time. Creating around 500,000 tiles (that is 1/20th of what's possible in Terraria) in my engine causes the frame rate to drop from 60 to around 20, even though I'm still only rendering the tiles in view. Mind you, I'm not doing anything with the tiles, only keeping them in memory.

    Update: Code added to show how I do things. This is part of a class which handles the tiles and draws them. I'm guessing the culprit is the "foreach" part, which iterates everything, even empty indexes.

        ...
        public void Draw(SpriteBatch spriteBatch, GameTime gameTime)
        {
            foreach (Tile tile in this.Tiles)
            {
                if (tile != null)
                {
                    if (tile.Position.X < -this.Offset.X + 32) continue;
                    if (tile.Position.X > -this.Offset.X + 1024 - 48) continue;
                    if (tile.Position.Y < -this.Offset.Y + 32) continue;
                    if (tile.Position.Y > -this.Offset.Y + 768 - 48) continue;
                    tile.Draw(spriteBatch, gameTime);
                }
            }
        }
        ...

    Also, here is the Tile.Draw method, which could also do with an update, as each Tile uses four calls to the SpriteBatch.Draw method. This is part of my autotiling system, which means drawing each corner depending on the neighboring tiles. The texture_* fields are Rectangles and are set once at level creation, not on each update.

        ...
        public virtual void Draw(SpriteBatch spriteBatch, GameTime gameTime)
        {
            if (this.type == TileType.TileSet)
            {
                spriteBatch.Draw(this.texture, this.realm.Offset + this.Position, texture_tl, this.BlendColor);
                spriteBatch.Draw(this.texture, this.realm.Offset + this.Position + new Vector2(8, 0), texture_tr, this.BlendColor);
                spriteBatch.Draw(this.texture, this.realm.Offset + this.Position + new Vector2(0, 8), texture_bl, this.BlendColor);
                spriteBatch.Draw(this.texture, this.realm.Offset + this.Position + new Vector2(8, 8), texture_br, this.BlendColor);
            }
        }
        ...

    Any critique of or suggestions for my code are welcome.

    Update: Solution added. Here's the final Level.Draw method, which iterates only the tile indexes actually in view. The Level.TileAt method simply bounds-checks the inputted values, to avoid OutOfRange exceptions.

        ...
        public void Draw(SpriteBatch spriteBatch, GameTime gameTime)
        {
            Int32 startx = (Int32)Math.Floor((-this.Offset.X - 32) / 16);
            Int32 endx = (Int32)Math.Ceiling((-this.Offset.X + 1024 + 32) / 16);
            Int32 starty = (Int32)Math.Floor((-this.Offset.Y - 32) / 16);
            Int32 endy = (Int32)Math.Ceiling((-this.Offset.Y + 768 + 32) / 16);
            for (Int32 x = startx; x < endx; x += 1)
            {
                for (Int32 y = starty; y < endy; y += 1)
                {
                    Tile tile = this.TileAt(x, y);
                    if (tile != null)
                        tile.Draw(spriteBatch, gameTime);
                }
            }
        }
        ...

    Read the article

  • Why is the framerate (fps) capped at 60?

    - by dennmat
    ISSUE: I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean by this is: on the laptop it will display 300 fps (for example), whereas on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop, I'll see my frame rate drop accordingly; the same test on the desktop results in no change, and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, and then it will descend. It seems as though its "theoretical" frame rate would be 400 or 500, but it never actually gets there, doing only 60 until there's too much to handle at 60. This 60-frame cap is coming from nowhere. I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple of days I've been scratching my head trying to figure out how to debug this.

    SETUPS
    Desktop: Visual Studio Express 2012, Windows 7 Ultimate 64-bit
    Laptop: Visual Studio Express 2010, Windows 7 Ultimate 64-bit
    The libraries (Allegro, Box2D) are the same versions on both setups.

    CODE
    Main loop:

        while (!abort)
        {
            frameTime = al_get_time();
            if (frameTime - lastTime >= 1.0)
            {
                lastFps = fps / (frameTime - lastTime);
                lastTime = frameTime;
                avgMspf = cumMspf / fps;
                cumMspf = 0.0;
                fps = 0;
            }
            /** DRAWING/UPDATE CODE **/
            fps++;
            cumMspf += al_get_time() - frameTime;
        }

    Note: there is no blocking code in the loop at any point.

    Where I'm at: My understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, and the double is represented as [seconds].[finer-resolution]; seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same. And I promise it's the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal, I'd really like to figure this out, or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.

    EDIT: Thanks, all. For any others that find this: to disable vSync (Windows only) in OpenGL, first get "wglext.h"; it's all over the web. Then you can use a tool like GLee, or just write your own quick extension manager like:

        bool WGLExtensionSupported(const char *extension_name)
        {
            PFNWGLGETEXTENSIONSSTRINGEXTPROC _wglGetExtensionsStringEXT = NULL;
            _wglGetExtensionsStringEXT = (PFNWGLGETEXTENSIONSSTRINGEXTPROC) wglGetProcAddress("wglGetExtensionsStringEXT");
            if (strstr(_wglGetExtensionsStringEXT(), extension_name) == NULL)
            {
                return false;
            }
            return true;
        }

    and then create and set up your function pointers:

        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = NULL;
        PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;
        if (WGLExtensionSupported("WGL_EXT_swap_control"))
        {
            // Extension is supported, init pointers.
            wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC) wglGetProcAddress("wglSwapIntervalEXT");
            // this is another function from the WGL_EXT_swap_control extension
            wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC) wglGetProcAddress("wglGetSwapIntervalEXT");
        }

    Then just call wglSwapIntervalEXT(0) to disable vSync, or wglSwapIntervalEXT(1) to enable it. I found that the reason this is Windows-only is that OpenGL doesn't actually deal with anything other than rendering; it leaves the rest up to the OS and hardware. Thanks everyone, you saved me a lot of time!

    Read the article

  • WPF Cursor Blink rate

    - by Daniel
    I have noticed that the cursor blinks really slowly in my WPF apps, much, much slower than in the rest of Windows. What I would like is for the cursor blink rate in WPF to match the standard Windows cursor blink rate.

    Read the article

  • Older iPhone/ iPod frame rate?

    - by Adam
    Do older iPods and iPhones have a frame rate of 60 fps? I'm finding that all the methods for calculating time intervals on the iPhone (CFTimeInterval, NSTimer, timeIntervalSince1970, etc.) are giving me bad data, so I've decided to assume a frame rate of 60; I'm just not sure if older Apple devices can run at this.

    Read the article

  • What is "Memory Page out Rate"

    - by Tuxist
    Could somebody please tell me what the "Memory Page Out Rate" is? I have seen this in the "HP OpenView" server monitoring tool and tried googling it. I would appreciate it if some expert could clarify. If the page-out rate is as high as 200+ per second, can it crash the server? Thanks in advance.

    Read the article

  • GAE Task Queue rate

    - by bach
    Hi, is there a way to guarantee that a task will be performed in X minutes (or after X minutes)? ("Rate" would mean the interval between tasks, but what about the first task? Does the first task start after the 'rate' time?)
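
    If the goal is simply "run this after X minutes", a minimal sketch, assuming the Python runtime's taskqueue API; the countdown argument delays execution by roughly N seconds (a minimum, not an exact guarantee), and the URL is a placeholder:

        from google.appengine.api import taskqueue

        # enqueue a task that becomes eligible to run after ~10 minutes
        taskqueue.add(url='/tasks/work', countdown=10 * 60)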

    Read the article

  • SMTP message rate control on Ubuntu 8.04, preferably with postfix

    - by TimDaMan
    Maybe I am chasing a bug, but I am trying to set up an SMTP proxy of sorts. I have a Postfix server which receives all the email for a collection of servers/clients. It then uses a smarthost (relayhost=...) to forward its mail to our corporate MTA. I would like to limit the number of messages an individual server can relay, to prevent swamping the corporate MTA. Postfix has a program called "anvil" that is capable of tracking stats about mail to be used for such things, but it doesn't seem to be executed. I ran "inotifywait -m /usr/lib/postfix/anvil" while I started Postfix and sent a number of messages through it from a remote server; inotifywait indicated anvil was never run. Has anyone gotten Postfix/anvil rate controls to work?

    main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        myhostname = site-server-q9
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost = <our outgoing mail relay>
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = 10.X.X.X
        smtpd_client_message_rate_limit = 1
        anvil_rate_time_unit = 1h

    master.cf extract:

        anvil unix - - - - 1 anvil
        smtp  inet n - - - - smtpd

    Read the article

  • ADSL2+ - High sync-rate, good line attenuation, but low noise margin and slow speeds

    - by Mark Pim
    I've been with my ISP (IdNet) for a few months and have been getting some good speeds, but in the last week the speed has dramatically decreased (from 15 Mbps+ to around 0.2 Mbps). This happens at all times of day, not just peak periods. Obviously I've done all I can to isolate problems at my end: only one PC is connected to the router (via an ethernet cable), no other background programs are using the network, etc. I've raised the issue with the ISP and they've suggested trying a new ADSL filter to see if that is causing the problem, but I thought it would also be good to get the opinion of Super User on possible causes and other troubleshooting I can do. Here are the juicy stats :)

    My router (Netgear DGN1000) reports:

                            Downstream    Upstream
        Connection Speed    17602 kbps    1062 kbps
        Line Attenuation    17.9 dB       8.6 dB
        Noise Margin        6.0 dB        6.1 dB

    I used RouterStats and it seems to show those figures stay fairly consistent all the time. I ran the BT speed test and it reported:

      - download speed of 164 kbps, out of a max achievable of 21000 kbps
      - upload speed of 859 kbps, out of 1048 kbps
      - DSL connection rate of 17719 kbps down and 1048 kbps up
      - IP profile of 15000 kbps

    Is there any more troubleshooting I can do? Does this look like a problem with my equipment/wiring or with BT's line? Any advice would be great :)

    Read the article

  • Monitor resolution messed up somehow

    - by Kelp
    I purchased the Westinghouse 22" LCD LCM-22w3 a few years ago, and now it's been acting up on me. I just booted into Windows 7 (without changing any settings), and the default resolution is 1600x1024, and it allows me to select a refresh rate of up to 85 Hz (it didn't let me do that before). I usually have my resolution set to 1680x1050 with a refresh rate of 60 Hz. Now, that resolution does not even appear in the list. Does anyone have any idea of what the problem could be and how to fix it?

    Edit: I am not sure if this will help, but when I go to change the screen resolution, the monitor is shown as "Generic Non-PnP Monitor". It used to be referred to as "Generic PnP Monitor". I tried to disable the Generic Non-PnP Monitor, but when I restart, it uses that monitor again.

    Edit 2: I created a custom .inf file using PowerStrip, but that does not work either. The monitor settings are being stubborn.

    Read the article
