Search Results

Search found 1659 results on 67 pages for 'bandwidth throttling'.

Page 11/67

  • Need to know who is hogging my bandwidth?

    - by Dev
    I have an ethernet connection to my iMac, and with Internet Sharing I am broadcasting a wireless network from my Mac rather than using a wireless router. I use it to connect other devices wirelessly to the internet, but this makes all the traffic flow through my iMac. I wanted a way to analyze the traffic so that I know which connected devices are hogging the bandwidth at a given time, and which websites the traffic goes to. I installed Wireshark for Mac and played around a little, but it seems like overkill when you first look at it. Can someone please help with a few instructions to get what I need, or with any other way besides using Wireshark? Thanks, Dev.
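
    For reference, a per-device tally needs far less machinery than full Wireshark. Below is a minimal sketch in Python using scapy (an assumption: scapy is installed and the script runs as root on the sharing Mac); it counts bytes per source IP on the default interface:

        # Tally bytes per source IP for 60 seconds, then print the top talkers.
        from collections import Counter
        from scapy.all import IP, sniff

        byte_counts = Counter()

        def tally(pkt):
            if IP in pkt:
                byte_counts[pkt[IP].src] += len(pkt)

        sniff(prn=tally, timeout=60)
        for ip, nbytes in byte_counts.most_common(10):
            print(f"{ip}: {nbytes / 1024:.1f} KiB")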

    Read the article

  • Win CE 6.0 client using WCF Services - Reduce Bandwidth

    - by Sean
    We have a Win CE 6.0 device that is required to consume services that will be provided using WCF. We are attempting to reduce bandwidth usage as much as possible, and with a simple test we found that using UDP instead of HTTP saved significant data usage. I understand there are limitations regarding WCF on .NET Compact Framework 3.5 devices, and I was curious what people think would be the appropriate way forward. Would it make sense to develop a custom UDP binding, and would that work for both sides? Any feedback would be appreciated. Thanks.
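
    For a sense of why UDP measured so much leaner: every HTTP request carries a few hundred bytes of headers, while a UDP datagram adds only small fixed IP/UDP headers around the payload. A minimal sketch (the host, port, and two-byte message format are hypothetical):

        # Send a 2-byte binary request over UDP and wait briefly for a reply.
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        sock.sendto(b"\x01\x2a", ("device.example", 9000))  # opcode + argument
        try:
            reply, addr = sock.recvfrom(512)
        except socket.timeout:
            reply = None  # UDP offers no delivery guarantee; handle loss yourself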

    Read the article

  • OpenVPN TCP/UDP slow SSH/SMB performance

    - by Petr Latal
    I have a question about strange behavior of my OpenVPN configuration on Debian Lenny. I have two server configs (one proto tcp-server based and one proto udp based). ISP bandwidth is 7 Mbit/7 Mbit. When I use proto tcp-server, my server download rate is fine at around 6.4 Mbit/s, but the upload rate is about 3 Mbit/s. When I use proto udp, my server download rate is around 3 Mbit/s and the upload rate around 6.4 Mbit/s. I tried tuning the MTU, mssfix and cipher on/off in the server and client configs to synchronize the rates, but without success.

    Here is the TCP-based SERVER config:

        mode server
        tls-server
        port 1194
        proto tcp-server
        dev tap0
        ifconfig 11.10.15.1 255.255.255.0
        ifconfig-pool 11.10.15.2 11.10.15.20 255.255.255.0
        push "route 192.168.1.0 255.255.255.0"
        push "dhcp-option DNS 192.168.1.200"
        push "route-gateway 11.10.15.1"
        push "dhcp-option WINS 192.168.1.200"
        route-up /etc/openvpn/routeup.sh
        duplicate-cn
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        mssfix
        cipher BF-CBC

    Here is the UDP-based SERVER config:

        port 1194
        proto udp
        dev tun0
        local xx.xx.xx.xx
        server 11.10.15.0 255.255.255.0
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        duplicate-cn
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        tun-mtu 1500
        mssfix 1212
        client-to-client
        ifconfig-pool-persist ipp.txt

    Here is the TCP/UDP-based Windows CLIENT config:

        remote xx.xx.xx.xx
        --socket-flags TCP_NODELAY
        tls-client
        port 1194
        proto tcp-client
        #proto udp
        dev tap
        #dev tun
        pull
        ca ca.crt
        cert latis.crt
        key latis.key
        mute 0
        comp-lzo adaptive
        verb 3
        resolv-retry infinite
        nobind
        persist-key
        auth-user-pass
        auth-nocache
        script-security 2
        mssfix
        cipher BF-CBC

    Read the article

  • Managing a high-traffic media-sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made available, for consumers to visit the page and listen to. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists. How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn rate estimates. I don't have a ton of capital to start with; I am putting together a business plan and seeking investments. I have a game plan to grow fast enough to be successful, and slow enough to manage the financial growth requirements. Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, D.B.A. Badfish Productions, LLC

    Read the article

  • HTB.init / tc behind NAT

    - by Ben K.
    I have an Ubuntu 10 box that I'm trying to set up as a bandwidth-shaping router. The machine has one WAN interface, eth0, and two LAN interfaces, eth1 and eth2. NAT is configured using MASQUERADE as described at InternetConnectionSharing. I'm mostly concerned with shaping outbound traffic from the LAN interfaces; in the end, I'd like to end up with a hard 768 Kbps limit per LAN interface (rather than a limit on eth0 pooled across all interfaces). I installed HTB.init and, riffing on the examples, tried to set this up on eth1 by putting three files into /etc/sysconfig/htb:

        /etc/sysconfig/htb/eth1
            DEFAULT=30
            R2Q=100

        /etc/sysconfig/htb/eth1-2.root
            RATE=768Kbps
            BURST=15k

        /etc/sysconfig/htb/eth1-2:30.dfl
            RATE=768Kbps
            CEIL=788Kbps
            BURST=15k
            LEAF=sfq

    I can run /etc/init.d/htb start and /etc/init.d/htb stats and see information that seems to suggest it's working... but when I try pulling a large file via the WAN interface, the shaping clearly isn't in effect. Any suggestions? My guess is it has something to do with where the shaping falls in the NAT chain, but I really have no idea where to begin troubleshooting this.

    Update: Here's my /etc/init.d/htb list output; it seems to make sense -- the default rate for eth1 is 768 Kbps?

        ### eth0: queueing disciplines
        qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
        qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec
        ### eth0: traffic classes
        class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
        class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
        ### eth0: filtering rules
        filter parent 1: protocol ip pref 100 u32
        filter parent 1: protocol ip pref 100 u32 fh 800: ht divisor 1
        filter parent 1: protocol ip pref 100 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:30 match 00000000/00000000 at 12 match 00000000/00000000 at 16
        ### eth1: queueing disciplines
        qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
        qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec
        ### eth1: traffic classes
        class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
        class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b

    Read the article

  • Finding the best network latency between two countries

    - by Yoav Aner
    I know there are many tools to test for bandwidth and latency, but they all rely on having at least one host from which you can run those tests. I wonder whether there's an online source or some other way to guesstimate the latency or speed between two countries (in general). For example, would a customer in Japan get lower latency if the server is located in Singapore or in Australia? Is a user in India likely to get higher download speed from a server in the UK or in the US? Are there any online resources or some clever ways to answer those questions with a reasonable degree of accuracy? [UPDATE]: Thanks for the great suggestions from Raffael Luthiger; I didn't know about those looking glass servers. The submarine cable maps were also really cool to discover (thanks to Jesper Mortensen). It also seems really wise to ask network professionals in the area about their experience, but obviously I don't have access to those - at least some of them are on SF :) However, I'm still a little unsure how to combine those resources to get some measurements. This is the information I have: two countries (A, B). I do have IP addresses of customers in country A (I can obtain those from the web server log files, for example). Presumably I can find some looking glass servers in country B and run a trace to those IPs. What are the best measurements to use? Are there any scripts that help automate at least some of this process?
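
    On the scripting question: given the customer IPs from the logs and shell access on a candidate host in country B, round-trip times can be collected in bulk. A rough sketch (assumes a Unix-style ping; the IPs are placeholders):

        # Ping each customer IP a few times and report the median RTT in ms.
        import re
        import statistics
        import subprocess

        def median_rtt(ip, count=5):
            out = subprocess.run(["ping", "-c", str(count), ip],
                                 capture_output=True, text=True).stdout
            rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
            return statistics.median(rtts) if rtts else None

        for ip in ["203.0.113.5", "198.51.100.7"]:  # placeholder customer IPs
            print(ip, median_rtt(ip))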

    Read the article

  • First web server questions

    - by Graeme
    Hi there, just looking for some help/suggestions with this. I require my own server for an upcoming project that will be hosting users' websites. I want to build a control panel the user can log into and modify their website, which will be stored elsewhere on the server. This all seems easy enough; it's just managing domains and email that confuses me. What should I look for to manage domain names and point them to the correct website, and what would be the best way to manage email accounts/set up new ones etc.? I want to avoid cPanel/WHM if possible; I'm looking to control most things through the control panel I will be building, so any suggestions on this would be useful as well, as I will be wanting to add email accounts through PHP (can be done using a shell, I assume?). I will also be wanting to measure the bandwidth used by the websites contained in each user's directory; any suggestions on making this possible? I'm really looking for some suggestions on what software to use to set this up; any advice would be really helpful! Thanks, Graeme

    Read the article

  • Measure Upload Speed between a client and our server

    - by tresstylez
    We host a SAAS application specially customized for multiple clients. One customer in particular is reporting sporadic performance issues from various locations on their network, in particular when UPLOADING documents through a form on our website. The client claims they have "bandwidth to spare" and that utilization of their "pipe" is so low that it MUST be our application, but our application has MANY clients and all features are working fine for all other clients. Interestingly enough, DOWNLOADS (i.e. just accessing the website, or downloading documents) work fine. A speed test shows that they should get 1.2 Mbps up, so a 3 MB file should take 20 seconds to upload; it takes 60+ seconds on their network, and sometimes even small files take OVER 10 minutes to upload, or they time out. Pings and traceroutes don't show any abnormally long hops or response times. They claim other SAAS applications they use allow them to upload just fine. Both IT teams are working together to resolve this issue. What kind of data can I request from the clients to begin ruling things out? It seems we need to somehow measure the LATENCY of the networks involved, or even at the switch level we need to understand whether packets are getting dropped somewhere and why. Where should I start? Any help is appreciated; I'll provide more info upon request.
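
    One concrete measurement both teams can compare is a timed upload of a fixed-size payload, run from several of the client's machines. A sketch (assumes the requests package; the endpoint URL is hypothetical):

        # Time a fixed-size POST and report effective upstream throughput.
        import time
        import requests

        payload = b"\0" * (3 * 1024 * 1024)  # 3 MB, matching the problem file size
        start = time.monotonic()
        requests.post("https://app.example.com/upload", data=payload)
        elapsed = time.monotonic() - start
        print(f"{len(payload) * 8 / elapsed / 1e6:.2f} Mbit/s effective upstream")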

    Read the article

  • Programmatic resource monitoring per process in Linux

    - by tuxx
    Hi, I want to know if there is an efficient solution to monitor a process's resource consumption (CPU, memory, network bandwidth) in Linux. I want to write a daemon in C++ that does this monitoring for some given PIDs. From what I know, the classic solution is to periodically read the information from /proc, but this doesn't seem the most efficient way (it involves many system calls). For example, to monitor the memory usage every second for 50 processes, I have to open, read and close 50 files (that means 150 system calls) every second from /proc, not to mention the parsing involved when reading these files. Another problem is the network bandwidth consumption: this cannot be easily computed for each process I want to monitor. The solution adopted by NetHogs involves a pretty high overhead in my opinion: it captures and analyzes every packet using libpcap, then for each packet the local port is determined and searched in /proc to find the corresponding process. Do you know if there are more efficient alternatives to the methods presented here, or any libraries that deal with these problems?
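
    To make the /proc pattern concrete, one sample per process looks like this - a sketch in Python rather than C++ for brevity (Linux only):

        # Read a process's resident memory (VmRSS) from /proc/<pid>/status.
        def rss_kb(pid):
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1])  # value is reported in kB
            return None

        print(rss_kb(1))  # e.g. resident memory of PID 1, in kB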

    Read the article

  • PHP miniserver downloader bandwidth

    - by snikolov
        $httpsock = @socket_create_listen("9090");
        if (!$httpsock) {
            print "Socket creation failed!\n";
            exit;
        }

        while (1) {
            $client = socket_accept($httpsock);
            $input = trim(socket_read($client, 4096));
            $input = explode(" ", $input);
            $input = $input[1]; // request path from "GET /path HTTP/1.x"
            $fileinfo = pathinfo($input);
            switch ($fileinfo['extension']) {
                default:
                    $mime = "text/html";
            }
            if ($input == "/") {
                $input = "index.html";
            }
            $input = ".$input";
            if (file_exists($input) && is_readable($input)) {
                echo "Serving $input\n";
                $contents = file_get_contents($input);
                $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents";
            } else {
                //$contents = "The file you requested doesn't exist. Sorry!";
                //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents";
                $filename = "dada";
                $file = fopen($filename, 'r');
                $filesize = filesize($filename);
                $buffer = fread($file, $filesize);
                fclose($file);
                // "=>" was missing from the original array literal
                $send = array("Output" => $buffer, "filesize" => $filesize, "filename" => $filename);
                $file = $send['filename'];
                $filesize = $send['filesize'];
                // headers need double quotes so \r\n is interpreted, and the
                // body must follow the blank line that terminates the headers
                $output  = "HTTP/1.0 200 OK\r\n";
                $output .= "Content-Type: application/octet-stream\r\n";
                $output .= "Content-Disposition: attachment; filename=\"$file\"\r\n";
                $output .= "Content-Length: $filesize\r\n";
                $output .= "Accept-Ranges: bytes\r\n";
                $output .= "Content-Transfer-Encoding: binary\r\n";
                $output .= "Cache-Control: private\r\n";
                $output .= "Connection: close\r\n\r\n";
                $output .= $send['Output'];
            }
            // socket_write() returns the number of bytes written, so summing
            // its return value per client gives a rough per-client bandwidth figure
            socket_write($client, $output);
            socket_close($client);
        }
        socket_close($httpsock);

    Hello, I have created this code to work as a mini web server / downloader: the client can request files and download them, and I finally got it working. What I would like to know is whether the web server can show how much bandwidth each user's requests consume, via the sockets. Perl has the same option, but it is more hardcore than PHP and I don't understand much about Perl; I have even seen a mini web server that can show how much each client pulls from the server. Would it be possible for you to assist me with this code? I'd much appreciate it, thank you.

    Read the article

  • PHP - cURL Download Bandwidth Limiting

    - by snikolov
    Hey guys, I have been trying to limit the download bandwidth with PHP's cURL, but I can't get the download rate to be limited. Can you please help here? Thanks a million!

        function total_filesize($url) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, "$url");
            curl_setopt($ch, CURLINFO_SPEED_DOWNLOAD, 12); // ITS NOT WORKING!
            curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.11) " .
                "Gecko/20071127 Firefox/2.0.0.11");
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
            curl_setopt($ch, CURLOPT_HEADER, false);
            curl_setopt($ch, CURLOPT_NOBODY, true);
            $chStore = curl_exec($ch);
            $chError = curl_error($ch);
            $chInfo = curl_getinfo($ch);
            curl_close($ch);
            return $size = $chInfo['download_content_length'];
        }

        function __define_url($url) {
            $basename = basename($url);
            define('filename', $basename);
            $define_file_size = total_filesize($url);
            define('filesizes', $define_file_size);
        }

        function _download_file($url_file) {
            __define_url($url_file);
            // $range = "50000-60000";
            $filesize = filesizes;
            $file = filename;
            header('Content-Type: application/octet-stream');
            header('Content-Disposition: attachment; filename="' . $file . '"');
            header('Content-Transfer-Encoding: binary');
            header("Content-Length: $filesize");
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, "$url_file");
            curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
            // curl_setopt($ch, CURLOPT_RANGE, $range);
            curl_exec($ch);
            curl_close($ch);
        }

        _download_file('http://rarlabs.com/rar/wrar393.exe');
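
    A note on the line marked as not working: CURLINFO_SPEED_DOWNLOAD is a curl_getinfo() statistic, not a settable option. The libcurl option that actually caps transfer speed is CURLOPT_MAX_RECV_SPEED_LARGE (bytes per second), in sufficiently recent PHP/libcurl builds. Here is the same option sketched through pycurl, which exposes it under the same name:

        # Cap a download at ~50 KiB/s via libcurl's MAX_RECV_SPEED_LARGE.
        import pycurl

        c = pycurl.Curl()
        c.setopt(pycurl.URL, "http://rarlabs.com/rar/wrar393.exe")
        c.setopt(pycurl.MAX_RECV_SPEED_LARGE, 50 * 1024)  # bytes per second
        with open("wrar393.exe", "wb") as f:
            c.setopt(pycurl.WRITEDATA, f)
            c.perform()
        c.close()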

    Read the article

  • Maximum page fetch with maximum bandwidth

    - by Ehsan
    Hi. I want to create an application like a spider. I've implemented page fetching as in the following code, in a multi-threaded application, but there are two problems. 1) I want to use my maximum bandwidth to send/receive requests; how should I configure my requests to do so (like download-accelerator applications and the like)? I've heard that a normal application will use only 66% of the available bandwidth. 2) I don't know what exactly HttpWebRequest.KeepAlive does, but as its name implies, I think I can create a connection to a website and, without closing the connection, send another request to that website using the existing connection. Does it boost performance, or am I wrong?

        public PageFetchResult Fetch()
        {
            PageFetchResult fetchResult = new PageFetchResult();
            try
            {
                HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
                Uri requestedURI = new Uri(URLAddress);
                Uri responseURI = resp.ResponseUri;
                string resultHTML = "";
                byte[] reqHTML = ResponseAsBytes(resp);
                if (!string.IsNullOrEmpty(FetchingEncoding))
                    resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                else if (!string.IsNullOrEmpty(resp.CharacterSet))
                    resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);
                req.Abort();
                resp.Close();
                fetchResult.IsOK = true;
                fetchResult.ResultHTML = resultHTML;
            }
            catch (Exception ex)
            {
                fetchResult.IsOK = false;
                fetchResult.ErrorMessage = ex.Message;
            }
            return fetchResult;
        }
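
    On question 2: KeepAlive requests a persistent connection, so subsequent requests to the same host can reuse the TCP connection instead of repeating the handshake. The same idea in Python, where a pooled session plays the role of HttpWebRequest.KeepAlive (the URLs are illustrative):

        # Three requests reusing one pooled (keep-alive) connection per host.
        import requests

        with requests.Session() as session:
            for path in ("/a", "/b", "/c"):
                r = session.get("https://example.com" + path)
                print(path, r.status_code)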

    Read the article

  • Maximum page fetch with maximum bandwidth

    - by Ehsan
    Hi. I want to create an application like a spider. I've implemented page fetching as in the following code, in a multi-threaded application, but there are two problems. 1) I want to use my maximum bandwidth to send/receive requests; how should I configure my requests to do so (like download-accelerator applications and the like)? I've heard that a normal application will use only 66% of the available bandwidth. 2) I don't know what exactly HttpWebRequest.KeepAlive does, but as its name implies, I think I can create a connection to a website and, without closing the connection, send another request to that website using the existing connection. Does it boost performance, or am I wrong?

        public PageFetchResult Fetch()
        {
            PageFetchResult fetchResult = new PageFetchResult();
            try
            {
                HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
                Uri requestedURI = new Uri(URLAddress);
                Uri responseURI = resp.ResponseUri;
                if (Uri.Equals(requestedURI, responseURI))
                {
                    string resultHTML = "";
                    byte[] reqHTML = ResponseAsBytes(resp);
                    if (!string.IsNullOrEmpty(FetchingEncoding))
                        resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                    else if (!string.IsNullOrEmpty(resp.CharacterSet))
                        resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);
                    req.Abort();
                    resp.Close();
                    fetchResult.IsOK = true;
                    fetchResult.ResultHTML = resultHTML;
                }
                else
                {
                    URLAddress = responseURI.AbsoluteUri;
                    relayPageCount++;
                    if (relayPageCount > 5)
                    {
                        fetchResult.IsOK = false;
                        fetchResult.ErrorMessage = "Maximum page redirection occurred.";
                        relayPageCount = 0;
                        return fetchResult;
                    }
                    req.Abort();
                    resp.Close();
                    return Fetch();
                }
            }
            catch (Exception ex)
            {
                fetchResult.IsOK = false;
                fetchResult.ErrorMessage = ex.Message;
            }
            return fetchResult;
        }

    Read the article

  • How to throttle login attempts in a Java webapp?

    - by Jörn Zaefferer
    I want to implement an efficient mechanism to throttle login attempts in my Java web application, to prevent brute-force attacks on user accounts. Jeff explained the why, but not the how. Simon Willison showed an implementation in Python for Django; that doesn't really help me along, as I can't use memcached or Django, and porting his ideas from scratch doesn't seem like a great idea either - I don't want to reinvent the wheel. I found one Java implementation, though it seems rather naive: instead of an LRU cache, it just clears all entries after 15 minutes. EHCache could be an alternative to memcached, but I don't have any experience with it and don't really want to introduce yet another technology if there are better alternatives for this task. So, what's a good way to implement login throttling in Java?
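
    Independent of the cache library, the core logic is small: keep a per-account window of recent failures and refuse attempts once a threshold is reached; a Java version could keep the same bookkeeping in EHCache or a size-bounded LinkedHashMap. A sketch of that logic in Python (the thresholds are illustrative):

        # Sliding-window login throttle: block an account after too many recent failures.
        import time
        from collections import defaultdict, deque

        WINDOW_SECS = 15 * 60  # consider failures from the last 15 minutes
        MAX_FAILURES = 5       # illustrative threshold

        failures = defaultdict(deque)  # account name -> recent failure timestamps

        def allowed(account):
            q = failures[account]
            now = time.time()
            while q and now - q[0] > WINDOW_SECS:  # expire entries outside the window
                q.popleft()
            return len(q) < MAX_FAILURES

        def record_failure(account):
            failures[account].append(time.time())
            # a production version would also bound the map itself (LRU eviction)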

    Read the article

  • Low CPU performance with low usage and clock - Windows 8.1

    - by Daniele
    I recently deleted everything from my PC and reinstalled Windows 8.1 from scratch. When I first booted into Windows, everything was extremely slow even though the CPU usage was very low (about 1%). After installing some drivers the problem seemed to be solved and I was able to use my PC normally. Today I installed a game and noticed a strange behavior: the game was playable, but the performance worsened more and more over time. This is the situation BEFORE opening the game (normal), and this is AFTER some minutes inside the game (low CPU usage and clock). Some information about my system:

        PC: Sony Vaio S13 (SVS13A1C5E)
        OS: Windows 8.1
        CPU: Intel Core i7-3520M 2.90GHz
        GPU(1): Intel HD Graphics 4000
        GPU(2): NVIDIA GeForce GT 640M LE

    I tried searching for new drivers and other solutions, but nothing worked and I don't know what the cause is. I did not check the temperatures at first, but the fans are not running fast and the PC does not look overheated. Update: max CPU temp 66°C, max GPU temp 61°C. The strange thing is that the GPU load is 99% (GPU-Z) and the fan is almost silent. Update 2: I had trouble with the Sony Vaio software; I can't get the FN keys and the STAMINA/SPEED switch to work (it is a physical switch to enable/disable the Nvidia card and change the power profile). I'm mentioning this because I remember that before reinstalling Windows there was an option in the Vaio Control Center (it is not there anymore) that allowed me to choose something like "priority to performance (ventilation)" or "priority to silence". The current behavior looks like "priority to silence", but I can't get the STAMINA/SPEED switch to work and so I don't see similar options in the Vaio Control Center. I don't know if the problem is related to this.

    Read the article

  • Ntop monitoring - Hosts visible with no SPAN/mirroring

    - by Cory J
    I am attempting to use ntop to monitor traffic over a Cisco Catalyst switch. I was assuming that in order to see any of the traffic, I'd have to use a monitor (SPAN) session, as described here: http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a008015c612.shtml. However, before I did anything on the switch, I simply plugged my ntop server in and fired up ntop. To my surprise, I instantly saw 3+ pages of hosts and thousands of packets. How is ntop seeing this? I have verified that no monitoring exists on the switch (run in enable mode):

        cs1.pvdc#show monitor
        No SPAN configuration is present in the system.

    My ntop server is Ubuntu 8.04, and I haven't done ANY configuration; I just installed the ntop package on a fresh Ubuntu install. Is there anything else on my switch besides "monitor" that might cause it to mirror all its traffic like this? I've tried plugging ntop into different ports with the same results. UPDATE: It appears to be more than just broadcast traffic showing up in ntop; for example, I can see when my IPs have talked to the DNS server or generated HTTP traffic. If my switch is misconfigured, can anyone point me in the right direction toward rectifying this? Not a Cisco expert.

    Read the article

  • Throughput tool with decent graphing

    - by Cory J
    I've been looking through some of the tools available for measuring network throughput, namely iperf, bwping, ttcp, etc. I am planning on doing throughput tests over a long period of time, so what I really need is good graphing output, preferably RRD graphs. The JPerf frontend for iperf will generate a graph, and bmon has a nice command-line graph, but these simply count seconds since the test was started. I am trying to measure trends in throughput over times of the day, so a graph with times and days is necessary. A way to get iperf to log to RRDs would be best; if this isn't possible, could someone point me toward another solution?
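
    One low-tech route to time-stamped data: iperf 2 has a CSV reporting mode (-y C) whose rows carry a timestamp and the measured bits/sec, easy to append to a log and graph later with rrdtool or gnuplot. A sketch (assumes iperf 2 on the client and a reachable iperf server):

        # Run iperf every 5 minutes and append its CSV row to a log file.
        import subprocess
        import time

        while True:
            out = subprocess.run(["iperf", "-c", "server.example", "-y", "C"],
                                 capture_output=True, text=True).stdout.strip()
            with open("throughput.csv", "a") as log:
                log.write(out + "\n")  # the last CSV field is bits per second
            time.sleep(300)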

    Read the article

  • CentOS - massive usage on loopback interface

    - by Matthew Iselin
    Hi, I have a CentOS installation which is running fairly smoothly. Today I ran ifconfig, mainly to see what sort of usage has been coming across the ethernet interface and to check my link speed. This is what I ended up seeing for the loopback device:

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:10301085132061223274 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:13981054163812689233 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:11783901785008000095 (0.6 EiB)  TX bytes:10333501021200548281 (0.9 EiB)

    This just feels completely wrong - almost an EiB of data? Any assistance in tracking down the source of these statistics would be greatly appreciated.

    Read the article

  • What is a usable throughput for a home media server

    - by Craig
    I am looking to set up a home server that will act as a media server. This will include both video (possibly HD) and audio. The clients will be a fun mix of hardware, but that is a different question. What I want to know is: what is the minimum throughput for streaming video without hitches? Is there a "sweet spot" for throughput (price vs. throughput)? I am determining my budget for this "upgrade" and need to evaluate whether or not upgrading to a 1 Gbps home LAN is required. Sure, it would be sweet and would easily handle the traffic, but I don't want to do it unless it is necessary.
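
    As a back-of-the-envelope check (the bitrates here are assumptions, not measurements): MP3 audio tops out around 0.32 Mbit/s, SD video sits at a few Mbit/s, and a 1080p HD stream runs very roughly 8-20 Mbit/s depending on codec, so even 100 Mbit/s carries a few HD streams while gigabit leaves plenty of headroom:

        # Rough capacity estimate: concurrent streams per link speed (assumed bitrates).
        streams = {"MP3 audio": 0.32, "SD video": 4.0, "1080p HD video": 18.0}  # Mbit/s
        for link_mbps in (100, 1000):
            budget = link_mbps * 0.6  # assume ~60% of nominal speed is usable
            for name, rate in streams.items():
                print(f"{link_mbps} Mbit/s link: ~{int(budget // rate)} x {name}")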

    Read the article

  • What firewall Linux distro/appliance could track internet usage per device in my home?

    - by GregH
    Hello. Does anyone know of a community-edition/open-source/free firewall/gateway software product that I could install onto an old PC to act as my firewall/gateway/proxy etc., but which has the power to track internet usage per device in my home? So:

        a) Mandatory - Track internet usage for devices on my home network on a per-device basis (e.g. various PCs/Xbox etc.)
        b) Mandatory - A report/graph that would give a breakdown of internet usage, per device (e.g. IP address), per day
        c) Desirable - As in b) above, but per hour
        d) Desirable - A realtime graph (e.g. 5-minute measurement intervals or something) that shows current internet usage per device
        e) Mandatory - Handles all internal-to-internet requests for all protocols (e.g. HTTP, HTTPS, Xbox etc.)
        f) Mandatory - No explicit settings required in clients, i.e. a transparent monitoring concept (for both HTTP and non-HTTP traffic like Xbox, Skype etc.)
        g) Mandatory - Easy "appliance"-like installation onto a dedicated low-spec PC

    Thanks in advance.

    Read the article

  • Why does a wired network slow down with user increase?

    - by Ed Briscoe
    Suppose you had a wired network: 20 PCs hooked up to a 100 Mbit/s switch (the same speed as their onboard Ethernet ports), just sending some test data around. What is the technical explanation for why 20 machines sending test data around this network to each other is slower than a one-to-one transfer? I know a busy network means it's slower, but I'm really trying to understand some more technical details. Thanks for any help.

    Read the article

  • Limit bandwidth on a network of computers

    - by Joseph34123
    We have an office network of approximately 10 computers. Sometimes, when someone downloads something for work (e.g. syncing email with an attachment, or Dropbox), everything slows down for everyone. I could limit each computer in the office, but the problem is that we also have an open WiFi network, and I cannot access the computers that use it. We have one main DSL router, a Netopia 3347, and another router connected to it by wire for the public WiFi, a Linksys WRT110. I cannot change the setup we have and don't want to. There are two approaches here: 1) Set a download limit on each office computer, and then find out how to set a download limit for the WiFi network, such as in the router settings; it's a Linksys router and I did not see the option for this, so maybe I need a new router for WiFi? 2) Don't bother with each computer; instead, put some "specific hardware" or another computer in front of the router so all traffic goes through it, and then assign a maximum speed to each IP address. The question is: what hardware do I need?

    Read the article

  • Rate limiting an internet connection per user

    - by Alister
    I've got a friend who has a "rent-by-room" property and includes internet access as part of this. However, some tenants are somewhat hogging the internet (i.e. constantly downloading). I was wondering if anyone knows of a fairly easy way of rate limiting each connection to make the system more equitable. A preferred solution would be a cheap piece of hardware or some sort of Linux "appliance". I would rather not have to get an iptables headache if it is avoidable.

    Read the article

  • Implications of using many USB web cameras

    - by Martin
    I'm looking into connecting multiple low-resolution USB webcams to a single computer. What implications might this have on performance? How would, for example, four 320x240 cameras fare against a single 640x480 camera? I'm not well versed in the architecture of the USB interface; what are the performance caveats? By performance I mean how it would affect the time to read the image data from multiple cameras compared to a single one.
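
    On the raw numbers the two setups are equal: four 320x240 frames contain exactly as many pixels as one 640x480 frame, so the uncompressed data rate is identical; the practical difference is bus scheduling, since isochronous USB transfers reserve bandwidth per device. A quick check (the frame rate and pixel format are assumptions):

        # Raw data rate at 30 fps with 2 bytes/pixel (YUY2 assumed).
        fps, bytes_per_px = 30, 2
        four_small = 4 * 320 * 240 * bytes_per_px * fps
        one_large = 640 * 480 * bytes_per_px * fps
        print(four_small, one_large)  # both 18432000 bytes/s (~18.4 MB/s)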

    Read the article
