Search Results

Search found 1822 results on 73 pages for 'bandwidth caps'.

Page 44/73 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • Decreasing lag on the router while gaming

    - by user2699451
    I had no idea where to post this question and get a professional answer, but here goes. I assume everyone reading this has played online. I was playing LoL again tonight when my brother decided it was a great time to go on YouTube and start watching a movie, so my ping (connecting from South Africa to the EU West server), normally around 190-220, started spiking to 2000 and averaging 600-800. That raises the question: how on earth can I "kick" him off for the time being? I tried reasoning it out with him, but it's like playing chess with a pigeon; he's studying to be an engineer and I just can't win an argument with him, so I need to step it up a level. I have used the aireplay method of sending deauth packets in the past, but it only helped so much. Is there another way to kick a peer off the local wifi, reduce the lag spikes during a session, or split the bandwidth equally into 2 or 3 shares? What do I do? P.S. Sorry if this is off topic; if it isn't appropriate here, just tell me which site can help.
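
    If the wifi router is (or can be fronted by) a Linux box acting as the gateway, the "split the bandwidth" idea can be sketched with tc's HTB queueing discipline. This is only an illustration under that assumption; the interface name, the 4 Mbit figure and the IP addresses below are placeholders, not values from the question.

        # sketch: divide a 4 Mbit downlink between two LAN hosts by shaping egress on the
        # gateway's LAN-facing interface (eth1); addresses and rates are placeholders
        tc qdisc add dev eth1 root handle 1: htb default 30
        tc class add dev eth1 parent 1:  classid 1:1  htb rate 4mbit
        tc class add dev eth1 parent 1:1 classid 1:10 htb rate 2mbit ceil 4mbit    # gaming PC
        tc class add dev eth1 parent 1:1 classid 1:20 htb rate 2mbit ceil 4mbit    # the YouTube watcher
        tc class add dev eth1 parent 1:1 classid 1:30 htb rate 512kbit ceil 4mbit  # everything else
        tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match ip dst 192.168.1.10/32 flowid 1:10
        tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match ip dst 192.168.1.11/32 flowid 1:20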

    Read the article

  • Connecting/adding a private network on Windows Server 2008

    - by WhyMe
    Hey all, I have a dual-server setup on a VPS host. My host provider told me that in order to use free bandwidth between my two servers (they are in the same location) I need to add an alias "subnet" on a specific IP (a private network, VPN). How do I add an aliased IP in Windows? In Linux the relevant command is supposed to be (going by my searches through blogs) "ifconfig eth0:1 10.129.175.165 netmask 255.255.255.0". They also said that another way to connect the servers (which should also be faster) is to use a "private LAN", but as it happens I don't know how to define one. Is there a Windows equivalent, or another way to do this? I have checked my IP configuration and found no sign of the private LAN or the VPN IP.
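
    For reference, a sketch of the Windows Server 2008 counterpart of that ifconfig alias is netsh (the same can also be done in the GUI under the adapter's Advanced TCP/IP Settings). The interface name below is an assumption and will likely differ on the VPS:

        rem check the real interface name first: netsh interface ipv4 show interfaces
        netsh interface ipv4 add address "Local Area Connection" 10.129.175.165 255.255.255.0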

    Read the article

  • rdesktop + seamlessrdp + virtualbox slow on Windows

    - by Claudiu
    I'm trying to get VirtualBox seamless mode to work on all of my monitors at once. I was directed to this link, so I followed the instructions, using the Windows port of rdesktop 1.6 found here and Xming as an X server. I eventually got it to work! However, it's really slow. While VirtualBox's regular seamless mode performs as if I were running apps on my host machine, with rdesktop it takes 1-2 seconds to register any user input. Dragging windows around is laggy and buggy (pieces of the desktop behind the window show through). I'm simply connecting to localhost, so network bandwidth/latency shouldn't be an issue. Anyone have any idea why it's so slow and what I can do to make it faster?
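
    One hedged thing to try is dialling down the features rdesktop negotiates; these switches exist in rdesktop 1.6, but whether they help with the Windows port plus Xming is untested here (add back whatever seamless options you already pass):

        # sketch: RDP compression, persistent bitmap caching, 16-bit colour, "modem" experience (no wallpaper/themes)
        rdesktop -z -P -a 16 -x m localhost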

    Read the article

  • Will I have internet connection issues the next day if I unplug the router at night?

    - by headskracher77
    I did this regularly a few years ago and used to feel like I was being 'punished' by the Internet service provider for disabling access to my computer, because trying to reconnect the next day became a constant pain. I have the same Linksys router, a Comcast modem, and hi-speed broadband through their LAN. Question: who or what is at fault for lousy, slow, or dropped connections? (Everybody's tech department blames everybody else.) The router, which is about ten years old and maybe obsolete? The modem, which came with the service plan and can connect three devices on a shared connection? The ISP? I read they not only control and completely regulate bandwidth usage, but also ration it (true?). So can I safely 'pull the plug' each night for security or not? thnx

    Read the article

  • Network and log monitoring and vulnerability scanning

    - by user137799
    I am trying to find out if there is any application or service on UNIX that will:
    1. Monitor network interfaces for bandwidth usage.
    2. Send out an e-mail when network flaps occur.
    3. Send out alerts when duplicate MAC addresses or a loop occurs on the network.
    4. Do a network vulnerability scan and be able to detect the uTorrent application on our network.
    I also need to know which Linux distribution will best support such an application (a small sketch for item 3 follows below). Thanks
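
    As a sketch of item 3 only (the duplicate-MAC alert), something like the following cron job works on most distributions; arp-scan, the mail command and the recipient address are assumptions:

        #!/bin/bash
        # sketch: mail an alert when arp-scan sees duplicate ARP replies (flagged "DUP:") on the local segment
        dups=$(arp-scan --localnet | grep 'DUP:')
        if [ -n "$dups" ]; then
            echo "$dups" | mail -s "Duplicate ARP replies on the LAN" admin@example.com
        fi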

    Read the article

  • Process for migrating Dropbox to SpiderOak

    - by Marcel Janus
    I want to move my data from Dropbox to SpiderOak. I have 3 computers running Dropbox, but a poor WAN connection with very limited upload bandwidth. So my thought is, as a first step, to install the Dropbox client on my Internet-facing server and download my data from Dropbox there. After that I would upload/back up my data from this server, over its broadband connection, to SpiderOak. Once the backup is complete I would set up the sync between my 3 computers so that they will not have to upload the data again. Will this process work, so that I don't have to upload my data again over my WAN connection at home?

    Read the article

  • Improve performance on Lync desktop sharing

    - by Trikks
    I'm using Lync 2010 Server to handle some clients' communication and screen sharing. The biggest issue is the performance of screen sharing: the quality is rather high but the frame rate is very poor. I have been reading and searching a lot on the subject, and 95% of all topics are about bandwidth; we have a 200/200 Mbit Internet connection solely for this application, and my test machines run on an internal gigabit LAN. The speeds between all boxes are hysterically fast. The next step was to ensure that there were profiles for different bandwidths, so I registered a couple:

        New-CsNetworkBandwidthPolicyProfile -Identity 50Mb_Link -Description "BW profile for 50Mb links" -AudioBWLimit 20000 -AudioBWSessionLimit 200 -VideoBWLimit 14000 -VideoBWSessionLimit 700
        New-CsNetworkBandwidthPolicyProfile -Identity 100Mb_Link -Description "BW profile for 100Mb links" -AudioBWLimit 30000 -AudioBWSessionLimit 300 -VideoBWLimit 25000 -VideoBWSessionLimit 1500

    Nothing fancy happened there either. None of the test boxes has anything from Norton installed, and none of them (nor the Lync server) is running a firewall; all fences are down in this environment just for the testing. Is there anything I may have missed to improve the quality of this? Thanks

    Read the article

  • Poor quality when trying to stream a 720p video to XBox 360 using Media Center Extender

    - by MBraedley
    I have my XBox 360 set up as a Media Center Extender for my Windows 7 desktop. SD-quality avi videos stream fine to my XBox, either through the video library or through Media Center Extender, but when I try a 720p mkv file the frame rate plummets and the A/V sync is completely lost. I don't want to transcode or switch container formats (mkv isn't supported by the 360); I still want to stream. Both my desktop and the 360 are plugged into the same gigabit switch, which is plugged into my ISP-supplied modem/router. The video plays fine on my machine in a number of programs. Considering that I should have more than enough bandwidth to accommodate this video, why won't it play back properly?

    Read the article

  • How do I negotiate for colo space?

    - by randy melder
    I guess this isn't a technical question, but it definitely is something IT teams deal with, so here goes: I'm looking at getting a rack at a local colocation facility, and I'm weighing the options versus building out on a cloud platform. Our bandwidth and power needs are REALLY low: six hosts for the whole operation, and you can assume we use <= 10 amps of power and <= 2 Mbps at the 95th percentile. Do you have any advice for getting the best deal?

    Read the article

  • Logfiles filling with iptables logging

    - by Peter I
    OS: Debian 6 (server). I have several logfiles which are filling up:

        user@server:/var/log$ ls -lahS | head
        total 427G
        -rw-r--r-- 1 root root 267G Nov 2 17:29 bandwidth
        -rw-r----- 1 root adm   44G Nov 2 17:29 kern.log
        -rw-r----- 1 root adm   27G Nov 2 17:29 debug
        -rw-r----- 1 root adm   23G Oct 27 06:33 kern.log.1
        -rw-r----- 1 root adm   17G Nov 2 17:29 messages
        -rw-r----- 1 root adm   14G Oct 27 06:33 debug.1
        -rw-r----- 1 root adm   12G Nov 2 17:29 syslog
        -rw-r----- 1 root adm   12G Nov 1 06:26 syslog.1
        -rw-r----- 1 root adm  9.0G Oct 27 06:33 messages.1

    So I looked in /etc/iptables.up.rules, which contains these lines:

        -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:
        -A OUTPUT  -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A INPUT   -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:

    Deleting those lines would solve my problem, but how can I change them without losing their functionality?
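
    One hedged alternative, assuming the "bandwidth" log exists only so something can total up bytes per direction: drop the LOG targets and let iptables keep the counts itself, reading them out of the rule counters instead of syslog (whatever currently parses the logfile would have to read these counters instead). A sketch:

        # count traffic on dedicated chains instead of logging every packet
        iptables -N BANDWIDTH_IN
        iptables -N BANDWIDTH_OUT
        iptables -I INPUT   -i eth0 -j BANDWIDTH_IN
        iptables -I FORWARD -i eth0 -j BANDWIDTH_IN
        iptables -I OUTPUT  -o eth0 -j BANDWIDTH_OUT
        iptables -I FORWARD -o eth0 -j BANDWIDTH_OUT
        # the packet/byte totals accumulate on the jump rules; read them without touching syslog
        iptables -nvxL INPUT
        iptables -nvxL OUTPUT
        iptables -nvxL FORWARD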

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered on existing questions Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP download. How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.

    Read the article

  • Monitor a log file on Linux and send each line to another program

    - by mlambie
    I run an apt-cacher-ng server on Ubuntu Linux which writes logs in the following format:

        1299745593|O|149406|XXX.XXX.XXX.XXX|uburep/pool/main/t/tiff/libtiff4_3.9.2-2ubuntu0.4_amd64.deb
        1299745593|O|10154976|XXX.XXX.XXX.XXX|uburep/pool/main/l/linux-firmware/linux-firmware_1.34.4_all.deb
        1299748529|O|39368|XXX.XXX.XXX.XXX|uburep/pool/main/n/nagios-nrpe/nagios-nrpe-server_2.12-4ubuntu1_amd64.deb
        1300155440|O|680100|XXX.XXX.XXX.XXX|uburep/pool/main/t/tzdata/tzdata_2011c-0ubuntu0.10.04_all.deb

    It shows the timestamp, direction (in or out), byte count, IP and filename. Every time a line is written to it, I'd like to also send that line to another program. I will have this program insert the line into a database so that I can crunch some statistics about how much bandwidth we're saving through operating a caching server. I do not want to cat the log file every X minutes (via cron) looking for new entries, as it'd be somewhat computationally uneconomical. Instead I'd prefer to have a daemon monitor the log, and when a change is detected, each new line is sent to my database-insertion script. Will swatch achieve this, or are there better options?
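
    A minimal sketch of the tail-and-pipe approach (swatch can do this too, but plain tail -F already delivers one line at a time and survives log rotation); the log path and insert_line.py are assumed names:

        # follow the log and hand every new line to the database-insertion script
        tail -n 0 -F /var/log/apt-cacher-ng/apt-cacher.log | while IFS= read -r line; do
            /usr/local/bin/insert_line.py "$line"
        done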

    Read the article

  • Offsite AND incremental backup

    - by Pyrolistical
    I already back up my main computer to my server computer using SyncToy, but now I also want an off-site backup. My idea so far:

    1. Keep the source hard drive (we'll call it S) at home.
    2. Keep a backup hard drive at work, called B.
    3. Use a transport hard drive, called T.
    4. Connect T at work and record an index of the files on B.
    5. Take T home, check the index against S, note new/changed/deleted files, and copy the changed files to T.
    6. Take T to work and update B.
    7. Repeat.

    It's basically a sneakernet, with all of the advantages of one: huge bandwidth, even if the latency is awful. Is there some software to do this, or do I have to write it myself?
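
    rsync's batch mode (see BATCH MODE in the rsync man page) was built for exactly this sneakernet pattern; the sketch below assumes a directory at home that mirrors B's last-known state, which replaces the hand-kept index, and placeholder paths throughout:

        # at home: diff S against the local mirror of B, writing the delta to the transport drive T
        # (--write-batch also brings the mirror up to date, ready for the next run)
        rsync -a --write-batch=/mnt/T/changes /data/S/ /data/mirror-of-B/
        # at work: replay the recorded delta onto the real backup drive B
        rsync -a --read-batch=/mnt/T/changes /data/B/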

    Read the article

  • Is there a way to tell if a file is done copying?

    - by Mike Cooper
    The scenario is this: Machine A has files I want to copy to Machine C. Machine A can't reach C directly, but it can reach Machine B, which can reach Machine C. I am using scp to copy from Machine A to B, and then from B to C. Machine B has limited storage space, so as files come in I need to copy them to C and delete them from B. The second copy is much faster, so bandwidth is not a problem. I could do this by hand, but I am lazy. What I would like is to run a script on B or C that copies each file to C as it finishes. The scp job is running from A. So what I need is a way to ask (preferably from a bash script) whether file X.avi is "done" copying. Each of these files is a different size, and I can't really predict size or time of completion. Edit: by the way, the transfer times are roughly 1 hour from A to B and about 10 minutes from B to C, if the time scale matters at all.
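
    One lazy-friendly sketch for Machine B: treat a file as finished once no process (the sshd/scp receiving it) still holds it open, then forward it and free the space. The paths, the host name and the 60-second poll are placeholders, and the "nothing has it open" test is a heuristic, not a guarantee:

        # forward each fully-received file from B to C, then delete it on B
        while true; do
            for f in /incoming/*; do
                [ -f "$f" ] || continue
                if ! lsof "$f" > /dev/null 2>&1; then   # no writer left, assume the copy finished
                    scp -q "$f" user@machineC:/incoming/ && rm -- "$f"
                fi
            done
            sleep 60
        done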

    Read the article

  • Increase the compression performance of VPN

    - by Martin
    I am currently switching from a system of HPN-SSH tunnels with compression enabled to something VPN-based. I have tried tinc and n2n so far; hamachi requires a library I do not have. In my primitive benchmarks I am not satisfied with the achievable bandwidth compared to the SSH tunnels. In tinc the low LZO setting performed best, but compression is only available in UDP mode. Ideally I would like a TCP-based VPN with multi-threaded compression. Can you suggest some ways to increase the performance? Would it be possible to somehow put a compression filter in front of the tun interface? Or are there any VPN implementations that might be better suited to my needs (fast compression, TCP-based, switch mode, does not have to be super-secure)? I would consider tunnelling Ethernet over SSH, but according to some articles it is not advisable.
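
    Not one of the implementations the asker lists, but as a hedged point of comparison: OpenVPN offers a TCP transport, a layer-2 (tap) mode comparable to tinc's switch mode, and LZO compression, all in a few lines of config. A minimal static-key sketch of the server side (the client mirrors it with proto tcp-client and the server's address); note it is single-threaded, so it does not satisfy the multi-threaded-compression wish:

        # /etc/openvpn/compressed-tcp.conf -- sketch only, not hardened
        proto tcp-server
        port 1194
        dev tap0                           # layer-2, like tinc's switch mode
        secret /etc/openvpn/static.key
        ifconfig 10.9.0.1 255.255.255.0
        comp-lzo yes                       # LZO compression on the tunnel
        keepalive 10 60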

    Read the article

  • I need a few minutes of a dedicated server a week, not for hosting, just to convert to ogg etc.

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, which is to run an instance of SoX to convert about 30 mp3s to ogg files, in various directories, a couple of times a week, automatically, in response to the detection of an mp3 upload. I'm probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do is: wget/ftp the mp3 files, convert them to ogg, and ftp the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any mp3-to-ogg converter) for CentOS that I could upload without needing root access, but I've given up asking for that one. Always open to suggestions, though!
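
    The conversion step itself is a one-liner per file, assuming a SoX build with MP3 read and Vorbis write support; a sketch of the loop the rented server (or any box that can see the files) would run:

        # convert any mp3 that doesn't have an ogg counterpart yet
        find /path/to/files -name '*.mp3' -print0 | while IFS= read -r -d '' f; do
            [ -e "${f%.mp3}.ogg" ] || sox "$f" "${f%.mp3}.ogg"
        done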

    Read the article

  • How to defend against botnet HTTP requests

    - by Killercode
    I have a server with WHM + cPanel, and 5 of my customers got infected with Zbot. This means the domains they host are constantly receiving requests to certain destinations. I tried to use mod_security, but it seems it can't filter every request, and I don't really know why. I still see the connections coming in in the access log, and they're consuming a LOT of bandwidth and server load. Those accounts have already been cleaned, so all of those requests now go to a 404 error (for the ones caught by mod_security I drop the connection). Are there any more ways to defend against these requests?
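
    Setting mod_security aside, one hedged first line of defence is to cap per-source connection counts and rates before Apache ever sees them. The thresholds below are placeholders to tune, and legitimate users behind large NATs or proxies can trip them:

        # sketch: per-source limits on port 80
        iptables -A INPUT -p tcp --dport 80 -m connlimit --connlimit-above 20 -j DROP
        iptables -A INPUT -p tcp --dport 80 --syn -m recent --name http --update --seconds 30 --hitcount 20 -j DROP
        iptables -A INPUT -p tcp --dport 80 --syn -m recent --name http --set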

    Read the article

  • How full is too full for mechanical hard drives?

    - by Sunny Molini
    I have heard many people claim that it doesn't matter how full a drive is until it starts cutting into temp and virtual memory space. This doesn't make sense to me, given how data is transacted on a hard drive. The inside of the platter presents less data per revolution than the outside of the drive does, by a significant factor. The inside 40% of the radius of a full-size hard drive is used for the spindle, so only the outside 60% is used for data storage, but that still means the innermost track of a hard drive presents data 60% slower than the outermost track. By my calculation, a hard drive that is only 10% full should perform about 2.25 times faster than one that is 90% full, assuming the flow isn't constrained by other factors. Am I wildly off base here? For all the drives I know of, even the top speed of the first 1% of the drive would be well within the bandwidth provided by a SATA 2 connection.

    Read the article

  • Flash Media Server slow over SSL

    - by Antilogic
    We are using FMS to host a VoD site. We host FMS internally (we do not use a CDN). We recently installed an SSL certificate to alleviate connection issues for clients (their networks either block or don't support RTMP); however, we're noticing that when streaming over RTMPS, connections are drastically slower (on the order of Mbps). I know SSL causes some amount of overhead, but both client and server show almost no signs of exertion. Speedtest.net and a locally hosted speed test confirm that bandwidth is not an issue. I'm really not a network guru, so I'm at a loss as to where to check next. Do any of you have an idea why streaming media would run so slowly over SSL?

    Read the article

  • Thunderbird + Gmail has to send emails twice

    - by Mohammad
    I've configured Thunderbird to place a copy of my sent emails in the remote "Sent" folder of my Gmail account, as opposed to the local Thunderbird one. This ensures I can take full advantage of my IMAP synchronization. So whenever I send an email, it first sends one copy to the address list and then uploads a second one to my Sent folder; with large attachments this seems like a waste of time and bandwidth. Do you know of any extension, or a combination of a trick plus a Gmail filter, that could automate this in one step?

    Read the article

  • Where would an S3 upload speed cap originate?

    - by CoreyH
    I do a ton of uploading to S3 and am experiencing capped speeds and I can't quite figure out how to address it. The setup: Windows Server 2008 R2 x64, external HD, using a Java based upload tool called Jsh3ll and custom VBS scripts to kick the jobs off. Running one process at a time, I am always limited to about 4mbps. I have FiOS at 35/35mbps speeds, so it isn't an outright limit. AND, I can run parallel instances and can go all the way up to 35mbps, so I know the problem isn't gateway/nic/machine/amazon related. Running parallel instances works to a degree as a solution, but increases the complexity of my workflow greatly. Solving this would make my life dramatically easier. When I was first doing this I was playing around with a bunch of Windows TCP parameters and was able to briefly get unconstrained bandwidth, but it wasn't repeatable. Thoughts?

    Read the article

  • 2 DSL lines...any benefit?

    - by EJB
    I have Verizon DSL in my office. I put DSL in about a year ago for $29.95/month. I added a new phone line recently and it was actually cheaper to get it bundled with DSL, so now I have two DSL lines; my plan was to shut the first one off when my 1-year contract comes up (in September). A couple of times DSL has gone out on one line, so I just used the other (I unplugged one line and plugged in the other), which is a nice redundancy to have, but it doesn't happen often. The question is, is there any way to use both DSL lines together so that 1) I can increase bandwidth and the effective speed might increase (is that possible?), or 2) have them both on and connected in some way so that traffic on my network would just use either one, and if one went down the traffic would reroute automatically? If I can either increase speed by having two, or at a minimum get some automatic redundancy, I'd keep both; otherwise I see no reason to. Thanks!

    Read the article

  • VPS hosting for a social network

    - by Jana
    Hi, I've developed a social network and I've been using shared hosting for it since it launched. With that, I wasn't able to send emails in bulk for things like newsletters and "invitations to join my site". Most importantly, most of the mails I send end up in users' spam folders. I'm planning to move to a VPS, as it may not have those limits. I'm wondering what the cheapest VPS host available is. I'm not very familiar with Linux commands, so I'm looking for cPanel to do the work for me. Will the following configuration suit a "new" social network like mine, which has little load?

        1000 MHz guaranteed
        512 MB guaranteed RAM
        20 GB (RAID) disk space
        1000 GB/month bandwidth
        2 IP(s) & 5 backups
        Semi-managed

    Thanks in advance

    Read the article

  • Running a VM off of an external HD via USB

    - by Nelson LaQuet
    Is it viable to run a VM (VMware running Windows 7) off of an external HD via USB, i.e. referencing the vmx/vhd directly from the mounted drive? I know it's possible; I guess I'm asking whether USB provides enough bandwidth for normal usage. If so, are there any particular brands that may be better or worse? I know that eSATA would be a more viable setup, but my laptop doesn't have an eSATA port. Currently I use the VM to segregate all of my work development servers and software from my main machine, so I will be running all development servers and tools on the VM directly.

    Read the article

  • Server goes offline. What to look for?

    - by Jonathan Sampson
    I'm using a new virtual server through GoDaddy, and this morning I received a call from the powers that be informing me our website was offline. After confirming this, I requested a power cycle through our GoDaddy control panel, and within a minute or two the server was back online. I made the call and reported that we were back up. Of course, a couple of minutes later we were down again. I tried connecting through PuTTY, and it takes forever to prompt me for a username, and each successive prompt takes a long time to come up. I'm using CentOS. So my questions are: How can I determine the cause? What types of things can I do to prevent this in the future? One interesting, and perhaps relevant, observation is that yesterday our bandwidth consumption was about 20% greater than our top figures from the past month.
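
    Once a shell finally opens, a few first-pass checks to narrow it down (a sketch, not a recipe; the Apache log path is an assumption since the web stack isn't named):

        # is the box starved for CPU, I/O or memory?
        uptime
        free -m
        vmstat 1 5
        # did the kernel kill anything or complain about the disk?
        dmesg | tail -n 50
        grep -i 'out of memory' /var/log/messages
        # what was the web server doing when it fell over? (path assumed)
        tail -n 100 /var/log/httpd/error_log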

    Read the article

< Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >