Search Results

Search found 1529 results on 62 pages for 'bandwidth'.

Page 50/62 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: three Linux servers with Intel CX4 10 GigE controllers and an X-Serve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, along with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc.). However, a single connection between two 10 GigE-equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s with netperf and 110 MB/s with NFS. When I open several connections from three of the machines to the fourth, I get 350 MB/s of aggregate NFS transfer speed. So each individual 10 GigE port can actually reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb. Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb links. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server traffic. Unfortunately that is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-well-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
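    For reference, this is roughly how I've been measuring it (a sketch; hostnames are placeholders, and iperf works too if netperf isn't installed everywhere):
        # Single TCP stream between two of the 10 GigE hosts, through the switch
        netperf -H server1 -l 30
        # Several parallel streams to the same host, to compare aggregate vs per-stream throughput
        iperf -c server1 -t 30 -P 4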

    Read the article

  • Time-Machine backup over SSH tunnel to NFS mount

    - by BTZ
    I've recently started using a new NAS which runs CentOS 6.2. One of its purposes is to serve as a backup target. I have been using Apple's Time Machine for a while and am very satisfied with it, so I'd like to continue using it. Backing up directly to an address on my network is no hassle; all works fine. For security reasons, though, I'd like all my traffic to go through an SSH tunnel to the NAS. This way I can avoid needing a VPN server (for personal reasons). As of NFSv4 the NFS daemon is bound to port 2049, which makes it easy for me to direct all traffic through an SSH tunnel.
    Tunnel: ssh -f admin@ms -L 2000:localhost:2049 -N
    Mount: mount -t nfs -o nfsvers=4,rw,proto=tcp,sync,intr,hard,timeo=600,retrans=10,wsize=32768,rsize=32768,port=2000 localhost:/mac_backup /Volumes/backup
    This works fine for Finder/Terminal, and throughput is almost equal to direct traffic (the NAS's CPU does ride high when I reach max bandwidth, though). Now the problem: with Time Machine I can't use the NFS mount point mounted on localhost. TM seems to try to connect to it and then gives me an "OSStatus error 65". I also tried NFSv3 (with all ports correctly forwarded) with no luck. Can anyone shed some light on this and/or offer a solution?
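    For what it's worth, the fallback I'm considering is a sparse bundle sitting on the NFS mount, with Time Machine pointed at the mounted bundle rather than at the NFS volume itself. A rough sketch (the size, names and the 10.7+ tmutil requirement are my assumptions; I haven't confirmed this avoids the error 65):
        # Let Time Machine list unsupported network volumes
        defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
        # Create a sparse bundle on the tunnelled NFS mount
        hdiutil create -size 300g -type SPARSEBUNDLE -fs "HFS+J" \
            -volname "TM Backup" /Volumes/backup/$(hostname -s).sparsebundle
        # Mount it and make it the Time Machine destination (tmutil needs 10.7+)
        hdiutil attach /Volumes/backup/$(hostname -s).sparsebundle
        sudo tmutil setdestination "/Volumes/TM Backup"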

    Read the article

  • Can I connect a Playstation 3's HDMI output to my monitor's DVI-D input? [migrated]

    - by HankJDoomstorm
    I'm attempting to connect my PlayStation 3 to my computer monitor. The monitor has a DVI-D (dual link) input, so before I understood the differences between the DVI varieties, I bought a DVI-I (dual link) to HDMI converter that won't fit into the port on the monitor (and besides, there isn't enough physical space at the back of the monitor to fit that much stuff before it hits the bottom). So I grabbed a DVI-D (single link) cable and a female-to-female DVI-I coupler, and plugged the DVI-D cable into the monitor and the whole mess of converters. The end result was HDMI to DVI-D single link, but my monitor isn't receiving a signal on its digital channel. (For clarity's sake: DVI-D DL input on monitor, DVI-D SL cable, DVI-I DL female-to-female coupler, DVI-I DL to HDMI converter, HDMI output on PS3.) I don't know much about this stuff (obviously), but my educated guess is that the bandwidth of the PS3's output is too high for the DVI-D single link cable, so nothing gets through. Will replacing the single link cable with a dual link one resolve this? If not, is it possible at all? Oh, I should mention I'm aware I won't get audio through the monitor; I have an RCA to 3.5mm converter for that.

    Read the article

  • Varnish, Nginx, Apache, APC, Meteor, Cpanel & Wordpress On A Single Server, Any Good?

    - by Aahan
    Yes, I have read many closely related questions, but I needed a specific answer, hence this question. First, these are my new server specifications: Linux server (CentOS), Intel Xeon 3470 quad core (2.93 GHz x 4), 4 GB DDR3 memory, 1 TB hard disk space, 10 TB bandwidth and 9 dedicated IPs.
    AIM: to speed up my WordPress blog and increase the server's capacity to handle heavy load.
    PLAN: this is how I am planning to set up my server:
    - VARNISH (in front, to cache server responses)
    - NGINX (to effectively handle static content & overcome the C10k problem)
    - APACHE (behind Nginx, to effectively deliver dynamic content)
    - APC (PHP page, database & object caching)
    - CPANEL (which requires Apache, and I require it)
    - WORDPRESS + W3 TOTAL CACHE (caching plugin for WordPress)
    So, will the setup work? Has anyone tried it? Please shower your thoughts and knowledge. NOTE: I can't do without Apache because I am used to that .htaccess & cPanel stuff, so that part is not optional; all the others are. Please try to help. I hope I am clear in what I wanted to ask.
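    For what it's worth, the port layout I have in mind is Varnish on :80, Nginx on :8080 and Apache on :8081; those ports are my own assumption, not anything cPanel configures for you. Starting Varnish would then look roughly like this:
        # Varnish answers on port 80 and forwards cache misses to Nginx on 8080
        varnishd -a :80 -b 127.0.0.1:8080 -s malloc,512m -T 127.0.0.1:6082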

    Read the article

  • After connecting wlan0 to bridge interface (and then removing it), can't connect to AP

    - by gmonk
    I'm on a laptop running Debian Jessie with kernel 3.13-1-amd64; lspci shows that my wireless NIC + driver is:
    04:00.0 Network controller: Intel Corporation Wireless 3160 (rev 83)
    Subsystem: Intel Corporation Dual Band Wireless-AC 3160
    Kernel driver in use: iwlwifi
    This has been working without any problems, until I tried creating a bridge for lxc containers to use. I did the same thing as this person here: How-to set up a network bridge on a laptop for LXC use? -- and ended up having the same problem as this poster did, so I decided to "undo" my actions. This hasn't been successful. Actions taken so far. To configure the bridge:
    #> ip link add type veth
    #> iw dev wlan0 set 4addr on
    #> ifconfig veth0 up
    #> brctl addbr br0
    #> brctl addif br0 wlan0
    #> brctl addif br0 veth0
    #> ifconfig br0 192.168.0.4/24
    #> ifconfig wlan0 0.0.0.0
    To "deconfigure":
    #> brctl delif br0 wlan0
    #> brctl delif br0 veth0
    #> iw dev wlan0 set 4addr off
    #> ifconfig veth0 down
    #> ifconfig wlan0 down
    #> ifconfig br0 down
    #> brctl delbr br0
    Now, dmesg and /var/log/syslog show repeated attempts at connecting to the AP that was working before, which fail after authentication: May 27 09:16:01 myhostname kernel: [11350.757172] wlan0: authenticate with 00:18:f8:54:a3:d6 May 27 09:16:01 myhostname kernel: [11350.759036] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> authenticating May 27 09:16:01 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:01 myhostname kernel: [11350.762615] wlan0: authenticated May 27 09:16:01 myhostname kernel: [11350.762753] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP May 27 09:16:01 myhostname kernel: [11350.762755] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP May 27 09:16:01 myhostname kernel: [11350.765080] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: authenticating -> associating May 27 09:16:01 myhostname kernel: [11350.767474] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=12 aid=0) May 27 09:16:01 myhostname kernel: [11350.767476] wlan0: 00:18:f8:54:a3:d6 denied association (code=12) May 27 09:16:01 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-ASSOC-REJECT bssid=00:18:f8:54:a3:d6 status_code=12 May 27 09:16:01 myhostname kernel: [11350.788475] wlan0: deauthenticating from 00:18:f8:54:a3:d6 by local choice (reason=3) May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> disconnected May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning May 27 09:16:02 myhostname dhclient: DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 14 May 27 09:16:04 myhostname wpa_supplicant[8946]: wlan0: SME: Trying to authenticate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:04 myhostname kernel: [11354.559579] wlan0: authenticate with 00:18:f8:54:a3:d6 May 27 09:16:04 myhostname kernel: [11354.561458] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:04 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:04 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> 
associating May 27 09:16:04 myhostname kernel: [11354.563445] wlan0: authenticated May 27 09:16:04 myhostname kernel: [11354.563631] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP May 27 09:16:04 myhostname kernel: [11354.563633] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP May 27 09:16:04 myhostname kernel: [11354.565727] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:04 myhostname wpa_supplicant[8946]: wlan0: Associated with 00:18:f8:54:a3:d6 May 27 09:16:04 myhostname kernel: [11354.568091] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=0 aid=9) May 27 09:16:04 myhostname kernel: [11354.569030] wlan0: associated May 27 09:16:04 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> associated May 27 09:16:05 myhostname kernel: [11354.978204] wlan0: deauthenticated from 00:18:f8:54:a3:d6 (Reason: 15) May 27 09:16:05 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-DISCONNECTED bssid=00:18:f8:54:a3:d6 reason=15 May 27 09:16:05 myhostname kernel: [11354.992729] cfg80211: Calling CRDA to update world regulatory domain May 27 09:16:05 myhostname kernel: [11354.995004] cfg80211: World regulatory domain updated: May 27 09:16:05 myhostname kernel: [11354.995005] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) May 27 09:16:05 myhostname kernel: [11354.995006] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995007] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995007] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995008] cfg80211: (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995009] cfg80211: (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995010] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm) May 27 09:16:05 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associated -> disconnected May 27 09:16:05 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning May 27 09:16:09 myhostname wpa_supplicant[8946]: wlan0: SME: Trying to authenticate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:09 myhostname kernel: [11358.763968] wlan0: authenticate with 00:18:f8:54:a3:d6 May 27 09:16:09 myhostname kernel: [11358.765796] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> authenticating May 27 09:16:09 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:09 myhostname kernel: [11358.769957] wlan0: authenticated May 27 09:16:09 myhostname kernel: [11358.770102] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP May 27 09:16:09 myhostname kernel: [11358.770104] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP May 27 09:16:09 myhostname kernel: [11358.770846] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:09 myhostname kernel: [11358.773358] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=12 aid=0) May 27 09:16:09 myhostname kernel: [11358.773361] wlan0: 00:18:f8:54:a3:d6 
denied association (code=12) May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: authenticating -> associating May 27 09:16:09 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-ASSOC-REJECT bssid=00:18:f8:54:a3:d6 status_code=12 May 27 09:16:09 myhostname kernel: [11358.802187] wlan0: deauthenticating from 00:18:f8:54:a3:d6 by local choice (reason=3) May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> disconnected May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning May 27 09:16:12 myhostname wpa_supplicant[8946]: wlan0: SME: Trying to authenticate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:12 myhostname kernel: [11362.573442] wlan0: authenticate with 00:18:f8:54:a3:d6 May 27 09:16:12 myhostname kernel: [11362.575270] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:12 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> authenticating May 27 09:16:12 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:12 myhostname kernel: [11362.580334] wlan0: authenticated May 27 09:16:12 myhostname kernel: [11362.580503] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP May 27 09:16:12 myhostname kernel: [11362.580516] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP May 27 09:16:12 myhostname kernel: [11362.583508] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:12 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: authenticating -> associating May 27 09:16:12 myhostname wpa_supplicant[8946]: wlan0: Associated with 00:18:f8:54:a3:d6 May 27 09:16:12 myhostname kernel: [11362.585908] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=0 aid=9) May 27 09:16:12 myhostname kernel: [11362.586781] wlan0: associated May 27 09:16:12 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> associated May 27 09:16:13 myhostname kernel: [11362.947693] wlan0: deauthenticated from 00:18:f8:54:a3:d6 (Reason: 15) May 27 09:16:13 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-DISCONNECTED bssid=00:18:f8:54:a3:d6 reason=15 May 27 09:16:13 myhostname kernel: [11362.973461] cfg80211: Calling CRDA to update world regulatory domain May 27 09:16:13 myhostname kernel: [11362.975673] cfg80211: World regulatory domain updated: May 27 09:16:13 myhostname kernel: [11362.975675] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) May 27 09:16:13 myhostname kernel: [11362.975676] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975677] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975678] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975678] cfg80211: (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975679] cfg80211: (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975679] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm) May 27 09:16:13 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associated -> 
disconnected May 27 09:16:13 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning May 27 09:16:14 myhostname NetworkManager[13992]: <warn> Activation (wlan0/wireless): association took too long. May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): device state change: config -> failed (reason 'no-secrets') [50 120 7] May 27 09:16:14 myhostname NetworkManager[13992]: <info> Marking connection 'Auto myaccesspoint' invalid. May 27 09:16:14 myhostname NetworkManager[13992]: <warn> Activation (wlan0) failed for connection 'Auto myaccesspoint' May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): device state change: failed -> disconnected (reason 'none') [120 30 0] May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): deactivating device (reason 'none') [0] May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> disconnected
    The things that jump out at me are "deauthenticating ... by local choice (reason=3)" and the lines that contain "(reason=15)". I've tried various fixes:
    - iwconfig wlan0 power off
    - killing wpa_supplicant
    - connecting with iwconfig + dhclient instead of GNOME's network-manager
    - explicitly configuring wlan0 in /etc/network/interfaces
    - creating a /etc/wpa_supplicant.conf file
    ...but nothing seems to work. I'm not sure what I did wrong, or what step I've skipped in trying to get wlan0 back as a non-bridged device -- I removed it from the bridge and then deleted the bridge itself. Any ideas?
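    For completeness, this is the extra clean-up I plan to try next (a sketch; the module names assume the iwlwifi/iwlmvm driver pair this card uses):
        # Remove the leftover veth pair (deleting veth0 also removes its peer)
        ip link delete veth0 2>/dev/null
        # Make sure 4addr mode is really off, then take the interface down
        iw dev wlan0 set 4addr off
        ip link set wlan0 down
        # Reload the wireless driver and restart the manager
        modprobe -r iwlmvm iwlwifi && modprobe iwlwifi
        service network-manager restart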

    Read the article

  • Parking domains and avoiding so-called "search engine penalties"

    - by senthilkumar-c
    I have purchased two domains from one particular registrar and hosting from GoDaddy. Assume they are domain1.com and domain2.com, and assume my hosting IP address is 111.111.111.111. I added both domain1.com and domain2.com in my domain management control panel and gave the same two nameservers for both domains at my registrar's control panel. So now both domains should show the same website. When I ping "domain1.com" or "domain2.com" the results say, respectively:
    Pinging domain1.com [111.111.111.111] with 32 bytes of data:
    Pinging domain2.com [111.111.111.111] with 32 bytes of data:
    So they both point to the same hosting IP. BUT, internally, I have configured IIS to point them to different folders so that different websites are shown. (My hosting plan is expensive and I intend to use the space and bandwidth for many websites.) Still, technically, all domains point to the same IP address. Is this a bad thing? Is this what is called "domain parking"? I read some search engine forum posts saying that two domains pointing to the same IP/website will be penalised by search engines and such. I have also read that simply "parking" domains won't attract a penalty. I don't know whether what I have done is parking or the so-called "wrong" thing. Can someone shed light on what I have done and what I should do? I don't want to be blacklisted by any search engine. P.S. I know this is not a search engine forum, but I am new to website hosting and domains and I am very weak in nearly all technical terms and concepts relating to web hosting and domains. I thought this would be a good place to understand these things.

    Read the article

  • Site Goes Offline Every Day At Midnight - No One Knows Why

    - by HollerTrain
    As of today, a website I manage has been going offline and coming back online between 12:00a and 12:25a. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. Here is what I DO know: I have a Pingdom account which alerts me when the site goes offline, so we can see that every day, like clockwork, the site goes on/off. At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. When the site is going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site goes up/down, this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause is, and how I can go about pinpointing this issue. ANY HELP is greatly appreciated.
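    Since it happens on such a regular schedule, the first things I plan to check are whatever the box itself runs around midnight (a rough checklist; the exact log and logrotate paths depend on the distro):
        # Anything scheduled at or near midnight?
        cat /etc/crontab; ls -l /etc/cron.d/ /etc/cron.daily/
        crontab -l -u root
        # Does log rotation restart or reload Apache when it runs?
        grep -ri "restart\|reload" /etc/logrotate.d/ 2>/dev/null
        # Was anything killed for running out of memory at that time?
        grep -iE "oom|out of memory|killed process" /var/log/messages /var/log/syslog 2>/dev/null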

    Read the article

  • Linux: prevent outgoing TCP flood

    - by Willem
    I run several hundred webservers behind load balancers, hosting many different sites with a plethora of applications (over which I have no control). About once a month, one of the sites gets hacked and a flood script is uploaded to attack some bank or political institution. In the past, these were always UDP floods, which were effectively resolved by blocking outgoing UDP traffic on the individual webserver. Yesterday they started flooding a large US bank from our servers using many TCP connections to port 80. As these types of connections are perfectly valid for our applications, just blocking them is not an acceptable solution. I am considering the following alternatives. Which one would you recommend? Have you implemented them, and how?
    1. Limit, on the webserver (iptables), outgoing TCP packets with source port != 80 (see the sketch below)
    2. The same, but with queueing (tc)
    3. Rate-limit outgoing traffic per user per server. Quite an administrative burden, as there are potentially thousands of different users per application server. Maybe this: how can I limit per-user bandwidth?
    4. Anything else?
    Naturally, I'm also looking into ways to minimize the chance of hackers getting into one of our hosted sites, but as that mechanism will never be 100% waterproof, I want to severely limit the impact of an intrusion. Cheers!
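    To make option 1 concrete, this is the kind of rule set I have in mind (a sketch; the rates are guesses that would need tuning, and legitimate outbound connections from the servers themselves, e.g. DNS or updates, would need to be allowed above it):
        # New outgoing TCP connections that do NOT originate from the web server itself
        # (replies from Apache have source port 80/443 and are left alone)
        iptables -N OUT_SYN_LIMIT
        iptables -A OUTPUT -p tcp --syn -m multiport ! --sports 80,443 -j OUT_SYN_LIMIT
        # Allow a modest rate of connection attempts, log and drop the rest
        iptables -A OUT_SYN_LIMIT -m limit --limit 30/second --limit-burst 100 -j RETURN
        iptables -A OUT_SYN_LIMIT -m limit --limit 6/minute -j LOG --log-prefix "outgoing flood: "
        iptables -A OUT_SYN_LIMIT -j DROP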

    Read the article

  • very slow internet with Linksys WRT54GL only in wireless mode (wired is OK)

    - by gojira
    I bought a new Cisco Linksys WRT54GL router to connect my laptop (running Windows 7) to the internet, and installed the Tomato 1.28 firmware on it. When I connect the laptop to the router via an ethernet cable, everything is fine and I get extremely fast up- and download speeds. When I connect wirelessly, however, websites load extremely slowly - it takes dozens of seconds to load a website! <-- This is my question: how can I fix the wireless speed issue? Gmail, for example, is unusable this way. I tried speedtest.net, but it always fails in the upload part of the test, so I can't even measure the bandwidth (could the fact that it fails on the upload part, not the download part, be an indication of what the problem is?!). I have isolated the problem a bit: I am convinced it has to do either with the router itself, the router settings, or the settings of the wireless connection in Win 7, because previously I was using another router, by Buffalo, and had no problems whatsoever. I have tried to reproduce the settings from the Buffalo router as closely as possible on the Linksys router (same channel, same encryption, etc.). The download speed problem only occurs with the Linksys router, and only in wireless mode! When I swap the Linksys router for the Buffalo router I have here for testing, the wireless speed is back to normal. Also, I had exactly the same problem before I installed the Tomato firmware, so it has nothing to do with the firmware itself.
    Notes & things I already tried:
    - Changing the channel does not seem to affect anything; I am also on the same channel (10) that I was on with the Buffalo router.
    - QoS is off.
    - Ping to the router itself is OK, ~1 ms.
    Some current settings of the Linksys router:
    WAN / Internet Type: DHCP
    Wireless Mode: Access Point
    B/G Mode: Mixed
    Broadcast: check
    Channel: 10 - 2.457 GHz
    Security: WPA2 Personal
    Encryption: AES

    Read the article

  • Nginx + Ubuntu 9.10, gzip not functioning

    - by Matt
    Hey there, so I installed and configured Nginx 0.7.62 on a new Slicehost Ubuntu 9.10 slice. All seems to work fine with the server, except that gzip isn't working for one reason or another. I made sure its settings are correct in /etc/nginx/nginx.conf:
    user www-data;
    worker_processes 3;
    error_log /var/log/nginx/error.log;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
        # multi_accept on;
    }
    http {
        include /etc/nginx/mime.types;
        access_log /var/log/nginx/access.log;
        sendfile on;
        #tcp_nopush on;
        keepalive_timeout 2;
        tcp_nodelay on;
        gzip on;
        gzip_comp_level 2;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript;
        gzip_disable "MSIE [1-6]\.";
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }
    This normally wouldn't be a big deal, but gzip support could save considerable bandwidth for my site. Does anyone have any ideas of what to check, or has anyone else run into this problem?
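    In case it's useful, this is how I've been testing it from the shell (assuming the config has actually been reloaded; note that by default nginx only gzips HTTP/1.1 requests and responses larger than gzip_min_length):
        # Check the config parses, then reload
        nginx -t && /etc/init.d/nginx reload
        # Request a page while advertising gzip and look for "Content-Encoding: gzip" in the headers
        curl -s -D - -o /dev/null -H "Accept-Encoding: gzip,deflate" http://localhost/index.html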

    Read the article

  • HTTP cache for my virtual machines

    - by MathematicalOrchid
    I have several Linux virtual machines running on my home PC. One of the quirks of Linux is that every time you run a package manager, it wants to "refresh" the configured software repositories - which basically means it wants to download a file from the Internet. If I revert to an earlier snapshot of the VM, then next time I run the package manager it will re-download the exact same data again [since it no longer exists in the VM]. It seems a shame to waste bandwidth endlessly downloading the same data over and over again, so I was wondering if there's some way I can set up some kind of HTTP proxy server that caches downloaded files. I have no idea how you would do such a thing though. In particular, it needs to be set up so that the VMs don't need to "know" that the cache is there; it needs to be transparent. But I don't know how to do that. Any suggestions on what software I'd need to use? It would be nice if I could run it under the Windows host OS, but running a small VM with a Linux guest is also possible...
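    One route I'm looking at is an apt-specific cache such as apt-cacher-ng running on any always-on Linux box (or a small dedicated guest), with each VM pointed at it. It isn't fully transparent, but it only needs a one-line file per VM; a sketch, with the cache host's IP as a placeholder:
        # On the machine that will hold the cache (Debian/Ubuntu); it listens on port 3142 by default
        sudo apt-get install apt-cacher-ng
        # On each VM, send apt's HTTP traffic through the cache
        echo 'Acquire::http::Proxy "http://192.168.1.10:3142";' | \
            sudo tee /etc/apt/apt.conf.d/01proxy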

    Read the article

  • SQL 2008 R2 replication error: The process could not connect to Distributor

    - by Lance Lefebure
    I have two servers running SQL 2008 R2 Standard, each with an instance named "MAIN". I have a small test database on my primary server (one table, 13 rows) that I want to replicate to a second server as a proof of concept for some larger databases that I want to replicate. I set up the primary server to be a publisher and distributor, and set the database to do transactional replication. I copied the data to the second server via a backup/restore, not via a snapshot (which I'll have to do with the larger databases due to database size and limited bandwidth). I followed the instructions here: http://gnawgnu.blogspot.com/2009/11/sql-2008-transactional-replication-and.html Now, on the subscriber, I go to Replication / Local Subscriptions / right-click / Properties on my subscription to the DB. The last synchronization shows a status of: "The process could not connect to Distributor 'PRIMARYSERVER\MAIN'." Data IS replicating from the primary to the secondary; any record I add on the primary shows up on the secondary server within seconds. Is the Distributor part of the snapshot system that I'm not using, or is it part of the transactional replication machinery? Thanks, Lance

    Read the article

  • How can I monitor VNC via Nagios?

    - by atroon
    I have a number of remote sites which have VNC running on a few computers for support purposes. They are (obviously) only available on our internal network. I am using Nagios to keep track of all the systems in the network and I want to have it check to make sure the VNC server is running on the appropriate hosts. There is a 'check_vnc' plugin available here but it relies on VNC Snapshot which I don't want to use. Certainly I could use it, but it adds more complexity and dependency, which I want to avoid. It seems simpler to just use check_tcp to make sure I get the proper response to a connection request for VNC, e.g. port 5900, send a connect string, get back framebuffer info. My real question, I suppose, is this: What is the 'proper' generic connect string for VNC (I use both UltraVNC and RealVNC) and what is the expected response? If it's really easier to use the VNC Snapshot and check_vnc, let me know. I just can't imagine that a string of text isn't easier, faster, and less bandwidth intensive to monitor.
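    For what it's worth, the check I had in mind is plain check_tcp expecting the RFB greeting a VNC server sends as soon as you connect; roughly like this (plugin path and host are placeholders):
        # VNC servers announce themselves with "RFB 003.00x" on connect
        /usr/lib/nagios/plugins/check_tcp -H 192.168.1.50 -p 5900 -e "RFB"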

    Read the article

  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site and we would like up-to-date copies of some of the tables sent across the internet to our servers at our office. We would like to copy only a few of the tables because the bandwidth requirement for log shipping the entire database (our current solution) is too high. Also, replication directly to our servers is out of the question, as our servers are not accessible from the internet and management does not want to do replication (more on that later). One possible idea we had is to do some form of replication of the tables we need into a second database on the same server and then log-ship that smaller database, but management is concerned because the clients have broken replication on us in the past (it was between two servers on their internal network, however) and would like to stay away from it if possible. Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against replication; I just need compelling arguments to convince management to do it. This is to be set up at multiple sites running either SQL 2005 or SQL 2008; we will have both versions on our end to restore the data to, so that is not an issue. Thank you.

    Read the article

  • Windows/IIS Hosting :: How much is too much?

    - by bsisupport
    I have 4 Windows 2003 servers running IIS 6. These servers host a bunch of unique web sites (in that they are all different in build/architecture/etc.). The code behind these sites ranges from straight HTML to classic ASP to the 1.1/2.0/3.x flavors of .NET. Some (most) of the sites use a SQL backend, which is hosted on one or two different servers – not the IIS servers themselves. There is no virtualization on these servers and no load balancing for these particular sites. The problem I'm running into is coming up with some baseline metrics, basically a "baseline score", to know when a web server has reached its hosting limit. Today, only some basic information about each server is used: how much bandwidth the server pumps out, hard drive space availability, and basic (very basic) RAM & CPU utilization (what it looks like at peak traffic times). I would be grateful if those of you that are 1000x smarter than I am could indulge me with your methods of managing IIS environments, whether performance monitoring specifics, "score" determination as I'm trying to do, or the obvious combination of both. Thanks in advance.

    Read the article

  • Doesn't VirtualBox 4.0 support drag-drop file copy yet?

    - by Benjamin
    Version 4.0.0 will be a new major release. The following major new features were added:
    - New settings/disk file layout for VM portability; see the manual for more information.
    - Open Virtualization Format Archive (OVA) support; see the manual for more information.
    - VMM: support more than 1.5/2 GB guest RAM on 32-bit hosts
    - Language bindings: uniform Java bindings for both local (COM/XPCOM) and remote (SOAP) invocation APIs
    - Chipset: added support for the Intel ICH9 chipset with 3 PCI buses, PCI Express and Message Signaled Interrupts (MSI)
    - Audio: Intel HD Audio is now available as guest hardware, for better support with modern guest operating systems (e.g. 64-bit Windows; bug #2785)
    - GUI: redesigned user interface with guest window preview
    - GUI: new display mode with downscaled guest display
    - Resource control: added support for limiting a VM's CPU time and IO bandwidth
    - Storage: support asynchronous I/O for iSCSI, VMDK, VHD and Parallels images
    - Storage: support for resizing VDI and VHD images
    - Windows Additions: support for automatically updating the Guest Additions (requires installed Windows Guest Additions 4.0 or later)
    - Guest Additions: support for copying files into the guest file system
    What does the last line mean? I thought it was a drag-and-drop file copy feature like VMware's. I tried that, but I couldn't copy via drag-and-drop or Ctrl-C/Ctrl-V either. Edit: I mean VBox 4.0 beta, not 3.x. The release notes are here. Download link is here.

    Read the article

  • DHCP Relay setup in ubuntu server

    - by jerichorivera
    I have a network appliance (QNO) that works as a traffic load balancer and DHCP server. I would like to add a Linux server between the network appliance and the client computers; the Linux server will be used to monitor bandwidth usage. My problem is that I still want DHCP to be served by the network appliance so that load balancing will still work efficiently. We are afraid that if we set up the Linux server as the DHCP server, the network appliance will not be able to load-balance the traffic, since it would only see the Linux server as a single client connecting to it. I've been searching all over for a tutorial on how to set up DHCP relay but have not found any. How do I set up DHCP relay on my Linux server, given there are two NICs attached to it: one connects the Linux server to the network appliance and the other connects it to the client computers.
    EDIT:
    Router (DHCP) ---- [eth0] Linux Server (Relay agent) [eth1] ----- PC (network)
    Router IP is 192.168.0.100
    eth0 is on DHCP
    eth1 is static 192.168.2.11 (if I need to change this I can)
    I tried dhcrelay -i eth1 192.168.0.100, but the PC was not getting any DHCP lease from the DHCP router. I might be missing something here.
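    The sketch I'm working from is below (it assumes the ISC relay, isc-dhcp-relay or dhcp3-relay, is installed and that the box is allowed to forward between the NICs); as I understand it the relay should listen on both interfaces, not just eth1:
        # Allow forwarding between the two NICs
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # Relay DHCP between the client side (eth1) and the appliance side (eth0),
        # handing requests to the QNO at 192.168.0.100
        dhcrelay -i eth0 -i eth1 192.168.0.100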

    Read the article

  • VPS goes slow at more than 20 users online at the same time

    - by hachiari
    I have a 512 MB VPS (burstable to 1 GB). Somehow, the site goes slow when there are about 10 users, and becomes impossible to load with 20 users online at the same time. I wonder what could be the problem. The bandwidth connection of the VPS is 1 Gbps. Here are some settings on my VPS:
    KeepAlive Off
    <IfModule prefork.c>
    StartServers 7
    MinSpareServers 7
    MaxSpareServers 10
    ServerLimit 64
    MaxClients 64
    MaxRequestsPerChild 0
    </IfModule>
    my.cnf settings - calculated Max Memory 300MB
    Output from UNIXBENCH (index values):
    TEST                                     BASELINE    RESULT      INDEX
    Dhrystone 2 using register variables     376783.7    13429727.4  356.4
    Double-Precision Whetstone               83.1        1137.5      136.9
    Execl Throughput                         188.3       1637.4      87.0
    File Copy 1024 bufsize 2000 maxblocks    2672.0      148868.0    557.1
    File Copy 256 bufsize 500 maxblocks      1077.0      79430.0     737.5
    File Read 4096 bufsize 8000 maxblocks    15382.0     1410009.0   916.7
    Pipe Throughput                          111814.6    4419722.0   395.3
    Pipe-based Context Switching             15448.6     561505.1    363.5
    Process Creation                         569.3       10272.7     180.4
    Shell Scripts (8 concurrent)             44.8        514.3       114.8
    System Call Overhead                     114433.5    3537373.8   309.1
    FINAL SCORE                                                      295.0
    I am afraid that the VPS company limits the number of connections to the VPS... is that possible? The server is in Japan, but the site has global traffic (some of it from countries with slow connections). Could this be the problem? This is a serious problem :( my site just can't grow if this keeps on happening... please tell me if you have any idea. Thank You, Bryant
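    To check whether this is simply RAM (MaxClients 64 on 512 MB is a lot), I plan to measure what each Apache child actually uses while the site is slow; a sketch (the process name may be apache2 instead of httpd depending on the distro):
        # Average resident memory per Apache child, in MB
        ps -o rss= -C httpd | awk '{sum+=$1; n++} END {if (n) print sum/n/1024, "MB avg over", n, "processes"}'
        # Overall memory and swap pressure
        free -m
        vmstat 5 5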

    Read the article

  • choosing hosting for custom ecommerce site, shared, dedicated, what to look for?

    - by spirytus
    Hi, I have (almost) finished developing a website for my client and now need to decide on hosting. Most of the users of the site will be located in Australia, and so am I and my client. Now, I want to consider everything before deciding on a host, and a few questions come to mind:
    1. I cannot afford the website being down, and all hosts say something like "99% uptime guaranteed". Should that alone be enough, or shall I ask hosts for some stats maybe?
    2. Does it make any difference if the servers and the whole hosting company are located in Australia or elsewhere? I've been hosting a few sites with JustHost.com on shared hosting (cheapest plan, servers in the US I believe) and never seen any delays, but could that be an issue? I would prefer an Australian company so I can actually go to them and give them a piece of my mind if something goes wrong, but US servers seem cheaper.
    3. Would shared hosting do? It's a custom-built PHP ecommerce application, and I know there are security issues with sessions etc. on shared hosting. I will take precautions of course, but could shared hosting be an issue?
    4. Would dedicated be a worthwhile option, considering that my knowledge of servers is very limited?
    I need to run PHP/MySQL, preferably with unlimited bandwidth, as with my experience I cannot tell what amount of traffic would be sufficient. Please let me know if I haven't provided enough information for you to answer my questions; I will gladly explain further. Thanks in advance for any answers :)

    Read the article

  • Recommend a UK based VPS host equivalent to Dreamhost [closed]

    - by Pez Cuckow
    I appreciate this question could be considered subjective and argumentative, so can people please make recommendations rather than arguing about the best; I believe the "correct" answer is the one closest to what I am looking for. Basically, I live in the UK but have been using the US-based DreamHost for about 6 years now, and my web projects are getting to the scale where the websites need to be UK-based to cope with the demand and load. I originally had shared hosting with DreamHost but upgraded to a VPS a while ago, getting 512 MB of RAM and unlimited disk space, bandwidth and domains for $30. Their control panel is a custom, easy-to-use build that they have created in-house, and it offers features very similar to other web panels (as far as I am aware). So basically my question boils down to: is there anywhere that offers an equivalent package? In all honesty, as long as I have over 50 GB of HDD space and unlimited domains, nothing else really matters. Are there any VPS providers you would recommend as reliable? I promise to check every link posted; many thanks for your time!

    Read the article

  • How can I stop SipVicious ('friendly-scanner') from flooding my SIP server?

    - by a1kmm
    I run a SIP server which listens on UDP port 5060 and needs to accept authenticated requests from the public Internet. The problem is that occasionally it gets picked up by people scanning for SIP servers to exploit, who then sit there all day trying to brute-force the server. I use credentials that are long enough that this attack will never feasibly work, but it is annoying because it uses up a lot of bandwidth. I have tried setting up fail2ban to read the Asterisk log and ban the offending IPs with iptables, which stops Asterisk from seeing the incoming SIP REGISTER attempts after 10 failed attempts (which happens in well under a second at the rate of attacks I'm seeing). However, SipVicious-derived scripts do not immediately stop sending after getting an ICMP Destination Host Unreachable; they keep hammering the connection with packets. The time until they stop is configurable, but unfortunately the attackers doing these brute-force attacks generally seem to set the timeout very high (attacks continue at a high rate for hours after fail2ban has stopped them from getting any SIP response back, once they have seen initial confirmation of a SIP server). Is there a way to make them stop sending packets at my connection?
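    One thing I'm considering on top of fail2ban is silently dropping the scanner's packets by their User-Agent before Asterisk ever sees them (a sketch; DROP rather than REJECT so nothing at all is sent back):
        # SipVicious identifies itself as "friendly-scanner" in its SIP headers
        iptables -I INPUT -p udp --dport 5060 -m string --string "friendly-scanner" --algo bm -j DROP
        iptables -I INPUT -p udp --dport 5060 -m string --string "sipvicious" --algo bm -j DROP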

    Read the article

  • using i7 "gamer" cpu in a HPC cluster

    - by user1219721
    I'm running the WRF weather model. It's a RAM-intensive, highly parallel application, and I need to build an HPC cluster for it. I use a 10 Gb InfiniBand interconnect. WRF doesn't depend on core count so much as on memory bandwidth. That's why a Core i7 3820 or 3930K performs better than high-grade Xeons (E5-2600 or E7). Universities seem to use the Xeon E5-2670 for WRF; it costs about $1500. The SPEC2006 fp_rate WRF benchmark shows the $580 i7 3930K performing the same with 1600 MHz RAM. What's interesting is that the i7 can handle RAM up to 2400 MHz, which gives a great performance increase for WRF; then it really outperforms the Xeon. Power consumption is a bit higher, but still less than 20 € a year. Even including the additional parts I'll need (PSU, InfiniBand, case), the i7 route is still about 700 €/CPU cheaper than the Xeon. So, is it OK to use "gamer" hardware in an HPC cluster, or should I do it pro with Xeons? (This is not a critical application. I can handle downtime. I think I don't need ECC?)

    Read the article

  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about testing what is whacking the damn thing? I've checked crontab, nothing in there that would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu version something or other, I can look that up if it matters.
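    In case it helps, this is the next round of checks I'm planning (a sketch; the audit rule assumes auditd is installed and a 64-bit kernel):
        # Did the kernel OOM-killer take it out?
        dmesg | grep -iE "out of memory|killed process"
        grep -iE "oom|killed process" /var/log/syslog /var/log/kern.log
        # Record every kill() syscall so the next death shows who sent the signal
        sudo auditctl -a exit,always -F arch=b64 -S kill -k killed_perl
        # ...then, after it dies again:
        sudo ausearch -k killed_perl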

    Read the article

  • What kind of hosting do I need? [closed]

    - by Robert Smith
    I have been trying to answer this question but I haven't found a specific answer for my situation. As I want to pay only for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find as plugins, e.g. WP-Forum or phpBB-type software) written in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images and I certainly don't need a huge amount of bandwidth (200 GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256 MB be enough (that's the amount of RAM offered by small instances at Amazon and Rackspace)? Do you know of any alternatives to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. Thanks a lot.
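    For sizing purposes, the stack I have in mind is nginx proxying to a small pool of gunicorn workers, roughly as below (the module path and port are placeholders, and the worker count follows the usual "2 x cores + 1" starting point):
        # Each worker is a full Python process, so RAM usage scales with this number
        gunicorn --workers 3 --bind 127.0.0.1:8000 myproject.wsgi:application
        # Quick check that the app answers on the bind address
        curl -I http://127.0.0.1:8000/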

    Read the article

  • What is the maximum number of Remote Desktop connections for a small server?

    - by Jay Wen
    I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30 GHz (4 cores, 8 logical processors) with 8 GB RAM. The main HD is a Samsung 840, and the bulk storage is a 4-disk WD Black RAID 10 array in a Synology NAS enclosure. My question is: given this hardware, approximately how many users can the system support via "Remote Desktop Connection"? Assume there are no licensing limits. These are not admin users; I know there is a two-admin limit. This boils down to: what resources does one remote connection require? RAM? % of the CPU? Network bandwidth? I guess the base case would be a connection where the user is inactive or simply browsing CNN. Once you know that, you know how many you could fit on the machine before something is maxed out. In reality, users would mostly be in Excel (multi-MB spreadsheets); I know the approximate resources currently required by each copy of Excel.

    Read the article
