Search Results

Search found 4786 results on 192 pages for 'traffic shaping'.

  • MySQL server high traffic makes websites really slow or unable to load

    - by Holapress
    Lately we have been having a lot of problems with our MySQL server: websites are really slow, or sometimes can't be loaded at all. The server is a dedicated machine that only runs our MySQL database. I have been running some tests using a profiler (JetProfiler) and a stress-testing tool (loadUI). If I use loadUI to open 50 simultaneous connections to one of our websites that runs a reasonably big query, the website already becomes unable to load. One of the things that worries me is that JetProfiler always shows a Threads_connected of 1.00, and it seems that when it hits around 2.00 I'm unable to connect. The 3 big peaks are from my loadUI tests: the first was 15 simultaneous connections, which still let the website load, just really slowly; the second was 40 simultaneous connections, which already made it impossible to load; and the third was 100 connections, which didn't load any more either. Another thing that worries me is that JetProfiler says all the queries that get used are full table scans - could this be the problem? The website I use as a test runs 3 queries: one for a menu that outputs around 1000 rows, one for the ads that has around 560 rows, and a big one to get posts that has around 7000 rows (see screenshot below). I have also monitored the CPU of the server and there seems to be no problem there; even when I make a lot of connections with loadUI the CPU stays low. I can't figure out the main cause of the websites being unable to load when there is a high amount of traffic. If anyone has suggestions for further testing, or ideas about what might be causing this, please let me know.
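
    A hedged first step, since JetProfiler reports full table scans: run the three queries through EXPLAIN and see whether an index on the filtered column changes the plan. Table, column and database names below are placeholders, not taken from the site:

        # Check the execution plan of the big posts query (placeholder names):
        mysql -e "EXPLAIN SELECT * FROM posts WHERE topic_id = 123\G" mydb

        # If the plan shows type: ALL (a full scan), try an index and re-check:
        mysql -e "ALTER TABLE posts ADD INDEX idx_topic (topic_id);" mydb

    A 7000-row scan is cheap on its own, but 50 clients each scanning three tables concurrently multiplies quickly, which would match websites stalling under load while a single request stays fast.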

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd, but I need a slower, or cached, filesystem. I have a lot of firewalls that are syslogging their data to a pair of Linux VMs, which write these messages to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward them to our Splunk servers. The problem is that the syslog server is writing these messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN - which can handle this workload right now, but our FW traffic is going to grow by at least 5000% (really) in the coming months, and that will be a pain for the SAN. I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached, or held off in some way from the 'physical' disks, so that the VMs fire off larger, but less frequent, writes - there's no way of avoiding the writes, but there's no need to do so many tiny ones. I've looked at the various ext3 options, setting noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other filesystems, but thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk; our firewall team insists they stay in their original format for diagnostic purposes.
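
    One hedged experiment, not something verified on this workload: ext3 commits its journal every 5 seconds by default, and a longer commit interval lets the page cache batch those tiny writes into fewer, larger ones. The trade-off is that up to that many seconds of log data can be lost on a crash, which may be acceptable for logs. The mount point below is a placeholder:

        # Batch writeback by lengthening the journal commit interval (default 5 s):
        mount -o remount,commit=60 /var/log/remote

    The equivalent fstab option would be commit=60 alongside noatime.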

  • Router intermittently failing

    - by nomen
    My old Asus router died a few weeks ago, so I thought I'd set up my Debian box to deal with routing my home network. I have a few complications, but I adapted my configuration from a previously working configuration, and I don't see why I am having intermittent problems. But I am having them! Every so often, my SSH connections to the router (and to the Xen virtual machines hosted by the router) just drop. I am unable to use the router's dns server. I can't ping the router. Etc. All of these things work most of the time, but break down intermittently, for a few minutes at a time. (I can provide more details, but I'm not sure what will be helpful.)

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Gigabit ethernet, internal network
        auto eth0
        allow-hotplug eth0
        iface eth0 inet manual

        # USB ethernet, internet
        auto eth1
        allow-hotplug eth1
        iface eth1 inet dhcp

        # Xen Bridge
        auto xlan0
        iface xlan0 inet static
            bridge_ports eth0
            address 10.47.94.1
            netmask 255.255.255.0

    As I understand it, this is sufficient to create the network interfaces, and even do some switching between Xen hosts and my eth0 interface. I installed and configured Shorewall to manage routing between the bridge and my internet-facing interface.

    /etc/shorewall/zones:

        fw   firewall
        net  ipv4
        lan  ipv4

    /etc/shorewall/interfaces:

        net  eth1   detect  dhcp,tcpflags,nosmurfs,routefilter,logmartians
        lan  xlan0  detect  dhcp,tcpflags,nosmurfs,routefilter,logmartians,routeback,bridge

    /etc/shorewall/policy:

        net  all  DROP    info
        fw   net  ACCEPT  info
        all  all  REJECT  info

    /etc/shorewall/rules:

        DNS(ACCEPT)   fw   net
        DNS(ACCEPT)   lan  fw
        Ping(ACCEPT)  lan  fw

    ... and so on; these all work, when the router is accepting traffic at all.

    /etc/shorewall/masq:

        eth1  10.47.94.0/24

    Also, the router is currently "working", and I checked on a problematic client:

        $ arp infrastructure
        infrastructure.mydomain (10.47.94.1) at 0:23:54:bb:7d:ce on en0 ifscope [ethernet]

    I tried it when the router was down, and I (eventually) got the same response. It took about 30 seconds to return, though.

  • Dropbox picture sync: Skip RAW files?

    - by Steven Lu
    I like the convenience of having Dropbox keep track of my photos, because it tends to work with my devices over 3G (I am often tethering to my phone with my iPad and MacBook) as well as Wi-Fi, but it's a waste of network traffic to sync the raw files from my camera or memory card. They clutter up the Dropbox list and the files are just huge. Is there a way to configure the Dropbox client so that it ignores a certain file extension for the picture sync? Also, I suspect that if I just go and delete the raw files, then the next time I plug in the memory card and tell Dropbox to sync, it will re-download them. Which would be terribad. I could switch to iCloud for Photo Stream, I suppose, but there would be no access via 3G that way, and I've already got years of experience with Dropbox, so I know it's going to just work. I think any method that works for excluding files from sync on Dropbox in general should work here too. Edit: Wow, there are 19k votes for this exact request.

  • IPTables configuration help

    - by Sam
    I'm after some help with setting up IPTables. Mostly the configuration is working, but regardless of what I try I cannot allow localhost to access the local Apache only (i.e. localhost may access localhost:80 and nothing else). Here is my script:

        #!/bin/bash

        # Allow root to access external web and ftp
        iptables -t filter -A OUTPUT -p tcp --dport 21 --match owner --uid-owner 0 -j ACCEPT
        iptables -t filter -A OUTPUT -p tcp --dport 80 --match owner --uid-owner 0 -j ACCEPT

        # Allow DNS queries
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT

        # Allow in and outbound SSH to/from any server
        iptables -A INPUT -p tcp -s 0/0 --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp -d 0/0 --sport 22 -j ACCEPT

        # Accept ICMP requests
        iptables -A INPUT -p icmp -s 0/0 -j ACCEPT
        iptables -A OUTPUT -p icmp -d 0/0 -j ACCEPT

        # Accept connections from any local machines but disallow localhost
        # access to networked machines
        iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
        iptables -A OUTPUT -d 10.0.1.0/24 -j DROP

        # Drop ALL other traffic
        iptables -A OUTPUT -p tcp -d 0/0 -j DROP
        iptables -A OUTPUT -p udp -d 0/0 -j DROP

    Now, I have tried many permutations and I'm obviously missing something. I placed the new rules above the in/outbound SSH ones, so it's not rule precedence. If someone could give me the heads up on allowing only the local machine to access the local web server, that'd be great. Cheers guys.
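
    A hedged sketch of what may be missing: a localhost-to-localhost:80 request travels over the loopback interface, and nothing in the OUTPUT chain above accepts it for a non-root user, so it falls through to the final DROP rules. Loopback rules placed ahead of those catch-all DROP lines (order matters) should do it:

        # Let localhost open connections to the local Apache...
        iptables -A OUTPUT -o lo -p tcp --dport 80 -j ACCEPT
        # ...and let the replies (sport 80) back out over loopback
        iptables -A OUTPUT -o lo -m state --state ESTABLISHED,RELATED -j ACCEPT

    If the INPUT policy is ever switched to DROP, matching "-i lo" ACCEPT rules would be needed on that side too.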

  • Setting a subdomain to access a home machine with Windows Remote Desktop

    - by ianhales
    I'm trying to connect remotely to my home machine through Windows Remote Desktop (amongst other things, but this is currently my primary focus). I can do this fine using my home WAN's static IP (thank god for cable!) with port forwarding, but I would like to access it from a subdomain of my web site (e.g. home.mydomain.co.uk). In the cPanel for my hosting account, I've gone into DNS zones and altered the A record to point to my WAN's IP, which I thought should do the job, but I still cannot connect. When I ping the subdomain, I get my web host's IP, which I guess is to be expected, as I believe the DNS of the host domain is used first and my server then handles the redirection of traffic to the IP in the A record. Is this the correct idea? Do A record changes suffer from the same propagation delays as nameserver changes? I suppose that could explain it. (By the way, this thread confirms my thoughts that setting the A record should be enough: Hostmonster Subdomain redirected to home server IP: How to ssh into home server using subdomain.)
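
    A hedged way to separate propagation delay from a wrong record: query an outside resolver directly, bypassing any local cache. If this returns the WAN IP while a plain ping still shows the web host's IP, it's caching; if it returns the web host's IP, the A record isn't what you think it is:

        dig +short home.mydomain.co.uk @8.8.8.8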

  • Network Sniffing and Hubs

    - by Chris_K
    This will likely seem naive to the experts... but it has been on my mind lately. For years I've been using ntop and a cheap 4 port hub to sniff client networks to determine who's doing what -- and how much. Great way to see what's going on when they call and say "Geeze, the network seems really slow today." No need to bring in a managed switch (or access the existing one) and no need to configure spanning or mirroring. I just drop in the hub inline where I want to measure. Lately I noticed it is just about impossible to buy a real honest-to-goodness hub anymore. While looking for a new one, I had someone tell me that I should be sure to get a full-duplex hub or I'd only be seeing half the traffic when I monitor. Really? I've been using a crusty old Netgear DS104 all this time. No clue if it is half or FD. Have I really been understating my measurements? I'm just not bright enough about the physical layer to really know... Side note: Just ordered a Dualcomm Ethernet Switch TAP as a hub replacement. Seems like a nifty gadget. Any notes or tips about it would be welcome in the comments :-)

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. It seems an efficient and flexible solution for data storage, and for resources for occasional data processing too. However, it might be very inefficient to mix two data centres and transfer data from the current web hoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange looks expensive, and moving the data back to the hoster introduces lag and delay. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log file storage, to run Urchin analysis and/or port the log file data into a BigTable for exhaustive analysis there. And after working with the data: how to bring it back to the hoster and use it with the web servers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, if the transfer/exchange does not lead to increased cost.

  • XAMPP server giving 404 error when requested over an IPv4 connection

    - by boyb
    This is in reference to a previous question that I asked, which was answered by womble: http://serverfault.com/a/406280/127729

    His diagnosis was:

        So, now we have the real DNS records, we can do some diagnosis.

        dig for both A and AAAA on akosiboybastos.broker.freenet6.net gives a valid response, with an appropriate address. Good.
        dig for both A and AAAA on bastosforum.strangled.net gives the same responses (with a CNAME response thrown in). Also good.
        This means that the problem is not DNS-related, as those records are in order.

        wget -6 bastosforum.strangled.net/ gives a 200 OK response.
        wget -4 bastosforum.strangled.net/ gives a 404 Not Found response.
        This means that your webserver is misconfigured so that it's not serving the response you desire on IPv4.

        Given that the initial DNS problem asked in this question has been solved, I would recommend posting a new question with relevant webserver-related configuration, if you can't determine the configuration error yourself.

    I am using XAMPP (latest version) running phpBB 3.0.10 via an IPv6 tunnel from Freenet6, and my domain is akosiboybastos.broker.freenet6.com. Nothing fancy with the installation, just an out-of-the-box install (with a few cosmetic mods). Both IPv4 and IPv6 traffic can connect using that URL, but when I put a CNAME record on my test domain, bastosforum.strangled.net, pointing it to akosiboybastos.broker.freenet6.com, only IPv6 can connect. As suggested by womble, this is a misconfigured webserver. To be honest, I don't know where to start checking on the server, as it is fully working if you use the domain given by Freenet6 (akosiboybastos.broker.freenet6.com). Any info on how to go about this server issue is welcome, as I'm really a noob when it comes to computers.

    regards, boyb
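
    A hedged place to start looking (paths assume XAMPP for Linux; adjust for your platform): if the IPv4 and IPv6 requests land in different Apache virtual hosts, the IPv4 one may be a default vhost that knows nothing about the CNAME domain, which would produce exactly this 404:

        # Show how Apache parsed its vhosts and which one is the default
        /opt/lampp/bin/httpd -S

        # Look for a ServerAlias covering bastosforum.strangled.net
        grep -R "ServerName\|ServerAlias" /opt/lampp/etc/

    If no vhost lists the test domain, adding it as a ServerAlias to the working vhost would be the first thing to try.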

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting, but keep it going for the loyal users of the site and API. The database has a nearly 4 million row table, and runs on a 4GB dual Xeon 5320 server. When I check server stats on this server with ps -aux, I see MySQL running at about 11% capacity, so there is no serious load. The main query against MySQL runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360MB RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the MySQL variables, and they are very similar on both servers (I've included the SHOW VARIABLES output below, if anybody is interested). Is there a good way to decide what size server is needed, based on what I'm coming from? Is it RAM that is likely making the difference, given the large table size? Is there a way for me to figure out how much RAM would be ideal? Here's the output of SHOW VARIABLES (though I'm not sure it is important):

        +---------------------------------+----------------------------+
        | Variable_name                   | Value                      |
        +---------------------------------+----------------------------+
        | auto_increment_increment        | 1                          |
        | auto_increment_offset           | 1                          |
        | automatic_sp_privileges         | ON                         |
        | back_log                        | 50                         |
        | basedir                         | /usr/                      |
        | bdb_cache_size                  | 8384512                    |
        | bdb_home                        | /var/lib/mysql/            |
        | bdb_log_buffer_size             | 262144                     |
        | bdb_logdir                      |                            |
        | bdb_max_lock                    | 10000                      |
        | bdb_shared_data                 | OFF                        |
        | bdb_tmpdir                      | /tmp/                      |
        | binlog_cache_size               | 32768                      |
        | bulk_insert_buffer_size         | 8388608                    |
        | character_set_client            | latin1                     |
        | character_set_connection        | latin1                     |
        | character_set_database          | latin1                     |
        | character_set_filesystem        | binary                     |
        | character_set_results           | latin1                     |
        | character_set_server            | latin1                     |
        | character_set_system            | utf8                       |
        | character_sets_dir              | /usr/share/mysql/charsets/ |
        | collation_connection            | latin1_swedish_ci          |
        | collation_database              | latin1_swedish_ci          |
        | collation_server                | latin1_swedish_ci          |
        | completion_type                 | 0                          |
        | concurrent_insert               | 1                          |
        | connect_timeout                 | 10                         |
        | datadir                         | /var/lib/mysql/            |
        | date_format                     | %Y-%m-%d                   |
        | datetime_format                 | %Y-%m-%d %H:%i:%s          |
        | default_week_format             | 0                          |
        | delay_key_write                 | ON                         |
        | delayed_insert_limit            | 100                        |
        | delayed_insert_timeout          | 300                        |
        | delayed_queue_size              | 1000                       |
        | div_precision_increment         | 4                          |
        | keep_files_on_create            | OFF                        |
        | engine_condition_pushdown       | OFF                        |
        | expire_logs_days                | 0                          |
        | flush                           | OFF                        |
        | flush_time                      | 0                          |
        | ft_boolean_syntax               | + -

    For some reason, that table formats properly in the preview, but apparently not when viewing the question; it is cut off above. Hopefully it isn't needed anyway.
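
    A hedged way to put a number on the RAM question: ask MySQL how big the table's data and indexes actually are. If the index (for MyISAM) or data plus index (for InnoDB) footprint fits in the relevant buffer on the big server but not on the 360MB VPS, that alone could explain a 0.45 s vs 20 s difference. The schema name is a placeholder:

        mysql -e "SELECT table_name,
                         engine,
                         ROUND(data_length  / 1024 / 1024) AS data_mb,
                         ROUND(index_length / 1024 / 1024) AS index_mb
                  FROM information_schema.TABLES
                  WHERE table_schema = 'mydb';"

    As a rough sizing rule, key_buffer_size (MyISAM) or innodb_buffer_pool_size (InnoDB) should be able to hold the working set those numbers describe, plus headroom for connections and the OS.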

  • HTTPS Proxy which answers CONNECT with own certificate

    - by user1109542
    I'm configuring a DMZ which has the following scheme:

        Internet - Server A - Security Appliance - Server B - Intranet

    In this DMZ I need a proxy server for http(s) connections from the intranet to the internet. The problem is that all traffic should be scanned by the Security Appliance. For this I have to terminate the SSL connection at Server B, proxy it as plain http to Server A through the Security Appliance, and then send it onwards as https into the internet. Encryption is then maintained between the client and Server B, and between the target server and Server A; the communication between Server A and Server B is unencrypted. I know about the security risks, and that the client will see a warning about the unknown CA of Server B's certificate. As software I want to use Apache web servers on Server A and Server B. As a first step, I tried to configure Server B to serve as the endpoint for the SSL encryption, so it has to establish the encryption with the client (answering HTTP CONNECT):

        Listen 8443
        <VirtualHost *:8443>
            ProxyRequests On
            ProxyPreserveHost On
            AllowCONNECT 443

            # SSL
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel debug

            SSLProxyEngine on
            SSLProxyMachineCertificateFile /etc/pki/tls/certs/localhost_private_public.crt

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from 192.168.0.0/22
            </Proxy>
        </VirtualHost>

    With this proxy, only the CONNECT request is passed through, and an encrypted connection is established between the client and the target. Unfortunately there is no way to configure mod_proxy_connect to decrypt the SSL connection. Is there any way to accomplish that kind of proxying with Apache?

  • Help with routing table

    - by user68752
    I have tried to find the answer to my question but have not really found a clean and easy solution. I have a box (Ubuntu headless 10.04.1 server, with one Ethernet port) on a LAN behind a router (running m0n0wall), on which I have successfully installed a PPTP device (ppp0); this is working flawlessly (following this link). The thing is, I want this box to route all its internet traffic through the VPN tunnel (the ppp0 device) but also to be able to access the local LAN on the 192.168.1.* subnet. I've succeeded a bit with this, but my problem right now is that I have port forwards (e.g. SSH) done in the m0n0wall pointing to this specific box, which forces me to add routes on the box for every machine that wants to access it through such a port. For instance, a machine with IP xyz.xyz.xyz.xyz needs a static route set up in the box's routing table before it can reach the box. This is the result of route -n:

        xxx.xxx.137.2  192.168.1.1  255.255.255.255  UGH  0 0 0 eth0
        xxx.xxx.137.2  0.0.0.0      255.255.255.255  UH   0 0 0 ppp0
        192.168.1.0    0.0.0.0      255.255.255.0    U    0 0 0 eth0
        yyy.yyy.0.0    192.168.1.1  255.255.0.0      UG   0 0 0 eth0
        0.0.0.0        0.0.0.0      0.0.0.0          U    0 0 0 ppp0

    where the xxx addresses are IPs provided by the VPN server, and yyy.yyy.0.0 is a net that I want to have access to the box; without that route I can't access the box from outside the LAN (via the port forwards done in the router software, m0n0wall). Is there a way round this ugly solution?
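
    A hedged sketch of the usual way round it: source-based policy routing, so that anything the box sends from its own LAN address goes back via the LAN gateway, while everything else keeps using ppp0. Replies to port-forwarded connections are sourced from the LAN address, so this removes the need for per-client host routes. 192.168.1.50 stands in for the box's actual LAN IP:

        # Route traffic sourced from the LAN address through a dedicated table
        ip rule add from 192.168.1.50 table 100
        ip route add 192.168.1.0/24 dev eth0 table 100
        ip route add default via 192.168.1.1 dev eth0 table 100

    This is untested on 10.04, and the rules need to be re-applied at boot (e.g. from post-up lines in /etc/network/interfaces).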

  • Small maximum number of connections on a Linux router

    - by Eugene
    I have a Linux box acting as a router, with no iptables or other firewall and no networking applications running on it - just a pure router. I've put it in a test environment that generates many TCP connections, each having a unique source and destination IP, and those connections go through this router. I'm observing that the number of connections successfully created rises to approximately 500, then no more connections can be created for several minutes, then another 100 connections can be created and there is another pause, and so on. If 10 connections are created for each source-destination pair, the maximum numbers go up about 10 times, so the problem is probably with the many connections from different IPs. As the traffic is simply routed, this shouldn't involve the number of file descriptors, iptables connection tracking, or the other things often proposed to check in similar cases. The box has plenty of free RAM and CPU, and both NICs are gigabit. The kernel is 2.6.32. I've already tried increasing net.core.*mem_max, net.core.netdev_max_backlog and txqueuelen on both NICs, with no effect at all. What else should I check? Is there some rate limit in the kernel itself?
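
    One hedged thing to rule out, since the limit scales with the number of distinct IPs rather than with connection count: the kernel's neighbour (ARP) table and, on 2.6.32, the routing cache both have per-entry limits that can produce exactly this stepwise stall when exhausted:

        # Any "neighbour table overflow" or route cache messages?
        dmesg | grep -i -e neighbour -e 'route cache'

        # Current neighbour table thresholds (defaults: 128 / 512 / 1024)
        sysctl net.ipv4.neigh.default.gc_thresh1 \
               net.ipv4.neigh.default.gc_thresh2 \
               net.ipv4.neigh.default.gc_thresh3

        # If they are being hit, raising them is a common workaround
        sysctl -w net.ipv4.neigh.default.gc_thresh3=8192

    Whether this applies depends on whether the test IPs sit on directly attached subnets; if they are behind a next-hop router, only the route cache (net.ipv4.route.max_size) would be in play.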

  • SNMP query - operation not permitted

    - by jperovic
    I am working on an API that reads a lot of data via SNMP (routes, interfaces, QoS policies, etc.). Lately, I have experienced a random error stating:

        Operation not permitted

    Now, I use SNMP4J as the core library and cannot really pinpoint the source of the error. Some Stack Overflow questions have suggested the OS being unable to open a sufficient number of file handles, but increasing that parameter did not help much. The strange thing is that the error occurs only when iptables is up and running. Could it be that the firewall is blocking some traffic? I have tried writing a JUnit test that mimicked the application's logic, but no errors were fired... Any help would be appreciated! Thanks!

    iptables:

        *nat
        :PREROUTING ACCEPT [2:96]
        :POSTROUTING ACCEPT [68:4218]
        :OUTPUT ACCEPT [68:4218]
        # route redirect for SNMP trap and syslog
        -A PREROUTING -i eth0 -p udp -m udp --dport 514 -j REDIRECT --to-ports 33514
        -A PREROUTING -i eth0 -p udp -m udp --dport 162 -j REDIRECT --to-ports 33162
        COMMIT

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        .....
        # SNMP
        -A INPUT -p udp -m state --state NEW -m udp --dport 161 -j ACCEPT
        # SNMP trap
        -A INPUT -p udp -m state --state NEW -m udp --dport 162 -j ACCEPT
        -A INPUT -p udp -m state --state NEW -m udp --dport 33162 -j ACCEPT
        .....
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT
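
    A hedged avenue to check, since the error appears only with iptables loaded: the state match pulls in connection tracking, and when the conntrack table fills up the kernel starts refusing new flows, which surfaces to applications as send() failing with EPERM - exactly "Operation not permitted". An SNMP poller talking to many devices can fill the table quickly:

        # Look for "nf_conntrack: table full, dropping packet" messages
        dmesg | grep -i conntrack

        # Compare current usage against the limit
        sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

        # If usage rides the limit, raising it is a common mitigation
        sysctl -w net.netfilter.nf_conntrack_max=262144

    On older kernels the sysctls live under net.ipv4.netfilter.ip_conntrack_* instead.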

  • Allow outgoing connections for DNS

    - by Jimmy
    I'm new to IPtables, but I am trying to setup a secure server to host a website and allow SSH. This is what I have so far:

        #!/bin/sh
        i=/sbin/iptables

        # Flush all rules
        $i -F
        $i -X

        # Setup default filter policy
        $i -P INPUT DROP
        $i -P OUTPUT DROP
        $i -P FORWARD DROP

        # Respond to ping requests
        $i -A INPUT -p icmp --icmp-type any -j ACCEPT

        # Force SYN checks
        $i -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # Drop all fragments
        $i -A INPUT -f -j DROP

        # Drop XMAS packets
        $i -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

        # Drop NULL packets
        $i -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

        # Stateful inspection
        $i -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT

        # Allow established connections
        $i -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow unlimited traffic on loopback
        $i -A INPUT -i lo -j ACCEPT
        $i -A OUTPUT -o lo -j ACCEPT

        # Open nginx
        $i -A INPUT -p tcp --dport 443 -j ACCEPT
        $i -A INPUT -p tcp --dport 80 -j ACCEPT

        # Open SSH
        $i -A INPUT -p tcp --dport 22 -j ACCEPT

    However I've locked down my outgoing connections and it means I can't resolve any DNS. How do I allow that? Also, any other feedback is appreciated. James
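
    A hedged sketch of the missing piece: with the OUTPUT policy set to DROP, the box can neither send DNS queries nor, in fact, answer the web and SSH traffic it accepts, because only loopback is allowed out. Something along these lines, appended with the other rules, should cover both:

        # Allow outbound DNS lookups
        $i -A OUTPUT -p udp --dport 53 -j ACCEPT
        $i -A OUTPUT -p tcp --dport 53 -j ACCEPT

        # Allow replies to connections this host has accepted (web, SSH)
        $i -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    Without the ESTABLISHED rule on OUTPUT, the inbound ACCEPTs for ports 80/443/22 let the SYN in, but the responses are dropped on the way out.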

  • OpenVPN + iptables / NAT routing

    - by Mikeage
    I'm trying to set up an OpenVPN VPN which will carry some (but not all) traffic from the clients to the internet via the OpenVPN server. My OpenVPN server has a public IP on eth0, and is using tap0 to create a local network, 192.168.2.x. I have a client which connects from local IP 192.168.1.101 and gets VPN IP 192.168.2.3. On the server, I ran:

        iptables -A INPUT -i tap+ -j ACCEPT
        iptables -A FORWARD -i tap+ -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On the client, the default remains to route via 192.168.1.1. In order to point it at 192.168.2.1 for HTTP, I ran:

        ip rule add fwmark 0x50 table 200
        ip route add table 200 default via 192.168.2.1
        iptables -t mangle -A OUTPUT -j MARK -p tcp --dport 80 --set-mark 80

    Now, if I try accessing a website on the client (say, wget google.com), it just hangs there. On the server, I can see:

        $ sudo tcpdump -n -i tap0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tap0, link-type EN10MB (Ethernet), capture size 96 bytes
        05:39:07.928358 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 558838 0,nop,wscale 5>
        05:39:10.751921 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 559588 0,nop,wscale 5>

    where 74.125.67.100 is the IP it gets for google.com. Why isn't the MASQUERADE working? More precisely, I see the source showing up as 192.168.1.101 - shouldn't there be something to indicate that it came from the VPN?

    Edit: Some routes [from the client]:

        $ ip route show table main
        192.168.2.0/24 dev tap0  proto kernel  scope link  src 192.168.2.4
        192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.101  metric 2
        169.254.0.0/16 dev wlan0  scope link  metric 1000
        default via 192.168.1.1 dev wlan0  proto static

        $ ip route show table 200
        default via 192.168.2.1 dev tap0
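
    A hedged reading of that capture: the fwmark rule re-routes the packets out tap0, but the source address was already chosen from the main table (192.168.1.101), and the server's MASQUERADE only matches -s 192.168.2.0/24, so those packets are forwarded to the internet un-NATed with a private source and the replies never come back. One sketch of a fix is to NAT on the client side of the tunnel, so packets arrive at the server with a 192.168.2.x source:

        # On the client: rewrite marked traffic to the tap0 address
        iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE

    Alternatively, the server's POSTROUTING rule could drop the -s 192.168.2.0/24 match, though that is broader than intended.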

  • Thunderbird 15.0.1 cannot use Exchange 2003 SMTP

    - by speedreeder
    I'm having the strangest time getting a Thunderbird email client to connect to my Exchange 2003 server. I got the incoming IMAP account set up no problem, and I can receive mail. However sending mail will not work no matter what SMTP settings I enter. After checking the server, the proper settings should be port 25 with no authentication or connection security, which I have entered. I can ping the hostname of the server from the client machine in question. The Thunderbird error message I get is: "Sending of message failed. The message could not be sent because the connection to SMTP server -hostname omitted- was lost in the middle of the transaction." So I went to the server and double checked the settings for Exchange's SMTP stuff. I have it correct. I tried to telnet (on the server) to localhost 25. It appears to connect and then disconnect immediately, no message, no nothing. When I telnet to other ports (POP-110 for example) I get proper connection messages and a stable connection. There are no firewalls on either the client or the server. There's a firewall on the network but LAN-LAN traffic is unrestricted. I can reproduce the Thunderbird error on a second client, and I can't get any client to be able to telnet in. Anyone have any ideas?

  • haproxy + nginx: https trailing slashes redirected to http

    - by user1719907
    I have a setup where HTTP(S) traffic goes from HAProxy to nginx:

                    HAProxy       nginx
        HTTP  ----->  :80   ----> :9080
        HTTPS ----->  :443  ----> :9443

    I'm having troubles with implicit redirects caused by trailing slashes going from https to http, like this:

        $ curl -k -I https://www.example.com/subdir
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.2.4
        Date: Thu, 04 Oct 2012 12:52:39 GMT
        Content-Type: text/html
        Content-Length: 184
        Location: http://www.example.com/subdir/

    The reason obviously is HAProxy working as SSL unwrapper, and nginx sees only http requests. I've tried setting up the X-Forwarded-Proto to https on HAProxy config, but it does nothing. My nginx setup is as follows:

        server {
            listen 127.0.0.1:9443;
            server_name www.example.com;
            port_in_redirect off;
            root /var/www/example;
            index index.html index.htm;
        }

    And the relevant parts from the HAProxy config:

        frontend https-in
            bind *:443 ssl crt /etc/example.pem prefer-server-ciphers
            default_backend nginxssl

        backend nginxssl
            balance roundrobin
            option forwardfor
            reqadd X-Forwarded-Proto:\ https
            server nginxssl1 127.0.0.1:9443

  • Linux CentOS 6 becomes unavailable from time to time - OS & network issue

    - by adoado0
    I am encountering the following problem. There is one server (DL160 G5) running CentOS 6.3 with the default kernel 2.6.32-220.2.1.el6.x86_64 - at this point I'd like to add that the issue also appeared on an older version, 6.1, with an older kernel (I do not remember exactly which). cPanel is installed, and from time to time the server becomes unavailable (network connection). What I've checked (via KVMoIP):

    - load average is completely normal
    - it does not lack memory or disk space
    - when the problem occurs there are no console notifications
    - checked all access logs; there is no sign that it could be caused by a client script
    - cannot even access the local interface (127.0.0.1) or the main IP address
    - running tcpdump, I can only see packets arriving at the server - no responses
    - all services seem to be running properly (mail, SQL, HTTP, SSH)
    - checked crontab and all clients' crontabs too
    - network port utilisation is low (up to several Mbit)
    - the arriving packet rate is low - hundreds per second (according to tcpdump)
    - the console (via KVMoIP) works fine, no lags
    - there is no conntrack on this server
    - there is no IPv6 on this server
    - flushing iptables and unloading modules does not resolve the problem
    - restarting the network does not resolve the problem, and no errors appear
    - it occurs whether two separate networks (and multiple gateways) are configured, or a single IP, one default gateway and one network - so it seems independent of network configuration
    - it seems to repeat randomly (independent of load, packet rate or bandwidth usage)
    - checked the server with different rootkit detection tools - it seems to be clean
    - the server has been rebooted; it did not change anything
    - there are no interface errors
    - it appears randomly; it can be once a week or several times per day

    It usually works fine again after 1-15 minutes. What else can I check? It is definitely an OS issue - when the problem occurs there is traffic on the interface in only one direction, and I cannot even ping loopback. Any ideas? Recommended checks? Anything I did not check above?

  • Developing a high-performance and scalable Zend Framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea), and we are having some concerns regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator - I understand this will create problems with scalability on multiple servers) + jQuery, with a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are: What do you think about my approach? What solutions would you recommend in terms of the server stack (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic, using PHP? I would be interested in your approach. Note: I'm a Zend Framework developer and don't have too much experience with the server side (to determine what would be the best solution for my scalable application).

  • Outbound ports to allow through firewall - core requirements

    - by dunxd
    This question was asked before, but in a rather general way. I'm asking more specifically based on my current requirements. We have a number of remote offices made up of a bunch of PCs and an ASA 5505 which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, asides from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic to ports used by applications we know we need. I'm putting a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request. But I want to leave open the most commonly required ports so we can get up and running without waiting on other folks technical staff to get back. I plan to allow the following TCP ports to support email and web access, which I know everyone will need: POP3 (110 and 995) HTTP (80 and 443) IMAP4 (143 and 993) SMTP (25 and and 465) The question really is, what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up? Just to note - I'll also be setting up monitoring systems to keep an eye on the ports we do allow. Any of the above could be misused of course. We'll also back all this up with signed agreements. But I'm aiming for a technical solutions where I don't have to start out with the full requirements of everyone we share connections with. See also: outbound ports that are always open

  • Single application through OpenVPN tunnel (Debian Lenny)

    - by user14124
    I'm using Debian Lenny and I want to tunnel rtorrent, and only rtorrent, through an OpenVPN tunnel. I have a tunnel running; the config file looks like this:

        client
        dev tun
        proto udp
        remote openvpn.xxx.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca /etc/openvpn/xxx/keys/ca.crt
        cert /etc/openvpn/xxx/keys/client.crt
        key /etc/openvpn/xxx/keys/client.key
        tls-auth /etc/openvpn/xxx/keys/tls.key 1
        ns-cert-type server
        comp-lzo
        verb 3
        auth-user-pass
        script-security 3
        reneg-sec 0

    My idea is that I could run a sockd proxy internally that redirects traffic to the OpenVPN tunnel, and use the *nix "proxifier" application tsocks to make it possible for rtorrent to connect through that proxy (as rtorrent doesn't support proxies). I'm having trouble configuring sockd because my IP inside the VPN changes every time I connect. This is a config file someone said would help: http://ircpimps.org/sockd.conf. As my IP changes on each connect, I don't know what to put in that config file. I have no control over the host side config file. Any help wanted. Any other method is very welcome.
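
    A hedged thought on the changing-IP problem: Dante's sockd lets both sides of the proxy be tied to fixed things - the listening side to localhost, and the outgoing side to an interface name rather than an address - so the config need not change between connects at all. A minimal sketch (directives from Dante's documented syntax, untested on Lenny):

        # Append the two address lines sockd needs for this setup:
        cat >> /etc/sockd.conf <<'EOF'
        internal: 127.0.0.1 port = 1080
        external: tun0
        EOF

    tsocks would then be pointed at server = 127.0.0.1, server_port = 1080, and rtorrent launched as "tsocks rtorrent".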

  • Cisco ASA 5505 - InterVLAN NAT Exemptions Implementation not working

    - by Brandon Bearden
    Short version: we cannot communicate between our subnets. We have a Cisco ASA 5505 that we are using as our network router, and a Netgear L3 switch behind it carrying 10 VLANs. Each VLAN is on its own subnet (10.0.10.x/24, 10.0.11.x/24, etc.), so: ASA -> switch -> hosts. We have PAT for each subnet to our outside interface, and each subnet NATs out properly. I have NAT exemption enabled for 2 of the subnets (eventually I will need it for all, but I am just testing at the moment). The config is here: http://pastebin.com/pDsG7hsh. I have tried multiple ways to get the NAT exemption to allow all traffic from our inside VLANs. At this point I am trying to get "Engineering" to communicate with all hosts on "AuthUser". I can ping some hosts, but not as many as when I am directly on the interface. I can reach a service on port 80, but not 443. I cannot access anything via hostname or NetBIOS. What am I missing to allow higher security level interfaces to fully communicate with lower security level interfaces? Thx!

  • Throughput and why do ISPs sell too much bandwidth?

    - by jonescb
    I hope the question made sense the way I worded it. :) I've been wondering: maximum theoretical TCP throughput is RWIN/RTT (window size / round-trip time) (Source 1 and Source 2). So if a major city only 100 miles away gives me a ping of 50ms, and I have the default 64 KB TCP window size, then my maximum throughput will be about 10.5 Mbit/s (65,536 bytes / 0.05 s). Everything further away would give me a higher ping, and therefore lower throughput. Is there any reason to buy something like FiOS with a 50 Mb/s or greater connection? Will you ever be able to reach that kind of speed? I know you can increase the TCP window size to increase throughput, but it has to be done at both ends, which is a deal breaker because you can't control the server. I'm assuming other network protocols like UDP aren't quite as affected by latency as TCP is, but how much of overall network traffic is non-TCP vs TCP? Am I just misguided about how throughput works? But if the above is correct, then why should a consumer like me buy way more bandwidth than can realistically be used? Maybe the only reason is downloading multiple things at once, or one thing from multiple servers/peers?
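
    The arithmetic behind those numbers, as a quick sanity check (assumed values only):

        # Max throughput for a 64 KB window at 50 ms RTT
        awk 'BEGIN { rwin = 65536; rtt = 0.050;
                     printf "max throughput: %.1f Mbit/s\n", rwin * 8 / rtt / 1e6 }'

        # Window needed to fill a 50 Mbit/s link at the same RTT
        awk 'BEGIN { bw = 50e6; rtt = 0.050;
                     printf "required window: %.0f KB\n", bw / 8 * rtt / 1024 }'

    The first prints about 10.5 Mbit/s, the second about 305 KB - the bandwidth-delay product that the TCP window scaling option (negotiated automatically by modern stacks) exists to cover.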
