Search Results

Search found 1671 results on 67 pages for 'packets'.

Page 14/67

  • OpenVPN not sending traffic to internet?

    - by coleifer
    I've set up openvpn on my pi and am running into a small issue. I can connect to the VPN server and ping it just fine, and I can also connect to other machines on my local network. However I am unable, when connected to the VPN, to reach the outside world (either by name lookup or IP). here are the details: On the server the tun0 interface: tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 10.8.0.1 netmask 255.255.255.255 destination 10.8.0.2 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 100 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 I can ping it just fine: # ping -c 3 10.8.0.1 PING 10.8.0.1 (10.8.0.1) 56(84) bytes of data. 64 bytes from 10.8.0.1: icmp_seq=1 ttl=64 time=0.159 ms 64 bytes from 10.8.0.1: icmp_seq=2 ttl=64 time=0.155 ms 64 bytes from 10.8.0.1: icmp_seq=3 ttl=64 time=0.156 ms --- 10.8.0.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2002ms Routing table # ip route show default via 192.168.1.1 dev eth0 metric 204 10.8.0.0/24 via 10.8.0.2 dev tun0 10.8.0.2 dev tun0 proto kernel scope link src 10.8.0.1 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 204 I also have ip traffic forwarding: net.ipv4.ip_forward = 1 I do not have any custom iptables rules (that I'm aware of). On the client, I can connect to the VPN. Here is my tun0: tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 10.8.0.6 netmask 255.255.255.255 destination 10.8.0.5 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 100 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 21 bytes 1527 (1.4 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 And on the client I can ping it: sudo ping -c 3 10.8.0.6 PING 10.8.0.6 (10.8.0.6) 56(84) bytes of data. 64 bytes from 10.8.0.6: icmp_seq=1 ttl=64 time=0.035 ms 64 bytes from 10.8.0.6: icmp_seq=2 ttl=64 time=0.026 ms 64 bytes from 10.8.0.6: icmp_seq=3 ttl=64 time=0.032 ms --- 10.8.0.6 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1998ms rtt min/avg/max/mdev = 0.026/0.031/0.035/0.003 ms I can ssh from the client into another server on my LAN (192.168.1.x), however I cannot reach anything outside my LAN. Here's some of the server logs at the bottom of this gist: https://gist.github.com/coleifer/6ef95c3008f130249933/edit I am frankly out of ideas! I don't think it's my client because both my laptop and my phone (which has an openvpn client) exhibit the same behavior. I had OpenVPN installed on this pi before using debian and it worked, so I don't think it's my router but of course anything is possible.
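    Given that forwarding is on and the 10.8.0.0/24 route exists, the usual missing piece for "LAN works but the internet doesn't" is NAT for the tunnel subnet on the Pi's uplink (or, alternatively, a static route for 10.8.0.0/24 on the upstream router), plus a redirect-gateway push if clients should send all traffic through the tunnel. A minimal hedged sketch, assuming eth0 is the Pi's uplink and the standard 10.8.0.0/24 tunnel subnet:

      # masquerade VPN client traffic as it leaves the Pi's uplink
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
      # in the OpenVPN server config, send clients a default route and a resolver
      #   push "redirect-gateway def1 bypass-dhcp"
      #   push "dhcp-option DNS 8.8.8.8"

    Without the NAT rule, replies from the internet are addressed to 10.8.0.x and die at the 192.168.1.1 router, which matches the symptoms described.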

    Read the article

  • CentOS does not open port(s) after the rule(s) are appended

    - by Charlie Dyason
    So after some battling and struggling with the firewall, i see that I may be doing something or the firewall isnt responding correctly there is has a port filter that is blocking certain ports. by the way, I have combed the internet, posted on forums, done almost everything and now hence the website name "serverfault", is my last resort, I need help What I hoped to achieve is create a pptp server to connect to with windows/linux clients UPDATED @ bottom Okay, here is what I did: I made some changes to my iptables file, giving me endless issues and so I restored the iptables.old file contents of iptables.old: # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT after iptables.old restore(back to stock), nmap scan shows: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 13:54 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.014s latency). Not shown: 997 filtered ports PORT STATE SERVICE 22/tcp open ssh 113/tcp closed ident 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 4.95 seconds if I append rule: (to accept all tcp ports incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 13:58 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.017s latency). Not shown: 858 filtered ports, 139 closed ports PORT STATE SERVICE 22/tcp open ssh 443/tcp open https 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 3.77 seconds *notice it allows and opens port 443 but no other ports, and it removes port 113...? removing previous rule and if I append rule: (allow and open port 80 incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -p tcp --dport 80 -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:01 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.014s latency). Not shown: 996 filtered ports PORT STATE SERVICE 22/tcp open ssh 80/tcp closed http 113/tcp closed ident 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 5.12 seconds *notice it removes port 443 and allows 80 but is closed without removing previous rule and if I append rule: (allow and open port 1723 incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -p tcp --dport 1723 -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:05 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.015s latency). Not shown: 996 filtered ports PORT STATE SERVICE 22/tcp open ssh 80/tcp closed http 113/tcp closed ident 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 5.16 seconds *notice no change in ports opened or closed??? after removing rules: iptables -A INPUT -i eth0 -m tcp -p tcp --dport 80 -j ACCEPT iptables -A INPUT -i eth0 -m tcp -p tcp --dport 1723 -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:07 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.015s latency). 
Not shown: 998 filtered ports PORT STATE SERVICE 22/tcp open ssh 113/tcp closed ident Nmap done: 1 IP address (1 host up) scanned in 5.15 seconds and returning rule: (to accept all tcp ports incoming to server on interface eth0) iptables -A INPUT -i eth0 -m tcp -j ACCEPT nmap output: nmap [server ip] Starting Nmap 6.00 ( nmap.org ) at 2013-11-01 14:07 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.017s latency). Not shown: 858 filtered ports, 139 closed ports PORT STATE SERVICE 22/tcp open ssh 443/tcp open https 8008/tcp open http Nmap done: 1 IP address (1 host up) scanned in 3.87 seconds notice the eth0 changes the 999 filtered ports to 858 filtered ports, 139 closed ports QUESTION: why cant I allow and/or open a specific port, eg. I want to allow and open port 443, it doesnt allow it, or even 1723 for pptp, why am I not able to??? sorry for the layout, the editor was give issues (aswell... sigh) UPDATE @Madhatter comment #1 thank you madhatter in my iptables file: # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i eth0 -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT # ----------all rules mentioned in post where added here ONLY!!!---------- -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT if I want to allow and open port 1723 (or edit iptables to allow a pptp connection from remote pc), what changes would I make? (please bear with me, my first time working with servers, etc.) Update MadHatter comment #2 iptables -L -n -v --line-numbers Chain INPUT (policy ACCEPT 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 9 660 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 3 0 0 ACCEPT all -- eth0 * 0.0.0.0/0 0.0.0.0/0 4 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 5 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 6 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT 6 packets, 840 bytes) num pkts bytes target prot opt in out source destination just on a personal note, madhatter, thank you for the support , I really appreciate it! 
UPDATE MadHatter comment #3 here are the interfaces ifconfig eth0 Link encap:Ethernet HWaddr 00:1D:D8:B7:1F:DC inet addr:[server ip] Bcast:[server ip x.x.x].255 Mask:255.255.255.0 inet6 addr: fe80::21d:d8ff:feb7:1fdc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:36692 errors:0 dropped:0 overruns:0 frame:0 TX packets:4247 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2830372 (2.6 MiB) TX bytes:427976 (417.9 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) remote nmap nmap -p 1723 [server ip] Starting Nmap 6.00 ( http://nmap.org ) at 2013-11-01 16:17 SAST Nmap scan report for server.address.net ([server ip]) Host is up (0.017s latency). PORT STATE SERVICE 1723/tcp filtered pptp Nmap done: 1 IP address (1 host up) scanned in 0.51 seconds local nmap nmap -p 1723 localhost Starting Nmap 5.51 ( http://nmap.org ) at 2013-11-01 16:19 SAST Nmap scan report for localhost (127.0.0.1) Host is up (0.000058s latency). Other addresses for localhost (not scanned): 127.0.0.1 PORT STATE SERVICE 1723/tcp open pptp Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds UPDATE MadHatter COMMENT POST #4 I apologize, if there might have been any confusion, i did have the rule appended: (only after 3rd post) iptables -A INPUT -p tcp --dport 1723 -j ACCEPT netstat -apn|grep -w 1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1142/pptpd There are not VPN's and firewalls between the server and "me" UPDATE MadHatter comment #5 So here is an intersting turn of events: I booted into windows 7, created a vpn connection, went through the verfication username & pword - checking the sstp then checking pptp (went through that very quickly which meeans there is no problem), but on teh verfication of username and pword (before registering pc on network), it got stuck, gave this error Connection failed with error 2147943625 The remote computer refused the network connection netstat -apn | grep -w 1723 before connecting: netstat -apn |grep -w 1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd after the error came tried again: netstat -apn |grep -w 1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd tcp 0 0 41.185.26.238:1723 41.13.212.47:49607 TIME_WAIT - I do not know what it means but seems like there is progress..., any thoughts???
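    One detail worth checking before anything else: iptables -A appends at the end of the chain, which on this ruleset is after the catch-all "REJECT --reject-with icmp-host-prohibited", so rules added at runtime can never match; they have to be inserted above the REJECT (or written into /etc/sysconfig/iptables before it and the service restarted). A hedged sketch for PPTP, using the rule positions from the listing above (TCP 1723 plus the GRE protocol that PPTP data travels over):

      # insert above the catch-all REJECT, which is rule 6 in the listing shown
      iptables -I INPUT 6 -p tcp -m tcp --dport 1723 -m state --state NEW -j ACCEPT
      iptables -I INPUT 6 -p gre -j ACCEPT
      service iptables save   # persist across reboots on CentOS 6

    If ports still show as "filtered" from outside with an ACCEPT in place, the filtering is happening somewhere upstream of the box and no local iptables rule will open them.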

    Read the article

  • Abnormally high amount of Transmit discards reported by Solarwinds for multiple switches

    - by Jared
    I have several 3750X Cisco switches that, according to our Solarwinds NPM, are producing billions of transmit discards per day. I'm not sure why it's reporting these discards. Many of the ports on the 3750X's have 2960's connected to them and are hardcoded as trunk ports. Solarwinds NPM version 10.3 Cisco IOS version 12.2(58)SE2 Total output drops: 29139431: GigabitEthernet1/0/43 is up, line protocol is up (connected) Hardware is Gigabit Ethernet, address is XXXX (bia XXXX) Description: XXXX MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX input flow-control is off, output flow-control is unsupported ARP type: ARPA, ARP Timeout 04:00:00 Last input 00:00:47, output 00:00:50, output hang never Last clearing of "show interface" counters 1w4d Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 29139431 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 35000 bits/sec, 56 packets/sec 51376 packets input, 9967594 bytes, 0 no buffer Received 51376 broadcasts (51376 multicasts) 0 runts, 0 giants, 0 throttles 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored 0 watchdog, 51376 multicast, 0 pause input 0 input packets with dribble condition detected 115672302 packets output, 8673778028 bytes, 0 underruns 0 output errors, 0 collisions, 0 interface resets 0 unknown protocol drops 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier, 0 pause output 0 output buffer failures, 0 output buffers swapped out sh controllers gigabitEthernet 1/0/43 utilization: Receive Bandwidth Percentage Utilization : 0 Transmit Bandwidth Percentage Utilization : 0
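    A pattern worth ruling out on 3750-series switches: output drops climbing while average utilization is near zero usually means microbursts overflowing the small per-port egress buffers, and the effect is noticeably worse when QoS is enabled because the buffer pool is carved into per-queue slices. Some hedged IOS checks (exact syntax varies by release):

      show mls qos
      show mls qos interface gi1/0/43 statistics
      show platform port-asic stats drop gi1/0/43

    The first shows whether QoS is globally enabled, the second shows per-queue enqueue/drop counters, and the third shows ASIC-level drops. If one egress queue is taking all the drops, tuning the queue-set buffer/threshold allocation (or disabling QoS where it is not needed) is the usual next step. The mismatch between "billions per day" in NPM and ~29M over 11 days on the switch itself also suggests checking how NPM computes the delta from the polled ifOutDiscards counter.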

    Read the article

  • Asterisk terminating outbound call when picked up, sends 'BYE' message

    - by vo
    I'm running Asterisk 1.6.1.10 / FreePBX 2.5.2.2 and I've got an outbound trunk set up. Everything used to work fine until recently (perhaps due to the upgrade to FC12, or something else; I'm not sure). Anyway, the setup does not appear to have issues registering and setting up the call; RTP packets go both ways and you can hear the ringing from the other side. However, it appears that when the call is picked up (or thereabouts), the incoming RTP packets cease. Upon closer inspection with Wireshark, these particular packets seem to be the cause: trunk->asterisk SIP/SD Status: 200 OK, with session description asterisk->trunk SIP Request: ACK sip:<phone>@trunk:6889 asterisk->trunk SIP Request: BYE sip:<phone>@trunk:6889 [..about a dozen RTP packets in/outbound..] trunk->asterisk SIP Status: 200 OK, CSeq: 104 Bye [..outbound RTP continues, phone is silent..] Then the inbound RTP packets cease; however, the Asterisk logs don't show any activity at this point. The last entry reads 'SIP/ is answered SIP/'. Then when you hang up the extension, you get asterisk->trunk SIP Request: BYE sip:<phone>@trunk:6889 trunk->asterisk SIP Status: 481 Call Leg/Transaction does not exist My trunk peer settings in FreePBX are: username=<user> fromuser=<user> canreinvite=no type=friend secret=<pass> qualify=no [qualify yes produces 401/forbidden messages] nat=yes insecure=very host=<sip trunk gateway> fromdomain=<sip trunk gateway> disallow=all context=from-pstn allow=ulaw dtmfmode=inband Under sip_general_custom.conf I have stunaddr=stun.xten.com externrefresh=120 localnet=192.168.1.1/255.255.255.0 nat=yes What's causing Asterisk to prematurely end the call while still thinking the call is in progress? I have no idea where to look next.
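    Since Asterisk ACKs the 200 OK and then immediately issues the BYE itself, the next step is usually to capture the SIP and RTP negotiation from the Asterisk side to see which channel initiates the hangup and why. A hedged diagnostic sketch from the Asterisk 1.6 CLI (command names can differ slightly between releases):

      asterisk -rvvv
      core set verbose 5
      sip set debug on
      rtp set debug on

    With SIP and RTP debugging on, the console should show which leg generates the BYE toward port 6889 and whether inbound RTP really stops first. With canreinvite=no already set, the usual remaining suspects are the STUN/externip handling in sip_general_custom.conf rewriting the SDP mid-call, or the provider objecting to the inband-DTMF/ulaw combination.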

    Read the article

  • IP address reuse on macvlan devices

    - by Alex Bubnoff
    I'm trying to create easy to use and possibly simple testing environment for some product and got some strange behaviour of macvlan's. What I'm trying to achieve: make a toolset for one-line start/stop of lxc containers(via docker) bound to external ip(I have enough of it on host machine). So, I'm doing something like this: docker run -d -name=container_name container_image pipework eth1 container_name ip/prefix_len@gateway and pipework here does this: GUEST_IFNAME=ph$NSPID$eth1 ip link add link eth1 dev $GUEST_IFNAME type macvlan mode bridge ip link set eth1 up ip link set $GUEST_IFNAME netns $NSPID ip netns exec $NSPID ip link set $GUEST_IFNAME name eth1 ip netns exec $NSPID ip addr add $IPADDR dev eth1 ip netns exec $NSPID ip route delete default ip netns exec $NSPID ip link set eth1 up ip netns exec $NSPID ip route replace default via $GATEWAY ip netns exec $NSPID arping -c 1 -A -I eth1 $IPADDR And it works for first time per IP. But for second time and later packets for containers IP isn't getting into container, while all configuration seem fine. So it looks like this: External machine ? ping 212.76.131.212 ....silence.... Host machine root@ubuntu:~# ip link show eth1 2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip addr show eth1 2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# tcpdump -v -i eth1 icmp tcpdump: WARNING: eth1: no IPv4 address assigned tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes 00:00:46.542042 IP (tos 0x0, ttl 60, id 9623, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2345, length 64 00:00:47.549969 IP (tos 0x0, ttl 60, id 9624, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2346, length 64 00:00:48.558143 IP (tos 0x0, ttl 60, id 9625, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2347, length 64 00:00:49.566319 IP (tos 0x0, ttl 60, id 9626, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2348, length 64 00:00:50.573999 IP (tos 0x0, ttl 60, id 9627, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 212.76.131.212: ICMP echo request, id 6718, seq 2349, length 64 ^C 5 packets captured 5 packets received by filter 0 packets dropped by kernel 1 packet dropped by interface Host machine, netns of container root@ubuntu:~# ip netns exec 32053 ip link show eth1 48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip netns exec 32053 ip addr show eth1 48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff inet 212.76.131.212/29 scope global eth1 inet6 fe80::b012:f7ff:fecc:a19d/64 scope link valid_lft forever preferred_lft forever root@ubuntu:~# ip netns exec 32053 tcpdump -v -i eth1 icmp tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes ....silence.... ^C 0 packets captured 0 packets received by filter 0 packets dropped by kernel So, can anyone say, what can it be? Can this be caused by not a bug in macvlan implementation? Is there any tools I can use to debug that configuration?
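    One hedged explanation that fits "works only for the first container per IP": the upstream gateway still holds an ARP entry for the address pointing at the previous (now deleted) macvlan MAC, so the echo requests seen on eth1 are addressed to a MAC that no longer exists and never reach the new container. Re-announcing the address from the new device and clearing stale neighbour state is the usual workaround; a sketch assuming the namespace ID and address from the question:

      # gratuitous/unsolicited ARP from inside the container's namespace
      ip netns exec 32053 arping -c 3 -U -I eth1 212.76.131.212
      # on the host, drop any stale local neighbour entry for the address
      ip neigh flush to 212.76.131.212

    If the stale entry lives on the provider's gateway rather than locally, it has to expire or be cleared there; comparing the destination MAC of the incoming ICMP frames (tcpdump -e) against the new container's MAC will show whether this is what is happening.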

    Read the article

  • LXC container can only access host via bridge

    - by vitaut
    I have an LXC container with i686 Ubuntu 12.04 running on a x86_64 Ubuntu 12.04 host. I've set up a bridge using instructions here. However the ping from the container only goes through to the host and not to other machines on the local network. Similarly only the host and not the other machines see the container OS. The host's /etc/network/interfaces file looks as follows: auto lo iface lo inet loopback iface eth0 inet manual auto br0 iface br0 inet dhcp bridge_ports eth0 bridge_fd 0 bridge_maxwait 0 The container's /etc/network/interfaces file looks as follows: auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp And here's the relevant part of the container's config: lxc.network.type=veth lxc.network.link=br0 lxc.network.flags=up Any ideas what I'm doing wrong? Additional info: The output of iptables-save on host: $ sudo iptables-save # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013 *filter :INPUT ACCEPT [6854:721708] :FORWARD ACCEPT [4067:538895] :OUTPUT ACCEPT [4967:522405] COMMIT # Completed on Sat Oct 26 06:06:48 2013 # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013 *nat :PREROUTING ACCEPT [82235:21547307] :INPUT ACCEPT [16:1070] :OUTPUT ACCEPT [9386:583359] :POSTROUTING ACCEPT [14693:1291952] -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE COMMIT # Completed on Sat Oct 26 06:06:48 2013 The output of brctl show on host: $ brctl show bridge name bridge id STP enabled interfaces br0 8000.080027409684 no eth0 vethBkwWyV The output of ifconfig br0 on host: $ ifconfig br0 br0 Link encap:Ethernet HWaddr 08:00:27:40:96:84 inet addr:192.168.1.11 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:232863 errors:0 dropped:0 overruns:0 frame:0 TX packets:59518 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:34437354 (34.4 MB) TX bytes:198492871 (198.4 MB) The output of ifconfig eth0 on host: $ ifconfig eth0 eth0 Link encap:Ethernet HWaddr 08:00:27:40:96:84 inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:299419 errors:0 dropped:0 overruns:0 frame:0 TX packets:203569 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:59077446 (59.0 MB) TX bytes:372056540 (372.0 MB) The output of ifconfig eth0 on container: $ ifconfig eth0 eth0 Link encap:Ethernet HWaddr 00:16:3e:74:08:2b inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::216:3eff:fe74:82b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:81 errors:0 dropped:0 overruns:0 frame:0 TX packets:113 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:8506 (8.5 KB) TX bytes:9021 (9.0 KB)
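    The 08:00:27 MAC prefix suggests the LXC host is itself a VirtualBox VM; in that case a common cause of "only the host can reach the container" is the VM's NIC refusing promiscuous mode, so frames addressed to the container's 00:16:3e MAC are dropped before they ever reach br0. A hedged check/fix on the physical machine running VirtualBox (the VM name and adapter number are assumptions):

      # with the VM powered off
      VBoxManage modifyvm "lxc-host-vm" --nicpromisc1 allow-all

    If the host is on real hardware instead, the same symptom can come from a physical switch or wireless AP filtering unknown source MACs, which bridging cannot work around.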

    Read the article

  • How to use a CLEAR USB internet connection in Ubuntu (host) and Windows XP (guest) using VirtualBox

    - by bithacker
    I'm trying to use CLEAR Motorola WiMax USB in Ubuntu as there is no support for linux as yet. I've installed windowsxp as guest in ubuntu and the version I'm using is 3.2.2. USB is connecting fine in WindowsXP but I can't use internet in Ubuntu. Can you please tell me how to do it. Here is the configuration that could help you guys. Thanks in advance. I'm using Two Network Adapters. Network Adapter 1: PCnet-FAST III (NAT) Adapter 2: PCnet-FAST III (Host-only adapter, 'vboxnet0') ipconfig [on Guest windowsXP] Windows IP Configuration Ethernet adapter Local Area Connection: PCnet-FAST III (NAT) Connection-specific DNS Suffix . : IP Address. . . . . . . . . . . . : 10.0.2.15 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 10.0.2.2 Ethernet adapter Local Area Connection 3: PCnet-FAST III (Host-only adapter, 'vboxnet0') Connection-specific DNS Suffix . : IP Address. . . . . . . . . . . . : 192.168.56.101 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Ethernet adapter Local Area Connection 2: Connection-specific DNS Suffix . : CLEAR Motorola USB IP Address. . . . . . . . . . . . : 10.168.242.33 Subnet Mask . . . . . . . . . . . : 255.255.192.0 Default Gateway . . . . . . . . . : 10.168.192.2 IFCONFIG [on Host Ubuntu] (Ethernet) eth0 Link encap:Ethernet HWaddr 00:14:22:b9:9d:76 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 eth1 (Wireless) Link encap:Ethernet HWaddr 00:13:ce:f0:9b:0d inet6 addr: fe80::213:ceff:fef0:9b0d/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:1 errors:0 dropped:5 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:84 (84.0 B) Interrupt:17 Base address:0xe000 Memory:dfcff000-dfcfffff lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:2292 errors:0 dropped:0 overruns:0 frame:0 TX packets:2292 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:171952 (171.9 KB) TX bytes:171952 (171.9 KB) vboxnet0 Link encap:Ethernet HWaddr 0a:00:27:00:00:00 inet addr:192.168.56.1 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::800:27ff:fe00:0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:137 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:21174 (21.1 KB)
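    Since only the XP guest can drive the WiMax dongle, one workaround (a sketch, not a tested recipe) is to share the guest's connection back to the host: enable Internet Connection Sharing in XP on the "Local Area Connection 2" (CLEAR) adapter, pointing it at the host-only adapter, then route the Ubuntu host through the guest's host-only address. Note that ICS may renumber the shared adapter to 192.168.0.1, in which case the vboxnet0 addressing has to follow suit:

      # on the Ubuntu host, after enabling ICS inside the XP guest
      sudo ip route replace default via 192.168.56.101   # the XP guest's host-only IP from the question
      echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf   # temporary DNS override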

    Read the article

  • Stop duplicate icmp echo replies when bridging to a dummy interface?

    - by mbrownnyc
    I recently configured a bridge br0 with members as eth0 (real if) and dummy0 (dummy.ko if). When I ping this machine, I receive duplicate replies as: # ping SERVERA PING SERVERA.domain.local (192.168.100.115) 56(84) bytes of data. 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=113 ms 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=114 ms (DUP!) 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms (DUP!) Using tcpdump on SERVERA, I was able to see icmp echo replies being sent from eth0 and br0 itself as follows (oddly two echo request packets arrive "from" my Windows box myhost): 23:19:05.324192 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40 23:19:05.324212 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 23:19:05.324217 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40 23:19:05.324221 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 23:19:05.324264 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 23:19:05.324272 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 It's worth noting, testing reveals that hosts on the same physical switch do not see DUP icmp echo responses (a host on the same VLAN on another switch does see a dup icmp echo response). I've read that this could be due to the ARP table of a switch, but I can't find any info directly related to bridges, just bonds. I have a feeling my problem lay in the stack on linux, not the switch, but am opened to any suggestions. The system is running centos6/el6 kernel 2.6.32-71.29.1.el6.i686. How do I stop ICMP echo replies from being sent in duplicate when dealing with a bridge interface/bridged interfaces? Thanks, Matt [edit] Quick note: It was recommended in #linux to: [08:53] == mbrownnyc [gateway/web/freenode/] has joined ##linux [08:57] <lkeijser> mbrownnyc: what happens if you set arp_ignore to 1 for the dummy interface? [08:59] <lkeijser> also set arp_announce to 2 for that interface [09:24] <mbrownnyc> lkeijser: I set arp_annouce to 2, arp_ignore to 2 in /etc/sysctl.conf and rebooted the machine... verifying that the bits are set after boot... the problem is still present I did this and came up empty. Same dup problem. I will be moving away from including the dummy interface in the bridge as: [09:31] == mbrownnyc [gateway/web/freenode/] has joined #Netfilter [09:31] <mbrownnyc> Hello all... I'm wondering, is it correct that even with an interface in PROMISC that the kernel will drop /some/ packets before they reach applications? [09:31] <whaffle> What would you make think so? [09:32] <mbrownnyc> I ask because I am receiving ICMP echo replies after configuring a bridge with a dummy interface in order for ipt_netflow to see all packets, only as reported in it's documentation: http://ipt-netflow.git.sourceforge.net/git/gitweb.cgi?p=ipt-netflow/ipt-netflow;a=blob;f=README.promisc [09:32] <mbrownnyc> but I do not know if PROMISC will do the same job [09:33] <mbrownnyc> I was referred here from #linux. 
any assistance is appreciated [09:33] <whaffle> The following conditions need to be met: PROMISC is enabled (bridges and applications like tcpdump will do this automatically, otherwise they won't function). [09:34] <whaffle> If an interface is part of a bridge, then all packets that enter the bridge should already be visible in the raw table. [09:35] <mbrownnyc> thanks whaffle PROMISC must be set manually for ipt_netflow to function, but [09:36] <whaffle> promisc does not need to be set manually, because the bridge will do it for you. [09:36] <whaffle> When you do not have a bridge, you can easily create one, thereby rendering any kernel patches moot. [09:36] <mbrownnyc> whaffle: I speak without the bridge [09:36] <whaffle> It is perfectly valid to have a "half-bridge" with only a single interface in it. [09:36] <mbrownnyc> whaffle: I am unfamiliar with the raw table, does this mean that PROMISC allows the raw table to be populated with packets the same as if the interface was part of a bridge? [09:37] <whaffle> Promisc mode will cause packets with {a dst MAC address that does not equal the interface's MAC address} to be delivered from the NIC into the kernel nevertheless. [09:37] <mbrownnyc> whaffle: I suppose I mean to clearly ask: what benefit would creating a bridge have over setting an interface PROMISC? [09:38] <mbrownnyc> whaffle: from your last answer I feel that the answer to my question is "none," is this correct? [09:39] <whaffle> Furthermore, the linux kernel itself has a check for {packets with a non-local MAC address}, so that packets that will not enter a bridge will be discarded as well, even in the face of PROMISC. [09:46] <mbrownnyc> whaffle: so, this last bit of information is quite clearly why I would need and want a bridge in my situation [09:46] <mbrownnyc> okay, the ICMP echo reply duplicate issue is likely out of the realm of this channel, but I sincerely appreciate the info on the kernels inner-workings [09:52] <whaffle> mbrownnyc: either the kernel patch, or a bridge with an interface. Since the latter is quicker, yes [09:54] <mbrownnyc> thanks whaffle [edit2] After removing the bridge, and removing the dummy kernel module, I only had a single interface chilling out, lonely. I still received duplicate icmp echo replies... in fact I received a random amount: http://pastebin.com/2LNs0GM8 The same thing doesn't happen on a few other hosts on the same switch, so it has to do with the linux box itself. I'll likely end up rebuilding it next week. Then... you know... this same thing will occur again. [edit3] Guess what? I rebuilt the box, and I'm still receiving duplicate ICMP echo replies. Must be the network infrastructure, although the ARP tables do not contain multiple entries. [edit4] How ridiculous. The machine was a network probe, so I was (ingress and egress) mirroring an uplink port to a node that was the NIC. So, the flow (must have) gone like this: ICMP echo request comes in through the mirrored uplink port. (the real) ICMP echo request is received by the NIC (the mirrored) ICMP echo request is received by the NIC ICMP echo reply is sent for both. I'm ashamed of myself, but now I know. It was suggested on #networking to either isolate the mirrored traffic to an interface that does not have IP enabled, or tag the mirrored packets with dot1q.
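    Given the conclusion in edit4 (the NIC received both the real frames and the mirrored copies, so the kernel answered twice), the usual arrangement is the one suggested on #networking: dedicate a capture-only interface to the mirror port, with no IP address, ARP disabled, and promiscuous mode on, so the monitoring stack sees everything but the kernel never replies from it. A minimal sketch, assuming eth1 is the interface patched to the mirror port:

      ip addr flush dev eth1          # no IP bound to the capture interface
      ip link set eth1 arp off        # never answer ARP on it
      ip link set eth1 promisc on
      ip link set eth1 up

    Management traffic then stays on a separate, non-mirrored interface, which also removes the need for the dummy/bridge workaround from the ipt_netflow README.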

    Read the article

  • With CentOS 6 and LXC, "ifconfig" is unable to see network interface (but busybox "ifconfig" works fine)

    - by larsks
    I've just started working with LXC under CentOS 6 (via the libvirt adapter). If I create an LXC container, I'm unable to see any network interfaces when using the native system tools: # ifconfig -a # The behavior is very odd; specifying an interface by names yields neither the expected output nor an error message. This is true even for clearly invalid interface names, like this: # ifconfig foo # The ip command exhibits the same behavior. On the other hand, if I use "ifconfig" provided by busybox, everything works as expected: # busybox ifconfig -a eth0 Link encap:Ethernet HWaddr 52:54:00:E0:12:C8 inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:268 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:17814 (17.3 KiB) TX bytes:552 (552.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) So...what does busybox know that the native tools don't? The libvirt config for this environment is pretty standard; the network definition looks like this: <interface type='network'> <mac address='52:54:00:e0:12:c8'/> <source network='default'/> <target dev='veth0'/> </interface> The full configuration is here if you think it might help. I'm running: lxc-0.7.2-2.el6.x86_64 kernel-2.6.32-71.29.1.el6.x86_64 EDIT Weirder and weirder...it's a display issue, not a functionality issue. I can see the output of ifconfig if I pipe it into anything, so for example: # ifconfig eth0 | cat eth0 Link encap:Ethernet HWaddr 52:54:00:E0:12:C8 inet addr:192.168.10.10 Bcast:192.168.10.255 Mask:255.255.255.0 inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:573 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:37914 (37.0 KiB) TX bytes:552 (552.0 b) And in fact even when not piping the output, strace shows that ifconfig is in fact writing the output to file descriptor 1 (aka stdout), so it's not clear why no output is actually showing up. This could be either an LXC or a virsh issue, I guess.
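    While the root cause is being chased, the data itself is reachable: the question already shows that piping forces the output to appear, and iproute2's one-line mode behaves the same way. A couple of hedged workarounds for use inside the container:

      ifconfig -a | cat      # piping makes the output show up, per the question
      ip -o addr show        # one line per interface from iproute2

    Since strace already shows the writes going to file descriptor 1, the remaining suspects are the container's devpts/terminal setup under libvirt-lxc rather than net-tools itself.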

    Read the article

  • SVN Server not responding

    - by Rob Forrest
    I've been bashing my head against a wall with this one all day and I would greatly appreciate a few more eyes on the problem at hand. We have an in-house SVN Server that contains all live and development code for our website. Our live server can connect to this and get updates from the repository. This was all working fine until we migrated the SVN Server from a physical machine to a vSphere VM. Now, for some reason that continues to fathom me, we can no longer connect to the SVN Server. The SVN Server runs CentOS 6.2, Apache and SVN 1.7.2. SELinux is well and trully disabled and the problem remains when iptables is stopped. Our production server does run an older version of CentOS and SVN but the same system worked previously so I don't think that this is the issue. Of note, if I have iptables enabled, using service iptables status, I can see a single packet coming in and being accepted but the production server simply hangs on any svn command. If I give up waiting and do a CTRL-C to break the process I get a "could not connect to server". To me it appears to be something to do with the SVN Server rejecting external connections but I have no idea how this would happen. Any thoughts on what I can try from here? Thanks, Rob Edit: Network topology Production server sits externally to our in-house SVN server. Our IPCop (?) firewall allows connections from it (and it alone) on port 80 and passes the connection to the SVN Server. The hardware is all pretty decent and I don't doubt that its doing its job correctly, especially as iptables is seeing the new connections. subversion.conf (in /etc/httpd/conf.d) LoadModule dav_svn_module modules/mod_dav_svn.so <Location /repos> DAV svn SVNPath /var/svn/repos <LimitExcept PROPFIND OPTIONS REPORT> AuthType Basic AuthName "SVN Server" AuthUserFile /var/svn/svn-auth Require valid-user </LimitExcept> </Location> ifconfig eth0 Link encap:Ethernet HWaddr 00:0C:29:5F:C8:3A inet addr:172.16.0.14 Bcast:172.16.0.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe5f:c83a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:32317 errors:0 dropped:0 overruns:0 frame:0 TX packets:632 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2544036 (2.4 MiB) TX bytes:143207 (139.8 KiB) netstat -lntp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 1484/mysqld tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1135/rpcbind tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1351/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1230/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1575/master tcp 0 0 0.0.0.0:58401 0.0.0.0:* LISTEN 1153/rpc.statd tcp 0 0 0.0.0.0:5672 0.0.0.0:* LISTEN 1626/qpidd tcp 0 0 :::139 :::* LISTEN 1678/smbd tcp 0 0 :::111 :::* LISTEN 1135/rpcbind tcp 0 0 :::80 :::* LISTEN 1615/httpd tcp 0 0 :::22 :::* LISTEN 1351/sshd tcp 0 0 ::1:631 :::* LISTEN 1230/cupsd tcp 0 0 ::1:25 :::* LISTEN 1575/master tcp 0 0 :::445 :::* LISTEN 1678/smbd tcp 0 0 :::56799 :::* LISTEN 1153/rpc.statd iptables --list -v -n (when iptables is stopped) Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination iptables --list -v -n (when iptables is running, after one attempted svn connection) Chain INPUT (policy ACCEPT 68 
packets, 6561 bytes) pkts bytes target prot opt in out source destination 19 1304 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:80 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 17 packets, 1612 bytes) pkts bytes target prot opt in out source destination tcpdump 17:08:18.455114 IP 'production server'.43255 > 'svn server'.local.http: Flags [S], seq 3200354543, win 5840, options [mss 1380,sackOK,TS val 2011458346 ecr 0,nop,wscale 7], length 0 17:08:18.455169 IP 'svn server'.local.http > 'production server'.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 816478 ecr 2011449346,nop,wscale 7], length 0 17:08:19.655317 IP 'svn server'.local.http > 'production server'k.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 817679 ecr 2011449346,nop,wscale 7], length 0
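    The tcpdump trace shows the SYN/ACK leaving the SVN VM and then being retransmitted, i.e. the production server never sees the reply. After a physical-to-VM migration that usually points at routing on the new VM (a missing or wrong default gateway, so replies to the external address go out the wrong way) or an asymmetric path through the IPCop box, rather than at Apache or Subversion. Hedged checks on the SVN VM, with the production server's address as a placeholder:

      ip route show                      # is there a default route via the IPCop/firewall?
      ip route get <production-ip>       # which interface and gateway a reply would use
      traceroute -n <production-ip>      # does the return path leave 172.16.0.0/24 at all?

    If the default route is absent or points at the wrong gateway, adding the correct one (and persisting it in the CentOS network scripts) should let the handshake complete.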

    Read the article

  • Improving TCP performance over a gigabit network with lots of connections and high traffic for storage and streaming services

    - by Linux Guy
    I have two servers, Both servers hardware Specification are Processor : Dual Processor RAM : over 128 G.B Hard disk : SSD Hard disk Outging Traffic bandwidth : 3 Gbps network cards speed : 10 Gbps Server A : for Encoding videos Server B : for storage videos andstream videos over web interface like youtube The inbound bandwidth between two servers is 10Gbps , the outbound bandwidth internet bandwidth is 500Mpbs Both servers using public ip addresses in public and private network Both servers transfer and connection on nginx port , and the server B used for streaming media , like youtube stream videos Both servers in same network , when i do ping from Server A to Server B i got high time latency above 1.0ms , the time range time=52.7 ms to time=215.7 ms - This is the output of iftop utility 353Mb 707Mb 1.04Gb 1.38Gb 1.73Gb mqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqq server.example.com => ip.address 6.36Mb 4.31Mb 1.66Mb <= 158Kb 94.8Kb 35.1Kb server.example.com => ip.address 1.23Mb 4.28Mb 1.12Mb <= 17.1Kb 83.5Kb 21.9Kb server.example.com => ip.address 395Kb 3.89Mb 1.07Mb <= 6.09Kb 109Kb 28.6Kb server.example.com => ip.address 4.55Mb 3.83Mb 1.04Mb <= 55.6Kb 45.4Kb 13.0Kb server.example.com => ip.address 649Kb 3.38Mb 1.47Mb <= 9.00Kb 38.7Kb 16.7Kb server.example.com => ip.address 5.00Mb 3.32Mb 1.80Mb <= 65.7Kb 55.1Kb 29.4Kb server.example.com => ip.address 387Kb 3.13Mb 1.06Mb <= 18.4Kb 39.9Kb 15.0Kb server.example.com => ip.address 3.27Mb 3.11Mb 1.01Mb <= 81.2Kb 64.5Kb 20.9Kb server.example.com => ip.address 1.75Mb 3.08Mb 2.72Mb <= 16.6Kb 35.6Kb 32.5Kb server.example.com => ip.address 1.75Mb 2.90Mb 2.79Mb <= 22.4Kb 32.6Kb 35.6Kb server.example.com => ip.address 3.03Mb 2.78Mb 1.82Mb <= 26.6Kb 27.4Kb 20.2Kb server.example.com => ip.address 2.26Mb 2.66Mb 1.36Mb <= 51.7Kb 49.1Kb 24.4Kb server.example.com => ip.address 586Kb 2.50Mb 1.03Mb <= 4.17Kb 26.1Kb 10.7Kb server.example.com => ip.address 2.42Mb 2.49Mb 2.44Mb <= 31.6Kb 29.7Kb 29.9Kb server.example.com => ip.address 2.41Mb 2.46Mb 2.41Mb <= 26.4Kb 24.5Kb 23.8Kb server.example.com => ip.address 2.37Mb 2.39Mb 2.40Mb <= 28.9Kb 27.0Kb 28.5Kb server.example.com => ip.address 525Kb 2.20Mb 1.05Mb <= 7.03Kb 26.0Kb 12.8Kb qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq TX: cum: 102GB peak: 1.65Gb rates: 1.46Gb 1.44Gb 1.48Gb RX: 1.31GB 24.3Mb 19.5Mb 18.9Mb 20.0Mb TOTAL: 103GB 1.67Gb 1.48Gb 1.46Gb 1.50Gb I check the transfer speed using iperf utility From Server A to Server B # iperf -c 0.0.0.2 -p 8777 ------------------------------------------------------------ Client connecting to 0.0.0.2, TCP port 8777 TCP window size: 85.3 KByte (default) ------------------------------------------------------------ [ 3] local 0.0.0.1 port 38895 connected with 0.0.0.2 port 8777 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.8 sec 528 KBytes 399 Kbits/sec My Current Connections in Server B # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c 2072 TIME_WAIT 28 SYN_RECV 1 LISTEN 189 LAST_ACK 139 FIN_WAIT2 373 FIN_WAIT1 3381 ESTABLISHED 34 CLOSING Server A Network Card Information Settings for eth0: Supported ports: [ TP ] Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 10000Mb/s 
Duplex: Full Port: Twisted Pair PHYAD: 0 Transceiver: external Auto-negotiation: on MDI-X: Unknown Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes Server B Network Card Information Settings for eth2: Supported ports: [ FIBRE ] Supported link modes: 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: No Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: No Speed: 10000Mb/s Duplex: Full Port: Direct Attach Copper PHYAD: 0 Transceiver: external Auto-negotiation: off Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes ifconfig server A eth0 Link encap:Ethernet HWaddr 00:25:90:ED:9E:AA inet addr:0.0.0.1 Bcast:0.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1202795665 errors:0 dropped:64334 overruns:0 frame:0 TX packets:2313161968 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:893413096188 (832.0 GiB) TX bytes:3360949570454 (3.0 TiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:2207544 errors:0 dropped:0 overruns:0 frame:0 TX packets:2207544 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:247769175 (236.2 MiB) TX bytes:247769175 (236.2 MiB) ifconfig Server B eth2 Link encap:Ethernet HWaddr 00:25:90:82:C4:FE inet addr:0.0.0.2 Bcast:0.0.0.2 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:39973046980 errors:0 dropped:1828387600 overruns:0 frame:0 TX packets:69618752480 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3013976063688 (2.7 TiB) TX bytes:102250230803933 (92.9 TiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:1049495 errors:0 dropped:0 overruns:0 frame:0 TX packets:1049495 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:129012422 (123.0 MiB) TX bytes:129012422 (123.0 MiB) Netstat -i on Server B # netstat -i Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth2 9000 0 42098629968 0 2131223717 0 73698797854 0 0 0 BMRU lo 65536 0 1077908 0 0 0 1077908 0 0 0 LRU I Turn up send/receive buffers on the network card to 2048 and problem still persist I increase the MTU for server A and problem still persist and i increase the MTU for server B for better connectivity and transfer speed but it couldn't transfer at all The problem is : as you can see from iperf utility, the transfer speed from server A to server B slow when i restart network service in server B the transfer in server A at full speed, after 2 minutes , it's getting slow How could i troubleshoot slow speed issue and fix it in server B ? Notice : if there any other commands i should execute in servers for more information, so it might help resolve the problem , let me know in comments
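    With tens to hundreds of milliseconds of latency on what should be a local 10 GbE path and a huge RX drop counter on Server B's eth2, the iperf result looks like a combination of an overloaded receive side and default-sized TCP buffers. The usual first steps are bigger socket buffers, bigger NIC rings, and retesting with parallel streams to separate per-connection limits from path problems. A hedged tuning sketch (starting values, not definitive numbers):

      # larger socket buffers for a high-bandwidth path (both servers)
      sysctl -w net.core.rmem_max=67108864
      sysctl -w net.core.wmem_max=67108864
      sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
      sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
      # enlarge the NIC ring buffers on Server B (limits depend on the driver)
      ethtool -g eth2
      ethtool -G eth2 rx 4096 tx 4096
      # retest with parallel streams and a longer run
      iperf -c 0.0.0.2 -p 8777 -P 8 -t 30

    The latency itself is the more suspicious number: 50-215 ms between two servers on the same network suggests the 10 GbE link is saturated or the test traffic is actually taking the 500 Mbps internet path, which is worth confirming with a traceroute between the private addresses before any TCP tuning.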

    Read the article

  • Network monitoring solution

    - by Hellfrost
    Hello Serverfault! I have a big distributed system I need to monitor. Background: My system is composed of two servers that concentrate and control the system. Each server is connected to a set of devices (some custom kind of RF controllers; the details don't matter to my question), each device connects to a network switch, and eventually all devices talk to the servers. The protocol between the servers and the devices is UDP; the packets are usually very small, but there are really a LOT of them. The network is also somewhat complex and is deployed over a large physical area. I'll have 150-300 of these devices, each generating up to 100+ packets per second, and several network switches, perhaps on 2 different subnets. Question: I'm looking for a solution that will let me monitor all this mess: how many packets are sent, where, how they move through the network, bandwidth utilization, throughput, and so on. What would you recommend to achieve this? BTW, playing nice with Windows is a requirement.

    Read the article

  • Forward mDNS from one subnet to another?

    - by user37278
    Is there an ipfw rule that can easily forward mDns packets from one subnet to another? I have a Snow Leopard Server machine serving as the gateway between the two subnets and would like for machines in each subnet to see the services available in the other subnet. The gateway machine is already confirmed as configured correctly such that packets route correctly between the two subnets (ping works, traceroute shows the subnet hop, etc). My problem in designing a ipfw rule is that I don't know how to instruct that I would like multicast packets addressed to 224.0.0.251:5353 on en0 to be addressed to the same ip/port but on fw0 (the other interface). I attempted a rule such as fwd 192.168.10.1 log udp from 192.168.1.0/24 to 224.0.0.251 recv en1 to force the packet to hop over to the other interface (from en1 to fw0), but no dice. The ipfw log shows that the rule is being triggered by packets, but tcpdump isn't showing any packets on the other interface. Also, the only other firewall rules in place are the divert port 8668 and rule #65535 "allow any to any". Any suggestions? Thanks.
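    mDNS is link-local multicast (224.0.0.251 is in the 224.0.0.0/24 link-local scope, which routers do not forward), so rather than rewriting it with an ipfw fwd rule the usual approach is an mDNS reflector on the gateway that listens on both interfaces and re-announces what it hears on each side. On Linux/BSD boxes Avahi ships this as a one-line option; shown here as a hedged illustration of the mechanism rather than a Snow Leopard recipe, with the interface names taken from the question:

      # /etc/avahi/avahi-daemon.conf on a machine attached to both subnets
      [server]
      allow-interfaces=en0,fw0

      [reflector]
      enable-reflector=yes

    On OS X Server specifically, a small Bonjour reflector/proxy daemon fills the same role; a plain firewall forward generally cannot, because the reflected packets need to be re-originated with an appropriate source address on the second subnet.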

    Read the article

  • How can I force all internet traffic over a PPTP VPN but still allow local LAN access?

    - by user126715
    I have a server running Linux Mint 12 that I want to keep connected to a PPTP VPN all the time. The VPN server is pretty reliable, but it drops on occasion so I just want to make it so all internet activity is disabled if the VPN connection is broken. I'd also like to figure out a way to restart it automatically, but that's not as big of an issue since this happens pretty rarely. I also want to always be able to connect to the box from my lan, regardless of whether the VPN is up or not. Here's what my ifconfig looks like with the VPN connected properly: eth0 Link encap:Ethernet HWaddr 00:22:15:21:59:9a inet addr:192.168.0.171 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::222:15ff:fe21:599a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:37389 errors:0 dropped:0 overruns:0 frame:0 TX packets:29028 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:37781384 (37.7 MB) TX bytes:19281394 (19.2 MB) Interrupt:41 Base address:0x8000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1446 errors:0 dropped:0 overruns:0 frame:0 TX packets:1446 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:472178 (472.1 KB) TX bytes:472178 (472.1 KB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.10.11.10 P-t-P:10.10.11.9 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:14 errors:0 dropped:0 overruns:0 frame:0 TX packets:23 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:1368 (1.3 KB) TX bytes:1812 (1.8 KB) Here's an iptables script I found elsewhere that seemed to be for the problem I'm trying to solve, but it wound up blocking all access, but I'm not sure what I need to change: #!/bin/bash #Set variables IPT=/sbin/iptables VPN=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.` LAN=192.168.0.0/24 #Flush rules $IPT -F $IPT -X #Default policies and define chains $IPT -P OUTPUT DROP $IPT -P INPUT DROP $IPT -P FORWARD DROP #Allow input from LAN and tun0 ONLY $IPT -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT $IPT -A INPUT -i lo -j ACCEPT $IPT -A INPUT -i tun0 -m conntrack --ctstate NEW -j ACCEPT $IPT -A INPUT -s $LAN -m conntrack --ctstate NEW -j ACCEPT $IPT -A INPUT -j DROP #Allow output from lo and tun0 ONLY $IPT -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT $IPT -A OUTPUT -o lo -j ACCEPT $IPT -A OUTPUT -o tun0 -m conntrack --ctstate NEW -j ACCEPT $IPT -A OUTPUT -d $VPN -m conntrack --ctstate NEW -j ACCEPT $IPT -A OUTPUT -j DROP exit 0 Thanks for your help.
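    The posted script locks everything out because the tunnel's own transport runs over eth0: once OUTPUT is limited to lo and tun0, the PPTP control connection (TCP 1723) and the GRE packets that keep tun0 alive can no longer leave, and LAN-bound replies are dropped too (the $VPN variable also picks up the tunnel's 10.x address rather than the VPN server's public address). A hedged variant of the OUTPUT section, with the server's public IP as a placeholder:

      VPN_SERVER=203.0.113.10        # public address of the PPTP server (assumption)
      LAN=192.168.0.0/24
      # keep LAN access and the tunnel transport itself working over eth0
      iptables -A OUTPUT -o eth0 -d $LAN -j ACCEPT
      iptables -A OUTPUT -o eth0 -d $VPN_SERVER -p tcp --dport 1723 -j ACCEPT
      iptables -A OUTPUT -o eth0 -d $VPN_SERVER -p gre -j ACCEPT
      # everything else may only leave through the tunnel
      iptables -A OUTPUT -o tun0 -j ACCEPT
      iptables -A OUTPUT -j DROP

    Built this way, a dropped tunnel silently blackholes internet traffic (the desired kill switch) while SSH from the 192.168.0.0/24 LAN keeps working.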

    Read the article

  • Router 2wire, Slackware desktop in DMZ mode, iptables policy against ping, but still pingable

    - by skriatok
    I'm in DMZ mode, so I'm firewalling myself, stealthy all ok, but I get faulty test results from Shields Up that there are pings. Yesterday I couldn't make a connection to game servers work, because ping block was enabled (on the router). I disabled it, but this persists even due to my firewall. What is the connection between me and my router in DMZ mode (for my machine, there is bunch of others too behind router firewall)? When it allows router affecting if I'm pingable or not and if router has setting not blocking ping, rules in my iptables for this scenario do not work. Please ignore commented rules, I do uncomment them as I want. These two should do the job right? iptables -A INPUT -p icmp --icmp-type echo-request -j DROP echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all Here are my iptables: #!/bin/sh # Begin /bin/firewall-start # Insert connection-tracking modules (not needed if built into the kernel). #modprobe ip_tables #modprobe iptable_filter #modprobe ip_conntrack #modprobe ip_conntrack_ftp #modprobe ipt_state #modprobe ipt_LOG # allow local-only connections iptables -A INPUT -i lo -j ACCEPT # free output on any interface to any ip for any service # (equal to -P ACCEPT) iptables -A OUTPUT -j ACCEPT # permit answers on already established connections # and permit new connections related to established ones (eg active-ftp) iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT #Gamespy&NWN #iptables -A INPUT -p tcp -m tcp -m multiport --ports 5120:5129 -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 6667 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 28910 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29900 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29901 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p tcp -m tcp --dport 29920 --tcp-flags SYN,RST,ACK SYN -j ACCEPT #iptables -A INPUT -p udp -m udp -m multiport --ports 5120:5129 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 6500 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 27900 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 27901 -j ACCEPT #iptables -A INPUT -p udp -m udp --dport 29910 -j ACCEPT # Log everything else: What's Windows' latest exploitable vulnerability? iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT" # set a sane policy: everything not accepted > /dev/null iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT DROP iptables -A INPUT -p icmp --icmp-type echo-request -j DROP # be verbose on dynamic ip-addresses (not needed in case of static IP) echo 2 > /proc/sys/net/ipv4/ip_dynaddr # disable ExplicitCongestionNotification - too many routers are still # ignorant echo 0 > /proc/sys/net/ipv4/tcp_ecn #ping death echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all # If you are frequently accessing ftp-servers or enjoy chatting you might # notice certain delays because some implementations of these daemons have # the feature of querying an identd on your box for your username for # logging. Although there's really no harm in this, having an identd # running is not recommended because some implementations are known to be # vulnerable. 
# To avoid these delays you could reject the requests with a 'tcp-reset': #iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset #iptables -A OUTPUT -p tcp --sport 113 -m state --state RELATED -j ACCEPT # To log and drop invalid packets, mostly harmless packets that came in # after netfilter's timeout, sometimes scans: #iptables -I INPUT 1 -p tcp -m state --state INVALID -j LOG --log-prefix \ "FIREWALL:INVALID" #iptables -I INPUT 2 -p tcp -m state --state INVALID -j DROP # End /bin/firewall-start Active ruleset: bash-4.1# iptables -L -n -v Chain INPUT (policy DROP 38 packets, 2228 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 844 542K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 38 2228 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT' 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 38 2228 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT' Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 1158 111K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 Active ruleset: (after editing iptables into below sugested form) bash-4.1# iptables -L -n -v Chain INPUT (policy DROP 2567 packets, 172K bytes) pkts bytes target prot opt in out source destination 49 4157 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 412K 441M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2567 172K LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix `FIREWALL:INPUT' 0 0 DROP icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmp type 8 Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 312K packets, 25M bytes) pkts bytes target prot opt in out source destination ping and syslog simultaneous screenshots from phone (pinger) and from laptop (being pinged) http://dl.dropbox.com/u/4160051/slckwr/pingfrom%20mobile.jpg http://dl.dropbox.com/u/4160051/slckwr/tailsyslog.jpg
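    Two things are worth separating here: whether the echo requests ever reach the Slackware box, and whether the 2Wire answers them itself. With the INPUT policy already DROP, a ping that still succeeds from outside suggests the gateway (which has its own ping-block toggle) may be replying on the WAN side even in DMZ mode, in which case no local rule can change the ShieldsUp result. A hedged way to tell the two apart:

      # put the explicit drop first so its counters are meaningful
      iptables -I INPUT 1 -p icmp --icmp-type echo-request -j DROP
      # during an external ping or ShieldsUp scan:
      iptables -L INPUT -n -v --line-numbers   # do the DROP counters move?
      tcpdump -ni eth0 icmp                    # do echo requests arrive on the NIC at all?

    If tcpdump stays silent while the remote ping still gets replies, the router is answering and the fix lives in the 2Wire's settings, not in the local script (the duplicated ACCEPT/LOG entries in the first listing also suggest the script was run more than once without an iptables -F at the top).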

    Read the article

  • IP queue buffer

    - by summerbulb
    I seem to have an issue with IP queue. I have a Linux machine that I am using to run some experiments. The Linux machine is configured to be a router, having two NICs, connecting two other computers and managing their network traffic. All incoming packets are captured using iptables and analyzed by a C application. The application analyzing the packets has a built-in delay, as part of the experiment. So I have one very fast computer sending packets through my linux-router and a (relatively) slow linux-router that analyses and deals with the packets, one by one. As a result, when I fire up a sender application on one of the computers connected to the linux-router, the IP queue on the linux-router fills up (almost) instantaneously. The IP queue's maximum length is currently set to 1024, and if it overflows, the packets are dropped. This is expected and I'm OK with it. But (and this is where it gets interesting) every now and then I get the following error: "Failed to receive netlink message: No buffer space available" At first I thought this was due to the IP queue overflowing, but after some analysis I found that sometimes I get the error even if the IP queue buffer did not overflow, and sometimes I DON'T get the message even though the buffer DID overflow. When I run > cat /proc/net/ip_queue, I get the following table (which I also use to monitor the IP queue overflow):
    Peer PID : 27389
    Copy mode : 2
    Copy range : 65535
    Queue length : 0
    Queue max. length : 1024
    Queue dropped : 1166875
    Netlink dropped : 2916
    Looking at the last two values, Queue dropped seems to refer to packets that did not manage to get into the IP queue because the buffer was full; I can see this value rise as I bombard the linux-router. Netlink dropped (as its name implies :) ) seems to have to do with the error I'm getting. I did my best to search for material on this error, but wasn't able to find anything that pointed me in the right direction. Bottom line: why am I getting this error and what can I do to avoid it?
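    The wording of that error suggests the netlink socket ip_queue uses to hand packets to userspace is overflowing (ENOBUFS), which is a separate buffer from the queue length shown above, so it can trip independently of Queue dropped. A sketch of the usual mitigation, assuming that is what Netlink dropped reflects; the sizes are only illustrative:
      sysctl -w net.core.rmem_max=8388608
      sysctl -w net.core.rmem_default=8388608
    The C application would then also want to request a larger receive buffer on its libipq/netlink descriptor (setsockopt with SO_RCVBUF) and treat ENOBUFS as "some packets were lost, keep reading" rather than as a fatal error.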

    Read the article

  • logging conntrack connection values into log file

    - by seaquest
    The Linux netfilter conntrack table already keeps byte and packet counters for both directions of every connection. Is there any way to have those values written to a log file when netfilter tears the connection down? For example, for an entry such as:
    tcp 6 430619 ESTABLISHED src=192.168.0.145 dst=33.42.42.42 sport=53601 dport=22 packets=66560 bytes=14800077 src=33.42.42.42 dst=192.168.1.2 sport=22 dport=53601 packets=89726 bytes=68403910 [ASSURED] mark=0 use=1
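    With conntrack-tools installed, the DESTROY event carries exactly those final per-direction counters, so one hedged approach is to listen for teardown events and append them to a file; a sketch, assuming flow accounting is enabled:
      sysctl -w net.netfilter.nf_conntrack_acct=1
      conntrack -E -e DESTROY -o timestamp >> /var/log/conntrack-teardown.log &
    For something more robust than a backgrounded shell command, ulogd2 with its NFCT input plugin can log the same DESTROY events as a proper daemon.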

    Read the article

  • Can't get my Raspberry Pi to keep a static IP

    - by JonnyIrving
    I recently got given a Raspberry Pi and I would like to be able to remote into it using PuTTY from my laptop, so I don't have to sit next to my TV with a keyboard and mouse to use it. I am able to get a PuTTY session going when I know the IP address that my router has given the Pi for each session, but it keeps changing on each reboot, as I would expect. So I followed a number of instructions on configuring the RPi to keep a static IP address. This involved changing the file at '/etc/network/interfaces', which now contains (password removed):
    auto lo
    iface lo inet loopback
    iface eth0 inet static
    address 192.168.1.82
    netmask 255.255.255.0
    gateway 192.168.1.254
    auto wlan0
    allow-hotplug wlan0
    iface wlan0 inet dhcp
    wpa-ssid "BeBoxD304BF"
    wpa-psk "**********"
    Despite this, each time I reboot my RPi it still gets a new dynamic IP address. I also noticed in the 'ifconfig' output below that the eth0 entry doesn't show the inet addr, Bcast or Mask details that have been present in all the other examples I have seen online.
    eth0 Link encap:Ethernet HWaddr b8:27:eb:b5:95:da
    UP BROADCAST MULTICAST MTU:1500 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
    wlan0 Link encap:Ethernet HWaddr 00:87:c6:00:33:77
    inet addr:192.168.1.83 Bcast:192.168.1.255 Mask:255.255.255.0
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:918 errors:0 dropped:0 overruns:0 frame:0
    TX packets:277 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    Also, I'm not sure if this is relevant, but it can't hurt! The file at '/etc/resolv.conf' contains:
    domain config
    search config
    nameserver 192.168.1.254
    ..I heard it might mean something on one of the pages I was looking at. I would be very grateful for any help with this. I have tried everything I can think of and would really like to get this working this weekend so I can use it from work.
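    One thing that stands out: the static stanza is attached to eth0, but the only interface that actually has an address is wlan0, which is still set to dhcp, so if the Pi is connecting over Wi-Fi the static settings never apply. A sketch of a static wlan0 stanza reusing the values from the question (this assumes the Wi-Fi link is the one being used):
      auto wlan0
      allow-hotplug wlan0
      iface wlan0 inet static
          address 192.168.1.82
          netmask 255.255.255.0
          gateway 192.168.1.254
          wpa-ssid "BeBoxD304BF"
          wpa-psk "**********"
    Picking an address outside the router's DHCP pool (or reserving 192.168.1.82 on the router) avoids the router handing the same address to another client.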

    Read the article

  • iptables rule(s) to send openvpn traffic from clients over an sshuttle tunnel?

    - by Sam Martin
    I have an Ubuntu 12.04 box with OpenVPN. The VPN is working as expected -- clients can connect, browse the Web, etc. The OpenVPN server IP is 10.8.0.1 on tun0. On that same box, I can use sshuttle to tunnel into another network to access a Web server on 10.10.0.9. sshuttle does its magic using the following iptables commands: iptables -t nat -N sshuttle-12300 iptables -t nat -F sshuttle-12300 iptables -t nat -I OUTPUT 1 -j sshuttle-12300 iptables -t nat -I PREROUTING 1 -j sshuttle-12300 iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.10.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42 iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp Is it possible to forward traffic from OpenVPN clients over the sshuttle tunnel to the remote Web server? I'd ultimately like to be able to set up any complicated tunneling on the server, and have relatively "dumb" clients (iPad, etc.) be able to access the remote servers via OpenVPN. Below is a basic diagram of the scenario: [Edit: added output from the OpenVPN box] $ sudo iptables -nL -v -t nat Chain PREROUTING (policy ACCEPT 1498 packets, 252K bytes) pkts bytes target prot opt in out source destination 1512 253K sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0 Chain INPUT (policy ACCEPT 322 packets, 58984 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 584 packets, 43241 bytes) pkts bytes target prot opt in out source destination 587 43421 sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0 Chain POSTROUTING (policy ACCEPT 589 packets, 43595 bytes) pkts bytes target prot opt in out source destination 1175 76298 MASQUERADE all -- * eth0 10.8.0.0/24 0.0.0.0/0 Chain sshuttle-12300 (2 references) pkts bytes target prot opt in out source destination 17 1076 REDIRECT tcp -- * * 0.0.0.0/0 10.10.0.0/24 TTL match TTL != 42 redir ports 12300 0 0 RETURN tcp -- * * 0.0.0.0/0 127.0.0.0/8 $ sudo iptables -nL -v -t filter Chain INPUT (policy ACCEPT 97493 packets, 30M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 131K 109M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 1370 89160 ACCEPT all -- * * 10.8.0.0/24 0.0.0.0/0 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable [Edit 2: more OpenVPN server output] $ netstat -r Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 192.168.1.0 * 255.255.255.0 U 0 0 0 eth0 [Edit 3: still more debug output] IP forwarding appears to be enabled correctly on the OpenVPN server: # find /proc/sys/net/ipv4/conf/ -name forwarding -ls -execdir cat {} \; 18926 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/all/forwarding 1 18954 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/default/forwarding 1 18978 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/eth0/forwarding 1 19003 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/lo/forwarding 1 19028 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/tun0/forwarding 1 Client routing table: $ netstat -r Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire 0/1 10.8.0.5 UGSc 8 48 tun0 default 192.168.1.1 UGSc 2 1652 en1 10.8.0.1/32 10.8.0.5 UGSc 1 0 tun0 10.8.0.5 10.8.0.6 UHr 13 0 tun0 10.10.0/24 10.8.0.5 UGSc 0 0 tun0 <snip> Traceroute from 
client: $ traceroute 10.10.0.9 traceroute to 10.10.0.9 (10.10.0.9), 64 hops max, 52 byte packets 1 10.8.0.1 (10.8.0.1) 5.403 ms 1.173 ms 1.086 ms 2 192.168.1.1 (192.168.1.1) 4.693 ms 2.110 ms 1.990 ms 3 l100.my-verizon-garbage (client-ext-ip) 7.453 ms 7.089 ms 6.248 ms 4 * * * 5 10.10.0.9 (10.10.0.9) 14.915 ms !N * 6.620 ms !N
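    One detail worth checking, assuming a stock sshuttle: its transparent-proxy listener binds to localhost by default, and a packet REDIRECTed in PREROUTING after arriving on tun0 is delivered to that interface's own address rather than to 127.0.0.1, so the listener can be unreachable for VPN clients even though the rule matches. A sketch (the remote host is a placeholder):
      sshuttle -l 0.0.0.0:12300 -r user@remote-gateway 10.10.0.0/24
    With sshuttle restarted that way, the pkts counter on the sshuttle-12300 REDIRECT rule should keep climbing while an OpenVPN client browses to 10.10.0.9, which is a quick way to confirm the redirect itself is doing its job.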

    Read the article

  • Does a bad Internet connection increase bandwidth usage?

    - by Synetech
    My (Rogers) cable connection has been pretty bad recently (channels 3 and 10 are particularly fuzzy—it’s analog, not digital cable). Not surprisingly, this has caused my cable modem to drop out and have to reestablish a connection a couple of times since it started. The poor connection of course means higher corruption (not necessarily dropped per se) which causes the TCP/IP stack to have to retransmit packets more often. Reduction of bandwidth throughput aside, I got to wondering if it increases the actual bandwidth usage. That is, if there is a high error rate on the line causing packets to have to be retransmitted: Does this increase a bandwidth monitoring program’s numbers? Does the ISP count the retransmitted packets toward the monthly cap? Based on what I remember from my university networking courses and common sense, I have a feeling that the answer to both questions is yes, but I cannot reliably measure the first, and have no authoritative answer for the second. I’m wondering if maybe the retransmitted packets are acknowledged as being duplicates and thus not counted somewhere along the line.
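    For the measurement half of the question, the TCP stack keeps a retransmission counter that most systems expose, so the extra sends can at least be observed locally; a rough sketch (the exact counter names differ between platforms):
      netstat -s | grep -i retrans      # Linux: "segments retransmitted"
      netstat -s | findstr /i retrans   # Windows: "Segments Retransmitted"
    Comparing that counter with the total segments sent over the same interval gives a crude retransmission rate; whether the monthly cap counts those duplicates is ultimately a question for the ISP, since it meters raw traffic at the modem, below the level where duplicates are recognised.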

    Read the article

  • How to configure a large mtu (linux)

    - by Somejan
    I have a gigabit ethernet connection from my laptop to my router, and a working ipv6 connection to the internet. I can receive very large packets from sites on the internet, with sizes up to at least 10000 bytes (according to wireshark). (edit: turns out to be linux's 'generic receive offload') However, when trying to send anything, my local computer fragments at just below 1500 bytes for ipv6. (On ipv4, I can send tcp packets to the internet of at least 1514 bytes, I can ping with packets up to the configured mtu of 6128 but they are blackholed.) I'm on ubuntu 12.04. I have configured an mtu for my eth0 of 6128 (the maximum it accepts), both using ip link set dev eth0 mtu 6128 and in the NetworkManager applet gui, and restarted the connection. ip link show eth0 shows the 6128 mtu is indeed set. ip -6 route shows that none of the paths the kernel knows about have an mtu set. I can ping over ipv4 with packets up to 6128 bytes (though I don't get responses), but when I do ping6 myrouter -c3 -s1500 -Mdo I get error replies from my own computer saying that the packets are too large and the mtu is 1480. I have confirmed with Wireshark that nothing is put on the wire, and the replies are indeed generated by my own computer. So, how do I get my computer to use the larger mtu?
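    Since the error replies quote an MTU of 1480, it looks as though the IPv6 code has its own idea of the link MTU; router advertisements can clamp it independently of the interface setting. A few hedged checks, with placeholder addresses:
      cat /proc/sys/net/ipv6/conf/eth0/mtu          # IPv6-specific MTU for the interface
      ip -6 route get 2001:db8::1                   # shows the MTU attached to the route actually used
      ip -6 route change default via fe80::1 dev eth0 mtu 6128   # per-route override, if needed
    If the router's RAs advertise MTU 1480, the kernel will keep falling back to that value, so the router side may need adjusting too; and in any case hops beyond the first router are unlikely to carry frames larger than 1500 bytes.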

    Read the article

  • Suspected network performance issue on VirtualBox Ubuntu guest on Win7 host

    - by Adam
    I set up Ubuntu 12.04 in VirtualBox on the Win7 machine I was allocated on my new project. I am running Java, Eclipse, Tomcat to develop a large data-intensive application and I noticed that this application runs at half the speed of my colleague's identical machine, where he runs it all under Windows. I think I have narrowed down the performance issue to the network, after comparing and equalising all the Java VM settings with my colleague. Is there a ping test I can do or some other network diagnostic test to flag up any problems? To give some background, the network performance is confusing. Running a network speed test to my colleague's machine with iperf shows speeds of 6 Mb/s from my Ubuntu guest, and 90 Mb/s from the win7 host. Large downloads, e.g. the Java SDK, come down at about 1.2 MB/s on both the guest and the host. Pings are sub-1ms on the host, but 1.5ms on the guest. I also did a broadband speed test, and got 10Mb/s download speed on both, but the host has an upload speed of 10Mb/s but the guest only uploads at 3Mb/s. I've been trying to diagnose any MTU problems with ping -M do to identify any kind of packet fragmentation problem but it's progressing very slow because I don't have much experience in this area. From what I read on other people's networking issues with VB and Linux guests on Win7 hosts, I should be able to get the speed on the guest up to the same level as the host. I installed a fresh VM with Ubuntu again to see if I'd foobar'd it somehow, but I'm getting the same readings with iperf on the virgin installation. My setup is: Adapter 1: Intel PRO/1000 MT Desktop (NAT) Adapter 2: ditto (host-only adapter) eth0 Link encap:Ethernet HWaddr 08:00:27:0b:76:bf inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe0b:76bf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:86236 errors:0 dropped:0 overruns:0 frame:0 TX packets:49369 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:69163946 (69.1 MB) TX bytes:3530535 (3.5 MB) eth2 Link encap:Ethernet HWaddr 08:00:27:a3:26:b8 inet addr:192.168.56.101 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fea3:26b8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:59 errors:0 dropped:0 overruns:0 frame:0 TX packets:57 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:9148 (9.1 KB) TX bytes:7648 (7.6 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:701 errors:0 dropped:0 overruns:0 frame:0 TX packets:701 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:66321 (66.3 KB) TX bytes:66321 (66.3 KB)
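    Two low-effort experiments, assuming nothing exotic in the VM configuration: switch off the guest-side offloads that often interact badly with VirtualBox's NAT engine, and try the paravirtualised NIC model instead of the emulated e1000. Both are sketches with placeholder names:
      # inside the guest, on the NAT adapter
      sudo ethtool -K eth0 tso off gso off gro off
      # on the Win7 host, with the VM powered off ("UbuntuDev" is a placeholder VM name)
      VBoxManage modifyvm "UbuntuDev" --nictype1 virtio
    Re-running the same iperf test after each change should show which of the two (offloads or the emulated adapter) is responsible for the 6 Mb/s guest figure.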

    Read the article

  • How to forward OpenVPN Port to NAT'd XEN domU

    - by John
    I want to install an OpenVPN domU on XEN. Dom0 and the domUs are running Debian Squeeze, and all domUs are on a NAT'd private network 10.0.0.1/24. My VPN gateway is on 10.0.0.1 and running. How can I make it accessible on the dom0's public IP? I tried forwarding the port using iptables, but without any success. Here is what I did:
    ~ # iptables -L -n -v
    Chain INPUT (policy ACCEPT 1397 packets, 118K bytes)
    pkts bytes target prot opt in out source destination
    Chain FORWARD (policy ACCEPT 930 packets, 133K bytes)
    pkts bytes target prot opt in out source destination
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif5.0
    0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif5.0 udp spt:68 dpt:67
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif5.0
    0 0 ACCEPT all -- * * 10.0.0.1 0.0.0.0/0 PHYSDEV match --physdev-in vif5.0
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif3.0
    0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif3.0 udp spt:68 dpt:67
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif3.0
    0 0 ACCEPT all -- * * 10.0.0.5 0.0.0.0/0 PHYSDEV match --physdev-in vif3.0
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif2.0
    0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif2.0 udp spt:68 dpt:67
    0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif2.0
    0 0 ACCEPT all -- * * 10.0.0.2 0.0.0.0/0 PHYSDEV match --physdev-in vif2.0
    147 8236 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
    13 546 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194
    Chain OUTPUT (policy ACCEPT 1000 packets, 99240 bytes)
    pkts bytes target prot opt in out source destination
    ~ # iptables -L -t nat -n -v
    Chain PREROUTING (policy ACCEPT 324 packets, 23925 bytes)
    pkts bytes target prot opt in out source destination
    139 7824 DNAT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:10.0.0.5:80
    1 42 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 to:10.0.0.1:1194
    Chain POSTROUTING (policy ACCEPT 92 packets, 5030 bytes)
    pkts bytes target prot opt in out source destination
    863 64983 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
    0 0 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
    0 0 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
    Chain OUTPUT (policy ACCEPT 180 packets, 13953 bytes)
    pkts bytes target prot opt in out source destination
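    The DNAT rule for udp 1194 has only ever matched one packet, so a hedged first step is to confirm where that traffic actually ends up. In a typical NAT setup 10.0.0.1 is dom0's own address on the virtual bridge; if that is the case here, the DNAT target should be the domU's address instead. A sketch for allowing the redirected packets through FORWARD and watching the counters (the address is taken from the question):
      iptables -I FORWARD 1 -p udp -d 10.0.0.1 --dport 1194 -m state --state NEW -j ACCEPT
      watch -n1 'iptables -t nat -L PREROUTING -n -v; iptables -L FORWARD -n -v | head -n 6'
    If the counters rise but the client still cannot connect, checking that the OpenVPN domU really listens on udp 10.0.0.1:1194 and that its default route points back at the NAT gateway usually accounts for the rest.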

    Read the article

  • Routing table with two NIC adapters in libvirt/KVM

    - by lzap
    I created a virtual NAT network (192.168.100.0/24 network) in my libvirt and new guest with two interfaces - one in this network, one as bridged (10.34.1.0/24 network) to the local LAN. The reason for that is I need to have my own virtual network for my DHCP/TFTP/DNS testing and still want to access my guest externally from my LAN. On both networks I have working DHCP, both giving them IP addresses. When I setup NAT port forwarding (e.g. for ssh), I can connect to the eth0 (virtual network), everything is fine. But when I try to access the eth1 via bridged interface, I have no response. I guess I have problem with my routing table - outgoing packets are routed to the virtual NAT network (which has access to the machine I am connecting from - I can ping it). But I am not sure if this setup is correct. I think I need to add something to my routing table. # ifconfig eth0 Link encap:Ethernet HWaddr 52:54:00:B4:A7:5F inet addr:192.168.100.14 Bcast:192.168.100.255 Mask:255.255.255.0 inet6 addr: fe80::5054:ff:feb4:a75f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:16468 errors:0 dropped:27 overruns:0 frame:0 TX packets:6081 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:22066140 (21.0 MiB) TX bytes:483249 (471.9 KiB) Interrupt:11 Base address:0x2000 eth1 Link encap:Ethernet HWaddr 52:54:00:DE:16:21 inet addr:10.34.1.111 Bcast:10.34.1.255 Mask:255.255.255.0 inet6 addr: fe80::5054:ff:fede:1621/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:34 errors:0 dropped:0 overruns:0 frame:0 TX packets:189 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4911 (4.7 KiB) TX bytes:9 # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.34.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 eth1 0.0.0.0 192.168.100.1 0.0.0.0 UG 0 0 0 eth0 Network I am trying to connect from is different than network the hypervisor is connected to: 10.36.0.0. But it is accessible from that network. So I tried to add new route rule: route add -net 10.36.0.0 netmask 255.255.0.0 dev eth1 And it is not working. I thought setting correct interface would be sufficient. What is needed to get my packets coming through?
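    A common way to make the bridged interface answer on its own is source-based policy routing, so that replies to connections arriving via eth1 leave through eth1's gateway instead of the default route on the virtual NAT network. A sketch; the table name and the 10.34.1.254 gateway are assumptions:
      echo "200 bridged" >> /etc/iproute2/rt_tables
      ip route add default via 10.34.1.254 dev eth1 table bridged
      ip rule add from 10.34.1.111/32 table bridged
      ip route flush cache
    With that in place the guest should stay reachable on 192.168.100.14 through the NAT port forwards and on 10.34.1.111 directly from the LAN, without needing the extra route for 10.36.0.0/16 at all.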

    Read the article

  • Configure static IPv6 on Ubuntu

    - by Charles Offenbacher
    I'm trying to configure IPv6 on a dedicated Ubuntu server. My provider gave me a "/64" (whatever that is - I'm still confused) of IPv6 addresses. However, when I try to use them, I can't ping anything. What do I do? :( # ping6 ipv6.google.com PING ipv6.google.com(vx-in-x63.1e100.net) 56 data bytes From fe80::219:d1ff:fefb:42d8 icmp_seq=1 Destination unreachable: Address unreachable From fe80::219:d1ff:fefb:42d8 icmp_seq=2 Destination unreachable: Address unreachable From fe80::219:d1ff:fefb:42d8 icmp_seq=3 Destination unreachable: Address unreachable --- ipv6.google.com ping statistics --- 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2014ms # tracepath6 ipv6.google.com 1?: [LOCALHOST] 0.025ms pmtu 1500 1: fe80::219:d1ff:fefb:42d8%eth0 2000.022ms !H Resume: pmtu 1500 # cat /etc/network/interfaces # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet static address 64.***.***.*** netmask 255.255.255.248 gateway 64.***.***.*** iface eth0 inet6 static pre-up modprobe ipv6 address 2607:F878:1:***::1 netmask 64 gateway 2607:F878:1:***(same as address)::1 # ifconfig eth0 Link encap:Ethernet HWaddr 00:19:d1:fb:42:d8 inet addr:64.***.***.*** Bcast:64.***.***.*** Mask:255.255.255.248 inet6 addr: fe80::219:d1ff:fefb:42d8/64 Scope:Link inet6 addr: 2607:f878:1:***::1/64 Scope:Global UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:52451 errors:0 dropped:0 overruns:0 frame:0 TX packets:39729 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:6817761 (6.8 MB) TX bytes:6153835 (6.1 MB) Interrupt:41 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:166 errors:0 dropped:0 overruns:0 frame:0 TX packets:166 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:31714 (31.7 KB) TX bytes:31714 (31.7 KB)
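    One thing that looks wrong in the stanza is that the gateway is the same value as the address, so the kernel has no real next hop and neighbour discovery fails with "Address unreachable" for everything off-link. A sketch of the usual shape; the masked prefix is kept from the question, and the actual gateway value has to come from the provider (often the ::1 of the routed /64, or a link-local address on their router):
      iface eth0 inet6 static
          pre-up modprobe ipv6
          # any host address in the /64 other than the gateway
          address 2607:F878:1:***::2
          netmask 64
          # placeholder: whatever gateway the provider designates
          gateway 2607:F878:1:***::1
    ping6 to the gateway address and ip -6 neigh show are quick ways to confirm the next hop is reachable before testing ipv6.google.com again.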

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >