Search Results

Search found 4721 results on 189 pages for 'traffic'.


  • Mac OS X: pushing all traffic through a VMWare VM [closed]

    - by bj99
    I want to set up an Astaro (Sophos) UTM in a virtual machine. The final setup should look like this:

        Cable Modem (one IP address)
          | [Ethernet]
        Sophos UTM (running as a VM [VMWare Fusion 5] on the MacMini)
          | [WIFI]
        Airport Express v2 (sharing the local network to wireless and wired clients)
          1) | [WIFI] -> Clients
          2) | [Ethernet over Thunderbolt Ethernet Adapter]* -> MacMini (Local File Server)

        * so the Mini is also protected behind the UTM

    The setup process for the UTM works fine, but then the problems start. I have just one external IP (from my cable modem provider). If I put the VM in bridged mode, my Internet connection drops, because the MacMini also claims an IP address. If I put the VM in NAT mode, the Mini itself is not protected by the UTM. So: is there a way to hide the en0 interface (Ethernet) and the en1 interface (Wifi) from the MacMini, so that they do not even appear in the Network section of System Preferences but are still available to the VM? That way the Mini would have to connect through the en2 interface (Thunderbolt adapter) for any Internet/LAN connection, and I would just use the single IP from the cable modem. Thanks for any suggestions... Sebastian
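
    A minimal sketch of one possible approach: disable the matching network services on the host so it stops claiming addresses on en0/en1 while a bridged VM still uses the underlying interfaces. This does not remove the interfaces from System Preferences, and the service names "Ethernet" and "Wi-Fi" below are assumptions to be checked against the first command:

        # list the service names OS X uses for each interface
        networksetup -listallnetworkservices
        # stop the host from claiming an address on en0/en1; a bridged VM can
        # still attach to the underlying interfaces
        sudo networksetup -setnetworkserviceenabled "Ethernet" off
        sudo networksetup -setnetworkserviceenabled "Wi-Fi" off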

    Read the article

  • Regarding traffic shaping on juniper SRX550

    - by peilin
    We have implemented the Juniper SRX550 in our company. Now we have one issue: how to restrict internal users' download speed from the Internet. As an example, I want to restrict the end user with IP 192.168.1.20/32 to a download speed of up to 1M via my external port ge-0/0/6.0. Below is my setting:

        [edit firewall policer p1M]
        root@SRX550# show
        if-exceeding {
            bandwidth-limit 1m;
            burst-size-limit 15k;
        }
        then discard;

        [edit firewall family inet]
        root@SRX550# show filter limit-user
        term 10 {
            from {
                destination-address {
                    192.168.1.20/32;
                }
            }
            then policer p1M;
        }
        term else {
            then accept;
        }

        [edit interfaces ge-0/0/6]
        root@SRX550# show
        per-unit-scheduler;
        unit 0 {
            family inet {
                filter {
                    input limit-user;
                }
                address Hidden Here;
            }
        }

    As per this setting, the end user's download speed should not exceed 1m (125KB/s in Windows), but in fact the download speed for this end user still reaches up to 400KB/s via HTTP/HTTPS. Please advise. Thanks.
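
    For reference, the numbers suggest the policer is not being hit at all: 1 Mbit/s is about 125 KB/s, while 400 KB/s is roughly 3.2 Mbit/s. One variation that is sometimes tried, sketched here as an assumption rather than a verified fix, is to also police the traffic on its way out of the LAN-facing interface; the interface name ge-0/0/3 is a placeholder:

        [edit interfaces ge-0/0/3]
        /* LAN-facing port; the interface name is a placeholder */
        unit 0 {
            family inet {
                filter {
                    /* police downloads as they leave toward the user */
                    output limit-user;
                }
            }
        }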

    Read the article

  • SSH traffic over openvpn connection freezes when I cat a file

    - by user42055
    I have an OpenVPN (version 2.1_rc15 at both ends) connection set up between two Gentoo boxes using shared keys. It works fine for the most part: I use MySQL, HTTP, FTP and scp over the VPN with no problems. But when I ssh from the client to the server over the VPN, weird things happen. I can log in and execute some commands, but if I try to run an ncurses application like top, or try to cat a file, the connection stalls and I have to sever the ssh session. I can, for example, execute "echo blah; echo .; echo blah" and it will output the three lines of text over the ssh session fine. But if I execute "cat /etc/motd" the session freezes the moment I press enter. I compiled OpenVPN 2.1.1 on my Mac and copied over my config directory from my Gentoo client; the Mac connected and ssh sessions worked fine without freezing. I then compiled it on my older Gentoo box (2.6.26 kernel), which I am retiring due to a dying hard drive, and ssh over it also works perfectly. Why does it fail on my brand new Gentoo box? I've tried compiling three different kernels in case it was that, but other than that there should be no difference between my older and my newer Gentoo boxes that I can think of. Any suggestions on what's wrong?
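
    The pattern of short output working while larger output (top redraws, cat of a file) stalls is the classic signature of an MTU or fragmentation problem on the tunnel. A hedged sketch of OpenVPN directives commonly tried for this, with values that are guesses to be tuned rather than known-good settings, added to both ends:

        # both sides must agree on fragment/mssfix settings
        tun-mtu 1500
        fragment 1400
        mssfix 1400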

    Read the article

  • HP MSR 30-10a Router - Route Traffic over Default Route

    - by SteadH
    We have a brand new HP MSR 30-10a router and a fairly simple routing situation: we have two IP blocks, one of which has a route out. We need things on the first block to go through the router and out. I have an old Cisco 2801 router doing the job right now. For our example: IP Block 1 is 50.203.110.232/29, the router interface on this block is 50.203.110.237, and the route out is 50.203.110.233. IP Block 2 is 50.202.219.1/27, with the router interface on this block at 50.202.219.20. I have a static route created for:

        0.0.0.0 0.0.0.0 50.203.110.233

    The router seems to understand this. When on the CLI via serial cable, I can ping 8.8.8.8 and hear responses from Google DNS. Woo hoo! The issue arrives when any client sits on the IP Block 2 side. I configured my client with a static IP of 50.202.219.15/27, default gateway 50.202.219.20. I can ping myself, I can ping the near side of the router (50.202.219.20), and I can ping the far side of the router (50.203.110.237). I cannot ping anything else in IP Block 1, nor can I ping 8.8.8.8. Here is my configuration file:

        <HP>display current-configuration
        #
         version 5.20.106, Release 2507, Standard
        #
         sysname HP
        #
         domain default enable system
        #
         dar p2p signature-file flash:/p2p_default.mtd
        #
         port-security enable
        #
         undo ip http enable
        #
         password-recovery enable
        #
        vlan 1
        #
        domain system
         access-limit disable
         state active
         idle-cut disable
         self-service-url disable
        #
        user-group system
         group-attribute allow-guest
        #
        local-user admin
         password cipher $c$3$40gC1cxf/wIJNa1ufFPJsjKAof+QP5aV
         authorization-attribute level 3
         service-type telnet
        #
        cwmp
         undo cwmp enable
        #
        interface Aux0
         async mode flow
         link-protocol ppp
        #
        interface Cellular0/0
         async mode protocol
         link-protocol ppp
        #
        interface Ethernet0/0
         port link-mode route
         ip address 50.203.110.237 255.255.255.248
        #
        interface Ethernet0/1
         port link-mode route
         ip address 50.202.219.20 255.255.255.224
        #
        interface NULL0
        #
         ip route-static 0.0.0.0 0.0.0.0 50.203.110.233 permanent
        #
         load xml-configuration
        #
         load tr069-configuration
        #
        user-interface tty 12
        user-interface aux 0
        user-interface vty 0 4
         authentication-mode scheme
        #

    My guess right now is that there is some sort of "permission" needed to use the default route. The manuals haven't turned up a lot in this area that doesn't make the situation much more complicated (but maybe it needs to be more complicated?). Background: we use HP switches, and I love the CLI. I bought HP thinking the command line interface would be similar, or at least speak the same language. Whoops! I'd be happy to provide more information or perform any additional tests. Thanks in advance! Update 1: The manual mentions routing rules. I hadn't previously added these (since our Cisco 2801 seems to route anything by default). I added:

        ip ip-prefix 1 permit 0.0.0.0 0 less-equal 32

    Alas, still no dice.
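
    One hedged way to narrow this down, assuming the Comware syntax below is right for this release, is to source pings from each of the router's own interfaces. If a ping sourced from the Block 1 address works but one sourced from the Block 2 address does not, the router itself is fine and the likely culprit is a missing return route for 50.202.219.0/27 at the upstream gateway 50.203.110.233 (offered as a guess, not a diagnosis):

        display ip routing-table                 # confirm both connected routes and the static 0.0.0.0/0
        ping -a 50.203.110.237 8.8.8.8           # sourced from the Block 1 interface
        ping -a 50.202.219.20 8.8.8.8            # sourced from the Block 2 interface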

    Read the article

  • Howto configure openSuSE firewall to route local traffic to local ports

    - by Eduard Wirch
    I have openSUSE 11.3 installed and I'm using the openSUSE firewall configuration mechanism (/etc/sysconfig/SuSEfirewall2). I have an HTTP server application running on port 8080 and I want the service to be accessible on port 80, so I created a redirect rule using:

        FW_REDIRECT="0/0,0/0,tcp,80,8080"

    This works fine for every request coming from external hosts, but it doesn't for local requests (for example: wget http://myserver/). Is there a way to tell the firewall to redirect local requests addressed to port 80 to port 8080, using the SUSE firewall configuration file?
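
    The underlying reason is that locally generated packets never traverse the nat PREROUTING chain that FW_REDIRECT programs, so an OUTPUT-chain rule is needed as well. A hedged sketch of such a rule, where 192.0.2.10 stands in for the server's own address and placing it in the SuSEfirewall2 custom-rules hook (FW_CUSTOMRULES) is an assumption rather than a documented recipe:

        # redirect locally generated requests for the server's own port 80 to 8080
        iptables -t nat -A OUTPUT -p tcp -d 192.0.2.10 --dport 80 -j REDIRECT --to-ports 8080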

    Read the article

  • Prevent outgoing traffic unless OpenVPN connection is active using pf.conf on Mac OS X

    - by Nick
    I've been able to deny all connections to external networks unless my OpenVPN connection is active using pf.conf. However, I lose Wi-Fi connectivity if the connection is broken by closing and opening the laptop lid or toggling Wi-Fi off and on again. I'm on Mac OS 10.8.1. I connect to the Web via Wi-Fi (from varying locations, including Internet cafés). The OpenVPN connection is set up with Viscosity. I have the following packet filter rules set up in /etc/pf.conf:

        # Deny all packets unless they pass through the OpenVPN connection
        wifi=en1
        vpn=tun0
        block all
        set skip on lo
        pass on $wifi proto udp to [OpenVPN server IP address] port 443
        pass on $vpn

    I start the packet filter service with sudo pfctl -e and load the new rules with sudo pfctl -f /etc/pf.conf. I have also edited /System/Library/LaunchDaemons/com.apple.pfctl.plist and changed the line <string>-f</string> to read <string>-ef</string> so that the packet filter launches at system startup. This all seems to work great at first: applications can only connect to the web if the OpenVPN connection is active, so I'm never leaking data over an insecure connection. But if I close and reopen my laptop lid or turn Wi-Fi off and on again, the Wi-Fi connection is lost and I see an exclamation mark in the Wi-Fi icon in the status bar. Clicking the Wi-Fi icon shows an "Alert: No Internet connection" message. To regain the connection, I have to disconnect and reconnect Wi-Fi, sometimes five or six times, before the "Alert: No Internet connection" message disappears and I'm able to open the VPN connection again. Other times, the Wi-Fi alert disappears of its own accord, the exclamation mark clears, and I'm able to connect again. Either way, it can take five minutes or more to get a connection again, which can be frustrating. Why does Wi-Fi report "No Internet connection" after losing connectivity, and how can I diagnose and fix this issue?
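
    One possible explanation, offered as an assumption rather than a confirmed diagnosis: block all also drops DHCP and the probe traffic OS X uses to decide whether a network is usable, so after sleep the Mac cannot re-acquire a lease until the timing happens to work out. A sketch of extra pass rules that could be tried in /etc/pf.conf:

        # allow DHCP on the Wi-Fi interface so a lease can be (re)acquired
        pass on $wifi proto udp from any port { 67, 68 } to any port { 67, 68 }
        # optionally allow DNS on Wi-Fi so the VPN server's hostname can resolve
        # before the tunnel is up (omit if the server is addressed by IP only)
        pass on $wifi proto udp to any port 53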

    Read the article

  • Send all traffic over VPN connection not working Windows VPN host

    - by Adam Schiavone
    I am trying to get a Mac (10.8) to connect via VPN to a server running Windows Server 2008 R2 and pass all requests from the Mac through the server. The VPN is set up and I can connect and access the server through a web browser, but for all other sites the DNS lookup fails. I have tried adding a DNS server to the VPN host. Example: let's say the VPN server also hosts a website, example.com. I connect to the VPN with my Mac, point a browser to example.com, and everything works fine, but when I point the browser to google.com it just sits there and eventually comes back with a DNS lookup failed message. HOWEVER: I tried running the command dig @myServersIpHere www.google.com. on the Mac and it comes back with correct IP addresses. I really don't know what to do from here. How can I route all requests from my Mac through my Windows server via VPN?
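
    Since dig against the server answers correctly, name resolution over the tunnel itself works; the open question is which resolvers the Mac actually uses once the VPN is up. A hedged sketch of OS X commands for checking and, as a workaround, pinning DNS on the VPN service; the service name "VPN" and the address 203.0.113.10 are placeholders:

        # which resolvers is the Mac really using while connected?
        scutil --dns | head -n 25
        networksetup -listallnetworkservices
        # workaround: point the VPN network service at the server's DNS explicitly
        sudo networksetup -setdnsservers "VPN" 203.0.113.10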

    Read the article

  • Server unable to communicate OUT, fine serving traffic

    - by jonoabroad
    We have several servers that randomly lose the ability to communicate out to other nodes on the local network and the Internet. However, websites are being served fine and we still have ssh access. A reboot seems to fix the problem for a few days. The servers are running 10.5.x and are fully up to date with software updates. Does anyone have any opinions? We think it is probably an ISP router, but we are guessing. Cheers, Jono
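
    Before the next reboot it may be worth capturing some basic state while the fault is present; the commands below are standard OS X 10.5-era tools and the gateway address is a placeholder:

        netstat -rn                 # is the default route still present and sane?
        arp -an                     # does the default gateway still resolve to a MAC address?
        ping -c 3 192.0.2.1         # replace with the real gateway address
        traceroute -n 8.8.8.8       # where do outbound packets stop?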

    Read the article

  • OpenVPN - client-to-client traffic working in one direction but not the other

    - by user42055
    I have the following VPN configuration:

        +------------+              +------------+              +------------+
        |  outpost   |--------------|    kino    |--------------|  guchuko   |
        +------------+              +------------+              +------------+
        OS: FreeBSD 6.2             OS: Gentoo 2.6.32           OS: Gentoo 2.6.33.3
        Keyname: client3            Keyname: server             Keyname: client1
        eth0: 10.0.1.254            eth0: 203.x.x.x             eth0: 192.168.0.6
        tun0: 192.168.150.18        tun0: 192.168.150.1         tun0: 192.168.150.10
        P-t-P: 192.166.150.17       P-t-P: 192.168.150.2        P-t-P: 192.168.150.9

    Kino is the server and has client-to-client enabled. All three machines have IP forwarding enabled, by this on the Gentoo boxes:

        net.ipv4.conf.all.forwarding = 1

    And this on the FreeBSD box:

        net.inet.ip.forwarding: 1

    In the server's "ccd" directory are the following files:

        client1:  iroute 192.168.0.0 255.255.255.0
        client3:  iroute 10.0.1.0 255.255.255.0

    The server config has these routes configured:

        push "route 192.168.0.0 255.255.255.0"
        push "route 10.0.1.0 255.255.255.0"
        route 192.168.0.0 255.255.255.0
        route 10.0.1.0 255.255.255.0

    Kino's routing table looks like this:

        192.168.150.0   192.168.150.2   255.255.255.0    UG   0 0 0   tun0
        10.0.1.0        192.168.150.2   255.255.255.0    UG   0 0 0   tun0
        192.168.0.0     192.168.150.2   255.255.255.0    UG   0 0 0   tun0
        192.168.150.2   0.0.0.0         255.255.255.255  UH   0 0 0   tun0

    Outpost's like this:

        192.168.150     192.168.150.17  UGS  0  17  tun0
        192.168.0       192.168.150.17  UGS  0   2  tun0
        192.168.150.17  192.168.150.18  UH   3   0  tun0

    And Guchuko's like this:

        192.168.150.0   192.168.150.9   255.255.255.0    UG   0 0 0   tun0
        10.0.1.0        192.168.150.9   255.255.255.0    UG   0 0 0   tun0
        192.168.150.9   0.0.0.0         255.255.255.255  UH   0 0 0   tun0

    Now, the tests. Pings from Guchuko to Outpost's LAN IP work OK, as does the reverse: pings from Outpost to Guchuko's LAN IP. However... Pings from Outpost to a machine on Guchuko's LAN work fine:

        .(( root@outpost )). (( 06:39 PM ))
        :: ~ :: # ping 192.168.0.3
        PING 192.168.0.3 (192.168.0.3): 56 data bytes
        64 bytes from 192.168.0.3: icmp_seq=0 ttl=63 time=462.641 ms
        64 bytes from 192.168.0.3: icmp_seq=1 ttl=63 time=557.909 ms

    But a ping from Guchuko to a machine on Outpost's LAN does not:

        .(( root@guchuko )). (( 06:43 PM ))
        :: ~ :: # ping 10.0.1.253
        PING 10.0.1.253 (10.0.1.253) 56(84) bytes of data.
        --- 10.0.1.253 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2000ms

    Guchuko's tcpdump of tun0 shows:

        18:46:27.716931 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 1, length 64
        18:46:28.716715 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 2, length 64
        18:46:29.716714 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64

    Outpost's tcpdump on tun0 shows:

        18:44:00.333341 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64
        18:44:01.334073 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 4, length 64
        18:44:02.331849 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 5, length 64

    So Outpost is receiving the ICMP requests destined for the machine on its subnet, but appears not to be forwarding them. Outpost has gateway_enable="YES" in its rc.conf, which correctly sets net.inet.ip.forwarding to 1 as mentioned earlier. As far as I know, that's all that's required to make a FreeBSD box forward packets between interfaces. Is there something else I could be forgetting?
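
    A hedged next step, not a confirmed diagnosis: the capture shows the requests arriving on Outpost's tun0, so the question is whether they leave its LAN interface and, if they do, whether 10.0.1.253 knows a return path. The interface name em0 below is a placeholder for whichever NIC carries 10.0.1.254:

        # on outpost: do the forwarded requests actually appear on the LAN side?
        tcpdump -ni em0 icmp and host 10.0.1.253
        # if they do, check that 10.0.1.253 (or its default gateway) has a route
        # for 192.168.150.0/24 and 192.168.0.0/24 back via 10.0.1.254; without it
        # the echo replies go elsewhere and the ping still shows 100% loss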

    Read the article

  • ipfw to redirect traffic from port 80 and 443 to 8080

    - by user1048138
    -A PREROUTING -s 10.0.10.0/24 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
        -A PREROUTING -s 10.0.10.0/24 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 8080
        -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE

    The above is what I have used on Linux to forward these ports to 8080; how can I do the same on a Mac? I have tried:

        test_machine:~ root# ipfw show
        00666      0      0 fwd 127.0.0.1,8080 tcp from any to me dst-port 80

    and it's not working! Any suggestions?
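
    On OS X releases that ship pf (10.7 and later), the iptables REDIRECT rules translate most naturally into pf rdr rules rather than ipfw fwd rules. A sketch under those assumptions, with en0 as a guessed interface name:

        # /etc/pf.conf fragment: redirect web traffic from 10.0.10.0/24 to local port 8080
        rdr pass on en0 inet proto tcp from 10.0.10.0/24 to any port { 80, 443 } -> 127.0.0.1 port 8080

        # load the rules and enable pf
        sudo pfctl -f /etc/pf.conf -e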

    Read the article

  • Prevent nginx from redirecting traffic from https to http when used as a reverse proxy

    - by Chris Pratt
    Here's my abbreviated nginx vhost conf:

        upstream gunicorn {
            server 127.0.0.1:8080 fail_timeout=0;
        }

        server {
            listen 80;
            listen 443 ssl;
            server_name domain.com ~^.+\.domain\.com$;

            location / {
                try_files $uri @proxy;
            }

            location @proxy {
                proxy_pass_header Server;
                proxy_redirect off;
                proxy_set_header Host $http_host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 120;
                proxy_pass http://gunicorn;
            }
        }

    The same server needs to serve both HTTP and HTTPS; however, when the upstream issues a redirect (for instance, after a form is processed), all HTTPS requests are redirected to HTTP. The only thing I have found that corrects this is changing proxy_redirect to the following:

        proxy_redirect http:// https://;

    That works wonderfully for requests coming from HTTPS, but if a redirect is issued over HTTP it also rewrites that to HTTPS, which is a problem. Out of desperation, I tried:

        if ($scheme = 'https') {
            proxy_redirect http:// https://;
        }

    But nginx complains that proxy_redirect isn't allowed here. The only other option I can think of is to define the two servers separately and set proxy_redirect only on the SSL one, but then I would have to duplicate the rest of the conf (there's a lot in the server directive that I omitted for simplicity's sake). I know I could also use an include directive to factor out the redundancy, but I really want to keep just one conf file without any dependencies. So, first, is there something I'm missing that will negate the problem entirely? Or, second, if not, is there any other way (besides including an external file) to factor out the redundant config information so that I can separate out the HTTP and HTTPS versions of the server config?
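
    One approach that avoids splitting the server block, sketched as an assumption rather than a tested fix: hand the upstream the scheme the client actually used so it can build correct Location headers itself, and make the redirect rewrite scheme-relative. Variable support in proxy_redirect is assumed to be available (nginx 1.1.11 or newer):

        # pass the real scheme instead of hard-coding https
        proxy_set_header X-Forwarded-Proto $scheme;
        # rewrite any plain-http Location header from the upstream back to the
        # scheme the client originally used
        proxy_redirect http://$host/ $scheme://$host/;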

    Read the article

  • Server 2003 RAS Server Utilising High WAN Traffic

    - by Joe Sergeant
    We have Routing and Remote Access configured on Server 2003 (also our primary domain controller), allowing users to connect in remotely to access files, email, etc. With one user, the RAS Server is constantly sending data to that user's remote computer. From 9am this morning it has transferred almost 800MB. The user isn't transferring any files remotely, certainly not enough to total 800MB anyway. None of the other remote users have had this issue. We have ensured that the user in question has "Use default gateway on remote network" disabled for both IPv4 and IPv6 and we are fairly confident that Offline Files isn't trying to synchronise with the server remotely, too. My question is two-fold. Firstly, has anyone had a similar experience? Secondly, what would be the best software to discover exactly what data is being sent to the remote user?
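
    For the second half of the question, the built-in tools on the RAS box can already tie the traffic to a process before reaching for extra software; -b needs an elevated prompt, and a Network Monitor or Wireshark capture filtered on the client's tunnel IP would show the payload itself:

        rem refresh every 5 seconds, showing owning executable and PID per connection
        netstat -b -o -n 5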

    Read the article

  • HTTP traffic through PIX VPN from outside site

    - by fwrawx
    I have a remote site with a website that only allows access from the outside IP assigned to our local PIX. I have users connecting to the local network using a VPN who need to be able to view this remote site. I don't think this works because the packets want to come in and go out over the same (external) interface. So I'm looking for a way to make this work using the PIX, or by setting up a service on a server on the local network to act as a middle-man for the HTTP requests. The remote site doesn't support setting up a VPN to our PIX, and the remote website is dishing out pages over a non-standard port. Can I use squid or something similar to proxy just one site?
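
    A proxy on the LAN would indeed present the remote site with a request coming from the PIX's outside address. A minimal squid sketch restricted to that one site; the hostname, port, and ACL names are placeholders, and the directives should be checked against the squid version in use:

        http_port 3128
        # only proxy the one remote site (on its non-standard port), refuse everything else
        acl remotesite dstdomain remote.example.com
        acl remoteport port 8081
        http_access allow remotesite remoteport
        http_access deny all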

    Read the article

  • routing traffic between two network cards through firewall

    - by RubyFreak
    I'm trying to test a network device (a firewall) using a Linux box with two network cards, one interface connected to the WAN zone and the other to the LAN zone. The configuration looks like this:

        |ETH0| <-> | FW | <-> |ETH1|

    From both interfaces I'm able to ping the respective firewall interface, but I'm not able to run something like:

        ping -I eth0 ip.from.eth1

    and get any answer. Is that possible, or do I need the Linux network-namespace approach or a user-level TCP stack (VMs are out of the question)?
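
    The reason the plain ping never crosses the firewall is that the kernel recognizes the destination as one of its own addresses and answers it locally, regardless of -I. A hedged network-namespace sketch that forces the traffic onto the wire; interface names and addresses are placeholders:

        # move eth1 into its own namespace so the kernel can no longer short-circuit
        ip netns add fwtest
        ip link set eth1 netns fwtest
        ip netns exec fwtest ip addr add 192.0.2.2/24 dev eth1
        ip netns exec fwtest ip link set eth1 up
        ip netns exec fwtest ip link set lo up
        # a route toward 192.0.2.0/24 via the firewall's WAN address is also needed on the host side
        # now this really goes out eth0, through the firewall, and back in eth1
        ping -I eth0 192.0.2.2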

    Read the article

  • IIS not listening over external network, all other traffic working

    - by Beuy
    Hello there, I have a very odd situation. I have a server (let's call it X) running 2008 R2 with two NICs in it: one is connected to the work domain and has a subnet of 192.168.10.0/24, the other is connected to an ADSL connection and has a subnet of 192.168.1.0/24. The server has IIS installed. On the ADSL connection I have set up dynamic DNS and port forwarding to allow external HTTP, HTTPS, FTP and RDP connections. FTP and RDP are working fine, however neither HTTP nor HTTPS is working at all. I can browse the websites by going to localhost on the machine, but the HTTP and HTTPS ports appear as "Filtered" when I try to scan them using PortQueryUI, and browsers respond with a "Server took too long to load or was not responding" error. This was working fine just a few days ago. Windows Firewall is disabled and I don't have any other software firewall on it. I'm really lost; any help would be great.
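
    A hedged place to start on the server itself is to check what http.sys is bound to, since IIS listens through it and a restricted IP listen list would produce exactly this "filtered on HTTP/HTTPS only" picture; this is a guess, not a confirmed diagnosis:

        rem which addresses is http.sys listening on? An empty list means "all addresses"
        netsh http show iplisten
        rem confirm something is bound to 80/443 on the ADSL-side address too
        netstat -ano | findstr ":80 "
        netstat -ano | findstr ":443 "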

    Read the article

  • Slow http traffic between VMWare guest and host.

    - by toluju
    I have a web application running as an http server inside the VMWare guest OS, and I'm trying to access the content from the host OS. The guest is running Ubuntu, and the host is running Windows XP. The problem is, when I try to access the application from a browser in the host OS, the content takes a very long time to load (up to a minute for a single page). A browser in the guest OS can access the application with no problems. I've tried using both NAT and bridged networking, but the results are the same. The Windows firewall is turned off. The connection itself appears fine, as ping requests from guest to host as well as host to guest complete without errors or delays. Both guest and host can access the external Internet connection without a problem. I'm using VMWare Player. Any ideas?
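
    Two experiments that are sometimes suggested for exactly this symptom, offered as guesses rather than a diagnosis; the guest address and port are placeholders, the first command assumes a curl binary is available on the XP host (otherwise just browse to the IP directly), and the second runs inside the Ubuntu guest:

        # 1) from the host, time a request by IP to rule out a name-lookup delay
        curl -o NUL -s -w "%{time_total}\n" http://192.168.x.x:8080/
        # 2) in the guest, try disabling segmentation offload on the virtual NIC
        sudo ethtool -K eth0 tso off gso off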

    Read the article

  • wireless router - configuring for low-latency, high traffic environment

    - by Mark C
    Hey all, I have a few questions about configuring a router to achieve low-latency, high speed throughput on a local area network that is not connected to the internet. I've read up on some stuff, but thought I would solicit some opinions here on what I've found and what I want to know.... Turn off SSID broadcast - it produces extraneous packets that all clients receive and reply (?) to. Not a huge deal, but it may help a bit. Mixed-mode off - I should attempt to have all devices using the same standard (e.g. 802.11n) and turn mixed-mode off. Any thoughts on security? Does having WEP or any of the WPA variants actually increase latency? Nothing super secure is going over this LAN so if turning security off made things better, that'd be cool. Any other thoughts or things to focus on to create the low latency environment I'm trying to go for would be great. Links to webpages and papers are also cool. I'm open to go through a bunch of stuff. Thanks in advance!

    Read the article

  • Squid proxy server not forwarding traffic

    - by DilbertDave
    I'm trying to set up a web filtering system (DansGuardian) on my home network but am failing at the first hurdle: configuring the Squid proxy server. No matter what I do I cannot seem to configure it properly, and I just receive Squid's "The requested URL could not be retrieved" error page. If I remove the proxy setting in the browser, everything works normally. Ultimately I want to run this on something like a Raspberry Pi, but at the moment I'm testing with a virtual machine (although efforts with a netbook were equally unsuccessful). The VM has a clean installation of Linux Mint 15 and I've installed squid via apt-get. I've followed numerous walkthroughs, but this one (https://help.ubuntu.com/lts/serverguide/squid.html) pretty much sums up the process I've been taking. I'm obviously missing something but cannot figure it out; any assistance appreciated.
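
    Seeing Squid's own error page at least means the browser is reaching the proxy; the usual first hurdle is that the stock squid.conf only permits requests from localhost. A minimal sketch of the change that is typically needed, with the home subnet 192.168.1.0/24 as an assumption:

        # in /etc/squid3/squid.conf (the path/package may be squid or squid3)
        acl homelan src 192.168.1.0/24
        http_access allow homelan
        # keep the catch-all deny after the allow rules
        http_access deny all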

    Read the article

  • Tunneling HTTPS traffic via a PUTTY/SSL tunnel with SOCKS

    - by ripper234
    I have configured a SOCKS ssh tunnel to a remote proxy, and set my Firefox to use localhost:<port> as a SOCKS proxy. My intention is to tunnel outgoing HTTP/S connections from my machine via a specific 3rd-party server I own (on AWS). In my testing, HTTP URLs are forwarded properly (e.g. when I access http://jsonip.com/ from my computer I do get the server's IP). However, whenever I try to reach an HTTPS address, I get this error: "The proxy server is refusing connections". How do I debug/fix it? My PuTTY tunnel config is simply a random source port number with "Dynamic" checked. P.S. I'm aware I might need to manually accept SSL certificates. The reason I'm doing this is to resolve problems using gmail as an outbound SMTP service.
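
    One way to take PuTTY out of the picture is to reproduce the same dynamic tunnel with plain OpenSSH and retest; the port and hostname below are placeholders. A common cause of this exact message, offered as an assumption, is an SSL/HTTPS proxy field in Firefox being filled in alongside the SOCKS host, so only the SOCKS field should be set:

        # OpenSSH equivalent of PuTTY's "Dynamic" forward on a chosen local port
        ssh -N -D 1080 user@your-aws-host
        # then point Firefox's SOCKS Host (only) at localhost:1080 and leave the
        # HTTP/SSL proxy fields blank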

    Read the article
