Search Results

Search found 18781 results on 752 pages for 'ip port'.

  • Route forwarded traffic through eth0 but local traffic through tun0

    - by Ross Patterson
    I have an Ubuntu 12.04/Zentyal 2.3 server configured with the WAN NATed on eth0, local interfaces eth1 and wlan0 bridged on br1 (on which DHCP runs), and an OpenVPN connection on tun0. I only need the VPN for some things running on the gateway itself, and I need to make sure that everything running on the gateway goes through the VPN's tun0.

        root:~# route
        Kernel IP routing table
        Destination     Gateway     Genmask         Flags Metric Ref  Use Iface
        default         gw...       0.0.0.0         UG    100    0      0 eth0
        link-local      *           255.255.0.0     U     1000   0      0 br1
        192.168.1.0     *           255.255.255.0   U     0      0      0 br1
        A.B.C.0         *           255.255.255.0   U     0      0      0 eth0

        root:~# ip route
        169.254.0.0/16 dev br1  scope link  metric 1000
        192.168.1.0/24 dev br1  proto kernel  scope link  src 192.168.1.1
        A.B.C.0/24 dev eth0  proto kernel  scope link  src A.B.C.186

        root:~# ip route show table main
        169.254.0.0/16 dev br1  scope link  metric 1000
        192.168.1.0/24 dev br1  proto kernel  scope link  src 192.168.1.1
        A.B.C.0/24 dev eth0  proto kernel  scope link  src A.B.C.D

        root:~# ip route show table default
        default via A.B.C.1 dev eth0

    How can I configure routing (or otherwise) such that all forwarded traffic for other hosts on the LAN goes through eth0, but all traffic for the gateway itself goes through the VPN on tun0? Also, since the OpenVPN client changes routing on startup/shutdown, how can I make sure that everything running on the gateway itself loses all network access if the VPN goes down, and never goes out eth0?
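
    A minimal sketch of one common approach: mark locally generated traffic and send it to a dedicated routing table whose only default route is tun0, with an "unreachable" fallback acting as the kill switch. The table name, mark value and the VPN server address are placeholders, not the poster's actual configuration:

        # a separate routing table; the name "vpnonly" and the mark value 1 are arbitrary
        echo "200 vpnonly" >> /etc/iproute2/rt_tables

        # mark locally generated traffic only (forwarded LAN traffic never traverses OUTPUT);
        # leave traffic to the LAN and to the VPN endpoint itself unmarked
        VPN_SERVER=203.0.113.1   # placeholder: the OpenVPN server's public address
        iptables -t mangle -A OUTPUT -d 192.168.1.0/24 -j RETURN
        iptables -t mangle -A OUTPUT -d "$VPN_SERVER" -j RETURN
        iptables -t mangle -A OUTPUT -j MARK --set-mark 1

        # route marked traffic via tun0; the "unreachable" default stops the lookup
        # from falling back to the main table (and eth0) if the tun0 route disappears
        ip rule add fwmark 1 table vpnonly
        ip route add default dev tun0 table vpnonly
        ip route add unreachable default metric 1000 table vpnonly
        ip route flush cache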

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this and I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art:

        |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www|
            3128             6999                                                              7000              80

    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth). This is because it's a satellite link. Here's the ssh tunnel I've been using:

        ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N

    The trouble is, I don't know how to get the data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create an ssh tunnel on x.x.x.218? I use iptables to stop Squid on x.x.x.224 from going out to port 80 directly, and to feed into port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance,
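
    One way to push Squid's outbound traffic into the local end of the tunnel is a parent cache_peer entry rather than iptables. This is only a sketch: it assumes the tunnel is opened from x.x.x.224 toward x.x.x.218, that the local end listens on 6999, and that the far Squid listens on 7000 (the username/host are placeholders):

        # on x.x.x.224: local port 6999 lands on the remote Squid at 127.0.0.1:7000 (as seen from x.x.x.218)
        ssh -f -N -o Compression=yes user@x.x.x.218 -L 6999:127.0.0.1:7000

        # in squid.conf on x.x.x.224: send all requests to the tunnel instead of going direct
        cache_peer 127.0.0.1 parent 6999 0 no-query default
        never_direct allow all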

  • connect server to server on secondary NIC

    - by microchasm
    Hi, I have a CentOS box with multiple NICs running Apache. I also have another box running RHEL that will be the MySQL server. I'm trying to use the secondary NIC on the Apache box to connect directly to the MySQL server, but so far no luck. I want to isolate the MySQL box as much as possible, which is why I'm going for a direct connection as opposed to running through a switch. I have a crossover cable running between them. IP configs:

        Apache box
          eth0 [to lan]     ip addr: 192.168.200.100   netmask: 255.255.0.0   gateway: 192.168.111.1
          eth1 [to mysql]   ip addr: 192.168.200.101   netmask: 255.255.0.0   gateway: [blank]

        MySQL box
          eth0 [to apache]  ip addr: 192.168.200.203   netmask: 255.255.0.0   gateway: 192.168.200.201

    The rest of our network is on the 192.168.111.0/24 subnet. Ping only returns "Destination Host Unreachable". I've tried various variations of this setup (including a straight-through cable), and I can't seem to get them to talk to each other. Any help appreciated.
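
    A likely culprit is that both NICs on the Apache box sit inside the same 192.168.0.0/16 range, so the kernel never chooses eth1 for the crossover link. A minimal sketch of putting the direct link on its own small subnet (the 10.10.10.0/30 range is an arbitrary choice):

        # Apache box, crossover NIC
        ip addr flush dev eth1
        ip addr add 10.10.10.1/30 dev eth1

        # MySQL box, crossover NIC (no gateway is needed on a point-to-point link)
        ip addr flush dev eth0
        ip addr add 10.10.10.2/30 dev eth0

        # then point Apache's MySQL connection at 10.10.10.2
        ping -c 3 10.10.10.2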

  • xampp apache on windows 7 returns http header only

    - by bumperbox
    I am having issues with XAMPP running on Windows 7 RC32. I type in localhost and get a header back only, no page content. Some days it works fine; other days I can't get it to work after multiple attempts, reboot or otherwise. The request doesn't even get put into the access log, which seems unusual. Here is the log file at startup, in case that helps. Any ideas?

        [Wed Sep 09 12:27:08 2009] [notice] Digest: generating secret for digest authentication ...
        [Wed Sep 09 12:27:08 2009] [notice] Digest: done
        [Wed Sep 09 12:27:09 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
        [Wed Sep 09 12:27:09 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Sep 09 12:27:09 2009] [notice] Parent: Created child process 2500
        [Wed Sep 09 12:27:10 2009] [notice] Digest: generating secret for digest authentication ...
        [Wed Sep 09 12:27:10 2009] [notice] Digest: done
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Child process is running
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Acquired the start mutex.
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting 250 worker threads.
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 443.
        [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 80.
        [Wed Sep 09 12:27:15 2009] [notice] Parent: child process exited with status 255 -- Restarting.
        [Wed Sep 09 12:27:15 2009] [notice] Digest: generating secret for digest authentication ...
        [Wed Sep 09 12:27:15 2009] [notice] Digest: done
        [Wed Sep 09 12:27:16 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
        [Wed Sep 09 12:27:16 2009] [notice] Server built: Dec 10 2008 00:10:06
        [Wed Sep 09 12:27:16 2009] [notice] Parent: Created child process 3252
        [Wed Sep 09 12:27:17 2009] [notice] Digest: generating secret for digest authentication ...
        [Wed Sep 09 12:27:17 2009] [notice] Digest: done
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Child process is running
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Acquired the start mutex.
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting 250 worker threads.
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 443.
        [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 80.
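
    If something else intermittently grabs port 80 (Skype and IIS are common offenders on Windows), Apache can behave this way. A quick diagnostic sketch from a Windows command prompt, purely as a check and not a known fix for this case:

        netstat -ano | findstr ":80 "
        REM replace 1234 below with the PID that netstat shows listening on port 80
        tasklist /FI "PID eq 1234"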

  • Sophos UTM in Hyper-V

    - by TheD
    So, I had a previous thread about this, Virtualizing Firewalls/UTM. Essentially, I have configured what I think would work, but networking isn't my strong point! Two virtual adapters, with IP addresses 192.168.0.2 (External) and 192.168.0.3 (Internal) respectively. The External adapter looks at 192.168.0.1 (my Zyxel) for its default gateway. The Internal adapter, 192.168.0.3, which is what the Sophos UTM listens on, has its default gateway set to 192.168.0.2, the IP of the External LAN interface. So:

        PC (192.168.0.11, DHCP) --> (LAN) --> Switch --> 192.168.0.3 (Internal LAN interface IP) --> Sophos UTM --> 192.168.0.2 (External LAN interface IP) --> 192.168.0.1 --> Internet

    Would this be the correct setup, or am I completely out of the game here? Cheers!

  • Error while configuring kerberos5 using MacPorts

    - by ario
    While trying to install libmemcached via MacPorts, I hit the following issue:

        libmemcached @0.40 +universal
        --->  Computing dependencies for libmemcached
        --->  Dependencies to be installed: cyrus-sasl2 kerberos5
        --->  Configuring kerberos5
        Error: org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        Error: Failed to install kerberos5

    It tells me to look in the log for details. Here's the last bit of the log file:

        :info:configure checking for setupterm in -lcurses... no
        :info:configure checking for setupterm in -lncurses... no
        :info:configure checking for tgetent... no
        :info:configure configure: error: Could not find tgetent; are you missing a curses/ncurses library?
        :info:configure configure: error: /bin/sh './configure' failed for appl/telnet
        :info:configure Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/work/krb5-1.7.2/src" && ./configure --prefix=/opt/local --disable-dependency-tracking --mandir=/opt/local/share/man
        :info:configure Exit code: 1
        :error:configure org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        :debug:configure Error code: NONE
        :debug:configure Backtrace: configure failure: command execution failed
            while executing
        "$procedure $targetname"
        :info:configure Warning: targets not executed for kerberos5: org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        :error:configure Failed to install kerberos5
        :debug:configure Registry error: kerberos5 not registered as installed & active.
            invoked from within
        "registry_active ${subport}"
            invoked from within
        "$workername eval registry_active \${subport}"
        :notice:configure Please see the log file for port kerberos5 for details:
            /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/main.log

    It seems to say it's missing ncurses. It looks like it's there though, since if I run port installed I see these:

        ncurses @5.7_0
        ncurses @5.9_1 (active)
        ncursesw @5.7_0

    Any ideas on how to get around this error?
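
    The failing piece is the telnet appl inside the krb5 build, which can't find a tgetent it can link against; since the +universal variant is requested, a non-universal ncurses is a plausible cause. A hedged sketch of things to try (the variant handling here is an assumption about this particular setup, not a confirmed fix):

        # refresh the tree, clear the failed build, and rebuild ncurses as universal
        sudo port -v selfupdate
        sudo port clean kerberos5
        sudo port upgrade --enforce-variants ncurses +universal

        # then retry the original install
        sudo port install libmemcached +universal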

  • Xen networking is inconsistent in multiple ways

    - by WildVelociraptor
    I've been running Xen for a few weeks now on an Ubuntu 12.04 server. I've got 3 guests: a Windows Server 2003 guest, an Ubuntu guest, and a Windows 7 guest. My Server 2003 guest seems to work fine; I can ping it from the network, the hostname resolves correctly, and it can see the internet. This guest is attached to xenbr0, and its IP is 10.100.1.21. My Win7 guest is what is driving me crazy. I use the same configuration script as a base, changing the important parts (hostname and boot disk, mainly). It installed correctly, and is currently running, but I am unable to ping this guest. Its hostname is "alexander", with an IP of 10.100.1.22. It is also using xenbr0. The guest can ping the firewall and various IP addresses, but seems unable to resolve hostnames. Now here's the weird part: when I use rdesktop (RDP client) from my laptop (not the Xen host) to connect to alexander, it works just fine. It apparently resolves the hostname fine, and does the same with the IP address. So, can someone tell me why I can access this guest using RDP, but not using ping, nslookup, traceroute, etc? It's apparently invisible to all but RDP. Also, is it okay to use two guests on the same bridge, or do I need different ones for each guest? Thanks in advance for any help. Regards
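
    Windows 7 blocks inbound ICMP echo by default while still answering RDP, which would explain a guest that is reachable over RDP but not pingable. A hedged check, run from an elevated prompt on the Win7 guest (the rule name is arbitrary):

        netsh advfirewall firewall add rule name="Allow inbound ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow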

  • Do you need to advertise an AFP service via Avahi for an Ubuntu Server to show up in OSX Finder?

    - by James
    I am only advertising an NFS share plus the "model", and I don't want to install extra services on the server unless I have to, i.e. netatalk, as it is used solely for NFS exports. Currently there is no entry in Finder under "Shared" with the Avahi config below.

        serveradmin@FILESERVER:/etc/avahi/services$ cat nfs.service
        <?xml version="1.0" standalone='no'?><!--*-nxml-*-->
        <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
        <service-group>
          <name replace-wildcards="yes">%h</name>
          <service>
            <type>_nfs._tcp</type>
            <port>2049</port>
            <txt-record>path=/Volumes/StoragePool</txt-record>
          </service>
          <service>
            <type>_device-info._tcp</type>
            <port>0</port>
            <txt-record>model=Xserve</txt-record>
          </service>
        </service-group>

        Server:  Ubuntu 12.04.01 x64
        Clients: OS X 10.6.8, 10.7.5, 10.8.2

    The goal is to advertise that NFS share, then assign a really old model code of Mac, like a PowerMac, and switch out the icon for a more "Linux-server-y" one, plus allow users to connect to NFS in a manner they are familiar with, like our other Xserve servers. I think Avahi is working in general, since if I do nfs://FILESERVER.local/Volumes/StoragePool it will connect fine. Any ideas?
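
    For a host to appear under "Shared" in the Finder sidebar it generally needs to advertise an AFP or SMB service record; Finder does not browse for _nfs._tcp. A sketch of adding an _afpovertcp record alongside the existing ones, purely so the host shows up (no netatalk needed just for browsing, though an actual AFP "Connect As…" would fail since nothing listens on 548):

        <!-- added inside the existing <service-group> in nfs.service -->
        <service>
          <type>_afpovertcp._tcp</type>
          <port>548</port>
        </service>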

  • IIS 7 URL Rewrite to GeoServer running on Apache

    - by Maxim Zaslavsky
    I'm building a mapping application based on OpenLayers that uses GeoServer to serve up mapping data. The problem I'm having is that besides the map images I'm requesting through WMS, I'm using jQuery AJAX to get information from GeoServer. As GeoServer is running on a different port, my requests are being blocked by the browser's cross-origin security policy in JavaScript. As a Java application, GeoServer runs on port 8080, while my IIS instance is running on port 80. Instead of building a proxy, I've decided to use URL Rewriting in IIS7 to fix this problem. I'm following this guide, but it's still not working. Here are my URL Rewrite rule settings:

        Matches URL: (.*)
        Condition:   {HTTP_URL} matching /geoserver
        Action:      rewrite to http://localhost:8080/{R:1}, appending query string

    When I request:

        http://localhost/geoserver/wms?QUERY_LAYERS=SanDiego:FWSA_sandiego&LAYERS=SanDiego:FWSA_sandiego&SERVICE=WMS&VERSION=1.1.1&FEATURE_COUNT=20&REQUEST=GetFeatureInfo&EXCEPTIONS=application/vnd.ogc.se_xml&BBOX=-13009123.590156,3862057.2905992,-13006066.109025,3865114.7717302&INFO_FORMAT=text/html&x=20&y=20&width=40&height=40&srs=EPSG:900913

    however, all I get is a 404, although the same request on port 8080 returns the proper result. What am I doing wrong? Thanks in advance.
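
    Rewriting to a different host:port from IIS requires Application Request Routing (ARR) with its proxy mode enabled; a plain rewrite rule alone 404s. With ARR installed, a rule like the following sketch in web.config should forward /geoserver/* to the Java container on 8080 (the rule name and pattern are illustrative, not the poster's config):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="GeoServer proxy" stopProcessing="true">
                <match url="^geoserver/(.*)" />
                <action type="Rewrite" url="http://localhost:8080/geoserver/{R:1}" appendQueryString="true" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>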

  • Unable to Access Localhost after starting Xampp

    - by user7370
    OS: Windows XP Professional, SP 2. A few days back I had xampplite 1.7.1 installed and was able to access localhost and phpMyAdmin through the browser. Today it suddenly stopped working. In Firefox, after I type http://localhost/ nothing happens, just a blank white screen. I removed all the files in the xampplite folders and re-installed version 1.7.1 again; it's of no use. Then I installed xampplite 1.7.2 (latest), which I had downloaded from the XAMPP website; again it's of no use. Apache and MySQL are running, though. I'm trying to install WordPress locally, as I have a theme ready and want to convert that design to WordPress, test it, and start using it online. Running 'Port-check' on the XAMPP control panel showed this:

        Service            Port    Status
        -----------------------------------------------------------
        Apache (HTTP)        80    C:\xampplite\apache\bin\httpd.exe
        Apache (WebDAV)      81    free
        Apache (HTTPS)      443    C:\xampplite\apache\bin\httpd.exe
        MySQL              3306    C:\xampplite\mysql\bin\mysqld.exe
        FileZilla (FTP)      21    free
        FileZilla (Admin) 14147    free
        Mercury (SMTP)       25    free
        Mercury (POP3)      110    free
        Mercury (IMAP)      143    free
        Mercury (HTTP)     2224    free
        Mercury (Finger)     79    free
        Mercury (PH)        105    free
        Mercury (PopPass)   106    free
        Tomcat (AJP/1.3)   8009    free
        Tomcat (HTTP)      8080    free

    I also have Skype installed, but it's not using port 80 (I had read this was the issue, but the port under Skype's options is 65013). And when I run file:///C:/xampp/htdocs/index.php it shows "Something is wrong with the XAMPP installation :-(". Please help with this problem. Thanks, Sharath Kumar
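
    One hedged thing to rule out: on some Windows setups "localhost" resolves to the IPv6 address ::1, which the bundled Apache may not be listening on, while 127.0.0.1 works fine. A quick comparison from a command prompt:

        ping localhost        REM note whether this resolves to ::1 or 127.0.0.1
        ping 127.0.0.1
        REM then try http://127.0.0.1/ in the browser; if that works, mapping localhost
        REM to 127.0.0.1 in C:\Windows\System32\drivers\etc\hosts is a possible workaround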

  • Having trouble setting up my router

    - by indyK1ng
    I just moved into my apartment and the Internet connection is working. It's Comcast, in case that matters. Anyway, I'm having trouble setting up my wireless router (Netgear WNR2000) to work with it. Are there any settings that I could be missing? I currently have it set up to use a static IP address, I found the DNS servers I'm supposed to use, and the Internet light is green, but I can't get out to the Internet. When I'm trying, I'm connecting to an Ethernet port on the back of my router. Is there a setting I'm missing or a setting that I have set wrong? I used the automatic setup wizard to learn that it's a static IP address. Any help would be appreciated. I am currently only able to use my Linux machine, so please give any help in the form of Linux commands.

    Yes, I can connect to the Internet if I connect to the modem directly, and I've been using the web interface when I'm connected to the router, so I suppose I can ping the router. My router detected the connection as using a static IP address, so I connected to the modem directly and figured out what my IP address, gateway, and mask were, as well as the DNS servers.

  • Apache 403 after configuring varnish

    - by w0rldart
    I just don't know where else to look or what else to do. I keep getting a 403 error on all my vhosts after setting up Varnish 3.0.

    Apache log:

        [error] [client 127.0.0.1] client denied by server configuration: /etc/apache2/htdocs

    Headers:

        http://domain.com/

        GET / HTTP/1.1
        Host: domain.com
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Cookie: __utma=106762181.277908140.1348005089.1354040972.1354058508.6; __utmz=106762181.1348005089.1.1.utmcsr=OTHERDOMAIN.com|utmccn=(referral)|utmcmd=referral|utmcct=/galerias/cocinas
        Cache-Control: max-age=0

        HTTP/1.1 403 Forbidden
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Type: text/html; charset=iso-8859-1
        X-Cacheable: YES
        Content-Length: 223
        Accept-Ranges: bytes
        Date: Sat, 01 Dec 2012 20:35:14 GMT
        X-Varnish: 1030961813 1030961811
        Age: 26
        Via: 1.1 varnish
        Connection: keep-alive
        X-Cache: HIT

    /etc/default/varnish:

        DAEMON_OPTS="-a ip.ip.ip.ip:80 \
                     -T localhost:6082 \
                     -f /etc/varnish/main.domain.vcl \
                     -S /etc/varnish/secret \
                     -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"
                     #-s malloc,256m"

    My VCL file: http://pastebin.com/axJ57kD8

    So, any ideas what I could be missing?

    Update: just so you know, the ports are:

        NameVirtualHost *:8000
        Listen 8000

    and

        <VirtualHost 205.13.12.12:8000>
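
    The "client denied by server configuration: /etc/apache2/htdocs" line suggests the request arriving from Varnish (as 127.0.0.1 on the backend port) is falling through to the default server and its docroot instead of the intended vhost, since the vhost above is bound to a specific IP. A hedged sketch of the usual Apache 2.2 shape behind Varnish (the DocumentRoot path is a placeholder):

        NameVirtualHost *:8000
        Listen 8000
        <VirtualHost *:8000>
            ServerName domain.com
            DocumentRoot /var/www/domain.com
            <Directory /var/www/domain.com>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>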

  • ssh connection slow when using @hostname.com but not when using @ipaddress

    - by Alex Recarey
    When connecting to a Debian server using ssh, if I use [email protected] (the IP address of the server) the connection is instant. If however I use [email protected] (a DNS name redirected to the IP address of the server) the ssh connection hangs for 20 seconds before connecting successfully. The ssh logs show the following:

        [alex@alex home]$ ssh -v -v [email protected]
        OpenSSH_5.5p1, OpenSSL 1.0.0c-fips 2 Dec 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0

    and here it hangs for 20 seconds before continuing. I think it might have something to do with reverse DNS or similar (the server does not really "know" its name is hostname.com, it just has that DNS name redirected to its IP address). I have added the following options to /etc/ssh/sshd_config:

        UseDNS no
        GSSAPIAuthentication no

    to no effect. The server's resolver configuration in /etc/resolv.conf is correct:

        ping hostname.com
        PING sub.domain.com (X.X.X.X) 56(84) bytes of data.
        64 bytes from replicant (X.X.X.X): icmp_seq=1 ttl=64 time=0.029 ms
        64 bytes from replicant (X.X.X.X): icmp_seq=2 ttl=64 time=0.050 ms

    Thanks for the help.

    Solution: It seems the DSL router my ISP saddled me with was causing the trouble. Changing my DNS server from 192.168.1.1 (the router's IP) to Google's (8.8.8.8, always good to know when you are in a hurry) instantly solved the connection delay problem. I am guessing that the 50€ router provided does not cache DNS entries, although I don't understand why pinging the DNS address had no delay, and 20 seconds is too long a wait, even for uncached DNS. Thanks again for the help!
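
    For reference, the ~20-second stall is the classic signature of GSSAPI/DNS lookups timing out against a slow resolver, which fits the router explanation above. A hedged sketch of the two usual client-side mitigations:

        # ~/.ssh/config: skip GSSAPI negotiation for this host
        Host hostname.com
            GSSAPIAuthentication no

        # or /etc/resolv.conf: point the client at a responsive resolver
        nameserver 8.8.8.8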

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to back up files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and the NetApp root) and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials and have also tried setting different accounts for them. NDMP debug level is at 5 and this is what shows up in /etc/messages. The session is closed immediately after it has been granted.

        16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
        16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000

    Running Wireshark on the backup server doesn't produce much. It shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The resource credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All" it fails. If I use my regular domain backup account, it is successful. There are no failed or succeeded logons in the NetApp ndmp log, and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour rather than misconfiguration of the NAS. Here is the options ndmp output:

        FAS2020-1 options ndmp
        ndmpd.access                 all
        ndmpd.authtype               challenge
        ndmpd.connectlog.enabled     on
        ndmpd.enable                 on
        ndmpd.ignore_ctime.enabled   off
        ndmpd.offset_map.enable      on
        ndmpd.password_length        16
        ndmpd.preferred_interface    disable
        ndmpd.tcpnodelay.enable      off
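
    With ndmpd.authtype set to challenge, the password Backup Exec needs is usually the NDMP-specific one that ONTAP generates for the account, not the account's normal login password. A hedged sketch of generating it in 7-mode (the user name is a placeholder, and this is an assumption about the cause, not a confirmed fix):

        FAS2020-1> ndmpd password backupuser

    The 16-character string it prints would then go into both the NDMP device credentials and the restore job's resource credentials in BE.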

  • Cisco Multi-DMZ firewall

    - by BParker
    I need to find a firewall that will give me 1 LAN port and 5-7 DMZ ports. I have a requirement to replace some FreeBSD systems that are used to run some testing equipment. It is essential that the DMZ ports cannot communicate with each other, but the LAN port can communicate with everyone. That way a user on the LAN can connect to the test systems, but the test systems are isolated entirely and cannot interfere with each other. One of the DMZs will be connected to a VMware ESXi server, one to a standard server, and the rest to various types of equipment. The LAN port will be connected to the corporate LAN switch. Sorry if I am a little vague, I am just trying to work all this out myself! Currently we have a FreeBSD box configured, but the quad-port NICs are pretty expensive, and the PC itself is old, so I would prefer to replace it with a dedicated piece of kit which can do the same job, but more reliably! These test rigs are used all over the place, and get moved quite often, so I am aiming for Cisco kit for ease of configuration and reliability of the hardware itself. Thanks
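
    On an ASA-style box this maps onto per-interface security levels: the LAN interface gets the highest level, every DMZ interface gets the same lower level, and because traffic between equal-security interfaces is denied unless explicitly permitted, the DMZs stay isolated while the LAN can reach them all. A rough sketch only; interface names, levels and addressing are assumptions:

        interface Ethernet0/0
         nameif inside
         security-level 100
         ip address 10.0.0.1 255.255.255.0
        !
        interface Ethernet0/1
         nameif dmz1
         security-level 50
         ip address 10.0.1.1 255.255.255.0
        !
        interface Ethernet0/2
         nameif dmz2
         security-level 50
         ip address 10.0.2.1 255.255.255.0
        !
        ! deliberately do NOT configure: same-security-traffic permit inter-interface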

  • File download speed issue over a dedicated fibre link

    - by nixnotwin
    My ISP has installed a fibre-based dedicated internet connection at the place where I work. In the beginning the connection terminated at one of the ISP's core routers. It resulted in a strange issue. Even though the assigned speed was 5 mbps, when tests were done by downloading large files over HTTP and FTP from multiple locations, the speed never went above 2 mbps. But BitTorrent downloads reached 5 mbps, and even file downloads from the ISP's servers were fine. So, at the ISP, our link was attached directly to their edge router. After this, file downloads from high-bandwidth servers, like Google and MS, reached the 5 mbps limit. Sometimes the speed will fall below 2 mbps and then suddenly go up to the 5 mbps limit (it keeps happening during any single file download). But other downloads, like Ubuntu apt repositories, still struggle to go above 2 mbps. The engineers at the ISP have not been able to sort out the issue. After they moved us to their edge router, instead of giving us 8 public IPs they just gave 4. When we enquired about it, they told us that giving more IPs would result in ARP overload at their edge router. But somehow I was able to convince them to give us the 8 IPs which we wanted. But the file download issue has remained. What might be the reason for files from different locations downloading at different speeds, and with heavy fluctuation in speed? I have downloaded files from the same URLs over a connection belonging to another, smaller ISP, and the speeds were fine and reached the full 5 mbps limit.

  • Windows 7 machine, can't connect remotely until after ping

    - by rjohnston
    I have a Windows 7 (Home Premium) machine that doubles as a media centre and Subversion server. There are a couple of problems with this setup when connecting to the server from an XP (SP3) machine. Firstly, the machine won't respond to its machine name until after its IP address has been pinged. Here's an example:

        Microsoft Windows XP [Version 5.1.2600]
        (C) Copyright 1985-2001 Microsoft Corp.

        C:\Documents and Settings\Rob>ping damascus
        Ping request could not find host damascus. Please check the name and try again.

        C:\Documents and Settings\Rob>ping 192.168.1.17
        Pinging 192.168.1.17 with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time=2ms TTL=128
        ...
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 1ms, Maximum = 2ms, Average = 1ms

        C:\Documents and Settings\Rob>ping damascus
        Pinging damascus [192.168.1.17] with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time<1ms TTL=128
        ....
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 1ms, Average = 0ms

        C:\Documents and Settings\Rob>

    Likewise, Subversion commands with either the machine name or IP address will fail until the machine's IP address is pinged. Occasionally, the machine won't respond to pings on its IP address; it'll just come back with "Request timed out". The svn server is VisualSVN, if that helps... Any ideas?
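
    This pattern is typical of NetBIOS broadcast name resolution only working once the host is already in the XP machine's cache. Since only two machines are involved, a hedged workaround is a static hosts entry on the XP client, assuming the Windows 7 box keeps the 192.168.1.17 address (a DHCP reservation would be needed for that):

        # append to C:\WINDOWS\system32\drivers\etc\hosts on the XP machine
        192.168.1.17    damascus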

  • How to prioritize openvpn traffic?

    - by aditsu
    I have an openvpn server, with one network interface. VPN traffic is extremely slow. I tried to do traffic control with this configuration (currently):

        qdisc del dev eth0 root
        qdisc add dev eth0 root handle 1: htb default 12
        class add dev eth0 parent 1: classid 1:1 htb rate 900mbit
        # vpn
        class add dev eth0 parent 1:1 classid 1:10 htb rate 1500kbit ceil 3000kbit prio 1
        # local net
        class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 900mbit prio 2
        # other
        class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 1000kbit prio 2
        filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 1194 0xffff flowid 1:10
        filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst 192.168.10.0/24 flowid 1:11
        qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
        qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10
        qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10

    But it's still extremely slow. I have an IMAPS connection that keeps transferring data continuously (I successfully limited the rate) but with OpenVPN I can't seem to get more than about 100kbit/s. The internet connection speed is about 3mbit/s (symmetric). What could be the problem? Does the sport filter work for UDP?
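
    One way to sidestep the question of whether the u32 sport match catches the UDP traffic is to classify by firewall mark instead. A hedged sketch (the mark value 10 is arbitrary), marking OpenVPN's source port 1194 in the mangle table and steering it into the existing 1:10 class with a tc fw filter:

        iptables -t mangle -A POSTROUTING -o eth0 -p udp --sport 1194 -j MARK --set-mark 10
        tc filter add dev eth0 parent 1:0 protocol ip prio 1 handle 10 fw flowid 1:10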

  • Server 2008 NAT Internet Not Working

    - by Jack
    I'm trying to set up Routing and Remote Access on Windows Server 2008 R2. I have a network connection whose internet access I want to share with another, private network. The server has two NICs, which are configured as follows:

        External NIC (dynamically assigned by ISP)
          IP:      10.175.4.150
          Subnet:  255.255.192.0
          Gateway: 10.175.0.1
          DNS:     10.175.0.1

        Internal NIC
          IP:      172.16.254.1
          Subnet:  255.255.255.0
          Gateway: None
          DNS:     None

    I have set the external NIC to be the public interface and enabled NAT on it in the RRAS MMC, and set the internal NIC to be a private interface. I have also set up the DNS forwarding (or whatever it is) in the NAT section. From a client (IP: 172.16.254.2) I can ping the server and access files on it; when I try to browse the web with the default gateway set to the internal NIC IP, I end up getting a 404 page which is returned from the ISP's default gateway. I'm guessing it's something to do with the double NAT, possibly. Trying to ping the ISP's default gateway from a private-network client just times out, as does accessing it directly. I've disabled and reconfigured RRAS multiple times and that doesn't seem to have made a difference, so can anyone tell me what I'm doing wrong? Thanks.

  • Portable, battery-powered, wireless access point, ethernet adapter

    - by Jed
    I am in need of an adapter that will convert an ethernet port into a wireless access point. I have found a handful of devices, but I'm unable to find a device that is battery powered. Does a self-powered wireless access point even exist? The particular scenario that I will be using the device for is not your typical computer/PC scenario. For the curious, here's a bit of background on the problem I'm trying to solve: I make devices (controllers) that monitor water systems. Our controllers have a Webserver that serves out web pages so that users can configure the controller's settings. Typically, the user will use a cross-over cable to connect directly to the controller's ethernet port with their laptop to gain access to the controller's web pages. Now that tablets (devices that don't have an ethernet port - iPad, for example) are becoming more common, I need to find a device that will convert the controller's ethernet port into a wireless access point so that the user can connect to the controller's web pages via Wi-Fi or Bluetooth. It's worth noting that this wireless device that I'm looking for will NOT be permanently installed on the controller. It will be a portable device that the user will use on any of his controllers when he needs to make a connection to the controller. If you know of a device that will solve the scenario that I mention above, please share your info.

  • Default Gateway solution on NAT'd network (best options)

    - by kwiksand
    I've recently changed a network from a bunch of machines exposed directly to the net to a more security-conscious, firewall-fronted network with a DMZ for public services. Everything's mostly working perfectly now, but I've got the old problem of NAT loopback, where a machine within the LAN wants to access a public service via the public/external IP. I've solved this problem previously in a small/SOHO environment simply using the NAT loopback features of the router in use, or a simple iptables rule to do the same, but I want to make sure I make the most resilient choice with the least concern. It seems I can:

    1. Use iptables as I've said, to DNAT and MASQUERADE (changing source/destination) so the connection works correctly, i.e.:

        iptables -A PREROUTING -t nat -d ip.of.eth0.here -p tcp --dport 8080 -j DNAT --to 192.168.0.201:8080
        iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -p tcp --dport 8080 -d 192.168.0.201 -j MASQUERADE

    2. Use split DNS, with internal mappings for public IPs.

    3. Potentially do some route nastiness by setting the default gateway to use a different externally exposed IP to then come back in via the public route (messy).

    Someone also mentioned putting the default gateway within the DMZ (on Server Fault), but I can't find the post again. I'm sure this is a common issue for many with NAT'd networks, but I've not really seen the perfect solve-all when it comes to fixing this problem. What is your opinion?
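
    For comparison, the split-DNS option can be as small as a single dnsmasq directive on the LAN's resolver, so internal clients get the private address directly and no hairpin NAT is needed; a hedged sketch with a placeholder hostname:

        # /etc/dnsmasq.conf on the internal DNS server:
        # answer queries for the public name with the internal address
        address=/www.example.com/192.168.0.201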

  • How to use a DNS server to create simple HA (high availability) for my website?

    - by marc22
    Hello. How can I use a DNS server to create simple HA (high availability) for a website? For example, if my web server 192.168.0.120:80 is offline, traffic should go to 192.168.0.130:80 (for easier understanding I use internal IPs; in reality these will be at different hosting companies). You are right that "high availability" is the wrong word; of course I was thinking about failover. Using multiple IPs in A records is good for simple load balancing, but not if I want to notify the user about the failure (for example, display a page saying "Oops, something is wrong with our server, we are working on it") instead of "can't establish connection". I was thinking about setting up something like this: two DNS servers, one of them installed on the www server itself, both with a low TTL on my domain, and two NS records: the first pointing to the DNS server on my Apache server, the second to the other DNS server. If a user tries to connect, he will get the IP of the www server from the first DNS server; if that DNS server is offline (the www server is probably also down), the resolver will try the second NS record, which points to the other DNS server, and that one will point to the "backup" page. That's what I would like to do. If you have another idea, please share. A reverse proxy is not an option, because the IP of the server can change, or I may use another country for the backup.
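
    A hedged sketch of what the low-TTL zone on the two DNS servers might look like (example.com, the nameserver names and the addresses are placeholders); the primary answers with the live www address, the secondary with the backup page:

        ; zone fragment on the primary DNS server (runs on the www host)
        $TTL 60
        @    IN NS  ns1.example.com.
        @    IN NS  ns2.example.com.
        ns1  IN A   192.168.0.120
        ns2  IN A   192.168.0.130
        www  IN A   192.168.0.120    ; live server

        ; on the secondary DNS server, the same zone but:
        ; www  IN A   192.168.0.130  ; "we are working on it" page

    Note that resolvers do not try NS records strictly in order and cache NS and A records independently, so this only roughly approximates failover rather than guaranteeing it.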

  • VMware Server Host-Only Network Routing

    - by Chris
    I have a Windows 2008 web server machine running VMware Server. I have 3 VMs; all 3 are test servers, so security isn't really a concern. Each of them runs Windows 2008 Standard and some of them serve web content. My ISP only allows one MAC address to access the physical switch, however they give me 10 public IP addresses to use. My question is: if I put each VM on its own host-only network, how can I route all traffic from a specific public IP on the host to the corresponding host-only adapter, and therefore to the specific VM? For example, a single physical adapter on the host has the following public IPs assigned to it in Windows networking:

        74.208.14.10
        74.208.14.20
        74.208.14.30

    Each VM is on a host-only network:

        vm1 - 192.168.196.1
        vm2 - 192.168.197.1
        vm3 - 192.168.198.1

    On the host, I want to route all traffic from 74.208.14.10 to VM1, 74.208.14.20 to VM2 and 74.208.14.30 to VM3, without using VMware NAT or bridged connections. I want each server to appear to have its own public IP address. My guess is I can modify the route tables somehow, or perhaps use ICS... but I'm not sure how.
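
    If only specific TCP services (e.g. HTTP) need to reach each VM, Windows' built-in port proxy can approximate the one-public-IP-per-VM effect without RRAS or VMware NAT. A hedged sketch run on the 2008 host; the ports are assumptions, and note this forwards TCP only and the VMs will see connections as coming from the host:

        netsh interface portproxy add v4tov4 listenaddress=74.208.14.10 listenport=80 connectaddress=192.168.196.1 connectport=80
        netsh interface portproxy add v4tov4 listenaddress=74.208.14.20 listenport=80 connectaddress=192.168.197.1 connectport=80
        netsh interface portproxy add v4tov4 listenaddress=74.208.14.30 listenport=80 connectaddress=192.168.198.1 connectport=80
        netsh interface portproxy show v4tov4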

  • Total newb having SSH and remote MySQL access problems

    - by kscott
    I don't often work with Linux or need to SSH into remote MySQL databases, so pardon my ignorance. For months I had been using the HeidiSQL client application to remotely access a MySQL database. Today two things happened: the DB moved to a new server and I updated HeidiSQL, and now I cannot log in to the MySQL server. When attempting, I get this message from Heidi:

        SQL Error (2003) in statement #0: Can't connect to MySQL server on 'localhost' (10061)

    If I use PuTTY, I can connect to the server and get MySQL access through the command line, including fetching data from the DB. I assume this means my credentials and address are correct, but I do not understand why putting those same details into HeidiSQL's SSH tunnel info won't work. I also downloaded MySQL Workbench and attempted to set up a connection through that client, and got this message:

        Cannot Connect to Database Server
        Your connection attempt failed for user 'myusername' from your host to server at localhost:3306:
        Lost connection to MySQL server at 'reading initial communication packet', system error: 0

        Please:
        1. Check that mysql is running on server localhost
        2. Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed)
        3. Check the myusername has rights to connect to localhost from your address (mysql rights define what clients can connect to the server and from which machines)
        4. Make sure you are both providing a password if needed and using the correct password for localhost connecting from the host address you're connecting from

    From Googling around I see that it could be related to the MySQL bind-address, but I am a third-party sub-contractor with no access to the MySQL settings of this box, and the system admin is assuring me that I'm an idiot and need to figure it out on my end. This is completely possible, but I don't know what else to try.

    Edit 1 - The client settings I am using. In Heidi and MySQL Workbench I am using the following:

        SSH host + port: theHostnameOfTheRemoteServer.com:22  {this is the same host I can Putty to}
        SSH Username:    mySSHusername   {the same user name I use for my Putty connection}
        SSH Password:    mySSHpassword   {the same password for the Putty connection}
        Local port:      3307
        MySQL host:      theHostnameOfTheRemoteServer.com
        MySQL User:      mySQLusername   {which I can connect with once in with Putty}
        MySQL Password:  mySQLpassword   {which works once in with Putty}
        Port:            3306
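
    In an SSH-tunnel setup, the "MySQL host" field is resolved from the remote server's point of view, so it usually needs to be 127.0.0.1 rather than the public hostname (MySQL is often bound to 127.0.0.1 only). A hedged way to test the same path outside any GUI client, using the hostnames and ports from the settings above:

        # forward local port 3307 to MySQL as seen from the remote box itself
        ssh -N -L 3307:127.0.0.1:3306 mySSHusername@theHostnameOfTheRemoteServer.com

        # then, in a second terminal, connect the MySQL client through the tunnel
        mysql -h 127.0.0.1 -P 3307 -u mySQLusername -p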

  • IIS7 - Web Deployment Tool - SetParam/SetParamFile to set http and https bindings + Cert

    - by Andras Zoltan
    Hi, we're currently using the MS Web Deployment Tool to sync a live website and some web services from a staging box to two live servers. The staging box hosts the site on any IP on port 17000, whereas the two live servers are load-balanced and have a different IP for each of them. At present, I generate two separate packages for deployment, one for each machine, using the sync operation and specifying a DestinationBinding parameter as follows (split across multiple lines to make it easier to read!):

        msdeploy -verb:sync -source:WebServer,computerName=localhost -dest:package="machinename.zip"
                 -setParam:type="DestinationBinding",scope="SiteName",value="ip_address:port:"

    I run this twice, with a different target filename and IP address for each of the two machines. When it comes to deployment, I simply do a sync from each package to its respective live site. I know, I know, I should be able to do it by generating one parameterised package and then perhaps using the SetParamFile switch for each of the two servers; believe me I'd like to, but the documentation on doing this is frankly non-existent. Now I need to configure and deploy both HTTP and HTTPS bindings for this site, including the SSL cert that is to be used. I've added an SSL binding for the site on the staging box, which uses a development cert (which will need to be replaced - or should the staging box be using the live cert?), and now the above command line has the effect of replacing the target IP on both the http and https entries. It appears that I cannot specify multiple bindings plus the cert information in the DestinationBinding value in the -setParam above, so does anyone know how I would go about doing this? Any help greatly appreciated.
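
    For the single-parameterised-package route, -setParamFile expects a small XML file of setParameter elements, one file per target server. A hedged sketch of what that might look like for the binding parameter already used above; the file name, server name and values are placeholders, and this does not by itself solve the separate cert question:

        <!-- server1.params.xml -->
        <parameters>
          <setParameter name="DestinationBinding" value="10.0.0.21:80:" />
        </parameters>

        msdeploy -verb:sync -source:package="site.zip" -dest:WebServer,computerName=SERVER1 -setParamFile:server1.params.xml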
