Search Results

Search found 4990 results on 200 pages for 'traffic measurement'.

Page 51/200

  • Setting up vsftpd, hangs on list command

    - by Victor
    I installed vsftpd and configured it. When I try to connect to the FTP server using Transmit, it manages to connect but hangs on Listing "/". Then I get a message stating: Could not retrieve file listing for "/". Control connection timed out. Does it have anything to do with my iptables? My rules are as listed:

        *filter
        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allows all outbound traffic
        # You can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT
        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 21 -j ACCEPT
        # Allows SSH connections
        #
        # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        #
        -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT
        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        # log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT
        COMMIT
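
    A likely culprit is FTP passive mode: the control connection on port 21 is allowed, but the LIST data connection arrives on a random high port and hits the default-deny INPUT policy. A minimal sketch of two common fixes, assuming a stock vsftpd and the rules above (the 40000-40100 range is an arbitrary choice):

        # Option 1: load the FTP connection-tracking helper so the existing
        # ESTABLISHED,RELATED rule matches the data connection
        modprobe nf_conntrack_ftp

        # Option 2: pin vsftpd's passive range in /etc/vsftpd.conf ...
        #   pasv_min_port=40000
        #   pasv_max_port=40100
        # ... and open that range explicitly
        iptables -A INPUT -p tcp --dport 40000:40100 -j ACCEPT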

  • Encrypting peer-to-peer application with iptables and stunnel

    - by Jonathan Oliver
    I'm running legacy applications in which I do not have access to the source code. These components talk to each other using plaintext on a particular port. I would like to be able to secure the communications between the two or more nodes using something like stunnel to facilitate peer-to-peer communication, rather than using a more traditional (and centralized) VPN package like OpenVPN, etc. Ideally, the traffic flow would go like this:

        1. app@hostA:1234 tries to open a TCP connection to app@hostB:1234.
        2. iptables captures and redirects the traffic on port 1234 to stunnel running on hostA at port 5678.
        3. stunnel@hostA negotiates and establishes a connection with stunnel@hostB:4567.
        4. stunnel@hostB forwards any decrypted traffic to app@hostB:1234.

    In essence, I'm trying to set this up so that any outbound traffic (generated on the local machine) to port N forwards through stunnel to port N+1, and the receiving side receives on port N+1, decrypts, and forwards to the local application at port N. I'm not particularly concerned about losing the hostA origin IP address/machine identity when stunnel@hostB forwards to app@hostB, because the communications payload contains identifying information. The other trick in this is that normally with stunnel you have a client/server architecture. But this application is much more P2P, because nodes can come and go dynamically, and hard-coding some kind of "connection = hostN:port" in the stunnel configuration won't work.
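
    A minimal sketch of the fixed two-host case, assuming stunnel 4.x configuration syntax and the port numbers above (the dynamic-peer problem remains, since the connect target is still static):

        # hostA: /etc/stunnel/p2p.conf
        [p2p-out]
        client = yes
        accept = 127.0.0.1:5678
        connect = hostB:4567

        # hostA: divert locally generated traffic for port 1234 into stunnel
        # (! -o lo keeps stunnel's own loopback delivery out of the redirect)
        iptables -t nat -A OUTPUT ! -o lo -p tcp --dport 1234 -j REDIRECT --to-ports 5678

        # hostB: /etc/stunnel/p2p.conf
        [p2p-in]
        accept = 4567
        connect = 127.0.0.1:1234

    For true peer-to-peer, each node would run both sections, and something would have to rewrite the connect line (or run one client section per known peer) as nodes join and leave.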

  • Windows Server 2003 IPSec Tunnel Connected, But Not Working (Possibly NAT/RRAS Related)

    - by Kevinoid
    Configuration: I have set up a "raw" IPSec tunnel between a Windows Server 2003 (SBS) machine and a Netgear FVG318 according to the instructions in Microsoft KB816514. The configuration is as follows (using the same conventions as the article):

        NetA        | SBS2003   | FVG318   | NetB
        10.0.0.0/24 | 216.x.x.x | 69.y.y.y | 10.0.254.0/24

    Both the Main Mode and Quick Mode Security Associations are successfully completed and appear in the IP Security Monitor. I am also able to ping the SBS2003 server on its private address from any computer on NetB.

    The Problem: Any traffic sent from a computer on NetA to NetB, or from SBS2003 to NetB (excluding ICMP ping responses), is sent out on the public network interface outside the IPSec tunnel (no encryption or header authentication, as if the tunnel were not there). Pings sent from a computer on NetB to a computer on NetA successfully reach computers on NetA, but the responses are silently discarded by SBS2003 (they do not go out in the clear and do not generate any encrypted traffic).

    Possible Solutions: Incorrect configuration - I could have mistyped something somewhere, or KB816514 could be incorrect in some way. I have tried very hard to eliminate this option: I have re-created the configuration several times and tried tweaking and adjusting all the settings I could, without success (most changes prevent the SA from being established). NAT/RRAS - I have seen multiple posts elsewhere suggesting that this could be due to interaction between NAT and the IPSec filters. Possibly the NetA private addresses get rewritten to 216.x.x.x before being compared with the Quick Mode IPSec filters and don't get tunneled because of the mismatch. In fact, The Cable Guy article from June 2005, "TCP/IP Packet Processing Paths", suggests that this is the case (see steps 2 and 4 of the Transit Traffic path). If this is the case, is there a way to exclude NetA-NetB traffic from NAT? Any thoughts, ideas, suggestions, and/or comments are appreciated.

  • Offloading backups to secondary network

    - by user1467163
    I'm trying to solve a problem. Currently, we are constantly backing up and have no budget for additional servers. Our production network is still 10/100 and handles VoIP, SQL, plus our backup traffic, and I'd like to offload the backup traffic onto a secondary network. All of our servers have secondary NICs that are not in use, and all support gigabit (our switching hardware does not - a topic for another day).

    I'd like to move my backups off the production network, but I am having a hard time getting the computers to communicate. I am using a Netgear GS724T switch for the backup network, chosen for cost and because I have used them extensively on networks saturated with ghosting traffic, so I know it's up to the task. I have defined a VLAN, with ports that are not members of any other VLAN. All traffic is untagged on the VLAN. I have set the servers with 192.168.1.10 and 192.168.1.11 addresses and a 255.255.255.0 netmask, and I have tried a blank gateway, the server's own 192.168.1.x address as the gateway, and the switch's production-side IP as the gateway. The machines cannot find each other. DNS addresses are blank because I am going purely by IP for now... Any ideas how to get these machines to talk? They are Windows machines, running Server 2008 R2 and 2003 R2. Thanks!
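
    On a flat backup VLAN no gateway is needed at all; a sketch of the per-server setup, assuming the secondary NIC has been renamed "Backup" in Network Connections (the name is an assumption):

        netsh interface ip set address name="Backup" static 192.168.1.10 255.255.255.0
        rem no default gateway on the backup NIC, so production routing is untouched
        ping 192.168.1.11

    If that still fails, the usual suspects are the switch port PVID not matching the new VLAN, or Windows Firewall treating the new subnet as an unidentified (public) network.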

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem. I have services bound to the IP address of eth0 (e.g. 192.168.0.20) and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:

        iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20

    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:

        iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40

    My problem is that the SNAT rule is not always run. Inbound traffic causes a connection tracking entry like this:

        [root]# conntrack -L -p udp
        udp 17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2

    which means the POSTROUTING chain is not run, and outbound traffic leaves with the real IP address as the source. I am thinking I can set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work?

    Edit - Alternative question: Is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to the network or detached when a shared IP address is swapped between servers?
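
    One caution before reaching for NOTRACK: NAT depends on connection tracking, so untracked packets would no longer be DNATed either. The alternative question at the end has a direct answer in Linux, though - let the service bind the shared address even while the alias interface is absent (a sketch; persist the setting in /etc/sysctl.conf to survive reboots):

        # allow bind() to an address not currently assigned to any interface
        sysctl -w net.ipv4.ip_nonlocal_bind=1

    With the service bound to 192.168.0.40 directly, replies carry the shared source address and, in principle, both NAT rules can go away.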

  • Allow SFTP in iptables

    - by Kevin Orriss
    I have just purchased a VPS from Linode and am going through the setup guide. I have everything running (apache2, php, mysql etc.) but I am being denied access via SFTP when using FileZilla to upload a file. Now, this is my second time installing the server, as I missed a section out the first time. I was able to connect to my server through SFTP in FileZilla the first time, and the thing I missed out was adding a new user and editing the iptables in the firewall. So it would seem that the guide I have been following has blocked SFTP but allowed SSH. Here is the iptables file:

        *filter
        # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
        # Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT
        # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT
        # Allow SSH connections
        #
        # The -dport number should be the same port number you set in sshd_config
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT
        COMMIT

    All I would like is the line I need to put in there to allow SFTP over port 22. Thank you for reading this.
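
    No extra line should be needed: SFTP is a subsystem of SSH and rides the same TCP connection, so the existing port 22 rule already covers it. A quick sanity check on the server (the sftp-server path is distro-dependent and an assumption here):

        # confirm the SFTP subsystem is enabled
        grep -i '^Subsystem' /etc/ssh/sshd_config
        # expected something like:
        #   Subsystem sftp /usr/lib/openssh/sftp-server

    If that line is present, the next things to check are that FileZilla is set to SFTP (not FTP) and that the new user has a valid shell and home directory.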

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver that's currently hosting two Wordpress sites and some Java-based collaboration software. The server has 2G of memory and is currently using about 1.8G of the available memory. Right now what's on here is pretty much a pilot project that's getting negligible traffic, so I think it's pretty clear that I'll be needing more memory. I was wondering how, if I was to release it, I might anticipate my memory needs based on the traffic it gets. I've poked around on Google and what I've found has been a bit tenuous. Is there a good heuristic one should use when calculating memory demands as a function of the base (no traffic) load on the server? For reference, the output of free -m can be seen below:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like actual memory used, not an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be experimentally tested, so here's free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B to figure this out is to add a bunch of swap space to the server, give it some traffic and adjust according to the amount that swap gets used. I was just wondering if anyone had a good rule of thumb to estimate how much memory I should plan on in advance... or if what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).
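
    A common back-of-envelope approach: measure the idle footprint, load-test, and divide the growth by the number of concurrent clients to get a per-client cost; headroom is then (total - idle) / per-client. A sketch with made-up numbers (the 20 MB figure is purely an assumption to illustrate):

        idle_mb=1832        # from free -m with everything running but no traffic
        total_mb=2048
        per_client_mb=20    # measured: (used under load - idle) / concurrent clients
        echo $(( (total_mb - idle_mb) / per_client_mb ))  # clients to spare before swapping

    For Apache specifically, the per-client cost is roughly the resident size of one worker process, so capping MaxClients to fit available memory is the standard guard against a swap death spiral.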

  • LDAPS being redirected to 389

    - by Ikkoras
    We're trying to perform an LDAPS bind to a server which blocks 389 with a firewall, so all traffic must travel over 636. In our test lab we're connecting to a test LDAP (located on the same server) which does not have this firewall, so both ports are exposed. Running ldp.exe on the test server we generate the trace below, which seems to suggest that it is successfully binding over 636. However, if we monitor the traffic with Wireshark, all the traffic is being sent to 389 with no attempt to even contact 636. Other tools will bind only with SSL on 636 or without SSL on 389, which seems to suggest it is behaving correctly, but Wireshark shows 389. On the test server we are using RawCap to capture the local loopback traffic. Any ideas?

        0x0 = ldap_unbind(ld);
        ld = ldap_sslinit("WIN-GF49504Q77T.test.com", 636, 1);
        Error 0 = ldap_set_option(hLdap, LDAP_OPT_PROTOCOL_VERSION, 3);
        Error 0 = ldap_connect(hLdap, NULL);
        Error 0 = ldap_get_option(hLdap,LDAP_OPT_SSL,(void*)&lv);
        Host supports SSL, SSL cipher strength = 128 bits
        Established connection to WIN-GF49504Q77T.test.com.
        Retrieving base DSA information...
        Getting 1 entries:
        Dn: (RootDSE)
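
    A quick way to take the client out of the equation is to exercise port 636 directly; if the handshake below succeeds, the listener is fine and the 389 traffic is a client-side choice (this assumes OpenSSL is available on some machine that can reach the server):

        openssl s_client -connect WIN-GF49504Q77T.test.com:636 -showcerts

    It is also worth filtering the capture to tcp.port == 636, since unrelated processes doing AD lookups on the same box can produce 389 chatter that has nothing to do with the ldp.exe session.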

  • Configuring iptables rules for HAProxy and others

    - by MLister
    I have the following relevant settings for HAProxy:

        defaults
            log     global
            mode    http
            option  httplog
            option  dontlognull
            retries 3
            option  redispatch
            maxconn 500
            contimeout 5s
            clitimeout 15s
            srvtimeout 15s

        frontend public
            bind *:80
            option http-server-close
            option http-pretend-keepalive
            option forwardfor
            # ACLs
            ...

    I have three backends (including an Nginx server) configured in HAProxy, all listening on different ports of 127.0.0.1. And my iptables config is this:

        *filter
        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allows all outbound traffic
        # You can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT
        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT
        # Allows SSH connections
        #
        # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        # log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT
        COMMIT

    My questions are: Would the above iptables config work with the settings/options in my HAProxy config? I am also running a postgres and a redis server on the same machine; what settings do I need to adjust for these two to make them work with iptables?
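
    Since HAProxy reaches its backends over 127.0.0.1, the -A INPUT -i lo -j ACCEPT rule already covers that path, and the same holds for postgres and redis as long as they bind to loopback (the postgres default; for redis, set bind 127.0.0.1 explicitly in redis.conf). Nothing needs adjusting in that case; only if they must accept remote connections would rules like these be needed (the source address is a placeholder):

        iptables -A INPUT -p tcp -s 192.0.2.10 --dport 5432 -j ACCEPT   # postgres
        iptables -A INPUT -p tcp -s 192.0.2.10 --dport 6379 -j ACCEPT   # redis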

  • How do I tunnel an HTTPS proxy through a virtual machine (VMWare)

    - by Kyle
    I have a personal setup at home using VMware Workstation. I also have a set of virtual private machines that run Squid, and therefore provide me HTTPS proxy tunnels. Using Proxifier, I can tunnel all traffic for given applications through these tunnels. However, I also have a few virtual machines for dev/staging/experimentation/etc. I generally just use NAT to provide Internet access to the machines, and if I need to use these proxies, I can just set up Proxifier (or a Linux equivalent) to pipe the traffic through them. No problem. But... I got to thinking: wouldn't it be great if I could assign these proxy tunnels to a virtual machine, so that when I start up the VM, it has instant-on access through the tunnel and not my local connection? (EDIT: Of course, it would USE my local connection, but it would tunnel traffic through the proxy.) To be more clear: I want a solution that binds the proxy to a VM, so that when I start the VM, I don't have to use a proxy client to connect to the tunnel - I am already piping all traffic from that VM through that proxy. I did a bit of searching, and the closest thing I could find was this: How to route public static IP to a virtual machine on a vmware ESXi host? Which wasn't all that applicable. The proxies are protected by user/pass but do not filter by IP. Again, they are HTTPS proxies set up through Squid. Any ideas on how to make this happen? Thanks a ton.
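
    One way to get "instant-on" proxying without a client inside each VM is a transparent redirector on whatever gateway the VMs NAT through - redsocks is the usual tool for turning an HTTP CONNECT proxy into something iptables can redirect to. A sketch, assuming a Linux gateway (or small helper VM) sits between the dev VMs and the Internet, and with hypothetical proxy details:

        # /etc/redsocks.conf  (base {} logging section omitted for brevity)
        redsocks {
            local_ip = 0.0.0.0;
            local_port = 12345;
            ip = proxy.example.com;   // the Squid tunnel endpoint (placeholder)
            port = 3128;
            type = http-connect;
            login = "user";
            password = "pass";
        }

        # divert TCP from the VM NAT subnet (vmnet8 is VMware's default NAT network;
        # the subnet below is an assumption)
        iptables -t nat -A PREROUTING -s 192.168.136.0/24 -p tcp -j REDIRECT --to-ports 12345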

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index So far in this guide we have an IRM Server up and running; however, I skipped over SSL configuration in the previous article because I wanted to focus on it in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing. Contents: Setting up a one way, self signed SSL certificate in WebLogic; Setting up an official SSL certificate in Apache 2.x; Configuring Apache to proxy traffic to the IRM server. There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly with the WebLogic Server running the IRM service. However, in a production environment and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service, and this article will go over the configuration of SSL in WebLogic and also in Apache. Like in the past articles, we are going to use two host names in the configuration below: irm.company.com will refer to the public Apache server, and irm.company.internal will refer to the internal WebLogic IRM server.

    Setting up a one way, self signed SSL certificate in WebLogic

    First let's look at creating just a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment; however, the downside is that no browsers are going to trust this certificate you create, and you'll need to manually install the certificate onto any machines communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use, and I explain that in the next section. But for now let's go through creating, installing and testing a self signed certificate. We use a library in Java to create the certificates; open a console and run the following commands. Note you should choose your own secure passwords whenever you see password below.

        [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh
        [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
        [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo"
        [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password
        [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA

    We now have two Java key stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running, log in to the WebLogic Console at http://irm.company.intranet:7001/console and do the following: In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1.

    If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or you can switch to the CONTROL tab, select IRM_server1 and then select the Shutdown menu and then Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses its own demo identity and trust. We are now going to switch to the self signed ones we've just created. So select the Change button, switch to Custom Identity and Custom Trust and hit Save. Now we have to complete the resulting fields; the settings I've used in my evaluation server are below.

        Identity
          Custom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks
          Custom Identity Keystore Type: JKS
          Custom Identity Keystore Passphrase: password
          Confirm Custom Identity Keystore Passphrase: password
        Trust
          Custom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks
          Custom Trust Keystore Type: JKS
          Custom Trust Keystore Passphrase: password
          Confirm Custom Trust Keystore Passphrase: password

    Now click on the SSL tab for IRM_server1 and enter the alias and passphrase; in my demo here the details are:

        Identity
          Private Key Alias: trustself
          Private Key Passphrase: password
          Confirm Private Key Passphrase: password

    And hit Save. Now let's test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server; a quick reminder on how to do this:

        [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin
        [oracle@irm /] ./startManagedWeblogic IRM_server1

    Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.intranet:16101/irm_rights. Note in the example image on the right the port is 7002, because it's a system that has the IRM services installed on the Admin server; this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser is going to complain that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and, more annoyingly, from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being informed this certificate is not trusted. Follow these instructions (which are for Internet Explorer 8; they may vary for your version of IE): Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.intranet:16101/irm_rights. IE will complain about the certificate; click on Continue to this website (not recommended). From the IE Tools menu select Internet Options, and from the resulting dialog select Security, then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which matches the server you are accessing, e.g. https://irm.company.intranet/ and select OK. Now refresh the page you were accessing and next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate, and the Install Certificate... button should be enabled. Click on this to start the wizard. Click Next and you'll be asked where you should install the certificate. Change the option to Place all certificates in the following store. Select Browse, choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate; answer Yes. You also need to import the root signed certificate into the same location, so once again select the red Certificate Error option and this time, when viewing the certificate, switch to the Certification Path tab and you should see a CertGenCAB certificate. Select this, then click on View Certificate and go through the same process as above to import the certificate into the store. Finally, close all instances of the IE browser and re-access the IRM server URL again; this time you should not receive any errors.

    Setting up an official SSL certificate in Apache 2.x

    At this point we now have an IRM server that you can communicate with over SSL. However, this certificate isn't trusted by any browser because its path of trust doesn't end in a recognized certificate authority (CA). Also, you are communicating directly with the WebLogic Server over a non standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy this to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What I'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA. The first step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service on Oracle Enterprise Linux 5 update 3 and Apache 2.2.3, which came with OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign, and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are:

        [oracle@irm /] cd /usr/bin
        [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048
        Generating RSA private key, 2048 bit long modulus
        ............................+++
        .........+++
        e is 65537 (0x10001)
        Enter pass phrase for irm-apache-server.key:
        Verifying - Enter pass phrase for irm-apache-server.key:
        [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr
        Enter pass phrase for irm-apache-server.key:
        You are about to be asked to enter information that will be incorporated
        into your certificate request.
        What you are about to enter is what is called a Distinguished Name or a DN.
        There are quite a few fields but you can leave some blank
        For some fields there will be a default value,
        If you enter '.', the field will be left blank.
        -----
        Country Name (2 letter code) [GB]:US
        State or Province Name (full name) [Berkshire]:CA
        Locality Name (eg, city) [Newbury]:San Francisco
        Organization Name (eg, company) [My Company Ltd]:Oracle
        Organizational Unit Name (eg, section) []:Security
        Common Name (eg, your name or your server's hostname) []:irm.company.com
        Email Address []:[email protected]

        Please enter the following 'extra' attributes
        to be sent with your certificate request
        A challenge password []:testing
        An optional company name []:

    You must make sure to remember the pass phrase you used in the initial key generation; you will need it when later configuring Apache. In the /usr/bin directory there are now two new files. irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate, and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file, which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located at /etc/httpd/conf.d/ssl.conf and I've added the following lines:

        <VirtualHost irm.company.com>
            # Setup SSL for irm.company.com
            ServerName irm.company.com
            SSLEngine On
            SSLCertificateFile /oracle/secure/irm.company.com.crt
            SSLCertificateKeyFile /oracle/secure/irm.company.com.key
            SSLCertificateChainFile /oracle/secure/gd_bundle.crt
        </VirtualHost>

    Restart Apache (apachectl restart) and I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should now see an Apache test page delivered to me over HTTPS.

    Configuring Apache to proxy traffic to the IRM server

    The final piece in setting up SSL is to have Apache proxy requests for the IRM server, but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache, which you can download here from Oracle. Download the zip file and extract it onto the server. The extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32bit on an Oracle Enterprise Linux, 64 bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is 32 or 64 bit. Mine is a 32bit server, so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. Then from the resulting lib folder, copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plugin will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom:

        LoadModule weblogic_module modules/mod_wl.so

        <IfModule mod_weblogic.c>
           WebLogicHost irm.company.internal
           WebLogicPort 16100
           WLLogFile /tmp/wl-proxy.log
        </IfModule>

        <Location /irm_rights>
           SetHandler weblogic-handler
        </Location>

        <Location /irm_desktop>
           SetHandler weblogic-handler
        </Location>

        <Location /irm_sealing>
           SetHandler weblogic-handler
        </Location>

        <Location /irm_services>
           SetHandler weblogic-handler
        </Location>

    Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note above I have included all four of the Locations you might wish to proxy: /irm_rights is the URL to the management website, /irm_desktop is the URL used for the IRM Desktop to communicate, /irm_sealing is for web services based document sealing, and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, I've mentioned them above. Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted were some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, there are some more files. Copy the following into the /usr/lib/httpd/modules/ folder:

        libwlssl.so
        libnnz11.so
        libclntsh.so.11.1

    Now the documentation states that you should only need to do this, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH and point it to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error:

        [crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0

    So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; in that case, simply add this path to it.

        LD_LIBRARY_PATH=/usr/lib/httpd/modules/
        export LD_LIBRARY_PATH

    Now the WebLogic plugin uses an Oracle Wallet to store the required certificates. You'll need to copy the self signed certificate from the IRM server over to the Apache server. Copy MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self signed certificate from the IRM server. The orapki tool was included in the bin folder of the Apache 1.1 plugin zip you extracted.

        orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
        orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only

    Finally, change the httpd.conf to reflect that we want the WebLogic Apache plugin to use HTTPS/SSL and not just plain HTTP:

        <IfModule mod_weblogic.c>
           WebLogicHost irm.company.internal
           WebLogicPort 16101
           SecureProxy ON
           WLSSLWallet /oracle/secure/my-wallet
           WLLogFile /tmp/wl-proxy.log
        </IfModule>

    Then restart Apache once more and you can go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights.
    At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.
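
    A couple of quick checks to confirm the finished chain, from any machine that can reach the public interface (openssl and curl assumed available; hostnames as used throughout this guide):

        # the CA-signed certificate and chain should be presented on 443
        openssl s_client -connect irm.company.com:443 -showcerts
        # and the proxied IRM context should answer end to end
        curl -k https://irm.company.com/irm_rights -o /dev/null -s -w '%{http_code}\n'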

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch, following this guide. Details as below:

        mkdir /usr/local/mysqlc732/
        cd /usr/local/src/mysql-cluster-gpl-7.3.2
        cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON
        make
        make install

    Everything works fine until this step:

        # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n
        FLEXASYNCH - Starting normal mode
        Perform benchmark of insert, update and delete transactions
        1 number of concurrent threads
        80 number of parallel operation per thread
        100 transaction(s) per round
        2 iterations
        Load Factor is 80%
        25 attributes per table
        1 is the number of 32 bit words per attribute
        Tables are with logging
        Transactions are executed with hint provided
        No force send is used, adaptive algorithm used
        Key Errors are disallowed
        Temporary Resource Errors are allowed
        Insufficient Space Errors are disallowed
        Node Recovery Errors are allowed
        Overload Errors are allowed
        Timeout Errors are allowed
        Internal NDB Errors are allowed
        User logic reported Errors are allowed
        Application Errors are disallowed
        Using table name TAB0
        NDBT_ProgramExit: 1 - Failed

    ndb_cluster.log:

        WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned eror: 'No free node id found for mysqld(API).'

    I have also recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in debug mode?

        # /usr/local/mysqlc732/bin/flexAsynch -h
        FLEXASYNCH
        Perform benchmark of insert, update and delete transactions
        Arguments:
        -t Number of threads to start, default 1
        -p Number of parallel transactions per thread, default 32
        -o Number of transactions per loop, default 500
        -l Number of loops to run, default 1, 0=infinite
        -load_factor Number Load factor in index in percent (40 -> 99)
        -a Number of attributes, default 25
        -c Number of operations per transaction
        -s Size of each attribute, default 1 (PK is always of size 1, independent of this value)
        -simple Use simple read to read from database
        -dirty Use dirty read to read from database
        -write Use writeTuple in insert and update
        -n Use standard table names
        -no_table_create Don't create tables in db
        -temp Create table(s) without logging
        -no_hint Don't give hint on where to execute transaction coordinator
        -adaptive Use adaptive send algorithm (default)
        -force Force send when communicating
        -non_adaptive Send at a 10 millisecond interval
        -local 1 = each thread its own node, 2 = round robin on node per parallel trans, 3 = random node per parallel trans
        -ndbrecord Use NDB Record
        -r Number of extra loops
        -insert Only run inserts on standard table
        -read Only run reads on standard table
        -update Only run updates on standard table
        -delete Only run deletes on standard table
        -create_table Only run Create Table of standard table
        -drop_table Only run Drop Table on standard table
        -warmup_time Warmup Time before measurement starts
        -execution_time Execution Time where measurement is done
        -cooldown_time Cooldown time after measurement completed
        -table Number of standard table, default 0
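
    The 'No free node id found for mysqld(API)' warning usually means every [mysqld]/[api] slot in the cluster configuration is either in use or pinned to another HostName - flexAsynch connects as an ordinary API node and needs a spare slot. A sketch of the fix in the management node's config.ini (add as many empty slots as concurrent test clients):

        [api]
        # empty slot - no HostName, so any host (including 127.0.0.1) can claim it
        [api]

    Then reload the configuration and restart the management node, e.g. ndb_mgmd -f config.ini --reload, before re-running flexAsynch.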

  • New Walkthrough Capability in AutoVue 20

    - by warren.baird
    New in AutoVue 20 is the capability to view a 3D model of a building from the inside - this is a very powerful tool for anyone who needs to work with models of plants, refineries, or other buildings. All of the standard AutoVue functionality is available, so you can click on any part of the building to get attribute data, manipulate the view, do measurement, etc. For example, in the image below we've made the architectural model (walls, floors, etc.) transparent, but left the electrical and mechanical models opaque, so it's easy to see where the wires and piping run behind the walls. Additionally, you can bring together different files and different types of files using our digital mockup capability - in the image below the heating and air conditioning system on the left came from one file, the electrical box on the right came from another file, and the model of the room came from yet a third file, but with everything brought together into AutoVue you can do things like use our measurement capability to ensure there's enough space to get maintenance equipment down the hallway, before the building is even built. For more information about Walkthrough, you can view a video demo at http://download.oracle.com/autovue/3D_walkthrough_movie.wmv We're very excited about this new capability - do you think this will be useful for you in your work with AutoVue? Let us know!

  • Connection Pooling is Busted

    - by MightyZot
    A few weeks ago we started getting complaints about performance in an application that has performed very well for many years. The application is an n-tier application that uses ADODB with the SQLOLEDB provider to talk to a SQL Server database. Our object model is written in such a way that each public method validates security before performing requested actions, so there is a significant number of queries executed to get information about file cabinets, retrieve images, create workflows, etc. (PaperWise is a document management and workflow system.) A common factor for these customers is that they have remote offices connected via MPLS networks.

    Naturally, the first thing we looked at was the query performance in SQL Profiler. All of the queries were executing within expected timeframes; most of them were so fast that the duration in SQL Profiler was zero. After getting nowhere with SQL Profiler, the situation was escalated to me. I decided to take a peek with Process Monitor. Procmon revealed some "gaps" in the TCP/IP traffic. There were notable delays between send and receive pairs. The send and receive pairs themselves were quite snappy, but quite often there was a notable delay between a receive and the next send. You might expect some delay because, presumably, the application is doing some thinking in-between the pairs. But comparing the procmon data at the remote locations with the procmon data for workstations on the local network showed that the remote workstations were significantly delayed. Procmon also showed a high number of disconnects.

    Wireshark traces showed that connections to the database were taking between 75ms and 150ms. Not only that, but connections to a file share containing images were taking 2 seconds! So I asked about a trust. Sure enough, there was a trust between two domains and the file share was on the second domain. Joining a remote workstation to the domain hosting the share containing images alleviated the time delay in accessing the file share. Removing the trust had no effect on the connections to the database.

    Microsoft Network Monitor includes filters that parse TDS packets. TDS is the protocol that SQL Server uses to communicate. There is a certificate exchange and some SSL that occurs during authentication. All of this was evident in the network traffic. After staring at the network traffic for a while, and examining packets, I decided to call it a night. On the way home that night, something about the traffic kept nagging at me. Then it dawned on me... at the beginning of the dance of packets between the client and the server, all was well. Connection pooling was working and I could see multiple queries getting executed on the same connection and ephemeral port. After a particular query, connecting to two different servers, I noticed that ADODB and SQLOLEDB started making repeated connections to the database on different ephemeral ports. SQL Server would execute a single query and respond on a port, then open a new port and execute the next query. Connection pooling appeared to be broken.

    The next morning I wrote a test to confirm my hypothesis. It turns out that the sequence causing the connection nastiness goes something like this:

        1. Make a connection to the database.
        2. Open a result set that returns enough records to require multiple roundtrips to the server.
        3. For each result, query for some other data in the database (this will open a new implicit connection).
        4. Close the inner result set and repeat for every item in the original result set.
        5. Close the original connection.

    Provided that the first result set returns enough data to require multiple roundtrips to the server, ADODB and SQLOLEDB will start making new connections to the database for each query executed in the loop. Originally, I thought this might be due to Microsoft's denial of service (DoS) attack protection. After turning those features off to no avail, I eventually thought to switch my queries to client-side cursors instead of server-side cursors. Server-side cursors are the default, by the way. Voila! After switching to client-side cursors, the disconnects were gone and the above sequence yielded two connections as expected.

    While the real problem is the amount of time it takes to make connections over these MPLS networks (100ms on average), switching to client-side cursors made the problem go away. Believe it or not, this is actually documented by Microsoft, and rather difficult to find. (At least it was while we were trying to troubleshoot the problem!) So, if you're noticing performance issues on slower networks, or networks with slower switching, take a look at the traffic in a tool like Microsoft Network Monitor. If you notice a high number of disconnects, and you're using fire-hose or server-side cursors, then try switching to client-side cursors and you may see the problem go away.

    Most likely, Microsoft believes this to be appropriate behavior, because ADODB can't guarantee that all of the data has been retrieved when you execute the inner queries. I'm not convinced, though, because the problem remains even after replacing all of the implicit connections with explicit connections and closing those connections in-between each of the inner queries. In that case, there doesn't seem to be a reason why ADODB can't use a single connection from the connection pool to make the additional queries, bringing the total number of connections to two. Instead, ADO appears to make an assumption about the state of the connection.

    I've reported the behavior to Microsoft and am waiting to hear from the appropriate team, so that I can demonstrate the problem. Maybe they can explain to us why this is appropriate behavior. :)

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain driven design got my attention, and while thinking about how this approach could help us I came across the following problem. In DDD, the common approach is to retrieve entities (or better, aggregate roots) from a repository which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source, and one must actively inform the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement during retrieval. Over time, these measurement values may change, and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots, and are disconnected from the data source. Does any of you have an idea on how to solve this following the DDD approach?

  • How can I configure Symantec Endpoint Protection Agent to allow access to windows shares?

    - by Peter Bernier
    I'm having some difficulties exposing a standard Windows file share on a Windows Embedded Standard 2009 device that is running Symantec Endpoint Protection Agent 5.1. I'm using simple file sharing to expose a particular directory. That share is visible locally on the machine and externally visible when I disable the endpoint protection agent. I've added a rule (and moved it to the top to ensure priority) allowing all hosts access on TCP ports 137, 138, 139, 445, and another rule allowing UDP access on ports 137, 138, 139. When I try to connect, two endpoint protection dialogs pop up saying:

        Traffic has been blocked from this application: NWLINK2 IPX Protocol Driver (nwlnkipx.sys)
        Traffic has been blocked from this application: IPv6 driver (tcpip6.sys)

    I'm not using IPv6 anywhere. Interestingly, I discovered a workaround in that I can white-list all traffic from the subnet the device is on, which meets my needs, but I'm still curious as to why my original approach wasn't successful. Can anyone suggest a reason why the above endpoint protection rules won't allow me to access Windows file shares on the device?

  • Setup Exchange 2007 ActiveSync web application on a separate server

    - by mwillmott
    Hello, I have Exchange 2007 installed on SBS 2008. I also run a web server on the network. I only have one static IP, and all traffic through port 443 is routed to the web server. I would like to publish the ActiveSync application externally. If I temporarily route 443 traffic to the SBS then it is published (along with OWA and everything else, which I don't want). Is there a way to host the ActiveSync application on the web server (Server 2008 with IIS7), or to get it to route traffic meant for the ActiveSync application? I have tried creating a site on the web server which uses the ActiveSync folder on the SBS, but that does not seem to work. Thanks, Michael

  • Best solution for Multi-WAN failover (inside & out)?

    - by Sean O
    Looking for a way to set up 2 ISPs in failover mode, for both incoming and outgoing traffic, for our small (<100 devices) network. The leading contender for now seems to be the Peplink Balance 310. However, a reseller I spoke with said it's great for 100% outgoing connectivity, but didn't seem to be confident in its ability to handle incoming traffic. This is important as we host our own web site, Exchange e-mail, and virtual desktops (RDP). Do any Peplink owners use this for failover of incoming traffic? Are there other devices I should be considering? We're currently using a Cisco 1800 series router & ASA 5500 series firewall, with Comcast & T-1 lines (the goal being to replace the T-1 with DSL/FiOS {whenever that becomes available}). Price range: ~$1000 - $2500 USD. Thanks.

  • Are there any tools for monitoring individual Apache virtual hosts in real-time?

    - by Dave Forgac
    I'm looking for a way to monitor and record Apache traffic, separated by virtual host. I am currently using Munin to capture this and other data for the entire server; however, I can't seem to find a way to do this by vhost. This link describes using a module called mod_watch, which is apparently no longer in development: http://www.freshnet.org/wordpress/2007/03/08/monitoring-apaches-virtualhost-with-munin/ The file that is listed as being compatible with Apache 2.x is reported to have problems with missing vhosts and reporting data correctly. Does anyone know of a reliable way to determine real-time traffic per vhost? If I can find this it should be easy enough to write a new Munin plugin. Edit: What I'd really like to see is something similar to the Apache server-status scoreboard page, with the number of connections / requests separated by virtual host. This would give me the ability to check which vhost may be experiencing a spike in traffic in real time, and would also provide the data needed for a Munin module (or some alternative performance monitoring / analysis system.)
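
    In the absence of a module, the access log can carry the per-vhost signal: log %v (the canonical ServerName) as the first field and tally it live. A sketch, assuming a Debian-style layout and one shared log for all vhosts:

        # in the Apache config - a custom format with the vhost up front
        #   LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost_combined
        #   CustomLog /var/log/apache2/access.log vhost_combined

        # running totals per vhost, printed as requests arrive
        tail -f /var/log/apache2/access.log | awk '{ count[$1]++; print $1, count[$1] }'

    A Munin plugin would do the same aggregation once per poll interval instead of continuously.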

  • How to set the preferred network interface in Linux

    - by Mike Cooper
    I have my network set up like this: http://docs.google.com/Doc?docid=0AZ1YxuLE4djaZGhqN2s1NmRfMjhjNjc0Ym1meg&hl=en In words: I have a machine (Calcium, running Arch Linux) that has two network interfaces. eth0 is hooked up to a router, and is gigabit. eth1 is hooked up directly to the university network over 10 Megabit. The router's uplink is hooked up to the university network as well, and it is also 10 Megabit. Currently (I believe) all traffic on Calcium is going through eth0, through the router, regardless of whether it is internal or external. (How can I confirm this?) Ideally, traffic that is destined for the internal network (192.168.10.0/24) would travel over eth0 to the router, and wherever it is going. ALL other traffic should go over eth1. I suspect that this behavior could be achieved with iptables? I don't really know where to start looking to learn that though, so any links would be appreciated.
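
    This is plain routing rather than iptables: a connected route for the internal subnet on eth0 plus a default route out eth1 does exactly this. A sketch with iproute2 (the gateway address is a placeholder for the university network's router):

        # internal subnet stays on the gigabit NIC
        ip route add 192.168.10.0/24 dev eth0
        # everything else leaves via eth1
        ip route replace default via 10.0.0.1 dev eth1

    ip route show then confirms which path each destination takes, as does tracepath to one internal and one external host.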

  • IPSEC tunnel Fortinet Transparent Mode to inside Fortinet firewall in NAT Mode does not respond to i

    - by TrevJen
    I have 2 Fortinet firewalls (fully patched); fw1 is providing an IPSEC tunnel in transparent mode. Beneath this firewall is fw2, a NAT firewall with a VIP address that has been confirmed to work. This configuration is required for my customers, who want to connect to a public address space inside of the tunnel, in order to prevent crossover in IP space. This configuration works great for traffic going outbound to the remote side of the tunnel, but not inbound. While sniffing the traffic, I can see the inbound traffic going out of the fw1, but it is never seen at the fw2.

        Cust Net > 10.1.1.100
            |
        FW1 > TRANSPARENT IPSEC
            |
        FW2 EXT > 99.1.1.1.100-VIP
            |
        FW2 NAT > 192.1.1.100-NAT

  • multicast tcpdump and subscriptions

    - by Karoly Horvath
    From the multicast howto:

        IP_ADD_MEMBERSHIP. Recall that you need to tell the kernel which multicast groups you are interested in. If no process is interested in a group, packets destined to it that arrive to the host are discarded.

    If you don't do that, you won't see those packets with tcpdump. Is it possible to subscribe to all multicast traffic so I can do a tcpdump of all existing traffic? I would think IGMP doesn't allow this, so probably not... but maybe you can configure a switch to still send all multicast traffic. Is that possible? Is it possible to do the subscription (for a specific IP) with a command line tool? (Note: I know how to do this in C... but would prefer to use an existing tool and not compile a separate program for this.)
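
    For the command-line part, socat can issue the IP_ADD_MEMBERSHIP for you; the group, port and interface below are assumptions to adapt:

        # join 239.1.1.1 on eth0 and print whatever arrives on UDP 5000
        socat UDP4-RECVFROM:5000,ip-add-membership=239.1.1.1:eth0,fork STDOUT &
        # with the membership active, a capture will now see that group's traffic
        tcpdump -i eth0 ip multicast

    Seeing all groups is a switch question rather than a host question: disabling IGMP snooping (or using an unmanaged switch) makes the switch flood multicast to every port, after which tcpdump in promiscuous mode sees it without any membership.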

  • Can I easily use a VPN to duplicate SSH Tunneling functionality?

    - by Steve V.
    Right now, when I want to use an unsecured wireless connection with my (Linux) laptop, I secure my connection using a variation of the method provided here. However, to the best of my knowledge, the (non-jailbroken) iPad does not allow applications to tunnel traffic through local ports. However, it does seem to allow certain VPN traffic. I have never set up, or even used, a VPN before. I'm looking for confirmation that I'm not barking up the wrong tree before I invest significant effort into setting up my own VPN server. If I want to secure my wireless iPad traffic over an unsecured wireless connection, would I be on the right track by looking at a VPN?

  • Linux policy routing - packets not coming back

    - by Bugsik
    I am trying to set up policy routing on my home server. My network looks like this:

        Host routed through VPN    VPN gateway            Internet link
        192.168.0.35/24    --->    192.168.0.5/24   --->  192.168.0.1 (DSL router)
                                   10.200.2.235/22  ....  10.200.0.1  (VPN server)

    The traffic from 192.168.0.32/27 should be and is routed through the VPN. I wanted to define some routing policies to route some traffic from 192.168.0.5 through the VPN as well - for a start, from the user with uid 2000. Policy routing is done using the iptables mark target and ip rule fwmark.

    The problem: when connecting as user 2000 from 192.168.0.5, tcpdump shows outgoing packets, but nothing comes back. Traffic from 192.168.0.35 works fine (there I am not using fwmark but a src policy). Here is my VPN gateway setup:

        # uname -a
        Linux placebo 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:49:02 UTC 2012 i686 i686 i386 GNU/Linux
        # iptables -V
        iptables v1.4.12
        # ip -V
        ip utility, iproute2-ss111117

    IPtables rules (all policies in table filter are ACCEPT):

        # iptables -t mangle -nvL
        Chain PREROUTING (policy ACCEPT 770K packets, 314M bytes)
         pkts bytes target prot opt in out source destination
        Chain INPUT (policy ACCEPT 767K packets, 312M bytes)
         pkts bytes target prot opt in out source destination
        Chain FORWARD (policy ACCEPT 5520 packets, 1920K bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 782K packets, 901M bytes)
         pkts bytes target prot opt in out source destination
           74  4707 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 owner UID match 2000 MARK set 0x3
        Chain POSTROUTING (policy ACCEPT 788K packets, 903M bytes)
         pkts bytes target prot opt in out source destination

        # iptables -t nat -nvL
        Chain PREROUTING (policy ACCEPT 996 packets, 51172 bytes)
         pkts bytes target prot opt in out source destination
        Chain INPUT (policy ACCEPT 7 packets, 432 bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 1364 packets, 112K bytes)
         pkts bytes target prot opt in out source destination
        Chain POSTROUTING (policy ACCEPT 2302 packets, 160K bytes)
         pkts bytes target prot opt in out source destination
          119  7588 MASQUERADE all -- * vpn 0.0.0.0/0 0.0.0.0/0

    Routing:

        # ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lan state UNKNOWN qlen 1000
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
               valid_lft forever preferred_lft forever
        3: lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
            inet 192.168.0.5/24 brd 192.168.0.255 scope global lan
            inet6 fe80::240:63ff:fef9:c38f/64 scope link
               valid_lft forever preferred_lft forever
        4: vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
            link/none
            inet 10.200.2.235/22 brd 10.200.3.255 scope global vpn

        # ip rule show
        0:     from all lookup local
        32764: from all fwmark 0x3 lookup VPN
        32765: from 192.168.0.32/27 lookup VPN
        32766: from all lookup main
        32767: from all lookup default

        # ip route show table VPN
        default via 10.200.0.1 dev vpn
        10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235
        192.168.0.0/24 dev lan proto kernel scope link src 192.168.0.5

        # ip route show
        default via 192.168.0.1 dev lan metric 100
        10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235
        192.168.0.0/24 dev lan proto kernel scope link src 192.168.0.5

    TCP dump showing no traffic coming back when the connection is made from 192.168.0.5 as user 2000:

        # tcpdump -i vpn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on vpn, link-type RAW (Raw IP), capture size 65535 bytes

        ### Traffic from user 2000 on 192.168.0.5 ###
        10:19:05.629985 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6887764 ecr 0,nop,wscale 4], length 0
        10:19:21.678001 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6891776 ecr 0,nop,wscale 4], length 0

        ### Traffic from 192.168.0.35 ###
        10:23:12.066174 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [S], seq 2294159276, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 557451322 ecr 0,sackOK,eol], length 0
        10:23:12.265640 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [S.], seq 2521908813, ack 2294159277, win 14480, options [mss 1367,sackOK,TS val 388565772 ecr 557451322,nop,wscale 1], length 0
        10:23:12.276573 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [.], ack 1, win 8214, options [nop,nop,TS val 557451534 ecr 388565772], length 0
        10:23:12.293030 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [P.], seq 1:480, ack 1, win 8214, options [nop,nop,TS val 557451552 ecr 388565772], length 479
        10:23:12.574773 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [.], ack 480, win 7776, options [nop,nop,TS val 388566081 ecr 557451552], length 0
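
    A common culprit for "marked packets go out, nothing comes back" is reverse-path filtering: the replies arrive on vpn, but the kernel validates the source against the main table (which points out lan) and silently drops them, since the fwmark only influences the outbound lookup. A sketch of the check, using loose mode (value 2):

        # see whether strict rp_filter is enabled
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.vpn.rp_filter
        # relax it for the vpn path (persist in /etc/sysctl.conf if it helps)
        sysctl -w net.ipv4.conf.all.rp_filter=2
        sysctl -w net.ipv4.conf.vpn.rp_filter=2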

  • Configuring Ubuntu for Global SOCKS5 proxy

    - by x50
    Does anyone know the best way to configure Ubuntu to use a SOCKS5 proxy for all network traffic? The server is Ubuntu Server, all CLI, so I cannot set this via the Proxy Settings GUI. We want to push all outbound traffic through the proxy (apt-get, http, https, etc.). I do need to separate SSH traffic so it stays local; everything else should hit the proxy server. Not that it matters, but I'm using Squid for the proxy server. I know this is easy on Mac and Windows as you can set a proxy on the actual network interface. Can you do the same on Ubuntu?
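
    There is no interface-level SOCKS setting in Linux; the usual trick is a local redirector (redsocks with type = socks5, pointed at the proxy) plus an OUTPUT rule that diverts everything except SSH and the proxy itself. A sketch, with 198.51.100.5 standing in for the proxy host and 12345 for the local redsocks port:

        iptables -t nat -N TO_PROXY
        iptables -t nat -A TO_PROXY -p tcp --dport 22 -j RETURN       # keep SSH direct
        iptables -t nat -A TO_PROXY -d 127.0.0.0/8 -j RETURN         # never divert loopback
        iptables -t nat -A TO_PROXY -d 198.51.100.5 -j RETURN        # don't proxy the proxy
        iptables -t nat -A TO_PROXY -p tcp -j REDIRECT --to-ports 12345
        iptables -t nat -A OUTPUT -p tcp -j TO_PROXY

    This covers TCP only; DNS over UDP either stays direct or needs a separate resolver arrangement.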
