Search Results

Search found 2118 results on 85 pages for 'trust metric'.


  • Code Metrics: Number of IL Instructions

    - by DigiMortal
    In my previous posting about code metrics I introduced how to measure LoC (Lines of Code) in .NET applications. Now let’s take a step further and look at how to measure compiled code, so we can get some picture of what the compiler produces. In this posting I introduce the code metric called number of IL instructions. NB! The number of IL instructions is not something you can use to measure the productivity of your team. If you want a better idea of the context of this metric and LoC, please read my first posting about LoC. What are IL instructions? When code written in a .NET Framework language is compiled, the compiler produces assemblies that contain byte code. These assemblies are later executed by the Common Language Runtime (CLR), the code execution engine of the .NET Framework. The byte code is called Intermediate Language (IL) – it is a more general language than C# or VB.NET, for example. You can use the ILDasm tool to convert assemblies to IL assembler so you can read them. As IL instructions are the building blocks of all .NET Framework binary code, these instructions are small and highly general – we don’t want a very rich low-level language, because it would execute more slowly than a more general one. Every method or property call in a .NET Framework language corresponds to a set of IL instructions; there is no 1:1 relationship between a line in a high-level language and a line in IL assembler. There are more IL instructions than lines of C# code, for example. How many instructions are there? There is no general answer, because it really depends on your code. Here you can see some metrics from my current community project, which is developed on SharePoint Server 2007. On average I have about 7 IL instructions per line of code. This is not a metric you should use; it is just an illustrative example so you can see the difference between the number of lines and the number of IL instructions. Why should I measure the number of IL instructions? Just take a look at the chart above. The compiler does something you cannot see – it compiles your code to IL. This is not an intuitive process, because you usually cannot say exactly what the end result will be. You know it roughly, but you don’t know it exactly. Therefore we can expect some surprises, and that’s why we should measure the number of IL instructions. For example, you may find a better-looking solution for some method in your source code. It looks nice, it works, and everything seems to be okay. But on a server under load your fix may be way slower than the previous code: although you reduced the number of lines of code, you ended up increasing the number of IL instructions. How to measure the number of IL instructions? My choice is NDepend, because Visual Studio is not able to measure this metric. The steps are easy. Open your NDepend project, or create a new one, and add all your application assemblies to the project (you can also add a Visual Studio solution). Run the project analysis and wait until it is done. You can see the overall stats in the global summary window. This is the same window I used to read the LoC and number-of-IL-instructions metrics for my chart. Meanwhile I made some changes to my code (enabled advanced caching for the events and event registrations module) and then ran the code analysis again to get results for this section of the posting. NDepend is also able to tell you exactly which parts of your code have problematically many IL instructions. 
    The code quality section of the CQL Query Explorer shows you how many problems there are with members in the analyzed code. If you click on the line Methods too big (NbILInstructions) you can see all the problematic class members in the CQL Explorer, shown in the image on the right. In my case I have 10 methods that are too big, and two of them have a horrible number of IL instructions – just take a look at the first two methods in this TOP10. Also note the query box. NDepend has an easy, SQL-like query language for querying code analysis results. You can modify these queries if you like, and you can also define your own if the default set is not enough for you. What is a good result? As you can see from the query window, a member should have at most 200 IL instructions. Of course, as always, the fewer instructions you have, the better your code performs. I don’t mean small differences here but big ones. For example, take a look at the first method in the warnings list. The number of IL instructions it has is huge. And believe me – this method looks awful. Conclusion The number of IL instructions is a useful metric when optimizing your code. For analyzing code at a general level, to find methods that are too long, you can use the LoC metric, because it is more intuitive and you can therefore handle the situation more easily. You can also use NDepend as a code metrics tool, because it has a lot of metrics to offer.
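    As a rough cross-check on the numbers a metrics tool reports, the ILDasm tool mentioned above can dump an assembly to readable IL, and counting the IL_xxxx-labelled lines gives an approximate instruction count. A minimal sketch for the Windows command prompt; the assembly and output file names are placeholders, and the count is only an eyeball figure, not a replacement for NDepend:

        rem Dump the assembly to IL text (ildasm ships with the .NET Framework SDK)
        ildasm /text MyAssembly.dll > MyAssembly.il

        rem Rough instruction count: each instruction line carries an IL_xxxx: label
        findstr "IL_" MyAssembly.il | find /c /v ""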

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index So far in this guide we have an IRM Server up and running; however, I skipped over SSL configuration in the previous article because I wanted to cover it in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing. Contents: Setting up a one-way, self-signed SSL certificate in WebLogic; Setting up an official SSL certificate in Apache 2.x; Configuring Apache to proxy traffic to the IRM server. There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly with the WebLogic Server running the IRM service. However, in a production environment, and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service, and this article will go over the configuration of SSL in WebLogic and also in Apache. As in the past articles, we are going to use two host names in the configuration below: irm.company.com will refer to the public Apache server and irm.company.internal will refer to the internal WebLogic IRM server. Setting up a one-way, self-signed SSL certificate in WebLogic First let's look at creating just a simple self-signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment; however, the downside is that no browsers are going to trust the certificate you create, and you'll need to manually install the certificate onto any machines communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use, and I explain that in the next section. But for now let's go through creating, installing and testing a self-signed certificate. We use a Java library to create the certificates; open a console and run the following commands. Note you should choose your own secure passwords whenever you see password below. [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo" [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA We now have two Java Key Stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain the keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running, log in to the WebLogic Console at http://irm.company.intranet:7001/console and do the following: In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1. 
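    Before changing any WebLogic settings, it can be worth sanity-checking what actually ended up in the two stores just created. A minimal check with the standard keytool utility, assuming the file names and passwords used above; you would expect a PrivateKeyEntry under the alias trustself in the identity store and a trustedCertEntry in the trust store:

        [oracle@irm /] keytool -list -v -keystore MyOwnIdentityStore.jks -storepass password
        [oracle@irm /] keytool -list -v -keystore TrustMyOwnSelf.jks -storepass password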
    If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or you can switch to the CONTROL tab, select IRM_server1 and then select the Shutdown menu and then Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses its own demo identity and trust. We are now going to switch to the self-signed ones we've just created. So select the Change button, switch to Custom Identity and Custom Trust and hit save. Now we have to complete the resulting fields; the settings I've used on my evaluation server are below. Identity - Custom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks Custom Identity Keystore Type: JKS Custom Identity Keystore Passphrase: password Confirm Custom Identity Keystore Passphrase: password Trust - Custom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks Custom Trust Keystore Type: JKS Custom Trust Keystore Passphrase: password Confirm Custom Trust Keystore Passphrase: password Now click on the SSL tab for IRM_server1 and enter the alias and passphrase; in my demo here the details are: Identity - Private Key Alias: trustself Private Key Passphrase: password Confirm Private Key Passphrase: password And hit save. Now let's test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server; a quick reminder on how to do this is... [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin [oracle@irm /] ./startManagedWeblogic IRM_server1 Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.intranet:16101/irm_rights. Note in the example image on the right the port is 7002 because it's a system that has the IRM services installed on the Admin server, which isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser is going to complain that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and, more annoyingly, from the IRM Desktop. If you plan on always using a self-signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being informed this certificate is not trusted. Follow these instructions (they are for Internet Explorer 8 and may vary for your version of IE). Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.intranet:16101/irm_rights. IE will complain about the certificate; click on Continue to this website (not recommended). From the IE Tools menu select Internet Options and from the resulting dialog select Security, then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which matches the server you are accessing, e.g. https://irm.company.intranet/ and select OK. Now refresh the page you were accessing and next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self-signed certificate and the Install Certificate... 
    button should be enabled. Click on this to start the wizard. Click next and you'll be asked where you should install the certificate. Change the option to Place all certificates in the following store. Select browse, choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate; answer yes. You also need to import the root signed certificate into the same location, so once again select the red Certificate Error option and this time, when viewing the certificate, switch to the Certification Path tab and you should see a CertGenCAB certificate. Select this, then click on View Certificate and go through the same process as above to import the certificate into the store. Finally close all instances of the IE browser and re-access the IRM server URL; this time you should not receive any errors. Setting up an official SSL certificate in Apache 2.x At this point we now have an IRM server that you can communicate with over SSL. However, this certificate isn't trusted by any browser because its path of trust doesn't end in a recognized certificate authority (CA). Also, you are communicating directly with the WebLogic Server over a non-standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy this to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What I'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA. The first step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service on Oracle Enterprise Linux 5 update 3 with Apache 2.2.3, which came with the OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign, and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are: [oracle@irm /] cd /usr/bin [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048 Generating RSA private key, 2048 bit long modulus ............................+++ .........+++ e is 65537 (0x10001) Enter pass phrase for irm-apache-server.key: Verifying - Enter pass phrase for irm-apache-server.key: [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr Enter pass phrase for irm-apache-server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
    ----- Country Name (2 letter code) [GB]:US State or Province Name (full name) [Berkshire]:CA Locality Name (eg, city) [Newbury]:San Francisco Organization Name (eg, company) [My Company Ltd]:Oracle Organizational Unit Name (eg, section) []:Security Common Name (eg, your name or your server's hostname) []:irm.company.com Email Address []:[email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []:testing An optional company name []: You must make sure to remember the pass phrase you used in the initial key generation; you will need it later when configuring Apache. In the /usr/bin directory there are now two new files. The irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located at /etc/httpd/conf.d/ssl.conf and I've added the following lines. <VirtualHost irm.company.com> # Setup SSL for irm.company.com ServerName irm.company.com SSLEngine On SSLCertificateFile /oracle/secure/irm.company.com.crt SSLCertificateKeyFile /oracle/secure/irm.company.com.key SSLCertificateChainFile /oracle/secure/gd_bundle.crt </VirtualHost> After restarting Apache (apachectl restart) I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should now see an Apache test page delivered to me over HTTPS. Configuring Apache to proxy traffic to the IRM server The final piece in setting up SSL is to have Apache proxy requests for the IRM server, but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self-signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache which you can download here from Oracle. Download the zip file and extract it onto the server. The extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using 32-bit Apache 2.2 on a 64-bit Oracle Enterprise Linux server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is a 32 or 64 bit build. Mine is 32-bit, so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. Then, from the resulting lib folder, copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plug-in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom. 
    LoadModule weblogic_module modules/mod_wl.so <IfModule mod_weblogic.c>    WebLogicHost irm.company.internal    WebLogicPort 16100    WLLogFile /tmp/wl-proxy.log </IfModule> <Location /irm_rights>    SetHandler weblogic-handler </Location> <Location /irm_desktop>    SetHandler weblogic-handler </Location> <Location /irm_sealing>    SetHandler weblogic-handler </Location> <Location /irm_services>    SetHandler weblogic-handler </Location> Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note above I have included all four of the Locations you might wish to proxy. /irm_rights is the URL to the management website, /irm_desktop is the URL used by the IRM Desktop to communicate, /irm_sealing is for web-services-based document sealing and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, I've mentioned them above. Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted were some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, there are some more files. Copy the following into the /usr/lib/httpd/modules/ folder: libwlssl.so libnnz11.so libclntsh.so.11.1 Now the documentation states that you should only need to do this, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH and point it to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error: [crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0 So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; in that case simply add this path to it. LD_LIBRARY_PATH=/usr/lib/httpd/modules/ export LD_LIBRARY_PATH Now, the WebLogic plug-in uses an Oracle Wallet to store the required certificates. You'll need to copy the self-signed certificate from the IRM server over to the Apache server. Copy MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self-signed certificate from the IRM server. The file orapki was included in the bin folder of the Apache 1.1 plugin zip you extracted. orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only Finally, change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP. <IfModule mod_weblogic.c>    WebLogicHost irm.company.internal    WebLogicPort 16101    SecureProxy ON    WLSSLWallet /oracle/secure/my-wallet    WLLogFile /tmp/wl-proxy.log </IfModule> Then restart Apache once more and you can go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights. 
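    To confirm the two halves fit together, you can list what the wallet actually trusts and then exercise the proxied HTTPS path from the command line. A short sketch using standard tools, assuming the paths and host names used in this article; the -k flag is only needed on a client that does not yet trust the official certificate chain:

        orapki wallet display -wallet /oracle/secure/my-wallet
        curl -vk https://irm.company.com/irm_rights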
At this point you have a fully functional Oracle IRM service, the next step is to create a sealed document and test the entire system.

    Read the article

  • Linux router: ping doesn't route back

    - by El Barto
    I have a Debian box which I'm trying to set up as a router and an Ubuntu box which I'm using as a client. My problem is that when the Ubuntu client tries to ping a server on the Internet, all the packets are lost (though, as you can see below, they seem to go to the server and back without problem). I'm doing this in the Ubuntu Box: # ping -I eth1 my.remote-server.com PING my.remote-server.com (X.X.X.X) from 10.1.1.12 eth1: 56(84) bytes of data. ^C --- my.remote-server.com ping statistics --- 13 packets transmitted, 0 received, 100% packet loss, time 12094ms (I changed the name and IP of the remote server for privacy). From the Debian Router I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 7, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 8, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 8, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 9, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 9, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 10, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 10, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 305, seq 11, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 305, seq 11, length 64 ^C 9 packets captured 9 packets received by filter 0 packets dropped by kernel # tcpdump -i eth2 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 213, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 213, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 214, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 214, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 215, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 215, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 216, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 216, length 64 IP 192.168.1.10 > X.X.X.X: ICMP echo request, id 360, seq 217, length 64 IP X.X.X.X > 192.168.1.10: ICMP echo reply, id 360, seq 217, length 64 ^C 10 packets captured 10 packets received by filter 0 packets dropped by kernel And at the remote server I see this: # tcpdump -i eth0 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 1, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 1, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 2, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 2, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 3, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 3, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 4, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 4, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 5, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 5, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 6, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 6, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 7, 
length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 7, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 8, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 8, length 64 IP Y.Y.Y.Y > X.X.X.X: ICMP echo request, id 360, seq 9, length 64 IP X.X.X.X > Y.Y.Y.Y: ICMP echo reply, id 360, seq 9, length 64 18 packets captured 228 packets received by filter 92 packets dropped by kernel Here "X.X.X.X" is my remote server's IP and "Y.Y.Y.Y" is my local network's public IP. So, what I understand is that the ping packets are coming out of the Ubuntu box (10.1.1.12), to the router (10.1.1.1), from there to the next router (192.168.1.1) and reaching the remote server (X.X.X.X). Then they come back all the way to the Debian router, but they never reach the Ubuntu box back. What am I missing? Here's the Debian router setup: # ifconfig eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:105761 errors:0 dropped:0 overruns:0 frame:0 TX packets:48944 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:40298768 (38.4 MiB) TX bytes:44831595 (42.7 MiB) Interrupt:19 Base address:0x6000 eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:38335992 errors:0 dropped:0 overruns:0 frame:0 TX packets:37097705 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:4260680226 (3.9 GiB) TX bytes:3759806551 (3.5 GiB) Interrupt:27 eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:20 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:3408 errors:0 dropped:0 overruns:0 frame:0 TX packets:3408 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:358445 (350.0 KiB) TX bytes:358445 (350.0 KiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:2767779 errors:0 dropped:0 overruns:0 frame:0 TX packets:1569477 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3609469393 (3.3 GiB) TX bytes:96113978 (91.6 MiB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Ubuntu box (on 10.1.1.12 and 192.168.1.12) Address HWtype HWaddress Flags Mask Iface 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.102 ether NN:NN:NN:NN:NN:NN C eth2 10.1.1.12 ether 00:1e:67:15:2b:f0 C eth1 192.168.1.86 ether NN:NN:NN:NN:NN:NN C 
eth2 192.168.1.2 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.40 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.12 ether 00:1e:67:15:2b:f1 C eth2 192.168.1.77 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.41 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth2 192.168.1.123 ether NN:NN:NN:NN:NN:NN C eth2 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- 10.1.1.0/24 !10.1.1.0/24 MASQUERADE all -- !10.1.1.0/24 10.1.1.0/24 Chain OUTPUT (policy ACCEPT) target prot opt source destination And here's the Ubuntu box: # ifconfig eth0 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f1 inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:28785139 errors:0 dropped:0 overruns:0 frame:0 TX packets:19050735 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:32068182803 (32.0 GB) TX bytes:6061333280 (6.0 GB) Interrupt:16 Memory:b1a00000-b1a20000 eth1 Link encap:Ethernet HWaddr 00:1e:67:15:2b:f0 inet addr:10.1.1.12 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:67ff:fe15:2bf0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:285086 errors:0 dropped:0 overruns:0 frame:0 TX packets:12719 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:30817249 (30.8 MB) TX bytes:2153228 (2.1 MB) Interrupt:16 Memory:b1900000-b1920000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:86048 errors:0 dropped:0 overruns:0 frame:0 TX packets:86048 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:11426538 (11.4 MB) TX bytes:11426538 (11.4 MB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.1.1.1 0.0.0.0 UG 100 0 0 eth1 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.8.0.0 192.168.1.10 255.255.255.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 # arp -n # Note: Here I have changed all the different MACs except the ones corresponding to the Debian box (on 10.1.1.1 and 192.168.1.10) Address HWtype HWaddress Flags Mask Iface 192.168.1.70 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.90 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.97 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.103 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.13 ether 
NN:NN:NN:NN:NN:NN C eth0 192.168.1.120 (incomplete) eth0 192.168.1.111 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.118 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.51 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.102 (incomplete) eth0 192.168.1.64 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.52 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.74 (incomplete) eth0 192.168.1.94 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.121 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.72 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.87 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.91 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.71 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.78 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.83 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.88 (incomplete) eth0 192.168.1.82 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.98 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.100 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.93 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.73 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.11 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.85 (incomplete) eth0 192.168.1.112 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.89 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.65 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.81 ether NN:NN:NN:NN:NN:NN C eth0 10.1.1.1 ether 94:0c:6d:82:0d:98 C eth1 192.168.1.53 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.116 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.61 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.10 ether 6c:f0:49:a4:47:38 C eth0 192.168.1.86 (incomplete) eth0 192.168.1.119 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.66 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth0 192.168.1.1 ether NN:NN:NN:NN:NN:NN C eth1 192.168.1.92 ether NN:NN:NN:NN:NN:NN C eth0 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination Edit: Following Patrick's suggestion, I did a tcpdump con the Ubuntu box and I see this: # tcpdump -i eth1 -qtln icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 1, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 1, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 2, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 2, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 3, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 3, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 4, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 4, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 5, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 5, length 64 IP 10.1.1.12 > X.X.X.X: ICMP echo request, id 21967, seq 6, length 64 IP X.X.X.X > 10.1.1.12: ICMP echo reply, id 21967, seq 6, length 64 ^C 12 packets captured 12 packets received by filter 0 packets dropped by kernel So the question is: if all packets seem to be coming and going, why does ping report 100% packet loss?
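    One setting that is often checked when tcpdump shows the replies arriving on the client but ping still reports total loss is Linux reverse-path filtering, which silently discards packets that come in on an interface other than the one the kernel would use to reach their source (easy to trigger with two default routes, as in the routing table above). This is a hedged guess rather than a confirmed diagnosis; the checks below are standard sysctl calls run on the Ubuntu box:

        # Show the current reverse-path filter settings
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth1.rp_filter

        # Temporarily relax them to see whether the echo replies start being accepted
        sudo sysctl -w net.ipv4.conf.all.rp_filter=0
        sudo sysctl -w net.ipv4.conf.eth1.rp_filter=0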

    Read the article

  • OpenSSL: certificate signature failure error

    - by e-t172
    I'm trying to wget La Banque Postale's website. $ wget https://www.labanquepostale.fr/ --2009-10-08 17:25:03-- https://www.labanquepostale.fr/ Resolving www.labanquepostale.fr... 81.252.54.6 Connecting to www.labanquepostale.fr|81.252.54.6|:443... connected. ERROR: cannot verify www.labanquepostale.fr's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA': certificate signature failure To connect to www.labanquepostale.fr insecurely, use `--no-check-certificate'. Unable to establish SSL connection. I'm using Debian Sid. On another machine which is running Debian Sid with same software versions the command works perfectly. ca-certificates is installed on both machines (I tried removing it and reinstalling it in case a certificate got corrupted somehow, no luck). Opening https://www.labanquepostale.fr/ in Iceweasel on the same machine works perfectly. Additional information: $ openssl s_client -CApath /etc/ssl/certs -connect www.labanquepostale.fr:443 CONNECTED(00000003) depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority verify error:num=7:certificate signature failure verify return:0 --- Certificate chain 0 s:/1.3.6.1.4.1.311.60.2.1.3=FR/2.5.4.15=V1.0, Clause 5.(b)/serialNumber=421100645/C=FR/postalCode=75006/ST=PARIS/L=PARIS/streetAddress=115 RUE DE SEVRES/O=LA BANQUE POSTALE/OU=DISF2/CN=www.labanquepostale.fr i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority 3 s:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- <base64-encoded certificate removed for lisibility> -----END CERTIFICATE----- subject=/1.3.6.1.4.1.311.60.2.1.3=FR/2.5.4.15=V1.0, Clause 5.(b)/serialNumber=421100645 /C=FR/postalCode=75006/ST=PARIS/L=PARIS/streetAddress=115 RUE DE SEVRES/O=LA BANQUE POSTALE/OU=DISF2/CN=www.labanquepostale.fr issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA --- No client certificate CA names sent --- SSL handshake has read 5101 bytes and written 300 bytes --- New, TLSv1/SSLv3, Cipher is RC4-MD5 Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : RC4-MD5 Session-ID: 0009008CB3ADA9A37CE45B464E989C82AD0793D7585858584ACE056700035363 Session-ID-ctx: Master-Key: 1FB7DAD98B6738BEA7A3B8791B9645334F9C760837D95E3403C108058A3A477683AE74D603152F6E4BFEB6ACA48BC2C3 Key-Arg : None Start Time: 1255015783 Timeout : 300 (sec) Verify return code: 7 (certificate signature failure) --- Any idea why I get certificate signature failure? 
    As if this wasn't strange enough, copy-pasting the "server certificate" mentioned in the output and running openssl verify on it returns OK...
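    For what it's worth, openssl verify run against the single pasted certificate does not perform quite the same check as s_client, which validates the whole chain the server presents. A hedged way to reproduce the failure outside wget is to capture everything the server sends and verify the leaf with the intermediates supplied explicitly; chain.pem is just a scratch file name:

        # Capture every certificate the server presents
        openssl s_client -connect www.labanquepostale.fr:443 -showcerts </dev/null 2>/dev/null \
          | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > chain.pem

        # Verify the first (server) certificate against the system CA store,
        # passing the presented intermediates explicitly
        openssl verify -CApath /etc/ssl/certs -untrusted chain.pem chain.pem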

    Read the article

  • Install Previous Version of PHP Package from Debian Testing Using Apt

    - by Metric Scantlings
    Is there a way to install an older Debian testing repository version of a package using apt-get? Specifically, I am looking to install the latest version of PHP 5.2.x on Debian Lenny. The last time I set up an environment, 5.2.12 just happened to be the version in Debian testing. That was perfect, convenient. Now, testing is at 5.3.x which won't work for my purposes, and my attempts at sudo apt-get -t testing install php5=5.2.12* are answered with E: Version '5.2.12*' for 'php5' was not found.
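    Once testing has moved on, the older revision is simply no longer present in the testing repository, so apt has nothing matching 5.2.12* to offer. A hedged sketch of the usual way to investigate; the snapshot date and exact version string below are placeholders to be substituted from real output, not values taken from this question:

        # See exactly which php5 versions the configured repositories still carry
        apt-cache policy php5
        apt-cache madison php5

        # Superseded revisions remain available on snapshot.debian.org; adding a dated
        # snapshot source (placeholder date) makes them installable again, e.g.:
        #   deb http://snapshot.debian.org/archive/debian/YYYYMMDDT000000Z/ testing main
        sudo apt-get update
        sudo apt-get install php5=<exact-version-from-madison>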

    Read the article

  • WebSVN accept untrusted HTTPS certificate

    - by Laurent
    I am using WebSVN with a remote repository. This repository uses the https protocol. After configuring WebSVN I get the following on the WebSVN web page: svn --non-interactive --config-dir /tmp list --xml --username '***' --password '***' 'https://scm.gforge.....' OPTIONS of 'https://scm.gforge.....': Server certificate verification failed: issuer is not trusted I don't know how to tell WebSVN to make the svn command accept and store the certificate. Does anyone know how to do it? UPDATE: It works! In order to keep things well organized I have updated the WebSVN config file to relocate the Subversion config directory to /etc/subversion, which is the default path for Debian: $config->setSvnConfigDir('/etc/subversion'); In /etc/subversion/servers I have created a group and associated the certificate to trust: [groups] my_repo = my.repo.url.to.trust [global] ssl-trust-default-ca = true store-plaintext-passwords = no [my_repo] ssl-authority-files = /etc/apache2/ssl/my.repo.url.to.trust.crt
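    With the config directory relocated, the quickest check is to re-run the exact command WebSVN was issuing, pointed at /etc/subversion, and confirm the listing now comes back without the certificate error (credentials and repository URL exactly as in the message above):

        svn --non-interactive --config-dir /etc/subversion list --xml \
            --username '***' --password '***' 'https://scm.gforge.....'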

    Read the article

  • When Your Boss Doesn't Want you to Succeed

    - by Phil Factor
    You're working hard to get an application finished. You are programming long into the evenings sometimes, and eating sandwiches at your desk instead of taking a lunch break. Then one day you glance up at the IT manager, serene in his mysterious round of meetings, and think 'Does he actually care whether this project succeeds or not?'. The question may seem absurd. Of course the project must succeed. The truth, as always, is often far more complex. Your manager may even be doing his best to make sure you don't succeed. Why? There have always been rich pickings for the unscrupulous in IT.  In extreme cases, where administrators struggle with scarcely-comprehended technical issues, huge sums of money can be lost and gained without any perceptible results. In a very few cases can fraud be proven: most of the time, the intricacies of the 'game' are such that one can do little more than harbor suspicion.  Where does over-enthusiastic salesmanship end and fraud begin? The Business of Information Technology provides rich opportunities for White-collar crime. The poor developer has his, or her, hands full with the task of wrestling with the sheer complexity of building an application. He, or she, has no time for following the complexities of the chicanery of the management that is directing affairs.  Most likely, the developers wouldn't even suspect that their company management had ulterior motives. I'll illustrate what I mean with an entirely fictional, hypothetical, example. The Opportunist and the Aged Charities often do good, unexciting work that is funded by the income from a bequest that dates back maybe hundreds of years.  In our example, it isn't exciting work, for it involves the welfare of elderly people who have fallen on hard times.  Volunteers visit, giving a smile and a chat, and check that they are all right, but are able to spend a little money on their discretion to ameliorate any pressing needs for these old folk.  The money is made to work very hard and the charity averts a great deal of suffering and eases the burden on the state. Daisy hears the garden gate creak as Mrs Rainer comes up the path. She looks forward to her twice-weekly visit from the nice lady from the trust. She always asked ‘is everything all right, Love’. Cheeky but nice. She likes her cheery manner. She seems interested in hearing her memories, and talking about her far-away family. She helps her with those chores in the house that she couldn’t manage and once even paid to fill the back-shed with coke, the other year. Nice, Mrs. Rainer is, she thought as she goes to open the door. The trustees are getting on in years themselves, and worry about the long-term future of the charity: is it relevant to modern society? Is it likely to attract a new generation of workers to take it on. They are instantly attracted by the arrival to the board of a smartly dressed University lecturer with the ear of the present Government. Alain 'Stalin' Jones is earnest, persuasive and energetic. The trustees welcome him to the board and quickly forgive his humorless political-correctness. He talks of 'diversity', 'relevance', 'social change', 'equality' and 'communities', but his eye is on that huge bequest. Alain first came to notice as a Trotskyite union official, who insinuated himself into one of the duller Trades Unions and turned it, through his passionate leadership, into a radical, headline-grabbing organization.  
    Middle age, and the rise of European federal socialism, had brought him quiet prosperity and charcoal suits, an ear in the current government, and a wide influence as a member of various Quangos (government bodies staffed by well-paid unelected courtiers). He was employed as a 'consultant' by several organizations that relied on government contracts. After gaining the confidence of the trustees, and showing a surprising knowledge of mundane processes and the regulatory framework of charities, Alain launches his plan. The trust will expand its work by means of a bold IT initiative that will coordinate the interventions of several 'caring agencies', and provide emergency cover, a special website so anxious relatives can see how their elderly charges are doing, and a vastly more efficient way of coordinating the work of the volunteer carers. It will also provide a special-purpose site that gives 'social networking' facilities, rather like Facebook, to the few elderly folk on the lists with access to the internet. The trustees perk up. Their own experience of the internet is restricted to the occasional scanning of railway timetables, but they can see that it is 'relevant'. In his next report to the other trustees, Alain proudly announces that all this glamorous and exciting technology can be paid for by a grant from the government. He admits darkly that he has influence. True to his word, the government promises a grant of a size that is an order of magnitude greater than any budget that the trustees had ever handled. There was the understandable proviso that the company that would actually do the IT work would have to be one of the government's preferred suppliers and the work would need to be tendered under EU competition rules. The only company that tenders, a multinational IT company with a long track record of government work, quotes ten million pounds for the work. A trustee questions the figure, as it seems enormous for the reasonably trivial internet facilities being built, but the IT salesmen dazzle them with presentations and three-letter acronyms until they subside into quiescent acceptance. After all, they can't stay locked in twentieth-century practices, can they? The work is put in hand with a large project team, in a splendid glass building near west London. The trustees see rooms of programmers working diligently at screens, who talk with enthusiasm about the project. Paul, the project manager, looked through his resource schedule with growing unease. His initial excitement at being given his first major project hadn't lasted. He'd been allocated a lackluster team of developers whose skills didn't seem right, and he was allowed only a couple of contractors to make good the deficit. Strangely, the presentation he'd given to his management, where he'd saved time and resources with an OTS solution to a great deal of the development work, and a sound conservative architecture, hadn't gone down nearly as well as he'd hoped. He almost got the feeling they wanted a more radical and ambitious solution. The project starts slipping its dates. The costs build rapidly. Certain uncomfortable extra charges appear, such as the £600-a-day charge by the 'Business Manager' appointed to act as a point of liaison between the charity and the IT company. When he appeared, his face permanently split by a 'Mr Sincerity' smile, they'd thought he was provided at the cost of the IT company. 
    Derek, the DBA, didn't have to go to the server room quite as much as he did, but it got him away from the poisonous despair of the development group. Wave after wave of events had conspired to delay the project. Why the management had imposed hideous extra bureaucracy to cover ISO 9000 and 9001:2008 accreditation, just as the project was struggling to get back on schedule, was beyond belief. Then the Business Manager kept coming back with endless changes in scope, sorrowfully saying that the Trustees were very insistent, though hopelessly out of touch with the reality of technical challenges. Suddenly, the costs mount to the point of consuming the government grant in its entirety. The project remains tantalizingly just out of reach. Alain Jones gives an emotional rallying speech at the trustees' review meeting, urging them not to lose their nerve. Sadly, the trustees dip into the accumulated capital of the trust, the seed-corn of all their revenues, in order to save the IT project. A few months later it is all over. The IT project is never delivered, even though it had seemed so incredibly close. With the trust's capital all gone, the activities it funded have to be terminated and the trust becomes just a shell. There aren't even the funds to mount a legal challenge against the IT company, even had the trust's solicitor advised such a foolish thing. Alain leaves as suddenly as he had arrived, only to pop up a few months later, bronzed and rested, at another charity. The IT workers who were permanent employees are dispersed to other projects, and the contractors leave for other contracts. Within months the entire project is but a vague memory. One or two developers remain puzzled that their managers had been so obstructive when they should have welcomed progress toward completion of the project, but they put it down to incompetence and testosterone. Few suspected that the managers were actively preventing the project from getting finished. The relationships between the IT consultancy and the government of the day are intricate, and made more complex by Private Finance Initiatives and political patronage. The losers in this case were the taxpayers, the beneficiaries of the trust and, perhaps, the soul of the original benefactor of the trust, whose bid to give his name some immortality had been scuppered by smooth-talking white-collar political apparatchiks. Even now, nobody is certain whether a crime was ever committed. The perfect heist, I guess. Where's the victim? "I hear that Daisy's cottage is up for sale. She's had to go into a care home. She didn't want to at all, but then there is nobody to keep an eye on her since she had that minor stroke a while back. A charity used to help out. The 'social' don't have the funding for community care, evidently. Yes, her old cat was put down. There was a good clearout, and now the house is all scrubbed and cleared ready for sale. The skip was full of old photos and letters, memories. No room in her new 'home'."

    Read the article

  • Ubuntu 12.04 LTS - Apple iPad 3 (iOS 7 non-jailbroken) USB connection problem

    - by Eldian
    I've bought an Apple iPad 3 (iOS 7, non-jailbroken), and I'm trying to connect it to my PC running Ubuntu 12.04 through the original USB cable. After I plugged in the cable, I got this message on the PC: The device 'iPad' is locked. Enter the passcode on the device and click 'Try again'. I slide to unlock on the iPad, that's OK, but I get the same message on the PC again: The device 'iPad' is locked. Enter the passcode on the device and click 'Try again'. iPad: Trust this computer? TRUST/DON'T TRUST. I selected TRUST. PC: Unable to mount - DBus error org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus). OK, I've read some forums, and everywhere the solution was the same: "Install ifuse, libimobiledevice-utils and Rhythmbox." OK, I've installed them through the terminal and used the idevicepair unpair && idevicepair pair command, also in the terminal. Then I got this message: Device is not paired with this host. I don't know what to do. Please help!
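    For reference, the pair-and-mount sequence those packages are normally used for looks like the sketch below (the mount point is arbitrary). Keep the iPad unlocked on its home screen and tap Trust when prompted; note, as an assumption rather than a verified fix, that the libimobiledevice shipped with 12.04 may simply be too old to complete the iOS 7 trust handshake, in which case a newer build is needed:

        # Re-pair with the device unlocked and on its home screen
        idevicepair unpair
        idevicepair pair          # should report SUCCESS once the Trust prompt is accepted

        # Mount the iPad's media partition over USB
        mkdir -p ~/ipad
        ifuse ~/ipad

        # Unmount when finished
        fusermount -u ~/ipad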

    Read the article

  • Set up tunnel to HE.net and now only ipv6.google.com works, but other sites ping fine.

    - by AndrejaKo
    I'm setting up IPv6 using my router which is running OpenWRT, version Backfire 10.03.1-rc4. I made a tunnel using Hurricane Electric's tunnel broker and set it up on the router and I'm using RADVD to hand out IPv6 addresses. My problem is that on computers on the network, I can only access ipv6.google.com using a browser, but other sites seem to be loading forever and won't open in any browser. I can ping and traceroute to them fine, but can't open them with a browser. I can open any site normally with a browser from the router. Stopping firewall service on the router doesn't help, so it's probably not a firewall issue. All AAAA records resolve fine, so it's probably not a DNS issue. Computers on the network get their IPv6 addresses fine, so it's probably not a radvd issue. Similar setup worked fine for SixXs, but I'm having problems with my PoP there, so I decided to move to HE. Here are some traceroutes: From a client computer: Tracing route to ipv6.he.net [2001:470:0:64::2] over a maximum of 30 hops: 1 <1 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 62 ms 63 ms 62 ms andrejako-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 60 ms 60 ms 63 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 63 ms 68 ms 68 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 84 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 146 ms 147 ms 151 ms 10gigabitethernet4-4.core1.nyc4.he.net [2001:470:0:128::1] 7 200 ms 198 ms 202 ms 10gigabitethernet5-3.core1.lax1.he.net [2001:470:0:10e::1] 8 219 ms * 210 ms 10gigabitethernet2-2.core1.fmt2.he.net [2001:470:0:18d::1] 9 221 ms 338 ms 209 ms gige-g4-18.core1.fmt1.he.net [2001:470:0:2d::1] 10 206 ms 210 ms 207 ms ipv6.he.net [2001:470:0:64::2] Trace complete. and another from a cliet computer Tracing route to whatismyipv6.com [2001:4870:a24f:2::90] over a maximum of 30 hops: 1 7 ms 1 ms 1 ms 2001:470:1f0b:de5::1 2 69 ms 70 ms 63 ms AndrejaKo-1.tunnel.tserv6.fra1.ipv6.he.net [2001:470:1f0a:de5::1] 3 57 ms 65 ms 58 ms gige-g2-4.core1.fra1.he.net [2001:470:0:69::1] 4 73 ms 74 ms 75 ms 10gigabitethernet1-4.core1.ams1.he.net [2001:470:0:47::1] 5 71 ms 74 ms 76 ms 10gigabitethernet1-4.core1.lon1.he.net [2001:470:0:3f::1] 6 141 ms 149 ms 148 ms 10gigabitethernet2-3.core1.nyc4.he.net [2001:470:0:3e::1] 7 141 ms 147 ms 143 ms 10gigabitethernet1-2.core1.nyc1.he.net [2001:470:0:37::2] 8 144 ms 145 ms 142 ms 2001:504:1::a500:4323:1 9 226 ms 225 ms 218 ms 2001:4870:a240::2 10 220 ms 224 ms 219 ms 2001:4870:a240::2 11 219 ms 218 ms 220 ms 2001:4870:a24f::2 12 221 ms 222 ms 220 ms www.whatismyipv6.com [2001:4870:a24f:2::90] Trace complete. 
Here's some firewall info on the router: root@OpenWrt:/# iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 syn_flood tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 input_rule all -- 0.0.0.0/0 0.0.0.0/0 input all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) target prot opt source destination zone_wan_MSSFIX all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED forwarding_rule all -- 0.0.0.0/0 0.0.0.0/0 forward all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 output_rule all -- 0.0.0.0/0 0.0.0.0/0 output all -- 0.0.0.0/0 0.0.0.0/0 Chain forward (1 references) target prot opt source destination zone_lan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_forward all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_lan (1 references) target prot opt source destination Chain forwarding_rule (1 references) target prot opt source destination nat_reflection_fwd all -- 0.0.0.0/0 0.0.0.0/0 Chain forwarding_wan (1 references) target prot opt source destination Chain input (1 references) target prot opt source destination zone_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan all -- 0.0.0.0/0 0.0.0.0/0 Chain input_lan (1 references) target prot opt source destination Chain input_rule (1 references) target prot opt source destination Chain input_wan (1 references) target prot opt source destination Chain nat_reflection_fwd (1 references) target prot opt source destination ACCEPT tcp -- 192.168.1.0/24 192.168.1.2 tcp dpt:80 Chain output (1 references) target prot opt source destination zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain output_rule (1 references) target prot opt source destination Chain reject (7 references) target prot opt source destination REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 reject-with tcp-reset REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain syn_flood (1 references) target prot opt source destination RETURN tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x17/0x02 limit: avg 25/sec burst 50 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan (1 references) target prot opt source destination input_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_MSSFIX (0 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_lan_REJECT (1 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_lan_forward (1 references) target prot opt source destination zone_wan_ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 forwarding_lan all -- 0.0.0.0/0 0.0.0.0/0 zone_lan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan (2 references) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ACCEPT 41 -- 0.0.0.0/0 0.0.0.0/0 input_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all 
-- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_ACCEPT (2 references) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_DROP (0 references) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_MSSFIX (1 references) target prot opt source destination TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU TCPMSS tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU Chain zone_wan_REJECT (2 references) target prot opt source destination reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 reject all -- 0.0.0.0/0 0.0.0.0/0 Chain zone_wan_forward (2 references) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 192.168.1.2 forwarding_wan all -- 0.0.0.0/0 0.0.0.0/0 zone_wan_REJECT all -- 0.0.0.0/0 0.0.0.0/0 Here's some routing info: root@OpenWrt:/# ip -f inet6 route 2001:470:1f0a:de5::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 2001:470:1f0b:de5::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev br-lan proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 dev eth0.2 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0 fe80::/64 via :: dev 6in4-henet proto kernel metric 256 mtu 1280 advmss 1220 hoplimit 0 default dev 6in4-henet metric 1024 mtu 1280 advmss 1220 hoplimit 0 I have computers running windows 7 SP1 and openSUSE 11.3 and all of them have same problem. I also made a thread about this on HE's forum, but it seems that people there are out of ideas what to do.
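    Since the iptables listing above covers only the IPv4 tables, one thing worth ruling out on the OpenWrt router is the IPv6 side: kernel forwarding and the ip6tables FORWARD chain. A minimal diagnostic sketch, assuming the stock sysctl and ip6tables tools are available on the router (interface names taken from the output above); the ACCEPT rules are meant as a temporary test only:

      # is the kernel forwarding IPv6 at all? (should print 1)
      sysctl net.ipv6.conf.all.forwarding
      # inspect the IPv6 firewall separately from the IPv4 one shown above
      ip6tables -L FORWARD -n -v
      # temporary test only: explicitly allow LAN <-> tunnel forwarding in both directions
      ip6tables -I FORWARD -i br-lan -o 6in4-henet -j ACCEPT
      ip6tables -I FORWARD -i 6in4-henet -o br-lan -j ACCEPT

    If the packet counters on those ACCEPT rules stay at zero while clients try to reach IPv6 hosts, the problem is more likely routing or prefix advertisement than the firewall.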

    Read the article

  • OpenVPN on Ubuntu 11.10 - unable to redirect default gateway

    - by Vladimir Kadalashvili
    I'm trying to connect to connect to OpenVPN server from my Ubuntu 11.10 machine. I use the following command to do it (under root user): openvpn --config /home/vladimir/client.ovpn Everything seems to be OK, it connects normally without any warnings and errors, but when I try to browse the internet I see that I still use my own IP address, so VPN connection doesn't work. When I run openvpn command, it displays the following message among others: NOTE: unable to redirect default gateway -- Cannot read current default gateway from system I think it's the cause of this problem, but unfortunately I don't know how to fix it. Below is full output of openvpn command: Sat Jun 9 23:51:36 2012 OpenVPN 2.2.0 x86_64-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011 Sat Jun 9 23:51:36 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Sat Jun 9 23:51:36 2012 Control Channel Authentication: tls-auth using INLINE static key file Sat Jun 9 23:51:36 2012 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication Sat Jun 9 23:51:36 2012 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication Sat Jun 9 23:51:36 2012 LZO compression initialized Sat Jun 9 23:51:36 2012 Control Channel MTU parms [ L:1542 D:166 EF:66 EB:0 ET:0 EL:0 ] Sat Jun 9 23:51:36 2012 Socket Buffers: R=[126976->200000] S=[126976->200000] Sat Jun 9 23:51:36 2012 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ] Sat Jun 9 23:51:36 2012 Local Options hash (VER=V4): '504e774e' Sat Jun 9 23:51:36 2012 Expected Remote Options hash (VER=V4): '14168603' Sat Jun 9 23:51:36 2012 UDPv4 link local: [undef] Sat Jun 9 23:51:36 2012 UDPv4 link remote: [AF_INET]94.229.78.130:1194 Sat Jun 9 23:51:37 2012 TLS: Initial packet from [AF_INET]94.229.78.130:1194, sid=13fd921b b42072ab Sat Jun 9 23:51:37 2012 VERIFY OK: depth=1, /CN=OpenVPN_CA Sat Jun 9 23:51:37 2012 VERIFY OK: nsCertType=SERVER Sat Jun 9 23:51:37 2012 VERIFY OK: depth=0, /CN=OpenVPN_Server Sat Jun 9 23:51:38 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Sat Jun 9 23:51:38 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Sat Jun 9 23:51:38 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Sat Jun 9 23:51:38 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Sat Jun 9 23:51:38 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Sat Jun 9 23:51:38 2012 [OpenVPN_Server] Peer Connection Initiated with [AF_INET]94.229.78.130:1194 Sat Jun 9 23:51:40 2012 SENT CONTROL [OpenVPN_Server]: 'PUSH_REQUEST' (status=1) Sat Jun 9 23:51:40 2012 PUSH: Received control message: 'PUSH_REPLY,explicit-exit-notify,topology subnet,route-delay 5 30,dhcp-pre-release,dhcp-renew,dhcp-release,route-metric 101,ping 5,ping-restart 40,redirect-gateway def1,redirect-gateway bypass-dhcp,redirect-gateway autolocal,route-gateway 5.5.0.1,dhcp-option DNS 8.8.8.8,dhcp-option DNS 8.8.4.4,register-dns,comp-lzo yes,ifconfig 5.5.117.43 255.255.0.0' Sat Jun 9 23:51:40 2012 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:4: dhcp-pre-release (2.2.0) Sat Jun 9 23:51:40 2012 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:5: dhcp-renew (2.2.0) Sat Jun 9 23:51:40 2012 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:6: 
dhcp-release (2.2.0) Sat Jun 9 23:51:40 2012 Unrecognized option or missing parameter(s) in [PUSH-OPTIONS]:16: register-dns (2.2.0) Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: timers and/or timeouts modified Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: explicit notify parm(s) modified Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: LZO parms modified Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: --ifconfig/up options modified Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: route options modified Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: route-related options modified Sat Jun 9 23:51:40 2012 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified Sat Jun 9 23:51:40 2012 ROUTE: default_gateway=UNDEF Sat Jun 9 23:51:40 2012 TUN/TAP device tun0 opened Sat Jun 9 23:51:40 2012 TUN/TAP TX queue length set to 100 Sat Jun 9 23:51:40 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0 Sat Jun 9 23:51:40 2012 /sbin/ifconfig tun0 5.5.117.43 netmask 255.255.0.0 mtu 1500 broadcast 5.5.255.255 Sat Jun 9 23:51:45 2012 NOTE: unable to redirect default gateway -- Cannot read current default gateway from system Sat Jun 9 23:51:45 2012 Initialization Sequence Completed Output of route command: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default * 0.0.0.0 U 0 0 0 ppp0 5.5.0.0 * 255.255.0.0 U 0 0 0 tun0 link-local * 255.255.0.0 U 1000 0 0 wlan0 192.168.0.0 * 255.255.255.0 U 0 0 0 wlan0 stream-ts1.net. * 255.255.255.255 UH 0 0 0 ppp0 Output of ifconfig command: eth0 Link encap:Ethernet HWaddr 6c:62:6d:44:0d:12 inet6 addr: fe80::6e62:6dff:fe44:d12/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:54594 errors:0 dropped:0 overruns:0 frame:0 TX packets:59897 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:44922107 (44.9 MB) TX bytes:8839969 (8.8 MB) Interrupt:41 Base address:0x8000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:4561 errors:0 dropped:0 overruns:0 frame:0 TX packets:4561 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:685425 (685.4 KB) TX bytes:685425 (685.4 KB) ppp0 Link encap:Point-to-Point Protocol inet addr:213.206.63.44 P-t-P:213.206.34.4 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1492 Metric:1 RX packets:53577 errors:0 dropped:0 overruns:0 frame:0 TX packets:58892 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX bytes:43667387 (43.6 MB) TX bytes:7504776 (7.5 MB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:5.5.117.43 P-t-P:5.5.117.43 Mask:255.255.0.0 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) wlan0 Link encap:Ethernet HWaddr 00:27:19:f6:b5:cf inet addr:192.168.0.1 Bcast:0.0.0.0 Mask:255.255.255.0 inet6 addr: fe80::227:19ff:fef6:b5cf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:12079 errors:0 dropped:0 overruns:0 frame:0 TX packets:11178 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1483691 (1.4 MB) TX bytes:4307899 (4.3 MB) So my question is - how to make OpenVPN redirect default gateway? Thanks!
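    The route table above shows the default route pointing at ppp0 with no gateway address, which is the usual reason OpenVPN reports that it cannot read the current default gateway. A hedged workaround is to install the equivalent of redirect-gateway def1 by hand once the tunnel is up; the sketch below reuses the VPN endpoint 94.229.78.130 and the pushed route-gateway 5.5.0.1 from the log above (verify both before using):

      # keep the VPN server itself reachable over the existing ppp0 link
      ip route add 94.229.78.130/32 dev ppp0
      # cover all of IPv4 with two /1 routes via the pushed VPN gateway;
      # these win over the /0 default route without deleting it
      ip route add 0.0.0.0/1 via 5.5.0.1 dev tun0
      ip route add 128.0.0.0/1 via 5.5.0.1 dev tun0

    Deleting those three routes again (or restarting pppd) restores the original behaviour.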

    Read the article

  • Setup routing and iptables for new VPN connection to redirect only ports 80 and 443

    - by Steve
    I have a new VPN connection (using openvpn) to allow me to route around some ISP restrictions. Whilst it is working fine, it is taking all the traffic over the vpn. This is causing me issues for downloading (my internet connection is a lot faster than the vpn allows), and for remote access. I run an ssh server, and have a daemon running that allows me to schdule downloads via my phone. I have my existing ethernet connection on eth0, and the new VPN connection on tun0. I believe I need to setup the default route to use my existing eth0 connection on the 192.168.0.0/24 network, and set the default gateway to 192.168.0.1 (my knowledge is shaky as I haven't done this for a number of years). If that is correct, then I'm not exactly sure how to do it!. My current routing table is: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface MSS Window irtt 0.0.0.0 10.51.0.169 0.0.0.0 UG 0 0 0 tun0 0 0 0 10.51.0.1 10.51.0.169 255.255.255.255 UGH 0 0 0 tun0 0 0 0 10.51.0.169 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 0 0 0 85.25.147.49 192.168.0.1 255.255.255.255 UGH 0 0 0 eth0 0 0 0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 0 0 0 192.168.0.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 0 0 0 After fixing the routing, I believe I need to use iptables to configure prerouting or masquerading to force everything for destination port 80 or 443 over tun0. Again, I'm not exactly sure how to do this! Everything I've found on the internet is trying to do something far more complicated, and trying to sort the wood from the trees is proving difficult. Any help would be much appreciated. UPDATE So far, from the various sources, I've cobbled together the following: #!/bin/sh DEV1=eth0 IP1=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 192.` GW1=192.168.0.1 TABLE1=internet TABLE2=vpn DEV2=tun0 IP2=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.` GW2=`route -n | grep 'UG[ \t]' | awk '{print $2}'` ip route flush table $TABLE1 ip route flush table $TABLE2 ip route show table main | grep -Ev ^default | while read ROUTE ; do ip route add table $TABLE1 $ROUTE ip route add table $TABLE2 $ROUTE done ip route add table $TABLE1 $GW1 dev $DEV1 src $IP1 ip route add table $TABLE2 $GW2 dev $DEV2 src $IP2 ip route add table $TABLE1 default via $GW1 ip route add table $TABLE2 default via $GW2 echo "1" > /proc/sys/net/ipv4/ip_forward echo "1" > /proc/sys/net/ipv4/ip_dynaddr ip rule add from $IP1 lookup $TABLE1 ip rule add from $IP2 lookup $TABLE2 ip rule add fwmark 1 lookup $TABLE1 ip rule add fwmark 2 lookup $TABLE2 iptables -t nat -A POSTROUTING -o $DEV1 -j SNAT --to-source $IP1 iptables -t nat -A POSTROUTING -o $DEV2 -j SNAT --to-source $IP2 iptables -t nat -A PREROUTING -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j CONNMARK --restore-mark iptables -t nat -A PREROUTING -i $DEV1 -m state --state NEW -j CONNMARK --set-mark 1 iptables -t nat -A PREROUTING -i $DEV2 -m state --state NEW -j CONNMARK --set-mark 2 iptables -t nat -A PREROUTING -m connmark --mark 1 -j MARK --set-mark 1 iptables -t nat -A PREROUTING -m connmark --mark 2 -j MARK --set-mark 2 iptables -t nat -A PREROUTING -m state --state NEW -m connmark ! --mark 0 -j CONNMARK --save-mark iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 80 -j CONNMARK --set-mark 2 iptables -t mangle -A PREROUTING -i $DEV2 -m state --state NEW -p tcp --dport 443 -j CONNMARK --set-mark 2 route del default route add default gw 192.168.0.1 eth0 Now this seems to be working. 
Except it isn't! Connections to the blocked websites are going through, connections not on ports 80 and 443 are using the non-VPN connection. However port 80 and 443 connections that aren't to the blocked websites are using the non-VPN connection too! As the general goal has been reached, I'm relatively happy, but it would be nice to know why it isn't working exactly right. Any ideas? For reference, I now have 3 routing tables, main, internet, and vpn. The listing of them is as follows... Main: default via 192.168.0.1 dev eth0 10.38.0.1 via 10.38.0.205 dev tun0 10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206 85.removed via 192.168.0.1 dev eth0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1 Internet: default via 192.168.0.1 dev eth0 10.38.0.1 via 10.38.0.205 dev tun0 10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206 85.removed via 192.168.0.1 dev eth0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1 192.168.0.1 dev eth0 scope link src 192.168.0.73 VPN: default via 10.38.0.205 dev tun0 10.38.0.1 via 10.38.0.205 dev tun0 10.38.0.205 dev tun0 proto kernel scope link src 10.38.0.206 85.removed via 192.168.0.1 dev eth0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.73 metric 1
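    A plausible explanation for the port 80/443 traffic that still bypasses the VPN is that the marking above happens in the PREROUTING chains, which locally generated packets never traverse; traffic originating on this box goes through OUTPUT instead. A minimal sketch of marking in mangle OUTPUT, reusing the mark value 2 and the vpn table from the script above (an assumption to test, not a drop-in fix):

      # mark locally generated web traffic so the existing fwmark rule picks it up
      iptables -t mangle -A OUTPUT -p tcp --dport 80  -j MARK --set-mark 2
      iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 2
      # the policy rule below is already in the script; shown here for completeness
      ip rule add fwmark 2 lookup vpn
      ip route flush cache

    The SNAT to the tun0 address that the script already sets up is still needed so replies come back over the tunnel.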

    Read the article

  • Why are my Flex resource bundles not being loaded?

    - by Chris R
    I have an Actionscript module in the flex source folder filterModules, which is one of two additional source folders in my project (the main source folder is reports, but I'm not dealing with anything in there right now). Here's the MXML content that references the resources. ... This array is assigned to the dataProvider field of a ComboBox. It's not bound using the bindings, presumably for reasons that made sense to the original developer, and it'd be nontrivial to change the class to make that happen. I additionally have a resource property file in a folder resources/en_US and I have the source folder resources/{locale} in the project source settings. My additional compiler options are -locale en_US. The resource property file is resources/en_US/labels.properties (All paths are relative to the flash builder project root) and contains (amongst other things) these keys: metric.q3 = Overall Satisfaction metric.q5 = Personnel metric.q9a = Issue Resolution metric.q42 = Visit Duration Sat metric.q34 = Visit Duration I have written some FlexUnit tests that run in my local Flash Player that exercise these resources -- they check that every label is represented in the metrics array, for example, so I know that the resource file is loaded when run locally. However, when I copy the module .swf file over to my server, the combo box to which the array is assigned is empty. I copy the .swf like so, if it matters: rsync -rlDv --inplace -T /tmp ~/projects/flex_reports/bin-debug/rankingFilter.swf HOSTNAME:WEBROOT/flashPath/ Why is this? I am not able to debug the remote module because our surrounding site sets up a lot of context and makes some database calls to determine which module to load. I'm hoping to get some pointers on why resource bundles might not show up. I'd understand it if the array was present with wrong labels, but the array is instead completely empty, which is pretty odd.
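    One thing worth verifying is whether the labels bundle is actually compiled into the module .swf, rather than being supplied by the main application's ResourceManager at runtime (which would explain why the FlexUnit tests pass locally while the deployed module sees an empty array). A hedged sketch of forcing the bundle in at compile time; the module source file name is a guess and the exact options Flash Builder passes may differ:

      mxmlc -locale=en_US \
            -source-path+=resources/{locale} \
            -include-resource-bundles=labels \
            -output=bin-debug/rankingFilter.swf \
            filterModules/RankingFilter.mxml   # hypothetical module source name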

    Read the article

  • Bridging LXC containers to host eth0 so they can have a public IP

    - by Vianney Stroebel
    UPDATE: I found the solution there: http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge#No_traffic_gets_trough_.28except_ARP_and_STP.29 # cd /proc/sys/net/bridge # ls bridge-nf-call-arptables bridge-nf-call-iptables bridge-nf-call-ip6tables bridge-nf-filter-vlan-tagged # for f in bridge-nf-*; do echo 0 $f; done But I'd like to have expert opinions on this: is it safe to disable all bridge-nf-*? What are they here for? END OF UPDATE I need to bridge LXC containers to the physical interface (eth0) of my host, reading numerous tutorials, documents and blog posts on the subject. I need the containers to have their own public IP (which I've previously done KVM/libvirt). After two days of searching and trying, I still can't make it work with LXC containers. The host runs a freshly installed Ubuntu Server Quantal (12.10) with only libvirt (which I'm not using here) and lxc installed. I created the containers with : lxc-create -t ubuntu -n mycontainer So they also run Ubuntu 12.10. Content of /var/lib/lxc/mycontainer/config is: lxc.utsname = mycontainer lxc.mount = /var/lib/lxc/test/fstab lxc.rootfs = /var/lib/lxc/test/rootfs lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 lxc.network.name = eth0 lxc.network.veth.pair = vethmycontainer lxc.network.ipv4 = 179.43.46.233 lxc.network.hwaddr= 02:00:00:86:5b:11 lxc.devttydir = lxc lxc.tty = 4 lxc.pts = 1024 lxc.arch = amd64 lxc.cap.drop = sys_module mac_admin mac_override lxc.pivotdir = lxc_putold # uncomment the next line to run the container unconfined: #lxc.aa_profile = unconfined lxc.cgroup.devices.deny = a # Allow any mknod (but not using the node) lxc.cgroup.devices.allow = c *:* m lxc.cgroup.devices.allow = b *:* m # /dev/null and zero lxc.cgroup.devices.allow = c 1:3 rwm lxc.cgroup.devices.allow = c 1:5 rwm # consoles lxc.cgroup.devices.allow = c 5:1 rwm lxc.cgroup.devices.allow = c 5:0 rwm #lxc.cgroup.devices.allow = c 4:0 rwm #lxc.cgroup.devices.allow = c 4:1 rwm # /dev/{,u}random lxc.cgroup.devices.allow = c 1:9 rwm lxc.cgroup.devices.allow = c 1:8 rwm lxc.cgroup.devices.allow = c 136:* rwm lxc.cgroup.devices.allow = c 5:2 rwm # rtc lxc.cgroup.devices.allow = c 254:0 rwm #fuse lxc.cgroup.devices.allow = c 10:229 rwm #tun lxc.cgroup.devices.allow = c 10:200 rwm #full lxc.cgroup.devices.allow = c 1:7 rwm #hpet lxc.cgroup.devices.allow = c 10:228 rwm #kvm lxc.cgroup.devices.allow = c 10:232 rwm Then I changed my host /etc/network/interfaces to: auto lo iface lo inet loopback auto br0 iface br0 inet static bridge_ports eth0 bridge_fd 0 address 92.281.86.226 netmask 255.255.255.0 network 92.281.86.0 broadcast 92.281.86.255 gateway 92.281.86.254 dns-nameservers 213.186.33.99 dns-search ovh.net When I try command line configuration ("brctl addif", "ifconfig eth0", etc.) my remote host becomes inaccessible and I have to hard reboot it. I changed the content of /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces to: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 179.43.46.233 netmask 255.255.255.255 broadcast 178.33.40.233 gateway 92.281.86.254 It takes several minutes for mycontainer to start (lxc-start -n mycontainer). I tried replacing gateway 92.281.86.254 by : post-up route add 92.281.86.254 dev eth0 post-up route add default gw 92.281.86.254 post-down route del 92.281.86.254 dev eth0 post-down route del default gw 92.281.86.254 My container then starts instantly. 
But whatever configuration I set in /var/lib/lxc/mycontainer/rootfs/etc/network/interfaces, I cannot ping from mycontainer to any IP (including the host's) : ubuntu@mycontainer:~$ ping 92.281.86.226 PING 92.281.86.226 (92.281.86.226) 56(84) bytes of data. ^C --- 92.281.86.226 ping statistics --- 6 packets transmitted, 0 received, 100% packet loss, time 5031ms And my host cannot ping the container: root@host:~# ping 179.43.46.233 PING 179.43.46.233 (179.43.46.233) 56(84) bytes of data. ^C --- 179.43.46.233 ping statistics --- 5 packets transmitted, 0 received, 100% packet loss, time 4000ms My container's ifconfig: ubuntu@mycontainer:~$ ifconfig eth0 Link encap:Ethernet HWaddr 02:00:00:86:5b:11 inet addr:179.43.46.233 Bcast:255.255.255.255 Mask:0.0.0.0 inet6 addr: fe80::ff:fe79:5a31/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:64 errors:0 dropped:6 overruns:0 frame:0 TX packets:54 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4070 (4.0 KB) TX bytes:4168 (4.1 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:32 errors:0 dropped:0 overruns:0 frame:0 TX packets:32 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2496 (2.4 KB) TX bytes:2496 (2.4 KB) My host's ifconfig: root@host:~# ifconfig br0 Link encap:Ethernet HWaddr 4c:72:b9:43:65:2b inet addr:92.281.86.226 Bcast:91.121.67.255 Mask:255.255.255.0 inet6 addr: fe80::4e72:b9ff:fe43:652b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1453 errors:0 dropped:18 overruns:0 frame:0 TX packets:1630 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:145125 (145.1 KB) TX bytes:299943 (299.9 KB) eth0 Link encap:Ethernet HWaddr 4c:72:b9:43:65:2b UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3178 errors:0 dropped:0 overruns:0 frame:0 TX packets:1637 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:298263 (298.2 KB) TX bytes:309167 (309.1 KB) Interrupt:20 Memory:fe500000-fe520000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:6 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:300 (300.0 B) TX bytes:300 (300.0 B) vethtest Link encap:Ethernet HWaddr fe:0d:7f:3e:70:88 inet6 addr: fe80::fc0d:7fff:fe3e:7088/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:54 errors:0 dropped:0 overruns:0 frame:0 TX packets:67 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4168 (4.1 KB) TX bytes:4250 (4.2 KB) virbr0 Link encap:Ethernet HWaddr de:49:c5:66:cf:84 inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) I have disabled lxcbr0 (USE_LXC_BRIDGE="false" in /etc/default/lxc). root@host:~# brctl show bridge name bridge id STP enabled interfaces br0 8000.4c72b943652b no eth0 vethtest I have configured the IP 179.43.46.233 to point to 02:00:00:86:5b:11 in my hosting provider (OVH) config panel. (The IPs in this post are not the real ones.) Thanks for reading this long question! :-) Vianney
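    On the question in the update: the bridge-nf-call-* switches decide whether bridged frames are passed through iptables/ip6tables/arptables on the host. If nothing on the host firewalls bridged traffic, setting them to 0 is a common choice and often fixes exactly this "ARP works but nothing else" symptom. A small sketch of doing it (note the quoted loop is missing the > redirection) and making it persist, assuming a sysctl.conf-based setup:

      # one-off, with the redirection the quoted loop left out
      cd /proc/sys/net/bridge
      for f in bridge-nf-call-arptables bridge-nf-call-iptables bridge-nf-call-ip6tables; do
          echo 0 > "$f"
      done
      # persistent across reboots (appends three lines to /etc/sysctl.conf)
      printf '%s\n' \
          'net.bridge.bridge-nf-call-arptables = 0' \
          'net.bridge.bridge-nf-call-iptables = 0' \
          'net.bridge.bridge-nf-call-ip6tables = 0' >> /etc/sysctl.conf
      sysctl -p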

    Read the article

  • Regular expression for a string containing one word but not another

    - by Chris Stahl
    I'm setting up some goals in Google Analytics and could use a little regex help. Let's say I have 4 URLs: http://www.anydotcom.com/test/search.cfm?metric=blah&selector=size&value=1 http://www.anydotcom.com/test/search.cfm?metric=blah2&selector=style&value=1 http://www.anydotcom.com/test/search.cfm?metric=blah3&selector=size&value=1 http://www.anydotcom.com/test/details.cfm?metric=blah&selector=size&value=1 I want to create an expression that will identify any URL that contains the string selector=size but does NOT contain details.cfm. I know that to find a string that does NOT contain another string I can use this expression: (^((?!details.cfm).)*$) But I'm not sure how to add in the selector=size portion. Any help would be greatly appreciated!
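    For what it's worth, the two conditions can usually be combined by keeping the negative lookahead anchored at the start and then requiring selector=size afterwards. This assumes the regex flavour in use supports lookahead (Google Analytics' filter engine may not); the grep -P call below, against a hypothetical urls.txt, is just a local way to sanity-check the pattern:

      # match URLs that contain selector=size but not details.cfm
      grep -P '^(?!.*details\.cfm).*selector=size' urls.txt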

    Read the article

  • Cannot ping host: stale ARP cache?

    - by gkchicago
    I am having a strange issue with a Debian (Lenny/Linux 2.6.26-2-amd64) that has been driving me nuts. On some machines within my network I can ping the host in question just fine, other times I have to manually hard-code the ARP ethernet address for the IP in order to establish connectivity. I've finally worked it down to somehow involving ARP. I just found how to fix it in a way that made it work but I'm looking for help explaining this issue and also I don't trust my fix to be permanent.. My thought process has been the following but I just can't make any sense out of it: Could it be the card? (Intel 82555 rev 4) Could it be because there are two network cards? (Default route is eth0) Could it be because of the network aliases? Lenny? AMD x86_64? Argh.. Thank you for any insight you might have // Ping doesn't go thru [gordon@ubuntu ~]$ ping 192.168.135.101 PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data. --- 192.168.135.101 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3014ms // Here's the ARP Table, sometimes the .151 address is good, sometimes it // also matches the Gateways MAC like .101 is doing right here. [gordon@ubuntu ~]$ cat /proc/net/arp IP address HW type Flags HW address Mask Device 192.168.135.15 0x1 0x2 00:0B:DB:2B:24:89 * eth0 192.168.135.151 0x1 0x2 00:0B:6A:3A:30:A6 * eth0 192.168.135.1 0x1 0x2 00:1A:A2:2D:2A:04 * eth0 192.168.135.101 0x1 0x2 00:1A:A2:2D:2A:04 * eth0 // Drop the bad arp table listing and set it manually based on /sbin/ifconfig [gordon@ubuntu ~]$ sudo arp -d 192.168.135.101 [gordon@ubuntu ~]$ sudo arp -s 192.168.135.101 00:0B:6A:3A:30:A6 // Ping starts going thru..?!? [gordon@ubuntu ~]$ ping 192.168.135.101 PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data. 64 bytes from 192.168.135.101: icmp_seq=1 ttl=64 time=15.8 ms 64 bytes from 192.168.135.101: icmp_seq=2 ttl=64 time=15.9 ms 64 bytes from 192.168.135.101: icmp_seq=3 ttl=64 time=16.0 ms 64 bytes from 192.168.135.101: icmp_seq=4 ttl=64 time=15.9 ms --- 192.168.135.101 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3012ms rtt min/avg/max/mdev = 15.836/15.943/16.064/0.121 ms The following is my network config on this. 
gordon@db01:~$ /sbin/ifconfig eth0 Link encap:Ethernet HWaddr 00:0b:6a:3a:30:a6 inet addr:192.168.135.151 Bcast:192.168.135.255 Mask:255.255.255.0 inet6 addr: fe80::20b:6aff:fe3a:30a6/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:15476725 errors:0 dropped:0 overruns:0 frame:0 TX packets:10030036 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:18565307359 (17.2 GiB) TX bytes:3412098075 (3.1 GiB) eth0:0 Link encap:Ethernet HWaddr 00:0b:6a:3a:30:a6 inet addr:192.168.135.150 Bcast:192.168.135.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 eth0:1 Link encap:Ethernet HWaddr 00:0b:6a:3a:30:a6 inet addr:192.168.135.101 Bcast:192.168.135.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 eth1 Link encap:Ethernet HWaddr 00:e0:81:2a:6e:d0 inet addr:10.10.62.1 Bcast:10.10.62.255 Mask:255.255.255.0 inet6 addr: fe80::2e0:81ff:fe2a:6ed0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:10233315 errors:0 dropped:0 overruns:0 frame:0 TX packets:19400286 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1112500658 (1.0 GiB) TX bytes:27952809020 (26.0 GiB) Interrupt:24 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:387 errors:0 dropped:0 overruns:0 frame:0 TX packets:387 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:41314 (40.3 KiB) TX bytes:41314 (40.3 KiB) gordon@db01:~$ sudo mii-tool -v eth0 eth0: negotiated 100baseTx-FD, link ok product info: Intel 82555 rev 4 basic mode: autonegotiation enabled basic status: autonegotiation complete, link ok capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD advertising: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD gordon@db01:~$ sudo route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface localnet * 255.255.255.0 U 0 0 0 eth0 10.10.62.0 * 255.255.255.0 U 0 0 0 eth1 default 192.168.135.1 0.0.0.0 UG 0 0 0 eth0
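    One detail that stands out in the ARP table above is that 192.168.135.101 resolves to the gateway's MAC (00:1A:A2:2D:2A:04) instead of db01's 00:0B:6A:3A:30:A6, so something other than db01 appears to be answering ARP for that alias; a gateway doing proxy ARP is a common culprit. A small diagnostic sketch, assuming iputils arping is installed on the client:

      # run from the Ubuntu client: every MAC that answers for the alias gets listed
      arping -I eth0 192.168.135.101
      # if the gateway is a Linux box you control, check whether it proxy-ARPs
      sysctl net.ipv4.conf.all.proxy_arp

    Seeing replies from two different MACs would confirm why the cache entry flips, and would point the fix at the gateway rather than at db01.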

    Read the article

  • Unable to sniff traffic despite network interface being in monitor or promiscuous mode

    - by user65126
    I'm trying to sniff out my network's wireless traffic but am having issues. I'm able to put the card in monitor mode, but am unable to see any traffic except broadcasts, multicasts and probe/beacon frames. I have two network interfaces on this laptop. One is connected normally to 'linksys' and the other is in monitor mode. The interface in monitor mode is on the right channel. I'm not associated with the access point because, as I understand, I don't need to if using monitor mode (vs promiscuous). When I try to ping the router ip, I'm not seeing that traffic show up in wireshark. Here's my ifconfig settings: daniel@seasonBlack:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:1f:29:9e:b2:89 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:112 errors:0 dropped:0 overruns:0 frame:0 TX packets:112 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:8518 (8.5 KB) TX bytes:8518 (8.5 KB) wlan0 Link encap:Ethernet HWaddr 00:21:00:34:f7:f4 inet addr:192.168.1.116 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::221:ff:fe34:f7f4/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:9758 errors:0 dropped:0 overruns:0 frame:0 TX packets:4869 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3291516 (3.2 MB) TX bytes:677386 (677.3 KB) wlan1 Link encap:UNSPEC HWaddr 00-02-72-7B-92-53-33-34-00-00-00-00-00-00-00-00 UP BROADCAST NOTRAILERS PROMISC ALLMULTI MTU:1500 Metric:1 RX packets:112754 errors:0 dropped:0 overruns:0 frame:0 TX packets:101 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:18569124 (18.5 MB) TX bytes:12874 (12.8 KB) wmaster0 Link encap:UNSPEC HWaddr 00-21-00-34-F7-F4-00-00-00-00-00-00-00-00-00-00 UP RUNNING MTU:0 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) wmaster1 Link encap:UNSPEC HWaddr 00-02-72-7B-92-53-00-00-00-00-00-00-00-00-00-00 UP RUNNING MTU:0 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Here's my iwconfig settings: daniel@seasonBlack:~$ iwconfig lo no wireless extensions. eth0 no wireless extensions. wmaster0 no wireless extensions. wlan0 IEEE 802.11bg ESSID:"linksys" Mode:Managed Frequency:2.437 GHz Access Point: 00:18:F8:D6:17:34 Bit Rate=54 Mb/s Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=68/70 Signal level=-42 dBm Noise level=-69 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 wmaster1 no wireless extensions. wlan1 IEEE 802.11bg Mode:Monitor Frequency:2.437 GHz Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality:0 Signal level:0 Noise level:0 Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 Here's how I know I'm on the right channel: daniel@seasonBlack:~$ iwlist channel lo no frequency information. eth0 no frequency information. wmaster0 no frequency information. 
wlan0 11 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Current Frequency=2.437 GHz (Channel 6) wmaster1 no frequency information. wlan1 11 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Current Frequency=2.437 GHz (Channel 6)
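    Two things commonly explain the "only beacons and broadcasts" symptom: the capture not being locked to the AP's channel at the moment data actually flows, and the data frames being WPA-encrypted, in which case they do show up but only as encrypted 802.11 data unless the capture also contains the 4-way EAPOL handshake for Wireshark to decrypt. A generic capture sketch (interface and channel taken from the output above; this does not address possible driver quirks of the old wmaster-style stack):

      # lock the monitor interface to the AP's channel, then capture raw 802.11 frames
      iwconfig wlan1 channel 6
      tcpdump -i wlan1 -n -e -s 0 -w wlan1-capture.pcap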

    Read the article

  • Bonding: works only for download

    - by Crazy_Bash
    I would like to install bonding with 4 links with mode 4. but only "download/receiving" works with bondig. for transmitting the system chooses one link. ifconfig bond0 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 inet addr:ip Bcast:ip Mask:255.255.255.248 inet6 addr: fe80::92e2:baff:fe0f:76b4/64 Scope:Link UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1 RX packets:239187413 errors:0 dropped:10944 overruns:0 frame:0 TX packets:536902370 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14688536197 (13.6 GiB) TX bytes:799521192901 (744.6 GiB) eth2 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:54969488 errors:0 dropped:0 overruns:0 frame:0 TX packets:2537 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3374778591 (3.1 GiB) TX bytes:314290 (306.9 KiB) eth3 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:64935805 errors:0 dropped:1 overruns:0 frame:0 TX packets:2532 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3993499746 (3.7 GiB) TX bytes:313968 (306.6 KiB) eth4 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:57352105 errors:0 dropped:2 overruns:0 frame:0 TX packets:536894778 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3524236530 (3.2 GiB) TX bytes:799520265627 (744.6 GiB) eth5 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:61930025 errors:0 dropped:3 overruns:0 frame:0 TX packets:2540 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3796021948 (3.5 GiB) TX bytes:314274 (306.9 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:62 errors:0 dropped:0 overruns:0 frame:0 TX packets:62 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:5320 (5.1 KiB) TX bytes:5320 (5.1 KiB) those are my configs: DEVICE="eth2" BOOTPROTO="none" MASTER=bond0 SLAVE=yes USERCTL=no NM_CONTROLLED="no" ONBOOT="yes" DEVICE="eth3" BOOTPROTO="none" MASTER=bond0 SLAVE=yes USERCTL=no NM_CONTROLLED="no" ONBOOT="yes" DEVICE="eth4" BOOTPROTO="none" MASTER=bond0 SLAVE=yes USERCTL=no NM_CONTROLLED="no" ONBOOT="yes" DEVICE="eth5" BOOTPROTO="none" MASTER=bond0 SLAVE=yes USERCTL=no NM_CONTROLLED="no" ONBOOT="yes" DEVICE=bond0 IPADDR=<ip> BROADCAST=<ip> NETWORK=<ip> GATEWAY=<ip> NETMASK=<ip> USERCTL=no BOOTPROTO=none ONBOOT=yes NM_CONTROLLED=no cat /proc/net/bonding/bond0 Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) Bonding Mode: IEEE 802.3ad Dynamic link aggregation Transmit Hash Policy: layer2 (0) MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0 802.3ad info LACP rate: slow Aggregator selection policy (ad_select): stable Active Aggregator Info: Aggregator ID: 1 Number of ports: 4 Actor Key: 17 Partner Key: 11 Partner Mac Address: 00:24:51:12:63:00 Slave Interface: eth2 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 90:e2:ba:0f:76:b4 Aggregator ID: 1 Slave queue ID: 0 Slave Interface: eth3 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 90:e2:ba:0f:76:b5 Aggregator ID: 1 Slave queue ID: 0 Slave Interface: eth4 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent 
HW addr: 90:e2:ba:0f:76:b6 Aggregator ID: 1 Slave queue ID: 0 Slave Interface: eth5 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 90:e2:ba:0f:76:b7 Aggregator ID: 1 Slave queue ID: 0 /etc/modprobe.d/bonding.conf alias bond0 bonding options bond0 mode=4 miimon=100 updelay=200 #downdelay=200 xmit_hash_policy=layer3+4 lacp_rate=1 Linux: Linux 3.0.0+ #1 SMP Fri Oct 26 07:55:47 EEST 2012 x86_64 x86_64 x86_64 GNU/Linux what i've tried: downdelay=200 xmit_hash_policy=layer3+4 lacp_rate=1 mode 6
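    One detail in the /proc output above is that the bond reports "Transmit Hash Policy: layer2 (0)" even though the modprobe options request layer3+4; with layer2 hashing and a single peer MAC, every flow hashes to the same slave, which matches the one-link transmit behaviour. If the module is loaded as "bonding" directly, options attached to the bond0 alias may never apply. A hedged sketch of setting the policy at runtime and verifying it (depending on the kernel, the bond may need to be taken down first):

      # change the hash policy via sysfs and confirm it took effect
      echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
      grep 'Hash Policy' /proc/net/bonding/bond0

    Even with layer3+4, any single TCP connection still uses one slave; only multiple concurrent flows spread across the links.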

    Read the article

  • Can't connect to samba using openVPN

    - by Arthur
    I'm fairly new to using VPN. For a home project I'm running a OpenVPN server. This server runs within a network 192.168.2.0 and subnet 255.255.255.0 I can connect to this net work using the ip range 5.5.0.0 I guess the subnet is 255.255.255.192, but I'm not really sure about that. When connecting to my VPN network I can access the server via 5.5.0.1 and I can see the samba shares created on that machine. However I'm not allowed to connect to the samba share. When I look at the samba log of the computer which tries to connect I can see these messages: lib/access.c:338(allow_access) Denied connection from 5.5.0.132 (5.5.0.132) These are the share definition in /etc/samba/smb.conf interfaces = 192.168.2.0/32 5.5.0.0/24 security = user # wins-support = no # wins-server = w.x.y.z. // A LOT OF MORE SETTINGS AND COMMENTS hosts allow = 127.0.0.1 192.168.2.0/24 5.5.0.132/24 hosts deny = 0.0.0.0/0 browseable = yes path = [path to share] directory mask = 0755 force create mode = 0755 valid users = [a valid user, which i use to login with] writeable = yes force group = [the group i force to write with] force user = [the user i force to write with] This is the output of the ifconfig command as0t0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:5.5.0.1 P-t-P:5.5.0.1 Mask:255.255.255.192 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:200 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) as0t1 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:5.5.0.65 P-t-P:5.5.0.65 Mask:255.255.255.192 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:200 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) as0t2 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:5.5.0.129 P-t-P:5.5.0.129 Mask:255.255.255.192 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:xxxx errors:0 dropped:0 overruns:0 frame:0 TX packets:xxxx errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:200 RX bytes:xxxx (xxxx MB) TX bytes:12403514 (xxxx MB) as0t3 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:5.5.0.193 P-t-P:5.5.0.193 Mask:255.255.255.192 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:7041 errors:0 dropped:0 overruns:0 frame:0 TX packets:9797 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:200 RX bytes:xxxx (xxxx KB) TX bytes:xxxx (xxxx MB) eth1 Link encap:Ethernet HWaddr 00:0e:2e:61:78:21 inet addr:192.168.2.100 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: xxxx:xxxx:xxxx:xxxx:7821/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:xxxx errors:0 dropped:0 overruns:0 frame:0 TX packets:xxxx errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:xxxx (xxxx MB) TX bytes:xxxx (xxxx MB) Interrupt:16 Base address:0x6000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:xxxx errors:0 dropped:0 overruns:0 frame:0 TX packets:xxxx errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:xxxx (xxxx MB) TX bytes:xxxx (xxxx MB) Can anyone tell me what is going wrong? My server is running Ubuntu 12.04 LTS
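    The "Denied connection from 5.5.0.132" message points at the hosts allow line: Samba's hosts allow does not combine a host address with a prefix length the way 5.5.0.132/24 suggests, and interfaces = 192.168.2.0/32 looks off as well. A hedged smb.conf sketch using the explicit network/netmask form, followed by a check of what Samba would actually decide (the hostname argument to testparm is arbitrary):

      # in /etc/samba/smb.conf
      interfaces = 192.168.2.0/24 5.5.0.0/24
      hosts allow = 127.0.0.1 192.168.2.0/255.255.255.0 5.5.0.0/255.255.0.0

      # verify the parsed config and whether the VPN client would be allowed
      testparm -s
      testparm /etc/samba/smb.conf vpnclient 5.5.0.132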

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace bootloader (bootx64.efi, bootmgfw.efi). Once replaced can modify kernel. Trivial to replace bootloader. Today many legacy bootkits—UEFI replaces them most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as used for phones0, but you can't do that for PCs—PCs use writable memory with signatures DXE core verifies the UEFI boat loader(s) OS Loader (winload.efi, winresume.efi) verifies the OS kernel A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
Allow only UEFI firmware signed updates protect UEFI firmware from direct modification in flash memory protect FW update components program SPI controller securely protect secure boot policy settings in nvram protect runtime api disable compatibility support module which allows unsigned legacy Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If PK is not found, FW enters setup mode wich secure boot turned off. Can also exploit TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates protect UEFI fw in ROM protect EFI variable store in ROM Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most and looks for the cookie one byte-at-a-time. SSL Compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Can use compression to guess contents one byte at-a-time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at-a-time instead of 1 character). Expensive rollback to last known conflict check compression ratio can brute-force first 3 "bootstrap" characters, if needed (expensive) block ciphers hide exact plain text length. Solution is to align response in advance to block size Mitigations length: use variable padding secrets: dynamic CSRF tokens per request secret: change over time separate secret to input-less servlets Future work eiter understand DEFLATE/GZIP HTTPS extensions Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files. 
Apache is not well suited at handling a large number of connections, but one can put something in front of it Can use Apache alternatives, such as nginx How to identify malicious hosts short, sudden web requests user-agent is obvious (curl, python) same url requested repeatedly no web page referer (not normal) hidden links. hide a link and see if a bot gets it restricted access if not your geo IP (unless the website is global) missing common headers in request regular timing first seen IP at beginning of attack count requests per hosts (usually a very large number) Use of captcha can mitigate attacks, but you'll lose a lot of genuine users. Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper close connection). Aggregator collects requests and controls your web proxies. Need NTP on the front end web servers for clean data for use by bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture Bouncer designed to make it easier to detect and defend DoS—not a complete cure Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, allows new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of Mass Malware: potent, resilient, relatively low cost Technical characteristics: multiple OS, multipe payloads, multiple scenarios, multiple languages, obfuscation Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass customized exploits, to avoid detection There's plenty of evicence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to: support all versions of reader support all versions of windows support all versions of flash support all browsers write large complex, difficult to main code (8750 lines of JavaScript for example Exploits have "loose coupling" of multipe versions of software (adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. Also allows exploits of more obscure software/OS/browsers and obscure versions. Gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. 
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and Javascript. Conclusion: The coming trend is that mass-malware with mass zero-day attacks will result in mass customization of attacks. x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell Richard Wartell The attack vector we are addressing here is: First some malware causes a buffer overflow. The malware has no program access, but input access and buffer overflow code onto stack Later the stack became non-executable. The workaround malware used was to write a bogus return address to the stack jumping to malware Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic. The workaround malware used was to jump t existing code segments in the program that can be used in bad ways "RoP" is Return-oriented Programming attacks. RoP attacks use your own code and write return address on stack to (existing) expoitable code found in program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transofrming Instruction Relocation). STIR disassembles binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to new code. the original code sections in the file is marked non-executible. STIR has better entropy than ASLR in location of code. Makes brute force attacks much harder. STIR runs on MS Windows (PEM) and Linux (ELF). It eliminated 99.96% or more "gadgets" (i.e., moved the address). Overhead usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk. Clowntown Express: interesting bugs and running a bug bounty program Collin Greene Collin Greene, Facebook Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there's lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third-party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended getting 20+ good bugs in first 24 hours and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. Bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The Bounty program started July 2011 and paid out $1.5 million to date. 
14% of the submissions have been high priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest sumitter was 13. Some submitters have been hired. Bug bounties also allows to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has release cycle or is not on Internet. Active Fingerprinting of Encrypted VPNs Anna Shubina Anna Shubina, Dartmouth Institute for Security, Technology, and Society (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (example, DES-CBC versus AES-CBC) One can tell a lot about VPNs just from ping roundtrips (such as what router is used) Delayed packets are not informative about a network, especially if far away from the network More needed to explore about how TCP works in real life with respect to timing Making Attacks Go Backwards Fuzzynop FuzzyNop, Mandiant This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who are making these malware threats? It's not a single person or group—they have diverse skill levels. There's a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories creation of unnamed scheduled tasks obvious names of files and syscalls ("ClearEventLog") uncleared event logs. Clearing event log in itself, and time of clearing, is a red flag and good first clue to look for on a suspect system Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built in a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or time stamp printf strings. Look for these in files. Presence is not proof they are used. Absence is not proof they are not used. Java exploits. Can parse jar file with idxparser.py and decomile Java file. Java typially used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also malware typically needs command and control. Application of Artificial Intelligence in Ad-Hoc Static Code Analysis John Ashaman John Ashaman, Security Innovation Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. Also tried using grep, but tis fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointer. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal to find a path through the AST (a maze) from source to sink. 
Application of Artificial Intelligence in Ad-Hoc Static Code Analysis (John Ashaman, Security Innovation). Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but that fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph and then analyze the graph. The problem, however, is that making a call graph is really hard; one difficulty is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm looks at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums - The Gorry Details (Eric (XlogicX) Davisson). Sometimes when emailing or posting TCP/IP packets to analyze a problem, you may want to mask the IP address. But to do this correctly you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover the same data, so one can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If we get hundreds of possible results (16 possibilities for each unknown masked nibble), one can do other things to narrow them down, such as looking at the packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo data and the university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.
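A minimal sketch of why a masked IP can leak through an unchanged checksum (my own illustration; the header bytes are made up). It computes the standard Internet checksum over an IPv4 header, masks one octet of the source address, and then brute-forces which octet values reproduce the published checksum.

    import struct

    def inet_checksum(data: bytes) -> int:
        """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    # Made-up 20-byte IPv4 header (checksum field zeroed), source IP 192.168.1.100.
    hdr = bytes.fromhex("4500003c1c4640004006" "0000" "c0a80164" "5db8d822")
    observed = inet_checksum(hdr)   # the checksum a sender would publish

    masked = bytearray(hdr)
    masked[15] = 0                  # "mask" the last octet of the source address

    candidates = [b for b in range(256)
                  if inet_checksum(bytes(masked[:15]) + bytes([b]) + bytes(masked[16:])) == observed]
    print("masked octet could be:", candidates)   # recovers 100 (0x64)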
Adventures with weird machines thirty years after "Reflections on Trusting Trust" (Sergey Bratus, Dartmouth College, with Julian Bangert and Rebecca Shapiro, not present). "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author; can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not? First, input is not well-defined/recognized: the code's assumptions about "checked" input will be violated (a bug/vulnerability); for example, HTML is recursive, but regex checking is not. Second, input can be well-formed but so complex there's no telling what it does; for example, ELF file parsing is complex and has multiple ways of parsing. Third, input is seen differently by different pieces of the program or toolchain. Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler is either a "recognizer" for the inputs as a well-defined language (see langsec.org) or a "virtual machine" for inputs to drive into pwn-age. ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps, without planting bugs: compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The underlying problem is that you can't really automatically analyze code (it's the "halting problem", undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Likewise, DWARF exception-handling data: .eh_frame + glibc == a Turing machine. The x86 MMU (IDT, GDT, TSS) can be used the same way: address translation was used to create a Turing machine, where the page handler reads and writes memory on page faults and the page table serves as the Turing machine's byte code. There is an example on GitHub using this TM that will fly a glider across the screen.

Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, and they are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs, and the second PHDR can completely transform the executable; this is described in a paper in the first issue of the International Journal of PoC.

Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by far not all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent". Further information: see langsec.org, and the USENIX WOOT 2013 (Workshop on Offensive Technologies) proceedings for "weird machines" papers and videos.
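To make the abstract point "sufficiently complex input data is VM byte code" a little more concrete, here is a toy sketch of my own (not the ELF-relocation or .eh_frame machines from the talk): the input blob is pure data, but the handler that consumes it is effectively a small virtual machine whose state the data drives.

    # Toy "weird machine": the blob is data, but the input handler executes it.
    OPS = {
        0x01: lambda st, arg: st.__setitem__("acc", st["acc"] + arg),  # ADD imm
        0x02: lambda st, arg: st.__setitem__("acc", st["acc"] * arg),  # MUL imm
        0x03: lambda st, arg: print("acc =", st["acc"]),               # PRINT
    }

    def handle_input(blob: bytes):
        state = {"acc": 0}
        it = iter(blob)
        for op in it:
            arg = next(it, 0)
            OPS.get(op, lambda st, a: None)(state, arg)  # unknown bytes are ignored
        return state

    # "Just data": add 2, multiply by 10, print -> acc = 20
    handle_input(bytes([0x01, 0x02, 0x02, 0x0A, 0x03, 0x00]))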

    Read the article

  • The Incremental Architect´s Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx

The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That´s what automated acceptance tests are for. That´s what sprint reviews in Scrum are for. It´s no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it´s lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also Evolvability is hardly tangible for the customer.

Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it´s needed like any other requirement. Or even more. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot simply watch it more closely. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for a long time. That´s why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency be constantly high. Still, a kitchen´s level of cleanliness is easier to measure than software Evolvability. That´s why important practices like reviews, pair programming, or TDD are not enough, I guess.

What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum´s sprints of 4, 2, even 1 week are too long. Kanban´s flow of user stories across the board is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance-checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment:

Clear expectations
Absolute estimates (“This will take X days to complete.”) are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That´s because we understand better what the requirements really are and what the solution should look like. But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week and longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That´s why I believe we can give rough absolute estimates on 3 levels: Noon, Tonight, Tomorrow. Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take you to implement a user story or bug fix, you can say, “It´ll be fixed by noon.”, or you can say, “I can manage to implement it until tonight before I leave.”, or you can say, “You´ll get it by tomorrow night at latest.” Yes, I believe all else would be naive. If you´re not confident to get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue. So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don´t do deep estimates. Don´t estimate dozens of issues; don´t think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it´s C as a Noon, finally D is a Tonight - that´s what I´ll do this week.”). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.

Trust through reliability
Our trade is lacking trust. Customers don´t trust software companies/departments much. Managers don´t trust developers much. I find that perfectly understandable in the light of what we´re trying to accomplish: delivering software in the face of uncertainty by the means of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that´s a fundamental misunderstanding. Software development is development. It´s basically research. As software developers we´re constantly executing experiments to find out what really provides value to users. We don´t know what they need, we just have mediated hypotheses. That´s why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I´d say: reliability over scope. It´s more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don´t wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don´t need to learn that the hard way. Just start with small batches of three different sizes.

Fast feedback
What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It´s boring, it´s frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It´s even more important than planning new issues. Because as long as work started is not released (accepted) it´s potential waste. So before starting new work better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it´s a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue.

Flexibility
As if reliability/trust and fast feedback for less waste weren´t enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it´s neither D nor E, but G. And after G it´s D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost. Because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It´s unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you´re able to change your course at any time.

Premature completion
Customers/management are used to premeditated budgets. They want to know exactly how much to pay for a certain amount of requirements. That´s understandable. But it does not match the nature of software development. We should know that by now. Maybe there´s somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven´t seen such a team yet. Which does not mean it´s impossible, but I think it´s nothing I can recommend to strive for. Rather I´d say: don´t try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment. With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to the initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn´t it more important to constantly move forward? Step by step. We´re not running sprints, we´re not running marathons, not even ultra-marathons. We´re in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can run an experiment (or invite users to an experiment) to see whether a particular trait of the software system is already good enough. And if so, she can switch attention to a different aspect. In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It´s good enough - for now. 33% money saved. Wouldn´t that be splendid? Isn´t that a stunning argument for any budget-sensitive customer? You can save money and still get what you need?

Pull on practices
So far, in addition to more trust, more flexibility, and less money spent, Spinning led to “doing less”, which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning´s short acceptance cycles have one more effect. They exert pull on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn´t 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that´s different. If you´re practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32 hour promises. Spinning thus is pulling on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that. Because first the Product Owner and then management will notice an increasing difficulty in delivering value within 32 hours. There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don´t count the “WTF!”, count the “No way!” utterances.

In closing
For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development “under pressure”. Software needs to be changed more often, in smaller increments. Each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It´s sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what´s there, not what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that´s the challenge we need to accept if we´re serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and “getting more bang for the buck”.

    Read the article

  • Digitally Signing a XAP Silverlight

    I've been referring a lot of people lately to the steps to sign a XAP, so I decided to post an excerpt I wrote about signing Silverlight XAP files in the Silverlight 4 Whitepaper on Channel 9 here to help spread the word. The signing process is important if you are creating an elevated trust out-of-browser application because it helps: reassure your users that the application is authentic, and allow updates to elevated trust applications. Elevated trust out-of-browser applications enable developers...

    Read the article

  • Trouble connecting a Ubuntu system to IPv6 tunnel over NAT

    - by John Millikin
    I'm trying to set up an IPv6 tunnel, via Hurricane Electric's tunnel-broker service. I've configured my system using their example commands:

        # $ipv4a = tunnel server's IPv4 IP
        # $ipv4b = user's IPv4 IP
        # $ipv6a = tunnel server's side of point-to-point /64 allocation
        # $ipv6b = user's side of point-to-point /64 allocation
        ip tunnel add he-ipv6 mode sit remote $ipv4a local $ipv4b ttl 255
        ip link set he-ipv6 up
        ip addr add $ipv6b dev he-ipv6
        ip route add ::/0 dev he-ipv6

    And have configured my desktop to be in my NAT router's DMZ. The router is running Tomato firmware. But I can't ping any IPv6 services:

        $ ping6 -I he-ipv6 '2001:470:1f04:454::1'
        PING 2001:470:1f04:454::1(2001:470:1f04:454::1) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes
        From 2001:470:1f04:454::2 icmp_seq=1 Destination unreachable: Address unreachable
        From 2001:470:1f04:454::2 icmp_seq=2 Destination unreachable: Address unreachable

    I can ping my local address:

        $ ping6 -I he-ipv6 '2001:470:1f04:454::2'
        PING 2001:470:1f04:454::2(2001:470:1f04:454::2) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes
        64 bytes from 2001:470:1f04:454::2: icmp_seq=1 ttl=64 time=0.037 ms
        64 bytes from 2001:470:1f04:454::2: icmp_seq=2 ttl=64 time=0.039 ms

    I don't know much about routing, but results I found online suggested the output of ip -6 route and ip addr could be useful:

        $ ip -6 route
        2001:470:1f04:454::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295
        fe80::/64 dev virbr0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev eth1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295
        default dev he-ipv6 metric 1024 mtu 1480 advmss 1420 hoplimit 4294967295

        $ ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 100
            link/ether 00:1c:c0:a1:98:b2 brd ff:ff:ff:ff:ff:ff
            inet 192.168.1.10/24 brd 192.168.1.255 scope global eth1
            inet6 fe80::21c:c0ff:fea1:98b2/64 scope link
               valid_lft forever preferred_lft forever
        3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 36:4c:33:ab:0d:c6 brd ff:ff:ff:ff:ff:ff
            inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
            inet6 fe80::344c:33ff:feab:dc6/64 scope link
               valid_lft forever preferred_lft forever
        4: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
            link/ether 00:76:62:6e:65:74 brd ff:ff:ff:ff:ff:ff
        5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 7e:29:5e:7c:ba:93 brd ff:ff:ff:ff:ff:ff
        6: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
            link/sit 0.0.0.0 brd 0.0.0.0
        7: he-ipv6@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN
            link/sit 24.130.225.239 peer 72.52.104.74
            inet6 2001:470:1f04:454::2/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::1882:e1ef/128 scope link
               valid_lft forever preferred_lft forever
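    An aside not from the original question: a small diagnostic sketch that asks the kernel which source address it would pick for an outbound IPv6 packet (a UDP connect sends no traffic), which quickly shows whether the he-ipv6 tunnel is being chosen as the default route. The target address is just a well-known public IPv6 resolver used for illustration.

        import socket

        def ipv6_source_for(target="2001:4860:4860::8888", port=53):
            """Ask the routing table which source address would be used.

            connect() on a UDP socket sends no packets; it only fixes the
            route/source selection, so this is safe to run offline.
            """
            s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
            try:
                s.connect((target, port))
                return s.getsockname()[0]
            finally:
                s.close()

        if __name__ == "__main__":
            try:
                print("IPv6 source address:", ipv6_source_for())
            except OSError as e:
                print("no usable IPv6 route:", e)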

    Read the article

  • Command-line connect to wired network for Ubuntu

    - by Tim
    I'd like to know how to use the command line to connect to a wired network in general on Ubuntu 8.10. In my case, I connect a cable to my laptop but it doesn't work with my WICD, so I'd like to try the command-line method. Here is the ifconfig of my network adapters:

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:c0:9f:8d:23:74
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:19 Base address:0x1800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4457 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:493002 (493.0 KB)  TX bytes:493002 (493.0 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1
                  RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:806027375 (806.0 MB)  TX bytes:78834873 (78.8 MB)

        wlan0:avahi Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  inet addr:169.254.5.92  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1

        wmaster0  Link encap:UNSPEC  HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Thanks and regards!

    UPDATE: Tried what oyvindio suggested. Here is the failing message:

        $ sudo dhclient3 eth0
        There is already a pid file /var/run/dhclient.pid with pid 18279
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:c0:9f:8d:23:74
        Sending on LPF/eth0/00:c0:9f:8d:23:74
        Sending on Socket/fallback
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
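    An aside not from the original thread: when dhclient sends DHCPDISCOVER and gets no offers, it is worth first confirming the NIC actually sees a link. A small sketch using the Linux sysfs interface, assuming the interface is named eth0:

        from pathlib import Path

        def link_status(iface="eth0"):
            """Read Linux sysfs to see whether the NIC detects a cable/link."""
            base = Path("/sys/class/net") / iface
            state = (base / "operstate").read_text().strip()
            try:
                carrier = (base / "carrier").read_text().strip()  # "1" = link detected
            except OSError:
                carrier = "unknown (interface is down)"
            return state, carrier

        if __name__ == "__main__":
            state, carrier = link_status("eth0")
            print(f"operstate={state} carrier={carrier}")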

    Read the article

  • how to use iptables to block the IP of device connected to openwrt router

    - by scola
    I have two routers (A, B). Router A connects to the Internet with IP 192.168.1.1. The OpenWrt router B connects to the LAN of A as a bridge, with static IP 192.168.1.111. I am learning to use iptables to control the devices connected to B (wlan). I use my phone to connect to the wifi of B; the phone's IP is 192.168.1.100, and it can surf the Internet normally. I want to block the phone's IP so that the phone cannot connect to the Internet. Referring to http://bredsaal.dk/some-small-iptables-on-openwrt-tips I tried:

        iptables -A input_wan -s 192.168.1.100 --jump REJECT
        iptables -A forwarding_rule -d 192.168.1.100 --jump REJECT

    But it does not work; the phone still connects to the Internet normally. And I tried other chains (INPUT, OUTPUT, FORWARD); so many chains confused me:

        iptables -I OUTPUT -o br-lan -s 192.168.1.100 -j DROP

    And it does not work either. I'm sure that iptables itself has no problem:

        root@OpenWrt:/etc# iptables -L|grep Chain
        Chain INPUT (policy ACCEPT)
        Chain FORWARD (policy DROP)
        Chain OUTPUT (policy ACCEPT)
        Chain forward (1 references)
        Chain forwarding_lan (1 references)
        Chain forwarding_rule (1 references)
        Chain forwarding_wan (1 references)
        Chain input (1 references)
        Chain input_lan (1 references)
        Chain input_rule (1 references)
        Chain input_wan (1 references)
        Chain output (1 references)

        root@OpenWrt:/etc# ifconfig
        br-lan    Link encap:Ethernet  HWaddr 0C:82:68:97:57:BA
                  inet addr:192.168.1.111  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::e82:68ff:fe97:57ba/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:14976 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:7656 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2851980 (2.7 MiB)  TX bytes:1902785 (1.8 MiB)

        eth0      Link encap:Ethernet  HWaddr 0C:82:68:97:57:BA
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:58201 errors:0 dropped:11 overruns:0 frame:0
                  TX packets:45012 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:54591348 (52.0 MiB)  TX bytes:5711142 (5.4 MiB)
                  Interrupt:4

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:312 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:312 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:39961 (39.0 KiB)  TX bytes:39961 (39.0 KiB)

        mon.wlan0 Link encap:UNSPEC  HWaddr 0C-82-68-97-57-BA-00-48-00-00-00-00-00-00-00-00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:4900 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:32
                  RX bytes:1223807 (1.1 MiB)  TX bytes:0 (0.0 B)

        wlan0     Link encap:Ethernet  HWaddr 0C:82:68:97:57:BA
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:37346 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:49662 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:32
                  RX bytes:3808021 (3.6 MiB)  TX bytes:54486310 (51.9 MiB)

        root@OpenWrt:/etc/config# cat network
        config 'interface' 'loopback'
            option 'ifname' 'lo'
            option 'proto' 'static'
            option 'ipaddr' '127.0.0.1'
            option 'netmask' '255.0.0.0'

        config 'interface' 'lan'
            option 'ifname' 'eth0'
            option 'type' 'bridge'
            option 'proto' 'static'
            option 'ipaddr' '192.168.1.111'
            option 'netmask' '255.255.255.0'
            option 'gateway' '192.168.1.1'
            option dns 192.168.1.1

    So how do I use iptables to control the network of the wlan? Thanks in advance, and sorry for my poor English.

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >