  • routing specific IP to ppp0 tunnel

    - by gompertz
    Hi All, I feel I've struggled with this long enough and need some help. I have a pptp tunnel and am trying to route traffic destined for 208.85.40.20 into the pptp tunnel (ppp0). (Keen observers may recognize the IP as being that of pandora.com.) I am doing all this configuration on a router... and I know it's not working, as traceroute yields nothing but asterisks. I've pasted the relevant outputs below (with some "security" editing to the addresses):

      root@OpenWrt:~# ifconfig
      br0        Link encap:Ethernet HWaddr 00:1A:92:BC:XX:XX inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:28185 errors:0 dropped:0 overruns:0 frame:0 TX packets:24936 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:4894242 (4.6 MiB) TX bytes:5941902 (5.6 MiB)
      eth0       Link encap:Ethernet HWaddr 00:1A:92:BC:XX:XX UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:51829 errors:0 dropped:0 overruns:0 frame:0 TX packets:56824 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:11490288 (10.9 MiB) TX bytes:11857913 (11.3 MiB) Interrupt:4
      eth2       Link encap:Ethernet HWaddr 00:1A:92:BC:XX:XX UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:4 errors:0 dropped:0 overruns:0 frame:15426 TX packets:9529 errors:21 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:423 (423.0 B) TX bytes:596036 (582.0 KiB) Interrupt:2 Base address:0x2000
      lo         Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:30 errors:0 dropped:0 overruns:0 frame:0 TX packets:30 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2300 (2.2 KiB) TX bytes:2300 (2.2 KiB)
      ppp0       Link encap:Point-Point Protocol inet addr:68.68.39.250 P-t-P:172.16.20.1 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1400 Metric:1 RX packets:165 errors:2 dropped:0 overruns:0 frame:0 TX packets:68 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX bytes:7006 (6.8 KiB) TX bytes:3462 (3.3 KiB)
      vlan0      Link encap:Ethernet HWaddr 00:1A:92:BC:XX:XX UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1 RX packets:28182 errors:0 dropped:0 overruns:0 frame:0 TX packets:33813 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:5006544 (4.7 MiB) TX bytes:6609774 (6.3 MiB)
      vlan1      Link encap:Ethernet HWaddr 00:1A:92:BC:XX:XX inet addr:173.183.111.3 Bcast:173.183.111.255 Mask:255.255.224.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:23653 errors:0 dropped:0 overruns:0 frame:0 TX packets:23012 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:5522012 (5.2 MiB) TX bytes:4982944 (4.7 MiB)
      wds0.4915  Link encap:Ethernet HWaddr 00:1A:92:BC:XX:XX UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
      wds0.4915  Link encap:Ethernet HWaddr 00:1A:92:BC:XX:XX UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

      root@OpenWrt:~# cat /etc/ppp/ip-up
      iptables -A FORWARD -t filter -i br0 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
      iptables -A FORWARD -t filter -i ppp0 -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -t nat -A POSTROUTING -o ppp0 -s 192.168.1.1/24 -d 0/0 -j MASQUERADE
      iptables -A forwarding_rule -o ppp0 -j ACCEPT
      iptables -A forwarding_rule -i ppp0 -j ACCEPT
      iptables -t nat -A postrouting_rule -o ppp0 -j MASQUERADE

      root@OpenWrt:~# route
      Kernel IP routing table
      Destination     Gateway          Genmask          Flags Metric Ref Use Iface
      172.16.20.1     *                255.255.255.255  UH    0      0   0   ppp0
      208.85.40.20    *                255.255.255.255  UH    0      0   0   ppp0
      192.168.1.0     *                255.255.255.0    U     0      0   0   br0
      173.183.192.0   *                255.255.224.0    U     0      0   0   vlan1
      default         d173-183-192-1.  0.0.0.0          UG    0      0   0   vlan1
      default         192.168.1.1      0.0.0.0          UG    0      0   0   br0

    Any advice is greatly appreciated. I'm not too great with networking but am pretty astute at learning ;-)
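
    For reference, the usual ingredients for steering a single destination out a tunnel are an explicit host route, forwarding enabled, and NAT on the tunnel side - the third piece is the one most often missing. A minimal sketch against the setup above (busybox route/iptables as on stock OpenWrt; the last check assumes tcpdump is installed):

      # host route for the one destination (already present in the table above)
      route add -host 208.85.40.20 dev ppp0
      # make sure the kernel forwards between br0 and ppp0
      echo 1 > /proc/sys/net/ipv4/ip_forward
      # rewrite LAN source addresses so replies come back through the tunnel
      iptables -t nat -A POSTROUTING -o ppp0 -d 208.85.40.20 -j MASQUERADE
      # watch the tunnel while running traceroute from a LAN host
      tcpdump -n -i ppp0 host 208.85.40.20

    If packets show up on ppp0 but nothing returns, the far end of the pptp tunnel is not routing or NATing the traffic, which no local rule can fix.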

  • PHP install sqlite3 extension

    - by Kevin
    We are using PHP 5.3.6 here, but we used the --without-sqlite3 option when compiling PHP (it is listed in the 'Configure Command' output). However, it is very risky to recompile PHP on that server; there are many visitors. How can we install/use sqlite3? Regards, Kevin
    [EDIT] yum repolist gives:

      Loaded plugins: fastestmirror
      Loading mirror speeds from cached hostfile
       * base: mirror.nl.leaseweb.net
       * extras: mirror.nl.leaseweb.net
       * updates: mirror.nl.leaseweb.net
      repo id    repo name            status
      base       CentOS-5 - Base      3,566
      extras     CentOS-5 - Extras      237
      updates    CentOS-5 - Updates     376
      repolist: 4,179

    rpm -qa | grep php gives:

      php-pdo-5.3.6-1.w5
      php-mysql-5.3.6-1.w5
      psa-php5-configurator-1.5.3-cos5.build95101022.10
      php-mbstring-5.3.6-1.w5
      php-imap-5.3.6-1.w5
      php-cli-5.3.6-1.w5
      php-gd-5.3.6-1.w5
      php-5.3.6-1.w5
      php-common-5.3.6-1.w5
      php-xml-5.3.6-1.w5

    php -i | grep sqlite gives:

      PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/sqlite3.so' - /usr/lib64/php/modules/sqlite3.so: cannot open shared object file: No such file or directory in Unknown on line 0
      Configure Command => './configure' '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--cache-file=../config.cache' '--with-libdir=lib64' '--with-config-file-path=/etc' '--with-config-file-scan-dir=/etc/php.d' '--disable-debug' '--with-pic' '--disable-rpath' '--without-pear' '--with-bz2' '--with-exec-dir=/usr/bin' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--enable-gd-native-ttf' '--without-gdbm' '--with-gettext' '--with-gmp' '--with-iconv' '--with-jpeg-dir=/usr' '--with-openssl' '--with-pcre-regex=/usr' '--with-zlib' '--with-layout=GNU' '--enable-exif' '--enable-ftp' '--enable-magic-quotes' '--enable-sockets' '--enable-sysvsem' '--enable-sysvshm' '--enable-sysvmsg' '--with-kerberos' '--enable-ucd-snmp-hack' '--enable-shmop' '--enable-calendar' '--without-mime-magic' '--without-sqlite' '--without-sqlite3' '--with-libxml-dir=/usr' '--enable-xml' '--with-system-tzdata' '--enable-force-cgi-redirect' '--enable-pcntl' '--with-imap=shared' '--with-imap-ssl' '--enable-mbstring=shared' '--enable-mbregex' '--with-gd=shared' '--enable-bcmath=shared' '--enable-dba=shared' '--with-db4=/usr' '--with-xmlrpc=shared' '--with-ldap=shared' '--with-ldap-sasl' '--with-mysql=shared,/usr' '--with-mysqli=shared,/usr/bin/mysql_config' '--enable-dom=shared' '--with-pgsql=shared' '--enable-wddx=shared' '--with-snmp=shared,/usr' '--enable-soap=shared' '--with-xsl=shared,/usr' '--enable-xmlreader=shared' '--enable-xmlwriter=shared' '--with-curl=shared,/usr' '--enable-fastcgi' '--enable-pdo=shared' '--with-pdo-odbc=shared,unixODBC,/usr' '--with-pdo-mysql=shared,/usr' '--with-pdo-pgsql=shared,/usr' '--with-pdo-sqlite=shared,/usr' '--with-pdo-dblib=shared,/usr' '--enable-json=shared' '--enable-zip=shared' '--with-readline' '--with-pspell=shared' '--enable-phar=shared' '--with-mcrypt=shared,/usr' '--with-tidy=shared,/usr' '--with-mssql=shared,/usr' '--enable-sysvmsg=shared' '--enable-sysvshm=shared' '--enable-sysvsem=shared' '--enable-posix=shared' '--with-unixODBC=shared,/usr' '--enable-fileinfo=shared' '--enable-intl=shared' '--with-icu-dir=/usr' '--with-recode=shared,/usr'
      /etc/php.d/pdo_sqlite.ini, /etc/php.d/sqlite3.ini,
      PHP Warning: Unknown: It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Europe/Berlin' for 'CET/1.0/no DST' instead in Unknown on line 0
      PDO drivers => mysql, sqlite
      pdo_sqlite
      PWD => /root/sqlite
      _SERVER["PWD"] => /root/sqlite
      _ENV["PWD"] => /root/sqlite
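
    Since only the shared module is missing, one way that avoids recompiling the server's PHP entirely is to build just ext/sqlite3 from the matching source tarball. A hedged sketch (assumes gcc and the php-devel headers for this 5.3.6 build are installed; the download URL is illustrative):

      wget http://museum.php.net/php5/php-5.3.6.tar.gz    # any php.net mirror carrying 5.3.6
      tar xzf php-5.3.6.tar.gz
      cd php-5.3.6/ext/sqlite3
      phpize                        # prepares the bundled extension for a standalone build
      ./configure && make
      cp modules/sqlite3.so /usr/lib64/php/modules/
      echo "extension=sqlite3.so" > /etc/php.d/sqlite3.ini
      php -m | grep -i sqlite       # confirm it now loads without the startup warning

    Only the web server (or php-fpm) needs a reload afterwards; the PHP binary itself is untouched.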

  • Openldap/Sasl/GSSAPI on Debian: Key table entry not found

    - by badbishop
    The goal: to make an OpenLDAP server authenticate using Kerberos V via GSSAPI.
    Setup: several virtual machines running on freshly installed/updated Debian Squeeze:
    - a master KDC server, kdc.example.com
    - an LDAP server running OpenLDAP, ldap.example.com
    The problem:

      tom@ldap:~$ ldapsearch -b 'dc=example,dc=com'
      SASL/GSSAPI authentication started
      ldap_sasl_interactive_bind_s: Other (e.g., implementation specific) error (80)
              additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Key table entry not found)

    One might suggest adding that bloody keytab entry, but here's the real problem:

      ktutil: rkt /etc/ldap/ldap.keytab
      ktutil: list
      slot KVNO Principal
      ---- ---- ---------------------------------------------------------------------
         1    2 ldap/[email protected]
         2    2 ldap/[email protected]
         3    2 ldap/[email protected]
         4    2 ldap/[email protected]

    So the entry suggested by the OpenLDAP manual is there all right. Deleting and re-creating both the service principal and the keytab on ldap.example.com didn't help; I get the same error. And before I made the keytab file readable by openldap, I got a "Permission denied" error instead of the one in the subject. Which implies that the right keytab file is being accessed, as set in /etc/default/slapd. I have my doubts about the following part of the slapd config:

      root@ldap:~# cat /etc/ldap/slapd.d/cn\=config.ldif | grep -v "^#"
      dn: cn=config
      objectClass: olcGlobal
      cn: config
      olcArgsFile: /var/run/slapd/slapd.args
      olcLogLevel: 256
      olcPidFile: /var/run/slapd/slapd.pid
      olcToolThreads: 1
      structuralObjectClass: olcGlobal
      entryUUID: d6737f5c-d321-1030-9dbe-27d2a7751e11
      olcSaslHost: kdc.example.com
      olcSaslRealm: EXAMPLE.COM
      olcSaslSecProps: noplain,noactive,noanonymous,minssf=56
      olcAuthzRegexp: {0}"uid=([^/]*),cn=EXAMPLE.COM,cn=GSSAPI,cn=auth" "uid=$1,ou=People,dc=example,dc=com"
      olcAuthzRegexp: {1}"uid=host/([^/]*).example.com,cn=example.com,cn=gssapi,cn=auth" "cn=$1,ou=hosts,dc=example,dc=com"

    A HOWTO at https://help.ubuntu.com/community/OpenLDAPServer#Kerberos_Authentication mentions vaguely: "Also, it is frequently necessary to map the Distinguished Name (DN) of an authorized Kerberos client to an existing entry in the DIT." I fail to understand where in the tree this should be defined, what schema should be used, etc. After hours of googling, it's official: I'm stuck! Please, help.
    Other things checked: Kerberos as such works fine (I can ssh without using a password to any machine in this setup), so there should be no DNS-related problems. ldapsearch -b 'dc=example,dc=com' -x works OK. SASL/GSSAPI has been tested using sasl-sample-server -m GSSAPI -s ldap and sasl-sample-client -s ldap -n ldap.example.com -u tom without errors:

      root@ldap:~# sasl-sample-server -m GSSAPI -s ldap
      Forcing use of mechanism GSSAPI
      Sending list of 1 mechanism(s)
      S: R1NTQVBJ
      Waiting for client mechanism...
C: R1NTQVBJAGCCAmUGCSqGSIb3EgECAgEAboICVDCCAlCgAwIBBaEDAgEOogcDBQAgAAAAo4IBamGCAWYwggFioAMCAQWhDRsLRVhBTVBMRS5DT02iIzAhoAMCAQOhGjAYGwRsZGFwGxBsZGFwLmV4YW1wbGUuY29to4IBJTCCASGgAwIBEqEDAgECooIBEwSCAQ8Re8XUnscB8dx6V/cXL+uzSF2/olZvcrVAJHZBZrfRKUFEQmU1Li46bUGK3GZwsn6qUVwmW6lyqVctOIYwGvBpz81Rw/5mj4V5iQudZbIRa+5Ew6W1oBB7ALi2cnPsbUroqzGmEh8/Vw8zSFk7W1gND4DLuWrPXD2xhLDUMMekBn5nXEPTnNAnV4w81Sj3ZlyLZz5OSitGVUEnQweV53z1spWsASHHWod/tSuxb19YeWmY5QHXPLG+lL5+w+Cykr0EhYVj8f8MDWFB8qoN1cr85xDfn18r8JldSw+i18nFKOo8usG+37hZTWynHYvBfMONtG9mLJv82KGPZMydWK7pzyTZDcnSsIjo2AftMZd5pIHMMIHJoAMCARKigcEEgb5aG1k4xgxmUXX7RKfvAbVBVJ12dWOgFFjMYceKjziXwrrOkv8ZwIvef9Yn2KsWznb5L55SXt2c/zlPa5mLKIktvw77hsK1h/GYc7p//BGOsmr47aCqVWsGuTqVT129uo5LNQDeSFwl2jXCkCZJZavOVrqYsM6flrPYE4n5lASTcPitX+/WNsf6WrvZoaexiv1JqyM/MWqS/vMBRMMc5xlurj6OARFvP9aFZoK/BLmfkSyAJj6MLbLVXZtkHiIPgot 'GSSAPI' Sending response... S: YIGZBgkqhkiG9xIBAgICAG+BiTCBhqADAgEFoQMCAQ+iejB4oAMCARKicQRvkxggi9pW+yJ1ExbTwLDclqw/VQ98aPq8mt39hkO6PPfcO2cB+t6vJ01xRKBrT9D2qF2XK0SWD4PQNb5UFbH4RM/bKAxDuCfZ1MHKgIWTLu4bK7VGZTbYydcckU2d910jIdvkkHhaRqUEM4cqp/cR Waiting for client reply... C: got '' Sending response... S: BQQF/wAMAAAAAAAAMBOWqQcACAAlCodrXW66ZObsEd4= Waiting for client reply... C: BQQE/wAMAAAAAAAAFUYbXQQACAB0b20VynB4uGH/iIzoRhw=got '?' Negotiation complete Username: tom Realm: (NULL) SSF: 56 sending encrypted message 'srv message 1' S: AAAASgUEB/8AAAAAAAAAADATlqrqrBW0NRfPMXMdMz+zqY32YakrHqFps3o/vO6yDeyPSaSqprrhI+t7owk7iOsbrZ/idJRxCBm8Wazx Waiting for encrypted message... C: AAAATQUEBv8AAAAAAAAAABVGG17WC1+/kIV9xTMUdq6Y4qYmmTahHVCjidgGchTOOOrBLEwA9IqiTCdRFPVbK1EgJ34P/vxMQpV1v4WZpcztgot '' recieved decoded message 'client message 1' root@ldap:~# sasl-sample-client -s ldap -n ldap.example.com -u tom service=ldap Waiting for mechanism list from server... S: R1NTQVBJrecieved 6 byte message Choosing best mechanism from: GSSAPI returning OK: tom Using mechanism GSSAPI Preparing initial. Sending initial response... C: R1NTQVBJAGCCAmUGCSqGSIb3EgECAgEAboICVDCCAlCgAwIBBaEDAgEOogcDBQAgAAAAo4IBamGCAWYwggFioAMCAQWhDRsLRVhBTVBMRS5DT02iIzAhoAMCAQOhGjAYGwRsZGFwGxBsZGFwLmV4YW1wbGUuY29to4IBJTCCASGgAwIBEqEDAgECooIBEwSCAQ8Re8XUnscB8dx6V/cXL+uzSF2/olZvcrVAJHZBZrfRKUFEQmU1Li46bUGK3GZwsn6qUVwmW6lyqVctOIYwGvBpz81Rw/5mj4V5iQudZbIRa+5Ew6W1oBB7ALi2cnPsbUroqzGmEh8/Vw8zSFk7W1gND4DLuWrPXD2xhLDUMMekBn5nXEPTnNAnV4w81Sj3ZlyLZz5OSitGVUEnQweV53z1spWsASHHWod/tSuxb19YeWmY5QHXPLG+lL5+w+Cykr0EhYVj8f8MDWFB8qoN1cr85xDfn18r8JldSw+i18nFKOo8usG+37hZTWynHYvBfMONtG9mLJv82KGPZMydWK7pzyTZDcnSsIjo2AftMZd5pIHMMIHJoAMCARKigcEEgb5aG1k4xgxmUXX7RKfvAbVBVJ12dWOgFFjMYceKjziXwrrOkv8ZwIvef9Yn2KsWznb5L55SXt2c/zlPa5mLKIktvw77hsK1h/GYc7p//BGOsmr47aCqVWsGuTqVT129uo5LNQDeSFwl2jXCkCZJZavOVrqYsM6flrPYE4n5lASTcPitX+/WNsf6WrvZoaexiv1JqyM/MWqS/vMBRMMc5xlurj6OARFvP9aFZoK/BLmfkSyAJj6MLbLVXZtkHiIP Waiting for server reply... S: YIGZBgkqhkiG9xIBAgICAG+BiTCBhqADAgEFoQMCAQ+iejB4oAMCARKicQRvkxggi9pW+yJ1ExbTwLDclqw/VQ98aPq8mt39hkO6PPfcO2cB+t6vJ01xRKBrT9D2qF2XK0SWD4PQNb5UFbH4RM/bKAxDuCfZ1MHKgIWTLu4bK7VGZTbYydcckU2d910jIdvkkHhaRqUEM4cqp/cRrecieved 156 byte message C: Waiting for server reply... S: BQQF/wAMAAAAAAAAMBOWqQcACAAlCodrXW66ZObsEd4=recieved 32 byte message Sending response... C: BQQE/wAMAAAAAAAAFUYbXQQACAB0b20VynB4uGH/iIzoRhw= Negotiation complete Username: tom SSF: 56 Waiting for encoded message... 
S: AAAASgUEB/8AAAAAAAAAADATlqrqrBW0NRfPMXMdMz+zqY32YakrHqFps3o/vO6yDeyPSaSqprrhI+t7owk7iOsbrZ/idJRxCBm8Wazxrecieved 78 byte message recieved decoded message 'srv message 1' sending encrypted message 'client message 1' C: AAAATQUEBv8AAAAAAAAAABVGG17WC1+/kIV9xTMUdq6Y4qYmmTahHVCjidgGchTOOOrBLEwA9IqiTCdRFPVbK1EgJ34P/vxMQpV1v4WZpczt
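
    One detail in the config above that could explain the error: with Cyrus SASL, slapd derives its GSSAPI service principal from the SASL host name, so olcSaslHost: kdc.example.com would make it look for ldap/[email protected] in the keytab - a principal that is not among the four entries listed. This is a guess from the posted config rather than a confirmed diagnosis, but pointing olcSaslHost back at the server's own FQDN is cheap to try:

      root@ldap:~# ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
      dn: cn=config
      changetype: modify
      replace: olcSaslHost
      olcSaslHost: ldap.example.com
      EOF

    After restarting slapd, re-running the plain ldapsearch would show whether the keytab lookup now matches.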

  • How to setup linux permissions the WWW folder?

    - by Xeoncross
    Updated Summary
    The /var/www directory is owned by root:root, which means that no one can use it and it's entirely useless. Since we all want a web server that actually works (and no one should be logging in as "root"), we need to fix this. Only two entities need access:
    1. PHP/Perl/Ruby/Python all need access to the folders and files, since they create many of them (i.e. /uploads/). These scripting languages should be running under nginx or apache (or even some other thing like FastCGI for PHP).
    2. The developers
    How do they get access? I know that someone, somewhere has done this before. With however-many billions of websites out there, you would think there would be more information on this topic.
    I know that 777 is full read/write/execute permission for owner/group/other. So this doesn't seem to be needed, as it leaves random users full permissions. What permissions need to be used on /var/www so that:
    1. source control like git or svn,
    2. users in a group like "websites" (or even added to "www-data"),
    3. servers like apache or lighttpd,
    4. and PHP/Perl/Ruby
    can all read, create, and run files (and directories) there?
    If I'm correct, Ruby and PHP scripts are not "executed" directly, but passed to an interpreter. So there is no need for execute permission on files in /var/www...? Therefore, it seems like the correct permission would be chmod -R 1660, which would:
    1. make all files shareable by these four entities
    2. make all files non-executable by mistake
    3. block everyone else from the directory entirely
    4. set the permission mode to "sticky" for all future files
    Is this correct?
    Update: I just realized that files and directories might need different permissions - I was talking about files above, so I'm not sure what the directory permissions would need to be.
    Update 2: The folder structure of /var/www changes drastically, as one of the four entities above is always adding (and sometimes removing) folders and subfolders many levels deep. They also create and remove files that the other three entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about Ruby/PHP above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access.
    Update 3: This is for personal development machines and private company servers. No random "web customers" like on a shared host.
    Update 4: This article by Slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP OR svn/git run as, and how to change them.
    Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person with the best method of securing and managing the www directory wins.
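
    For what it's worth, the pattern most setups converge on is a shared group plus setgid directories, rather than 777 or sticky files. A sketch, assuming Debian/Ubuntu's www-data server user and a hypothetical "webdev" group for the humans:

      groupadd webdev
      usermod -aG webdev alice        # one per developer; "alice" is a placeholder
      usermod -aG webdev www-data     # let the server user share the group
      chown -R root:webdev /var/www
      find /var/www -type d -exec chmod 2775 {} \;   # setgid bit: new files inherit the group
      find /var/www -type f -exec chmod 664 {} \;    # rw for owner+group, read-only for others

    The setgid bit on directories is the piece a plain chmod -R misses: it is what keeps files created later (by git, by PHP uploads, by developers) owned by the shared group.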

  • linux routing bug?

    - by Balázs Pozsár
    I have been struggling with this not-easily-reproducible issue for a while. I am using linux kernel v3.1.0, and sometimes routing to a few IP addresses does not work. What seems to happen is that instead of sending the packet to the gateway, the kernel treats the destination address as local and tries to get its MAC address via ARP. For example, right now my IP address is 172.16.1.104/24 and the gateway is 172.16.1.254:

      # ifconfig eth0
      eth0 Link encap:Ethernet HWaddr 00:1B:63:97:FC:DC inet addr:172.16.1.104 Bcast:172.16.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:230772 errors:0 dropped:0 overruns:0 frame:0 TX packets:171013 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:191879370 (182.9 Mb) TX bytes:47173253 (44.9 Mb) Interrupt:17

      # route -n
      Kernel IP routing table
      Destination   Gateway        Genmask         Flags Metric Ref Use Iface
      0.0.0.0       172.16.1.254   0.0.0.0         UG    0      0   0   eth0
      172.16.1.0    0.0.0.0        255.255.255.0   U     1      0   0   eth0

    I can ping a few addresses, but not 172.16.0.59:

      # ping -c1 172.16.1.254
      PING 172.16.1.254 (172.16.1.254) 56(84) bytes of data.
      64 bytes from 172.16.1.254: icmp_seq=1 ttl=64 time=0.383 ms
      --- 172.16.1.254 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms
      root@pozsybook:~# ping -c1 172.16.0.1
      PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data.
      64 bytes from 172.16.0.1: icmp_seq=1 ttl=63 time=5.54 ms
      --- 172.16.0.1 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 5.545/5.545/5.545/0.000 ms
      root@pozsybook:~# ping -c1 172.16.0.2
      PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
      64 bytes from 172.16.0.2: icmp_seq=1 ttl=62 time=7.92 ms
      --- 172.16.0.2 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 7.925/7.925/7.925/0.000 ms
      root@pozsybook:~# ping -c1 172.16.0.59
      PING 172.16.0.59 (172.16.0.59) 56(84) bytes of data.
      From 172.16.1.104 icmp_seq=1 Destination Host Unreachable
      --- 172.16.0.59 ping statistics ---
      1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

    When trying to ping 172.16.0.59, I can see in tcpdump that an ARP request was sent:

      # tcpdump -n -i eth0|grep ARP
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
      15:25:16.671217 ARP, Request who-has 172.16.0.59 tell 172.16.1.104, length 28

    and /proc/net/arp has an incomplete entry for 172.16.0.59:

      # grep 172.16.0.59 /proc/net/arp
      172.16.0.59  0x1  0x0  00:00:00:00:00:00  *  eth0

    Please note that 172.16.0.59 is accessible from this LAN from other computers. Does anyone have any idea of what's going on? Thanks.
    Update - replies to the comments below:
    - there are no interfaces besides eth0 and lo
    - the ARP request cannot be seen on the other end, but that's how it should work; the main problem is that an ARP request should not even be sent in the first place
    - the problem persists even if I add an explicit route with the command "route add -host 172.16.0.59 gw 172.16.1.254 dev eth0"
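
    When the kernel appears to ARP for an off-link address like this, it can help to ask it directly which route it would actually use for that destination and to clear any cached or stale state; a quick diagnostic sketch with iproute2:

      ip route get 172.16.0.59          # shows the exact route (and source) the kernel picks right now
      ip route flush cache              # kernels of this era clone routes; drop the cache
      ip neigh show dev eth0            # any lingering INCOMPLETE/FAILED neighbour entries?
      ip neigh del 172.16.0.59 dev eth0 # remove the bogus entry before retesting

    If "ip route get" reports the destination as directly connected despite the table shown above, that output (plus "ip rule show") would be the interesting thing to attach to a kernel bug report.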

  • Strange Upload Problem on Hyper-V

    - by Ring0
    Hi, this one is driving me totally nuts. I have been trying to upload a file to www.virustotal.com (it's a harmless exe, I have since found out - DiskWipe.exe from diskwipe.org), using IE8. From Win 7 and Win 2008 R2 Datacenter (which I select to boot from VHDs) on my main machine's hardware, and also on another Win 7 PC elsewhere on my network, the upload to virustotal.com works perfectly. So, using my native NICs everything is fine, and using another machine is also perfect.
    OK, from my boot menu the default is my main development machine - the one I'm typing on now. This runs on the metal, has the Hyper-V role, and I have some guests (none of them running). Amazingly, from my console (root partition to be exact) or any guest (OS 2003/XP/2008 R2 etc.), my upload to virustotal.com slows at 32%, then HANGS at 38-something% and never finishes!!
    Here is the kicker. I have another box (my main server) running Hyper-V on the metal with three live guests. Identical hardware to my main dev machine, in another room (except the OS is Datacenter - mine is Enterprise). If I try to upload this file to virustotal.com from its bare-metal console or any guest using IE8, it stops in exactly the same place!! As for "steps I have tried etc.", those are kind of blown out of the water, as my server box is doing precisely the same thing as the machine in my room here.
    OK, commonalities: Mobo: Gigabyte GA-X58-UD5, 12GB Kingston RAM, Core i7 920 (4 cores, hyperthreading = 8) and Realtek RTL8168D/8111D Family PCI-E Gigabit Ethernet NICs. All 3 machines have this same motherboard (revision F11 BIOS), all have 12GB RAM, all have the Realtek NICs, all x64. As I mentioned before, I also have a Win 7 box with the UD5 m/board and 12GB RAM - bit of an overkill :-) All these machines, when NOT running Hyper-V, can upload this file. Perhaps you may like to try it on a Hyper-V (2008 R2) yourselves with IE8 and the desktop experience on, and see if it works or fails for you - root OS or any guest.
    So it's looking like NIC + Hyper-V = cannot upload this file (any file, I must add). The Realtek NIC driver is ver. 7.002.1125.2008. In the NIC settings I see the usual parameters for jumbo frames, checksum offloading etc. and several others. Should I fiddle with these? I ran Netmon 3.3 in a guest and the TCP session halted as the upload failed; I suppose I could study that further. I don't have Netmon on the root partition machine (yet)! All OSes are fully patched, including today's Defender files. My box is running Office 2007, but the identical server in the other room is not. Also, if I fire up a VPN to a distant client and do the upload, it works! Of course that's a different network path. Suggestions welcome please. If I left out anything important, please yell at me. Many thanks,
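
    Uploads that always stall at the same percentage on Hyper-V hosts are frequently an offload interaction between the physical NIC driver and the virtual switch. As an experiment rather than a confirmed fix, the global TCP offloads can be disabled from an elevated prompt (alongside unticking Large Send Offload and checksum offload in the Realtek adapter properties):

      netsh int tcp set global chimney=disabled
      netsh int tcp set global rss=disabled
      netsh int tcp set global autotuninglevel=disabled

    If the upload then completes, re-enable the settings one at a time to find the culprit; a newer Realtek driver than 7.002.1125.2008 would be the longer-term fix.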

  • http request via iptables --to-destination ip redirect results in no response

    - by Wouter Vegter
    I have two Ubuntu servers, each with its own IP address. Let's call them server1 and server2, having respectively IPs 1.1.1.1 and 2.2.2.2. I have nginx running on server2. The sole purpose I want server1 to have is to redirect all incoming http (so port 80) requests to server2, without clients noticing that their request is being redirected. I tried the following command on server1:

      iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2

    But when I enter 1.1.1.1 in my browser I get no response: the page keeps trying to load without giving any message or error message (I get a time-out after 2-3 mins). When I remove the above iptables rule, I immediately get a "page not found" error when I enter 1.1.1.1 in my browser; so something is working, but not as it should: when I enter 1.1.1.1, I want the HTML page hosted on 2.2.2.2 to load, because when I enter 2.2.2.2 in my browser I do see the webpage. Could anyone please help me with this? I have been searching on this for quite some time (on Server Fault and Google), which is why I ask. Many thanks for reading my question!
    Update: Thank you all for the information. Unfortunately I still get no response. I have the following iptables configuration:

      root@ip-10-48-238-216:/home/ubuntu# sudo iptables -L
      Chain INPUT (policy ACCEPT)
      target prot opt source destination
      Chain FORWARD (policy ACCEPT)
      target prot opt source destination
      Chain OUTPUT (policy ACCEPT)
      target prot opt source destination
      root@ip-10-48-238-216:/home/ubuntu# sudo iptables -t nat -L
      Chain PREROUTING (policy ACCEPT)
      target prot opt source destination
      DNAT tcp -- anywhere anywhere tcp dpt:www to:2.2.2.2
      Chain OUTPUT (policy ACCEPT)
      target prot opt source destination
      Chain POSTROUTING (policy ACCEPT)
      target prot opt source destination

    When I run tcpdump and make a request via Chrome to 1.1.1.1, I get the following:

      root@ip-10-48-238-216:/home/ubuntu# sudo tcpdump -i eth0 port 80 -vv
      tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      13:56:18.346625 IP (tos 0x0, ttl 52, id 12055, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16386 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xb398 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0
      13:56:18.346662 IP (tos 0x0, ttl 51, id 12055, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16386 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9ee0 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0
      13:56:18.598747 IP (tos 0x0, ttl 52, id 10138, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16387 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xac40 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0
      13:56:18.598777 IP (tos 0x0, ttl 51, id 10138, offset 0, flags [DF], proto TCP (6), length 60) 212-123-161-112.ip.telfort.nl.16387 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9788 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0
      ^C
      4 packets captured
      4 packets received by filter
      0 packets dropped by kernel

    The mentioned addresses relate to the following:
    - 212-123-161-112.ip.telfort.nl.16386: my personal computer
    - ww1dc1.shopreme.com.www: DNS of server2 (2.2.2.2)
    - ip-10-48-238-216.eu-west-1.compute.internal.www: Amazon Web Services EC2 internal address of server1 (1.1.1.1)
    However, the tcpdump log on server2 (2.2.2.2) stays empty and I get no response back in my browser. I am able to ping from server1 to server2. And net.ipv4.ip_forward is set to 1, and so is /proc/sys/net/ipv4/ip_forward. Could there be anything else that is missing?
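
    One thing the PREROUTING rule alone does not cover: after the DNAT, server2 still sees the original client address, so its SYN-ACK goes straight back to the client (which never spoke TCP to 2.2.2.2) and the handshake dies - consistent with the rewritten-but-unanswered SYNs in the dump above. The usual companion rules on server1, as a hedged sketch:

      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2
      # source-NAT the forwarded traffic so 2.2.2.2 replies to server1, not to the client
      iptables -t nat -A POSTROUTING -p tcp -d 2.2.2.2 --dport 80 -j MASQUERADE
      iptables -A FORWARD -p tcp -d 2.2.2.2 --dport 80 -j ACCEPT

    On EC2 specifically it may also matter whether 2.2.2.2 is the instance's public or internal address; DNATing to the internal address avoids hairpinning through the elastic IP.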

  • nginx - 403 Forbidden

    - by michell90
    I'm having trouble getting aliases working correctly on nginx. When I try to access the aliases, /pma and /mda (see secure.example.com.conf), I get a 403 Forbidden, but the base URL works correctly. I read a lot of posts but nothing helped, so here I am. Nginx and php-fpm are running as www-data:www-data and the permissions for the directories are set to:

      drwxrwsr-x+  5 www-data www-data 4.0K Dec  5 22:48 ./
      drwxr-xr-x.  3 root     root     4.0K Dec  4 22:50 ../
      drwxrwsr-x+  2 www-data www-data 4.0K Dec  5 13:10 mda.example.com/
      drwxrwsr-x+ 11 www-data www-data 4.0K Dec  5 10:34 pma.example.com/
      drwxrwsr-x+  3 www-data www-data 4.0K Dec  5 11:49 www.example.com/
      lrwxrwxrwx.  1 www-data www-data   18 Dec  5 09:56 secure.example.com -> www.example.com/

    I'm sorry for the bulk, but I thought better too much than too little. Here are the configuration files:

    /etc/nginx/nginx.conf

      user www-data www-data;
      worker_processes 1;
      error_log /var/log/nginx/error.log;
      #error_log /var/log/nginx/error.log notice;
      #error_log /var/log/nginx/error.log info;
      pid /var/run/nginx.pid;
      events {
          worker_connections 1024;
      }
      http {
          include /etc/nginx/mime.types;
          default_type application/octet-stream;
          log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
          access_log /var/log/nginx/access.log main;
          sendfile on;
          keepalive_timeout 65;
          include /etc/nginx/sites-enabled/*;
      }

    /etc/nginx/sites-enabled/secure.example.com

      server {
          listen 80;
          server_name secure.example.com;
          return 301 https://$host$request_uri;
      }
      server {
          listen 443;
          server_name secure.example.com;
          access_log /var/log/nginx/secure.example.com.access.log;
          error_log /var/log/nginx/secure.example.com.error.log;
          root /srv/http/secure.example.com;
          include /etc/nginx/ssl/secure.example.com.conf;
          include /etc/nginx/conf.d/index.conf;
          include /etc/nginx/conf.d/php-ssl.conf;
          autoindex off;
          location /pma/ {
              alias /srv/http/pma.example.com;
          }
          location /mda/ {
              alias /srv/http/mda.example.com;
          }
      }

    /etc/nginx/ssl/secure.example.com.conf

      ssl on;
      ssl_certificate /etc/nginx/ssl/secure.example.com.crt;
      ssl_certificate_key /etc/nginx/ssl/secure.example.com.key;
      ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
      ssl_ciphers HIGH:!aNULL:!MD5;

    /etc/nginx/conf.d/index.conf

      index index.php index.html index.htm;

    /etc/nginx/conf.d/php-ssl.conf

      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
          fastcgi_index index.php;
          fastcgi_param HTTPS on;
          fastcgi_param SCRIPT_FILENAME $request_filename;
          include fastcgi_params;
      }

    /var/log/nginx/secure.example.com.error.log

      2013/12/05 22:49:04 [error] 29291#0: *2 directory index of "/srv/http/pma.example.com" is forbidden, client: 176.199.78.88, server: secure.example.com, request: "GET /pma/ HTTP/1.1", host: "secure.example.com"

    EDIT: forgot to mention, I'm running CentOS 6.4 x86_64 and nginx 1.0.15. Thanks in advance!
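
    A detail worth checking in the config above, independent of permissions: when a location ends in a slash, its alias target normally has to end in a slash too, otherwise /pma/ maps to the bare path "/srv/http/pma.example.com" and index resolution misfires - which matches the "directory index ... is forbidden" log line. A sketch of the adjusted block:

      location /pma/ {
          alias /srv/http/pma.example.com/;
          index index.php index.html;
      }

    The same applies to the /mda/ block. Note also that php-ssl.conf already uses $request_filename for SCRIPT_FILENAME, which is the right choice under alias (a $document_root$fastcgi_script_name variant would break here).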

  • Troubleshooting sudoers via ldap

    - by dafydd
    The good news is that I got sudoers via LDAP working on Red Hat Directory Server. The package is sudo-1.7.2p1. I have some LDAP/Kerberos users in an LDAP group called wheel, and I have this entry in LDAP:

      # %wheel, SUDOers, example.com
      dn: cn=%wheel,ou=SUDOers,dc=example,dc=com
      cn: %wheel
      description: Members of group wheel have access to all privileges.
      objectClass: sudoRole
      objectClass: top
      sudoCommand: ALL
      sudoHost: ALL
      sudoUser: %wheel

    So, members of group wheel have administrative privileges via sudo. This has been tested and works fine. Now, I have this other sudo privilege set up to allow members of a group called Administrators to perform two commands as the non-root owner of those commands:

      # %Administrators, SUDOers, example.com
      dn: cn=%Administrators,ou=SUDOers,dc=example,dc=com
      sudoRunAsGroup: appGroup
      sudoRunAsUser: appOwner
      cn: %Administrators
      description: Allow members of the group Administrators to run various commands.
      objectClass: sudoRole
      objectClass: top
      sudoCommand: appStop
      sudoCommand: appStart
      sudoCommand: /path/to/appStop
      sudoCommand: /path/to/appStart
      sudoUser: %Administrators

    Unfortunately, members of Administrators are still refused permission to run appStart or appStop:

      -bash-3.2$ sudo /path/to/appStop
      [sudo] password for Aaron:
      Sorry, user Aaron is not allowed to execute '/path/to/appStop' as root on host.example.com.
      -bash-3.2$ sudo -u appOwner /path/to/appStop
      [sudo] password for Aaron:
      Sorry, user Aaron is not allowed to execute '/path/to/appStop' as appOwner on host.example.com.

    /var/log/secure shows me these two sets of messages for the two attempts:

      Oct 31 15:02:36 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron
      Oct 31 15:02:37 host sudo: pam_krb5[1508]: TGT verified using key for 'host/[email protected]'
      Oct 31 15:02:37 host sudo: pam_krb5[1508]: authentication succeeds for 'Aaron' ([email protected])
      Oct 31 15:02:37 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=root ; COMMAND=/path/to/appStop
      Oct 31 15:02:52 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron
      Oct 31 15:02:52 host sudo: pam_krb5[1547]: TGT verified using key for 'host/[email protected]'
      Oct 31 15:02:52 host sudo: pam_krb5[1547]: authentication succeeds for 'Aaron' ([email protected])
      Oct 31 15:02:52 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=appOwner; COMMAND=/path/to/appStop

    The questions:
    - Does sudo have some sort of verbose or debug mode where I can actually watch it capture the sudoers privilege list and determine whether or not Aaron should have the privilege to run this command? (This question is probably independent of where the sudoers database is kept.)
    - Does sudo work with some background mechanism that might have a log level I could turn up? Right now, I can't fix a problem I can't identify.
    - Is this an LDAP search failure? Is this a group member matching failure? Identifying why the command fails will help me identify the fix...
    Next step: recreate the privilege in /etc/sudoers, and see if it works locally... Cheers!
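
    To the first question: sudo's LDAP backend does have a debug knob. In the ldap.conf that sudo reads (on Red Hat builds of sudo 1.7 this is typically /etc/ldap.conf, per the sudoers.ldap documentation), a sudoers_debug level of 1 or 2 makes sudo print every LDAP query, the entries returned, and each match decision to stderr:

      # /etc/ldap.conf (the file sudo's LDAP support is configured from)
      sudoers_debug 2

      # then reproduce the failure and read the trace
      sudo -u appOwner /path/to/appStop

    The trace shows whether the %Administrators entry is found at all (a search failure) and, if found, which of sudoUser / sudoHost / sudoCommand / RunAs fails to match - exactly the distinction the questions above are after.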

  • XFS disk becomes unavailable after a while

    - by Guard
    Ubuntu 12.04 (but the same happened on 11.10 before upgrading). WD MyBook, 2TB, no RAID (or RAID0, not completely sure; anyway no mirroring, both 1TB disks are in use, mounted as a single device). Formatted as XFS, normally used for big movie files. Connected via FireWire 800. At some point the LED started going up and down as if constantly reading/writing, and the device gives an access error. When unplugged (cable, then holding the power button for a while, then unplugging the power) and re-connected, it becomes available again. xfs_check found nothing. xfs_repair did something, but it looks like it didn't fix any errors. Then after a massive read (checking a 1.5GB torrent file for integrity) it becomes unavailable again. Any ideas what's wrong? Drives? Cables? Motherboard? OS?
    Update: not sure how relevant this is, but here is the dmesg output:

      [14380.632816] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
      [14380.633356] SGI XFS Quota Management subsystem
      [14421.812220] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
      [14441.890596] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
      [14441.896858] firewire_core: phy config: card 0, new root=ffc1, gap_count=5
      [14453.895347] firewire_core: created device fw1: GUID 0090a99500a35518, S400, 9 config ROM retries
      [14453.904818] scsi6 : SBP-2 IEEE-1394
      [14453.905014] scsi7 : SBP-2 IEEE-1394
      [14454.139993] firewire_sbp2: fw1.0: logged in to LUN 0000 (0 retries)
      [14454.158769] scsi 6:0:0:0: Direct-Access WD My Book 1015 PQ: 0 ANSI: 4
      [14454.159251] sd 6:0:0:0: Attached scsi generic sg3 type 0
      [14454.162391] firewire_sbp2: fw1.1: logged in to LUN 0001 (0 retries)
      [14454.167453] sd 6:0:0:0: [sdc] 3907017568 512-byte logical blocks: (2.00 TB/1.81 TiB)
      [14454.178822] sd 6:0:0:0: [sdc] Write Protect is off
      [14454.178826] sd 6:0:0:0: [sdc] Mode Sense: 10 00 00 00
      [14454.186830] scsi 7:0:0:1: Enclosure WD My Book Device 1015 PQ: 0 ANSI: 4
      [14454.186995] scsi 7:0:0:1: Attached scsi generic sg4 type 13
      [14454.190078] sd 6:0:0:0: [sdc] Cache data unavailable
      [14454.190087] sd 6:0:0:0: [sdc] Assuming drive cache: write through
      [14454.202176] sd 6:0:0:0: [sdc] Cache data unavailable
      [14454.202185] sd 6:0:0:0: [sdc] Assuming drive cache: write through
      [14454.239940] sdc: [mac] sdc1 sdc2 sdc3 sdc4
      [14454.271262] sd 6:0:0:0: [sdc] Cache data unavailable
      [14454.271270] sd 6:0:0:0: [sdc] Assuming drive cache: write through
      [14454.271354] sd 6:0:0:0: [sdc] Attached SCSI disk
      [14454.272149] ses 7:0:0:1: Attached Enclosure device
      [14606.090024] XFS (sdc3): Mounting Filesystem
      [14612.048343] XFS (sdc3): Starting recovery (logdev: internal)
      [14620.697636] XFS (sdc3): Ending recovery (logdev: internal)
      [14748.120957] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
      [14748.120963] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO
      [14752.568382] uhci_hcd 0000:00:1a.0: PCI INT A disabled
      [14752.568579] uhci_hcd 0000:00:1a.1: PCI INT B disabled
      [14752.568738] ehci_hcd 0000:00:1a.7: PCI INT C disabled
      [14752.568779] ehci_hcd 0000:00:1a.7: PME# enabled
      [14752.584526] uhci_hcd 0000:00:1d.1: PCI INT B disabled
      [14752.584689] uhci_hcd 0000:00:1d.2: PCI INT C disabled
      [14752.680079] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xe4641000-0xe46413ff] (PCI address [0xe4641000-0xe46413ff])
      [14752.680104] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b)
      [14752.680136] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002)
      [14752.680170] ehci_hcd 0000:00:1a.7: PME# disabled
      [14752.680182] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18
      [14752.680190] ehci_hcd 0000:00:1a.7: setting latency timer to 64
      [14752.710334] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
      [14752.710342] uhci_hcd 0000:00:1a.0: setting latency timer to 64
      [14752.749186] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
      [14752.749194] uhci_hcd 0000:00:1a.1: setting latency timer to 64
      [14752.790231] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 22 (level, low) -> IRQ 22
      [14752.790239] uhci_hcd 0000:00:1d.1: setting latency timer to 64
      [14752.829170] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
      [14752.829178] uhci_hcd 0000:00:1d.2: setting latency timer to 64
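
    Triage steps that can help separate the drive, the FireWire bridge, and the filesystem (smartctl is in the smartmontools package; SMART may need -d sat behind some bridges, and not every enclosure passes it through at all):

      smartctl -a /dev/sdc        # drive health; try "smartctl -d sat -a /dev/sdc" if it errors
      umount /dev/sdc3
      xfs_repair -n /dev/sdc3     # -n: check only, change nothing (run unmounted)
      dmesg | tail -n 50          # right after reproducing the hang: FireWire resets? SBP-2 errors?

    If the hang also reproduces over USB, the drives are implicated rather than the FireWire bridge or cable; reallocated or pending sectors in the SMART output would point the same way.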

  • vlans on openvz, centos 6

    - by arheops
    I have CentOS 6 with OpenVZ installed on it, and a switch with VLAN support. I need the following setup:
    1. eth0 on the OpenVZ host has to carry multiple tagged vlans.
    2. each virtual host has to be in a single vlan.
    Yes, I already read the wiki on OpenVZ, but it just does not work. On the main server I have interface eth0.108 and I am able to ping the address on that interface (using a notebook on an untagged vlan 108 port), but I am not able to ping the address inside the container.
    Main node:

      [root@box1 conf]# ifconfig
      eth0      Link encap:Ethernet HWaddr D0:67:E5:F4:11:60 inet6 addr: fe80::d267:e5ff:fef4:1160/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:506 errors:0 dropped:0 overruns:0 frame:0 TX packets:25 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:68939 (67.3 KiB) TX bytes:1780 (1.7 KiB) Interrupt:16 Memory:c0000000-c0012800
      eth0.108  Link encap:Ethernet HWaddr D0:67:E5:F4:11:60 inet addr:10.11.108.3 Bcast:10.11.111.255 Mask:255.255.252.0 inet6 addr: fe80::d267:e5ff:fef4:1160/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:238 errors:0 dropped:0 overruns:0 frame:0 TX packets:19 errors:0 dropped:12 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:25890 (25.2 KiB) TX bytes:926 (926.0 b)
      eth1      Link encap:Ethernet HWaddr D0:67:E5:F4:11:61 inet addr:192.168.23.233 Bcast:192.168.23.255 Mask:255.255.255.0 inet6 addr: fe80::d267:e5ff:fef4:1161/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1967 errors:0 dropped:0 overruns:0 frame:0 TX packets:356 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:365298 (356.7 KiB) TX bytes:115007 (112.3 KiB) Interrupt:17 Memory:c2000000-c2012800
      lo        Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:7 errors:0 dropped:0 overruns:0 frame:0 TX packets:7 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:784 (784.0 b) TX bytes:784 (784.0 b)
      venet0    Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet6 addr: fe80::1/128 Scope:Link UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:3 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
      veth108.0 Link encap:Ethernet HWaddr 00:18:51:DA:94:D5 inet6 addr: fe80::218:51ff:feda:94d5/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:639 errors:0 dropped:0 overruns:0 frame:0 TX packets:5 errors:0 dropped:1 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:17996 (17.5 KiB) TX bytes:308 (308.0 b)

    Virtual node:

      [root@pbx108 /]# ifconfig
      eth0.108  Link encap:Ethernet HWaddr 00:18:51:CA:B5:C5 inet addr:10.11.108.1 Bcast:10.11.111.255 Mask:255.255.252.0 inet6 addr: fe80::218:51ff:feca:b5c5/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5 errors:0 dropped:0 overruns:0 frame:0 TX packets:685 errors:0 dropped:2 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:308 (308.0 b) TX bytes:19284 (18.8 KiB)
      lo        Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:683 errors:0 dropped:0 overruns:0 frame:0 TX packets:683 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:76288 (74.5 KiB) TX bytes:76288 (74.5 KiB)

    /etc/vz/conf/108.conf:

      # RAM
      PHYSPAGES="0:4000M"
      # Swap
      SWAPPAGES="0:512M"
      # Disk quota parameters (in form of softlimit:hardlimit)
      DISKSPACE="200G:200G"
      DISKINODES="20000000:22000000"
      QUOTATIME="0"
      # CPU fair scheduler parameter
      CPUUNITS="4000"
      VE_ROOT="/vz/root/$VEID"
      VE_PRIVATE="/vz/private/$VEID"
      OSTEMPLATE="centos-6-x86_64"
      ORIGIN_SAMPLE="vswap-256m"
      NETIF="ifname=eth0.108,mac=00:18:51:CA:B5:C5,host_ifname=veth108.0,host_mac=00:18:51:DA:94:D5"
      NAMESERVER="8.8.8.8"
      HOSTNAME="pbx108.localhost"
      IP_ADDRESS=""
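
    What looks missing on the hardware node is a bridge tying the tagged interface to the container's veth: eth0.108 holds an address, but nothing forwards its frames to veth108.0. A sketch of the usual veth-plus-VLAN wiring (the bridge name is arbitrary; note the 10.11.108.3 address moves onto the bridge):

      brctl addbr br108
      ifconfig eth0.108 0.0.0.0 up                         # strip the address off the raw vlan interface
      brctl addif br108 eth0.108
      brctl addif br108 veth108.0
      ifconfig br108 10.11.108.3 netmask 255.255.252.0 up

    For this to survive container restarts, the bridge setup usually lives in a vznet/ifup hook script rather than being typed by hand; the host side of the veth does the vlan work, so the container keeps its plain addressed interface.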

  • Reinstall Postfix

    - by Kevin
    I tried to reinstall Postfix, but I get this bunch of errors:

      root@***:/etc/init.d# sudo apt-get install -f postfix
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Suggested packages:
        procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre resolvconf postfix-cdb mail-reader
      The following NEW packages will be installed:
        postfix
      0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
      Need to get 0B/1,389kB of archives.
      After this operation, 3,531kB of additional disk space will be used.
      Preconfiguring packages ...
      Selecting previously deselected package postfix.
      (Reading database ... 56122 files and directories currently installed.)
      Unpacking postfix (from .../postfix_2.7.1-1ubuntu0.1_amd64.deb) ...
      Processing triggers for ureadahead ...
      Processing triggers for ufw ...
      Processing triggers for man-db ...
      Setting up postfix (2.7.1-1ubuntu0.1) ...
      Configuration file `/etc/init.d/postfix'
       ==> File on system created by you or by a script.
       ==> File also in package provided by package maintainer.
         What would you like to do about it ? Your options are:
          Y or I : install the package maintainer's version
          N or O : keep your currently-installed version
            D    : show the differences between the versions
            Z    : start a shell to examine the situation
       The default action is to keep your current version.
      *** postfix (Y/I/N/O/D/Z) [default=N] ? Y
      Installing new version of config file /etc/init.d/postfix ...
      Adding group `postfix' (GID 109) ...
      Done.
      Adding system user `postfix' (UID 106) ...
      Adding new user `postfix' (UID 106) with group `postfix' ...
      Not creating home directory `/var/spool/postfix'.
      Creating /etc/postfix/dynamicmaps.cf
      Adding tcp map entry to /etc/postfix/dynamicmaps.cf
      Adding group `postdrop' (GID 115) ...
      Done.
      setting myhostname: ***.net
      setting alias maps
      setting alias database
      setting myorigin
      setting destinations: ***.net, localhost.***.net, , localhost
      setting relayhost:
      setting mynetworks: 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      setting mailbox_size_limit: 0
      setting recipient_delimiter: +
      setting inet_interfaces: all
      Postfix is now set up with a default configuration. If you need to make changes, edit /etc/postfix/main.cf (and others) as needed. To view Postfix configuration values, see postconf(1).
      After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.
      Running newaliases
      postalias: fatal: /etc/mailname: cannot open file: Permission denied
      dpkg: error processing postfix (--configure):
       subprocess installed post-installation script returned error exit status 1
      Processing triggers for libc-bin ...
      ldconfig deferred processing now taking place
      Errors were encountered while processing:
       postfix
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    I tried aptitude purge, remove, autoclean and all of the dpkg options (configure, remove, purge), but nothing did the trick. /etc/mailname exists (0644 root:root) with *.net as its content (fetched from the hostname). What am I doing wrong?
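
    Given that the mode bits look correct, this triage checks the things that can still make a root-run postalias fail to read a 0644 file - a dangling symlink, an ACL, or an immutable attribute - and then re-runs the failed maintainer script (standard coreutils/e2fsprogs/acl tools):

      ls -lL /etc/mailname       # -L follows symlinks: does the real target exist and look sane?
      lsattr /etc/mailname       # an immutable (i) attribute would block access oddly
      getfacl /etc/mailname      # an ACL can deny read despite the 0644 mode
      # recreate the file if in doubt (domain masked here as in the question)
      echo "***.net" > /etc/mailname && chmod 644 /etc/mailname
      dpkg --configure -a        # re-run postfix's postinst once the file reads cleanly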

  • Can't communicate between lan ports on openwrt router

    - by ScaryAardvark
    I've got a WBMR-HP-G300H Buffalo AirStation router on which I've installed the latest OpenWrt software. All is working well (ADSL, WiFi etc.) except for one niggle: I can't communicate between LAN ports. I.e., if I have one computer connected on LAN port 1 and I try to ping another computer on LAN port 2, I get "destination unreachable". I can ping both computers from the router itself, and can also ping each computer from a separate laptop connected wirelessly. All computers are in the same subnet range (10.0.0.?/24). I suspect that I may need to configure a vlan on the switch, but every time I try to do this with various googled configurations I keep locking out all LAN ports and have to revert using a wirelessly connected laptop. Here's my /etc/config/network:

      config interface 'loopback'
          option ifname 'lo'
          option proto 'static'
          option ipaddr '127.0.0.1'
          option netmask '255.0.0.0'

      config interface 'lan'
          option type 'bridge'
          option proto 'static'
          option netmask '255.255.255.0'
          option ipaddr '10.0.0.1'
          option _orig_ifname 'eth0 wlan0'
          option _orig_bridge 'true'
          option ifname 'eth0'

      config adsl-device 'adsl'
          option fwannex 'a'
          option annex 'a2p'

      config interface 'wan'
          option _orig_ifname 'nas0'
          option _orig_bridge 'false'
          option proto 'pppoa'
          option encaps 'vc'
          option atmdev '0'
          option vci '38'
          option vpi '0'
          option username '?????????????'
          option password '??????????????'

    Any help would be warmly received. Here's some more config stuff:

      root@OpenWrt:~# ifconfig -a
      br-lan     Link encap:Ethernet HWaddr 00:24:A5:BD:66:08 inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:226576 errors:0 dropped:346 overruns:0 frame:0 TX packets:269292 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:26771676 (25.5 MiB) TX bytes:183986450 (175.4 MiB)
      eth0       Link encap:Ethernet HWaddr 00:24:A5:BD:66:08 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
      ifb0       Link encap:Ethernet HWaddr 36:60:EC:DF:13:A1 BROADCAST NOARP MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
      ifb1       Link encap:Ethernet HWaddr 4A:7B:75:67:54:E0 BROADCAST NOARP MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
      lo         Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:780 errors:0 dropped:0 overruns:0 frame:0 TX packets:780 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:58369 (57.0 KiB) TX bytes:58369 (57.0 KiB)
      mon.wlan0  Link encap:UNSPEC HWaddr 00-24-A5-BD-66-08-00-48-00-00-00-00-00-00-00-00 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2424 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:320188 (312.6 KiB) TX bytes:0 (0.0 B)
      pppoa-wan  Link encap:Point-to-Point Protocol inet addr:81.136.179.204 P-t-P:81.134.80.1 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:258894 errors:0 dropped:0 overruns:0 frame:0 TX packets:212976 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX bytes:177341656 (169.1 MiB) TX bytes:25192459 (24.0 MiB)
      wlan0      Link encap:Ethernet HWaddr 00:24:A5:BD:66:08 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:204063 errors:0 dropped:0 overruns:0 frame:0 TX packets:245516 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:26613140 (25.3 MiB) TX bytes:162799765 (155.2 MiB)

      root@OpenWrt:~# brctl show
      bridge name  bridge id          STP enabled  interfaces
      br-lan       8000.0024a5bd6608  no           wlan0
                                                   eth0

      root@OpenWrt:~# swconfig dev eth0 show
      Global attributes:
          enable_vlan: 0
      Port 0:
          pvid: 0
          link: port:0 link:up speed:1000baseT full-duplex txflow rxflow
      Port 1:
          pvid: 0
          link: port:1 link:down
      Port 2:
          pvid: 0
          link: port:2 link:down
      Port 3:
          pvid: 0
          link: port:3 link:down
      Port 4:
          pvid: 0
          link: port:4 link:up speed:100baseT full-duplex txflow rxflow auto
      Port 5:
          pvid: 0
          link: port:5 link:up speed:100baseT full-duplex txflow rxflow auto

    Regards, Mark.
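
    The swconfig dump shows enable_vlan: 0 and every port with pvid 0, which on some switch drivers leaves the ports isolated from each other. A hedged sketch of grouping the LAN ports into one VLAN in /etc/config/network - the port numbering is device-specific, so the ports line is an assumption to adapt ("5t" marks the CPU port as tagged):

      config switch
          option name 'eth0'
          option reset '1'
          option enable_vlan '1'

      config switch_vlan
          option device 'eth0'
          option vlan '1'
          option ports '0 1 2 3 4 5t'

    Since a wrong switch config can lock out all wired access (as already experienced), testing over the wireless side with a known-good backup of /etc/config/network on hand is prudent.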

  • dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack)

    - by udo
    I had an issue (Question 199582) which was resolved. Unfortunately I am stuck at this point now. Running

      root@X100e:/var/cache/apt/archives# apt-get dist-upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Calculating upgrade... Done
      The following NEW packages will be installed:
        file libexpat1 libmagic1 libreadline6 libsqlite3-0 mime-support python python-minimal python2.6 python2.6-minimal readline-common
      0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded.
      Need to get 0B/5,204kB of archives.
      After this operation, 19.7MB of additional disk space will be used.
      Do you want to continue [Y/n]? Y
      (Reading database ... 6108 files and directories currently installed.)
      Unpacking python2.6-minimal (from .../python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
      new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
      please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead
      aborting installation of python2.6-minimal
      dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack):
       subprocess new pre-installation script returned error exit status 1
      Errors were encountered while processing:
       /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    results in the above error. Running

      root@X100e:/var/cache/apt/archives# dpkg -i python2.6-minimal_2.6.6-5ubuntu1_i386.deb
      (Reading database ... 6108 files and directories currently installed.)
      Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
      new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
      please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead
      aborting installation of python2.6-minimal
      dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install):
       subprocess new pre-installation script returned error exit status 1
      Errors were encountered while processing:
       python2.6-minimal_2.6.6-5ubuntu1_i386.deb

    results in the above error, and running

      root@X100e:/var/cache/apt/archives# dpkg -i --force-depends python2.6-minimal_2.6.6-5ubuntu1_i386.deb
      (Reading database ... 6108 files and directories currently installed.)
      Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
      new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
      please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead
      aborting installation of python2.6-minimal
      dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install):
       subprocess new pre-installation script returned error exit status 1
      Errors were encountered while processing:
       python2.6-minimal_2.6.6-5ubuntu1_i386.deb

    is not able to fix it. Any clues how to fix this?
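
    The preinst message itself names the condition it trips on: /usr/lib/python2.6/site-packages exists as a real directory where the package expects a symlink. A hedged sketch of resolving that by hand (first check what, if anything, is actually in the directory):

      ls -la /usr/lib/python2.6/site-packages       # confirm the contents before touching it
      mv /usr/lib/python2.6/site-packages /usr/lib/python2.6/site-packages.bak
      ln -s /usr/local/lib/python2.6/dist-packages /usr/lib/python2.6/site-packages
      apt-get dist-upgrade                          # retry the failed install

    Anything left in site-packages.bak can afterwards be moved into /usr/local/lib/python2.6/dist-packages, which is where Ubuntu's layout wants locally installed modules.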

  • Capistrano + Nginx + Passenger = 403

    - by slimchrisp
    I asked this over at stackoverflow as well, but still haven't received any answers that have helped me solve this problem. I have spent almost a week at this point trying to solve the issue, and I'm just not making any headway. It seems that this issue is pretty common, but none of the solutions I found online work for me. A buddy of mine is actually creating the same setup, and he is having the same issue. After a few days stuck with the 403 error I started over using this tutorial: http://blog.ninjahideout.com/posts/a-guide-to-a-nginx-passenger-and-rvm-server I had hoped starting from scratch using this tutorial would work, but no dice. Either way, if you view the tutorial you can see what steps I have taken. Here is essentially what I have going on. I have a VPS account on linode.com. Server OS is Ubuntu 10.04. Local OS (shouldn't matter, but just so you know) used to deploy with Capistrano is Snow Leopard 10.6.6. I use RVM on the server; version is 1.2.2. I was previously on ruby-1.9.2-p0 [ i386 ], but per the tutorial listed above I switched to ree-1.8.7-2010.02 [ i386 ]. Running 'which ruby' from the command line verifies that I am using 1.8.7 with the following output: /usr/local/rvm/rubies/ree-1.8.7-2010.02/bin/ruby passenger -v prints the following: Phusion Passenger version 3.0.2 Running 'nginx -v' gives me a message that the command nginx could not be found. The server is definitely there and running, as I can use nginx to serve static files, but this could have something to do with my problem. I have two users dealing with the install: root, which I used to install everything, and deployer, a user I created specifically for deploying my applications. My web app directory is in the deployer user's home directory as follows: /home/deployer/webapps/mysite.com/public Per Capistrano's default deploy, a symbolic link called current is created in the public folder, and points to /home/deployer/webapps/mysite.com/public/releases/most_current_release I have chmodded the deployer directory recursively to 777 /opt/nginx permissions: rwxr-xr-x /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2 permissions: rwxrwsrwx My nginx config file has gone through countless variations, but currently looks like this: ================================================================================== worker_processes 1; events { worker_connections 1024; } http { passenger_root /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2; passenger_ruby /usr/local/rvm/bin/passenger_ruby; include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { # listen *:80; server_name mysite.com www.mysite.com; root /home/deployer/webapps/mysite.com/public/current; passenger_enabled on; passenger_friendly_error_pages on; access_log logs/mysite.com/server.log; error_log logs/mysite.com/error.log info; error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } ================================================================================== I bounce nginx, hit the site, and boom: 403, and the logs say directory index of /home/deployer... is forbidden As others with a similar problem have said, you can drop an index.html into the public/releases/current_release and it will render. But Rails doesn't work. That's basically it. At this point I have just about completely exhausted every possible solution attempt I can think of.
I am a programmer and definitely not a sysadmin, so I am 99% sure this has something to do with permissions that I have hosed, but for the life of me I just can't figure out where. If anyone can help I would really appreciate it. If there are any specific permission things you want me to check (i.e. groups/permissions), please include the commands to do so as well. Hopefully this will help others in the future who read this post. Let me know if there is any other information I can provide, and thanks in advance!
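Two things worth checking here, offered as a hedged starting point rather than a definitive diagnosis. First, nginx's worker (often nobody for a source-built nginx; check the user directive) needs execute permission on every directory from / down to the Rails app; namei walks the whole chain for you, and the path below is the one from the question:

    $ namei -l /home/deployer/webapps/mysite.com/public/current
    $ sudo -u nobody ls /home/deployer/webapps/mysite.com/public/current

Second, in a conventional Capistrano layout the current symlink lives at the application root (/home/deployer/webapps/mysite.com/current) and nginx's root would point at .../current/public. If root resolves to a directory that is not a Rails public directory, Passenger never detects the app, nginx falls back to serving a directory index, and a 403 "directory index ... is forbidden" is exactly the symptom you get.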

    Read the article

  • x11vnc working in Ubuntu 10.10

    - by pablorc
    I'm trying to start x11vnc on Ubuntu 10.10 (my server is in Amazon EC2), but I get the following error $ sudo x11vnc -forever -usepw -httpdir /usr/share/vnc-java/ -httpport 5900 -auth /usr/sbin/gdm 25/11/2010 13:29:51 passing arg to libvncserver: -httpport 25/11/2010 13:29:51 passing arg to libvncserver: 5900 25/11/2010 13:29:51 -usepw: found /home/ubuntu/.vnc/passwd 25/11/2010 13:29:51 x11vnc version: 0.9.10 lastmod: 2010-04-28 pid: 3504 25/11/2010 13:29:51 XOpenDisplay(":0.0") failed. 25/11/2010 13:29:51 Trying again with XAUTHLOCALHOSTNAME=localhost ... 25/11/2010 13:29:51 *************************************** 25/11/2010 13:29:51 *** XOpenDisplay failed (:0.0) *** x11vnc was unable to open the X DISPLAY: ":0.0", it cannot continue. *** There may be "Xlib:" error messages above with details about the failure. Some tips and guidelines: ** An X server (the one you wish to view) must be running before x11vnc is started: x11vnc does not start the X server. (however, see the -create option if that is what you really want). ** You must use -display <disp>, -OR- set and export your $DISPLAY environment variable to refer to the display of the desired X server. - Usually the display is simply ":0" (in fact x11vnc uses this if you forget to specify it), but in some multi-user situations it could be ":1", ":2", or even ":137". Ask your administrator or a guru if you are having difficulty determining what your X DISPLAY is. ** Next, you need to have sufficient permissions (Xauthority) to connect to the X DISPLAY. Here are some Tips: - Often, you just need to run x11vnc as the user logged into the X session. So make sure to be that user when you type x11vnc. - Being root is usually not enough because the incorrect MIT-MAGIC-COOKIE file may be accessed. The cookie file contains the secret key that allows x11vnc to connect to the desired X DISPLAY. - You can explicitly indicate which MIT-MAGIC-COOKIE file should be used by the -auth option, e.g.: x11vnc -auth /home/someuser/.Xauthority -display :0 x11vnc -auth /tmp/.gdmzndVlR -display :0 you must have read permission for the auth file. See also '-auth guess' and '-findauth' discussed below. ** If NO ONE is logged into an X session yet, but there is a greeter login program like "gdm", "kdm", "xdm", or "dtlogin" running, you will need to find and use the raw display manager MIT-MAGIC-COOKIE file. Some examples for various display managers: gdm: -auth /var/gdm/:0.Xauth -auth /var/lib/gdm/:0.Xauth kdm: -auth /var/lib/kdm/A:0-crWk72 -auth /var/run/xauth/A:0-crWk72 xdm: -auth /var/lib/xdm/authdir/authfiles/A:0-XQvaJk dtlogin: -auth /var/dt/A:0-UgaaXa Sometimes the command "ps wwwwaux | grep auth" can reveal the file location. Starting with x11vnc 0.9.9 you can have it try to guess by using: -auth guess (see also the x11vnc -findauth option.) Only root will have read permission for the file, and so x11vnc must be run as root (or copy it). The random characters in the filenames will of course change and the directory the cookie file resides in is system dependent. See also: http://www.karlrunge.com/x11vnc/faq.html I've already tried with some -auth options but the error persists. I have gdm running. Thank you in advance
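    As the help text above suggests, when no one is logged in but gdm is running, the usual next step is to point -auth at gdm's raw cookie file, or let x11vnc guess it. A sketch, assuming display :0 and x11vnc 0.9.9+ (the gdm path varies by version):

    # ps wwwwaux | grep auth        # look for the -auth argument on the X server's command line
    # x11vnc -display :0 -auth /var/run/gdm/auth-for-gdm-*/database
    # x11vnc -display :0 -auth guess

    Note that on a headless EC2 instance there may be no X server running at all, in which case the -create option (which starts a virtual X session for you) may be closer to what is needed than any -auth value.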

    Read the article

  • Address bar showing long URL

    - by Abel
    I recently upgraded my hosting account to Deluxe, where I can host multiple websites. I added a domain name and created a folder in the root directory, giving it the same name as my domain name, and uploaded my files. Now when I navigate the site, the address bar shows: 'http://mywebsite/mywebsite/default.aspx' I want it to display: 'http://mywebsite/default.aspx' My thinking in creating folders that match the domain names was to keep them somewhat organized; I never intended to have my domain names listed twice in the address bar.
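    The cleanest fix is usually in the hosting control panel: point the domain's document root at the subfolder instead of keeping it at the account root. Where that option isn't available, a hedged sketch of an IIS URL Rewrite rule (this assumes IIS 7+ with the URL Rewrite module installed, and 'mywebsite' standing in for the real folder name) placed in a web.config at the account root can hide the folder:

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="RouteToSiteFolder" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <!-- don't rewrite requests already pointing into the folder -->
                <add input="{URL}" pattern="^/mywebsite/" negate="true" />
              </conditions>
              <action type="Rewrite" url="mywebsite/{R:0}" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>

    Because this is a rewrite rather than a redirect, the address bar keeps the short URL while IIS serves the file from the subfolder.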

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal. Installing Durandal First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki — located at the GitHub project — contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console: Install-Package Durandal.StarterKit As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies including: · jQuery · Knockout · Sammy · Twitter Bootstrap The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this: bundles.Add( new ScriptBundle("~/scripts/vendor") .Include("~/Scripts/jquery-{version}.js") .Include("~/Scripts/knockout-{version}.js") .Include("~/Scripts/sammy-{version}.js") // .Include("~/Scripts/jquery-1.9.0.min.js") // .Include("~/Scripts/knockout-2.2.1.js") // .Include("~/Scripts/sammy-0.7.4.min.js") .Include("~/Scripts/bootstrap.min.js") ); The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files: · durandal – This folder contains the actual durandal JavaScript library. · viewmodels – This folder contains all of your application’s view models. · views – This folder contains all of your application’s views. · main.js — This file contains all of the JavaScript startup code for your app including the client-side routing configuration. · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS). 
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library. Creating the ASP.NET MVC Controller and View A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller: public class HomeController : Controller { public ActionResult Index() { return View(); } } There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”: Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery and throw a JavaScript exception and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view: <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick-off your Durandal app. Creating the Durandal Main.js File The Durandal Main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the Main.js file looks like in the case of the Movies app: require.config({ paths: { 'text': 'durandal/amd/text' } }); define(function (require) { var app = require('durandal/app'), viewLocator = require('durandal/viewLocator'), system = require('durandal/system'), router = require('durandal/plugins/router'); //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); app.start().then(function () { //Replace 'viewmodels' in the moduleId with 'views' to locate the view. //Look for partial views in a 'views' folder in the root. 
viewLocator.useConvention(); //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); app.adaptToDevice(); //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); }); }); There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging which looks like this: //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); This code enables debugging for your Durandal app which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes: (The funny looking //>> symbols around the system.debug() call are RequireJS optimizer pragmas). The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three pages: the movies show, add, and details pages. //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id");   The route for movie details includes a route parameter named id. Later, we will use the id parameter to look up and display the details for the right movie. Finally, the main.js file above contains the following line of code: //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I'll discuss the shell in the next section. Creating the Durandal Shell You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed from two parts: a JavaScript file and an HTML file. Here's what the HTML file looks like for the Movies app: <h1>Movies App</h1> <div class="container-fluid page-host"> <!--ko compose: { model: router.activeItem, //wiring the router afterCompose: router.afterCompose, //wiring the router transition:'entrance', //use the 'entrance' transition when switching views cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models) }--><!--/ko--> </div> And here is what the JavaScript file looks like: define(function (require) { var router = require('durandal/plugins/router'); return { router: router, activate: function () { return router.activate('movies/show'); } }; }); The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different page to the router.activate() method. Creating the Movies Show Page Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view.
The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this: define(function (require) { var moviesRepository = require("repositories/moviesRepository"); return { movies: ko.observable(), activate: function() { this.movies(moviesRepository.listMovies()); } }; }); You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this: <table> <thead> <tr> <th>Title</th><th>Director</th> </tr> </thead> <tbody data-bind="foreach:movies"> <tr> <td data-bind="text:title"></td> <td data-bind="text:director"></td> <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td> </tr> </tbody> </table> <a href="#/movies/add">Add Movie</a> Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (To learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually. Creating the Movies Add Page The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page: define(function (require) { var app = require('durandal/app'); var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToAdd: { title: ko.observable(), director: ko.observable() }, activate: function () { this.movieToAdd.title(""); this.movieToAdd.director(""); this._movieAdded = false; }, canDeactivate: function () { if (this._movieAdded == false) { return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']); } else { return true; } }, addMovie: function () { // Add movie to db moviesRepository.addMovie(ko.toJS(this.movieToAdd)); // flag new movie this._movieAdded = true; // return to list of movies router.navigateTo("#/movies/show"); } }; }); The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods: 1. 
activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties. 2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled. 3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository. I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form: The view for the add movie page looks like this: <form data-bind="submit:addMovie"> <fieldset> <legend>Add Movie</legend> <div> <label> Title: <input data-bind="value:movieToAdd.title" required /> </label> </div> <div> <label> Director: <input data-bind="value:movieToAdd.director" required /> </label> </div> <div> <input type="submit" value="Add" /> <a href="#/movies/show">Cancel</a> </div> </fieldset> </form> I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted. Creating the Movies Details Page You navigate to the movies details Page by clicking the Details link which appears next to each movie in the movies show page: The Details links pass the movie ids to the details page: #/movies/details/0 #/movies/details/1 #/movies/details/2 Here’s what the view model for the movies details page looks like: define(function (require) { var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToShow: { title: ko.observable(), director: ko.observable() }, activate: function (context) { // Grab movie from repository var movie = moviesRepository.getMovie(context.id); // Add to view model this.movieToShow.title(movie.title); this.movieToShow.director(movie.director); } }; }); Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings: <div> <h2 data-bind="text:movieToShow.title"></h2> directed by <span data-bind="text:movieToShow.director"></span> </div> Summary The goal of this blog entry was to walkthrough building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done. 
I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that Durandal leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These existing libraries are powerful libraries and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app’s code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App
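One piece the walkthrough references but never lists is the repositories/moviesRepository module that all three view models require. A minimal in-memory sketch (my own stand-in, not code from the original app) that satisfies the listMovies(), getMovie(id), and addMovie(movie) calls used above could look like this:

define(function () {
    // In-memory movie store; ids double as array indexes,
    // matching the #/movies/details/:id routes used by the views.
    var movies = [
        { id: 0, title: "Star Wars", director: "George Lucas" },
        { id: 1, title: "Jaws", director: "Steven Spielberg" }
    ];
    return {
        listMovies: function () { return movies; },
        getMovie: function (id) { return movies[parseInt(id, 10)]; },
        addMovie: function (movie) { movie.id = movies.length; movies.push(movie); }
    };
});

Saved as App/repositories/moviesRepository.js, RequireJS resolves the repositories/moviesRepository dependency to this file with no further configuration.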

    Read the article

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. It doesn't yet support userspace tracing, so in the meantime, Systemtap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post Probing PHP with Systemtap on Linux. My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3. Not all Linux kernels are built with Systemtap, since this can impact stability. Check whether your running kernel (or others installed) has Systemtap enabled, and reboot with such a kernel: # grep CONFIG_UTRACE /boot/config-`uname -r` # grep CONFIG_UTRACE /boot/config-* When you install Systemtap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file: # yum install systemtap-sdt-devel You can now install and build PHP as shown in David's article. Basically the build is with: $ cd ~/php-src $ ./configure --disable-all --enable-dtrace $ make (For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704) David's article shows how to trace the probes already implemented in PHP. You can also use Systemtap to trace things like userspace PHP function calls. For example, create test.php: <?php $c = oci_connect('hr', 'welcome', 'localhost/orcl'); $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual"); $r = oci_execute($s); $row = oci_fetch_array($s, OCI_NUM); $x = $row[0]->load(); $row[0]->free(); echo $x; ?> The normal output of this file is the XML form of Oracle's DUAL table: $ ./sapi/cli/php ~/test.php <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> To trace the PHP function calls, create the tracing file functrace.stp: probe process("sapi/cli/php").function("zif_*") { printf("Started function %s\n", probefunc()); } probe process("sapi/cli/php").function("zif_*").return { printf("Ended function %s\n", probefunc()); } This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Log in as root, and run Systemtap on the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp Started function zif_oci_connect Ended function zif_oci_connect Started function zif_oci_parse Ended function zif_oci_parse Started function zif_oci_execute Ended function zif_oci_execute Started function zif_oci_fetch_array Ended function zif_oci_fetch_array Started function zif_oci_lob_load <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Ended function zif_oci_lob_load Started function zif_oci_free_descriptor Ended function zif_oci_free_descriptor Each call and return is logged. The Systemtap scripting language allows complex scripts to be built. There are many examples on the web. To augment this generic capability and the probes built into PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8: I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one will accept a parameter that runtime tracing can later display: provider php { probe oci8__connect(char *username); probe oci8__nls_start(); probe oci8__nls_done(); }; I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro. The patch is at the end of config.m4.
The macro takes the provider prototype file, a name of the header file that 'dtrace' will generate, and a list of sources files with probes. When --enable-dtrace is used during PHP configuration, then the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8 specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code: diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4 index 34ae76c..f3e583d 100644 --- a/ext/oci8/config.m4 +++ b/ext/oci8/config.m4 @@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then PHP_SUBST_OLD(OCI8_ORACLE_VERSION) fi + + if test "$PHP_DTRACE" = "yes"; then + AC_CHECK_HEADERS([sys/sdt.h], [ + PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d], + [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c]) + AC_DEFINE(HAVE_OCI8_DTRACE,1, + [Whether to enable DTrace support for OCI8 ]) + ], [ + AC_MSG_ERROR( + [Cannot find sys/sdt.h which is required for DTrace support]) + ]) + fi + fi In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places: diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c index e2241cf..ffa0168 100644 --- a/ext/oci8/oci8.c +++ b/ext/oci8/oci8.c @@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char } } +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_CONNECT_ENABLED()) { + DTRACE_OCI8_CONNECT(username); + } +#endif + /* Initialize global handles if they weren't initialized before */ if (OCI_G(env) == NULL) { php_oci_init_global_handles(TSRMLS_C); @@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char size_t rsize = 0; sword result; +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_START_ENABLED()) { + DTRACE_OCI8_NLS_START(); + } +#endif PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize)); if (result != OCI_SUCCESS) { charsetid_nls_lang = 0; } smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0); + +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_DONE_ENABLED()) { + DTRACE_OCI8_NLS_DONE(); + } +#endif } timestamp = time(NULL); The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes. I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h: diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h index b0d6516..c81fc5a 100644 --- a/ext/oci8/php_oci8_int.h +++ b/ext/oci8/php_oci8_int.h @@ -44,6 +44,10 @@ # endif # endif /* osf alpha */ +#ifdef HAVE_OCI8_DTRACE +#include "oci8_dtrace_gen.h" +#endif + #if defined(min) #undef min #endif Now PHP can be rebuilt: $ cd ~/php-src $ rm configure && ./buildconf --force $ ./configure --disable-all --enable-dtrace \ --with-oci8=instantclient,/home/cjones/instantclient $ make If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned. 
    The new probes can be seen by logging in as root and running: # stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i' process("sapi/cli/php").provider("php").mark("oci8__connect") process("sapi/cli/php").provider("php").mark("oci8__nls_done") process("sapi/cli/php").provider("php").mark("oci8__nls_start") To test them out, create a new trace file, oci.stp: global numconnects; global start; global numcharlookups = 0; global tottime = 0; probe process.provider("php").mark("oci8-connect") { printf("Connected as %s\n", user_string($arg1)); numconnects += 1; } probe process.provider("php").mark("oci8-nls_start") { start = gettimeofday_us(); numcharlookups++; } probe process.provider("php").mark("oci8-nls_done") { tottime += gettimeofday_us() - start; } probe end { printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups); printf("Total NLS charset initialization time: %ld usecs/connect\n", (numcharlookups > 0 ? tottime/numcharlookups : 0)); } This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Log in as root and run Systemtap over the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp Connected as cj <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Connects: 1, Charset lookups: 1 Total NLS charset initialization time: 164 usecs/connect This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.

    Read the article

  • how to solve Eclipse's "The project was not built due to 'Could not delete'" error

    - by user50680
    When I change a properties file's content, Eclipse always shows an error saying "Description Resource Path Location Type The project was not built due to "Could not delete '/lichong-test-tester/target/test-classes/config'.". Fix the problem, then try refreshing this project and building it since it may be inconsistent lichong-test-tester Unknown Java Problem". I have to clean and rebuild the whole project to fix it. Can anybody tell me how to avoid this? https://skydrive.live.com/redir.aspx?cid=02a1e6543b4cc73e&resid=2A1E6543B4CC73E!458&parid=root That's my screenshot.
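    This error usually means something outside Eclipse is holding the target/test-classes directory open (a virus scanner, an indexer, or a shell or process sitting in that directory), or the workspace is out of sync with the build output. A hedged sequence worth trying before each full clean—the path assumes the Maven layout from the error message:

    $ lsof +D target/test-classes    # on Linux: see which process is holding the directory

    Then, in Eclipse, run Project > Clean once, and enable Window > Preferences > General > Workspace > "Refresh using native hooks or polling" ("Refresh automatically" in older versions) so the workspace picks up files changed by external builds instead of tripping over them.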

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks below, which I attended and which interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grain resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, and mobility. Android had no "proper" encryption before Android 3.0. There is no built-in protection against social engineering and web tricks. Alternative Android app markets are unsafe—simply visiting some markets can infect Android. Aditya classified Android malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or the kernel. Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors. What are the threats from Android malware? These include info leaks (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded"—that is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons, then deleting themselves. For example, the DKF Bootkit (China). Android app problems: no code signing (all self-signed); native code execution; the permission sandbox is all or none; alternate marketplaces; no robust Android malware detection at the network level; a delayed patch process. Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap. Bx then talked about using ELF metadata as (an unintended) "weird machine". Some ELF background: A compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory). For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions (add, move, jump if not 0 (jnz)); symbol table entries act as "registers"; a symbol table value is its "contents"; immediate values are constants; direct values are addresses (e.g., 0xdeadbeef). A move instruction is a relocation table entry; an add instruction is a relocation table "addend" entry; a jnz instruction takes multiple relocation table entries. The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and three registers. Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so ping remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes.
A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" data with other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electronic Privacy Info Center), although more obscure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - a hardware security device to store hashes and hardware-protected keys. "Secure boot" can control at the firmware level what boot images can boot. "Measured boot" is an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers). "Remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source UEFI implementation. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines. UEFI Weaknesses.
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: it assumes the user is an ally; it trusts the TPM implicitly, and the TPM is attached to the computer; the hibernate file is unprotected (disk encryption protects against this); protection is migrating from hardware to firmware; there are delays in patching and whitelist updates; and will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support)? You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—you need soft skills. Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV*EF=SLE). Learn the different Business Units and legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security. Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc. Application security: Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on the same system. Assume the host/client is compromised and use app-level security controls. Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational controls: Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down logs (as they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free. Security awareness: The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
Pen test by crowdsourcing—test with logging. COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: For hackers, the motivation varies, but the method is the same: technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: a person or issue of public interest, new info, or an angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists are becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch). Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. You often need to sue (especially the FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks). Journalists want leads, to protect themselves and their sources, and to adapt tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption and want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it. Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. This is about accessibility of digital content (not real-world accessibility); for our purposes it mostly refers to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word is an executable format—it's not a document exchange format—and more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice, because someone may want to read what you write. Google, for example, is very particular about which web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse, and full checking cannot be solved, due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help though, except for "Robust", but here are some gems: * all information should be parsable (paraphrasing) * if not parsable, it cannot be converted to alternate formats * maximize compatibility in new document formats Executable web pages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: accessibility and security problems come from the same source; expose the "better user experience" myth; keep your corner of the Internet parsable; remember the "Halting Problem"—recognize false solutions (checking and verifying tools). Stop Patching, for Stronger PCI Compliance Adam Brand, protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of the time of Brian (an IT guy)—960 hours/year—was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, and misconfiguration (open firewall ports). Patching myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40: Proactive: Use a proactive vulnerability management process: use change control, configuration management, and monitor file integrity. Monitor: start with NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank) Decide: on action and timeline. Test: pre-test patches (stability, functionality, rollback) for change control. Install: notify, change control, tickets. McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that removal of the trustmark acts as a flag that you're vulnerable. It's easy to spot a status change by viewing the McAfee list on the website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then the site is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure updates (with signed updates). You would think Secure Boot would rely on ROM (such as used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s). The OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx): X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts—which uses Run-Time (RT)). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable/disable image signature checks. SetupMode — update keys, self-signed keys, and secure boot variables. CustomMode — allows updating keys. Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation. Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
Allow only signed UEFI firmware updates; protect UEFI firmware from direct modification in flash memory; protect FW update components; program the SPI controller securely; protect secure boot policy settings in NVRAM; protect the runtime API; disable the compatibility support module, which allows unsigned legacy boot. One can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the FW enters setup mode with secure boot turned off. One can also exploit the TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI FW in ROM; protect the EFI variable store in ROM. Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME makes requests with every possible character and measures the ciphertext length, looking for the plaintext which compresses the most, and finds the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattack.com BREACH exploits the SSL response body (Accept-Encoding request header, Content-Encoding response header). It takes advantage of the fact that the response body is compressed at the HTTP level, which disabling TLS compression does not affect. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so it can find each character by guessing every character (see the sketch below). To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at a time instead of 1—expensive); rollback to the last known conflict; checking the compression ratio; brute-forcing the first 3 "bootstrap" characters if needed (expensive). Block ciphers hide the exact plaintext length; the workaround is to align the response in advance to the block size. Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change secrets over time, and separate secrets into input-less servlets. Future work: either understand DEFLATE/GZIP, or HTTPS extensions.
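The byte-at-a-time guessing is easy to see with a toy compression oracle. A minimal sketch (Python's zlib standing in for the HTTP gzip stream; the strings are the speakers' own example, everything else is illustrative):

import zlib

def oracle_len(guess):
    # One compressed body holding both the secret and the attacker's guess,
    # mimicking a response that reflects attacker input next to a secret.
    return len(zlib.compress(b"Supersecret " + guess))

print(oracle_len(b"Supersecret"))   # correct guess: longest back-reference, best compression
print(oracle_len(b"SupersecreX"))   # wrong final byte: typically ~1 byte worse

On inputs this tiny, Huffman bit-packing can mask the difference—which is exactly the "too many winners" roadblock above that the lookahead and compression-ratio tricks address.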
Running at 99%: Surviving an Application DoS

Ryan Huber, Risk I/O

Ryan first discussed various ways to mount a denial of service (DoS) attack against web services. One usual method is to find a slow web page and fetch it repeatedly with wget, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use an Apache alternative such as nginx.

How to identify malicious hosts (a small scoring sketch appears after this talk's summary):
- short, sudden web requests
- obvious user-agent (curl, python)
- same URL requested repeatedly
- no web page referer (not normal)
- hidden links: hide a link and see if a bot follows it
- restrict access if the geo IP is not yours (unless the website is global)
- missing common headers in the request
- regular timing
- IP first seen at the beginning of the attack
- count of requests per host (usually a very large number)

Use of captchas can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer

Bouncer is software written by Ryan that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it's not nice about it—no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers so Bouncer gets clean timing data. Bouncer is also useful for popularity storms ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, a consumer library, multitasking, and logging of destroyed connections.

Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—it is not a complete cure.
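As a toy illustration of the host-identification heuristics above (my sketch; Bouncer's actual internals are not shown in the talk), a scorer might weight a few request attributes and flag hosts over a threshold. All field names and weights here are illustrative.

from collections import Counter

# Hypothetical per-request records; field names are made up for the example.
requests = [
    {"ip": "203.0.113.5", "user_agent": "python-requests/2.0", "referer": "", "url": "/search"},
    {"ip": "203.0.113.5", "user_agent": "python-requests/2.0", "referer": "", "url": "/search"},
    {"ip": "198.51.100.7", "user_agent": "Mozilla/5.0", "referer": "/home", "url": "/about"},
]

counts = Counter(r["ip"] for r in requests)

def suspicion(r) -> int:
    score = 0
    if any(tool in r["user_agent"].lower() for tool in ("curl", "python")):
        score += 2           # obvious scripted user-agent
    if not r["referer"]:
        score += 1           # missing referer is unusual for browsers
    if counts[r["ip"]] > 1:  # in a real attack: a very large number
        score += 1           # same host requesting repeatedly
    return score

for r in requests:
    print(r["ip"], suspicion(r))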
Security Response in the Age of Mass Customized Attacks

Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about responding to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or consider designing your own PC from commodity parts.

Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass-customized exploits, to avoid detection.

There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware, they infer edicts to:
- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)

Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browser combinations and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves, and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans

Richard Wartell

The attack vector addressed here: first, some malware causes a buffer overflow. The malware has no access to the program itself, only to its input, and overflows a buffer to place code on the stack. Later, stacks became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization), which randomizes the memory layout and makes addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways.

"RoP" is Return-oriented Programming. RoP attacks use your own code: they write return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes segment base addresses, but not the "gadgets" within them. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the basic blocks in randomized locations, the old code is rewritten with jumps to the new code, and the original code section in the file is marked non-executable. (A toy model of this relocation idea follows below.)

STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access, and the modified binary fully works! Current work is rewriting code to enforce security policies—for example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.
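To picture the relocation idea, here is a toy model of mine (nothing like STIR's actual x86 rewriter): a "program" of named basic blocks is shuffled to random addresses, each block's terminal jump is retargeted to its successor's new home, and the old slots become trampolines.

import random

# A toy "binary": basic blocks by name; each ends by naming its successor.
blocks = {
    "entry": (["do_setup"], "check"),
    "check": (["do_check"], "exit"),
    "exit":  (["do_exit"], None),
}

# Assign each block a new random "address"; old slots become trampolines.
new_addr = {name: random.randint(0x1000, 0xFFFF) for name in blocks}
trampolines = {name: f"jmp 0x{addr:04x}" for name, addr in new_addr.items()}

# Rewritten blocks jump to the *new* location of their successor,
# so control flow is preserved even though every block has moved.
for name, (ops, succ) in blocks.items():
    target = f"jmp 0x{new_addr[succ]:04x}" if succ else "ret"
    print(f"0x{new_addr[name]:04x} {name}: {'; '.join(ops)}; {target}")
print("old section (non-executable in STIR):", trampolines)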
Clowntown Express: interesting bugs and running a bug bounty program

Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were against software purchased from a third party (but bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable).

Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and they are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date. 14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top submitters are paid six figures a year. Spammers like to backstab competitors. The youngest submitter was 13, and some submitters have been hired. Bug bounties also surface bugs that were missed by tools or reviews, allowing improvement of the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs

Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand this later.) IPsec leaves fingerprints. Using netcat, one can easily distinguish various crypto chaining modes visually, just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if you are far away from it. More exploration is needed of how TCP behaves in real life with respect to timing.

Making Attacks Go Backwards

FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. So who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for the low-hanging fruit first:
- "hiding" malware in the temp, recycle, or root directories
- creation of unnamed scheduled tasks
- obvious names of files and syscalls ("ClearEventLog")
- uncleared event logs; clearing the event log itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system

Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly.

Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable keys need to be printed out (such as "[Ctrl]" or "[RightShift]"), as do timestamp printf strings. Look for these in files (a small scan sketch follows below). Presence is not proof they are used; absence is not proof they are not used.

Java exploits: one can parse a jar file with idxparser.py and decompile the Java. Java is typically used to target tech companies. Backdoors are the main (externally provided) persistence mechanism for malware; malware also typically needs command and control.
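A crude version of that file check is easy to script. This is a minimal sketch of mine (real triage tools do far more), looking for the printable key-name markers the talk mentions; the marker list is illustrative and should be extended.

import sys
from pathlib import Path

# Marker strings that keylogger code or output often contains.
# "[Ctrl]" and "[RightShift]" come from the talk; the strftime-style
# timestamp format is my illustrative stand-in for "timestamp printf strings".
MARKERS = [b"[Ctrl]", b"[RightShift]", b"%Y-%m-%d %H:%M"]

def scan(path: Path) -> list:
    data = path.read_bytes()
    return [m for m in MARKERS if m in data]

for name in sys.argv[1:]:
    hits = scan(Path(name))
    if hits:
        # Presence is not proof of use; absence is not proof of absence.
        print(name, "suspicious markers:", hits)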
Application of Artificial Intelligence in Ad-Hoc Static Code Analysis

John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but grep fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, making a call graph is really hard; for example, one problem is "evil" coding techniques, such as passing function pointers.

First the tool generated an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm looks at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information, see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details

Eric (XlogicX) Davisson

Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover the same data, so one can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header.

If you get hundreds of possible results (16x for each masked nibble that is unknown), you can do other things to narrow them down, such as looking at the packet contents for domain or geo information. With hundreds of results, you can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. He was able to find the exact IP address, given the geo location and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if you do it at all, so as not to create a false impression of security.
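To see why an unmasked checksum narrows down a masked IP, here is a small sketch of mine using the standard Internet checksum (the RFC 1071 ones'-complement sum): brute-force candidates for a masked last octet and keep only those consistent with the header checksum. The addresses are documentation examples, not from Eric's demo.

import struct

def inet_checksum(data: bytes) -> int:
    # RFC 1071 ones'-complement sum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A minimal 20-byte IPv4 header with source 192.0.2.55 (example address).
def header(src_last: int, checksum: int = 0) -> bytes:
    return struct.pack("!BBHHHBBH4B4s",
                       0x45, 0, 20, 0, 0, 64, 6, checksum,
                       192, 0, 2, src_last, bytes([198, 51, 100, 9]))

real = inet_checksum(header(55))  # the checksum a real report would contain

# If the last octet is masked but the checksum isn't, only candidates
# that reproduce the checksum survive; here that pins it down exactly.
print([x for x in range(256) if inet_checksum(header(x)) == real])  # [55]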
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine. @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. DWARF exception handling data (.eh_frame) + glibc is also a Turing machine. The x86 MMU (IDT, GDT, TSS) works too: address translation was used to create a Turing machine, where the page fault handler reads and writes memory and the page table serves as the byte code. There is an example on GitHub using this Turing machine to fly a glider across the screen.

Next, Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs (certificate signing requests) are parsed during creation by the cert requestor, and again by another parser at the CA. Another example is ELF—there are several parsers in the OS toolchain, and they are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions:
- Trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it.
- Complex data formats mean bugs.
- There is no "chain of trust" in Babylon! (that is, with parser differentials)
- We need to squeeze complexity out of data until data stops being "code equivalent".

For further information, see langsec.org, and see USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
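The parser-differential idea above is easy to reproduce in a few lines. This sketch of mine shows two plausible parsers for the same key=value input disagreeing on a duplicated key, which is exactly the kind of gap an attacker can stand in when a validator uses one parser and the consumer uses the other.

raw = "user=alice&user=root"

def parse_first_wins(qs: str) -> dict:
    out = {}
    for pair in qs.split("&"):
        k, v = pair.split("=", 1)
        out.setdefault(k, v)   # keeps the first occurrence
    return out

def parse_last_wins(qs: str) -> dict:
    # dict() with repeated keys keeps the last occurrence
    return dict(pair.split("=", 1) for pair in qs.split("&"))

# Same bytes, two "valid" interpretations.
print(parse_first_wins(raw))  # {'user': 'alice'}
print(parse_last_wins(raw))   # {'user': 'root'}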

    Read the article

  • How to subscribe to the free Oracle Linux errata yum repositories

    - by Lenz Grimmer
Now that updates and errata for Oracle Linux are available for free (both as in beer and as in freedom), here's a quick HOWTO on how to subscribe your Oracle Linux system to the newly added yum repositories on our public yum server, assuming that you just installed Oracle Linux from scratch, e.g. by using the installation media (ISO images) available from the Oracle Software Delivery Cloud.

You need to download the appropriate yum repository configuration file from the public yum server and install it in the yum repository directory. For Oracle Linux 6, the process looks as follows. As the root user, run the following command:

[root@oraclelinux62 ~]# wget http://public-yum.oracle.com/public-yum-ol6.repo \
  -P /etc/yum.repos.d/
--2012-03-23 00:18:25--  http://public-yum.oracle.com/public-yum-ol6.repo
Resolving public-yum.oracle.com... 141.146.44.34
Connecting to public-yum.oracle.com|141.146.44.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1461 (1.4K) [text/plain]
Saving to: “/etc/yum.repos.d/public-yum-ol6.repo”
100%[=================================================>] 1,461 --.-K/s in 0s
2012-03-23 00:18:26 (37.1 MB/s) - “/etc/yum.repos.d/public-yum-ol6.repo” saved [1461/1461]

For Oracle Linux 5, the file name would be public-yum-ol5.repo in the URL above instead. The "_latest" repositories that contain the errata packages are already enabled by default — you can simply pull in all available updates by running "yum update" next:

[root@oraclelinux62 ~]# yum update
Loaded plugins: refresh-packagekit, security
ol6_latest | 1.1 kB 00:00
ol6_latest/primary | 15 MB 00:42
ol6_latest 14643/14643
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package at.x86_64 0:3.1.10-43.el6 will be updated
---> Package at.x86_64 0:3.1.10-43.el6_2.1 will be an update
---> Package autofs.x86_64 1:5.0.5-39.el6 will be updated
---> Package autofs.x86_64 1:5.0.5-39.el6_2.1 will be an update
---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6 will be updated
---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update
---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6 will be updated
---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update
---> Package cvs.x86_64 0:1.11.23-11.el6_0.1 will be updated
---> Package cvs.x86_64 0:1.11.23-11.el6_2.1 will be an update
[...]
---> Package yum.noarch 0:3.2.29-22.0.1.el6 will be updated
---> Package yum.noarch 0:3.2.29-22.0.2.el6_2.2 will be an update
---> Package yum-plugin-security.noarch 0:1.1.30-10.el6 will be updated
---> Package yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 will be an update
---> Package yum-utils.noarch 0:1.1.30-10.el6 will be updated
---> Package yum-utils.noarch 0:1.1.30-10.0.1.el6 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================
 Package              Arch    Version                 Repository  Size
=====================================================================================
Installing:
 kernel               x86_64  2.6.32-220.7.1.el6      ol6_latest  24 M
 kernel-uek           x86_64  2.6.32-300.11.1.el6uek  ol6_latest  21 M
 kernel-uek-devel     x86_64  2.6.32-300.11.1.el6uek  ol6_latest  6.3 M
Updating:
 at                   x86_64  3.1.10-43.el6_2.1       ol6_latest  60 k
 autofs               x86_64  1:5.0.5-39.el6_2.1      ol6_latest  470 k
 bind-libs            x86_64  32:9.7.3-8.P3.el6_2.2   ol6_latest  839 k
 bind-utils           x86_64  32:9.7.3-8.P3.el6_2.2   ol6_latest  178 k
 cvs                  x86_64  1.11.23-11.el6_2.1      ol6_latest  711 k
[...]
 xulrunner            x86_64  10.0.3-1.0.1.el6_2      ol6_latest  12 M
 yelp                 x86_64  2.28.1-13.el6_2         ol6_latest  778 k
 yum                  noarch  3.2.29-22.0.2.el6_2.2   ol6_latest  987 k
 yum-plugin-security  noarch  1.1.30-10.0.1.el6       ol6_latest  36 k
 yum-utils            noarch  1.1.30-10.0.1.el6       ol6_latest  94 k

Transaction Summary
=====================================================================================
Install   3 Package(s)
Upgrade  96 Package(s)

Total download size: 173 M
Is this ok [y/N]: y
Downloading Packages:
(1/99): at-3.1.10-43.el6_2.1.x86_64.rpm | 60 kB 00:00
(2/99): autofs-5.0.5-39.el6_2.1.x86_64.rpm | 470 kB 00:01
(3/99): bind-libs-9.7.3-8.P3.el6_2.2.x86_64.rpm | 839 kB 00:02
(4/99): bind-utils-9.7.3-8.P3.el6_2.2.x86_64.rpm | 178 kB 00:00
[...]
(96/99): yelp-2.28.1-13.el6_2.x86_64.rpm | 778 kB 00:02
(97/99): yum-3.2.29-22.0.2.el6_2.2.noarch.rpm | 987 kB 00:03
(98/99): yum-plugin-security-1.1.30-10.0.1.el6.noarch.rpm | 36 kB 00:00
(99/99): yum-utils-1.1.30-10.0.1.el6.noarch.rpm | 94 kB 00:00
-------------------------------------------------------------------------------------
Total 306 kB/s | 173 MB 09:38
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Retrieving key from http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
Importing GPG key 0xEC551F03:
 Userid: "Oracle OSS group (Open Source Software group) "
 From  : http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating : yum-3.2.29-22.0.2.el6_2.2.noarch 1/195
  Updating : xorg-x11-server-common-1.10.4-6.el6_2.3.x86_64 2/195
  Updating : kernel-uek-headers-2.6.32-300.11.1.el6uek.x86_64 3/195
  Updating : 12:dhcp-common-4.1.1-25.P1.el6_2.1.x86_64 4/195
  Updating : tzdata-java-2011n-2.el6.noarch 5/195
  Updating : tzdata-2011n-2.el6.noarch 6/195
  Updating : glibc-common-2.12-1.47.el6_2.9.x86_64 7/195
  Updating : glibc-2.12-1.47.el6_2.9.x86_64 8/195
[...]
  Cleanup  : kernel-firmware-2.6.32-220.el6.noarch 191/195
  Cleanup  : kernel-uek-firmware-2.6.32-300.3.1.el6uek.noarch 192/195
  Cleanup  : glibc-common-2.12-1.47.el6.x86_64 193/195
  Cleanup  : glibc-2.12-1.47.el6.x86_64 194/195
  Cleanup  : tzdata-2011l-4.el6.noarch 195/195

Installed:
  kernel.x86_64 0:2.6.32-220.7.1.el6
  kernel-uek.x86_64 0:2.6.32-300.11.1.el6uek
  kernel-uek-devel.x86_64 0:2.6.32-300.11.1.el6uek

Updated:
  at.x86_64 0:3.1.10-43.el6_2.1
  autofs.x86_64 1:5.0.5-39.el6_2.1
  bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2
  bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2
  cvs.x86_64 0:1.11.23-11.el6_2.1
  dhclient.x86_64 12:4.1.1-25.P1.el6_2.1
[...]
  xorg-x11-server-common.x86_64 0:1.10.4-6.el6_2.3
  xulrunner.x86_64 0:10.0.3-1.0.1.el6_2
  yelp.x86_64 0:2.28.1-13.el6_2
  yum.noarch 0:3.2.29-22.0.2.el6_2.2
  yum-plugin-security.noarch 0:1.1.30-10.0.1.el6
  yum-utils.noarch 0:1.1.30-10.0.1.el6

Complete!

At this point, your system is fully up to date. As the kernel was updated as well, a reboot is the recommended next action. If you want to install the latest release of the Unbreakable Enterprise Kernel Release 2 as well, you need to edit the .repo file and enable the respective yum repository (e.g. "ol6_UEK_latest" for Oracle Linux 6 and "ol5_UEK_latest" for Oracle Linux 5) manually, by setting "enabled" to "1". The next "yum update" run will download and install the second release of the Unbreakable Enterprise Kernel, which will be enabled after the next reboot.

-Lenz
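If you'd rather script that last edit than open the .repo file by hand, here is a small sketch (my example, not from Lenz's post) using Python's configparser. The .repo file is plain INI syntax; the section name "ol6_UEK_latest" comes from the post, and the file path assumes the download step above.

import configparser

REPO_FILE = "/etc/yum.repos.d/public-yum-ol6.repo"

# interpolation=None: read the file as-is, since repo values are not
# meant to be interpolated by configparser.
config = configparser.ConfigParser(interpolation=None)
config.read(REPO_FILE)

# Each repository is an INI section with an "enabled" key;
# flip it to "1" for the UEK repository.
config["ol6_UEK_latest"]["enabled"] = "1"

with open(REPO_FILE, "w") as f:
    config.write(f)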

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure:

Getting Started/Overview

A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whichever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, and Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts; I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage.

(Automatic) Installation and the Image Packaging System (IPS)

The brand new Image Packaging System (IPS) and the Automated Installer (AI), together with numerous other install/packaging/boot/patching features, are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you ever wanted to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays, and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will teach you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first.

Networking

One of the first things to do after installing Solaris 11 (or any operating system, for that matter) is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature, which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post, Solaris 11 manual IPv4 & IPv6 configuration, right after the launch ceremony. Thanks, Tschokko!
And Milek points out a long-awaited networking feature in his post Solaris 11 - hostmodel, which I know for a fact many customers have looked forward to: how to "bind" a Solaris 11 system to a specific gateway for a specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you, in DTracing TCP Congestion Control, how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts!

DTrace

And there we are: DTrace, the king of all observability tools. Long-time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit, which is installed by default in /usr/dtrace/DTT. It is chock-full of handy DTrace scripts, many of which were contributed by Brendan himself!

Security

Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the cloud, is security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one less major entry point to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: first by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: the Solaris X86 AESNI OpenSSL Engine will make sure you use your Intel CPU's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine, you're even luckier: it comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is no excuse not to encrypt any more...

Developers

Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects, and The Stub Proto: Not Just For Stub Objects Anymore, to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring!

Desktop and Graphics

Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around in What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released!

Performance

Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof?
Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11!

Send Me More Solaris 11 Launch Articles!

These are just a few of the more interesting blog articles that came out around the Solaris 11 launch; I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far, and share your enthusiasm for Solaris 11!

*Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!

    Read the article

  • Hey, Google: It’s Time to Add Multi-Window Multitasking To Android

    - by Chris Hoffman
In 2012, Google’s Dianne Hackborn threatened to revoke CyanogenMod’s access to the Android Market if they moved forward with adding “Cornerstone” multitasking to their custom ROM. Samsung has since created its own multi-window multitasking feature. Dianne Hackborn said this “is something that needs to be done at the mainline platform level” so apps wouldn’t break. She was right — Android needs this as a standard feature, and it’s time for Google to provide it.

Doesn’t Android Have Multitasking?

Android originally stood out from Apple’s iOS with its powerful multitasking. Applications can continue running in the background while you’re using another application. This makes Android powerful — you can even have BitTorrent clients downloading files in the background while using another app. But Android still kept the design of a single app on screen at a time. This made a lot of sense when Android only ran on smartphones with small screens. Today, Android runs on everything from small smartphones all the way up to huge “phablets” like the Galaxy Note. Android has gone beyond phones and runs on 12-inch tablets, convertibles with keyboard docks, laptops, and even Android desktops. Android isn’t just a phone operating system.

Samsung’s Multi-Window Isn’t Good Enough

Samsung has tried to add value to Android by adding a multi-window feature. When you’re using a high-end phone like the Galaxy Note or Galaxy S, or a Galaxy tablet, you have the ability to run certain apps side-by-side with each other. There are big problems here. This only works on Samsung devices, and only on specific Samsung devices. To add support for this feature in a way that doesn’t break other apps, Samsung’s multi-window feature also only works with specific apps: you can’t just run any app in multi-window view, only the apps on the Multi Window bar Samsung provides. This prevents third-party apps from breaking, which is what Google was worried about with CyanogenMod’s Cornerstone feature. A feature that only works with a handful of apps on specific devices from a single manufacturer isn’t good enough. This feature needs to work on every Android device — or at least on ones with suitably large screens and powerful enough internals. It needs to be an Android platform feature so application developers can ensure their apps will work properly with it on every device. Android developers shouldn’t have to add support for each manufacturer’s own multi-window feature if other manufacturers decide to copy Samsung.

Floating Apps Are a Dirty Hack

Floating apps also enable real multitasking. Remember that Android allows apps to run in the background while you’re using an app in the foreground. These apps can present interfaces that appear to float above the current app — think of it like using “always on top” to make a window always appear over every other app on a desktop operating system. You can install floating apps to browse the web, take notes, chat, and watch videos while using any app. But only apps specifically designed to run as floating apps will work, so you have to seek them out, and floating apps are awkward to use because they float over the app you’re using, blocking parts of its interface. Microsoft added floating-window support to Skype for Android: you can have a video conversation, and the other person’s face will always appear on your screen, even when you leave the Skype app. Microsoft is using more of Android’s multi-window multitasking power than Google is.
Custom ROMs and Root-Only Tweaks Aren’t Acceptable

Some custom ROMs are adding this feature to Android. Google threatened to revoke CyanogenMod’s access to the Android Market (now known as Google Play) if they added this feature, because it could potentially break third-party apps. Today, other custom ROMs are working on split-screen multitasking, and Samsung has added its own version to its own devices. You can also get this feature by using a root-only Xposed Framework tweak known as XMultiWindow. If you have root access, you can get multi-window multitasking for any app on your device. But this shouldn’t require rooting your device or installing a custom ROM. These third-party solutions often have awkward interfaces and bugs. We need an integrated, supported solution that works the same on every device.

Why Multi-Window is Important

Microsoft’s Windows 8.1 stands out among tablet operating systems for its powerful multitasking support, allowing you to view several apps side-by-side at the same time. Apple is also reported to be working on adding side-by-side apps to the iPad with iOS 8. On every competitor’s operating system, you’ll be able to view a web page while you write an email, watch a video while you browse the web, or chat with someone while you do anything else. But Android has remained frozen in time. Despite all of Android’s underlying power — and despite the way Android allows apps to adapt to different screen sizes — Google is resisting adding this feature. Large-screen Android tablets like the Nexus 10 (remember that tablet Google hasn’t updated in over 18 months?) need this feature. So do huge phones, convertibles, laptops, and Android desktops. If tablets are the future of personal computing, we should be able to do more than one thing at a time on our tablets’ big screens. Microsoft, Samsung, and even Apple are realizing this — now it’s Google’s turn.

Image Credit: Sergey Galyonkin on Flickr, Karlis Dambrans on Flickr

    Read the article
