Search Results

Search found 442 results on 18 pages for 'expire'.


  • Enabling DNS for IPv6 infrastructure

    After successful automatic distribution of IPv6 address information via DHCPv6 in your local network, it might be time to start offering some more services. Usually we would use host names to communicate with other machines instead of their bare IPv6 addresses. In the following paragraphs we are going to enable our own DNS name server with IPv6 address resolution. This is the third article in a series on IPv6 configuration:

  Configure IPv6 on your Linux system
  DHCPv6: Provide IPv6 information in your local network
  Enabling DNS for IPv6 infrastructure
  Accessing your web server via IPv6

Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

What's your name and your IPv6 address?

  $ sudo service bind9 status
   * bind9 is running

If the service is not recognised, you have to install it first on your system. This is done quickly and easily like so:

  $ sudo apt-get install bind9

Once again, there is no specialised package for IPv6. Just the regular application is good to go. But of course, it is necessary to enable IPv6 binding in the options. Let's fire up a text editor and modify the configuration file.

  $ sudo nano /etc/bind/named.conf.options

  acl iosnet {
          127.0.0.1;
          192.168.1.0/24;
          ::1/128;
          2001:db8:bad:a55::/64;
  };
  listen-on { iosnet; };
  listen-on-v6 { any; };
  allow-query { iosnet; };
  allow-transfer { iosnet; };

The most important directive is listen-on-v6. This enables named to bind to the IPv6 addresses configured on your system. The easiest choice is to specify any as the value, and named will bind to all available IPv6 addresses during start. More details and explanations are found in the man pages of named.conf. Save the file and restart the named service. As usual, check your log files and correct your configuration in case of any logged error messages. Using the netstat command you can validate that the service is running and see which IPv4 and IPv6 addresses it is bound to, like so:

  $ sudo service bind9 restart
  $ sudo netstat -lnptu | grep "named\W*$"
  tcp        0      0 192.168.1.2:53        0.0.0.0:*               LISTEN      1734/named
  tcp        0      0 127.0.0.1:53          0.0.0.0:*               LISTEN      1734/named
  tcp6       0      0 :::53                 :::*                    LISTEN      1734/named
  udp        0      0 192.168.1.2:53        0.0.0.0:*                           1734/named
  udp        0      0 127.0.0.1:53          0.0.0.0:*                           1734/named
  udp6       0      0 :::53                 :::*                                1734/named

Sweet! Okay, now it's about time to resolve host names and their assigned IPv6 addresses using our own DNS name server.

  $ host -t aaaa www.6bone.net 2001:db8:bad:a55::2
  Using domain server:
  Name: 2001:db8:bad:a55::2
  Address: 2001:db8:bad:a55::2#53
  Aliases:

  www.6bone.net is an alias for 6bone.net.
  6bone.net has IPv6 address 2001:5c0:1000:10::2

Alright, our newly configured BIND named is fully operational. Perhaps you are more familiar with the dig command. Here is the same kind of IPv6 host name lookup; it also provides more details about that particular host as well as the domain in general.

  $ dig @2001:db8:bad:a55::2 www.6bone.net. AAAA

More details on the Berkeley Internet Name Domain (BIND) daemon and IPv6 are available in Chapter 22.1 of Peter Bieringer's HOWTO on IPv6.
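    Side note, not from the article itself: for the other machines on the LAN to actually use this name server, their resolver has to point at it. A minimal sketch of a client's /etc/resolv.conf, assuming the server addresses used throughout this article:

      nameserver 2001:db8:bad:a55::2
      nameserver 192.168.1.2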
Setting up your own DNS zone

Now that we have an operational named in place, it's about time to implement and configure our own host names and IPv6 address resolution. The general approach is to create your own zone database below the bind folder and to add AAAA records for your hosts. In order to achieve this, we have to define the zone first in the configuration file named.conf.local.

  $ sudo nano /etc/bind/named.conf.local

  //
  // Do any local configuration here
  //
  zone "ios.mu" {
          type master;
          file "/etc/bind/zones/db.ios.mu";
  };

Here we specify the location of our zone database file. Next, we are going to create it and add our host names, our IP and our IPv6 addresses.

  $ sudo nano /etc/bind/zones/db.ios.mu

  $ORIGIN .
  $TTL 259200     ; 3 days
  ios.mu                  IN SOA  ios.mu. hostmaster.ios.mu. (
                                  2014031101 ; serial
                                  28800      ; refresh (8 hours)
                                  7200       ; retry (2 hours)
                                  604800     ; expire (1 week)
                                  86400      ; minimum (1 day)
                                  )
                          NS      server.ios.mu.
  $ORIGIN ios.mu.
  server                  A       192.168.1.2
  server                  AAAA    2001:db8:bad:a55::2
  client1                 A       192.168.1.3
  client1                 AAAA    2001:db8:bad:a55::3
  client2                 A       192.168.1.4
  client2                 AAAA    2001:db8:bad:a55::4

With a couple of machines in place, it's time to reload that new configuration. Note: Each time you change your zone databases you have to update the serial, too. Named loads the plain text zone definitions and converts them into an internal, indexed binary format to improve lookup performance. If you forget to change your serial, named will not use the new records from the text file but the indexed ones, or you have to flush the index and force a reload of the zone. This can be done easily by either restarting named:

  $ sudo service bind9 restart

or by reloading the configuration using the name server control utility, rndc:

  $ sudo rndc reconfig

Check your log files for any error messages and whether the new zone database has been accepted. Next, we are going to resolve a host name, trying to get its IPv6 address like so:

  $ host -t aaaa server.ios.mu. 2001:db8:bad:a55::2
  Using domain server:
  Name: 2001:db8:bad:a55::2
  Address: 2001:db8:bad:a55::2#53
  Aliases:

  server.ios.mu has IPv6 address 2001:db8:bad:a55::2

Looks good. Alternatively, you could have just pinged the system using the ping6 command instead of the regular ping:

  $ ping6 server
  PING server(2001:db8:bad:a55::2) 56 data bytes
  64 bytes from 2001:db8:bad:a55::2: icmp_seq=1 ttl=64 time=0.615 ms
  64 bytes from 2001:db8:bad:a55::2: icmp_seq=2 ttl=64 time=0.407 ms
  ^C
  --- ios1 ping statistics ---
  2 packets transmitted, 2 received, 0% packet loss, time 1001ms
  rtt min/avg/max/mdev = 0.407/0.511/0.615/0.104 ms

That also looks promising to me. How about your configuration? Next, it might be interesting to extend the range of available services on the network. One essential service would be to have web sites at hand.
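    Side note, not part of the article excerpt: before reloading, the zone file itself can be validated with BIND's named-checkzone utility, which catches syntax and serial problems earlier than the logs do. A minimal sketch using the zone name and file path from above; if the file parses cleanly, expect output along the lines of "loaded serial 2014031101" followed by "OK":

      $ sudo named-checkzone ios.mu /etc/bind/zones/db.ios.mu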

    Read the article

  • php - upload script mkdir saying file already exists when same directory even though different filename

    - by neeko
    my upload script says my file already exists when I try to upload, even though the filename is different

<?php
// Start a session for error reporting
session_start();
?>
<?php
// Check, if username session is NOT set then this page will jump to login page
if (!isset($_SESSION['username'])) {
    header('Location: index.html');
}

// Call our connection file
include('config.php');

// Check to see if the type of file uploaded is a valid image type
function is_valid_type($file)
{
    // This is an array that holds all the valid image MIME types
    $valid_types = array("image/jpg", "image/JPG", "image/jpeg", "image/bmp", "image/gif", "image/png");
    if (in_array($file['type'], $valid_types))
        return 1;
    return 0;
}

// Just a short function that prints out the contents of an array in a manner that's easy to read
// I used this function during debugging but it serves no purpose at run time for this example
function showContents($array)
{
    echo "<pre>";
    print_r($array);
    echo "</pre>";
}

// Set some constants
// Grab the User ID we sent from our form
$user_id  = $_SESSION['username'];
$category = $_POST['category'];

// This variable is the path to the image folder where all the images are going to be stored
// Note that there is a trailing forward slash
$TARGET_PATH = "img/users/$category/$user_id/";
mkdir($TARGET_PATH, 0755, true);

// Get our POSTed variables
$fname   = $_POST['fname'];
$lname   = $_POST['lname'];
$contact = $_POST['contact'];
$price   = $_POST['price'];
$image   = $_FILES['image'];

// Build our target path full string. This is where the file will be moved to
// i.e. images/picture.jpg
$TARGET_PATH .= $image['name'];

// Make sure all the fields from the form have inputs
if ( $fname == "" || $lname == "" || $image['name'] == "" ) {
    $_SESSION['error'] = "All fields are required";
    header("Location: error.php");
    exit;
}

// Check to make sure that our file is actually an image
// You check the file type instead of the extension because the extension can easily be faked
if (!is_valid_type($image)) {
    $_SESSION['error'] = "You must upload a jpeg, gif, or bmp";
    header("Location: error.php");
    exit;
}

// Here we check to see if a file with that name already exists
// You could get past filename problems by appending a timestamp to the filename and then continuing
if (file_exists($TARGET_PATH)) {
    $_SESSION['error'] = "A file with that name already exists";
    header("Location: error.php");
    exit;
}

// Lets attempt to move the file from its temporary directory to its new home
if (move_uploaded_file($image['tmp_name'], $TARGET_PATH)) {
    // NOTE: This is where a lot of people make mistakes.
    // We are *not* putting the image into the database; we are putting a reference to the file's location on the server
    $imagename = $image['name'];
    $sql = "insert into people (price, contact, category, username, fname, lname, expire, filename) values (:price, :contact, :category, :user_id, :fname, :lname, now() + INTERVAL 1 MONTH, :imagename)";
    $q = $conn->prepare($sql) or die("failed!");
    $q->bindParam(':price', $price, PDO::PARAM_STR);
    $q->bindParam(':contact', $contact, PDO::PARAM_STR);
    $q->bindParam(':category', $category, PDO::PARAM_STR);
    $q->bindParam(':user_id', $user_id, PDO::PARAM_STR);
    $q->bindParam(':fname', $fname, PDO::PARAM_STR);
    $q->bindParam(':lname', $lname, PDO::PARAM_STR);
    $q->bindParam(':imagename', $imagename, PDO::PARAM_STR);
    $q->execute();

    $sql1 = "UPDATE people SET firstname = (SELECT firstname FROM user WHERE username=:user_id1) WHERE username=:user_id2";
    $q = $conn->prepare($sql1) or die("failed!");
    $q->bindParam(':user_id1', $user_id, PDO::PARAM_STR);
    $q->bindParam(':user_id2', $user_id, PDO::PARAM_STR);
    $q->execute();

    $sql2 = "UPDATE people SET surname = (SELECT surname FROM user WHERE username=:user_id1) WHERE username=:user_id2";
    $q = $conn->prepare($sql2) or die("failed!");
    $q->bindParam(':user_id1', $user_id, PDO::PARAM_STR);
    $q->bindParam(':user_id2', $user_id, PDO::PARAM_STR);
    $q->execute();

    header("Location: search.php");
    exit;
} else {
    // A common cause of file moving failures is because of bad permissions on the directory attempting to be written to
    // Make sure you chmod the directory to be writeable
    $_SESSION['error'] = "Could not upload file. Check read/write permissions on the directory";
    header("Location: error.php");
    exit;
}
?>
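    Not an answer from the thread, just a hedged sketch of two likely fixes: the "File exists" warning in the title typically comes from calling mkdir() on a folder that a previous upload already created, and the comments in the script already suggest appending a timestamp to sidestep duplicate filenames. Both ideas, reusing the poster's own variables (untested against their setup):

      // Only create the category/user folder if it is not already there
      if (!is_dir($TARGET_PATH)) {
          mkdir($TARGET_PATH, 0755, true);
      }

      // Optional: make the stored filename unique by appending a timestamp,
      // so the file_exists() check further down never trips over an earlier upload
      $info         = pathinfo($image['name']);
      $imagename    = $info['filename'] . '_' . time() . '.' . $info['extension'];
      $TARGET_PATH .= $imagename;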

    Read the article

  • Sendmail to local domain ignoring MX records (part 2)

    - by FractalizeR
    Hello. I have the exact problem, like in this post: http://serverfault.com/questions/25068/sendmail-to-local-domain-ignoring-mx-records I am also using email provider like GMail For Your Domain (which stores your mail and manages it). I am sending mail from my server directly, but receiving mail is done via Yandex (email provider). Since the server hosts forum, I prefer to send mail directly from it because using another mail provider can slow things. Also, when I send 300.000 emails to my subscribers, email provider will surely block me thinking I send spam. My DNS zone now is: ; ; GSMFORUM.RU ; $TTL 1H gsmforum.ru. SOA ns1.hc.ru. support.hc.ru. ( 2009122268 ; Serial 1H ; Refresh 30M ; Retry 1W ; Expire 1H ) ; Minimum gsmforum.ru. NS ns1.hc.ru. gsmforum.ru. NS ns2.hc.ru. @ A 79.174.68.223 *.gsmforum.ru. CNAME @ ns1 A 79.174.68.223 ns2 A 79.174.68.224 @ MX 10 mx.yandex.ru. mail CNAME domain.mail.yandex.net. yamail-xxxxxxxxx CNAME mail.yandex.ru. Server hostname is server.gsmforum.ru. May be this is the cause? Can someone explain the reason of the matter (the rules that make sendmail consider domain to be local)? Can I easily change *.gsmforum.ru. CNAME @ into *.gsmforum.ru. A 79.174.68.224 to solve this problem? [root@server ~]# cat /etc/mail/local-host-names localhost localhost.localdomain This server hosts gsmforum.ru so I cannot put it into another domain like David Mackintosh suggests. Putting domain in mailertable doesn't solve the problem also. sendmail -bt still shows, that address is local. DontProbeInterfaces is also set to true at sendmail config. M4 file follows: divert(-1)dnl dnl # dnl # This is the sendmail macro config file for m4. If you make changes to dnl # /etc/mail/sendmail.mc, you will need to regenerate the dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is dnl # installed and then performing a dnl # dnl # make -C /etc/mail dnl # include(`/usr/share/sendmail-cf/m4/cf.m4')dnl VERSIONID(`setup for linux')dnl OSTYPE(`linux')dnl dnl # dnl # Do not advertize sendmail version. dnl # dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl dnl # dnl # default logging level is 9, you might want to set it higher to dnl # debug the configuration dnl # dnl define(`confLOG_LEVEL', `9')dnl dnl # dnl # Uncomment and edit the following line if your outgoing mail needs to dnl # be sent out through an external mail server: dnl # dnl define(`SMART_HOST', `smtp.your.provider')dnl dnl # define(`confDEF_USER_ID', ``8:12'')dnl dnl define(`confAUTO_REBUILD')dnl define(`confTO_CONNECT', `1m')dnl define(`confTRY_NULL_MX_LIST', `True')dnl define(`confDONT_PROBE_INTERFACES',`True') define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl define(`ALIAS_FILE', `/etc/aliases')dnl define(`STATUS_FILE', `/var/log/mail/statistics')dnl define(`UUCP_MAILER_MAX', `2000000')dnl define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl define(`confAUTH_OPTIONS', `A')dnl dnl # dnl # The following allows relaying if the user authenticates, and disallows dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links dnl # dnl define(`confAUTH_OPTIONS', `A p')dnl dnl # dnl # PLAIN is the preferred plaintext authentication method and used by dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do dnl # use LOGIN. Other mechanisms should be used if the connection is not dnl # guaranteed secure. dnl # Please remember that saslauthd needs to be running for AUTH. 
dnl # dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl # dnl # Rudimentary information on creating certificates for sendmail TLS: dnl # cd /usr/share/ssl/certs; make sendmail.pem dnl # Complete usage: dnl # make -C /usr/share/ssl/certs usage dnl # dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl dnl # dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's dnl # slapd, which requires the file to be readble by group ldap dnl # dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl dnl # dnl define(`confTO_QUEUEWARN', `4h')dnl dnl define(`confTO_QUEUERETURN', `5d')dnl dnl define(`confQUEUE_LA', `12')dnl dnl define(`confREFUSE_LA', `18')dnl define(`confTO_IDENT', `0')dnl dnl FEATURE(delay_checks)dnl FEATURE(`no_default_msa', `dnl')dnl FEATURE(`smrsh', `/usr/sbin/smrsh')dnl FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl FEATURE(redirect)dnl FEATURE(always_add_domain)dnl FEATURE(use_cw_file)dnl FEATURE(use_ct_file)dnl dnl # dnl # The following limits the number of processes sendmail can fork to accept dnl # incoming messages or process its message queues to 20.) sendmail refuses dnl # to accept connections once it has reached its quota of child processes. dnl # dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl dnl # dnl # Limits the number of new connections per second. This caps the overhead dnl # incurred due to forking new sendmail processes. May be useful against dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address dnl # limit would be useful but is not available as an option at this writing.) dnl # dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl dnl # dnl # The -t option will retry delivery if e.g. the user runs over his quota. dnl # FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl FEATURE(`blacklist_recipients')dnl EXPOSED_USER(`root')dnl dnl # dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment dnl # the following 2 definitions and activate below in the MAILER section the dnl # cyrusv2 mailer. dnl # dnl define(`confLOCAL_MAILER', `cyrusv2')dnl dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl dnl # dnl # The following causes sendmail to only listen on the IPv4 loopback address dnl # 127.0.0.1 and not on any other network devices. Remove the loopback dnl # address restriction to accept email from the internet or intranet. dnl # DAEMON_OPTIONS(`Name=MTA,Port=smtp') dnl # dnl # The following causes sendmail to additionally listen to port 587 for dnl # mail from MUAs that authenticate. Roaming users who can't reach their dnl # preferred sendmail daemon due to port 25 being blocked or redirected find dnl # this useful. dnl # dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl dnl # dnl # The following causes sendmail to additionally listen to port 465, but dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't dnl # do STARTTLS on ports other than 25. 
Mozilla Mail can ONLY use STARTTLS dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1. dnl # dnl # For this to work your OpenSSL certificates must be configured. dnl # dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl dnl # dnl # The following causes sendmail to additionally listen on the IPv6 loopback dnl # device. Remove the loopback address restriction listen to the network. dnl # dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl dnl # dnl # enable both ipv6 and ipv4 in sendmail: dnl # dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6') dnl # dnl # We strongly recommend not accepting unresolvable domains if you want to dnl # protect yourself from spam. However, the laptop and users on computers dnl # that do not have 24x7 DNS do need this. dnl # FEATURE(`accept_unresolvable_domains')dnl dnl # dnl FEATURE(`relay_based_on_MX')dnl dnl # dnl # Also accept email sent to "localhost.localdomain" as local email. dnl # LOCAL_DOMAIN(`localhost.localdomain')dnl dnl # dnl # The following example makes mail from this host and any additional dnl # specified domains appear to be sent from mydomain.com dnl # dnl MASQUERADE_AS(`mydomain.com')dnl dnl # dnl # masquerade not just the headers, but the envelope as well dnl # dnl FEATURE(masquerade_envelope)dnl dnl # dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well dnl # dnl FEATURE(masquerade_entire_domain)dnl dnl # dnl MASQUERADE_DOMAIN(localhost)dnl dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl dnl MASQUERADE_DOMAIN(mydomain.lan)dnl MAILER(smtp)dnl MAILER(procmail)dnl dnl MAILER(cyrusv2)dnl FEATURE(`dnsbl',`zen.spamhaus.org',`Rejected - your IP is blacklisted by http://www.spamhaus.org')
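    Not part of the original post, just a hedged diagnostic sketch: sendmail's address test mode can show directly why it treats the domain as local. Dumping class w lists every name sendmail considers local, and running rulesets 3 and 0 against an address for the domain shows which mailer it would pick:

      # sendmail -bt
      > $=w
      > 3,0 test@gsmforum.ru

    If gsmforum.ru turns up in class w, or the parse ends in the local mailer, sendmail will keep delivering locally no matter what the MX records say; the usual sources for class w are /etc/mail/local-host-names and the machine's own hostname and aliases.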

    Read the article

  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
    I'm setting up a couple of dedicated servers, and having problems setting up my nameservers properly. One of these is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5, with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM. So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end. Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve. Things I have tried Going through standard procedures to set up DNS in WHM Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through WHM procedures for setting up DNS Clearing all DNS information, and setting up BIND via shell (completely outside of cPanel) by using my config and zone files from the LEMP server as a template named runs just fine, but nothing is resolving. When I "dig any example.com" I get a SERVFAIL message. Nslookups return no information. Here are my config and zone files. named.conf controls { inet 127.0.0.1 allow { localhost; } keys { coretext-key; }; }; options { listen-on port 53 { any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; // Those options should be used carefully because they disable port // randomization // query-source port 53; // query-source-v6 port 53; allow-query { any; }; allow-query-cache { any; }; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; view "localhost_resolver" { match-clients { 127.0.0.0/24; }; match-destinations { localhost; }; recursion yes; //zone "." IN { // type hint; // file "/var/named/named.ca"; //}; include "/etc/named.rfc1912.zones"; }; view "internal" { /* This view will contain zones you want to serve only to "internal" clients that connect via your directly attached LAN interfaces - "localnets" . */ match-clients { localnets; }; match-destinations { localnets; }; recursion yes; zone "." IN { type hint; file "/var/named/named.ca"; }; // include "/var/named/named.rfc1912.zones"; // you should not serve your rfc1912 names to non-localhost clients. 
// These are your "authoritative" internal zones, and would probably // also be included in the "localhost_resolver" view above : zone "example.com" { type master; file "data/db.example.com"; }; zone "3.2.1.in-addr.arpa" { type master; file "data/db.1.2.3"; }; }; view "external" { /* This view will contain zones you want to serve only to "external" clients * that have addresses that are not on your directly attached LAN interface subnets: */ match-clients { any; }; match-destinations { any; }; recursion no; // you'd probably want to deny recursion to external clients, so you don't // end up providing free DNS service to all takers allow-query-cache { none; }; // Disable lookups for any cached data and root hints // all views must contain the root hints zone: //include "/etc/named.rfc1912.zones"; zone "." IN { type hint; file "/var/named/named.ca"; }; zone "example.com" { type master; file "data/db.example.com"; }; zone "3.2.1.in-addr.arpa" { type master; file "data/db.1.2.3"; }; }; include "/etc/rndc.key"; db.example.com $TTL 1D ; ; Zone file for example.com ; ; Mandatory minimum for a working domain ; @ IN SOA ns1.example.com. contact.example.com. ( 2011042905 ; serial 8H ; refresh 2H ; retry 4W ; expire 1D ; default_ttl ) NS ns1.example.com. NS ns2.example.com. ns1 A 1.2.3.4 ns2 A 1.2.3.5 example.com. A 1.2.3.4 localhost A 127.0.0.1 www CNAME example.com. mail CNAME example.com. ; db.1.2.3 $TTL 1D $ORIGIN 3.2.1.in-addr.arpa. @ IN SOA ns1.example.com contact.example.com. ( 2011042908 ; 8H ; 2H ; 4W ; 1D ; ) NS ns1.example.com. NS ns2.example.com. 4 PTR hostname.example.com. 5 PTR hostname.example.com. ; Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong, then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spend his whole shift asking all the same questions the last guy asked. So, in summary: *Nameservers, with IPs, are correctly registered with domain registrar *named is configured and running *...and must not be configured correctly, because nothing resolves. Any help would be great. I changed domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks! UPDATE I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with my two public IPs that I have named listening on. resolv.conf search www.example.com example.com nameserver 127.0.0.1 nameserver 7.8.9.10 ;Was in here by default, authoritative nameserver of hosting company nameserver 1.2.3.4 ;Public IP #1 nameserver 1.2.3.5 ;Public IP #2 Now when I DIG example.com from the host, it resolves. If I try to DIG from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.
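    Not part of the original post, but a hedged checklist that often narrows a SERVFAIL like this down: validate the configuration and zone files with BIND's own checkers, confirm named is bound to the public interface, and query the public IP directly for the SOA (paths and addresses assumed from the files above):

      # named-checkconf /etc/named.conf
      # named-checkzone example.com /var/named/data/db.example.com
      # netstat -lnptu | grep named
      # dig @1.2.3.4 example.com SOA +norecurse

    If the checkers pass but the direct SOA query still fails from outside, the request is probably matching a view (or a firewall rule) that doesn't carry the zone, rather than hitting a problem with the zone data itself.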

    Read the article

  • DNS works only with ip but does not work with NS CentOS + Bind9

    - by Borislav Yordanov
    I am having a headache with DNS. Lets say my public IP is 1.2.3.4, my local IP is 192.168.0.10 and my domain is example.com I am running CentOS on a virtual machine (Parallels Desktop for Mac) with a LAN card reserved for it, so it gets Ip directly from the router. I have ports 80,443,53 forwarded to 192.168.0.10. Both Mac OS and CentOs firewalls are Off. The strange is when I type dig @1.2.3.4 example.com from my other PC I get: ; <<>> DiG 9.8.3-P1 <<>> @1.2.3.4 example.com ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16941 ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;example.com. IN A ;; ANSWER SECTION: example.com. 86400 IN A 1.2.3.4 ;; AUTHORITY SECTION: example.com. 86400 IN NS ns2.example.com. example.com. 86400 IN NS ns1.example.com. ;; ADDITIONAL SECTION: ns1.example.com. 86400 IN A 1.2.3.4 ns2.example.com. 86400 IN A 1.2.3.4 ;; Query time: 8 msec ;; SERVER: 1.2.3.4#53(1.2.3.4) ;; WHEN: Sat Nov 2 09:37:36 2013 ;; MSG SIZE rcvd: 109 but when i type: dig @ns1.example.com example.com it waits a few seconds and returns dig: couldn't get address for 'ns1.dsht.in': not found This is my config file: /etc/named.conf options { listen-on-v6 { none; }; directory"/var/named"; dump-file"/var/named/data/cache_dump.db"; statistics-file"/var/named/data/named_stats.txt"; memstatistics-file"/var/named/data/named_mem_stats.txt"; allow-query{ localhost; 192.168.0.0/24; }; allow-transfer { localhost; 192.168.0.0/24; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; # change all from here view "internal" { match-clients { localhost; 192.168.0.0/24; }; zone "." IN { type hint; file "named.ca"; }; zone "example.com" IN { type master; file "example.com.zone"; allow-update { none; }; }; zone "0.168.192.in-addr.arpa" IN { type master; file "0.168.192.in-addr.arpa"; allow-update { none; }; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; }; view "external" { match-clients { any; }; allow-query { any; }; recursion no; zone "example.com" IN { type master; file "example.com.zone"; allow-update { none; }; }; zone "4.3.2.1.in-addr.arpa" IN { type master; file "4.3.2.1.in-addr.arpa"; allow-update { none; }; }; }; /var/named/exmaple.com.zone $TTL 86400 @ IN SOA ns1.example.com. host.example.com. ( 2013042201 ;Serial 3600 ;Refresh 1800 ;Retry 604800 ;Expire 86400 ;Minimum TTL ) ; Specify our two nameservers IN NS ns1.example.com. IN NS ns2.example.com. ; Resolve nameserver hostnames to IP, replace with your two droplet IP addresses. ns1 IN A 1.2.3.4 ns2 IN A 1.2.3.4 ; Define hostname -> IP pairs which you wish to resolve @ IN A 1.2.3.4 IN A 1.2.3.4 www IN A 1.2.3.4 server2 IN A 192.168.0.2 * IN A 1.2.3.4 /var/named/4.3.2.1.in-addr.arpa $TTL 2d ; 172800 seconds $ORIGIN 4.3.2.1.IN-ADDR.ARPA. @ IN SOA ns1.example.com. host.example.com. ( 2013010304 ; serial number 3h ; refresh 15m ; update retry 3w ; expiry 3h ; nx = nxdomain ttl ) IN NS ns1.example.com. IN NS ns2.example.com. IN PTR example.com. ; etc /var/named/0.168.192.in-addr.arpa $TTL 2d ; 172800 seconds $ORIGIN 0.168.192.IN-ADDR.ARPA. @ IN SOA ns1.example.com. host.example.com. 
( 2013010304 ; serial number 3h ; refresh 15m ; update retry 3w ; expiry 3h ; nx = nxdomain ttl ) IN NS ns1.example.com. IN NS ns2.example.com. 10 IN PTR example.com. 2 IN PTR server2.example.com ; etc I will be very glad if someone can help me. Thank you in advance
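    Not from the original post, just a hedged way to narrow this down: the "couldn't get address for 'ns1...'" error means the machine running dig cannot resolve the nameserver's own hostname, which points at delegation/glue rather than at the zone files. Tracing the delegation from the root servers shows whether the registrar is publishing the NS names and their glue addresses, and a direct query confirms the server answers for ns1 itself:

      $ dig +trace example.com NS
      $ dig @1.2.3.4 ns1.example.com A

    If the trace stops at the TLD servers without returning the NS records and addresses, the nameserver host records (glue) still need to be registered at the domain registrar.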

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
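    To make that description concrete, here is a rough, purely illustrative sketch of such a session file: the first line of each entry carries the full URL, the following lines are ordinary HTTP headers, and a separator line isolates each request. The URLs and the exact separator text here are made up for illustration; the authoritative format is whatever WebSurge/Fiddler actually export:

      GET http://localhost/store/albums HTTP/1.1
      Accept: application/json
      User-Agent: WebSurge Test Client

      ----------------------------------------------

      GET http://localhost/store/album/1 HTTP/1.1
      Accept: application/json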
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status

Currently West Wind WebSurge is still in beta. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there's an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there's a lot of interest I don't foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

Microsoft MVPs and Insiders get a free License

If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

Resources

For more info on WebSurge and to download it to try it out, use the following links.

West Wind WebSurge Home
Download West Wind WebSurge
Getting Started with West Wind WebSurge Video

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
    High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places even very remote, cell phones and even smart phones are a common sight. I’ll never forget when I was in India in 2011 I was up in the Southern Indian mountains riding an elephant out of a tiny local village, with an elephant herder in front riding atop of the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban as did quite a few of the locals in this small out of the way and not so touristy village. So we’re slowly trundling along in the forest and he’s lazily using his stick to guide the elephant and… 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting a huge pig jumps out from the side of the trail and he takes a picture running across our path in the jungle! So yeah, mobile technology is very pervasive and it’s reached into even very buried and unexpected parts of this world. Apps are still King Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there’s something that you need on your mobile device your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to have to support 2 or 3 completely separate platforms. There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration isssues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development. There’s also PhoneGap/Cordova which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications, and then packaging them to run in a specialized App container that can run on most mobile device platforms using a WebView interface. This allows for using of HTML technology, but it also still requires all the set up, configuration of APIs, security keys and certification and submission and deployment process just like native applications – you actually lose many of the benefits that  Web based apps bring. The big selling point of Cordova is that you get to use HTML have the ability to build your UI once for all platforms and run across all of them – but the rest of the app process remains in place. Apps can be a big pain to create and manage especially when we are talking about specialized or vertical business applications that aren’t geared at the mainstream market and that don’t fit the ‘store’ model. If you’re building a small intra department application you don’t want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit both from the development and deployment perspective. 
Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native, from write-once run-anywhere, to remote maintenance, to a single point of management, to having full control over the application as opposed to having the app store overlords censor you. In a lot of ways Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies, for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today's Mobile Web unfortunately looks a little different…

Where's the Love for the Mobile Web?

Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and yet we still don't really have a solid mobile Web platform. I know what you're thinking: "But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces". I agree to a point – it's actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I've been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile first UIs, and that's been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good.

Not just about the UI

But it's not just about the UI - it's also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together and integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices. I think there are a number of reasons for this.

Slow Standards Adoption

HTML standards implementation and ratification have been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that's missing some of the infrastructure pieces to make it whole on mobile devices. There's lots of potential, but it is lacking that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications. Some of it is the fragmentation of browsers and the slow evolution of the mobile specific HTML APIs. A host of mobile standards exist, but many of them are in the early review stage, have been stuck there for long periods of time and seem to move at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change, and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud.
It seems that either the standards bodies or the vendors need to carry the torch forward, and that doesn’t seem to be happening quickly enough. Mobile Device Integration just isn’t good enough. Current standards are not far-reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs to have access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with the GPS, phone, media, messaging, notifications, linking and contacts systems offers benefits that are unique to mobile applications and could be widely used, but these features are mostly (with the exception of GPS) inaccessible to Web based applications today. Unfortunately trying to do most of this today only with a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova’s app centric model with its own custom API accessing mobile device features, and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example there’s no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only partially implemented by mobile devices. There are alternatives and workarounds for some of these interfaces by using browser specific code, but that’s mighty ugly and something that I thought we were trying to leave behind with newer browser standards. But it’s not quite working out that way. It’s utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, Messaging API, Contacts Manager API have only minimal or no traction at all today. Keep in mind we’ve had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck, Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that’s a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago. It’s extremely frustrating to be able to build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often turns out to be a show stopper. The remaining 10% has to do with platform integration, browser differences and working around the limitations that browsers and ‘pinned’ applications impose on HTML applications. The maddening part is that these limitations seem arbitrary, as they could easily work on all mobile platforms. For example, SMS has a URL Moniker interface that sort of works on Android, works badly on iOS (only if the address is already in the contact list) and not at all on Windows Phone. There’s no reason this shouldn’t work universally using the same interface – after all, all phones have supported SMS since before the year 2000! But, it doesn’t have to be this way. Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era, and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers. It handles security concerns via prompts to avoid unwanted access, a model that would work in a similar fashion for most other device APIs. 
One time approval, and occasional re-approval if code changes or caches expire – simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as representing the current location only approximately when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there. Inadequate Web Application Linking and Activation. Another important piece of the puzzle that’s missing is the integration of HTML based Web applications. Today HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content there’s a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently and permanent linking is not so critical for this type of content. But applications have different needs. Applications need to be started up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it’s pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features. Unfortunately this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications, through to ‘installing’ them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a Mobile Web ‘App’ and making it sticky is by no means as easy or satisfying as it is with native apps. Today the way you’d go about this is: open the browser; search for a Web site with your search engine of choice; hope that you find the right site; hope that you actually find a site that works for your mobile device; click on the link and run the app in a fully chrome’d browser instance (read: tiny surface area); pin the app to the home screen (with all the limitations outlined above); and hope you pointed at the right URL when you pinned. Even for you and me as developers, there are a few steps in there that are painful and annoying, but think about the average user. First figuring out how to search for a specific site or URL, and then pinning the app – and hopefully from the right location? You’ve probably lost more than half of your audience at that point. This experience sucks. For developers too this process is painful, since app developers can’t control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with pop-ups that try to get people to create shortcuts via fancy animations that are both annoying and add overhead to each and every application – and each app implements this sort of thing differently. And that’s not the end of it – getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple’s non-standard meta tags are prominent and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires an actual screen – or rather a partial screen – to be captured for a shortcut in the tile manager. Who had that brilliant idea, I wonder? Surprisingly, Chrome on recent Android versions seems to actually get it right – icons use PNGs, pinning is easy, and pinned applications properly behave like standalone apps and retain the browser’s active page state and content. 
Each of the platforms has a different way to specify icons (WP doesn’t allow you to use an icon image at all), and the most widely used interface today is a bunch of Apple specific meta tags that other browsers choose to support. The question is: Why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one? Then there’s iOS and the crazy way it treats home screen linked URLs, using a hybrid format that is neither as capable as a Web app running in Safari nor as capable as a WebView hosted application. Moving off the Web ‘app’ link when switching to another app actually causes the browser to ‘blank out’ the Web application and its preview in the Task View. Then, when the ‘app’ is reactivated it ends up completely restarting the browser with the original link. This is crazy behavior that you can’t easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like say Google Maps) that’s not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate, to make Web apps a pain in the butt to use and to force users through the App Store/PhoneGap/Cordova route. App linking and management is a very basic problem – something that we essentially have solved in every desktop browser – yet on mobile devices, where it arguably matters a lot more to have easy access to web content, we have to jump through hoops to have even a remotely decent linking/activation experience across browsers. Where’s the Money? It’s not surprising that device home screen integration and Mobile Web support in general is in such dismal shape – the mobile OS vendors benefit financially from App store sales and have little to gain from Web based applications that bypass the App store and the cash cow that it presents. On top of that, platform specific vendor lock-in of both end users and developers, who have invested in hardware, apps and consumables, is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, and so again it’s no surprise that the mobile Web is struggling. But – that may be changing. More and more we’re seeing operations shifting to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web’s direction in the end. Nothing like the almighty dollar to drive innovation forward. Do we need a Mobile Web App Store? As much as I dislike moderated experiences in today’s massive App Stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that could provide something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that would allow searching and browsing of the registry and then also handle installation, in the form of providing the home screen linking, plus maybe an initial security configuration that determines what features the app is allowed to access. 
I’m not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well known entity – a mobile Web app store from a no-name source is unlikely to gain high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course this would also be an optional search path in addition to the standard open Web search mechanisms used to find and access content today. Security. Security is a big deal, and one of the reasons why so many IT professionals appear to be willing to go back to the walled garden of deployed apps is that apps are perceived as safe due to the official review and curation of the App stores. Curated stores are supposed to protect you from malware and from illegal and misleading content. It doesn’t always work out that way, and all the major vendors have had issues with security and the review process at some time or another. Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run completely externally, in the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device specific features. And as discussed earlier – security for device interaction can be granted to Web applications in the same way as it is for native applications, either via explicit policies loaded from the Web, or via prompting as GeoLocation does today. Security is important, but it’s certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn’t be a reason why Web apps can’t be an equal player in mobile applications. Apps are winning, but haven’t we been here before? So now we’re finding ourselves back in an era of installed apps, rather than Web based and centrally managed apps. Only it’s even worse today than with Desktop applications, in that the apps are going through a gatekeeper that charges a toll and censors what you can and can’t do in your apps. Frankly it’s a mystery to me why anybody would buy into this model and why it’s lasted this long when we’ve already been through this process. It’s crazy… It’s really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, yet we’re basically held back by what seems like little more than bureaucracy, partisan bickering and the self interest of the major parties involved. Back in the day of the desktop it was Internet Explorer’s 98+% market share holding back the Web from improvements for many years – now it’s the combined mobile OS market, in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps, with simple ways to install and run them consistently and persistently, that would go a long way toward making mobile Web applications much more usable and a seriously viable alternative to native apps. But as it is, mobile Web apps have a severe disadvantage in placement and operation. There are a few bright spots in all of this. Mozilla’s Firefox OS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content. 
It supports both packaged and certified package modes (that can be put into the app store), and Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest. Open Web apps are treated as first class citizens in Firefox OS and run using the same mechanism as installed apps. Unfortunately Firefox OS is off to a slow start, with minimal device support and a specific focus on the low end market. We can hope that this approach will change and catch on with other vendors, but that’s also an uphill battle given the conflict of interest with the platform lock-in that it represents. Recent versions of Android also seem to handle mobile Web application integration onto the desktop and activation reasonably well out of the box. Although it still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons go to the desktop on pinning, activation is WebView based and full screen, and application persistence is reliable as the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support. What’s also interesting to me is that Microsoft hasn’t picked up on the obvious need for a solid Web App platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead Microsoft is trying to beat the market leaders at their own game, fighting on their adversaries’ terms instead of taking a new tack. Providing a kick ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive that Microsoft could do to improve its miserable position in the mobile device market. Where are we at with Mobile Web? It sure sounds like I’m really down on the Mobile Web, right? I’ve built a number of mobile apps in the last year, and while the overall result and response to what we were able to accomplish in terms of UI has been very positive, getting that final 10% that required device integration dialed in was an absolute nightmare on every single one of them. Big compromises had to be made and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you can avoid integrating with device features altogether and don’t care deeply about a smooth integration with the mobile desktop, mobile Web development is fraught with frustration. So, yes I’m frustrated! But it’s not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn't be difficult for device platform vendors to make Web based applications first class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens. So, what’s your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion. 
    Resources: Rick Strahl on DotNet Rocks talking about Mobile Web. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in HTML5, Mobile.

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. This goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them. Cross-Site Scripting (XSS) Attacks According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding. Protecting Coming In (Request Validation) In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action. 
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content. 
On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This Javacript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script>   So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes. Cross-Site Request Forgery (CSRF) Attacks Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and login to MajorBank.com and then you navigate to Hackers.com then you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match. Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks. 
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name=”__RequestVerificationToken” type=”hidden” value=”NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1″ /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker trick you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ). 
For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> @Html.AntiForgeryToken() <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header: var csrfToken = $("input[name='__RequestVerificationToken']").val(); $.ajax({ headers: { __RequestVerificationToken: csrfToken }, type: "POST", dataType: "json", contentType: 'application/json; charset=utf-8', url: "/api/products", data: JSON.stringify({ name: "Milk", price: 2.33 }), statusCode: { 200: function () { alert("Success!"); } } }); Use the following code to create an action filter which you can use to match the header and cookie tokens: using System.Linq; using System.Net.Http; using System.Web.Helpers; using System.Web.Http.Controllers; namespace MvcApplication2.Infrastructure { public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute { protected override bool IsAuthorized(HttpActionContext actionContext) { var headerToken = actionContext .Request .Headers .GetValues("__RequestVerificationToken") .FirstOrDefault(); ; var cookieToken = actionContext .Request .Headers .GetCookies() .Select(c => c[AntiForgeryConfig.CookieName]) .FirstOrDefault(); // check for missing cookie or header if (cookieToken == null || headerToken == null) { return false; } // ensure that the cookie matches the header try { AntiForgery.Validate(cookieToken.Value, headerToken); } catch { return false; } return base.IsAuthorized(actionContext); } } } Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this: [ValidateAjaxAntiForgeryToken] public HttpResponseMessage PostProduct(Product productToCreate) { // add product to db return Request.CreateResponse(HttpStatusCode.OK); } After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don’t like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor. Conclusion The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app. 
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.

    Read the article

  • CodePlex Daily Summary for Wednesday, November 23, 2011

    CodePlex Daily Summary for Wednesday, November 23, 2011Popular ReleasesVisual Leak Detector for Visual C++ 2008/2010: v2.2.1: Enhancements: * strdup and _wcsdup functions support added. * Preliminary support for VS 11 added. Bugs Fixed: * Low performance after upgrading from VLD v2.1. * Memory leaks with static linking fixed (disabled calloc support). * Runtime error R6002 fixed because of wrong memory dump format. * version.h fixed in installer. * Some PVS studio warning fixed.NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.10: 3.6.0.10 22-Nov-2011 Update: Removed PreEmptive Platform integration (PreEmptive analytics) Removed all PreEmptive attributes Removed PreEmptive.dll assembly references from all projects Added first support to ADAM/AD LDS Thanks to PatBea. Work Item 9775: http://netsqlazman.codeplex.com/workitem/9775Developer Team Article System Management: DTASM v1.3: ?? ??? ???? 3 ????? ???? ???? ????? ??? : - ????? ?????? ????? ???? ?? ??? ???? ????? ?? ??? ? ?? ???? ?????? ???? ?? ???? ????? ?? . - ??? ?? ???? ????? ???? ????? ???? ???? ?? ????? , ?????? ????? ????? ?? ??? . - ??? ??????? ??? ??? ???? ?? ????? ????? ????? .SharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323) FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue # 447) The FBA Pack will now work in a zone that does not have FBA enabled (Another zone must have FBA enabled, and the zone must contain the me...SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 6: 1. improved support for schema 2. added find reference when right click on object list 3. added object rename supportBugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework and is a requirement. Because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...Anno 2070 Assistant: v0.1.0 (STABLE): Version 0.1.0 Features Production Chains Eco Production Chains (Complete) Tycoon Production Chains (Disabled - Incomplete) Tech Production Chains (Disabled - Incomplete) Supply (Disabled - Incomplete) Calculator (Disabled - Incomplete) Building Layouts Eco Building Layouts (Complete) Tycoon Building Layouts (Disabled - Incomplete) Tech Building Layouts (Disabled - Incomplete) Credits (Complete)Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloadedVsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 30 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 30 (beta)New: Support for TortoiseSVN 1.7 added. 
(the download contains both setups, for TortoiseSVN 1.6 and 1.7) New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows to group items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: The installer didn...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role) To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).WPF Converters: WPF Converters V1.2.0.0: support for enumerations, value types, and reference types in the expression converter's equality operators the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062) StyleCop conformance (more or less)Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default Change - KeyValurPairConverter no longer cares about the order of the key and value properties Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone Fix - Fixed boolean values sometimes being capitalized when converting to XML Fix - Fixed error when deserializing ConcurrentDictionary Fix - Fixed serializing some Uris returning the incorrect value Fix - Fixed occasional error when...Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Replaced 'Rebuild' with 'Refresh' throughout entire code. Rebuild will now be known as Refresh. mc_com.exe has been fully updated TV Show Resolutions... Resolved issue #206 - having to hit save twice when updating runtime manually Shrunk cache size and lowered loading times f...Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.ASP.net Awesome Samples (Web-Forms): 1.0 samples: Demos and Tutorials for ASP.net Awesome VS2008 are in .NET 3.5 VS2010 are in .NET 4.0 (demos for the ASP.net Awesome jQuery Ajax Controls)SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblyAJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release Version 51116November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). 
Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.36: Fix for issue #16908: string literals containing ASP.NET replacement syntax fail if the ASP.NET code contains the same character as the string literal delimiter. Also, we shouldn't be changing the delimiter for those literals or combining them with other literals; the developer may have specifically chosen the delimiter used because of possible content inserted by ASP.NET code. This logic is normally off; turn it on via the -aspnet command-line flag (or the Code.Settings.AllowEmbeddedAspNetBl...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...New ProjectsActiveWorlds World Server Admin PowerShell SnapIn: The purpose of this PowerShell SnapIn is to provide a set of tools to administer the world server from PowerShell. It leverages the ActiveWorlds SDK .NET Wrapper to provide this functionality.Aigu: Enter special characters like you would on your mobile phone. For instance, if you want to type 'é', you just hold down 'e' and a menu will appear. Selected the desired character using the arrow keys and press 'enter'. Simple but powerful.Are you workaholic?: Are you a workaholic? Did your Doctor advice you not to stare at the computer monitor for a long time? Then this app is perfectly made for you. It runs in the background, and alerts you to take periodic rests for your eyes and body. What's more, It's open source (MS-PL).ATDIS PoC: privateAuto Version Web Assets: The AVWA project is an HTTP Module written in C# that is designed to allow for versioning of various web assets such as .CSS and .JS files. This allows you to publish new versions of these files without having to force the server or the client browsers to expire cache.Bachelor Thesis Algorithm Test Bed: Algorithm Test Bed for my Bachelor ThesisBase64: Simple application helps converting strings and files from or to Base64 string. You can use any encoding to convert while a sidebar previews decoded string for all other encodings.BoracayExpress: BoracayExpressC++ Framework for Test Driven Development: A testing framework for C++ written in C++.Class2Table: Class2Table aka Entity2Table. Easy tool that allows creation of SQL tables from .Net types.Code for Demos & Experiments: This is where I will post code from demos and presentationsCodeMaker: CodeMaker?????????: 1、?????????? 2、???? 3、????? 4、??Python????????? ConsoleCommand: ConsoleCommand provides certain .Net commands for access from javascript console engines. Included are commands to set the text and background colors, as well as list and extract resources compiled in a .Net dll. Converter: Character code conversion tools ???????? CryptoInator - self contained, self-encrypting, self-decrypting image viewer: Original developed to encrypt and store NemID images in Denmark. 
DAiBears: Something, something, botDelicious Notify Plugin: Lets you push a blog post straight to Delicious from Live WriterDeveloperFile: Compresses Javascripts using the YUI .NET project. Loops through the root folder and subfolders for files matching the debug extension and creates new files using the release extension. (File extensions must match exactly).DotNetNuke SharePoint File Explorer: A DotNetNuke SharePoint File ExplorerDouban FM: WP7 Douban FM appGame Lib: Game Library is a open-source game library to allow focusing on the fun part of a game. It is developed in C#, but will be ported to C++ and VB.net.Google reader notes to Delicious Export tool (WPF): Google reader discontinued note in reader features. Current google reader allows to export users old notes in JSON format, This App will parse the JSON file & upload it to it delicious , delicious is a good alternative for note in readerHtml Source Transmitter Control: This web control allows getting a source of a web page, that will displayed before submit. So, developer can store a view of the html page, that was before server exception. It helps to reproduce bugs and can be used with other logging systems.Ideopuzzle: A puzzle gameImageShack-Uploader: This project demonstrates how to upload files automated to imageshack.us and other image hosters with C#.Insert Acronym Tags: Lets you insert <acronym> and <abbr> tags into your blog entry more easily.Insert Quick Link: Allows you to paste a link into the Writer window and have the a window similar to the one in Writer where you can change what text is to appear, open in new window, etc.Insert Video Plugin: Allows you to insert a video into a blog entry from a multitude of different sitesIoCWrap: Provides a wrapper to the various IoC container implementations so that it is possible to switch to a different provider without changing any application code.kaveepoj: sharepoint projectKinect Quiz Engine: Fun quiz game for the Kinect.Klaverjas: Test application for testing different new technologies in .NET (WCF, DataServices, C# stuff, Entity...etc.)Man In The Middle: A cyberpunk themed action with puzzle and strategy elements. Made with XNA as part of a game development course at the IT University of Copenhagen by Bo Bendtsen, Jonas Flensbak, Daniel Kromand, Jess Rahbek & Darryl Woodford.MediaSelektor: Simple tool to select mediasMicajah Mindtouch Deki Wiki Copier: Small C# application to move data between 2 Deki Wiki installs or, more importantly, from a wik.is account to a locally installed systemMineFlagger: MineFlagger is a mine clearing game modeled after Microsoft’s Minesweeper. In addition to standard play, MineFlagger incorporates an AI for fun and training.myXbyqwrhjadsfasfhgf: myXbyqwrhjadsfasfhgfnatoop: natoopNauplius.KeyStore: Provides secure application key storage backed by SQL 2008 and Active Directory.ObjectDB: An object database written using C# 4 and Mono.Cecil.PaceR: PaceR is an attempt to encapsulate a lot of the common code functionality I use on different projects. Instead of recreating functionality from memory or worse, copying from older projects, I'd like to have a central location to maintain this common code. Parseq: Parseq is a Parser Combinator library written in C# (version 2.0).PowerShell Network Adapter Configuration module: PowerShell Network Adapter Configuration module is a PowerShell module which provides functions for managing network adapters using WMI.public traffic tracker: This is a university project for a .net course. 
We develop a public traffic tracker applications for Windows Phone 7 devices, that can give information about the actual positions of the nearest vehicle on a given line. The speciality is that we use only the GPS information of the users' WP7 devices, so this is a completely software solution without any hardware investment. The disatvantage is that for the real operation we would need a lot of active WP7 user.puyo: puyoRadioTroll: Projeto web Radio TrollRead Feed Community: Read Feed CommunityReviewer: Reviewer.dk - Dansk spil og anmeldelsessite.Rollout Sharepoint Solutions - ROSS: ROSS performs the following actions: - Delete sitecollection and restart services - 'Get Latest Version' from SourceSafe - Rebuild Solution - Install all wsp solutions - Create SiteCollections - Check for build en provisioning errors - Send email to developers if errors occurredSchool Management: school managementSQL File Executer: This project is a class library written in c# which is used for executing *.sql files in remote server. Simply one dll file. You include it in your web project, add using statement at the top of your page, pass the parameters inside. Rest, it will do.Startup Manager: Startup Manager launches all startup programs at a managed rate therefore meaning that your computer doesn't crash everytime it starts up and you can use it immediately.stetic: ...Test Infrastructure Guidance: The purpose of this project is to provide guidance to testers in using TFS effectively as an ALM solution. TFS is much more than a simple code repository. Used with Visual Studio it can form a powerful testing solution and remove a lot of pain in dealing with test infrastructure overhead.Tête-à-tête: Tete-a-tete is an address book with a built-in function to send electronic mail over the Internet.Tipeysh! - Add-in that helps you creating C/C++ header files on a single click: Are you also feel miserable when you need to create a new header file in your Visual Studio C/C++ project? Repeatedly choosing "new header file", then writing the annoying (but needed) "#ifndef" section, then writing the class name with it's "private", "protected" and "public" access modifiers... too much clicks and typewriting! Well, there is a solution: Tipeysh! is a simple, easy to use, very handy and configurable Visual Studio Add-In, compatible for both the 2005 and 2008 versions. Once ...UMN Dashboard Project: academic projUsersMOSS: UsersMOSS est une petite application permettant de consulter sur un serveur MOSS les sites web (SPWeb) les users (SPUser), et les groupes (SPGroup). Cette application utilise le modèle objet de MOSS pour inspecter le contenu des objets d'un serveur MOSS. Cette application est loin d'être professionnelle, ou même terminée, mais elle me rend très souvent service. Surtout ne l'utilisez pas sur un serveur de production car le gestion du GC n'est pas faite, ce qui peut provoquer des plantages de v...UtilityLibrary.Win32: UtilityLibrary.Win32UW iLearn: The iLearn activity inference platform is a suite of desktop and mobile tools for logging, modeling, and classifying sensor data for mobile devices. 
It was created at the University of Washington.VsDocGen: Dynamic javascript documentation generation directly from xml comment documented source code.Windows Live Spaces Photo Album plugin: This is going to be a plugin for Windows Live Writer that will allow you to browse a Windows Live Space Photo Album.Windows Live Writer Plugin for Amazon Books using CueCat: This Windows Live Writer Plugin is for users who use WLW and wish to use their CueCat to scan books. ItemLookups are run against Amazon via its AWS and book image, title, author, and publisher is returned. This project was first created by Scott Hanselman on MSDN's Coding4Fun! X7: X7 makes it easier for win7user to clean the system. You'll no longer have to delete useless stuff in your win7. It's developed in bat.xDT - Commander: Using this application, the user can assign shortcuts (short texts) for various links/URLs. These short texts will be typed into a Textbox to then launch/go to the target (similar to the "Run" program in Windows).

    Read the article

  • Using the ASP.NET Cache to cache data in a Model or Business Object layer, without a dependency on System.Web in the layer - Part One.

    - by Rhames
    ASP.NET applications can make use of the System.Web.Caching.Cache object to cache data and prevent repeated expensive calls to a database or other store. However, ideally an application should make use of caching at the point where data is retrieved from the database, which typically is inside a Business Objects or Model layer. One of the key features of using a UI pattern such as Model-View-Presenter (MVP) or Model-View-Controller (MVC) is that the Model and Presenter (or Controller) layers are developed without any knowledge of the UI layer. Introducing a dependency on System.Web into the Model layer would break this independence of the Model from the View. This article gives a solution to this problem, using dependency injection to inject the caching implementation into the Model layer at runtime. This allows caching to be used within the Model layer, without any knowledge of the actual caching mechanism that will be used. Create a sample application to use the caching solution Create a test SQL Server database This solution uses a SQL Server database with the same Sales data used in my previous post on calculating running totals. The advantage of using this data is that it gives nice slow queries that will exaggerate the effect of using caching! To create the data, first create a new SQL database called CacheSample. Next run the following script to create the Sale table and populate it: USE CacheSample GO   CREATE TABLE Sale(DayCount smallint, Sales money) CREATE CLUSTERED INDEX ndx_DayCount ON Sale(DayCount) go INSERT Sale VALUES (1,120) INSERT Sale VALUES (2,60) INSERT Sale VALUES (3,125) INSERT Sale VALUES (4,40)   DECLARE @DayCount smallint, @Sales money SET @DayCount = 5 SET @Sales = 10   WHILE @DayCount < 5000  BEGIN  INSERT Sale VALUES (@DayCount,@Sales)  SET @DayCount = @DayCount + 1  SET @Sales = @Sales + 15  END Next create a stored procedure to calculate the running total, and return a specified number of rows from the Sale table, using the following script: USE [CacheSample] GO   SET ANSI_NULLS ON GO   SET QUOTED_IDENTIFIER ON GO   -- ============================================= -- Author:        Robin -- Create date: -- Description:   -- ============================================= CREATE PROCEDURE [dbo].[spGetRunningTotals]       -- Add the parameters for the stored procedure here       @HighestDayCount smallint = null AS BEGIN       -- SET NOCOUNT ON added to prevent extra result sets from       -- interfering with SELECT statements.       
SET NOCOUNT ON;         IF @HighestDayCount IS NULL             SELECT @HighestDayCount = MAX(DayCount) FROM dbo.Sale                   DECLARE @SaleTbl TABLE (DayCount smallint, Sales money, RunningTotal money)         DECLARE @DayCount smallint,                   @Sales money,                   @RunningTotal money         SET @RunningTotal = 0       SET @DayCount = 0         DECLARE rt_cursor CURSOR       FOR       SELECT DayCount, Sales       FROM Sale       ORDER BY DayCount         OPEN rt_cursor         FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales         WHILE @@FETCH_STATUS = 0 AND @DayCount <= @HighestDayCount        BEGIN        SET @RunningTotal = @RunningTotal + @Sales        INSERT @SaleTbl VALUES (@DayCount,@Sales,@RunningTotal)        FETCH NEXT FROM rt_cursor INTO @DayCount,@Sales        END         CLOSE rt_cursor       DEALLOCATE rt_cursor         SELECT DayCount, Sales, RunningTotal       FROM @SaleTbl   END   GO   Create the Sample ASP.NET application In Visual Studio create a new solution and add a class library project called CacheSample.BusinessObjects and an ASP.NET web application called CacheSample.UI. The CacheSample.BusinessObjects project will contain a single class to represent a Sale data item, with all the code to retrieve the sales from the database included in it for simplicity (normally I would at least have a separate Repository or other object that is responsible for retrieving data, and probably a data access layer as well, but for this sample I want to keep it simple). The C# code for the Sale class is shown below: using System; using System.Collections.Generic; using System.Data; using System.Data.SqlClient;   namespace CacheSample.BusinessObjects {     public class Sale     {         public Int16 DayCount { get; set; }         public decimal Sales { get; set; }         public decimal RunningTotal { get; set; }           public static IEnumerable<Sale> GetSales(int? 
highestDayCount) { List<Sale> sales = new List<Sale>(); SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt); if (highestDayCount.HasValue) highestDayCountParameter.Value = highestDayCount; else highestDayCountParameter.Value = DBNull.Value; string connectionStr = System.Configuration.ConfigurationManager.ConnectionStrings["CacheSample"].ConnectionString; using (SqlConnection sqlConn = new SqlConnection(connectionStr)) using (SqlCommand sqlCmd = sqlConn.CreateCommand()) { sqlCmd.CommandText = "spGetRunningTotals"; sqlCmd.CommandType = CommandType.StoredProcedure; sqlCmd.Parameters.Add(highestDayCountParameter); sqlConn.Open(); using (SqlDataReader dr = sqlCmd.ExecuteReader()) { while (dr.Read()) { Sale newSale = new Sale(); newSale.DayCount = dr.GetInt16(0); newSale.Sales = dr.GetDecimal(1); newSale.RunningTotal = dr.GetDecimal(2); sales.Add(newSale); } } } return sales; } } } The static GetSales() method calls the spGetRunningTotals stored procedure and reads each row from the returned SqlDataReader into an instance of the Sale class; it then returns the List of Sale objects as IEnumerable<Sale>. A reference to System.Configuration needs to be added to the CacheSample.BusinessObjects project so that the connection string can be read from the web.config file. In the CacheSample.UI ASP.NET project, create a single web page called ShowSales.aspx, and make this the default start-up page. This page will contain a single button to call the GetSales() method and a label to display the results. 
The HTML markup and the C# code-behind are shown below: ShowSales.aspx <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="ShowSales.aspx.cs" Inherits="CacheSample.UI.ShowSales" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Cache Sample - Show All Sales</title> </head> <body> <form id="form1" runat="server"> <div> <asp:Button ID="btnTest1" runat="server" onclick="btnTest1_Click" Text="Get All Sales" /> &nbsp;&nbsp;&nbsp; <asp:Label ID="lblResults" runat="server"></asp:Label> </div> </form> </body> </html> ShowSales.aspx.cs using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using CacheSample.BusinessObjects; namespace CacheSample.UI { public partial class ShowSales : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void btnTest1_Click(object sender, EventArgs e) { System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch(); stopWatch.Start(); var sales = Sale.GetSales(null); var lastSales = sales.Last(); stopWatch.Stop(); lblResults.Text = string.Format("Count of Sales: {0}, Last DayCount: {1}, Total Sales: {2}. Query took {3} ms", sales.Count(), lastSales.DayCount, lastSales.RunningTotal, stopWatch.ElapsedMilliseconds); } } } Finally we need to add a connection string, called CacheSample, pointing to the CacheSample SQL Server database, to the web.config file: <?xml version="1.0"?> <configuration> <connectionStrings> <add name="CacheSample" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample" providerName="System.Data.SqlClient" /> </connectionStrings> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Run the application and click the button a few times to see how long each call to the database takes. On my system, each query takes about 450ms. Next I shall look at a solution that uses ASP.NET caching to cache the data returned by the query, so that subsequent requests to the GetSales() method are much faster. Adding Data Caching Support I am going to create my caching support in a separate project called CacheSample.Caching, so the next step is to add a class library to the solution. We shall be using the application configuration to define the implementation of our caching system, so we need a reference to System.Configuration added to the project. ICacheProvider<T> Interface The first step in adding caching to our application is to define an interface, called ICacheProvider, in the CacheSample.Caching project, with methods to retrieve data from the cache, or from the data source if it is not present in the cache. Dependency Injection will then be used to inject an implementation of this interface at runtime, allowing the users of the interface (i.e. the CacheSample.BusinessObjects project) to be completely unaware of how the caching is actually implemented. 
As data of any type may be retrieved from the data source, it makes sense to use generics in the interface, with a generic type parameter defining the data type associated with a particular instance of the cache interface implementation. The C# code for the ICacheProvider interface is shown below: using System; using System.Collections.Generic; namespace CacheSample.Caching { public interface ICacheProvider { } public interface ICacheProvider<T> : ICacheProvider { T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry); IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry); } } The empty non-generic interface will be used as a type in a Dictionary generic collection later on, to store instances of the ICacheProvider<T> implementation for reuse; I prefer to use a base interface when doing this, as I think the alternative of using object makes for less clear code. The ICacheProvider<T> interface defines two overloaded Fetch methods; the difference between them is that one returns a single instance of the type T and the other returns an IEnumerable<T>, providing support for easy caching of collections of data items. Both methods take a key parameter, which uniquely identifies the cached data, a delegate of type Func<T> or Func<IEnumerable<T>>, which provides the code to retrieve the data from the store if it is not present in the cache, and absolute or relative expiry policies to define when a cached item should expire. Note that at present there is no support for cache dependencies, but I shall be showing a method of adding this in part two of this article. CacheProviderFactory Class We need a mechanism for creating instances of our ICacheProvider<T> interface, using Dependency Injection to get the implementation of the interface. To do this we shall create a CacheProviderFactory static class in the CacheSample.Caching project. This factory will provide a generic static method called GetCacheProvider<T>(), which shall return instances of ICacheProvider<T>. We can then call this factory method with the relevant data type (for example the Sale class in the CacheSample.BusinessObjects project) to get an instance of ICacheProvider for that type (e.g. call CacheProviderFactory.GetCacheProvider<Sale>() to get the ICacheProvider<Sale> implementation). The C# code for the CacheProviderFactory is shown below: using System; using System.Collections.Generic; using CacheSample.Caching.Configuration; namespace CacheSample.Caching { public static class CacheProviderFactory { private static Dictionary<Type, ICacheProvider> cacheProviders = new Dictionary<Type, ICacheProvider>(); private static object syncRoot = new object(); ///<summary> /// Factory method to create or retrieve an implementation of the /// ICacheProvider interface for type <typeparamref name="T"/>. 
///</summary>         ///<typeparam name="T">  /// The type that this cache provider instance will work with  ///</typeparam>         ///<returns>An instance of the implementation of ICacheProvider for type  ///<typeparamref name="T"/>, as specified by the application  /// configuration</returns>         public static ICacheProvider<T> GetCacheProvider<T>()         {             ICacheProvider<T> cacheProvider = null;             // Get the Type reference for the type parameter T             Type typeOfT = typeof(T);               // Lock the access to the cacheProviders dictionary             // so multiple threads can work with it             lock (syncRoot)             {                 // First check if an instance of the ICacheProvider implementation  // already exists in the cacheProviders dictionary for the type T                 if (cacheProviders.ContainsKey(typeOfT))                     cacheProvider = (ICacheProvider<T>)cacheProviders[typeOfT];                 else                 {                     // There is not already an instance of the ICacheProvider in       // cacheProviders for the type T                     // so we need to create one                       // Get the Type reference for the application's implementation of       // ICacheProvider from the configuration                     Type cacheProviderType = Type.GetType(CacheProviderConfigurationSection.Current. CacheProviderType);                     if (cacheProviderType != null)                     {                         // Now get a Type reference for the Cache Provider with the                         // type T generic parameter                         Type typeOfCacheProviderTypeForT = cacheProviderType.MakeGenericType(new Type[] { typeOfT });                         if (typeOfCacheProviderTypeForT != null)                         {                             // Create the instance of the Cache Provider and add it to // the cacheProviders dictionary for future use                             cacheProvider = (ICacheProvider<T>)Activator. CreateInstance(typeOfCacheProviderTypeForT);                             cacheProviders.Add(typeOfT, cacheProvider);                         }                     }                 }             }               return cacheProvider;                 }     } }   As this code uses Activator.CreateInstance() to create instances of the ICacheProvider<T> implementation, which is a slow process, the factory class maintains a Dictionary of the previously created instances so that a cache provider needs to be created only once for each type. The type of the implementation of ICacheProvider<T> is read from a custom configuration section in the application configuration file, via the CacheProviderConfigurationSection class, which is described below. CacheProviderConfigurationSection Class The implementation of ICacheProvider<T> will be specified in a custom configuration section in the application’s configuration. To handle this create a folder in the CacheSample.Caching project called Configuration, and add a class called CacheProviderConfigurationSection to this folder. This class will extend the System.Configuration.ConfigurationSection class, and will contain a single string property called CacheProviderType. 
The C# code for this class is shown below: using System; using System.Configuration;   namespace CacheSample.Caching.Configuration {     internal class CacheProviderConfigurationSection : ConfigurationSection     {         public static CacheProviderConfigurationSection Current         {             get             {                 return (CacheProviderConfigurationSection) ConfigurationManager.GetSection("cacheProvider");             }         }           [ConfigurationProperty("type", IsRequired=true)]         public string CacheProviderType         {             get             {                 return (string)this["type"];             }         }     } }   Adding Data Caching to the Sales Class We now have enough code in place to add caching to the GetSales() method in the CacheSample.BusinessObjects.Sale class, even though we do not yet have an implementation of the ICacheProvider<T> interface. We need to add a reference to the CacheSample.Caching project to CacheSample.BusinessObjects so that we can use the ICacheProvider<T> interface within the GetSales() method. Once the reference is added, we can first create a unique string key based on the method name and the parameter value, so that the same cache key is used for repeated calls to the method with the same parameter values. Then we get an instance of the cache provider for the Sales type, using the CacheProviderFactory, and pass the existing code to retrieve the data from the database as the retrievalMethod delegate in a call to the Cache Provider Fetch() method. The C# code for the modified GetSales() method is shown below: public static IEnumerable<Sale> GetSales(int? highestDayCount) {     string cacheKey = string.Format("CacheSample.BusinessObjects.GetSalesWithCache({0})", highestDayCount);       return CacheSample.Caching.CacheProviderFactory. GetCacheProvider<Sale>().Fetch(cacheKey,         delegate()         {             List<Sale> sales = new List<Sale>();               SqlParameter highestDayCountParameter = new SqlParameter("@HighestDayCount", SqlDbType.SmallInt);             if (highestDayCount.HasValue)                 highestDayCountParameter.Value = highestDayCount;             else                 highestDayCountParameter.Value = DBNull.Value;               string connectionStr = System.Configuration.ConfigurationManager. ConnectionStrings["CacheSample"].ConnectionString;               using (SqlConnection sqlConn = new SqlConnection(connectionStr))             using (SqlCommand sqlCmd = sqlConn.CreateCommand())             {                 sqlCmd.CommandText = "spGetRunningTotals";                 sqlCmd.CommandType = CommandType.StoredProcedure;                 sqlCmd.Parameters.Add(highestDayCountParameter);                   sqlConn.Open();                   using (SqlDataReader dr = sqlCmd.ExecuteReader())                 {                     while (dr.Read())                     {                         Sale newSale = new Sale();                         newSale.DayCount = dr.GetInt16(0);                         newSale.Sales = dr.GetDecimal(1);                         newSale.RunningTotal = dr.GetDecimal(2);                           sales.Add(newSale);                     }                 }             }               return sales;         },         null,         new TimeSpan(0, 10, 0)); }     This example passes the code to retrieve the Sales data from the database to the Cache Provider as an anonymous method, however it could also be written as a lambda. 
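For example, a rough sketch (not from the original article) of the same call written with a lambda, assuming the ADO.NET code above has been moved into a hypothetical LoadSalesFromDatabase helper that returns IEnumerable<Sale>:

return CacheSample.Caching.CacheProviderFactory.GetCacheProvider<Sale>().Fetch(
    cacheKey,
    () => LoadSalesFromDatabase(highestDayCount), // hypothetical helper wrapping the data access code shown above
    null,                                         // no absolute expiry
    new TimeSpan(0, 10, 0));                      // 10 minute sliding expiry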
The main advantage of using an anonymous function (method or lambda) is that the code inside the anonymous function can access the parameters passed to the GetSales() method. Finally, the absolute expiry is set to null and the relative expiry to 10 minutes, to indicate that the cache entry should be removed 10 minutes after the last request for the data. As ICacheProvider<T> has a Fetch() method that returns IEnumerable<T>, we can simply return the results of the Fetch() method to the caller of the GetSales() method. This should be all that is needed for the GetSales() method to retrieve data from the cache after the first time the data has been retrieved from the database. Implementing an ASP.NET Cache Provider The final step is to actually implement the ICacheProvider<T> interface, and add the implementation details to the web.config file for the dependency injection. The cache provider implementation needs to have access to System.Web. Therefore it could be placed in the CacheSample.UI project, or in its own project that has a reference to System.Web. Implementing the Cache Provider in a separate project is my favoured approach. Create a new project inside the solution called CacheSample.CacheProvider, and add references to System.Web and CacheSample.Caching to this project. Add a class to the project called AspNetCacheProvider. Make the class generic by adding the generic parameter <T> and indicate that the class implements ICacheProvider<T>. The C# code for the AspNetCacheProvider class is shown below: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Caching; using CacheSample.Caching; namespace CacheSample.CacheProvider { public class AspNetCacheProvider<T> : ICacheProvider<T> { #region ICacheProvider<T> Members public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry) { return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry); } public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry) { return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry); } #endregion #region Helper Methods private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? 
relativeExpiry) { U value; if (!TryGetValue<U>(key, out value)) { value = retrieveData(); if (!absoluteExpiry.HasValue) absoluteExpiry = Cache.NoAbsoluteExpiration; if (!relativeExpiry.HasValue) relativeExpiry = Cache.NoSlidingExpiration; HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value); } return value; } private bool TryGetValue<U>(string key, out U value) { object cachedValue = HttpContext.Current.Cache.Get(key); if (cachedValue == null) { value = default(U); return false; } else { try { value = (U)cachedValue; return true; } catch { value = default(U); return false; } } } #endregion } } The two interface Fetch() methods call a private method called FetchAndCache(). This method first checks for an element in the HttpContext.Current.Cache with the specified cache key, and if one exists tries to cast it to the specified type (either T or IEnumerable<T>). If the cached element is found, the FetchAndCache() method simply returns it. If it is not found in the cache, the method calls the retrieveData delegate to get the data from the data source, and then adds the result to the HttpContext.Current.Cache. The final step is to add the AspNetCacheProvider class to the relevant custom configuration section in the CacheSample.UI web.config file. To do this there needs to be a <configSections> element added as the first element in <configuration>. This will match a custom section called <cacheProvider> with the CacheProviderConfigurationSection. Then we add a <cacheProvider> element, with a type attribute set to the fully qualified assembly name of the AspNetCacheProvider class, as shown below: <?xml version="1.0"?> <configuration> <configSections> <section name="cacheProvider" type="CacheSample.Caching.Configuration.CacheProviderConfigurationSection, CacheSample.Caching" /> </configSections> <connectionStrings> <add name="CacheSample" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;Initial Catalog=CacheSample" providerName="System.Data.SqlClient" /> </connectionStrings> <cacheProvider type="CacheSample.CacheProvider.AspNetCacheProvider`1, CacheSample.CacheProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"> </cacheProvider> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> One point to note is that the fully qualified assembly name of the AspNetCacheProvider class includes the notation `1 after the class name, which indicates that it is a generic class with a single generic type parameter. The CacheSample.UI project needs to have references added to CacheSample.Caching and CacheSample.CacheProvider so that the actual application is aware of the relevant cache provider implementation. Conclusion After implementing this solution, you should have a working cache provider mechanism that allows the middle and data access layers to implement caching support when retrieving data, without any knowledge of the actual caching implementation. 
If the UI is not ASP.NET based (for example, WinForms or WPF), the implementation of ICacheProvider<T> would be written around whatever caching technology is available. It could even be a standalone caching system that takes full responsibility for adding and removing items from a global store. The next part of this article will show how this caching mechanism may be extended to provide support for cache dependencies, such as the System.Web.Caching.SqlCacheDependency. Another possible extension would be to cache the cache provider implementations themselves instead of storing them in a static Dictionary in the CacheProviderFactory. This would prevent a build-up of seldom-used cache providers in the application memory, as they could be removed from the cache if not used often enough, although in reality there are unlikely to be vast numbers of cache provider implementation instances, as most applications do not have a massive number of business object or model types.
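To illustrate the non-ASP.NET case mentioned above, here is a minimal sketch (not from the original article) of an ICacheProvider<T> backed by a simple static dictionary; it supports absolute expiry only and ignores the sliding expiry parameter, and the class and member names are my own:

using System;
using System.Collections.Generic;
using CacheSample.Caching;

// Sketch only: a process-wide in-memory cache for a WinForms/WPF host.
public class InMemoryCacheProvider<T> : ICacheProvider<T>
{
    // key -> (cached value, absolute expiry time in UTC)
    private static readonly Dictionary<string, Tuple<object, DateTime>> store =
        new Dictionary<string, Tuple<object, DateTime>>();
    private static readonly object syncRoot = new object();

    public T Fetch(string key, Func<T> retrieveData,
                   DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
    {
        return FetchAndCache<T>(key, retrieveData, absoluteExpiry);
    }

    public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData,
                                DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
    {
        return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry);
    }

    private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry)
    {
        lock (syncRoot)
        {
            Tuple<object, DateTime> entry;
            // Return the cached value if present and not yet expired
            if (store.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
                return (U)entry.Item1;

            // Otherwise load from the data source and cache it
            U value = retrieveData();
            store[key] = Tuple.Create((object)value,
                                      absoluteExpiry ?? DateTime.UtcNow.AddMinutes(10));
            return value;
        }
    }
}

A production version would also need eviction and proper sliding-expiry handling; the point is only that nothing in CacheSample.BusinessObjects has to change to swap the provider.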

    Read the article

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    Having so much issue with NginX configuration since I'm new with NginX. Been using Lighttpd for quite sometime. Here are the base info. New Machine - CentOS 6.3 64 Bit - NginX 1.2.4-1.e16.ngx - Php-FPM 5.3.18-1.e16.remi Old Machine - CentOS 6.2 64Bit - Lighttpd 1.4.25-3.e16 Original Lighttpd config file: ####################################################################### ## ## /etc/lighttpd/lighttpd.conf ## ## check /etc/lighttpd/conf.d/*.conf for the configuration of modules. ## ####################################################################### ####################################################################### ## ## Some Variable definition which will make chrooting easier. ## ## if you add a variable here. Add the corresponding variable in the ## chroot example aswell. ## var.log_root = "/var/log/lighttpd" var.server_root = "/var/www" var.state_dir = "/var/run" var.home_dir = "/var/lib/lighttpd" var.conf_dir = "/etc/lighttpd" ## ## run the server chrooted. ## ## This requires root permissions during startup. ## ## If you run Chrooted set the the variables to directories relative to ## the chroot dir. ## ## example chroot configuration: ## #var.log_root = "/logs" #var.server_root = "/" #var.state_dir = "/run" #var.home_dir = "/lib/lighttpd" #var.vhosts_dir = "/vhosts" #var.conf_dir = "/etc" # #server.chroot = "/srv/www" ## ## Some additional variables to make the configuration easier ## ## ## Base directory for all virtual hosts ## ## used in: ## conf.d/evhost.conf ## conf.d/simple_vhost.conf ## vhosts.d/vhosts.template ## var.vhosts_dir = server_root + "/vhosts" ## ## Cache for mod_compress ## ## used in: ## conf.d/compress.conf ## var.cache_dir = "/var/cache/lighttpd" ## ## Base directory for sockets. ## ## used in: ## conf.d/fastcgi.conf ## conf.d/scgi.conf ## var.socket_dir = home_dir + "/sockets" ## ####################################################################### ####################################################################### ## ## Load the modules. include "modules.conf" ## ####################################################################### ####################################################################### ## ## Basic Configuration ## --------------------- ## server.port = 80 ## ## Use IPv6? ## #server.use-ipv6 = "enable" ## ## bind to a specific IP ## #server.bind = "localhost" ## ## Run as a different username/groupname. ## This requires root permissions during startup. ## server.username = "lighttpd" server.groupname = "lighttpd" ## ## enable core files. ## #server.core-files = "disable" ## ## Document root ## server.document-root = server_root + "/lighttpd" ## ## The value for the "Server:" response field. ## ## It would be nice to keep it at "lighttpd". ## #server.tag = "lighttpd" ## ## store a pid file ## server.pid-file = state_dir + "/lighttpd.pid" ## ####################################################################### ####################################################################### ## ## Logging Options ## ------------------ ## ## all logging options can be overwritten per vhost. ## ## Path to the error log file ## server.errorlog = log_root + "/error.log" ## ## If you want to log to syslog you have to unset the ## server.errorlog setting and uncomment the next line. ## #server.errorlog-use-syslog = "enable" ## ## Access log config ## include "conf.d/access_log.conf" ## ## The debug options are moved into their own file. ## see conf.d/debug.conf for various options for request debugging. 
## include "conf.d/debug.conf" ## ####################################################################### ####################################################################### ## ## Tuning/Performance ## -------------------- ## ## corresponding documentation: ## http://www.lighttpd.net/documentation/performance.html ## ## set the event-handler (read the performance section in the manual) ## ## possible options on linux are: ## ## select ## poll ## linux-sysepoll ## ## linux-sysepoll is recommended on kernel 2.6. ## server.event-handler = "linux-sysepoll" ## ## The basic network interface for all platforms at the syscalls read() ## and write(). Every modern OS provides its own syscall to help network ## servers transfer files as fast as possible ## ## linux-sendfile - is recommended for small files. ## writev - is recommended for sending many large files ## server.network-backend = "linux-sendfile" ## ## As lighttpd is a single-threaded server, its main resource limit is ## the number of file descriptors, which is set to 1024 by default (on ## most systems). ## ## If you are running a high-traffic site you might want to increase this ## limit by setting server.max-fds. ## ## Changing this setting requires root permissions on startup. see ## server.username/server.groupname. ## ## By default lighttpd would not change the operation system default. ## But setting it to 2048 is a better default for busy servers. ## ## With SELinux enabled, this is denied by default and needs to be allowed ## by running the following once : setsebool -P httpd_setrlimit on server.max-fds = 2048 ## ## Stat() call caching. ## ## lighttpd can utilize FAM/Gamin to cache stat call. ## ## possible values are: ## disable, simple or fam. ## server.stat-cache-engine = "simple" ## ## Fine tuning for the request handling ## ## max-connections == max-fds/2 (maybe /3) ## means the other file handles are used for fastcgi/files ## server.max-connections = 1024 ## ## How many seconds to keep a keep-alive connection open, ## until we consider it idle. ## ## Default: 5 ## #server.max-keep-alive-idle = 5 ## ## How many keep-alive requests until closing the connection. ## ## Default: 16 ## #server.max-keep-alive-requests = 18 ## ## Maximum size of a request in kilobytes. ## By default it is unlimited (0). ## ## Uploads to your server cant be larger than this value. ## #server.max-request-size = 0 ## ## Time to read from a socket before we consider it idle. ## ## Default: 60 ## #server.max-read-idle = 60 ## ## Time to write to a socket before we consider it idle. ## ## Default: 360 ## #server.max-write-idle = 360 ## ## Traffic Shaping ## ----------------- ## ## see /usr/share/doc/lighttpd/traffic-shaping.txt ## ## Values are in kilobyte per second. ## ## Keep in mind that a limit below 32kB/s might actually limit the ## traffic to 32kB/s. This is caused by the size of the TCP send ## buffer. ## ## per server: ## #server.kbytes-per-second = 128 ## ## per connection: ## #connection.kbytes-per-second = 32 ## ####################################################################### ####################################################################### ## ## Filename/File handling ## ------------------------ ## ## files to check for if .../ is requested ## index-file.names = ( "index.php", "index.rb", "index.html", ## "index.htm", "default.htm" ) ## index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" ) ## ## deny access the file-extensions ## ## ~ is for backupfiles from vi, emacs, joe, ... 
## .inc is often used for code includes which should in general not be part ## of the document-root url.access-deny = ( "~", ".inc" ) ## ## disable range requests for pdf files ## workaround for a bug in the Acrobat Reader plugin. ## $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## ## url handling modules (rewrite, redirect) ## #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" ) ## ## both rewrite/redirect support back reference to regex conditional using %n ## #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} ## ## which extensions should not be handle via static-file transfer ## ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi ## static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" ) ## ## error-handler for status 404 ## #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' ## #server.errorfile-prefix = "/srv/www/htdocs/errors/status-" ## ## mimetype mapping ## include "conf.d/mime.conf" ## ## directory listing configuration ## include "conf.d/dirlisting.conf" ## ## Should lighttpd follow symlinks? ## server.follow-symlink = "enable" ## ## force all filenames to be lowercase? ## #server.force-lowercase-filenames = "disable" ## ## defaults to /var/tmp as we assume it is a local harddisk ## server.upload-dirs = ( "/var/tmp" ) ## ####################################################################### ####################################################################### ## ## SSL Support ## ------------- ## ## To enable SSL for the whole server you have to provide a valid ## certificate and have to enable the SSL engine.:: ## ## ssl.engine = "enable" ## ssl.pemfile = "/path/to/server.pem" ## ## The HTTPS protocol does not allow you to use name-based virtual ## hosting with SSL. If you want to run multiple SSL servers with ## one lighttpd instance you must use IP-based virtual hosting: :: ## ## $SERVER["socket"] == "10.0.0.1:443" { ## ssl.engine = "enable" ## ssl.pemfile = "/etc/ssl/private/www.example.com.pem" ## server.name = "www.example.com" ## ## server.document-root = "/srv/www/vhosts/example.com/www/" ## } ## ## If you have a .crt and a .key file, cat them together into a ## single PEM file: ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \ ## > /etc/ssl/private/lighttpd.pem ## #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" ## ## optionally pass the CA certificate here. ## ## #ssl.ca-file = "" ## ####################################################################### ####################################################################### ## ## custom includes like vhosts. 
## #include "conf.d/config.conf" #include_shell "cat /etc/lighttpd/vhosts.d/*.conf" ## ####################################################################### ####################################################################### ### Custom Added by me #url.rewrite-once = (".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0", "" => "/index.php") url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) # expire.url = ( "" => "access 1 days" ) include "myvhost-vhosts.conf" ####################################################################### Here is my Vhost file for lighttpd $HTTP["host"] =~ "192.168.8.35$" { server.document-root = "/var/www/lighttpd/qc41022012/public" server.errorlog = "/var/log/lighttpd/error.log" accesslog.filename = "/var/log/lighttpd/access.log" server.error-handler-404 = "/e404.php" } and here is my nginx.conf file user nginx; worker_processes 5; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/testsite/logs/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; # include /etc/nginx/conf.d/*.conf; ## I added this ## include /etc/nginx/sites-available/*; } Here is my NginX Vhost file server { server_name 192.168.8.91; access_log /var/log/nginx/myapps/logs/access.log; error_log /var/log/nginx/myapps/logs/error.log; root /var/www/html/myapps/public; location / { index index.html index.htm index.php; } location = /favicon.ico { return 204; access_log off; log_not_found off; } # location ~ \.php$ { # try_files $uri /index.php; # include /etc/nginx/fastcgi_params; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_param SCRIPT_NAME $fastcgi_script_name; location ~ \.php.*$ { rewrite ^(.*.php)/ $1 last; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_intercept_errors on; # fastcgi_param SCRIPT_FILENAME $document_root/index.php; # fastcgi_param PATH_INFO $uri; # fastcgi_pass 127.0.0.1:9000; # include fastcgi_params; } } We have a custom apps that we created that works great with lighttpd. I went through some headache also when we were trying to figure out how to make it work with lighttpd. this is the line that helps make it work in lighttpd. url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) but I couldn't figure out how to make it works in NginX. The webserver run just fine when we use the phpinfo.php test file. However as soon as I point it to my apps, nothing comes up. Check the error.log file and there's no error. Very mind boggling. I spent over 1 week trying to figure it out with no luck.. Please help?

    Read the article

  • DNSSEC - Ad Flag not activated

    - by Arancha
    Hi all, I have some doubts regarding DNSSEC. I have one server acting as an Authoritative Name Server and another one as a Cache/Resolver. I'm using Bind 9.7.1-P2 and these are my configuration files: Named.conf (Authoritative Server) // Opciones de configuracion del servidor include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; notify yes; #files 65535; dnssec-enable yes; dnssec-validation yes; allow-transfer { 172.23.2.37; 172.23.3.39; }; transfer-format many-answers; transfers-per-ns 5; transfers-in 10; max-transfer-time-in 120; check-names master ignore; listen-on {172.23.2.57; 80.58.102.13; 80.58.102.103; 127.0.0.1; }; }; zone "test.dnssec" { type master; key-directory "keys"; file "db.test.dnssec.signed"; also-notify { 172.23.2.37 ; 172.23.3.39 ; }; allow-transfer { 172.23.2.37 ; 172.23.3.39 ; }; }; test.dnssec zone test.dnssec. 86400 IN SOA ns.test.dnssec. mxadmin.test.dnssec. ( 2010090902 ; serial 21600 ; refresh (6 hours) 3600 ; retry (1 hour) 1814400 ; expire (3 weeks) 172800 ; minimum (2 days) ) 86400 RRSIG SOA 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. eY99laB6PrtETaXLdCS+G8Uq1lIK7d5vxUB1 pAQ9npv/YbvX1pdWZKGojDgPGw8V65Q0zKQo YW1VuBzvwfSRKax+yrjJzvHQGfCZPJWARehK hgLxHOfXLVH7tyndvLD49ZKcWtrop+Tuy4n9 apWWfSJZxCOngwS7zUi0zCTKfPs= ) 86400 NS ns1.test.dnssec. 86400 RRSIG NS 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeo idNuytxbiFnbCOunzvaYpgvDpEr0CPrwXaDL TSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFw aaQXFc3rDLsXjCi+WF0/Z7meteM4jYdx5nrV Qx9pgur7VPbP88bJOqWCPBev2Ho= ) 172800 NSEC a.test.dnssec. NS SOA RRSIG NSEC DNSKEY 172800 RRSIG NSEC 5 2 172800 20101009062248 ( 20100909062248 40665 test.dnssec. E76ayamsAAz8Zcj7060KY0nTFzHPztM/Pkc5 OM0EcP7C5+ocn4L8M2J0rmR3jxfYvCpOk0BQ Zniqn9Aw41Qk068yJ2dfDPwV5zT0+te0nzwC /awJGPMXLzMj4JejYTlTiKfspGDJCG44F+lb lHXdcUhbjXf3loqMQadZFQ/eSn0= ) 86400 DNSKEY 256 3 5 ( AwEAAbQ8qrNN5vetx/7E1VOgXZ7fLqwG1y/i 55hWGCeLbcS95ratT9A6UospOvPSwPTlrFgF RWP67Pubzbsy7/damS1F1+p4GgBQway52Hd1 8HjdHKKC6kIxna9pOJBRfhCdzAsv9LnpRvrw mDpcFAqhdn5k5RqwcUF1eOZrKjxXjAOr ) ; key id = 40665 86400 DNSKEY 257 3 5 ( AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdM ZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVp xXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7m YF29/ZTXB6nmdSxruQlSvYhzkWTaPNtfrUnI UlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWX nPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPm p2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQX ISmAeV1evGomCC/x9DNleDHCszJOptwurzRP Z7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTz CkRnrlvXYJpgzDtgmQxE9Bs= ) ; key id = 59647 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 40665 test.dnssec. sa4W3tvl6n0TkIcq3xzhG17C2O0lRhllrpUd n5Hs6yVo8r7stewP6tm2XscQiAeseDgmv28w s6Mtiz8uPUbrgFRb6SJk7coH2n/2Y3//S9YP NldDFv3luPnnU1TBb3jDsBKIZWHU9yl/cLNA OKUhlMDd40txk+fQi3iiV5Ls9K8= ) 86400 RRSIG DNSKEY 5 2 86400 20101009062248 ( 20100909062248 59647 test.dnssec. b5fz0dEp2co2pVO7biY896XmsJanjQIR69vC MvSF104/9iZk6eGVFi6hsa4aZcXutEjUDESB ynPkDjMWWIIhN6K1jYKGIc/sFKv1IUONRYHF KXGgZhC6aI0B1E4NA9AXLjlBVF60nHdc3iw8 5gTLDjypP3qAZrnzMvdiBopLnVdB25UZYKn8 mGpOuzKqX02TGMCFMlEVtMX4FP/XKAE8UjiQ 5ehC1JvIKIyg/2zM+ot3nmcqqtUfzp/Hweyc aIkl/9wPJPwMedfTqOjfUKFdB+GiZ0Zz16HZ 5MfJui5IGh5Y6Q04kMrnap2V5U7mByTzx/ud V/eFYhmSHGtAXzBjMA== ) a.test.dnssec. 
86400 IN A 1.1.1.1 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzH oU63fHJHQHeQV+fc0Rx8cCmZSzuqk1lSBelV 3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsg YEUQJMfk4FLjYW67DHNcuoCnKbDJhZS0ndVf I474k7ZEZJsGslwk/vcIoFnTa4o= ) 172800 NSEC b.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. TCduf7xPSrWvEAzBO7Kx5haR85yA/lbsswkQ v0QxlskqAqo+9YedGQV+wGblbCIOmkomrYcq u/rXQ5yoQ3SDXd/bw6EFdoQmH8UJOjMc7SdR xY93MjawPB6XXlJsSlbBFPWJwEpILVRhdBFX czdS5VCa1KmhAYZYQp1FY9rMelA= ) b.test.dnssec. 86400 IN A 2.2.2.2 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. f0M6Tcqe6B09ctaN3BGAit4u4cJE8x3Ik8sh gyMu0GN/lMv/Bo7PB6hgylLam3HXtF1pPAzX oYudXmhU8afPapHMXfUitC1lFQB5ZW052ZC7 JXV9MnGULydz1blj2EdN+JL3Za8SJKM0LrLB XdQ+QUV+A/6N7hUV6usz5YmdBeI= ) 172800 NSEC ns1.test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. sc6v19dcOFVa295/Xf1pKxBhbdpEErY8CTDQ fw2fjJf0Y3wL1Y1Mlr5zi5ShceQwgua+6YHE DWNbAPcXrJ0lLMU4DU5r0sAyBiBCgCavngGk i59W+nv11zuIpPMnlaMHpJVfJrQ+c4z7H9MH 77B0fMRFTUnvAXoq6ag8Q5POITI= ) ns1.test.dnssec. 86400 IN A 3.3.3.3 86400 RRSIG A 5 3 86400 20101009062248 ( 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Ke z0tdFiNfxvGbm85XyCtSqJIo2S/ZLVJUv/mG nGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP 5FL8SbjlovVYYAG5woW4p3+os28mmCAJA8gP JTywbcREEhFB4cir2M/QVP+9h+Y= ) 172800 NSEC test.dnssec. A RRSIG NSEC 172800 RRSIG NSEC 5 3 172800 20101009062248 ( 20100909062248 40665 test.dnssec. i7F/ezGl/pGXCC6JyVDaxuwdZMAgv9QLxwzi PTgjCG8Sj6pTIxaQkSLwXsoB9gF77WWBANow R2SWdz0Zai2vWnv/NYoNm9ZfRJEQ9NuExeYp rvX/+lLOHvZXN6tUerIQbWAxO2GwdzHoejSn wReUNVr9MxzZUvuJ33Z7X/7s9VQ= ) Named.conf (Cache/Resolver) include "/etc/rndc.key"; controls { inet 127.0.0.1 allow { localhost; } keys { rndc-key; }; }; options{ version "Peticion no permitida/Query not allowed"; hostname "Peticion no permitida/Query not allowed"; server-id "Peticion no permitida/Query not allowed"; directory "/etc/DNS_RIMA"; pid-file "named.pid"; recursion yes; notify no; #DNSSEC dnssec-enable yes; dnssec-validation yes; listen-on {127.0.0.1; 172.23.2.87; 80.58.102.37; 80.58.102.115; }; #listen-on {127.0.0.1; 80.58.102.37; 80.58.102.115; }; allow-query { telefonica; }; allow-transfer { none; }; recursive-clients 40000; max-cache-size 838860800; rrset-order { order fixed;}; max-ncache-ttl 600; }; trusted-keys { "test.dnssec." 257 3 5 "AwEAAcd4dxWyTgOuqha0DJADUH0pk5jvnwdMZhgZaqnayUdeTh8U9WOjOUHdVCGywZS6NTVpxXqhcegWzh2ZR5VN6thuhezt7kbzLNWbPe7mYF29/ZT XB6nmdSxruQlSvYhzkWTaPNtfrUnIUlbDRxUFWQkSHj9LA1TG76FpR6uqOj1sNrWXnPb/Hwp1Sb2Ik4FlifKb/Vu1+/UnclRJgfPmp2HGTeNYpfk15JHBPSYxJ1TuedXQIdkPGlQXIS mAeV1evGomCC/x9DNleDHCszJOptwurzRPZ7wRXcWnbXz1BU8rAqvUZL3M4UgdNRR5LLTzCkRnrlvXYJpgzDtgmQxE9Bs="; }; I have configured a secure zone (test.dnssec) and I'm trying to perform some queries from the resolver to the Name server (172.23.2.57): /usr/local/bin/dig @172.23.2.57 a.test.dnssec +dnssec ; <<>> DiG 9.7.1-P2 <<>> @172.23.2.57 a.test.dnssec +dnssec ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2654 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 3 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;a.test.dnssec. IN A ;; ANSWER SECTION: a.test.dnssec. 86400 IN A 1.1.1.1 a.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. 
P52N9ypCrYsgS4CFcUmII0xjyE6KNL9ndhzHoU63fHJHQHeQV+ fc0Rx8 cCmZSzuqk1lSBelV3Gcl9UNNuCAQ4ORQ/yJkiZ1zn7h93Mep9qsgYEUQ JMfk4FLjYW67DHNcuoCnKbDJhZS0ndVfI474k7ZEZJsGslwk/vcIoFnT a4o= ;; AUTHORITY SECTION: test.dnssec. 86400 IN NS ns1.test.dnssec. test.dnssec. 86400 IN RRSIG NS 5 2 86400 20101009062248 20100909062248 40665 test.dnssec. lmlP/Mb2qEXPSlajgSDn/CqWk/jokVCmqjeoidNuytxbiFnbCOunzvaY pgvDpEr0CPrwXaDLTSnb/w53tZl7GHRImJo50vwwNZljLzNT6CFwaaQX Fc3rDLsXjCi+WF0/Z7meteM4jYdx5nrVQx9pgur7VPbP88bJOqWCPBev 2Ho= ;; ADDITIONAL SECTION: ns1.test.dnssec. 86400 IN A 3.3.3.3 ns1.test.dnssec. 86400 IN RRSIG A 5 3 86400 20101009062248 20100909062248 40665 test.dnssec. UQ3hR/++ta1GokxGz8Yh+GomMcA+xhd3z2Kez0tdFiNfxvGbm85XyCtS qJIo2S/ZLVJUv/mGnGJbicTfJSziKzYZsD7dp0WJiUK3l7lQ/HpP5FL8 SbjlovVYYAG5woW4p3+os28mmCAJA8gPJTywbcREEhFB4cir2M/QVP+9 h+Y= ;; Query time: 1 msec ;; SERVER: 172.23.2.57#53(172.23.2.57) ;; WHEN: Thu Sep 9 09:47:14 2010 ;; MSG SIZE rcvd: 605 I obtain the right answer along with the RRSIG records, but the problem is that I'm not seeing the ad flag activated. Any idea about what is wrong????
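For what it's worth, the ad flag is only set by a validating resolver on answers it has validated; an authoritative server answering for its own zone sets aa instead. A sketch of the comparison check (assuming it is run on the cache/resolver host itself, which listens on 127.0.0.1) would be:

dig @127.0.0.1 a.test.dnssec +dnssec        # validating resolver: ad flag expected if validation succeeds
dig @172.23.2.57 a.test.dnssec +dnssec      # authoritative server: aa set, ad not expected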

    Read the article

  • FreeBSD 8.1 unstable network connection

    - by frankcheong
    I have three FreeBSD 8.1 running on three different hardware and therefore consist of different network adapter as well (bce, bge and igb). I found that the network connection is kind of unstable which I have tried to scp some 10MB file and found that I cannot always get the files completed successfully. I have further checked with my network admin and he claim that the problem is being caused by the network driver which cannot support the load whereby he tried to ping using huge packet size (around 15k) and my server will drop packet consistently at a regular interval. I found that this statement may not be valid since the three server is using three different network drive and it would be quite impossible that the same problem is being caused by three different network adapter and thus different network driver. Since then I have tried to tune up the performance by playing around with the /etc/sysctl.conf figures with no luck. kern.ipc.somaxconn=1024 kern.ipc.shmall=3276800 kern.ipc.shmmax=1638400000 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # Required by pf net.inet.ip.forwarding=1 #Network Performance Tuning kern.ipc.maxsockbuf=16777216 net.inet.tcp.rfc1323=1 net.inet.tcp.sendbuf_max=16777216 net.inet.tcp.recvbuf_max=16777216 # Setting specifically for 1 or even 10Gbps network net.local.stream.sendspace=262144 net.local.stream.recvspace=262144 net.inet.tcp.local_slowstart_flightsize=10 net.inet.tcp.nolocaltimewait=1 net.inet.tcp.mssdflt=1460 net.inet.tcp.sendbuf_auto=1 net.inet.tcp.sendbuf_inc=16384 net.inet.tcp.recvbuf_auto=1 net.inet.tcp.recvbuf_inc=524288 net.inet.tcp.sendspace=262144 net.inet.tcp.recvspace=262144 net.inet.udp.recvspace=262144 kern.ipc.maxsockbuf=16777216 kern.ipc.nmbclusters=32768 net.inet.tcp.delayed_ack=1 net.inet.tcp.delacktime=100 net.inet.tcp.slowstart_flightsize=179 net.inet.tcp.inflight.enable=1 net.inet.tcp.inflight.min=6144 # Reduce the cache size of slow start connection net.inet.tcp.hostcache.expire=1 Our network admin also claim that they see quite a lot of network up and down from their cisco switch log while I cannot find any up down message inside the dmesg. Have further checked the netstat -s but dont have concrete idea. tcp: 133695291 packets sent 39408539 data packets (3358837321 bytes) 61868 data packets (89472844 bytes) retransmitted 24 data packets unnecessarily retransmitted 0 resends initiated by MTU discovery 50756141 ack-only packets (2148 delayed) 0 URG only packets 0 window probe packets 4372385 window update packets 39781869 control packets 134898031 packets received 72339403 acks (for 3357601899 bytes) 190712 duplicate acks 0 acks for unsent data 59339201 packets (3647021974 bytes) received in-sequence 114 completely duplicate packets (135202 bytes) 27 old duplicate packets 0 packets with some dup. 
data (0 bytes duped) 42090 out-of-order packets (60817889 bytes) 0 packets (0 bytes) of data after window 0 window probes 3953896 window update packets 64181 packets received after close 0 discarded for bad checksums 0 discarded for bad header offset fields 0 discarded because packet too short 45192 discarded due to memory problems 19945391 connection requests 1323420 connection accepts 0 bad connection attempts 0 listen queue overflows 0 ignored RSTs in the windows 21133581 connections established (including accepts) 21268724 connections closed (including 32737 drops) 207874 connections updated cached RTT on close 207874 connections updated cached RTT variance on close 132439 connections updated cached ssthresh on close 42392 embryonic connections dropped 72339338 segments updated rtt (of 69477829 attempts) 390871 retransmit timeouts 0 connections dropped by rexmit timeout 0 persist timeouts 0 connections dropped by persist timeout 0 Connections (fin_wait_2) dropped because of timeout 13990 keepalive timeouts 2 keepalive probes sent 13988 connections dropped by keepalive 173044 correct ACK header predictions 36947371 correct data packet header predictions 1323420 syncache entries added 0 retransmitted 0 dupsyn 0 dropped 1323420 completed 0 bucket overflow 0 cache overflow 0 reset 0 stale 0 aborted 0 badack 0 unreach 0 zone failures 1323420 cookies sent 0 cookies received 1864 SACK recovery episodes 18005 segment rexmits in SACK recovery episodes 26066896 byte rexmits in SACK recovery episodes 147327 SACK options (SACK blocks) received 87473 SACK options (SACK blocks) sent 0 SACK scoreboard overflow 0 packets with ECN CE bit set 0 packets with ECN ECT(0) bit set 0 packets with ECN ECT(1) bit set 0 successful ECN handshakes 0 times ECN reduced the congestion window udp: 5141258 datagrams received 0 with incomplete header 0 with bad data length field 0 with bad checksum 1 with no checksum 0 dropped due to no socket 129616 broadcast/multicast datagrams undelivered 0 dropped due to full socket buffers 0 not for hashed pcb 5011642 delivered 5016050 datagrams output 0 times multicast source filter matched sctp: 0 input packets 0 datagrams 0 packets that had data 0 input SACK chunks 0 input DATA chunks 0 duplicate DATA chunks 0 input HB chunks 0 HB-ACK chunks 0 input ECNE chunks 0 input AUTH chunks 0 chunks missing AUTH 0 invalid HMAC ids received 0 invalid secret ids received 0 auth failed 0 fast path receives all one chunk 0 fast path multi-part data 0 output packets 0 output SACKs 0 output DATA chunks 0 retransmitted DATA chunks 0 fast retransmitted DATA chunks 0 FR's that happened more than once to same chunk 0 intput HB chunks 0 output ECNE chunks 0 output AUTH chunks 0 ip_output error counter Packet drop statistics: 0 from middle box 0 from end host 0 with data 0 non-data, non-endhost 0 non-endhost, bandwidth rep only 0 not enough for chunk header 0 not enough data to confirm 0 where process_chunk_drop said break 0 failed to find TSN 0 attempt reverse TSN lookup 0 e-host confirms zero-rwnd 0 midbox confirms no space 0 data did not match TSN 0 TSN's marked for Fast Retran Timeouts: 0 iterator timers fired 0 T3 data time outs 0 window probe (T3) timers fired 0 INIT timers fired 0 sack timers fired 0 shutdown timers fired 0 heartbeat timers fired 0 a cookie timeout fired 0 an endpoint changed its cookiesecret 0 PMTU timers fired 0 shutdown ack timers fired 0 shutdown guard timers fired 0 stream reset timers fired 0 early FR timers fired 0 an asconf timer fired 0 auto close timer fired 0 asoc 
free timers expired 0 inp free timers expired 0 packet shorter than header 0 checksum error 0 no endpoint for port 0 bad v-tag 0 bad SID 0 no memory 0 number of multiple FR in a RTT window 0 RFC813 allowed sending 0 RFC813 does not allow sending 0 times max burst prohibited sending 0 look ahead tells us no memory in interface 0 numbers of window probes sent 0 times an output error to clamp down on next user send 0 times sctp_senderrors were caused from a user 0 number of in data drops due to chunk limit reached 0 number of in data drops due to rwnd limit reached 0 times a ECN reduced the cwnd 0 used express lookup via vtag 0 collision in express lookup 0 times the sender ran dry of user data on primary 0 same for above 0 sacks the slow way 0 window update only sacks sent 0 sends with sinfo_flags !=0 0 unordered sends 0 sends with EOF flag set 0 sends with ABORT flag set 0 times protocol drain called 0 times we did a protocol drain 0 times recv was called with peek 0 cached chunks used 0 cached stream oq's used 0 unread messages abandonded by close 0 send burst avoidance, already max burst inflight to net 0 send cwnd full avoidance, already max burst inflight to net 0 number of map array over-runs via fwd-tsn's ip: 137814085 total packets received 0 bad header checksums 0 with size smaller than minimum 0 with data size < data length 0 with ip length > max ip packet size 0 with header length < data size 0 with data length < header length 0 with bad options 0 with incorrect version number 1200 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 300 packets reassembled ok 137813009 packets for this host 530 packets for unknown/unsupported protocol 0 packets forwarded (0 packets fast forwarded) 61 packets not forwardable 0 packets received for unknown multicast group 0 redirects sent 137234598 packets sent from this host 0 packets sent with fabricated ip header 685307 output packets dropped due to no bufs, etc. 
52 output packets discarded due to no route 300 output datagrams fragmented 1200 fragments created 0 datagrams that can't be fragmented 0 tunneling packets that can't find gif 0 datagrams with bad address in header icmp: 0 calls to icmp_error 0 errors not generated in response to an icmp message Output histogram: echo reply: 305 0 messages with bad code fields 0 messages less than the minimum length 0 messages with bad checksum 0 messages with bad length 0 multicast echo requests ignored 0 multicast timestamp requests ignored Input histogram: destination unreachable: 530 echo: 305 305 message responses generated 0 invalid return addresses 0 no return routes ICMP address mask responses are disabled igmp: 0 messages received 0 messages received with too few bytes 0 messages received with wrong TTL 0 messages received with bad checksum 0 V1/V2 membership queries received 0 V3 membership queries received 0 membership queries received with invalid field(s) 0 general queries received 0 group queries received 0 group-source queries received 0 group-source queries dropped 0 membership reports received 0 membership reports received with invalid field(s) 0 membership reports received for groups to which we belong 0 V3 reports received without Router Alert 0 membership reports sent arp: 376748 ARP requests sent 3207 ARP replies sent 245245 ARP requests received 80845 ARP replies received 326090 ARP packets received 267712 total packets dropped due to no ARP entry 108876 ARP entrys timed out 0 Duplicate IPs seen ip6: 2226633 total packets received 0 with size smaller than minimum 0 with data size < data length 0 with bad options 0 with incorrect version number 0 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 0 fragments that exceeded limit 0 packets reassembled ok 2226633 packets for this host 0 packets forwarded 0 packets not forwardable 0 redirects sent 2226633 packets sent from this host 0 packets sent with fabricated ip header 0 output packets dropped due to no bufs, etc. 
8 output packets discarded due to no route 0 output datagrams fragmented 0 fragments created 0 datagrams that can't be fragmented 0 packets that violated scope rules 0 multicast packets which we don't join Input histogram: UDP: 2226633 Mbuf statistics: 962679 one mbuf 1263954 one ext mbuf 0 two or more ext mbuf 0 packets whose headers are not continuous 0 tunneling packets that can't find gif 0 packets discarded because of too many headers 0 failures of source address selection Source addresses selection rule applied: icmp6: 0 calls to icmp6_error 0 errors not generated in response to an icmp6 message 0 errors not generated because of rate limitation 0 messages with bad code fields 0 messages < minimum length 0 bad checksums 0 messages with bad length Histogram of error messages to be generated: 0 no route 0 administratively prohibited 0 beyond scope 0 address unreachable 0 port unreachable 0 packet too big 0 time exceed transit 0 time exceed reassembly 0 erroneous header field 0 unrecognized next header 0 unrecognized option 0 redirect 0 unknown 0 message responses generated 0 messages with too many ND options 0 messages with bad ND options 0 bad neighbor solicitation messages 0 bad neighbor advertisement messages 0 bad router solicitation messages 0 bad router advertisement messages 0 bad redirect messages 0 path MTU changes rip6: 0 messages received 0 checksum calculations on inbound 0 messages with bad checksum 0 messages dropped due to no socket 0 multicast messages dropped due to no socket 0 messages dropped due to full socket buffers 0 delivered 0 datagrams output netstat -m 516/5124/5640 mbufs in use (current/cache/total) 512/1634/2146/32768 mbuf clusters in use (current/cache/total/max) 512/1536 mbuf+clusters out of packet secondary zone in use (current/cache) 0/1303/1303/12800 4k (page size) jumbo clusters in use (current/cache/total/max) 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max) 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max) 1153K/9761K/10914K bytes allocated to network (current/cache/total) 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters) 0/0/0 requests for jumbo clusters denied (4k/9k/16k) 0/8/6656 sfbufs in use (current/peak/max) 0 requests for sfbufs denied 0 requests for sfbufs delayed 0 requests for I/O initiated by sendfile 0 calls to protocol drain routines Anyone got an idea what might be the possible cause?
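One generic check (a sketch, not from the original post) is to look at the per-interface error and drop counters rather than the protocol statistics, since those help separate NIC/driver losses from problems further out on the network:

netstat -i                 # per-interface input/output error columns
netstat -I bce0 -w 1       # live per-second counters for one interface (interface name is just an example)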

    Read the article

  • Unstable DNS with bind

    - by yasser abd
    we have a Centos machine called jupiter, on which I have installed bind9, On every other machine the DNS is set to be the IP address of jupiter (192.168.2.101), as you can see in the output of the following command in windows >ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : mypcs Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme 57xx Gigabit Controller Physical Address. . . . . . . . . : 00-1A-A0-AC-E4-CC DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::c16d:3ae4:5907:30c4%8(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.2.98(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : Thursday, September 20, 2012 10:26:11 AM Lease Expires . . . . . . . . . . : Sunday, September 23, 2012 10:26:10 AM Default Gateway . . . . . . . . . : 192.168.2.1 DHCP Server . . . . . . . . . . . : 192.168.2.1 DHCPv6 IAID . . . . . . . . . . . : 201333408 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-16-3A-50-01-00-1A-A0-AC-E4-CC DNS Servers . . . . . . . . . . . : 192.168.2.101 192.168.2.1 192.168.2.1 NetBIOS over Tcpip. . . . . . . . : Enabled All machines can always nslookup one of the domain (mydomain.com) that is set in the jupiter's DNS server, you can see that in the output of nslookup on the same windows machine: >nslookup mydomain.com Server: UnKnown Address: 192.168.2.101 Name: mydomain.com Address: 192.168.2.100 The problem is, sometimes mydomain.com can not be pinged, here is the output of the ping on the same windows machine >ping mydomain.com Ping request could not find host mydomain.com. Please check the name and try again. This looks very random, and happens once in a while, so the machine can lookup the DNS records but can't ping it, nor can browse the website that is hosted on mydomain.com, which should resolve to 192.168.2.100 On a linux machine that has the same DNS settings, the output of dig command for mydomain is as follows: $ dig mydomain.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36090 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1 ;; QUESTION SECTION: ;mydomain.com. IN A ;; ANSWER SECTION: mydomain.com. 86400 IN A 192.168.2.100 ;; AUTHORITY SECTION: mydomain.com. 86400 IN NS jupiter. ;; ADDITIONAL SECTION: jupiter. 86400 IN A 192.168.2.101 ;; Query time: 1 msec ;; SERVER: 192.168.2.101#53(192.168.2.101) ;; WHEN: Thu Sep 20 16:32:14 2012 ;; MSG SIZE rcvd: 83 We've never had the same problem on MACs, they always resolve mydomain.com Here is how I have defined mydomain.com on Bind9's configs on Jupiter, notice that the name of the machine on 192.168.2.100 is venus, so I have this file: /var/named/named.venus: $TTL 1D @ IN SOA jupiter. admin.ourcompany.com. ( 2003052800 ; serial 86400 ; refresh 300 ; retry 604800 ; expire 3600 ; minimum ) @ IN NS jupiter. 
@ IN A 192.168.2.100 * IN A 192.168.2.100 /var/named/zones/named.venus.zone zone "mydomain.com" IN {type master;file "/var/named/named.venus";allow-update {none;};}; One thing to note is that I haven't defined reverse DNS lookups; only the forward lookups are defined in the Bind9 configs, and I'm not sure whether that's relevant. So my question is: why is this so unstable, and what could be the cause? (A small probe that may help narrow this down is sketched below.)
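A hedged observation, not a definitive answer: the ipconfig output above lists 192.168.2.1 (the gateway) as an additional DNS server, and Windows will quietly fail over to it when the primary is slow to answer; if that resolver knows nothing about mydomain.com, lookups will fail intermittently even though jupiter itself always answers correctly. A small probe like the following, using the two server addresses from the ipconfig output, can show which server produces the misses:
$ for srv in 192.168.2.101 192.168.2.1; do for i in $(seq 1 20); do dig +short @$srv mydomain.com A | grep -q '192\.168\.2\.100' || echo "miss from $srv (attempt $i)"; sleep 1; done; done
If only 192.168.2.1 produces misses, pointing the clients exclusively at jupiter (or teaching the router's DNS about the zone) would be the thing to try.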

    Read the article

  • Asset Pipeline acting up

    - by Abram
    Ok, so my asset pipeline has suddenly started acting up on my development machine. JS functions that previously worked are now throwing "not a function" errors.. I know I must be doing something wrong. A minute ago the datatables jquery function was working, then it was throwing an error, then it was working, and now it's not working or throwing an error. Here is my application.js //= require jquery //= require jquery-ui //= require jquery_ujs //= require_self //= require_tree . //= require dataTables/jquery.dataTables //= require dataTables/jquery.dataTables.bootstrap //= require bootstrap //= require bootstrap-tooltip //= require bootstrap-popover //= require bootstrap-tab //= require bootstrap-modal //= require bootstrap-alert //= require bootstrap-dropdown //= require jquery.ui.addresspicker //= require raty //= require jquery.alphanumeric //= require jquery.formrestrict //= require select2 //= require chosen/chosen.jquery //= require highcharts //= require jquery.lazyload Here is some of my layout header: <%= stylesheet_link_tag "application", media: "all" %> <%= yield(:scripthead) %> <%= javascript_include_tag "application" %> <%= csrf_meta_tags %> <%= yield(:head) %> Above I am using the yield to load up online scripts from google as they're only needed on some pages, and generally slow down the site if included in the application layout. I tried removing the yield but things were still broken, even after clearing the browser cache and running rake assets:clean (just to be on the safe side). Here's what shows up between CSS and metatags (for a page with nothin in the yield scripthead): <script src="/assets/jquery.js?body=1" type="text/javascript"></script> <script src="/assets/jquery-ui.js?body=1" type="text/javascript"></script> <script src="/assets/jquery_ujs.js?body=1" type="text/javascript"></script> <script src="/assets/application.js?body=1" type="text/javascript"></script> <script src="/assets/aidmodels.js?body=1" type="text/javascript"></script> <script src="/assets/audio.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-alert.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-dropdown.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-modal.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-popover.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-tab.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-tooltip.js?body=1" type="text/javascript"></script> <script src="/assets/branches.js?body=1" type="text/javascript"></script> <script src="/assets/charts.js?body=1" type="text/javascript"></script> <script src="/assets/chosen/backup_chosen.jquery.js?body=1" type="text/javascript"></script> <script src="/assets/chosen/chosen.jquery.js?body=1" type="text/javascript"></script> <script src="/assets/consumers.js?body=1" type="text/javascript"></script> <script src="/assets/dispensers.js?body=1" type="text/javascript"></script> <script src="/assets/favorites.js?body=1" type="text/javascript"></script> <script src="/assets/features.js?body=1" type="text/javascript"></script> <script src="/assets/generic_styles.js?body=1" type="text/javascript"></script> <script src="/assets/gmaps4rails/gmaps4rails.base.js?body=1" type="text/javascript"></script> <script src="/assets/gmaps4rails/gmaps4rails.bing.js?body=1" type="text/javascript"></script> <script src="/assets/gmaps4rails/gmaps4rails.googlemaps.js?body=1" type="text/javascript"></script> 
<script src="/assets/gmaps4rails/gmaps4rails.mapquest.js?body=1" type="text/javascript"></script> <script src="/assets/gmaps4rails/gmaps4rails.openlayers.js?body=1" type="text/javascript"></script> <script src="/assets/highcharts.js?body=1" type="text/javascript"></script> <script src="/assets/jquery-ui-1.8.18.custom.min.js?body=1" type="text/javascript"></script> <script src="/assets/jquery.alphanumeric.js?body=1" type="text/javascript"></script> <script src="/assets/jquery.formrestrict.js?body=1" type="text/javascript"></script> <script src="/assets/jquery.lazyload.js?body=1" type="text/javascript"></script> <script src="/assets/jquery.ui.addresspicker.js?body=1" type="text/javascript"></script> <script src="/assets/likes.js?body=1" type="text/javascript"></script> <script src="/assets/messages.js?body=1" type="text/javascript"></script> <script src="/assets/overalls.js?body=1" type="text/javascript"></script> <script src="/assets/pages.js?body=1" type="text/javascript"></script> <script src="/assets/questions.js?body=1" type="text/javascript"></script> <script src="/assets/raty.js?body=1" type="text/javascript"></script> <script src="/assets/reviews.js?body=1" type="text/javascript"></script> <script src="/assets/sessions.js?body=1" type="text/javascript"></script> <script src="/assets/styles.js?body=1" type="text/javascript"></script> <script src="/assets/tickets.js?body=1" type="text/javascript"></script> <script src="/assets/universities.js?body=1" type="text/javascript"></script> <script src="/assets/users.js?body=1" type="text/javascript"></script> <script src="/assets/dataTables/jquery.dataTables.js?body=1" type="text/javascript"></script> <script src="/assets/dataTables/jquery.dataTables.bootstrap.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-transition.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-affix.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-button.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-carousel.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-collapse.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-scrollspy.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap-typeahead.js?body=1" type="text/javascript"></script> <script src="/assets/bootstrap.js?body=1" type="text/javascript"></script> <script src="/assets/select2.js?body=1" type="text/javascript"></script> From application.rb: config.assets.initialize_on_precompile = false # Enable the asset pipeline config.assets.enabled = true config.action_controller.assets_dir = "#{File.dirname(File.dirname(__FILE__))}/public" # Version of your assets, change this if you want to expire all your assets config.assets.version = '1.0' I'm sorry, I'm not sure what else to include to help with this puzzle, but any advise would be appreciated. I was having no problems before I started trying to upload to heroku and now everything's gone haywire. EDIT: In the console at the moment I'm seeing Uncaught TypeError: Cannot read property 'Constructor' of undefined bootstrap-popover.js:33 Uncaught ReferenceError: google is not defined jquery.ui.addresspicker.js:25 Uncaught TypeError: Object [object Object] has no method 'popover' overall:476

    Read the article

  • Storage model for various user setting and attributes in database?

    - by dvd
    I'm currently trying to upgrade a user management system for one web application. This web application is used to provide remote access to various networking equipment for educational purposes. All equipment is assigned to various pods, which users can book for periods of time. The current system is very simple - just 2 user types: administrators and students. All their security and other attributes are mostly hardcoded. I want to change this to something like the following model: user <-- (1..n)profile <--- (1..n) attributes. I.e. user can be assigned several profiles and each profile can have multiple attributes. At runtime all profiles and attributes are merged into single active profile. Some examples of attributes i'm planning to implement: EXPIRATION_DATE - single value, value type: date, specifies when user account will expire; ACCESS_POD - single value, value type: ref to object of Pod class, specifies which pod the user is allowed to book, user profile can have multiple such attributes with different values; TIME_QUOTA - single value, value type: integer, specifies maximum length of time for which student can reserve equipment. CREDIT_CHARGING - multi valued, specifies how much credits will be assigned to user over period of time. (Reservation of devices will cost credits, which will regenerate over time); Security permissions and user preferences can end up as profile or user attributes too: i.e CAN_CREATE_USERS, CAN_POST_NEWS, CAN_EDIT_DEVICES, FONT_SIZE, etc.. This way i could have, for example: students of course A will have profiles STUDENT (with basic attributes) and PROFILE A (wich grants acces to pod A). Students of course B will have profiles: STUDENT, PROFILE B(wich grants to pod B and have increased time quotas). I'm using Spring and Hibernate frameworks for this application and MySQL for database. For this web application i would like to stay within boundaries of these tools. The problem is, that i can't figure out how to best represent all these attributes in database. I also want to create some kind of unified way of retrieveing these attributes and their values. Here is the model i've come up with. Base classes. public abstract class Attribute{ private Long id; Attribute() {} abstract public String getName(); public Long getId() {return id; } void setId(Long id) {this.id = id;} } public abstract class SimpleAttribute extends Attribute{ public abstract Serializable getValue(); abstract void setValue(Serializable s); @Override public boolean equals(Object obj) { ... } @Override public int hashCode() { ... } } Simple attributes can have only one value of any type (including object and enum). Here are more specific attributes: public abstract class IntAttribute extends SimpleAttribute { private Integer value; public Integer getValue() { return value; } void setValue(Integer value) { this.value = value;} void setValue(Serializable s) { setValue((Integer)s); } } public class MaxOrdersAttribute extends IntAttribute { public String getName() { return "Maximum outstanding orders"; } } public final class CreditRateAttribute extends IntAttribute { public String getName() { return "Credit Regeneration Rate"; } } All attributes stored stored using Hibenate variant "table per class hierarchy". 
Mapping: <class name="ru.mirea.rea.model.abac.Attribute" table="ATTRIBUTES" abstract="true" > <id name="id" column="id"> <generator class="increment" /> </id> <discriminator column="attributeType" type="string"/> <subclass name="ru.mirea.rea.model.abac.SimpleAttribute" abstract="true"> <subclass name="ru.mirea.rea.model.abac.IntAttribute" abstract="true" > <property name="value" column="intVal" type="integer"/> <subclass name="ru.mirea.rea.model.abac.CreditRateAttribute" discriminator-value="CreditRate" /> <subclass name="ru.mirea.rea.model.abac.MaxOrdersAttribute" discriminator-value="MaxOrders" /> </subclass> <subclass name="ru.mirea.rea.model.abac.DateAttribute" abstract="true" > <property name="value" column="dateVal" type="timestamp"/> <subclass name="ru.mirea.rea.model.abac.ExpirationDateAttribute" discriminator-value="ExpirationDate" /> </subclass> <subclass name="ru.mirea.rea.model.abac.PodAttribute" abstract="true" > <many-to-one name="value" column="podVal" class="ru.mirea.rea.model.pods.Pod"/> <subclass name="ru.mirea.rea.model.abac.PodAccessAttribute" discriminator-value="PodAccess" lazy="false"/> </subclass> <subclass name="ru.mirea.rea.model.abac.SecurityPermissionAttribute" discriminator-value="SecurityPermission" lazy="false"> <property name="value" column="spVal" type="ru.mirea.rea.db.hibernate.customTypes.SecurityPermissionType"/> </subclass> </subclass> </class> SecurityPermissionAttribute uses an enumeration of various permissions as its value. Several types of attributes implement the GrantedAuthority interface and can be used with Spring Security for authentication and authorization. Attributes can be created like this: public final class AttributeManager { public <T extends SimpleAttribute> T createSimpleAttribute(Class<T> c, Serializable value) { Session session = HibernateUtil.getCurrentSession(); T att = null; ... att = c.newInstance(); att.setValue(value); session.save(att); session.flush(); ... return att; } public <T extends SimpleAttribute> List<T> findSimpleAttributes(Class<T> c) { List<T> result = new ArrayList<T>(); Session session = HibernateUtil.getCurrentSession(); List<T> temp = session.createCriteria(c).list(); result.addAll(temp); return result; } } They are then retrieved through the User Profiles to which they are assigned. I do not expect a very large number of rows in the ATTRIBUTES table, but are there any serious drawbacks to such a design? (A quick way to inspect how rows are spread across the single table is sketched below.)
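Not a verdict on the design, but one practical note: with the table-per-class-hierarchy mapping above, every subclass shares the single ATTRIBUTES table, so each value column (intVal, dateVal, podVal, spVal) is populated only for its own subtree and stays NULL elsewhere; that is normally fine for a small table and keeps polymorphic queries cheap. Grouping by the discriminator column shows how rows spread across the hierarchy - here via the mysql client, where the database name and user are placeholders:
$ mysql -u rea -p rea_db -e "SELECT attributeType, COUNT(*) AS cnt FROM ATTRIBUTES GROUP BY attributeType;"
The main drawbacks to keep an eye on are that the per-subclass columns cannot carry NOT NULL constraints and that the table grows a new column for every new value type.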

    Read the article

  • SINGLE SIGN ON SECURITY THREAT! FACEBOOK access_token broadcast in the open/clear

    - by MOKANA
    Subsequent to my posting there was a remark made that this was not really a question, but I thought I did indeed postulate one. So that there is no ambiguity, here is the question with a lead-in: Since there is no data sent from Facebook during the Canvas Load process that is not at some point divulged, including the access_token, session and other data that could uniquely identify a user, does anyone see any way, other than adding one more layer, i.e., a password, sent over the wire via HTTPS along with the access_token, that will ensure unique, untampered-with security for the user? Using Wireshark I captured the local broadcast while loading my Canvas Application page. I was hugely surprised to see the access_token broadcast in the open, viewable for anyone to see. This access_token is appended to any https call to the Facebook OpenGraph API. Using Facebook as a single-click log-on has now raised huge concerns for me. It is stored in a session object in memory and the cookie is cleared upon app termination, and after reviewing the FB.Init calls I saw a lot of HTTPS calls, so I assumed the access_token was always encrypted. But last night I saw in the status bar a call that was simply an http call and included the App ID, so I felt I should sniff the Application Canvas load sequence. Today I did sniff the broadcast, and in the attached image you can see that there are http calls with the access_token being broadcast in the open and clear for anyone to gain access to. Am I missing something? Is what I am seeing, and my interpretation of it, really correct? If anyone can sniff and get the access_token they can theoretically make calls to the Graph API via https, even though the callback would still need to be the site established in Facebook's application setup. But what is truly a security threat is anyone using the access_token for access to their own site. I do not see the value of a single sign-on via Facebook if the only thing that was established as secure was the access_token - because from what I can see it clearly is not secure. Access tokens that never have an expire date do not change. Access_tokens are different for every user, so access to another site could be held tight to just a single user, but compromising even a single user's data is unacceptable. http://www.creatingstory.com/images/InTheOpen.png Went back and did more research on this: FINDINGS: Went back and re-ran the canvas application to verify that it was not any of my code that was broadcasting. In this call: HTTP GET /connect.php/en_US/js/CacheData HTTP/1.1 The USER ID is clearly visible in the cookie. So USER IDs are fully visible, but they are already: anyone can go to pretty much anyone's page, hover over the image, and see the USER ID. So no big threat. APP_IDs are also easily obtainable - but . . . http://www.creatingstory.com/images/InTheOpen2.png The above file clearly shows the FULL ACCESS TOKEN in the OPEN via a Facebook-initiated call. Am I wrong? TELL ME I AM WRONG, because I want to be wrong about this. I have since reset my app secret, so I am showing the real sniff of the Canvas Page being loaded. Additional data 02/20/2011: @ifaour - I appreciate the time you took to compile your response. I am pretty familiar with the OAuth process and have a pretty solid understanding of the signed_request unpacking and utilization of the access_token.
I perform a substantial amount of my processing on the server and my Facebook server side flows are all complete and function without any flaw that I know of. The application secret is secure and never passed to the front end application and is also changed regularly. I am being as fanatical about security as I can be, knowing there is so much I don’t know that could come back and bite me. Two huge access_token issues: The issues concern the possible utilization of the access_token from the USER AGENT (browser). During the FB.INIT() process of the Facebook JavaScript SDK, a cookie is created as well as an object in memory called a session object. This object, along with the cookie contain the access_token, session, a secret, and uid and status of the connection. The session object is structured such that is supports both the new OAuth and the legacy flows. With OAuth, the access_token and status are pretty much al that is used in the session object. The first issue is that the access_token is used to make HTTPS calls to the GRAPH API. If you had the access_token, you could do this from any browser: https://graph.facebook.com/220439?access_token=... and it will return a ton of information about the user. So any one with the access token can gain access to a Facebook account. You can also make additional calls to any info the user has granted access to the application tied to the access_token. At first I thought that a call into the GRAPH had to have a Callback to the URL established in the App Setup, but I tested it as mentioned below and it will return info back right into the browser. Adding that callback feature would be a good idea I think, tightens things up a bit. The second issue is utilization of some unique private secured data that identifies the user to the third party data base, i.e., like in my case, I would use a single sign on to populate user information into my database using this unique secured data item (i.e., access_token which contains the APP ID, the USER ID, and a hashed with secret sequence). None of this is a problem on the server side. You get a signed_request, you unpack it with secret, make HTTPS calls, get HTTPS responses back. When a user has information entered via the USER AGENT(browser) that must be stored via a POST, this unique secured data element would be sent via HTTPS such that they are validated prior to data base insertion. However, If there is NO secured piece of unique data that is supplied via the single sign on process, then there is no way to guarantee unauthorized access. The access_token is the one piece of data that is utilized by Facebook to make the HTTPS calls into the GRAPH API. it is considered unique in regards to BOTH the USER and the APPLICATION and is initially secure via the signed_request packaging. If however, it is subsequently transmitted in the clear and if I can sniff the wire and obtain the access_token, then I can pretend to be the application and gain the information they have authorized the application to see. I tried the above example from a Safari and IE browser and it returned all of my information to me in the browser. In conclusion, the access_token is part of the signed_request and that is how the application initially obtains it. 
After OAuth authentication and authorization, i.e., once the USER has logged into Facebook and then runs your app, the access_token is stored as mentioned above. I have sniffed it and seen it stored in a cookie that is transmitted over the wire, which means there is NO UNIQUE, SECURED, IDENTIFIABLE piece of information that can be used to support interaction with the database. In other words, unless one more piece of secure data, i.e., a password, were sent to my database along with the access_token, I would not be able to discern whether a call is legitimate. Luckily I utilize secure AJAX via POST and the call has to come from the same domain, but I am sure there is a way to hijack that. I am totally open to any ideas on how to uniquely identify my USERS other than adding another layer (a password) on top of this single sign-on process - or to someone simply showing me that I read and analyzed my data incorrectly and that the access_token is in fact always secure over the wire. Mahalo nui loa in advance. (A small capture sketch that could confirm or refute the clear-text exposure is included below.)
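One small, hedged addition to the sniffing methodology above: a capture filtered to plain HTTP makes the clear-text exposure easy to confirm or refute, since anything that appears in port-80 traffic is by definition readable on the wire, while traffic on port 443 is not. The interface name eth0 is a placeholder:
$ sudo tcpdump -A -i eth0 'tcp port 80' | grep --line-buffered -i access_token
If that grep ever matches while the canvas page loads, the token really is crossing the network unencrypted, exactly as described above; if it never matches, the token may only be travelling inside TLS after all.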

    Read the article
