Search Results

Search found 30312 results on 1213 pages for 'tactical test team'.


  • Nginx and client certificates from hierarchical OpenSSL-based certification authorities

    - by Fmy Oen
I'm trying to set up a root certification authority and a subordinate certification authority, and to generate client certificates signed by either of these CAs that nginx 0.7.67 on Debian Squeeze will accept. My problem is that a client certificate signed by the root CA works fine, while one signed by the subordinate CA results in "400 Bad Request. The SSL certificate error".

Step 1: nginx virtual host configuration:

server {
  server_name test.local;
  access_log /var/log/nginx/test.access.log;
  listen 443 default ssl;
  keepalive_timeout 70;
  ssl_protocols SSLv3 TLSv1;
  ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;
  ssl_certificate /etc/nginx/ssl/server.crt;
  ssl_certificate_key /etc/nginx/ssl/server.key;
  ssl_client_certificate /etc/nginx/ssl/client.pem;
  ssl_verify_client on;
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 5m;
  location / {
    proxy_pass http://testsite.local/;
  }
}

Step 2: PKI infrastructure organization for both the root and subordinate CA (based on this article):

# mkdir ~/pki && cd ~/pki
# mkdir rootCA subCA
# cp -v /etc/ssl/openssl.cnf rootCA/
# cd rootCA/
# mkdir certs private crl newcerts; touch serial; echo 01 > serial; touch index.txt; touch crlnumber; echo 01 > crlnumber
# cp -Rvp * ../subCA/

Almost no changes were made to rootCA/openssl.cnf:

[ CA_default ]
dir = . # Where everything is kept
...
certificate = $dir/certs/rootca.crt # The CA certificate
...
private_key = $dir/private/rootca.key # The private key

and to subCA/openssl.cnf:

[ CA_default ]
dir = . # Where everything is kept
...
certificate = $dir/certs/subca.crt # The CA certificate
...
private_key = $dir/private/subca.key # The private key

Step 3: Self-signed root CA certificate generation:

# openssl genrsa -out ./private/rootca.key -des3 2048
# openssl req -x509 -new -key ./private/rootca.key -out certs/rootca.crt -config openssl.cnf

Enter pass phrase for ./private/rootca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank. For some fields there will be a default value; if you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:rootca Email Address []:

Step 4: Subordinate CA certificate generation:

# cd ../subCA
# openssl genrsa -out ./private/subca.key -des3 2048
# openssl req -new -key ./private/subca.key -out subca.csr -config openssl.cnf

[standard DN prompts as in Step 3, with] Common Name (eg, YOUR name) []:subca Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []:

Step 5: Subordinate CA certificate signing by the root CA certificate:

# cd ../rootCA/
# openssl ca -in ../subCA/subca.csr -extensions v3_ca -config openssl.cnf

Using configuration from openssl.cnf Enter pass phrase for ./private/rootca.key: Check that the request matches the signature Signature ok Certificate Details: Serial Number: 1 (0x1) Validity Not Before: Feb 4 10:49:43 2013 GMT Not After : Feb 4 10:49:43 2014 GMT Subject: countryName = AU stateOrProvinceName = Some-State organizationName = Internet Widgits Pty Ltd commonName = subca X509v3 extensions: X509v3 Subject Key Identifier: C9:E2:AC:31:53:81:86:3F:CD:F8:3D:47:10:FC:E5:8E:C2:DA:A9:20 X509v3 Authority Key Identifier: keyid:E9:50:E6:BF:57:03:EA:6E:8F:21:23:86:BB:44:3D:9F:8F:4A:8B:F2 DirName:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca serial:9F:FB:56:66:8D:D3:8F:11 X509v3 Basic Constraints: CA:TRUE Certificate is to be certified until Feb 4 10:49:43 2014 GMT (365 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y ...

# cd ../subCA/
# cp -v ../rootCA/newcerts/01.pem certs/subca.crt

Step 6: Server certificate generation and signing by the root CA (for the nginx virtual host):

# cd ../rootCA
# openssl genrsa -out ./private/server.key -des3 2048
# openssl req -new -key ./private/server.key -out server.csr -config openssl.cnf

[standard DN prompts as in Step 3, with] Common Name (eg, YOUR name) []:test.local

# openssl ca -in server.csr -out certs/server.crt -config openssl.cnf

Step 7: Client #1 certificate generation and signing by the root CA:

# openssl genrsa -out ./private/client1.key -des3 2048
# openssl req -new -key ./private/client1.key -out client1.csr -config openssl.cnf

[standard DN prompts as in Step 3, with] Common Name (eg, YOUR name) []:Client #1

# openssl ca -in client1.csr -out certs/client1.crt -config openssl.cnf

Step 8: Client #1 certificate conversion to PKCS12 format:

# openssl pkcs12 -export -out certs/client1.p12 -inkey private/client1.key -in certs/client1.crt -certfile certs/rootca.crt

Step 9: Client #2 certificate generation and signing by the subordinate CA:

# cd ../subCA/
# openssl genrsa -out ./private/client2.key -des3 2048
# openssl req -new -key ./private/client2.key -out client2.csr -config openssl.cnf

[standard DN prompts as in Step 3, with] Common Name (eg, YOUR name) []:Client #2

# openssl ca -in client2.csr -out certs/client2.crt -config openssl.cnf

Step 10: Client #2 certificate conversion to PKCS12 format:

# openssl pkcs12 -export -out certs/client2.p12 -inkey private/client2.key -in certs/client2.crt -certfile certs/subca.crt

Step 11: Passing the server certificate and private key to nginx (performed with OS superuser privileges):

# cd ../rootCA/
# cp -v certs/server.crt /etc/nginx/ssl/
# cp -v private/server.key /etc/nginx/ssl/

Step 12: Passing the root and subordinate CA certificates to nginx (performed with OS superuser privileges):

# cat certs/rootca.crt > /etc/nginx/ssl/client.pem
# cat ../subCA/certs/subca.crt >> /etc/nginx/ssl/client.pem

The client.pem file looks like this:

# cat /etc/nginx/ssl/client.pem
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda
...
-----END CERTIFICATE-----
Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) ...
-----BEGIN CERTIFICATE-----
MIID4DCCAsigAwIBAgIBATANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET
MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ
dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTA0OTQzWhcNMTQwMjA0
...
-----END CERTIFICATE-----

It looks like everything is working fine:

# service nginx reload
# Reloading nginx configuration: Enter PEM pass phrase:
# nginx.
#

Step 13: Installing the *.p12 certificates in a browser (Firefox in my case) gives the problem I mentioned above: Client #1 = 200 OK, Client #2 = 400 Bad Request / The SSL certificate error. Any ideas what I should do?
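A quick chain sanity check (an editorial suggestion, not part of the original post; paths assume the ~/pki layout from Step 2): verify both client certificates against the root CA with openssl, passing the subordinate CA as an untrusted intermediate. If client2.crt fails here, the chain itself is broken; if it verifies, the problem is on the nginx side.

# cd ~/pki
# openssl verify -CAfile rootCA/certs/rootca.crt rootCA/certs/client1.crt
# openssl verify -CAfile rootCA/certs/rootca.crt -untrusted subCA/certs/subca.crt subCA/certs/client2.crt

Both commands should print "OK" for a correctly built hierarchy.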
Update 1: Results of SSL connection test attempts.

With Client #1 and the root CA certificate everything works:

# openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/rootCA/certs/client1.crt -key ~/pki/rootCA/private/client1.key -showcerts

Enter pass phrase for tmp/testcert/client1.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- Certificate chain 0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIIDpjCCAo6gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTEwNjAzWhcNMTQwMjA0 ... -----END CERTIFICATE----- 1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- --- Server certificate subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca --- Acceptable client certificate CA names /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca --- SSL handshake has read 3395 bytes and written 2779 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: zlib compression Expansion: zlib compression SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 15BFC2029691262542FAE95A48078305E76EEE7D586400F8C4F7C516B0F9D967 Session-ID-ctx: Master-Key: 23246CF166E8F3900793F0A2561879E5DB07291F32E99591BA1CF53E6229491FEAE6858BFC9AACAF271D9C3706F139C7 Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None TLS session ticket: 0000 - c2 5e 1d d2 b5 6d 40 23-b2 40 89 e4 35 75 70 07 .^...m@#[email protected]. 0010 - 1b bb 2b e6 e0 b5 ab 10-10 bf 46 6e aa 67 7f 58 ..+.......Fn.g.X 0020 - cf 0e 65 a4 67 5a 15 ba-aa 93 4e dd 3d 6e 73 4c ..e.gZ....N.=nsL 0030 - c5 56 f6 06 24 0f 48 e6-38 36 de f1 b5 31 c5 86 .V..$.H.86...1.. ... 0440 - 4c 53 39 e3 92 84 d2 d0-e5 e2 f5 8a 6a a8 86 b1 LS9.........j... Compression: 1 (zlib compression) Start Time: 1359989684 Timeout : 300 (sec) Verify return code: 0 (ok) ---

With Client #2 and the root CA certificate the handshake also looks fine, but the request returns a 400 Bad Request error:

# openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts

Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 ... Compression: 1 (zlib compression) Start Time: 1359989989 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request Server: nginx/0.7.67 Date: Mon, 04 Feb 2013 15:00:43 GMT Content-Type: text/html Content-Length: 231 Connection: close <html> <head><title>400 The SSL certificate error</title></head> <body bgcolor="white"> <center><h1>400 Bad Request</h1></center> <center>The SSL certificate error</center> <hr><center>nginx/0.7.67</center> </body> </html> closed

Verification fails with the Client #2 certificate and the subordinate CA certificate:

# openssl s_client -connect test.local:443 -CAfile ~/pki/subCA/certs/subca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts

Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify error:num=19:self signed certificate in certificate chain verify return:0 ... Compression: 1 (zlib compression) Start Time: 1359990354 Timeout : 300 (sec) Verify return code: 19 (self signed certificate in certificate chain) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ...

I still get the 400 Bad Request error with concatenated CA certificates and Client #2 (but everything is still OK with Client #1):

# cat certs/rootca.crt ../subCA/certs/subca.crt > certs/concatenatedca.crt
# openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/concatenatedca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts

Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- ... Compression: 1 (zlib compression) Start Time: 1359990772 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ...

Update 2: I've managed to recompile nginx with debugging enabled.
Here is the relevant part of the trace for a successful connection by Client #1:

2013/02/05 14:08:23 [debug] 38701#0: *119 accept: <MY IP ADDRESS> fd:3 2013/02/05 14:08:23 [debug] 38701#0: *119 event timer add: 3: 60000:2856497512 2013/02/05 14:08:23 [debug] 38701#0: *119 kevent set event: 3: ft:-1 fl:0025 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28805200:660 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28834400:1024 2013/02/05 14:08:23 [debug] 38701#0: *119 posix_memalign: 28860000:4096 @16 2013/02/05 14:08:23 [debug] 38701#0: *119 http check ssl handshake 2013/02/05 14:08:23 [debug] 38701#0: *119 https ssl handshake: 0x16 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL server name: "test.local" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL handshake handler: 0 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:1, subject:"/C=AU /ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #1",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 524 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http request line: "GET / HTTP/1.1"

And here is the part for an unsuccessful connection by Client #2:

2013/02/05 13:51:34 [debug] 38701#0: *112 accept: <MY_IP_ADDRESS> fd:3 2013/02/05 13:51:34 [debug] 38701#0: *112 event timer add: 3: 60000:2855488975 2013/02/05 13:51:34 [debug] 38701#0: *112 kevent set event: 3: ft:-1 fl:0025 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28805200:660 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28834400:1024 2013/02/05 13:51:34 [debug] 38701#0: *112 posix_memalign: 28860000:4096 @16 2013/02/05 13:51:34 [debug] 38701#0: *112 http check ssl handshake 2013/02/05 13:51:34 [debug] 38701#0: *112 https ssl handshake: 0x16 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL server name: "test.local" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:20, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:27, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:1, error:27, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #2",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 13:51:34 [debug] 38701#0: *112 http process request line 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 524 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 http request line: "GET / HTTP/1.1"

So I'm getting OpenSSL error #20 and then #27. According to the verify documentation:

20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY: unable to get local issuer certificate. The issuer certificate could not be found; this occurs if the issuer certificate of an untrusted certificate cannot be found.

27 X509_V_ERR_CERT_UNTRUSTED: certificate not trusted. The root CA is not marked as trusted for the specified purpose.
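This is not from the original question, but error 20 at depth 1 points at a likely culprit: nginx's ssl_verify_depth directive defaults to 1, which is deep enough for a root-signed client certificate but not for a chain of root CA, subordinate CA, and client certificate, which needs a verification depth of 2. A sketch of the change, applied to the virtual host from Step 1:

server {
    ...
    ssl_client_certificate /etc/nginx/ssl/client.pem;
    ssl_verify_client on;
    ssl_verify_depth 2;  # default is 1, too shallow for certificates issued by an intermediate CA
    ...
}

After reloading nginx, Client #2 should verify at depth 2; if it still fails, the order of the concatenated client.pem and the subordinate CA's basicConstraints are the next things to check.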


  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
I am trying to figure out how to correctly and efficiently unit test my ASP.NET MVC project. When I started on this project I bought Pro ASP.NET MVC, and with that book I learned about TDD and unit testing. After seeing the examples, and given that I work as a software engineer in QA at my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gung-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering:

I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way, as in my code was broken and I had to fix it so the unit tests pass. I mean that abstracting out the database to a mock database is impossible due to the use of LINQ for data retrieval (using the generic repository pattern). The reason is that with LINQ to SQL or LINQ to Entities you can do joins just by doing:

var objs = from p in _container.Projects select p.Objects;

However, if you mock the database layer out, in order to have that LINQ pass the unit test you must change it to:

var objs = from p in _container.Projects join o in _container.Objects on p.Id equals o.ProjectId select o;

Not only does this mean you are changing your application logic just so you can unit test it, you are also making your code less efficient for the sole purpose of testability, and giving up many of the advantages of using an ORM in the first place. Furthermore, since a lot of the IDs for my models are database generated, I found I had to write additional code to handle the non-database tests: IDs were never generated, yet I still had to handle those cases for the unit tests to pass, even though they would never occur in real scenarios. Thus I ended up throwing out my database unit testing.

Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated AJAX web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial JSON. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far).

So finally I was left with testing the service layer (BLL). Right now I am using EF4, though I had this issue with LINQ to SQL as well. I chose the EF4 model-first approach because, to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate that into the SQL backend). This was fine at the beginning, but now it is becoming cumbersome due to relationships. For example, say I have Project, User, and Object entities. One Object must be associated with a Project, and a Project must be associated with a User. These are not only database rules; they are my business rules as well. However, say I want a unit test that verifies I am able to save an object (as a simple example). I now have to do the following code just to make sure the save worked:

User usr = new User { Name = "Me" };
_userService.SaveUser(usr);
Project prj = new Project { Name = "Test Project", Owner = usr };
_projectService.SaveProject(prj);
Object obj = new Object { Name = "Test Object" };
_objectService.SaveObject(obj);
// Perform verifications

There are several issues with having to do all this just to perform one unit test. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into every single unit test that references a project, add code to save the category, then add code to add the category to the project. This can be a huge effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I have to go through every Object and Project unit test to make sure that the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method, it will cause all Project and Object unit tests to fail, and the cause won't be immediately noticeable without digging through the exceptions. Thus I have removed all service layer unit tests from my project.

I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues could probably be mitigated, but not without adding extra layers to my application, and thus more points of failure, just so I can unit test. Thus I have no unit tests left in my code. Luckily I use source control heavily, so I can get them back if I need to, but I just don't see the point.

Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/unit tests give bad arguments, claiming they are more efficient debugging by hand through the IDE, or that their coding skills are so amazing they don't need it. I recognize that both of those arguments are utter bollocks, especially for a project that needs to be maintained by multiple developers, but valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask: am I just not understanding how to use TDD and automatic unit tests?
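One mitigation for the cascading-setup problem described above (an editorial sketch, not from the original question; the entity and service names simply mirror the ones in the example): push entity construction into test-data builders, so that when a new required relationship such as a Category appears, only the builder changes rather than every test.

// Hypothetical C# test-data builder mirroring the question's Project/User example.
public class ProjectBuilder
{
    private string _name = "Test Project";
    private User _owner = new User { Name = "Default Owner" };

    public ProjectBuilder Named(string name) { _name = name; return this; }
    public ProjectBuilder OwnedBy(User owner) { _owner = owner; return this; }

    // If "a project must have a category" becomes a rule, add a defaulted
    // _category field here and existing tests keep compiling and passing.
    public Project Build()
    {
        return new Project { Name = _name, Owner = _owner };
    }
}

// A test then needs one line of setup:
// Project prj = new ProjectBuilder().Named("Billing Rewrite").Build();

This does not answer the deeper question of whether these are unit tests or integration tests, but it localizes the impact of new business rules to one place.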


  • With XSLT, how can I use this if-test with an array, when search element is returned by a template call inside the for loop?

    - by codesforcoffee
I think this simple example might ask the question a lot more clearly. I have an input file with multiple products. There are 10 types of product (two product IDs are enough for this example), but the input will have 200 products, and I only want to output the info for the first product of each type. (I output the info for the lowest priced one, and the first one will be the lowest priced because I sort by price first.) So I want to read in each product, but only output the product's info if I haven't already output a product with that same ID.

I couldn't figure out how to get the processID template to return a value that I can use in my if-check, using parameters from inside the for-each Product loop, and then properly close the if tag in the right place so it won't output the opening Product tag unless it passes the if test. I know the following code does not work, but it illustrates the idea and gives me a place to start:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" encoding="UTF-8" indent="yes" cdata-section-elements="prod_name adv_notes"/>
  <xsl:template match="/">
    <List>
      <xsl:for-each select="ProductGroup">
        <xsl:sort select="ActiveProducts/Product/Rate"/>
        <xsl:variable name="IDarray">
          <xsl:for-each select="ActiveProducts/Product">
            <xsl:variable name="CurrentID">
              <xsl:call-template name="processID">
                <xsl:with-param name="ProductCode" select="ProductCode" />
              </xsl:call-template>
            </xsl:variable>
            <xsl:if test="not(contains($IDarray, $CurrentID))">
              <child elem="{@elem}">
                <xsl:select value-of="$CurrentID" />
              </child>
              <Product>
                <xsl:attribute name="ID">
                  <xsl:select value-of="$CurrentID" />
                </xsl:attribute>
                <prod_name>
                  <xsl:value-of select="../ProductName"/>
                </prod_name>
                <rate>
                  <xsl:value-of select="../Rate"/>
                </rate>
              </Product>
            </xsl:if>
          </xsl:for-each>
        </xsl:variable>
      </xsl:for-each>
    </List>
  </xsl:template>
  <xsl:template name="processID">
    <xsl:param name="ProductCode"/>
    <xsl:choose>
      <xsl:when test="starts-with($ProductCode, '515')">5</xsl:when>
      <xsl:when test="starts-with($ProductCode, '205')">2</xsl:when>
    </xsl:choose>
  </xsl:template>
</xsl:stylesheet>

Thanks so much in advance, I know some of the awesome programmers here can help! :) -Holly

An input would look like this:

<ProductGroup>
  <ActiveProducts>
    <Product>
      <ProductCode> 5155 </ProductCode>
      <ProductName> House </ProductName>
      <Rate> 3.99 </Rate>
    </Product>
    <Product>
      <ProductCode> 5158 </ProductCode>
      <ProductName> House </ProductName>
      <Rate> 4.99 </Rate>
    </Product>
  </ActiveProducts>
</ProductGroup>
<ProductGroup>
  <ActiveProducts>
    <Product>
      <ProductCode> 2058 </ProductCode>
      <ProductName> House </ProductName>
      <Rate> 2.99 </Rate>
    </Product>
    <Product>
      <ProductCode> 2055 </ProductCode>
      <ProductName> House </ProductName>
      <Rate> 7.99 </Rate>
    </Product>
  </ActiveProducts>
</ProductGroup>

There are 200 of those with different attributes. I have the translation working; I just need to add that array and if statement somehow. Output would be this for only that simple input file:
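The standard XSLT 1.0 answer to "have I already output a product with this ID?" is Muenchian grouping rather than an accumulating variable (variables are immutable, so $IDarray can never grow inside the loop). The sketch below is an editorial addition, not Holly's code: it keys products on the first three digits of ProductCode (the same prefixes the processID template tests for), visits one representative per distinct prefix, and within each group sorts by Rate and keeps only the cheapest product. It emits the prefix itself as the ID; mapping it back to processID's short codes is a mechanical xsl:choose. normalize-space() strips the whitespace padding visible in the sample input.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" encoding="UTF-8" indent="yes"/>

  <!-- Group products by the leading 3 digits of their code (515, 205, ...) -->
  <xsl:key name="products-by-type" match="Product"
           use="substring(normalize-space(ProductCode), 1, 3)"/>

  <xsl:template match="/">
    <List>
      <!-- Visit only the first Product (in document order) for each prefix -->
      <xsl:for-each select="//Product[generate-id() = generate-id(
                              key('products-by-type',
                                  substring(normalize-space(ProductCode), 1, 3))[1])]">
        <!-- Within the group, sort by Rate and output only the cheapest -->
        <xsl:for-each select="key('products-by-type',
                                  substring(normalize-space(ProductCode), 1, 3))">
          <xsl:sort select="Rate" data-type="number"/>
          <xsl:if test="position() = 1">
            <Product ID="{substring(normalize-space(ProductCode), 1, 3)}">
              <prod_name><xsl:value-of select="normalize-space(ProductName)"/></prod_name>
              <rate><xsl:value-of select="normalize-space(Rate)"/></rate>
            </Product>
          </xsl:if>
        </xsl:for-each>
      </xsl:for-each>
    </List>
  </xsl:template>
</xsl:stylesheet>

One caveat: the sample input shows two top-level ProductGroup elements, so a real input file would need a single root element wrapping them for this (or any) stylesheet to parse it.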


  • Writing an OS for Motorola 68K processor. Can I emulate it? And can I test-drive OS development?

    - by ulver
Next term, I'll need to write a basic operating system for the Motorola 68K processor as part of a course's lab material. Is there a Linux-hosted emulator of a basic hardware setup with that processor, so my partners and I can debug more quickly on our own computers instead of physically restarting the board all the time? And is it possible to apply the test-driven development technique to OS development? The code will be mostly assembly and C. What will be the main difficulties in trying to test-drive this? Any advice on how to do it?
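On the test-driving question, one common tactic (an editorial sketch, not from the question; all names are illustrative) is to put a thin seam between the hardware and the rest of the C code, so the logic can be unit-tested with an ordinary host compiler and only the register-poking layer needs the board or an emulator:

#include <stdint.h>

/* The seam: production code supplies a function that writes to the 68K's
   memory-mapped UART register; tests supply a fake that records the bytes. */
typedef struct {
    void (*write_byte)(uint8_t b);
} uart_ops;

/* Logic under test: plain C with no hardware dependency. */
static void uart_print(const uart_ops *ops, const char *s) {
    while (*s)
        ops->write_byte((uint8_t)*s++);
}

/* Host-side fake and a minimal check, runnable on any development machine. */
static uint8_t captured[64];
static int captured_len;
static void fake_write(uint8_t b) { captured[captured_len++] = b; }

int main(void) {
    uart_ops fake = { fake_write };
    uart_print(&fake, "ok");
    /* exit code 0 iff the expected bytes were "written to the UART" */
    return !(captured_len == 2 && captured[0] == 'o' && captured[1] == 'k');
}

The assembly portions are harder to unit-test directly; keeping them as small wrappers that call into testable C is the usual compromise.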


  • Global Cache CR Requested But Current Block Received

- by Liu Maclean
This post follows up on the earlier article «MINSCN and Cache Fusion Read Consistent». The test environment is as follows:

SQL> select * from V$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE 11.2.0.3.0 Production TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production

SQL> select count(*) from gv$instance; COUNT(*) ---------- 2

SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com

On this two-node 11gR2 RAC we want to produce a global cache lock in XG status: an Xcurrent block modified on one instance and then held by instance 2 while instance 1 still needs it. Start with the test data:

SQL> select * from test; ID ---------- 1 2

SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from test; DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) ------------------------------------ ------------------------------------ 89233 1 89233 1

SQL> alter system flush buffer_cache; System altered.

INSTANCE 1 Session A: SQL> update test set id=id+1 where id=1; 1 row updated.

INSTANCE 1 Session B: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1755287

SQL> oradebug setmypid; Statement processed. SQL> oradebug dump gc_elements 255; Statement processed. SQL> oradebug tracefile_name; /s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_ora_19111.trc

GLOBAL CACHE ELEMENT DUMP (address: 0xa4ff3080): id1: 0x15c91 id2: 0x1 pkey: OBJ#76896 block: (1/89233) lock: X rls: 0x0 acq: 0x0 latch: 3 flags: 0x20 fair: 0 recovery: 0 fpin: 'kdswh11: kdst_fetch' bscn: 0x0.146e20 bctx: (nil) write: 0 scan: 0x0 lcp: (nil) lnk: [NULL] lch: [0xa9f6a6f8,0xa9f6a6f8] seq: 32 hist: 58 145:0 118 66 144:0 192 352 197 48 121 113 424 180 58 LIST OF BUFFERS LINKED TO THIS GLOBAL CACHE ELEMENT: flg: 0x02000001 lflg: 0x1 state: XCURRENT tsn: 0 tsh: 2 addr: 0xa9f6a5c8 obj: 76896 cls: DATA bscn: 0x0.1ac898 BH (0xa9f6a5c8) file#: 1 rdba: 0x00415c91 (1/89233) class: 1 ba: 0xa9e56000 set: 5 pool: 3 bsz: 8192 bsi: 0 sflg: 3 pwc: 0,15 dbwrid: 0 obj: 76896 objn: 76896 tsn: 0 afn: 1 hint: f hash: [0x91f4e970,0xbae9d5b8] lru: [0x91f58848,0xa9f6a828] lru-flags: debug_dump obj-flags: object_ckpt_list ckptq: [0x9df6d1d8,0xa9f6a740] fileq: [0xa2ece670,0xbdf4ed68] objq: [0xb4964e00,0xb4964e00] objaq: [0xb4964de0,0xb4964de0] st: XCURRENT md: NULL fpin: 'kdswh11: kdst_fetch' tch: 2 le: 0xa4ff3080 flags: buffer_dirty redo_since_read LRBA: [0x19.5671.0] LSCN: [0x0.1ac898] HSCN: [0x0.1ac898] HSUB: [1] buffer tsn: 0 rdba: 0x00415c91 (1/89233) scn: 0x0000.001ac898 seq: 0x01 flg: 0x00 tail: 0xc8980601 frmt: 0x02 chkval: 0x0000 type: 0x06=trans data

In this GLOBAL CACHE ELEMENT DUMP the lock on block (1/89233) is X, not XG: the current block has not yet been modified by a second instance. Now update the same block from instance 2:

Instance 2 Session C: SQL> update test set id=id+1 where id=2; 1 row updated.

Instance 2 Session D: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1756658

SQL> oradebug setmypid; Statement processed. SQL> oradebug dump gc_elements 255; Statement processed. SQL> oradebug tracefile_name; /s01/orabase/diag/rdbms/vprod/VPROD2/trace/VPROD2_ora_13038.trc

GLOBAL CACHE ELEMENT DUMP (address: 0x89fb25a0): id1: 0x15c91 id2: 0x1 pkey: OBJ#76896 block: (1/89233) lock: XG rls: 0x0 acq: 0x0 latch: 3 flags: 0x20 fair: 0 recovery: 0 fpin: 'kduwh01: kdusru' bscn: 0x0.1acdf3 bctx: (nil) write: 0 scan: 0x0 lcp: (nil) lnk: [NULL] lch: [0x96f4cf80,0x96f4cf80] seq: 61 hist: 324 21 143:0 19 16 352 329 144:6 14 7 352 197 LIST OF BUFFERS LINKED TO THIS GLOBAL CACHE ELEMENT: flg: 0x0a000001 state: XCURRENT tsn: 0 tsh: 1 addr: 0x96f4ce50 obj: 76896 cls: DATA bscn: 0x0.1acdf6 BH (0x96f4ce50) file#: 1 rdba: 0x00415c91 (1/89233) class: 1 ba: 0x96bd4000 set: 5 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15 dbwrid: 0 obj: 76896 objn: 76896 tsn: 0 afn: 1 hint: f hash: [0x96ee1fe8,0xbae9d5b8] lru: [0x96f4d0b0,0x96f4cdc0] obj-flags: object_ckpt_list ckptq: [0xbdf519b8,0x96f4d5a8] fileq: [0xbdf519d8,0xbdf519d8] objq: [0xb4a47b90,0xb4a47b90] objaq: [0x96f4d0e8,0xb4a47b70] st: XCURRENT md: NULL fpin: 'kduwh01: kdusru' tch: 1 le: 0x89fb25a0 flags: buffer_dirty redo_since_read remote_transfered LRBA: [0x11.9e18.0] LSCN: [0x0.1acdf6] HSCN: [0x0.1acdf6] HSUB: [1] buffer tsn: 0 rdba: 0x00415c91 (1/89233) scn: 0x0000.001acdf6 seq: 0x01 flg: 0x00 tail: 0xcdf60601 frmt: 0x02 chkval: 0x0000 type: 0x06=trans data GCS CLIENT 0x89fb2618,6 resp[(nil),0x15c91.1] pkey 76896.0 grant 2 cvt 0 mdrole 0x42 st 0x100 lst 0x20 GRANTQ rl G0 master 1 owner 2 sid 0 remote[(nil),0] hist 0x94121c601163423c history 0x3c.0x4.0xd.0xb.0x1.0xc.0x7.0x9.0x14.0x1. cflag 0x0 sender 1 flags 0x0 replay# 0 abast (nil).x0.1 dbmap (nil) disk: 0x0000.00000000 write request: 0x0000.00000000 pi scn: 0x0000.00000000 sq[(nil),(nil)] msgseq 0x1 updseq 0x0 reqids[6,0,0] infop (nil) lockseq x2b8 pkey 76896.0 hv 93 [stat 0x0, 1->1, wm 32768, RMno 0, reminc 18, dom 0] kjga st 0x4, step 0.0.0, cinc 20, rmno 6, flags 0x0 lb 0, hb 0, myb 15250, drmb 15250, apifrz 0

On instance 2 the GLOBAL CACHE ELEMENT for block (1/89233) has gone through a lock convert and now shows lock: XG. Besides GC_ELEMENTS dumps, this XCUR cache fusion state can also be observed through X$ fixed views, in particular X$LE, X$KJBR and X$KJBL:

INSTANCE 2 Session D: SELECT * FROM x$le WHERE le_addr IN (SELECT le_addr FROM x$bh WHERE obj IN (SELECT data_object_id FROM dba_objects WHERE owner = 'SYS' AND object_name = 'TEST') AND class = 1 AND state != 3); ADDR INDX INST_ID LE_ADDR LE_ID1 LE_ID2 ---------------- ---------- ---------- ---------------- ---------- ---------- LE_RLS LE_ACQ LE_FLAGS LE_MODE LE_WRITE LE_LOCAL LE_RECOVERY ---------- ---------- ---------- ---------- ---------- ---------- ----------- LE_BLKS LE_TIME LE_KJBL ---------- ---------- ---------------- 00007F94CA14CF60 7003 2 0000000089FB25A0 89233 1 0 0 32 2 0 1 0 1 0 0000000089FB2618

A PCM resource name has the form [ID1][ID2],[BL], where ID1 and ID2 are derived from the block# and file#. Matching the GC_ELEMENTS dump above (id1: 0x15c91 id2: 0x1 pkey: OBJ#76896 block: (1/89233)), the kjblname and kjbrname to look for are "[0x15c91][0x1],[BL]":

INSTANCE 2 Session D: SQL> set linesize 80 pagesize 1400 SQL> SELECT * 2 FROM x$kjbl l 3 WHERE l.kjblname LIKE '%[0x15c91][0x1],[BL]%'; ADDR INDX INST_ID KJBLLOCKP KJBLGRANT KJBLREQUE ---------------- ---------- ---------- ---------------- --------- --------- KJBLROLE KJBLRESP KJBLNAME ---------- ---------------- ------------------------------ KJBLNAME2 KJBLQUEUE ------------------------------ ---------- KJBLLOCKST KJBLWRITING ---------------------------------------------------------------- ----------- KJBLREQWRITE KJBLOWNER KJBLMASTER KJBLBLOCKED KJBLBLOCKER KJBLSID KJBLRDOMID ------------ ---------- ---------- ----------- ----------- ---------- ---------- KJBLPKEY ---------- 00007F94CA22A288 451 2 0000000089FB2618 KJUSEREX KJUSERNL 0 00 [0x15c91][0x1],[BL][ext 0x0,0x 89233,1,BL 0 GRANTED 0 0 1 0 0 0 0 0 76896

SQL> SELECT r.* FROM x$kjbr r WHERE r.kjbrname LIKE '%[0x15c91][0x1],[BL]%'; no rows selected

Instance 1 session B: SQL> SELECT r.* FROM x$kjbr r WHERE r.kjbrname LIKE '%[0x15c91][0x1],[BL]%'; ADDR INDX INST_ID KJBRRESP KJBRGRANT KJBRNCVL ---------------- ---------- ---------- ---------------- --------- --------- KJBRROLE KJBRNAME KJBRMASTER KJBRGRANTQ ---------- ------------------------------ ---------- ---------------- KJBRCVTQ KJBRWRITER KJBRSID KJBRRDOMID KJBRPKEY ---------------- ---------------- ---------- ---------- ---------- 00007F801ACA68F8 1355 1 00000000B5A62AE0 KJUSEREX KJUSERNL 0 [0x15c91][0x1],[BL][ext 0x0,0x 0 00000000B48BB330 00 00 0 0 76896

Next, query block (1/89233) from instance 1. This forces instance 2 to build a CR block and ship it to instance 1. To watch the exchange, enable RAC tracing on instance 1's foreground process and on instance 2's LMS:

Instance 2: [oracle@vrh2 ~]$ ps -ef|grep ora_lms|grep -v grep oracle 23364 1 0 Apr29 ? 00:33:15 ora_lms0_VPROD2 SQL> oradebug setospid 23364 Oracle pid: 13, Unix process pid: 23364, image: [email protected] (LMS0) SQL> oradebug event 10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high; Statement processed. SQL> oradebug tracefile_name /s01/orabase/diag/rdbms/vprod/VPROD2/trace/VPROD2_lms0_23364.trc

Instance 1 session B: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 3 1756658 3 1756661 3 1755287

Instance 1 session A: SQL> alter session set events '10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high'; Session altered. SQL> select * from test; ID ---------- 2 2 SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 3 1761520

The new CR copy at SCN 1761520 in x$bh is the one instance 2 just built for instance 1's query. Now look at the traces.
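As an aside (an editorial addition, not part of the original article): the rdba 0x00415c91 that appears in the buffer header dumps above can be decoded back into file# and block# from SQL, which is convenient when matching dump output against x$bh queries:

SQL> select dbms_utility.data_block_address_file(to_number('00415c91','xxxxxxxx')) file#,
  2         dbms_utility.data_block_address_block(to_number('00415c91','xxxxxxxx')) block#
  3  from dual;

     FILE#     BLOCK#
---------- ----------
         1      89233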
Trace of the exchange, instance 1 foreground process:

PARSING IN CURSOR #140336527348792 len=18 dep=0 uid=0 oct=3 lid=0 tim=1335939136125254 hv=1689401402 ad='b1a4c828' sqlid='c99yw1xkb4f1u' select * from test END OF STMT PARSE #140336527348792:c=2999,e=2860,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=1357081020,tim=1335939136125253 EXEC #140336527348792:c=0,e=40,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335939136125373 WAIT #140336527348792: nam='SQL*Net message to client' ela= 6 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1335939136125420 *** 2012-05-02 02:12:16.125 kclscrs: req=0 block=1/89233 2012-05-02 02:12:16.125574 : kjbcro[0x15c91.1 76896.0][4] *** 2012-05-02 02:12:16.125 kclscrs: req=0 typ=nowait-abort *** 2012-05-02 02:12:16.125 kclscrs: bid=1:3:1:0:f:1e:0:0:10:0:0:0:1:2:4:1:20:0:0:0:c3:49:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:4:3:2:1:2:0:1c:0:4d:26:a3:52:0:0:0:0:c7:c:ca:62:c3:49:0:0:0:0:1:0:14:8e:47:76:1:2:dc:5:a9:fe:17:75:0:0:0:0:0:0:0:0:0:0:0:0:99:ed:0:0:0:0:0:0:10:0:0:0 2012-05-02 02:12:16.125718 : kjbcro[0x15c91.1 76896.0][4] 2012-05-02 02:12:16.125751 : GSIPC:GMBQ: buff 0xba0ee018, queue 0xbb79a7b8, pool 0x60013fa0, freeq 0, nxt 0xbb79a7b8, prv 0xbb79a7b8 2012-05-02 02:12:16.125780 : kjbsentscn[0x0.1ae0f0][to 2] 2012-05-02 02:12:16.125806 : GSIPC:SENDM: send msg 0xba0ee088 dest x20001 seq 177740 type 36 tkts xff0000 mlen x1680198 2012-05-02 02:12:16.125918 : kjbmscr(0x15c91.1)reqid=0x8(req 0xa4ff30f8)(rinst 1)hldr 2(infosz 200)(lseq x2b8) 2012-05-02 02:12:16.126959 : GSIPC:KSXPCB: msg 0xba0ee088 status 30, type 36, dest 2, rcvr 1 *** 2012-05-02 02:12:16.127 kclwcrs: wait=0 tm=1233 *** 2012-05-02 02:12:16.127 kclwcrs: got 1 blocks from ksxprcv WAIT #140336527348792: nam='gc cr block 2-way' ela= 1233 p1=1 p2=89233 p3=1 obj#=76896 tim=1335939136127199 2012-05-02 02:12:16.127272 : kjbcrcomplete[0x15c91.1 76896.0][0] 2012-05-02 02:12:16.127309 : kjbrcvdscn[0x0.1ae0f0][from 2][idx 2012-05-02 02:12:16.127329 : kjbrcvdscn[no bscn <= rscn 0x0.1ae0f0][from 2] 2012/05/02 trace continues with the request line: http request line equivalent omitted.

The kjbcro[0x15c91.1 76896.0][4] and kjbsentscn[0x0.1ae0f0][to 2] entries show instance 1 requesting block (1/89233) at SCN 1ae0f0 = 1761520 from instance 2; the session waits on 'gc cr block 2-way' and receives the CR block it asked for.

Instance 2 LMS trace:

2012-05-02 02:12:15.634057 : GSIPC:RCVD: ksxp msg 0x7f16e1598588 sndr 1 seq 0.177740 type 36 tkts 0 2012-05-02 02:12:15.634094 : GSIPC:RCVD: watq msg 0x7f16e1598588 sndr 1, seq 177740, type 36, tkts 0 2012-05-02 02:12:15.634108 : GSIPC:TKT: collect msg 0x7f16e1598588 from 1 for rcvr -1, tickets 0 2012-05-02 02:12:15.634162 : kjbrcvdscn[0x0.1ae0f0][from 1][idx 2012-05-02 02:12:15.634186 : kjbrcvdscn[no bscn1, wm 32768, RMno 0, reminc 18, dom 0] kjga st 0x4, step 0.0.0, cinc 20, rmno 6, flags 0x0 lb 0, hb 0, myb 15250, drmb 15250, apifrz 0 GCS CLIENT END 2012-05-02 02:12:15.635211 : kjbdowncvt[0x15c91.1 76896.0][1][options x0] 2012-05-02 02:12:15.635230 : GSIPC:AMBUF: rcv buff 0x7f16e1c56420, pool rcvbuf, rqlen 1103 2012-05-02 02:12:15.635308 : GSIPC:GPBMSG: new bmsg 0x7f16e1c56490 mb 0x7f16e1c56420 msg 0x7f16e1c564b0 mlen 152 dest x101 flushsz -1 2012-05-02 02:12:15.635334 : kjbmslset(0x15c91.1)) seq 0x4 reqid=0x6 (shadow 0xb48bb330.xb)(rsn 2)(mas@1) 2012-05-02 02:12:15.635355 : GSIPC:SPBMSG: send bmsg 0x7f16e1c56490 blen 184 msg 0x7f16e1c564b0 mtype 33 attr|dest x30101 bsz|fsz x1ffff 2012-05-02 02:12:15.635377 : GSIPC:SNDQ: enq msg 0x7f16e1c56490, type 65521 seq 118669, inst 1, receiver 1, queued 1 *** 2012-05-02 02:12:15.635 kclccctx: cleanup copy 0x7f16e1d94798 2012-05-02 02:12:15.635479 : [kjmpmsgi:compl][type 36][msg 0x7f16e1598588][seq 177740.0][qtime 0][ptime 1257] 2012-05-02 02:12:15.635511 : GSIPC:BSEND: flushing sndq 0xb491dd28, id 1, dcx 0xbc516778, inst 1, rcvr 1 qlen 0 1 2012-05-02 02:12:15.635536 : GSIPC:BSEND: no batch1 msg 0x7f16e1c56490 type 65521 len 184 dest (1:1) 2012-05-02 02:12:15.635557 : kjbsentscn[0x0.1ae0f1][to 1] 2012-05-02 02:12:15.635578 : GSIPC:SENDM: send msg 0x7f16e1c56490 dest x10001 seq 118669 type 65521 tkts x10002 mlen xb800e8 WAIT #0: nam='gcs remote message' ela= 180 waittime=1 poll=0 event=0 obj#=0 tim=1335939135635819 2012-05-02 02:12:15.635853 : GSIPC:RCVD: ksxp msg 0x7f16e167e0b0 sndr 1 seq 0.177741 type 32 tkts 0 2012-05-02 02:12:15.635875 : GSIPC:RCVD: watq msg 0x7f16e167e0b0 sndr 1, seq 177741, type 32, tkts 0 2012-05-02 02:12:15.636012 : GSIPC:TKT: collect msg 0x7f16e167e0b0 from 1 for rcvr -1, tickets 0 2012-05-02 02:12:15.636040 : kjbrcvdscn[0x0.1ae0f1][from 1][idx 2012-05-02 02:12:15.636060 : kjbrcvdscn[no bscn <= rscn 0x0.1ae0f1][from 1] 2012-05-02 02:12:15.636082 : GSIPC:TKT: dest (1:1) rtkt not acked 1 unassigned bufs 0 tkts 0 newbufs 0 2012-05-02 02:12:15.636102 : GSIPC:TKT: remove ctx dest (1:1) 2012-05-02 02:12:15.636125 : [kjmxmpm][type 32][seq 0.177741][msg 0x7f16e167e0b0][from 1] 2012-05-02 02:12:15.636146 : kjbmpocr(0xb0.6)seq 0x1,reqid=0x23a,(client 0x9fff7b58,0x1)(from 1)(lseq xdf0)

The LMS picks up the 'gcs remote message' over GSIPC for block 1/89233 at SCN [0x0.1ae0f0] and processes the BAST for it. Note the fairness logic: because the fairness counter reached its threshold (in 11.2.0.3 _fairness_threshold defaults to 2), the current block is downconverted (kjbdowncvt, KCL: F156: fairness downconvert) from Xcurrent to Scurrent:

Instance 2: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 2 0 3 1756658

Instance 2's LMS has built the CR block (kjbmslset(0x15c91.1)) and queued it on the send queue (GSIPC:SNDQ: enq msg 0x7f16e1c56490). Now have instance 1 read block (1/89233) once more, watching the CR server counters on instance 2:

Instance 2: SQL> select CURRENT_RESULTS,LIGHT_WORKS from v$cr_block_server; CURRENT_RESULTS LIGHT_WORKS --------------- ----------- 29273 437

Instance 1 session A: SQL> select * from test; ID ---------- 2 2 SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 3 1761942 3 1761932 1 0 3 1761520

Instance 2: SQL> select CURRENT_RESULTS,LIGHT_WORKS from v$cr_block_server; CURRENT_RESULTS LIGHT_WORKS --------------- ----------- 29274 437

Instance 1 foreground trace: select * from test END OF STMT PARSE #140336529675592:c=0,e=337,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335939668940051 EXEC #140336529675592:c=0,e=96,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335939668940204 WAIT #140336529675592: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1335939668940348 *** 2012-05-02 02:21:08.940 kclscrs: req=0 block=1/89233 2012-05-02 02:21:08.940676 : kjbcro[0x15c91.1 76896.0][5] *** 2012-05-02 02:21:08.940 kclscrs: req=0 typ=nowait-abort *** 2012-05-02 02:21:08.940 kclscrs: bid=1:3:1:0:f:21:0:0:10:0:0:0:1:2:4:1:20:0:0:0:c3:49:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:4:3:2:1:2:0:1f:0:4d:26:a3:52:0:0:0:0:c7:c:ca:62:c3:49:0:0:0:0:1:0:17:8e:47:76:1:2:dc:5:a9:fe:17:75:0:0:0:0:0:0:0:0:0:0:0:0:99:ed:0:0:0:0:0:0:10:0:0:0 2012-05-02 02:21:08.940799 : kjbcro[0x15c91.1 76896.0][5] 2012-05-02 02:21:08.940833 : GSIPC:GMBQ: buff 0xba0ee018, queue 0xbb79a7b8, pool 0x60013fa0, freeq 0, nxt 0xbb79a7b8, prv 0xbb79a7b8 2012-05-02 02:21:08.940859 : kjbsentscn[0x0.1ae28c][to 2] 2012-05-02 02:21:08.940870 : GSIPC:SENDM: send msg 0xba0ee088 dest x20001 seq 177810 type 36 tkts xff0000 mlen x1680198 2012-05-02 02:21:08.940976 : kjbmscr(0x15c91.1)reqid=0xa(req 0xa4ff30f8)(rinst 1)hldr 2(infosz 200)(lseq x2b8) 2012-05-02 02:21:08.941314 : GSIPC:KSXPCB: msg 0xba0ee088 status 30, type 36, dest 2, rcvr 1 *** 2012-05-02 02:21:08.941 kclwcrs: wait=0 tm=707 *** 2012-05-02 02:21:08.941 kclwcrs: got 1 blocks from ksxprcv 2012-05-02 02:21:08.941818 : kjbassume[0x15c91.1][sender 2][mymode x1][myrole x0][srole x0][flgs x0][spiscn 0x0.0][swscn 0x0.0] 2012-05-02 02:21:08.941852 : kjbrcvdscn[0x0.1ae28d][from 2][idx 2012-05-02 02:21:08.941871 : kjbrcvdscn[no bscn

Instance 1 asked for a CR copy at SCN [0x0.1ae28c] = 1761932 but what it received was the Xcurrent block at SCN 1ae28d = 1761933 (note kjbassume, and CURRENT_RESULTS going up by one while LIGHT_WORKS stayed flat). Instance 1 can build the SCN=1761932 CR copy itself from the current block it was handed, and the wait recorded is 'gc current block 2-way'.

Note that the foreground still issued a CR request (kjbcro), not a current-block request; it was instance 2's LMS that chose to ship the current block:

Instance 2 LMS trace: 2012-05-02 02:21:08.448743 : GSIPC:RCVD: ksxp msg 0x7f16e14a4398 sndr 1 seq 0.177810 type 36 tkts 0 2012-05-02 02:21:08.448778 : GSIPC:RCVD: watq msg 0x7f16e14a4398 sndr 1, seq 177810, type 36, tkts 0 2012-05-02 02:21:08.448798 : GSIPC:TKT: collect msg 0x7f16e14a4398 from 1 for rcvr -1, tickets 0 2012-05-02 02:21:08.448816 : kjbrcvdscn[0x0.1ae28c][from 1][idx 2012-05-02 02:21:08.448834 : kjbrcvdscn[no bscn <= rscn 0x0.1ae28c][from 1] 2012-05-02 02:21:08.448857 : GSIPC:TKT: dest (1:1) rtkt not acked 2 unassigned bufs 0 tkts 0 newbufs 0 2012-05-02 02:21:08.448875 : GSIPC:TKT: remove ctx dest (1:1) 2012-05-02 02:21:08.448970 : [kjmxmpm][type 36][seq 0.177810][msg 0x7f16e14a4398][from 1] 2012-05-02 02:21:08.448993 : kjbmpbast(0x15c91.1) reqid=0x6 (req 0xa4ff30f8)(reqinst 1)(reqid 10)(flags x0) *** 2012-05-02 02:21:08.449 kclcrrf: req=48054 block=1/89233 *** 2012-05-02 02:21:08.449 kcl_compress_block: compressed: 6 free space: 7680 2012-05-02 02:21:08.449085 : kjbsentscn[0x0.1ae28d][to 1] 2012-05-02 02:21:08.449142 : kjbdeliver[to 1][0xa4ff30f8][10][current 1] 2012-05-02 02:21:08.449164 : kjbmssch(reqlock 0xa4ff30f8,10)(to 1)(bsz 344) 2012-05-02 02:21:08.449183 : GSIPC:AMBUF: rcv buff 0x7f16e18bcec8, pool rcvbuf, rqlen 1102 *** 2012-05-02 02:21:08.449 kclccctx: cleanup copy 0x7f16e1d94838 *** 2012-05-02 02:21:08.449 kcltouched: touch seconds 3271 *** 2012-05-02 02:21:08.449 kclgrantlk: req=48054 2012-05-02 02:21:08.449347 : [kjmpmsgi:compl][type 36][msg 0x7f16e14a4398][seq 177810.0][qtime 0][ptime 1119] WAIT #0: nam='gcs remote message' ela= 568 waittime=1 poll=0 event=0 obj#=0 tim=1335939668449962 2012-05-02 02:21:08.450001 : GSIPC:RCVD: ksxp msg 0x7f16e1bb22a0 sndr 1 seq 0.177811 type 32 tkts 0 2012-05-02 02:21:08.450024 : GSIPC:RCVD: watq msg 0x7f16e1bb22a0 sndr 1, seq 177811, type 32, tkts 0 2012-05-02 02:21:08.450043 : GSIPC:TKT: collect msg 0x7f16e1bb22a0 from 1 for rcvr -1, tickets 0 2012-05-02 02:21:08.450060 : kjbrcvdscn[0x0.1ae28e][from 1][idx 2012-05-02 02:21:08.450078 : kjbrcvdscn[no bscn <= rscn 0x0.1ae28e][from 1] 2012-05-02 02:21:08.450097 : GSIPC:TKT: dest (1:1) rtkt not acked 3 unassigned bufs 0 tkts 0 newbufs 0 2012-05-02 02:21:08.450116 : GSIPC:TKT: remove ctx dest (1:1) 2012-05-02 02:21:08.450136 : [kjmxmpm][type 32][seq 0.177811][msg 0x7f16e1bb22a0][from 1] 2012-05-02 02:21:08.450155 : kjbmpocr(0xb0.6)seq 0x1,reqid=0x23e,(client 0x9fff7b58,0x1)(from 1)(lseq xdf4)

So instance 2's LMS did not build another CR copy; it shipped the current block to instance 1 (kjbdeliver ... current 1). And since the LIGHT_WORKS column of v$cr_block_server did not increase for this current block transfer, this is not the CR server's Light Work Rule kicking in. (The Light Work Rule, part of the CR server since 8i, lets the holder's LMS skip CR fabrication that would involve too much work: "If creating the consistent read version block involves too much work (such as reading blocks from disk), then the holder sends the block to the requestor, and the requestor completes the CR fabrication. The holder maintains a fairness counter of CR requests. After the fairness threshold is reached, the holder downgrades it to lock mode.")

So when does a CR request receive a current block? The block class matters. For a CR request against an undo block or an undo segment header block, the LMS generally ships the current block and lets the requester perform block cleanout and build the CR version itself. For ordinary data blocks, repeated CR requests push the holder's fairness counter up until the Xcurrent lock is downconverted to S, after which the LMS can simply ship the current version of the block.

To verify the role of the fairness counter, raise "_fairness_threshold" to 200. The Xcurrent block should then not be downconverted to Scurrent for a long while, and we can watch whether the LMS still ships the current version of the data block:

SQL> show parameter fair NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ _fairness_threshold integer 200

Instance 1: SQL> update test set id=id+1 where id=4; 1 row updated.

Instance 2: SQL> update test set id=id+1 where id=2; 1 row updated. SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1838166

Query from instance 1, then watch x$bh on instance 2:

instance 1: SQL> select * from test; ID ---------- 10 3

instance 2: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1883707 8 0

SQL> select * from test; ID ---------- 10 3

SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1883707 8 0

...................

SQL> / STATE CR_SCN_BAS ---------- ---------- 2 0 3 1883707 3 1883695

repeat cr request on Instance 1

SQL> / STATE CR_SCN_BAS ---------- ---------- 8 0 3 1883707 3 1883695

This confirms the role of the fairness counter: how soon a CR request against a data block starts being answered with the current block is governed by the "_fairness_threshold" downconvert, and with the threshold raised the holder keeps its lock far longer before the downgrade happens.
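For readers decoding the x$bh output above, this reference query is an editorial addition (the state codes shown are the commonly documented ones for 11g: 1 = XCUR, 2 = SCUR, 3 = CR, 8 = PI):

SQL> select state,
  2         decode(state, 0,'FREE', 1,'XCUR', 2,'SCUR', 3,'CR',
  3                       4,'READ', 8,'PI', to_char(state)) buffer_state,
  4         count(*) buffers
  5  from x$bh
  6  where file# = 1 and dbablk = 89233
  7  group by state;

The state 8 (PI, past image) rows that appear after the cross-instance updates are the holder's obsolete current copies, kept until the block is written to disk.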


  • Tools of the Trade

    - by Ajarn Mark Caldwell
I got pretty excited a couple of days ago when my new laptop arrived. "The new phone books are here! The new phone books are here! I'm a somebody!" - Steve Martin in The Jerk. It is a Dell Precision M4500 with an Intel Core i7 at 2.8 GHz running 64-bit Windows 7, with a 15.6" widescreen, 8 GB RAM, and a 256 GB SSD. For some of you high fliers this may be nothing to write home about, but compared to the 32-bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I'm coming from, it's a really nice step forward. I won't even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago. Let's just say that things have improved. One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees. Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on board a new employee and a new contract developer next week. But that's a different story for a different time.

The main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too. This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured. Keep in mind as you look through this list that I play many roles in our company. My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team. So, without further ado, here are the tools and some comments about why I use them:

• Virtual Clone Drive: Easily mount an ISO image as a DVD drive. This is particularly handy when you are downloading disk images from Microsoft for your tools.

• SQL Server 2008 R2 Developer Edition: We are migrating all of our active systems to SQL 2008 R2. Developer Edition has all the features of Enterprise Edition, but is intended for development use.

• SQL Server 2005 Developer Edition (BIDS only): The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server. For some reason, you can't use BIDS from 2008 to write reports for a 2005 server. There is some different format, and when you open 2005 reports in 2008 BIDS, it forces you to upgrade them, and they can no longer be uploaded to a 2005 server. Hopefully Microsoft will fix this soon, in some manner similar to how Visual Studio now allows you to pick which version of the .NET Framework you are coding against.

• Visual Studio 2010 Premium: All of our application development is in ASP.NET, and we might as well use the tool designed for it. I've used a version of Visual Studio going all the way back to VB 6.0 and Visual InterDev.

• Vault Professional Client: Several years ago we replaced Visual SourceSafe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it. It is very reliable with low overhead, perfect for a small to medium size development team. And being a small ISV, their support is exceptional.

• Red Gate Developer Bundle with the SQL Source Control update for Vault: I first used, and fell in love with, SQL Prompt shortly before Red Gate bought it, and then Red Gate's first release made me love it even more. SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraines trying to understand somebody else's code when their indenting was nonexistent or, worse, irrational. SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases. SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT. And the newest tool we are embracing: SQL Source Control. I blogged about it here (and here, and here) last December. This is really going to help us keep each developer's copy of the database in sync with the others.

• Fiddler: Helps you watch the whole traffic stream on web visits. I haven't used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs. It has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8.

• Funduc Search & Replace: Find any string anywhere in a mound of source code really, really fast. It does RegEx searches, if you understand that foreign language. It has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files). It provides an in-context preview of the search results. A fantastic tool, at a bargain price.

• SciTE: SciTE is a Scintilla-based text editor, and a fantastic, lightweight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files. It has language-specific syntax highlighting. I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems. Extremely handy are the options to View End of Line and View Whitespace. Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs? SciTE will quickly make that visible.

• Infragistics Controls: We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls. We will likely be implementing the Hierarchical Data Grid soon. Infragistics has control suites for WebForms, WinForms, Silverlight, and, coming soon, MVC/jQuery.

• WinZip, WITH the Command-Line add-in: The classic compression program, with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs; a small sketch of such a step follows this list. Our versioned build packages are zip files.

• XML Notepad: I haven't used this a lot myself, but one of my team really likes it for examining large XML files.

• LINQPad: Again, I haven't used this one a lot, but it was recommended to me for learning and practicing my LINQ skills, which will come in handy as we implement Entity Framework.

• SQL Sentry Plan Explorer: SQL Server Show Plan on steroids. Great for helping you focus on the parts of a large query that are of most importance. Also great for compressing the graphical plan into a more readable layout.

• Araxis Merge: A great diff and merge tool. SourceGear provides a good tool called DiffMerge that we use all the time, but occasionally I like the cross-edit capabilities of Araxis Merge. For a while, we also produced diff reports in HTML that showed all the changes that occurred between two releases. This was most important when we were putting out very small, but very important, hot fixes on a very politically hot system. The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases.

• Idera SQL Admin Toolset: A great collection of tools, including a password checker to help analyze your SQL Server for weak user passwords, and a Backup Status tool to quickly scan a large list of servers and databases and identify any that are overdue for backups. It is particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing. I also like Space Analyzer, for keeping an eye on disk space consumed by database files.

• Idera SQL Job Manager: This free tool provides a nice calendar view of SQL Server job schedules, but to a degree, you also get what you pay for. We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager. In the meantime, this at least gives me a good view of potential resource conflicts across multiple instances of SQL Server.

• DBFViewer 2000: I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate to SQL Server.

• Balsamiq Mockups: We are still in evaluation mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design. The interface looks hand-drawn, which definitely has some psychological benefits when communicating with users, too.

• FeedDemon: I have to stay on top of my WAY TOO MANY blog subscriptions somehow. I may read blogs on a couple of different computers, and FeedDemon's integration with Google Reader allows me to keep them all in sync. I don't particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them. FeedDemon solves this problem for me, and provides a multi-tabbed interface, which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article.

• Synergy+: In my office, I run four monitors across two computers, all with one mouse and keyboard. Synergy is the magic software that makes this work.

• TweetDeck: I'm not the most active tweeter in the world, but when I want to check in with the Twitterverse, this really helps. I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns set up to make it easy to monitor #sqlpass, #PASSProfDev, and short-term events like #sqlsat68.

Whew! That's a lot. No wonder it took me a couple of days to get everything set up the way I wanted it. Oh, that and actually getting some work accomplished at the same time. Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say: CONGRATULATIONS, you made it! I hope you'll find a new tool or two to make your work life a little easier.

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

Team Foundation Service is a great platform for .NET developers who are used to working with TFS on-premises. I've been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than with the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn't know I needed. Not to say that I wasn't previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup - well, that's just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work... sort of.

This will be the first in a three-part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a "Hello World" example.

Part 1: Good Enough Is Not Great
Part 2: A Model That Works: Branching and Multiple Deployment Environments
Part 3: Other Considerations: SQL, Custom Tasks, Etc.

Good Enough Is Not Great

There. I've said it. I certainly hope no one on the TFS team is offended, but it's the truth. Let's take a look under the hood and understand how it works, and then why it's not enough to handle real-world CD as-is.

How it works (note that I've skipped a couple of steps; I already have my accounts set up and something deployed to Azure):

The first step is to establish some OAuth magic between your Azure management portal and your TFS instance. You do this via the management portal. Once it's done, you have a new build process template in your TFS instance. (Image lifted from the documentation.) From here, you'll get the usual prompts for security, allowing access, etc. But you'll also get to pick which solution in your source control to build.

Here's what the bulk of the build definition looks like. All I've had to do is add in the solution to build (notice that mine is from a specific branch - Release - more on that later) and change the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it's all in the cloud, and I'm not waiting for my machine to compile and upload the package. (I also had to enable the build definition first - by default it is created in a disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You'll notice also that this build definition automatically put my code in the Staging slot of my Azure deployment - more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and "production" and "staging" in Azure. More on that later too.)

That's it. That's the default out-of-box experience. Easy, right? But it's full of room for improvement, so let's get into that...

The Problems

Nothing is perfect (except my code - it's always perfect), and neither is continuous deployment without a bit of work to help it fit your dev team's process. So what are the issues?

Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap). If I truly want to promote between environments (i.e. Nightly Build -> Stable QA -> Production), I likely have configuration changes between each environment, such as database connection strings, and this process (and the VIP swap) doesn't account for this. Yet.

Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch - Release - of my code. For the purposes of this example, I have adopted the "basic" branching strategy as defined by the ALM Rangers. This basically establishes a "Main" trunk from which you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don't want to roll to production automatically when you merge to the Release branch and check in (unless you like the thrill of it, and in that case, I like your style, cowboy...). Rather, you have nightly build and QA environments, or if you've adopted the feature-branch model, you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 mapping of solution to Azure deployment target.

Issue 3: SQL and other custom tasks. Let's be honest and address the elephant in the room: I need to get some sleep, because I see an elephant in the room. But seriously, I can't think of an application I have touched in the last 10 years that doesn't need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they've added Data Tier Applications. But let's be honest with ourselves again: no one really uses it, and it's not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn't fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it's completed. Developers - not just DBAs - also like to work with SQL in SQL tools, not in Visual Studio. I'm really picking on SQL because that's generally the biggest concern I hear. But we need to account for any custom tasks as well in the build process.

The Solutions... ?

We've taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I've overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app - any version, any time, to any environment.
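As a side note not from the original post: the manual VIP swap the author mentions could also be scripted at the time with the Azure Service Management PowerShell cmdlets. A minimal, hedged sketch, assuming the hosted service is named "MyService" and credentials have already been configured (e.g. via Import-AzurePublishSettingsFile):

    # Hedged sketch: swap the Staging and Production slots of a hosted
    # service from a script instead of clicking VIP swap in the portal.
    Select-AzureSubscription -SubscriptionName "MySubscription"
    Move-AzureDeployment -ServiceName "MyService"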

    Read the article

  • Xen won't start after it had been working

    - by Paul Tomblin
I've been setting up this Debian Stable system with a dom0 and 3 domUs. It was working fine for several days, and I'm almost ready to deploy it to the rack. But last night I shut it down with all three domUs still running for the first time, and today when I started it up, xend won't start.

In /var/log/messages, I have:

Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: blktapctrl: v1.0.0
Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [raw image (aio)]
Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [raw image (sync)]
Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [vmware image (vmdk)]
Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [ramdisk image (ram)]
Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [qcow disk (qcow)]
Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: couldn't find device number for 'blktap0'
Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Unable to start blktapctrl

and in /var/log/xen/xend.log, I have this:

[2010-04-18 12:46:32 3523] INFO (SrvDaemon:219) Xend exited with status 1.
[2010-04-18 13:01:34 4255] INFO (SrvDaemon:331) Xend Daemon started
[2010-04-18 13:01:34 4255] INFO (SrvDaemon:335) Xend changeset: unavailable.
[2010-04-18 13:01:34 4255] INFO (SrvDaemon:342) Xend version: Unknown.
[2010-04-18 13:01:34 4255] ERROR (SrvDaemon:353) Exception starting xend (no element found: line 1, column 0)
Traceback (most recent call last):
  File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvDaemon.py", line 345, in run
    servers = SrvServer.create()
  File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvServer.py", line 251, in create
    root.putChild('xend', SrvRoot())
  File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvRoot.py", line 40, in __init__
    self.get(name)
  File "/usr/lib/xen-3.2-1/lib/python/xen/web/SrvDir.py", line 82, in get
    val = val.getobj()
  File "/usr/lib/xen-3.2-1/lib/python/xen/web/SrvDir.py", line 52, in getobj
  File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvNode.py", line 30, in __init__
    self.xn = XendNode.instance()
  File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendNode.py", line 709, in instance
    inst = XendNode()
  File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendNode.py", line 164, in __init__
    saved_pifs = self.state_store.load_state('pif')
  File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendStateStore.py", line 104, in load_state
    dom = minidom.parse(xml_path)
  File "/usr/lib/python2.5/xml/dom/minidom.py", line 1915, in parse
    return expatbuilder.parse(file)
  File "/usr/lib/python2.5/xml/dom/expatbuilder.py", line 924, in parse
    result = builder.parseFile(fp)
  File "/usr/lib/python2.5/xml/dom/expatbuilder.py", line 211, in parseFile
    parser.Parse("", True)
ExpatError: no element found: line 1, column 0
[2010-04-18 13:01:34 4253] INFO (SrvDaemon:219) Xend exited with status 1.

Any clues as to what might be going wrong?
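A hedged diagnostic sketch, not part of the original question: the traceback dies in minidom.parse() while XendStateStore loads its saved 'pif' state, and "no element found: line 1, column 0" is what expat reports for an empty file, so an empty or truncated state XML (plausibly left behind by the shutdown with domUs still running) fits the symptoms. The state path below is an assumption based on xend's usual Debian layout:

    # Hedged sketch: look for zero-byte state files, move the state
    # directory aside, and let xend rebuild it on the next start.
    find /var/lib/xend/state -name '*.xml' -size 0
    mv /var/lib/xend/state /var/lib/xend/state.broken
    /etc/init.d/xend start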

    Read the article

  • Deleting entire lines in a text file based on a partial string match with Windows PowerShell

    - by Charles
So I have several large text files I need to sort through, and remove all occurrences of lines which contain a given keyword. So basically, if I have these lines:

This is not a test
This is a test
Maybe a test
Definitely not a test

And I run the script with 'not', I need to entirely delete lines 1 and 4. I've been trying with:

PS C:\Users\Admin> (Get-Content "D:\Logs\co2.txt") | Foreach-Object {$_ -replace "3*Program*", ""} | Set-Content "D:\Logs\co2.txt"

but it only replaces the 'Program' text and not the entire line.
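A hedged sketch of one standard way to do this (the file path and keyword are placeholders): filtering with Where-Object keeps only the lines that do not match, whereas -replace only edits text within a line.

    # Hedged sketch: drop every line containing the keyword, in place.
    $keyword = 'not'
    (Get-Content 'D:\Logs\co2.txt') |
        Where-Object { $_ -notmatch [regex]::Escape($keyword) } |
        Set-Content 'D:\Logs\co2.txt'

The parentheses around Get-Content matter: they force the file to be read completely before Set-Content rewrites it.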

    Read the article

  • Old operational master still thinks it is the "one"

    - by Doug
Hi there, I have a domain with 3 AD servers. For now I'll just call them:

AD01 (Win 2008 GC, Operations Master)
AD02 (Win 2008 GC)
AD03 (Win 2003 GC)

A couple of months ago there were some hardware issues with AD01, so the Operations Master, PDC, and Infrastructure Master roles were moved to AD02. All machines were on while this was happening:

AD01 (Win 2008 GC)
AD02 (Win 2008 GC, Operations Master)
AD03 (Win 2003 GC)

AD01 was then shut down for a month. Upon starting this machine up with replaced hardware (NIC and RAID card), I now have a weird problem:

AD01 still thinks it is the Operations Master, according to AD on the local box.
AD02 and AD03 think AD02 is the Operations Master, according to AD on both boxes.

When running DCDIAG on AD01 I get a number of issues (listed below).

When running "dcdiag /test:advertising" on AD01:

Doing primary tests
Testing server: Default-First-Site-Name\AD01
Starting test: Advertising
Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
......................... AD01 failed test Advertising
Running partition tests on : ForestDnsZones
Running partition tests on : DomainDnsZones
Running partition tests on : Schema
Running partition tests on : Configuration
Running partition tests on : domain
Running enterprise tests on : domain.local

When running "dcdiag" on AD01 I get the following errors (excerpt of the final output):

Testing server: Default-First-Site-Name\AD01
Starting test: Advertising
Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
......................... AD01 failed test Advertising
Starting test: FrsEvent
There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems.
Starting test: NCSecDesc
Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=domain,DC=local
Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=domain,DC=local
Starting test: Replications
[Replications Check,Replications Check] Inbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL"
[Replications Check,AD01] Outbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL"

So the problem appears to be that when I moved the Operations Master roles, AD01 never got the memo, and now that it's started up, the other AD servers don't think it's the boss anymore when it tries to replicate, etc. So I really need to manually update AD01 so that it knows who the Operations Master, Infrastructure Master, and PDC are - but I'm not having any luck. I've been googling for nearly a day and all solutions lead to "the cake is a lie". Your ninja skills will be greatly appreciated.
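A hedged sketch of a first diagnostic pass, built from the dcdiag output above rather than offered as a confirmed fix: compare what each DC believes about the role holders, then re-enable the replication that dcdiag reports as disabled, so AD01 can replicate in the role transfer it missed.

    REM Hedged sketch: run on each DC and compare the answers.
    netdom query fsmo

    REM These two commands are quoted directly from the dcdiag output;
    REM they clear the disable-replication flags on AD01.
    repadmin /options AD01 -DISABLE_INBOUND_REPL
    repadmin /options AD01 -DISABLE_OUTBOUND_REPL

    REM Then force a sync and re-check.
    repadmin /syncall AD01 /e /d
    dcdiag /test:advertising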

    Read the article

  • lighttpd virtual host config

    - by muncherelli
Trying to get these two vhosts set up correctly. I have two sites on this domain that I'm hosting: www.example.com/example.com and test.example.com. How do I get both of these to work in my vhost definitions? I have:

$HTTP["host"] == "test.example.com" {

and

$HTTP["host"] =~ "(^|.)example.com$" {

(I think this second one is for *.example.com.) It always goes to the second one, even if I am trying to go to test.example.com. I need to be able to host www.example.com/example.com and test.example.com.
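A hedged sketch of one common fix (document roots are placeholders, not from the question): in "(^|.)example.com$" the unescaped dot matches any character, so the pattern also matches test.example.com, and the later matching block wins. Anchoring the patterns so they cannot overlap keeps the subdomain separate:

    # Hedged sketch for lighttpd.conf: exact-match the subdomain...
    $HTTP["host"] == "test.example.com" {
        server.document-root = "/var/www/test.example.com/"
    }
    # ...and restrict the other block to example.com / www.example.com,
    # with the dots escaped so they only match literal dots.
    $HTTP["host"] =~ "^(www\.)?example\.com$" {
        server.document-root = "/var/www/example.com/"
    }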

    Read the article

  • 500 internal server error running php file in cgi-bin

    - by vvvvvvv
A 500 internal server error is shown when I access http://mysite.com/cgi-bin/test.php

test.php:

<p> title here</p>
<?php echo "hi"; ?>

The error log shows:

(8)Exec format error: exec of '/var/www/cgi-bin/test.php' failed
Premature end of script headers: test.php

Solved it by adding: AddHandler application/x-httpd-php .php
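For context, a hedged sketch of why that works (the directory path is taken from the error log; the rest is an assumption): the cgi-bin directory hands files to the CGI handler, which tries to exec them directly, and a PHP file with no shebang line produces exactly this "Exec format error". The AddHandler directive routes .php files to mod_php instead:

    # Hedged sketch for httpd.conf; in a .htaccess inside cgi-bin,
    # the AddHandler line alone (without the Directory wrapper) is enough.
    <Directory "/var/www/cgi-bin">
        AddHandler application/x-httpd-php .php
    </Directory>

An alternative, if the file really must run as a CGI program, is a shebang first line such as #!/usr/bin/php.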

    Read the article

  • How can I sort a document according to a substring in each line on Win7?

    - by Joey Hammer
How can I sort a text file according to hashtags on Windows 7? I have a long text (.txt format) which looks something like this:

Blah blah #Test
123123 #Really
Blah bluh #Really
klfdmngl #Test

I would like to conveniently, quickly and automatically be able to sort the text so that it looks like this:

Blah blah #Test
klfdmngl #Test
123123 #Really
Blah bluh #Really

I have to do this on a daily basis, so I would like to be able to do it in as few steps as possible.
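A hedged sketch using the PowerShell that ships with Windows 7 (file names are placeholders): sorting on the text after each line's last '#' groups lines that share a hashtag, and -Descending reproduces the #Test-before-#Really order of the example.

    # Hedged sketch: sort lines by their trailing hashtag.
    Get-Content 'input.txt' |
        Sort-Object { ($_ -split '#')[-1] } -Descending |
        Set-Content 'sorted.txt'

Saved as sort-tags.ps1, this becomes a one-step daily task.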

    Read the article

  • Reverse a .htaccess redirection rule

    - by Aahan Krish
Let me explain by example. Say I have this redirection rule in my .htaccess file:

RedirectMatch 301 ^/([^/]+)/([^/]+)/$ http://www.example.com/$2

What it basically does is redirect http://www.mysite.com/sports/test-post/ to http://www.mysite.com/test-post/. Now, how do I modify the .htaccess rule to do the opposite? (i.e. redirect http://www.mysite.com/test-post/ to http://www.mysite.com/sports/test-post/)
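A hedged sketch of the reversed rule (not from the question; the 'sports' segment has to be hard-coded, since the original rule discards it, and the negative lookahead keeps URLs already under /sports/ from redirecting in a loop):

    # Hedged sketch: /test-post/ -> /sports/test-post/, skipping /sports/...
    RedirectMatch 301 ^/(?!sports/)([^/]+)/$ http://www.mysite.com/sports/$1/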

    Read the article

  • htaccess for subdomain help

    - by Patrick
Usually I just use the online tools for mod_rewrite URL rules, but this one just wouldn't work.

Dynamic URL: http://sub.domain.com/index.php?page=index&name=test
Rewritten URL: http://sub.domain.com/test OR http://sub.domain.com/test/

My .htaccess:

RewriteRule ^([^/]+)/?$ index.php?page=index&name=$1 [L]

Instead of passing "test" for the variable name, I always get the value "index.php". Any gurus have any idea?
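A hedged sketch of the usual cause (an assumption, since the rest of the .htaccess isn't shown): after the first rewrite, Apache re-runs the rule set against the new URL, and "index.php" itself matches ^([^/]+)/?$, so name ends up as "index.php". Guarding the rule so it only fires for requests that don't map to a real file or directory stops the second pass from matching:

    RewriteEngine On
    # Hedged sketch: skip existing files and directories, so index.php
    # is never re-captured by the rule below.
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^([^/]+)/?$ index.php?page=index&name=$1 [L]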

    Read the article

  • How to execute home directory shell script file in php

    - by vvr
How do I execute the /home/scripts/test.sh file in PHP? Previously I placed the 'test.sh' file in /usr/bin and called it in my PHP file like this:

exec('test.sh ' . escapeshellarg($testString));

But for security reasons I moved the .sh file to the /home/scripts directory, and in my PHP I am now calling it like this:

exec('/home/scripts/test.sh ' . escapeshellarg($testString));

But it is not working now. Please suggest how I can achieve this.
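A hedged debugging sketch (the path comes from the question; the rest is an assumption): the usual culprits are the web-server user lacking execute permission on the script, or traverse (x) permission on /home or /home/scripts. Capturing exec()'s output, exit status, and stderr makes the real failure visible:

    <?php
    // Hedged sketch: run the script and surface any error message.
    $cmd = '/home/scripts/test.sh ' . escapeshellarg($testString) . ' 2>&1';
    exec($cmd, $output, $status);
    if ($status !== 0) {
        // "Permission denied" here typically means chmod +x is needed on
        // the script, or the directories above it aren't traversable.
        error_log("test.sh failed ($status): " . implode("\n", $output));
    }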

    Read the article

  • conemu start a given console with hotkey

    - by Car981
How could I assign a hotkey of my choice to start c:\cygwin\cygwin.bat?

Similarly, but a bit more difficult: how could I start c:\dir1\#VAR#\dir2\test.bat, where #VAR# is the name of a directory that varies, and the last (in alphabetical order) of all #VAR# directories should be chosen? So just to be clear: if c:\dir1\A\dir2\test.bat and c:\dir1\B\dir2\test.bat both exist, the console that should be opened when the hotkey is pressed is c:\dir1\B\dir2\test.bat. Thanks
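A hedged sketch of the variable-directory part (directory names are placeholders): a small wrapper batch file picks the alphabetically last subdirectory of c:\dir1 and launches its test.bat. The wrapper can then be added as a ConEmu task and bound to a hotkey under Settings > Keys & Macro.

    @echo off
    REM Hedged sketch: dir /b /ad /o:n lists subdirectories sorted by
    REM name ascending, so the loop leaves the last one in LAST.
    set "LAST="
    for /f "delims=" %%D in ('dir /b /ad /o:n "c:\dir1"') do set "LAST=%%D"
    if defined LAST call "c:\dir1\%LAST%\dir2\test.bat"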

    Read the article

  • Importing CSV file to SQL Server...

    - by sam
Hi guys, I am trying to import a CSV file to a SQL Server database, with no success. I am still a newbie to SQL Server. Thanks.

Operation stopped...

Initializing Data Flow Task (Success)
Initializing Connections (Success)
Setting SQL Command (Success)
Setting Source Connection (Success)
Setting Destination Connection (Success)
Validating (Success)
Messages: Warning 0x80049304: Data Flow Task 1: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console. (SQL Server Import and Export Wizard)

Prepare for Execute (Success)
Pre-execute (Success)
Messages: Information 0x402090dc: Data Flow Task 1: The processing of file "D:\test.csv" has started. (SQL Server Import and Export Wizard)

Executing (Error)
Messages:
Error 0xc002f210: Drop table(s) SQL Task 1: Executing the query "drop table [dbo].[test] " failed with the following error: "Cannot drop the table 'dbo.test', because it does not exist or you do not have permission.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. (SQL Server Import and Export Wizard)
Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column ""Code"" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)
Error 0xc020902a: Data Flow Task 1: The "output column ""Code"" (38)" failed because truncation occurred, and the truncation row disposition on "output column ""Code"" (38)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard)
Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "D:\test.csv" on data row 21. (SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Source - test_csv" (1) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. (SQL Server Import and Export Wizard)

Copying to [dbo].[test] (Stopped)
Post-execute (Success)
Messages:
Information 0x402090dd: Data Flow Task 1: The processing of file "D:\test.csv" has ended. (SQL Server Import and Export Wizard)
Information 0x402090df: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has started. (SQL Server Import and Export Wizard)
Information 0x402090e0: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has ended. (SQL Server Import and Export Wizard)
Information 0x4004300b: Data Flow Task 1: "component "Destination - test" (70)" wrote 0 rows. (SQL Server Import and Export Wizard)
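A hedged reading of those messages, plus a sketch (the CSV's real column list isn't shown, so the columns below are placeholders): the wizard was set to drop and re-create dbo.test, but the table didn't exist yet, and the "Code" values in the file are longer than the destination column, so row 21 fails on truncation. One common workaround is to create the table up front with generous widths and load the file with BULK INSERT:

    -- Hedged sketch: column names/types must be adjusted to the real CSV.
    CREATE TABLE dbo.test (
        Code  NVARCHAR(255) NULL,  -- wide enough to avoid truncation
        Value NVARCHAR(255) NULL
    );

    BULK INSERT dbo.test
    FROM 'D:\test.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);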

    Read the article

  • Django admin site populated combo box based on input

    - by user292652
Hi, I have the following model:

class Match(models.Model):
    Team_one = models.ForeignKey('Team', related_name='Team_one')
    Team_two = models.ForeignKey('Team', related_name='Team_two')
    Stadium = models.CharField(max_length=255, blank=True)
    Start_time = models.DateTimeField(auto_now_add=False, auto_now=False, blank=True, null=True)
    Rafree = models.CharField(max_length=255, blank=True)
    Judge = models.CharField(max_length=255, blank=True)
    Winner = models.ForeignKey('Team', related_name='winner', blank=True)
    updated = models.DateTimeField('update date', auto_now=True)
    created = models.DateTimeField('creation date', auto_now_add=True)

    def save(self, force_insert=False, force_update=False):
        # Delegate to the parent save; a bare "pass" here would keep
        # Match objects from ever being written to the database.
        super(Match, self).save(force_insert, force_update)

    @models.permalink
    def get_absolute_url(self):
        return ('view_or_url_name')

class MatchAdmin(admin.ModelAdmin):
    list_display = ('Team_one', 'Team_two', 'Winner')
    search_fields = ['Team_one', 'Team_two']

admin.site.register(Match, MatchAdmin)

I was wondering: is there a way to populate the Winner combo box once Team one and Team two are selected in the admin site?

    Read the article

  • Overwriting the content from one MOSS content database to another

    - by 78lro
We have a content database on our live MOSS server. It contains one site collection with several sub-sites. I'm using the stsadm export command to produce a .cmp file, then moving this to our test server in a different farm. I then want to import this content into the content database on our test farm, but using the stsadm import command leaves me with all the existing test data as well as the live data. I tried detaching the existing content database from test in Central Administration and creating a new empty one, to then run the import against that, but the import failed, as obviously there's no root site in the empty database. The aim is to have the data on test look like live, clearing out all the test data. Can anyone suggest a good approach to this type of problem?
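A hedged sketch of one sequence that is commonly used for this (URLs and account names are placeholders, and it should be tried on a throwaway site first): the import needs an existing site collection to land in, so rather than importing into an empty database, the usual approach is to delete and re-create the root site collection and import over that.

    stsadm -o export -url http://liveserver/ -filename live.cmp -includeusersecurity
    stsadm -o deletesite -url http://testserver/
    stsadm -o createsite -url http://testserver/ -owneremail admin@example.com -ownerlogin DOMAIN\admin
    stsadm -o import -url http://testserver/ -filename live.cmp -includeusersecurity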

    Read the article

  • What does a Software Developer actually do?

    - by chobo2
Hi, I am graduating with my Computer Science degree a few weeks from now!! I have started to look for my first job. For the last couple of years I've gotten really into web programming (ASP.NET). My first choice would be a junior ASP.NET MVC developer position, but I don't think any companies in my area use MVC yet, or if they do, they are not hiring. So my second choice would be a junior ASP.NET WebForms developer. My other choices after that would be forms applications and mobile applications using .NET and C#. As you can see, I am looking for something with .NET. I spent the last couple of years doing .NET projects for school and in my free time, I love the language, and it would pain me right now to switch to something like PHP.

So now I have found a posting in my area for an Entry Software Developer. I like the fact that they are using .NET and that it is an entry-level job (I have never worked in this industry and never had more than something like a tutoring job, so I don't want to aim for intermediate jobs yet).

Posting:

Are you looking for an exciting challenge within a dynamic, people-oriented culture where you can launch your technical career? Company Name Inc. is a technology consulting company, located in Canada, that designs, develops, and delivers real-time interactive applications accessed via the Internet as well as back-end tools to support these applications. Company Name provides a combination of out-of-the-box and customized solutions to an expanding list of partners and customers.

POSITION SUMMARY

As a member of our team, the successful candidate will be responsible for helping us increase the quality and stability of our software systems by working jointly and directly with both the Software Development teams and the QA Team. The primary mission of this role will be to substantially enhance our test automation suite. The incumbent will design and program automated tests (unit, integration, system, stress and load) in Visual Studio using C# and will develop sound processes that help us identify and resolve defects as early as possible. The successful incumbent will help us improve and enhance system functionality, reliability, performance and scalability.

This role is specifically designed for an eager, bright, new graduate who is looking for a stepping stone into a software engineering role. We promote from within and invite new graduates to apply for this important position - which may lead to new opportunities. We also offer a generous professional development plan to help you on your way. You will be a key part of a team of experts that is responsible for improving the quality of our software by:

• Designing, writing, and executing test plans and programmatic tests in Visual Studio using C# and NUnit for functional testing of our code, new features, regression, and performance test procedures.
• Working with the engineers to design and build the stress and load testing framework which emulates tens and even hundreds of thousands of concurrent users via a distributed network interfacing with our Load Testing Lab.
• Interfacing with both the Development Team and the QA Team to ensure risks are identified and managed.
• Mentoring and leading the QA Team in programmatic test automation technologies and tools.

MUST HAVE SKILLS / QUALIFICATIONS:
• Diploma or higher degree in Computer Science, or equivalent formal training.
• Fundamental C# programming skills.
• Knowledge of Internet technologies and Microsoft Windows platforms.
• Knowledge of PC hardware.
• Excellent communication skills (both oral and written).
• Self-starter who takes initiative, requires minimal supervision, can handle multiple simultaneous tasks.
• Detail-oriented, able to concentrate, and work quickly.
• Proven diagnostic, analytical, and problem solving skills.

NICE TO HAVE SKILLS:
• Exposure to Visual Studio Team System or Visual Studio Test Edition.
• Exposure to C# using NUnit.
• Exposure to NUnit, HTTPUnit, and other automation tool suites.
• Exposure to Performance/Stress/Load Testing.
• Good understanding of relational databases (MS SQL Server).
• Familiar with video and online multi-player games.

As part of our team you will have the opportunity to work with a supportive team of experts, drive your own success, and ride the wave as we continually expand our team of experts. If you are interested in this opportunity, please send your resume to [email protected] with "Entry Level Software Developer" in the subject line.

So that is the posting. To me it sounds like a QA job. I don't have anything against QA jobs, but a lot of them seem to be just clicking buttons and running scripts. Is this what a typical software developer does? I am so on the fence about applying for this job. On one side, I am not sure how much programming I would be doing. I want to be programming at least half the time, otherwise my skills will never improve, since I will never be programming in teams and so on. At the same time, I have no experience in the industry, so on the other side I am thinking: just go for it, and then maybe a year later try to get a full programming job (provided that I got the job). Yet if I am not programming in that job, then that experience will not help me for the next job I look for, as I will be back at square one.

    Read the article

  • Do You Know How OUM defines the four basic types of business system testing performed on a project? Why not test your knowledge?

    - by user713452
Testing is perhaps the most important process in the Oracle® Unified Method (OUM). That makes it all the more important for practitioners to have a common understanding of the various types of functional testing referenced in the method, and to use the proper terminology when communicating with each other about testing activities. OUM identifies four basic types of functional testing, which is sometimes referred to as business system testing. The basic functional testing types referenced by OUM are:

1. Unit Testing
2. Integration Testing
3. System Testing
4. Systems Integration Testing

See if you can match the following definitions with the appropriate type above:

A. This type of functional testing is focused on verifying that interfaces/integration between the system being implemented (i.e. the System under Discussion (SuD)) and external systems function as expected.

B. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the several custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) interact with one another as designed.

C. This type of functional testing is focused on verifying that the functionality within the system being implemented (i.e. the System under Discussion (SuD)) functions as expected. This includes out-of-the-box functionality delivered with Commercial Off-The-Shelf (COTS) applications, as well as any custom components developed to address gaps in functionality.

D. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the individual custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) function as designed.

Check your answers below:

1. (D)
2. (B)
3. (C)
4. (A)

If you matched all of the functional testing types to their definitions correctly, then congratulations! If not, you can find more information in the Testing Process Overview and the Testing Task Overviews in the OUM Method Pack.

    Read the article

  • ISAPI filter crashes IIS

    - by test
Hi, I have created an ISAPI filter. It works fine on the development server and the SIT server, but on the production server it doesn't work. The Event Viewer shows the following log entry:

Reporting queued error: faulting application w3wp.exe, version 6.0.3790.2825, faulting module msvcr80.dll, version 8.0.50727.3053, fault address 0x00046039.

    Read the article

  • How to find the filename of a script being run when it is executed from a symlink on linux

    - by Phil Boltt
Hi, if I have a Python script that is executed via a symlink, is there a way that I can find the path to the script rather than the symlink? I've tried using the methods suggested in this question, but they always return the path to the symlink, not the script. For example, when this is saved as my "/usr/home/philboltt/scripts/test.py":

#!/usr/bin/python
import sys
print sys.argv[0]
print __file__

and I then create this symlink:

ln -s /usr/home/philboltt/scripts/test.py /usr/home/philboltt/test

and execute the script using /usr/home/philboltt/test, I get the following output:

/usr/home/philboltt/test
/usr/home/philboltt/test

Thanks!
Phil
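A hedged sketch using only the standard library: os.path.realpath() resolves every symlink component of a path, so applying it to __file__ should yield the script's real location rather than the symlink's.

    #!/usr/bin/python
    # Hedged sketch: print the canonical path of the running script.
    import os
    print os.path.realpath(__file__)
    # And the directory that actually contains the script:
    print os.path.dirname(os.path.realpath(__file__))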

    Read the article

  • Verizon Wireless Supports its Mission-Critical Employee Portal with MySQL

    - by Bertrand Matthelié
Verizon Wireless, the #1 mobile carrier in the United States, operates the nation's largest 3G and 4G LTE network, with the most subscribers (109 million) and the highest revenue ($70.2 billion in 2011). Verizon Wireless built the first wide-area wireless broadband network and delivered the first wireless consumer 3G multimedia service in the US, and offers global voice and data services in more than 200 destinations around the world. To support 4.2 million daily wireless transactions and 493,000 call and email transactions produced by 94.2 million retail customers, Verizon Wireless employs over 78,000 people, with area headquarters across the United States.

The Business Challenge

Seeing the stupendous rise of social media, video streaming, live broadcasting and the like, which redefined the scope of technology, Verizon Wireless, as a technology-savvy company, wanted to provide a platform where its employees could network socially, view and host microsites, stream live videos, blog, and get the latest news. The IT team at Verizon Wireless had abundant experience with various technology platforms supporting the huge number of applications in the company. However, open-source products weren't yet widely used in the organization, and the team had the ambition to adopt such technologies and see if the architecture could meet Verizon Wireless' rigid requirements. After evaluating a few solutions, the IT team decided to use the LAMP stack for Vzweb, its mission-critical, 24x7 employee portal, with Drupal as the front end and MySQL on Linux as the backend, and to use MySQL for a few other internal websites as well.

The MySQL Solution

Verizon Wireless started to support its employee portal, Vzweb, its online streaming website, Vztube, and its internal wiki pages, Vzwiki, with MySQL 5.1 in 2010. Vzweb is the main internal communication channel for Verizon Wireless, while Vztube regularly hosts important company-wide webcasts for executive-level announcements, so both channels have to be live and accessible all the time for the 78,000 employees across the United States. However, during the initial deployment of the MySQL-based intranet, the application experienced performance issues. High connection spikes occurred, causing slow user response times, and the IT team applied workarounds to keep the service running. A number of key performance indicators (KPIs) for the infrastructure were identified, and the operational framework was redesigned to support a more robust website and conform to the 99.985% uptime SLA (Service-Level Agreement).

The MySQL DBA team made a series of upgrades in MySQL:

Step 1: Moved from the MyISAM to the InnoDB storage engine in 2010.
Step 2: Upgraded to the latest MySQL 5.1.54 release in 2010.
Step 3: Upgraded from MySQL 5.1 to the latest GA release, MySQL 5.5, in 2011, leveraging the MySQL Thread Pool, part of MySQL Enterprise Edition, to scale better.

After making those changes, the team saw a much better response time during high-concurrency use cases, and achieved an amazing performance improvement of 1400%!

In January 2011, Verizon CEO Ivan Seidenberg announced the iPhone launch during the opening keynote at the Consumer Electronics Show (CES) in Las Vegas, and that presentation was streamed live to the company's 78,000 employees. The event was broadcast flawlessly with MySQL as the database. Later in 2011, Hurricane Irene struck the East Coast of the United States and caused major human and financial losses. During the hurricane, the team directed more traffic to its West Coast data center to avoid potential infrastructure damage on the East Coast. The transition was executed smoothly, and even though the geographical distance became longer for the East Coast users, there was no impact on the performance of Vzweb and Vztube, and the SLA goal was achieved.

"MySQL is the key component of Verizon Wireless' mission-critical employee portal application," said Shivinder Singh, senior DBA at Verizon Wireless. "We achieved 1400% performance improvement by moving from the MyISAM storage engine to InnoDB, upgrading to the latest GA release MySQL 5.5, and using the MySQL Thread Pool to support high concurrent user connections. MySQL has become part of our IT infrastructure, on which potentially more future applications will be built."

To learn more about MySQL Enterprise Edition, get our Product Guide.
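A hedged sketch of the two configuration moves described above (the table name and pool size are placeholders, and the thread pool plugin ships only with MySQL Enterprise Edition, so the exact plugin file can vary by platform):

    -- Hedged sketch: convert one table at a time from MyISAM to InnoDB.
    ALTER TABLE portal_sessions ENGINE=InnoDB;

    # my.cnf fragment to enable the thread pool in MySQL 5.5 Enterprise:
    [mysqld]
    plugin-load=thread_pool.so
    thread_pool_size=16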

    Read the article

< Previous Page | 150 151 152 153 154 155 156 157 158 159 160 161  | Next Page >