Search Results

Search found 27766 results on 1111 pages for 'bad idea jeans'.

  • SQL query joining rows with same value

    - by user1285737
    I need to write a query that creates a view calculating the total cost of each sale, taking into account the quantity and price of each purchased item. The view should return the debit number and the total cost, and each debit number should occur only once in the result. Thanks in advance.

    Table ITEM:
    ID   NAME   PRICE
    118  Jeans  100
    120  Towel   20
    127  Shirt   55

    Table DEBIT:
    DEBIT   ITEM  Quantity
    100581  118   5
    100581  120   1
    100586  127   5
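
    A hedged sketch of one way to express this (the view name is invented; the table and column names are the ones given above): join DEBIT to ITEM, multiply quantity by price, and group by the debit number so that each debit appears exactly once.

        -- Total cost per debit; "sale_totals" is a made-up view name.
        CREATE VIEW sale_totals AS
        SELECT d.DEBIT,
               SUM(d.Quantity * i.PRICE) AS total_cost
        FROM   DEBIT d
               JOIN ITEM i ON i.ID = d.ITEM
        GROUP  BY d.DEBIT;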

    Read the article

  • Comparing pitches with digital audio

    - by user2250569
    I am working on an application that will compare musical notes with digital audio. My first idea was to analyze a wav file (or sound in real time) with a polyphonic pitch-detection algorithm, extract the notes and chords from the file, and then compare them with the notes in my dataset. I went through a lot of pages, and it seems to be a lot of hard work because the existing implementations and algorithms focus mainly (or only) on monophonic sound. Now I have the idea of doing this the opposite way. In the dataset I have, for example, the note A4, or a better example, the chord A4 B4 H4. My idea is to generate some waveform (or whatever else might work) from this note or chord and then compare it with the piece of digital audio. Is this a good idea? Is it an easier or a harder solution? If it is workable, can you recommend how to do it?
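
    A minimal sketch of the "generate some waveform from this note" idea, in Python with numpy: synthesize a sine tone for each note of the chord at its equal-temperament frequency and mix them into a reference signal. This only illustrates producing the waveform; it is not a full solution for comparing it against polyphonic audio.

        import numpy as np

        # Equal-temperament frequencies; extend as needed.
        NOTE_FREQS = {"A4": 440.00, "B4": 493.88, "C5": 523.25}

        def synth_chord(notes, duration=1.0, sr=44100):
            """Return a mono waveform with one sine per note, mixed and normalized."""
            t = np.arange(int(duration * sr)) / sr
            wave = sum(np.sin(2 * np.pi * NOTE_FREQS[n] * t) for n in notes)
            return wave / len(notes)  # keep amplitude within [-1, 1]

        reference = synth_chord(["A4", "B4"])  # signal to compare against the recording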

    Read the article

  • Nginx and client certificates from hierarchical OpenSSL-based certification authorities

    - by Fmy Oen
    I'm trying to set up root certification authority, subordinate certification authority and to generate the client certificates signed by any of this CA that nginx 0.7.67 on Debian Squeeze will accept. My problem is that root CA signed client certificate works fine while subordinate CA signed one results in "400 Bad Request. The SSL certificate error". Step 1: nginx virtual host configuration: server { server_name test.local; access_log /var/log/nginx/test.access.log; listen 443 default ssl; keepalive_timeout 70; ssl_protocols SSLv3 TLSv1; ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; ssl_client_certificate /etc/nginx/ssl/client.pem; ssl_verify_client on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 5m; location / { proxy_pass http://testsite.local/; } } Step 2: PKI infrastructure organization for both root and subordinate CA (based on this article): # mkdir ~/pki && cd ~/pki # mkdir rootCA subCA # cp -v /etc/ssl/openssl.cnf rootCA/ # cd rootCA/ # mkdir certs private crl newcerts; touch serial; echo 01 > serial; touch index.txt; touch crlnumber; echo 01 > crlnumber # cp -Rvp * ../subCA/ Almost no changes was made to rootCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/rootca.crt # The CA certificate ... private_key = $dir/private/rootca.key # The private key and to subCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/subca.crt # The CA certificate ... private_key = $dir/private/subca.key # The private key Step 3: Self-signed root CA certificate generation: # openssl genrsa -out ./private/rootca.key -des3 2048 # openssl req -x509 -new -key ./private/rootca.key -out certs/rootca.crt -config openssl.cnf Enter pass phrase for ./private/rootca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:rootca Email Address []: Step 4: Subordinate CA certificate generation: # cd ../subCA # openssl genrsa -out ./private/subca.key -des3 2048 # openssl req -new -key ./private/subca.key -out subca.csr -config openssl.cnf Enter pass phrase for ./private/subca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:subca Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: Step 5: Subordinate CA certificate signing by root CA certificate: # cd ../rootCA/ # openssl ca -in ../subCA/subca.csr -extensions v3_ca -config openssl.cnf Using configuration from openssl.cnf Enter pass phrase for ./private/rootca.key: Check that the request matches the signature Signature ok Certificate Details: Serial Number: 1 (0x1) Validity Not Before: Feb 4 10:49:43 2013 GMT Not After : Feb 4 10:49:43 2014 GMT Subject: countryName = AU stateOrProvinceName = Some-State organizationName = Internet Widgits Pty Ltd commonName = subca X509v3 extensions: X509v3 Subject Key Identifier: C9:E2:AC:31:53:81:86:3F:CD:F8:3D:47:10:FC:E5:8E:C2:DA:A9:20 X509v3 Authority Key Identifier: keyid:E9:50:E6:BF:57:03:EA:6E:8F:21:23:86:BB:44:3D:9F:8F:4A:8B:F2 DirName:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca serial:9F:FB:56:66:8D:D3:8F:11 X509v3 Basic Constraints: CA:TRUE Certificate is to be certified until Feb 4 10:49:43 2014 GMT (365 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y ... # cd ../subCA/ # cp -v ../rootCA/newcerts/01.pem certs/subca.crt Step 6: Server certificate generation and signing by root CA (for nginx virtual host): # cd ../rootCA # openssl genrsa -out ./private/server.key -des3 2048 # openssl req -new -key ./private/server.key -out server.csr -config openssl.cnf Enter pass phrase for ./private/server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:test.local Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in server.csr -out certs/server.crt -config openssl.cnf Step 7: Client #1 certificate generation and signing by root CA: # openssl genrsa -out ./private/client1.key -des3 2048 # openssl req -new -key ./private/client1.key -out client1.csr -config openssl.cnf Enter pass phrase for ./private/client1.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #1 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client1.csr -out certs/client1.crt -config openssl.cnf Step 8: Client #1 certificate converting to PKCS12 format: # openssl pkcs12 -export -out certs/client1.p12 -inkey private/client1.key -in certs/client1.crt -certfile certs/rootca.crt Step 9: Client #2 certificate generation and signing by subordinate CA: # cd ../subCA/ # openssl genrsa -out ./private/client2.key -des3 2048 # openssl req -new -key ./private/client2.key -out client2.csr -config openssl.cnf Enter pass phrase for ./private/client2.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #2 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client2.csr -out certs/client2.crt -config openssl.cnf Step 10: Client #2 certificate converting to PKCS12 format: # openssl pkcs12 -export -out certs/client2.p12 -inkey private/client2.key -in certs/client2.crt -certfile certs/subca.crt Step 11: Passing server certificate and private key to nginx (performed with OS superuser privileges): # cd ../rootCA/ # cp -v certs/server.crt /etc/nginx/ssl/ # cp -v private/server.key /etc/nginx/ssl/ Step 12: Passing root and subordinate CA certificates to nginx (performed with OS superuser privileges): # cat certs/rootca.crt > /etc/nginx/ssl/client.pem # cat ../subCA/certs/subca.crt >> /etc/nginx/ssl/client.pem client.pem file look like this: # cat /etc/nginx/ssl/client.pem -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) ... -----BEGIN CERTIFICATE----- MIID4DCCAsigAwIBAgIBATANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTA0OTQzWhcNMTQwMjA0 ... -----END CERTIFICATE----- It looks like everything is working fine: # service nginx reload # Reloading nginx configuration: Enter PEM pass phrase: # nginx. # Step 13: Installing *.p12 certificates in browser (Firefox in my case) gives the problem I've mentioned above. Client #1 = 200 OK, Client #2 = 400 Bad request/The SSL certificate error. Any ideas what should I do? 
Update 1: Results of SSL connection test attempts: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/rootCA/certs/client1.crt -key ~/pki/rootCA/private/client1.key -showcerts Enter pass phrase for tmp/testcert/client1.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- Certificate chain 0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIIDpjCCAo6gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTEwNjAzWhcNMTQwMjA0 ... -----END CERTIFICATE----- 1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- --- Server certificate subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca --- Acceptable client certificate CA names /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca --- SSL handshake has read 3395 bytes and written 2779 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: zlib compression Expansion: zlib compression SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 15BFC2029691262542FAE95A48078305E76EEE7D586400F8C4F7C516B0F9D967 Session-ID-ctx: Master-Key: 23246CF166E8F3900793F0A2561879E5DB07291F32E99591BA1CF53E6229491FEAE6858BFC9AACAF271D9C3706F139C7 Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None TLS session ticket: 0000 - c2 5e 1d d2 b5 6d 40 23-b2 40 89 e4 35 75 70 07 .^...m@#[email protected]. 0010 - 1b bb 2b e6 e0 b5 ab 10-10 bf 46 6e aa 67 7f 58 ..+.......Fn.g.X 0020 - cf 0e 65 a4 67 5a 15 ba-aa 93 4e dd 3d 6e 73 4c ..e.gZ....N.=nsL 0030 - c5 56 f6 06 24 0f 48 e6-38 36 de f1 b5 31 c5 86 .V..$.H.86...1.. ... 0440 - 4c 53 39 e3 92 84 d2 d0-e5 e2 f5 8a 6a a8 86 b1 LS9.........j... Compression: 1 (zlib compression) Start Time: 1359989684 Timeout : 300 (sec) Verify return code: 0 (ok) --- Everything seems fine with Client #2 and root CA certificate but request returns 400 Bad Request error: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 ... 
Compression: 1 (zlib compression) Start Time: 1359989989 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request Server: nginx/0.7.67 Date: Mon, 04 Feb 2013 15:00:43 GMT Content-Type: text/html Content-Length: 231 Connection: close <html> <head><title>400 The SSL certificate error</title></head> <body bgcolor="white"> <center><h1>400 Bad Request</h1></center> <center>The SSL certificate error</center> <hr><center>nginx/0.7.67</center> </body> </html> closed Verification fails with Client #2 certificate and subordinate CA certificate: # openssl s_client -connect test.local:443 -CAfile ~/pki/subCA/certs/subca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify error:num=19:self signed certificate in certificate chain verify return:0 ... Compression: 1 (zlib compression) Start Time: 1359990354 Timeout : 300 (sec) Verify return code: 19 (self signed certificate in certificate chain) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Still getting 400 Bad Request error with concatenated CA certificates and Client #2 (but still everything ok with Client #1): # cat certs/rootca.crt ../subCA/certs/subca.crt > certs/concatenatedca.crt # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/concatenatedca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- ... Compression: 1 (zlib compression) Start Time: 1359990772 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Update 2: I've managed to recompile nginx with enabled debug. 
Here is the part of successfull conection by Client #1 track: 2013/02/05 14:08:23 [debug] 38701#0: *119 accept: <MY IP ADDRESS> fd:3 2013/02/05 14:08:23 [debug] 38701#0: *119 event timer add: 3: 60000:2856497512 2013/02/05 14:08:23 [debug] 38701#0: *119 kevent set event: 3: ft:-1 fl:0025 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28805200:660 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28834400:1024 2013/02/05 14:08:23 [debug] 38701#0: *119 posix_memalign: 28860000:4096 @16 2013/02/05 14:08:23 [debug] 38701#0: *119 http check ssl handshake 2013/02/05 14:08:23 [debug] 38701#0: *119 https ssl handshake: 0x16 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL server name: "test.local" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL handshake handler: 0 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:1, subject:"/C=AU /ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #1",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 524 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http request line: "GET / HTTP/1.1" And here is the part of unsuccessfull conection by Client #2 track: 2013/02/05 13:51:34 [debug] 38701#0: *112 accept: <MY_IP_ADDRESS> fd:3 2013/02/05 13:51:34 [debug] 38701#0: *112 event timer add: 3: 60000:2855488975 2013/02/05 13:51:34 [debug] 38701#0: *112 kevent set event: 3: ft:-1 fl:0025 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28805200:660 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28834400:1024 2013/02/05 13:51:34 [debug] 38701#0: *112 posix_memalign: 28860000:4096 @16 2013/02/05 13:51:34 [debug] 38701#0: *112 http check ssl handshake 2013/02/05 13:51:34 [debug] 38701#0: *112 https ssl handshake: 0x16 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL server name: "test.local" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:20, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:27, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 
38701#0: *112 verify:1, error:27, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #2",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 13:51:34 [debug] 38701#0: *112 http process request line 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 524 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 http request line: "GET / HTTP/1.1" So I'm getting OpenSSL error #20 and then #27. According to verify documentation: 20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY: unable to get local issuer certificate the issuer certificate could not be found: this occurs if the issuer certificate of an untrusted certificate cannot be found. 27 X509_V_ERR_CERT_UNTRUSTED: certificate not trusted the root CA is not marked as trusted for the specified purpose.
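
    One nginx directive worth checking here (it does not appear in the configuration above, so treat this as an assumption rather than a confirmed fix): ssl_verify_depth defaults to 1, which is enough for a client certificate issued directly by a trusted root but not for one issued through an intermediate CA. A minimal sketch of the change to the server block:

        # Hypothetical addition: allow a two-level chain
        # (client cert -> subca -> rootca). The default ssl_verify_depth is 1.
        ssl_client_certificate /etc/nginx/ssl/client.pem;
        ssl_verify_client on;
        ssl_verify_depth 2;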

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    The term "client" used here does not refer to the client's browser, but to a client server.
    Before-cache workflow: 1. client makes an HTTP request --> 2. server processes it --> 3. server stores the parsed results in memcache for next use (cached indefinitely) --> 4. server returns the results to the client --> 5. client gets the result and stores it in its own local memcache with a TTL.
    After-cache workflow: 1. another client makes an HTTP request --> 2. memcache hit: the cached results are returned to the client --> 3. client gets the result and stores it in its own local memcache with a TTL.
    (TTL = time to live.) It is possible for me to know when the data was updated and to expire the relevant memcache entries accordingly. However, the client-side cache TTL has pitfalls: any data update before the TTL expires is not picked up by the client memcache; conversely, where there is no update, the client memcache still expires after the TTL; and the first request (or concurrent requests) after the TTL expires gets throttled because it has to repeat the before-cache workflow. When a client needs several HTTP requests for a single web page, this can be very bad for performance. The ideal solution would be for the client to cache indefinitely until further notice. Here are three proposals for that "further notice":
    Proposal 1: Make use of an HTTP header (current implementation). 1. Client sends an HTTP request with a last-modified-time header. 2. Server checks whether the last data-modified time equals the last cache time and, if so, returns status 304. 3. Client decides on further processing based on the header. Good: saves some parsing for the client; less data transfer. Bad: firing an HTTP request is still slow; the server still has to process lots of requests.
    Proposal 2: Consistently issue an HTTP request to check the last-modified time of all data groups. 1. Client fires an HTTP request. 2. Server returns the last-modified time for every data group. 3. Client compares its local last-cache time with the result. 4. If a data group's last-cache time is older than the server's last-modified time, the client requests that data group again, and only that group. Good: only fetches what is out of date; fewer requests for the server. Bad: every web page requires an HTTP request.
    Proposal 3: Tell clients when new data is available (push). 1. The server notices a change to a data group. 2. It notifies clients of the change. 3. It helps the clients fetch the data again. 4. The client then resets its local memcache once the data is parsed. Good: lets the cache act and behave like a true cache. Bad: encourages race conditions.
    My preference is proposal 3, and something like Gearman could be ideal: where there is a change, the Gearman server sends the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy.)
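
    A rough sketch of Proposal 2 in Python, assuming a hypothetical /versions endpoint that returns a JSON map of data-group name to last-modified timestamp; the endpoint, URLs, and group names are illustrative, not part of the original setup.

        import json
        import time
        import urllib.request

        BASE = "http://server.example"   # hypothetical server
        local_cache = {}                 # group -> (cached_at, data)

        def fetch_group(group):
            with urllib.request.urlopen(f"{BASE}/data/{group}") as r:
                return json.loads(r.read())

        def get_data(groups):
            # One cheap request for every group's last-modified time.
            with urllib.request.urlopen(f"{BASE}/versions") as r:
                versions = json.loads(r.read())
            result = {}
            for g in groups:
                cached_at, data = local_cache.get(g, (0, None))
                if data is None or versions.get(g, 0) > cached_at:
                    data = fetch_group(g)               # refetch only the stale group
                    local_cache[g] = (time.time(), data)
                result[g] = data
            return result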

    Read the article

  • webserver horribly slow, sometimes incredibly fast

    - by dhanke
    i am running a small community ( 6000+ Members ) on a non-virtual 64-bit ubuntu 11.04 system. I am not a Linux-pro, not even advanced, i just tried to setup a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files do look horrible bad. Also, i might use the wrong vocabulary, so in doubt, please ask. Having a current all-time record of 520 registered users (board-accounts, no system-users) online at same time, average server-load is about 2.0 - 5.0. Meantime (~250 users) average server load value is at about 0.4 - 0.8, sometimes, on some expensive searches a bit higher. everything fine. From time to time however, the load increases up to 120 (120.0, not 12.0 ;) ). In this time, its hard to even connect via SSH, but when i reach the server, and use top/htop/iotop to see whats happening, i cannot identify any process causing high CPU load. iotop tells me about a current reading/writing speed of about approx. 70kb/s, which is quite equal to power-off i think. Memory-Usage is max. at ~ 12GB of 16GB, so swap remains empty. now the odd (at least for me:) waiting some minutes ( since i always get a bit into a panic when this happens, it feels like 5 minutes, but i suppose its more like 20-30 minutes) and the server is back to normal. everything continues as normal. another odd fact: when i run hdparm -tT /dev/sda, i get answer like: /dev/sda: Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec when i run the same command while the server is "frozen", the answer is like /dev/sda: <- takes about 5 minutes until this line appears Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec <- 5 more minutes Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes so the values are the same, but the quoted time is completely wrong. using time command as prefix also tells me that ~ 15 minutes were used. I searched in dmesg, /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me that: Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds. Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 /proc/sys/kernel/hung_task_timeout_secs" disables this message. multiple times. now that message does tell me that php5-fpm task was blocked or did block ? - but not if that is the cause or just one of the results of that "freeze". Anyone? to cut the long story short, i dont know where even to start analyzing. So if you can give me any advice by looking at following specs and configs, or ask me to provide more information, i`d be glad. Specs: 6 Core AMD Phenom(tm) II X6 1055T Processor * 16 Gigabyte Ram 2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3 via SoftwareRaid (i suppose) Services: (due to service --status-all, those with [ + ]) nginx Webserver 1.0.14 mySQL 5.1.63 Server Ruby on Rails 2.3.11 ( passenger-nginx-module ) php5-fpm 5.3.6-13ubuntu3.7 SSH ido2db Further services: default crontab + nightly backup. syslog-ng Website consists of 2 subdomains, forum. and www. where forum is a phpBB3.x PHP-Board, and www a Ruby on Rails 2.3.11 application (portal). Mini-Note: sometimes i notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same Database, but the portal is using it read-only. 
The Webserver is nginx, using phusion passenger module to communicate with the ruby-application. Also, for the forum it communicates with php5-fpm via socket: relevant nginx configuration parts ( with comments/questions starting by ; ) ; in case of freeze due to too high Filesystem activity, maybe adding a limit? #worker_rlimit_nofile 50000; user www-data; ; 6 cores, so i read 6 fits. maybe already wrong? worker_processes 6; pid /var/run/nginx.pid; events { worker_connections 1024; } http { passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11; passenger_ruby /usr/bin/ruby1.8; ; the forum once featured a chat, which was working w/o websockets. ; so it was a hell of pull requests (deactivated now, freeze still happening) keepalive_timeout 65; keepalive_requests 50; gzip on; server { listen 80; server_name www.domain.tld; root /var/www/domain/rails/public; passenger_enabled on; } server { listen 80; server_name forum.domain.tld; location / { root /var/www/domain/forum; index index.php; } ; satic stuff to be handled by nginx location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /var/www/domain/forum/; } ; now the php magic, note the "backend"-fcgi_pass location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; } location ~ /\.ht { deny all; } } ;the php5-fpm socket. i read that /dev/shm/ whould be the fastes place for this. bad idea in general? upstream backend { server unix:/dev/shm/phpfpm; } ... } php5-fpm settings (i changed this values due to php5-fpm error log messages higher and higher.. (freeze-problem was there before as well)* listen = /dev/shm/phpfpm user = www-data group = www-data pm = dynamic ; holy, 4000! well, shinking this value to earth-level gave me ; 100s of 502 bad gateway commands. this values were quite stable. ; since there are only max 520 users online i dont get it, why i would need ; as many children as configured here. due to keep-alive maybe? ; asking questions is easier for me since restarting server will make ; my community-members angry ;) pm.max_children = 4000 pm.start_servers = 100 pm.min_spare_servers = 50 pm.max_spare_servers = 150 pm.max_requests = 10 pm.status_path = /status ping.path = /ping ping.response = pong slowlog = log/$pool.log.slow ;should i use rlimit? ;rlimit_files = 1024 chdir = / mysql/my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP ; high number, but less gives some phpBB errors. max_connections = 450 table_cache = 512 ; i read twice the cpu cores, bad? 
thread_concurrency = 12 join_buffer_size = 2084K concurrent_insert = 3 query_cache_limit = 64M query_cache_size = 512M query_cache_type = 1 log_error = /var/log/mysql/error.log log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 expire_logs_days = 10 max_binlog_size = 100M low_priority_updates=1 [mysqldump] quick quote-names max_allowed_packet = 16M [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/ I used smartctl already, hdds seem to be fine. /proc/mdstatus quotes: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sda3[1] 1459264192 blocks [2/1] [_U] md1 : active raid1 sda1[0] 3911680 blocks [2/1] [U_] unused devices: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 127727 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 127727 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I quote some questions in my configuration files, these are not (intentional) directly problem-related, but would be nice for me to know wether they are indeed questionable or done right. One additional Fact: my MYSQL-database is at 12GB size. i dont know if that does matter, but mytop sometimes shows me 4-5 seconds long insert queries, some are 20-30 seconds long. Its just a feeling that i am unable to prove (because i dont know how), but when i disable the database, the freeze seems not to happen. Example: i created a dummy rails application to see the development log. the app made some sql-queries, reads and inserts. the log quite often was like: DbTest Load (0.3ms) SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1 SQL (0.1ms) BEGIN DbTest Update (0.3ms) UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722 - now the log stands still for 5-60 seconds. SQL (49.1ms) COMMIT - SQL-Update time in the log does not include freeze time Rendering test/index Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test] Bad part is: this mini-freeze here only happens from time to time as well. note: meanwhile i cannot even upload files via scp. I currently feel like running form bad to worse and back by googling for my server-problem due to immense lack of knowledge regarding server configurations. It still makes me wonder, why those problems even appear, since 250 users a time is not such a high amount, right? So my questions: whats wrong and how to fix? ;) or: what information can i provide to make the situation more clear? can you point at some critical bad configuration-line which i should consider to catch up in the documentation? are there any tools i can run to see some possible bottlenecks? any further advice? (next to: "pay someone who knows what he does" - its a private project, server costs enough already. :)) Thanks for your time and help. Best Regards, Daniel P.S.: i renamed the configfiles to domain.tld since i dont want to have any % more load to the server until its fixed. might be a exaggeratedly thought.. P.P.S: if i asked a complete duplicate question, sorry. my search results seemed to be quite specific in their own way.
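
    To the question about tools for spotting the bottleneck: a few standard diagnostics, captured while a freeze is actually happening, usually narrow things down. All of these are stock tools (iostat comes from the sysstat package); the commands below are only a suggested starting point, not a confirmed diagnosis.

        vmstat 1 30                          # run queue, swap activity, I/O wait
        iostat -x 1 30                       # per-device utilisation and await times
        mysqladmin -u root -p processlist    # long-running or locked queries
        dmesg | tail -n 50                   # recent kernel messages (I/O errors, OOM)
        cat /proc/mdstat                     # software RAID state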

    Read the article

  • JSON parsing using SOAP request and response

    - by hardik
    hello all in my project i want to use JASON parsing my sample soap request and response is here below please guide me how can i do that // soap request // <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:ns9093="urn:outmarket"><SOAP-ENV:Body><ns9093:doStartup xmlns:ns9093="urn:outmarket"><username xsi:type="xsd:string">guest</username><password xsi:type="xsd:string">guest</password></ns9093:doStartup></SOAP-ENV:Body></SOAP-ENV:Envelope> // soap response // <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"><SOAP-ENV:Body><ns1:doStartupResponse xmlns:ns1="urn:outmarket"><return xsi:type="xsd:xml"><isvaliduser>1</isvaliduser><userid>19</userid><cansubmitphoto>0</cansubmitphoto><cansubmitcomment>0</cansubmitcomment><cansubmitrating>0</cansubmitrating><cansubmitmarket>0</cansubmitmarket><cansubmitstallholder>0</cansubmitstallholder><databaseid>1</databaseid><markets><markettype>1</markettype><marketid>3</marketid><marketname><![CDATA[Bairnsdale Farmers Market]]></marketname><ratingname><![CDATA[Market in general]]></ratingname><good>0</good><neutral>0</neutral><bad>0</bad></markets><markets><markettype>0</markettype><marketid>3</marketid><marketname><![CDATA[Bairnsdale Farmers Market]]></marketname><ratingname><![CDATA[Market in general]]></ratingname><good>25</good><neutral>0</neutral><bad>18</bad></markets><markets><markettype>0</markettype><marketid>5</marketid><marketname><![CDATA[Bendigo Farmers' Market]]></marketname><ratingname>`

    Read the article

  • Replace apostrophe in json string with empty string

    - by user572844
    Hi, I have problem with deserialization of json string, because string is bad format. For example json object consist string property statusMessage with value "Hello "dog" ". The correct format should be "Hello \" dog \" " . I would like remove apostrophes from this property. Something Like this. "Hello "dog" ". - "Hello dog ". Here is it original json string which I work. "{\"jancl\":{\"idUser\":18438201,\"nick\":\"JANCl\",\"photo\":\"1\",\"sex\":1,\"photoAlbums\":1,\"videoAlbums\":0,\"sefNick\":\"jancl\",\"profilPercent\":75,\"emphasis\":false,\"age\":\"-\",\"isBlocked\":false,\"PHOTO\":{\"normal\":\"http://u.aimg.sk/fotky/1843/82/n_18438201.jpg?v=1\",\"medium\":\"http://u.aimg.sk/fotky/1843/82/m_18438201.jpg?v=1\",\"24x24\":\"http://u.aimg.sk/fotky/1843/82/s_18438201.jpg?v=1\"},\"PLUS\":{\"active\":false,\"activeTo\":\"0000-00-00\"},\"LOCATION\":{\"idRegion\":\"6\",\"regionName\":\"Trenciansky kraj\",\"idCity\":\"138\",\"cityName\":\"Trencianske Teplice\"},\"STATUS\":{\"isLoged\":true,\"isChating\":false,\"idChat\":0,\"roomName\":\"\",\"lastLogin\":1294925369},\"PROJECT_STATUS\":{\"photoAlbums\":1,\"photoAlbumsFavs\":0,\"videoAlbums\":0,\"videoAlbumsFavs\":0,\"videoAlbumsExts\":0,\"blogPosts\":0,\"emailNew\":0,\"postaNew\":0,\"clubInvitations\":0,\"dashboardItems\":1},\"STATUS_MESSAGE\":{\"statusMessage\":\"\"Status\"\",\"addTime\":\"1294872330\"},\"isFriend\":false,\"isIamFriend\":false}}" Problem is here, json string consist this object: "STATUS_MESSAGE": {"statusMessage":" "some "bad" value" ", "addTime" :"1294872330"} Condition of string which I want modified: string start with "statusMessage":" string can has any *lenght from 0 -N * string end with ", "addTime So I try write pattern for string which start with "statusMessage":", has any lenght and is ended with ", "addTime. Here is it: const string pattern = " \" statusMessage \" : \" .*? \",\"addTime\" "; var regex = new Regex(pattern, RegexOptions.IgnoreCase); //here i would replace " with empty string string result = regex.Replace(jsonString, match => ???); But I think pattern is wrong, also I don’t know how replace apostrophe with empty string (remove apostrophne). My goal is : "statusMessage":" "some "bad" value" to "statusMessage":" "some bad value" Thank for advice
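
    A hedged C# sketch of the replacement being attempted above (the pattern is a tightened version of the one in the question; the helper name is made up): capture the value between "statusMessage":" and ","addTime with a lazy group, then rebuild that fragment with the stray inner quotes removed.

        using System.Text.RegularExpressions;

        static string StripInnerQuotes(string jsonString)
        {
            // Lazily capture everything between "statusMessage":" and ","addTime,
            // then put the fragment back together without the unescaped quotes.
            const string pattern = "\"statusMessage\":\"(.*?)\",\"addTime\"";
            return Regex.Replace(jsonString, pattern, m =>
                "\"statusMessage\":\"" + m.Groups[1].Value.Replace("\"", "") + "\",\"addTime\"");
        }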

    Read the article

  • How do I protect the trunk from hapless newbies?

    - by Michael Haren
    A coworker relayed the following problem, let's say it's fictional to protect the guilty: A team of 5-10 works on a project which is issue-driven. That is, the typical flow goes like this: a chunk of work (bug, enhancement, etc.) is created as an issue in the issue tracker The issue is assigned to a developer The developer resolves the issue and commits their code changes to the trunk At release time, the frozen, and heavily tested trunk or release branch or whatever is built in release mode and released The problem he's having is that a couple newbies made several bad commits that weren't caught due to an unfortunate chain of events. This was followed by a bad release with a rollback or flurry of hot fixes. One idea we're toying with: Revoke commit access to the trunk for newbies and make them develop on a per-developer branch (we're using SVN): Good: newbies are isolated and can't hurt others Good: committers merge newbie branches with the trunk frequently Good: this enforces rigid code reviews Bad: this is burdensome on the committers (but there's probably no way around it since the code needs reviewed!) Bad: it might make traceability of trunk changes a little tougher since the reviewer would be doing the commit--not too sure on this. Update: Thank you, everyone, for your valuable input. I have concluded that this is far less a code/coder problem than I first presented. The root of the issue is that the release procedure failed to capture and test some poor quality changes to the trunk. Plugging that hole is most important. Relying on the false assumption that code in the trunk is "good" is not the solution. Once that hole--testing--is plugged, mistakes by everyone--newbie or senior--will be caught properly and dealt with accordingly. Next, a greater emphasis on code reviews and mentorship (probably driven by some systematic changes to encourage it) will go a long way toward improving code quality. With those two fixes in place, I don't think something as rigid or draconian as what I proposed above is necessary. Thanks!
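
    For reference only (the update above concludes it may not be needed), a rough sketch of what the originally proposed per-developer-branch setup could look like in Subversion; the repository URL, branch name, and group name are all made up. Path-based authorization keeps newbie commits out of the trunk:

        # conf/authz: trunk read-only except for trusted committers,
        # each newbie gets write access to their own branch.
        [/trunk]
        * = r
        @committers = rw

        [/branches/dev-alice]
        alice = rw

    Creating the branch and later merging the reviewed work back (run the merge from a trunk working copy):

        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/branches/dev-alice -m "Branch for Alice"
        svn merge --reintegrate http://svn.example.com/repo/branches/dev-alice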

    Read the article

  • Now Customers Can Actually Locate Your Resources with URL Rewriter 2.0 RTW

    - by The Official Microsoft IIS Site
    Today, Microsoft announced the final release of IIS URL Rewriter 2.0 RTW . Now the first reason might be obvious why you would want to rewrite a URL – when you are at a cocktail party with loud music and tasty appetizers and a potential customer asks you where they can get more info on your snazzy new idea. And you proudly blurt out next to their ear over the roar of the bass, “Just go to h-t-t-p colon slash slash w-w-w dot my new idea dot com slash items dot a-s-p-x question mark cat ID equals new...(read more)
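
    For readers who have not used the module, a hedged illustration of the kind of rule the post is alluding to (the rule name, pattern, and query-string parameter are invented to echo the "items dot a-s-p-x question mark cat ID equals" example); it goes inside system.webServer in web.config and maps a friendly URL such as /items/some-category onto the real items.aspx handler.

        <rewrite>
          <rules>
            <rule name="Friendly item URL">
              <!-- /items/some-category  ->  /items.aspx?catID=some-category -->
              <match url="^items/([^/]+)$" />
              <action type="Rewrite" url="items.aspx?catID={R:1}" />
            </rule>
          </rules>
        </rewrite>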

    Read the article

  • SQLAuthority News – Pluralsight Course Review – Practices for Software Startups – Part 1 of 2

    - by pinaldave
    This is first part of the two part series of Practices for Software Startup Pluralsight Course. The course is written by Stephen Forte (Blog | Twitter). Stephen Forte is the Chief Strategy Officer of the venture backed company, Telerik, a leading vendor of developer and team productivity tools. Stephen is also a Certified Scrum Master, Certified Scrum Professional, PMP, and also speaks regularly at industry conferences around the world. He has written several books on application and database development.  Stephen is also a board member of the Scrum Alliance. Startups – Everybodies Dream Start-up companies are an important topic right now – everyone wants to start their own business.  It is also important to remember that all companies were a start up at one point – from your corner store to the giants like Microsoft and Apple.  Research proves that not every start-up succeeds, in fact, most will fail before their first year.  There are many reasons for this, and this could be due to the fact that there are many stages to a start-up company, and stumbling at any of these stages can lead to failure.  It is important to understand what makes a start-up company succeed at all its hurdles to become successful.  It is even important to define success.  For most start-ups this would mean becoming their own independently functioning company or to be bought out for a hefty profit by a larger company.  The idea of making a hefty profit by living your dream is extremely important, and you can even think of start-ups as the new craze.  That’s why studying them is so important – they are very popular, but things have changed a lot since their inception. Starting the Startups Beginning a start-up company used to be difficult, but now facilities and information is widely available, and it is much easier.  But that means it is much easier to fail, also.  Previously to start your own company, everything was planned and organized, resources were ensured and backed up before beginning; even the idea of starting your own business was a big thing.  Now anybody can do it, and the steps are simple and outlines everywhere – you can get online software and easily outsource , cloud source, or crowdsource a lot of your material.  But without the type of planning previously required, things can often go badly. New Products – New Ideas – New World There are so many fantastic new products, but they don’t reach success all the time.  I find start-up companies very interesting, and whenever I meet someone who is interested in the subject or already starting their own company, I always ask what they are doing, their plans, goals, market, etc.  I am sorry to say that in most cases, they cannot answer my questions.  It is true that many fantastic ideas fail because of bad decisions.  These bad decisions were not made intentionally, but people were simply unaware of what they should be doing.  This will always lead to failure.  But I am happy to say that all these issues can be gone because Pluralsight is now offering a course all about start-ups by Stephen Forte.  Stephen is a start up leader.  He has successfully started many companies and most are still going strong, or have gone on to even bigger and better things. Beginning Course on Startup I have always thought start-ups are a fascinating subject, and decided to take his course, but it is three hours long.  
This would be hard to fit into my busy work day all at once, so I decided to do half of his course before my daughter wakes up, and the other half after she goes to sleep.  The course is divided into six modules, so this would be easy to do.  I began the first chapter early in the morning, at 5 am.  Stephen jumped right into the middle of the subject in the very first module – designing your business plan.  The first question you will have to answer to yourself, to others, and to investors is: What is your product and when will we be able to see it?  So a very important concept is a “minimal viable product.”  This means setting goals for yourself and your product.  We all have large dreams, but your minimal viable product doesn’t have to be your final vision at the very first.  For example: Apple is a giant company, but it is still evolving.  Steve Jobs didn’t envision the iPhone 6 at the very beginning.  He had to start at the first iPhone and do his market research, and the idea evolved into the technology you see now.  So for yourself, you should decide a beginning and stop point.  Do your market research.  Determine who you want to reach, what audience you want for your product.  You can have a great idea that simply will not work in the market, do need, bottlenecks, lack of resources, or competition.  There is a lot of research that needs to be done before you even write a business plan, and Stephen covers it in the very first chapter. The Team – Unique Key to Success After jumping right into the subject in the very first module, I wondered what Stephen could have in store for me for the rest of the course.  Chapter number two is building a team.  Having a team is important regardless of what your startup is.  You can be a true visionary with endless ideas and energy, but one person can still not do everything.  It is important to decide from the very beginning if you will have cofounders, team leaders, and how many employees you’ll need.  Even more important, you’ll need to decide what kind of team you want – what personalities, skills, and type of energy you want each of your employees to bring.  Do you want to have an A+ team with a B- idea, or do you have a B- idea that needs an A+ team to sell it?  Stephen asks all the hard questions!  I was especially impressed by his insight on developing.  You have to decide if you need developers, how many, and what their skills should be. I found this insight extremely useful for everyday usage, not just for start-up companies.  I would apply this kind of information in management at any position.  An amazing team will build an amazing product – and that doesn’t matter if you’re a start-up company or a small team working for a much larger business. Customer Development – The Ultimate Obective Chapter three was about customer development. According to Stephen, there are four different steps to develop a customer base.  The first question to ask yourself is if you are envisioning a large customer base buying a few products each, or a small, dedicated base that buys a lot of your product – quantity vs. Quality.  He also discusses how to earn, retain, and get more customers.  He also says that each customer should be placed in a different role – some will be like investors, who regularly spend with you and invest their money in your business.  It is then your job to take that investment and turn it into a better product in the future.  You need to deal with their money properly – think of it is as theirs as investors, not yours as profit.  
At the end of this module I felt that only Stephen could provide this kind of insight, and then he listed all the resources he took his information from.  I have never seen a group of people so passionate about their customers. It was indeed a long day for me. In tomorrow’s part 2 we will discuss rest of the three module and also will see a quick video of the Practices for Software Startup Pluralsight Course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Website Basics - Planning Your Website

    A website begins with an idea. You probably have an idea for a site and that is the reason that you are exploring your options more. Or you may be looking to getting into Internet Marketing and need to know what is involved in getting a site up and running.

    Read the article

  • DHCPv6: Provide IPv6 information in your local network

    Even though IPv6 might not be that important within your local network it might be good to get yourself into shape, and be able to provide some details of your infrastructure automatically to your network clients. This is the second article in a series on IPv6 configuration: Configure IPv6 on your Linux system DHCPv6: Provide IPv6 information in your local network Enabling DNS for IPv6 infrastructure Accessing your web server via IPv6 Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system. IPv6 addresses for everyone (in your network) Okay, after setting up the configuration of your local system, it might be interesting to enable all your machines in your network to use IPv6. There are two options to solve this kind of requirement... Either you're busy like a bee and you go around to configure each and every system manually, or you're more the lazy and effective type of network administrator and you prefer to work with Dynamic Host Configuration Protocol (DHCP). Obviously, I'm of the second type. Enabling dynamic IPv6 address assignments can be done with a new or an existing instance of a DHCPd. In case of Ubuntu-based installation this might be isc-dhcp-server. The isc-dhcp-server allows address pooling for IP and IPv6 within the same package, you just have to run to independent daemons for each protocol version. First, check whether isc-dhcp-server is already installed and maybe running your machine like so: $ service isc-dhcp-server6 status In case, that the service is unknown, you have to install it like so: $ sudo apt-get install isc-dhcp-server Please bear in mind that there is no designated installation package for IPv6. Okay, next you have to create a separate configuration file for IPv6 address pooling and network parameters called /etc/dhcp/dhcpd6.conf. This file is not automatically provided by the package, compared to IPv4. Again, use your favourite editor and put the following lines: $ sudo nano /etc/dhcp/dhcpd6.conf authoritative;default-lease-time 14400; max-lease-time 86400;log-facility local7;subnet6 2001:db8:bad:a55::/64 {    option dhcp6.name-servers 2001:4860:4860::8888, 2001:4860:4860::8844;    option dhcp6.domain-search "ios.mu";    range6 2001:db8:bad:a55::100 2001:db8:bad:a55::199;    range6 2001:db8:bad:a55::/64 temporary;} Next, save the file and start the daemon as a foreground process to see whether it is going to listen to requests or not, like so: $ sudo /usr/sbin/dhcpd -6 -d -cf /etc/dhcp/dhcpd6.conf eth0 The parameters are explained quickly as -6 we want to run as a DHCPv6 server, -d we are sending log messages to the standard error descriptor (so you should monitor your /var/log/syslog file, too), and we explicitely want to use our newly created configuration file (-cf). You might also use the command switch -t to test the configuration file prior to running the server. In my case, I ended up with a couple of complaints by the server, especially reporting that the necessary lease file wouldn't exist. So, ensure that the lease file for your IPv6 address assignments is present: $ sudo touch /var/lib/dhcp/dhcpd6.leases$ sudo chown dhcpd:dhcpd /var/lib/dhcp/dhcpd6.leases Now, you should be good to go. Stop your foreground process and try to run the DHCPv6 server as a service on your system: $ sudo service isc-dhcp-server6 startisc-dhcp-server6 start/running, process 15883 Check your log file /var/log/syslog for any kind of problems. 
Refer to the man-pages of isc-dhcp-server and you might check out Chapter 22.6 of Peter Bieringer's IPv6 Howto. The instructions regarding DHCPv6 on the Ubuntu Wiki are not as complete as expected and it might not be as helpful as this article or Peter's HOWTO. But see for yourself. Does the client get an IPv6 address? Running a DHCPv6 server on your local network surely comes in handy but it has to work properly. The following paragraphs describe briefly how to check the IPv6 configuration of your clients, Linux - ifconfig or ip command First, you have enable IPv6 on your Linux by specifying the necessary directives in the /etc/network/interfaces file, like so: $ sudo nano /etc/network/interfaces iface eth1 inet6 dhcp Note: Your network device might be eth0 - please don't just copy my configuration lines. Then, either restart your network subsystem, or enable the device manually using the dhclient command with IPv6 switch, like so: $ sudo dhclient -6 You would either use the ifconfig or (if installed) the ip command to check the configuration of your network device like so: $ sudo ifconfig eth1eth1      Link encap:Ethernet  HWaddr 00:1d:09:5d:8d:98            inet addr:192.168.160.147  Bcast:192.168.160.255  Mask:255.255.255.0          inet6 addr: 2001:db8:bad:a55::193/64 Scope:Global          inet6 addr: fe80::21d:9ff:fe5d:8d98/64 Scope:Link          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1 Looks good, the client has an IPv6 assignment. Now, let's see whether DNS information has been provided, too. $ less /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTENnameserver 2001:4860:4860::8888nameserver 2001:4860:4860::8844nameserver 192.168.1.2nameserver 127.0.1.1search ios.mu Nicely done. Windows - netsh Per description on TechNet the netsh is defined as following: "Netsh is a command-line scripting utility that allows you to, either locally or remotely, display or modify the network configuration of a computer that is currently running. Netsh also provides a scripting feature that allows you to run a group of commands in batch mode against a specified computer. Netsh can also save a configuration script in a text file for archival purposes or to help you configure other servers." And even though TechNet states that it applies to Windows Server (only), it is also available on Windows client operating systems, like Vista, Windows 7 and Windows 8. In order to get or even set information related to IPv6 protocol, we have to switch the netsh interface context prior to our queries. Open a command prompt in Windows and run the following statements: C:\Users\joki>netshnetsh>interface ipv6netsh interface ipv6>show interfaces Select the device index from the Idx column to get more details about the IPv6 address and DNS server information (here: I'm going to use my WiFi device with device index 11), like so: netsh interface ipv6>show address 11 Okay, address information has been provided. Now, let's check the details about DNS and resolving host names: netsh interface ipv6> show dnsservers 11 Okay, that looks good already. Our Windows client has a valid IPv6 address lease with lifetime information and details about the configured DNS servers. Talking about DNS server... Your clients should be able to connect to your network servers via IPv6 using hostnames instead of IPv6 addresses. Please read on about how to enable a local named with IPv6.

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general-purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself:
    - Can be used interactively: there is some environment where programmers can enter commands one by one
    - Requires no more than one file: neither project files nor makefiles are required for running in batch mode
    - Can easily split code across multiple files: files can reference each other, or there is some support for modules
    - Has good support for data structures: supports structures like arrays, lists, and especially classes
    - Supports a wide variety of features: features like networking, serialization, XML, and database connectivity are supported by standard libraries
    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.
    Feature          C#      Python  shell scripting
    ---------------  ------  ------  ---------------
    Interactive      poor    strong  strong
    One file         poor    strong  strong
    Multiple files   strong  strong  moderate
    Data structures  strong  strong  poor
    Features         strong  strong  strong
    Is there a term that captures this idea? If not, what term should I use? Here are some candidates:
    - Scalability: already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
    - Granularity: expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
    - Smoothness: expresses the idea of low friction, but doesn't express anything about strength of data structures or features
    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than to the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • Hire the Right SEO Consultant

    Hiring the right SEO services in Orlando to run an effective search engine optimization campaign for a website is definitely a good idea if a person or company doesn't yet have a strong online presence. It may be the best idea if you, as the owner of the website, don't know the right ways to optimize the site.

    Read the article

  • Where and how to reference composite MVP components?

    - by Lea Hayes
    I am learning about the MVP (Model-View-Presenter) Passive View flavour of MVC. I intend to expose events from view interfaces rather than using the observer pattern, to remove explicit coupling with the presenter. Context: Windows Forms / client-side JavaScript. I am led to believe that the MVP (or indeed MVC in general) pattern can be applied at various levels of a user interface, ranging from the main "Window" to an embedded "Text Field". For instance, the model for the text field is probably just a string, whereas the model for the "Window" contains application-specific view state (like a person's name, which resides within the contained text field). Given a more complex scenario, a documentation viewer which contains:
    TOC navigation pane
    Document view
    Search pane
    Since each of these 4 user interface items is complex and can be reused elsewhere, it makes sense to design them using MVP. Given that each of these user interface items comprises 3 components: which component should be nested? Where? Who instantiates them?
    Idea #1 - Embed View inside View from Parent View
    public class DocumentationViewer : Form, IDocumentationViewerView
    {
        public DocumentationViewer()
        {
            ...
            // Unclear as to how model and presenter are injected...
            TocPane = new TocPaneView();
        }

        protected ITocPaneView TocPane { get; private set; }
    }
    Idea #2 - Embed Presenter inside View from Parent View
    public class DocumentationViewer : Form, IDocumentationViewerView
    {
        public DocumentationViewer()
        {
            ...
            // This doesn't seem like view logic...
            var tocPaneModel = new TocPaneModel();
            var tocPaneView = new TocPaneView();
            TocPane = new TocPanePresenter(tocPaneModel, tocPaneView);
        }

        protected TocPanePresenter TocPane { get; private set; }
    }
    Idea #3 - Embed View inside View from Parent Presenter
    public class DocumentationViewer : Form, IDocumentationViewerView
    {
        ...
        // Part of IDocumentationViewerView:
        public ITocPaneView TocPane { get; set; }
    }

    public class DocumentationViewerPresenter
    {
        public DocumentationViewerPresenter(DocumentationViewerModel model, IDocumentationViewerView view)
        {
            ...
            var tocPaneView = new TocPaneView();
            var tocPaneModel = new TocPaneModel(model.Toc);
            var tocPanePresenter = new TocPanePresenter(tocPaneModel, tocPaneView);
            view.TocPane = tocPaneView;
        }
    }
    Some better idea...
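    Not an answer from the original post, but one common variation on Idea #3 is worth sketching: let a composition root build each MVP triad and inject the already-assembled child presenter into the parent presenter, so no view ever constructs models or presenters. This is only a hedged sketch; the TocPanePresenter.View property and the wiring shown below are assumptions for illustration, not part of the question's code.
    // Sketch: the parent presenter receives the child presenter ready-made.
    public class DocumentationViewerPresenter
    {
        private readonly TocPanePresenter tocPane;

        public DocumentationViewerPresenter(DocumentationViewerModel model,
                                            IDocumentationViewerView view,
                                            TocPanePresenter tocPane) // injected, not constructed here
        {
            this.tocPane = tocPane;
            view.TocPane = tocPane.View; // assumed property exposing ITocPaneView; the parent view merely hosts it
        }
    }
    // Composition root (e.g. Program.Main), wiring everything exactly once:
    // var tocView = new TocPaneView();
    // var tocPresenter = new TocPanePresenter(new TocPaneModel(model.Toc), tocView);
    // var viewer = new DocumentationViewerPresenter(model, mainView, tocPresenter);
    The benefit of this arrangement is that neither view knows about models or presenters, and the parent presenter does not need to know how its children were built.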

    Read the article

  • Are there any open-source VGA drivers?

    - by blade19899
    Are there any open-source VGA drivers? I seem to remember that back when I was using 10.10, or an older release, people were writing open-source VGA drivers for Linux, but I can't seem to find a webpage for that project. I was wondering because I like the idea and want to install them and report bugs. Somehow I find the idea of Ubuntu with open-source VGA drivers fun, and who knows, maybe they will turn out even better and more stable than the proprietary versions.

    Read the article

  • adapting a Unity gravitational script to allow moons

    - by PartyMix
    I'm using this script: http://wiki.unity3d.com/index.php/Simple_planetary_orbits to get a solar system going in Unity, but it doesn't seem to support creating bodies that orbit other moving bodies (or I am using it incorrectly). Any idea how to modify it so that it does (or how to use it correctly)? I've been beating my head against this problem for a couple of hours, and I really don't feel like I have any idea what I'm doing. Thanks in advance.
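    For what it's worth, and without referring to the wiki script's actual code, the usual fix is to recompute the satellite's offset from its parent body's current position every frame instead of orbiting a fixed world-space point. Below is a minimal, hedged sketch; the component and field names are made up for illustration, and the orbit is a simple circle rather than a gravitational simulation.
    using UnityEngine;

    // Attach to a moon; 'center' is the (possibly moving) planet it should orbit.
    public class OrbitAroundBody : MonoBehaviour
    {
        public Transform center;         // body to orbit, which may itself be orbiting something else
        public float radius = 5f;        // orbital radius in world units
        public float angularSpeed = 1f;  // radians per second
        private float angle;

        void Update()
        {
            angle += angularSpeed * Time.deltaTime;
            // The offset is taken from the parent's *current* position each frame,
            // so the moon keeps following while the planet moves around the sun.
            Vector3 offset = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * radius;
            transform.position = center.position + offset;
        }
    }
    Chaining these components (sun -> planet -> moon) gives hierarchical orbits without touching the original script's physics.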

    Read the article

  • When multitasking a fullscreen app, can't get the mouse to lock back into the application: e.g. the SPAZ game

    - by NikhilWanpal
    When I am running a fullscreen application, for example the game SPAZ from the Humble Bundle 6, and a popup appears or I press the Super key to come back to the desktop and minimize the game, I am unable to play the game again. The game just won't lock my mouse again. I am sometimes able to get it to become fullscreen again (no idea exactly how!), but the mouse just won't get locked. Any idea how to continue the game without exiting and restarting?

    Read the article

  • Cucumber Makes Behavior-Driven Ruby on Rails Development Cool

    WDVL: "This article introduces the Cucumber framework, a tool for implementing the Behavior-Driven Development (BDD) methodology. The idea behind BDD is simple: everyone should understand the system features. Cucumber promotes this idea by enabling the features of a system to be written in the native language of the program as either specs or functional tests."

    Read the article

  • laptop crashed: why?

    - by sds
    my linux (ubuntu 12.04) laptop crashed, and I am trying to figure out why. # last sds pts/4 :0 Tue Sep 4 10:01 still logged in sds pts/3 :0 Tue Sep 4 10:00 still logged in reboot system boot 3.2.0-29-generic Tue Sep 4 09:43 - 11:23 (01:40) sds pts/8 :0 Mon Sep 3 14:23 - crash (19:19) this seems to indicate a crash at 09:42 (= 14:23+19:19). as per another question, I looked at /var/log: auth.log: Sep 4 09:17:02 t520sds CRON[32744]: pam_unix(cron:session): session closed for user root Sep 4 09:43:17 t520sds lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0) no messages file syslog: Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started. kern.log: Sep 4 09:24:19 t520sds kernel: [219104.819969] CPU1: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819971] CPU2: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819974] CPU3: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started. Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpuset Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpu I had a computation running until 9:24, but the system crashed 18 minutes later! kern.log has many pages of these: Sep 4 09:43:16 t520sds kernel: [ 0.000000] total RAM covered: 8086M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 4M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 8M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 16M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 32M num_reg: 10 lose cover RAM: -16M Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 64M num_reg: 10 lose cover RAM: -16M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1G num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 2G num_reg: 10 lose cover RAM: -1G does this mean that my RAM is bad?! 
it also says Sep 4 09:43:16 t520sds kernel: [ 2.944123] EXT4-fs (sda1): INFO: recovery required on readonly filesystem Sep 4 09:43:16 t520sds kernel: [ 2.944126] EXT4-fs (sda1): write access will be enabled during recovery Sep 4 09:43:16 t520sds kernel: [ 3.088001] firewire_core: created device fw0: GUID f0def1ff8fbd7dff, S400 Sep 4 09:43:16 t520sds kernel: [ 8.929243] EXT4-fs (sda1): orphan cleanup on readonly fs Sep 4 09:43:16 t520sds kernel: [ 8.929249] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 658984 ... Sep 4 09:43:16 t520sds kernel: [ 9.343266] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 525343 Sep 4 09:43:16 t520sds kernel: [ 9.343270] EXT4-fs (sda1): 56 orphan inodes deleted Sep 4 09:43:16 t520sds kernel: [ 9.343271] EXT4-fs (sda1): recovery complete Sep 4 09:43:16 t520sds kernel: [ 9.645799] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) does this mean my HD is bad? As per FaultyHardware, I tried smartctl -l selftest, which uncovered no errors: smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-30-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Momentus 7200.4 Device Model: ST9500420AS Serial Number: 5VJE81YK LU WWN Device Id: 5 000c50 0440defe3 Firmware Version: 0003LVM1 User Capacity: 500,107,862,016 bytes [500 GB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 4 Local Time is: Mon Sep 10 16:40:04 2012 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED See vendor-specific Attribute list for marginal Attributes. General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 0) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 109) minutes. Conveyance self-test routine recommended polling time: ( 2) minutes. SCT capabilities: (0x103b) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 117 099 034 Pre-fail Always - 162843537 3 Spin_Up_Time 0x0003 100 100 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 571 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 069 060 030 Pre-fail Always - 17210154023 9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 174362787320258 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 571 184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 1 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 061 043 045 Old_age Always In_the_past 39 (0 11 44 26) 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 84 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 20 193 Load_Cycle_Count 0x0032 099 099 000 Old_age Always - 2434 194 Temperature_Celsius 0x0022 039 057 000 Old_age Always - 39 (0 15 0 0) 195 Hardware_ECC_Recovered 0x001a 041 041 000 Old_age Always - 162843537 196 Reallocated_Event_Count 0x000f 095 095 030 Pre-fail Always - 4540 (61955, 0) 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 254 Free_Fall_Sensor 0x0032 100 100 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 4545 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Googling for the messages proved inconclusive, I can't even figure out whether the messages are routine or catastrophic. So, what do I do now?

    Read the article

  • How would I achieve diablo like 2D isometric projection?

    - by Darestium
    Good day, I am in the process of coming up with an idea for a game, and I would like it to be isometric like Diablo. The problem is I have no idea how it achieves the effect of height, like on the columns in the following screenshot: http://upload.wikimedia.org/wikipedia/en/thumb/2/20/Diabloscreen.jpg/350px-Diabloscreen.jpg Whatever the case, I'm sure it is going to be harder to achieve than a traditional isometric game, but any ideas regarding the topic would be greatly appreciated.
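    Not part of the original question, but a brief sketch may help: Diablo-style engines typically draw pre-rendered sprites back-to-front on a 2:1 isometric grid, and "height" is faked by the artwork plus a vertical offset when converting world coordinates to screen coordinates. The helper below is purely illustrative; the class name, tile sizes, and the simple height scale are assumptions, not how Diablo itself is implemented.
    // Classic 2:1 isometric projection with a height (z) axis.
    public static class IsoProjection
    {
        public const float TileWidth = 64f;   // assumed sprite footprint width in pixels
        public const float TileHeight = 32f;  // assumed sprite footprint height in pixels

        public static void WorldToScreen(float x, float y, float z,
                                         out float screenX, out float screenY)
        {
            screenX = (x - y) * (TileWidth / 2f);
            // Taller objects such as columns simply extend upward from their base tile;
            // z shifts the draw position up the screen.
            screenY = (x + y) * (TileHeight / 2f) - z;
        }
    }
    Sorting the draw order by (x + y) then makes tall sprites occlude whatever stands behind them, which is most of the "3D" illusion in those games.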

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >