Search Results

Search found 28439 results on 1138 pages for '10 1 3 4 1'.


  • SQL Contest – Win USD 300 Worth Gift – Cartoon Contest is Back

    - by pinaldave
    There are two excellent contests, and there is plenty to win as the year ends. 1) Win USD 25 Amazon Gift Cards (10 Units). This is very simple: you just have to download SQL Server DB Optimizer. That's it! There are only two conditions: You must have a valid email address, as the USD 25 Amazon Gift Card will be sent to that address. Download DB Optimizer between today and Dec 8, 2012. Link to Download DB Optimizer. One winner will be notified each day for the next 10 days about winning a USD 25 Amazon Gift Card. 2) Win a Star Wars R2-D2 Inflatable R/C. This is the coolest thing to win. I personally want one, but as I am running the contest, I can't participate. You get this cool remote-controlled device - you just have to answer the following cartoon contest. Read the complete story and think about what answer the smart employee would give. There are only two conditions: Leave your answer in the comment area of this blog post (every comment will be hidden till Dec 8, 2012). Please leave your answer in the comment area between today and Dec 8, 2012. Remember, you can participate as many times as you want. Make sure that your answer is correct and creative. The most creative answer will be selected, and the decision of the contest owner will be final. We may have runner-up prizes, but for the moment let us try to win the R2-D2. Here is the cool video of the R2-D2. Now here is the cartoon story; please follow the story and complete the very last cartoon template. Your answer should be correct and creative; however, the ideal answer will not be longer than one or two sentences. Hint: Leave your answer in the comment area of this blog post. If you do not win the R2-D2, trust me, there is a chance you may win a surprise gift from me. The last day to participate in both contests is Dec 8, 2012. We will announce the winners in the week of December 10. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers

    - by mv
    Jyri had written a technical article on Configuring Solaris Cryptographic Framework and Sun Java System Web Server 7 on Systems With UltraSPARC T1 Processors. I tried to find out what has changed since then in T4. I have used a T4-1 SPARC system with Solaris 10; results vary slightly for Solaris 11. For Solaris 11, the T4 optimization was implemented in libsoftcrypto.so, while for Solaris 10 it is in pkcs11_softtoken_extra.so. An overview of T4 processors is here in this blog. Many thanks to Chi-Chang Lin and Julien for their help.

    1. Install Oracle iPlanet Web Server / Oracle Traffic Director and go to the instance config directory:

      # cd /opt/oracle/webserver7/https-hostname.fqdn/config

    2. List the default PKCS#11 modules:

      # ../../bin/modutil -dbdir . -list
      Listing of PKCS #11 Modules
      -----------------------------------------------------------
      1. NSS Internal PKCS #11 Module
         slots: 2 slots attached
         status: loaded
         slot: NSS Internal Cryptographic Services
         token: NSS Generic Crypto Services
         slot: NSS User Private Key and Certificate Services
         token: NSS Certificate DB
      2. Root Certs
         library name: libnssckbi.so
         slots: 1 slot attached
         status: loaded
         slot: NSS Builtin Objects
         token: Builtin Object Token
      -----------------------------------------------------------

    3. Initialize the soft token data store in the $HOME/.sunw/pkcs11_softtoken/ directory:

      # pktool setpin keystore=pkcs11
      Enter token passphrase: olderpassword
      Create new passphrase: password
      Re-enter new passphrase: password
      Passphrase changed.

    4. Offload crypto operations to the Solaris Crypto Framework on T4:

      $ ../../bin/modutil -dbdir . -nocertdb -add SCF -libfile /usr/lib/libpkcs11.so -mechanisms RSA:AES:SHA1:MD5
      Module "SCF" added to database.

    Note that -nocertdb means modutil won't try to open the NSS softoken key database; it doesn't even have to be present. The PKCS#11 library used is /usr/lib/libpkcs11.so. If the server is running in 64-bit mode, we have to use /usr/lib/64/libpkcs11.so instead. Unlike on T1 and T2, on T4 we do not have to disable mechanisms in the softtoken provider using cryptoadm.

    5. List again to check that the new SCF module was added:

      # ../../bin/modutil -dbdir . -list
      Listing of PKCS #11 Modules
      -----------------------------------------------------------
      1. NSS Internal PKCS #11 Module
         slots: 2 slots attached
         status: loaded
         slot: NSS Internal Cryptographic Services
         token: NSS Generic Crypto Services
         slot: NSS User Private Key and Certificate Services
         token: NSS Certificate DB
      2. SCF
         library name: /usr/lib/libpkcs11.so
         slots: 2 slots attached
         status: loaded
         slot: Sun Metaslot
         token: Sun Metaslot
         slot: n2rng/0 SUNW_N2_Random_Number_Generator
         token: n2rng/0 SUNW_N2_RNG
      3. Root Certs
         library name: libnssckbi.so
         slots: 1 slot attached
         status: loaded
         slot: NSS Builtin Objects
         token: Builtin Object Token
      -----------------------------------------------------------

    6. Create a certificate in "Sun Metaslot". I have used certutil, but you must use the Admin Server CLI/GUI:

      # ../../bin/certutil -S -x -n "Server-Cert" -t "CT,CT,CT" -s "CN=*.fqdn" -d . -h "Sun Metaslot"
      Enter Password or Pin for "Sun Metaslot": password

    7. Verify that the certificate was created properly in "Sun Metaslot":

      # ../../bin/certutil -L -d . -h "Sun Metaslot"
      Certificate Nickname        Trust Attributes (SSL,S/MIME,JAR/XPI)
      Enter Password or Pin for "Sun Metaslot": password
      Sun Metaslot:Server-Cert    CTu,Cu,Cu

    8. Associate this newly created certificate with the HTTP listener using the Admin CLI/GUI.
    After that, server.xml should have:

      <http-listener>
        ...
        <ssl>
          <server-cert-nickname>Sun Metaslot:Server-Cert</server-cert-nickname>
        </ssl>

    Note the "Sun Metaslot" prefix.

    9. Disable PKCS#11 bypass. To use the accelerated AES algorithm, turn off PKCS#11 bypass and configure modutil to have the AES mechanism go to the Metaslot. After you disable PKCS#11 bypass using the Admin GUI/CLI, check that server.xml has:

      <server>
        ...
        <pkcs11>
          <enabled>1</enabled>
          <allow-bypass>0</allow-bypass>
        </pkcs11>

    With PKCS#11 bypass enabled, Oracle iPlanet Web Server will only use the RSA capability of the T4, provided the certificate and key are stored in the T4 slot (Metaslot). Actually, the RSA operation is never bypassed in NSS; it is always done with PKCS#11 calls, so the bypass setting won't affect the behavior of the RSA probes at all. The only thing that matters is where the RSA key and certificate live, i.e. in which PKCS#11 token, and thus which PKCS#11 module gets called to do the work. If your certificate and key are in the NSS certificate/key DB, you will see the libsoftokn3/libfreebl libraries doing the RSA work. If they are in the Sun Metaslot, it should be the Solaris code.

    10. Start the server instance:

      # ../bin/startserv
      Oracle iPlanet Web Server 7.0.16 B09/14/2012 03:33
      Please enter the PIN for the "Sun Metaslot" token: password
      ...
      info: HTTP3072: http-listener-1: https://hostname.fqdn:80 ready to accept requests
      info: CORE3274: successful server startup

    11. Figure out which process to run the DTrace script on:

      # ps -eaf | grep webservd | grep -v dog
      webservd 18224 18223 0 13:17:25 ? 0:07 webservd -d /opt/oracle/webserver7/https-hostname.fqdn/config -r /opt/
      root     18225 18224 0 13:17:25 ? 0:00 webservd -d /opt/oracle/webserver7/https-hostname.fqdn/config -r /opt/

    (For Oracle Traffic Director, look for a process named "trafficd".) We see that the child process id is 18225.

    12. Clients for testing: you can use any browser; I used the NSS tool tstclnt for testing.

      $ cat > req.txt
      GET /index.html HTTP/1.0

    For checking both RSA and AES, I used cipher ":0035", which is TLS_RSA_WITH_AES_256_CBC_SHA:

      $ ./tstclnt -h hostname -p 80 -d . -T -f -o -v -c ":0035" < req.txt

    13. How do I make sure that the crypto accelerator is being used?

    13.1 Create a DTrace script. The following D script should be able to uncover whether T4-specific crypto routines are being called or not. It also displays stats per second.

      # cat > t4crypto.d
      #!/usr/sbin/dtrace -s
      pid$target::*rsa*:entry,
      pid$target::*yf*:entry
      {
          @ops[probemod, probefunc] = count();
      }
      tick-1sec
      {
          printa(@ops);
          trunc(@ops);
      }

    Invoke with './t4crypto.d -p <pid>'.

    13.2 Expected probes for Solaris 10: if offloading to the T4 hardware is correctly set up, the expected DTrace output would have these probes and libraries:

      library: pkcs11_softtoken_extra.so
        RSA: soft_decrypt_rsa_pkcs_decode, soft_encrypt_rsa_pkcs_encode, soft_rsa_crypt_init_common, soft_rsa_decrypt, soft_rsa_encrypt, soft_rsa_decrypt_common, soft_rsa_encrypt_common
        AES: yf_aes_instructions_present, yf_aes_expand256, yf_aes256_cbc_decrypt, yf_aes256_cbc_encrypt, yf_aes256_load_keys_for_decrypt, yf_aes256_load_keys_for_encrypt

    Note that these are the 256-bit CBC variants; the same pattern applies for 128 and 192, and likewise for ecb, ctr, cfb128...
        DES: yf_des_expand, yf_des_instructions_present, yf_des_encrypt
      library: libmd_psr.so
        MD5: yf_md5_multiblock, yf_md5_instruction_present
        SHA1: yf_sha1_instruction_present, yf_sha1_multiblock

    13.3 Sample output for cipher TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) on T4 SPARC Solaris 10 without PKCS#11 bypass:

      # ./t4crypto.d -p 18225
      pkcs11_softtoken_extra.so.1   soft_decrypt_rsa_pkcs_decode      1
      pkcs11_softtoken_extra.so.1   soft_rsa_crypt_init_common        1
      pkcs11_softtoken_extra.so.1   soft_rsa_decrypt                  1
      pkcs11_softtoken_extra.so.1   big_mp_mul_yf                     2
      pkcs11_softtoken_extra.so.1   mpm_yf_mpmul                      2
      pkcs11_softtoken_extra.so.1   mpmul_arr_yf                      2
      pkcs11_softtoken_extra.so.1   rijndael_key_setup_enc_yf         2
      pkcs11_softtoken_extra.so.1   soft_rsa_decrypt_common           2
      pkcs11_softtoken_extra.so.1   yf_aes_expand256                  2
      pkcs11_softtoken_extra.so.1   yf_aes256_cbc_decrypt             3
      pkcs11_softtoken_extra.so.1   yf_aes256_load_keys_for_decrypt   3
      pkcs11_softtoken_extra.so.1   big_mont_mul_yf                   6
      pkcs11_softtoken_extra.so.1   mm_yf_montmul                     6
      pkcs11_softtoken_extra.so.1   yf_des_instructions_present       6
      pkcs11_softtoken_extra.so.1   yf_aes256_cbc_encrypt             8
      pkcs11_softtoken_extra.so.1   yf_aes256_load_keys_for_encrypt   8
      pkcs11_softtoken_extra.so.1   yf_mpmul_present                  8
      pkcs11_softtoken_extra.so.1   yf_aes_instructions_present      13
      pkcs11_softtoken_extra.so.1   yf_des_encrypt                   18
      libmd_psr.so.1                yf_md5_multiblock                41
      libmd_psr.so.1                yf_md5_instruction_present       72
      libmd_psr.so.1                yf_sha1_instruction_present      82
      libmd_psr.so.1                yf_sha1_multiblock               82

    This indicates that both the RSA and the AES operations are done in the Solaris Crypto Framework.

    13.4 Sample output for cipher TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) on T4 SPARC Solaris 10 with PKCS#11 bypass:

      # ./t4crypto.d -p 18225
      pkcs11_softtoken_extra.so.1   soft_decrypt_rsa_pkcs_decode   1
      pkcs11_softtoken_extra.so.1   soft_rsa_crypt_init_common     1
      pkcs11_softtoken_extra.so.1   soft_rsa_decrypt               1
      pkcs11_softtoken_extra.so.1   soft_rsa_decrypt_common        1
      pkcs11_softtoken_extra.so.1   big_mp_mul_yf                  2
      pkcs11_softtoken_extra.so.1   mpm_yf_mpmul                   2
      pkcs11_softtoken_extra.so.1   mpmul_arr_yf                   2
      pkcs11_softtoken_extra.so.1   big_mont_mul_yf                6
      pkcs11_softtoken_extra.so.1   mm_yf_montmul                  6
      pkcs11_softtoken_extra.so.1   yf_mpmul_present               8

    For this cipher, when I enable PKCS#11 bypass, only the RSA probes are hit; the AES probes are not.

    13.5 ustack() for RSA operations (/ probefunc == "soft_rsa_decrypt" /) shows that libnss3.so calls the C_* functions of libpkcs11.so, which calls functions of pkcs11_softtoken_extra.so, in both cases, with and without bypass.
    When PKCS#11 bypass is disabled (allow-bypass is 0):

      pkcs11_softtoken_extra.so.1`soft_rsa_decrypt
      pkcs11_softtoken_extra.so.1`soft_rsa_decrypt_common+0x94
      pkcs11_softtoken_extra.so.1`soft_unwrapkey+0x258
      pkcs11_softtoken_extra.so.1`C_UnwrapKey+0x1ec
      libpkcs11.so.1`meta_unwrap_key+0x17c
      libpkcs11.so.1`meta_UnwrapKey+0xc4
      libpkcs11.so.1`C_UnwrapKey+0xfc
      libnss3.so`pk11_AnyUnwrapKey+0x6b8
      libnss3.so`PK11_PubUnwrapSymKey+0x8c
      libssl3.so`ssl3_HandleRSAClientKeyExchange+0x1a0
      libssl3.so`ssl3_HandleClientKeyExchange+0x154
      libssl3.so`ssl3_HandleHandshakeMessage+0x440
      libssl3.so`ssl3_HandleHandshake+0x11c
      libssl3.so`ssl3_HandleRecord+0x5e8
      libssl3.so`ssl3_GatherCompleteHandshake+0x5c
      libssl3.so`ssl_GatherRecord1stHandshake+0x30
      libssl3.so`ssl_Do1stHandshake+0xec
      libssl3.so`ssl_SecureRecv+0x1c8
      libssl3.so`ssl_Recv+0x9c
      libns-httpd40.so`__1cNDaemonSessionDrun6M_v_+0x2dc

    When PKCS#11 bypass is enabled (allow-bypass is 1):

      pkcs11_softtoken_extra.so.1`soft_rsa_decrypt
      pkcs11_softtoken_extra.so.1`soft_rsa_decrypt_common+0x94
      pkcs11_softtoken_extra.so.1`C_Decrypt+0x164
      libpkcs11.so.1`meta_do_operation+0x27c
      libpkcs11.so.1`meta_Decrypt+0x4c
      libpkcs11.so.1`C_Decrypt+0xcc
      libnss3.so`PK11_PrivDecryptPKCS1+0x1ac
      libssl3.so`ssl3_HandleRSAClientKeyExchange+0xe4
      libssl3.so`ssl3_HandleClientKeyExchange+0x154
      libssl3.so`ssl3_HandleHandshakeMessage+0x440
      libssl3.so`ssl3_HandleHandshake+0x11c
      libssl3.so`ssl3_HandleRecord+0x5e8
      libssl3.so`ssl3_GatherCompleteHandshake+0x5c
      libssl3.so`ssl_GatherRecord1stHandshake+0x30
      libssl3.so`ssl_Do1stHandshake+0xec
      libssl3.so`ssl_SecureRecv+0x1c8
      libssl3.so`ssl_Recv+0x9c
      libns-httpd40.so`__1cNDaemonSessionDrun6M_v_+0x2dc
      libnsprwrap.so`ThreadMain+0x1c
      libnspr4.so`_pt_root+0xe8

    13.6 ustack() for AES operations (/ probefunc == "yf_aes256_cbc_encrypt" /). When PKCS#11 bypass is disabled (allow-bypass is 0):

      pkcs11_softtoken_extra.so.1`yf_aes256_cbc_encrypt
      pkcs11_softtoken_extra.so.1`aes_block_process_contiguous_whole_blocks+0xb4
      pkcs11_softtoken_extra.so.1`aes_crypt_contiguous_blocks+0x1cc
      pkcs11_softtoken_extra.so.1`soft_aes_encrypt_common+0x22c
      pkcs11_softtoken_extra.so.1`C_EncryptUpdate+0x10c
      libpkcs11.so.1`meta_do_operation+0x1fc
      libpkcs11.so.1`meta_EncryptUpdate+0x4c
      libpkcs11.so.1`C_EncryptUpdate+0xcc
      libnss3.so`PK11_CipherOp+0x1a0
      libssl3.so`ssl3_CompressMACEncryptRecord+0x264
      libssl3.so`ssl3_SendRecord+0x300
      libssl3.so`ssl3_FlushHandshake+0x54
      libssl3.so`ssl3_SendFinished+0x1fc
      libssl3.so`ssl3_HandleFinished+0x314
      libssl3.so`ssl3_HandleHandshakeMessage+0x4ac
      libssl3.so`ssl3_HandleHandshake+0x11c
      libssl3.so`ssl3_HandleRecord+0x5e8
      libssl3.so`ssl3_GatherCompleteHandshake+0x5c
      libssl3.so`ssl_GatherRecord1stHandshake+0x30
      libssl3.so`ssl_Do1stHandshake+0xec

    This again shows libnss3.so calling the C_* functions of libpkcs11.so, which calls into pkcs11_softtoken_extra.so. However, when PKCS#11 bypass is enabled (allow-bypass is 1), this stack isn't called at all.

    14. List of all the probes matched by the D script, for reference:

      # ./t4crypto.d -p 18225 -l
         ID PROVIDER MODULE FUNCTION NAME
      ...
      55720 pid18225 libmd_psr.so.1 yf_md5_instruction_present entry
      55721 pid18225 libmd_psr.so.1 yf_sha256_instruction_present entry
      55722 pid18225 libmd_psr.so.1 yf_sha512_instruction_present entry
      55723 pid18225 libmd_psr.so.1 yf_sha1_instruction_present entry
      55724 pid18225 libmd_psr.so.1 yf_sha256 entry
      55725 pid18225 libmd_psr.so.1 yf_sha256_multiblock entry
      55726 pid18225 libmd_psr.so.1 yf_sha512 entry
      55727 pid18225 libmd_psr.so.1 yf_sha512_multiblock entry
      55728 pid18225 libmd_psr.so.1 yf_sha1 entry
      55729 pid18225 libmd_psr.so.1 yf_sha1_multiblock entry
      55730 pid18225 libmd_psr.so.1 yf_md5 entry
      55731 pid18225 libmd_psr.so.1 yf_md5_multiblock entry
      55732 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_instructions_present entry
      55733 pid18225 pkcs11_softtoken_extra.so.1 rijndael_key_setup_enc_yf entry
      55734 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand128 entry
      55735 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt128 entry
      55736 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt128 entry
      55737 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand192 entry
      55738 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt192 entry
      55739 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt192 entry
      55740 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand256 entry
      55741 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt256 entry
      55742 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt256 entry
      55743 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_load_keys_for_encrypt entry
      55744 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_load_keys_for_encrypt entry
      55745 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_load_keys_for_encrypt entry
      55746 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ecb_encrypt entry
      55747 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ecb_encrypt entry
      55748 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ecb_encrypt entry
      55749 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cbc_encrypt entry
      55750 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cbc_encrypt entry
      55751 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cbc_encrypt entry
      55752 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ctr_crypt entry
      55753 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ctr_crypt entry
      55754 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ctr_crypt entry
      55755 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cfb128_encrypt entry
      55756 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cfb128_encrypt entry
      55757 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cfb128_encrypt entry
      55758 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_load_keys_for_decrypt entry
      55759 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_load_keys_for_decrypt entry
      55760 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_load_keys_for_decrypt entry
      55761 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ecb_decrypt entry
      55762 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ecb_decrypt entry
      55763 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ecb_decrypt entry
      55764 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cbc_decrypt entry
      55765 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cbc_decrypt entry
      55766 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cbc_decrypt entry
      55767 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cfb128_decrypt entry
      55768 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cfb128_decrypt entry
      55769 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cfb128_decrypt entry
      55771 pid18225 pkcs11_softtoken_extra.so.1 yf_des_instructions_present entry
      55772 pid18225 pkcs11_softtoken_extra.so.1 yf_des_expand entry
      55773 pid18225 pkcs11_softtoken_extra.so.1 yf_des_encrypt entry
      55774 pid18225 pkcs11_softtoken_extra.so.1 yf_mpmul_present entry
      55775 pid18225 pkcs11_softtoken_extra.so.1 yf_montmul_present entry
      55776 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_montmul entry
      55777 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_montsqr entry
      55778 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_restore_func entry
      55779 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_ret_from_mont_func entry
      55780 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_execute_slp entry
      55781 pid18225 pkcs11_softtoken_extra.so.1 big_modexp_ncp_yf entry
      55782 pid18225 pkcs11_softtoken_extra.so.1 big_mont_mul_yf entry
      55783 pid18225 pkcs11_softtoken_extra.so.1 mpmul_arr_yf entry
      55784 pid18225 pkcs11_softtoken_extra.so.1 big_mp_mul_yf entry
      55785 pid18225 pkcs11_softtoken_extra.so.1 mpm_yf_mpmul entry
      55786 pid18225 libns-httpd40.so nsapi_rsa_set_priv_fn entry
      ...
      55795 pid18225 libnss3.so prepare_rsa_priv_key_export_for_asn1 entry
      55796 pid18225 libresolv.so.2 sunw_dst_rsaref_init entry
      55797 pid18225 libnssutil3.so NSS_Get_SEC_UniversalStringTemplate entry
      ...
      55813 pid18225 libsoftokn3.so prepare_low_rsa_priv_key_for_asn1 entry
      55814 pid18225 libsoftokn3.so rsa_FormatOneBlock entry
      55815 pid18225 libsoftokn3.so rsa_FormatBlock entry
      55816 pid18225 libnssdbm3.so lg_prepare_low_rsa_priv_key_for_asn1 entry
      55817 pid18225 libfreebl_32fpu_3.so rsa_build_from_primes entry
      55818 pid18225 libfreebl_32fpu_3.so rsa_is_prime entry
      55819 pid18225 libfreebl_32fpu_3.so rsa_get_primes_from_exponents entry
      55820 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpNoCRT entry
      55821 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpCRTNoCheck entry
      55822 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpCRTCheckedPubKey entry
      55823 pid18225 pkcs11_kernel.so.1 key_gen_rsa_by_value entry
      55824 pid18225 pkcs11_kernel.so.1 get_rsa_private_key entry
      55825 pid18225 pkcs11_kernel.so.1 get_rsa_public_key entry
      55826 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_encrypt entry
      55827 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_decrypt entry
      55828 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_crypt_init_common entry
      55829 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_encrypt_common entry
      55830 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_decrypt_common entry
      55831 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_sign_verify_init_common entry
      55832 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_sign_common entry
      55833 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_verify_common entry
      55834 pid18225 pkcs11_softtoken_extra.so.1 generate_rsa_key entry
      55835 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_genkey_pair entry
      55836 pid18225 pkcs11_softtoken_extra.so.1 get_rsa_sha1_prefix entry
      55837 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_digest_sign_common entry
      55838 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_digest_verify_common entry
      55839 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_verify_recover entry
      55840 pid18225 pkcs11_softtoken_extra.so.1 rsa_pri_to_asn1 entry
      55841 pid18225 pkcs11_softtoken_extra.so.1 asn1_to_rsa_pri entry
      55842 pid18225 pkcs11_softtoken_extra.so.1 soft_encrypt_rsa_pkcs_encode entry
      55843 pid18225 pkcs11_softtoken_extra.so.1 soft_decrypt_rsa_pkcs_decode entry
      55844 pid18225 pkcs11_softtoken_extra.so.1 soft_sign_rsa_pkcs_encode entry
      55845 pid18225 pkcs11_softtoken_extra.so.1 soft_verify_rsa_pkcs_decode entry
      55770 profile tick-1sec
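    As a footnote to the DTrace checks above: a quick way to confirm that the processor advertises the T4 crypto instructions at all is isainfo. A minimal sketch, with output typical of a T4 (the exact instruction list varies by system and Solaris update):

      # isainfo -v
      64-bit sparcv9 applications
              crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia
              kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc

    If aes, des, sha1 and md5 do not appear here, no amount of PKCS#11 configuration will make the hardware paths show up in the DTrace output above.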

    Read the article

  • When Less is More

    - by aditya.agarkar
    How do you reconcile the fact that, while overall warehouse volume is down, you still need more workers in the warehouse to ship all the orders? A WMS customer recently pointed out this seemingly perplexing fact at a customer conference. So what is going on? Didn't we tell you before that for a warehouse the customer is really the "king"? In this case customers are merely responding to low overall demand and uncertainty. They do not want to hold inventory, and one of the ways to do that is by decreasing the order size and ordering more frequently. The overall impact on the warehouse? Two words: "More work!!" This is not all. Smaller order sizes also mean challenges from a transportation perspective, including a rise in costlier parcel or LTL shipments instead of cheaper TL shipments. Here is a hypothetical scenario where a customer reduces the order size by 10% and increases the order frequency by 10%. As you can see in the following table, the overall volume declines by 1% but the warehouse has to ship roughly 10% more lines:

      Order Frequency (Line Count)   Order Size (Units)   Total Volume   Change (%)
      100                            100                  10,000         -
      110                            90                   9,900          -1%

    If you want to see how "Less is More" in graphical terms, this is how it appears: even though the volume is down (110 x 90 = 9,900 units, a 1% drop), there is going to be more work in the warehouse in terms of the number of lines shipped (110 lines, a 10% increase). The operators need to pick more discrete orders, pack them into more shipping containers, and ship more deliveries. What do you do differently if you are facing this situation? Here are some obvious steps to take:

    Uno: Change your pick methods. If you are used to doing order picks, that needs to go out the door. You need to evaluate batch picking and grouping techniques. Go for cluster picking, go for zone picking, pick and pass... anything that improves your picker productivity. More than anything, cluster picking works like a charm and, above all, it's simple and very effective.

    Dos: Are you minimizing "touch" points in your pick process? Consider doing one-step pick, pack and confirm, i.e. pick and pack stuff directly into shipping cartons. Done correctly, the container will not require any more "touch" points all the way to trailer loading. Use cartonization!

    Tres: Are the items being picked from an optimized pick face? Are the items slotted correctly? This needs to be looked into. Consider automated "pull" or "push" replenishment into your pick face, and also make sure that high-demand items occupy the golden zones.

    Cuatro: Are you tracking labor productivity? If not, there needs to be a concerted push for having labor standards in place.

    Hope you found these ideas useful.

    Read the article

  • Oracle Linux Events in December

    - by Zeynep Koch
    December will be a busy month for the Oracle Linux team. We will be showcasing Oracle Linux and Oracle VM at conferences all around the world. Here's a list of December events featuring Oracle Linux:

    Gartner Data Center – North America: Oracle will have a session and booth. Register today. Dec 3-6, Las Vegas, USA

    Oracle OpenWorld Latin America: December 4-6, Sao Paulo, Brazil

    HP Discover – EMEA: December 4-6, Messe Frankfurt, Germany (Oracle Platinum sponsor)

    Superior Solutions and Cost Savings with Oracle Linux and Oracle VM: See location details and register. Dec 4 Kansas City and Tampa; Dec 6 Milwaukee and Miami; Dec 11 Washington, DC; Dec 13 Raleigh

    Visit our booth and you can grab an Oracle Linux/Oracle VM DVD kit and talk to Oracle Linux experts.

    Read the article

  • HTML5 and CSS3 Editing in Windows Live Writer

    - by Rick Strahl
    Windows Live Writer is a wonderful tool for editing blog posts and getting them posted to your blog. What makes it nice is that it has a small set of useful features, plus a simple plug-in model that has spawned many useful add-ins. A small tool with a reasonably decent plug-in model to extend equals a great solution to a simple problem. If you're running Windows, have a blog and aren't using Live Writer, you're probably doing it wrong...

    One of Live Writer's nice features is that it can download your blog's CSS for preview and edit displays. It lets you edit your content inside the context of that CSS using the WYSIWYG editor, so your content actually looks very close to what you'll see on your blog while you're editing your post. Unfortunately, Live Writer renders the HTML content in the Web Browser Control's default IE 7 rendering mode. Yeah, you read that right: IE 7 is the default for the Web Browser control, and most applications that use it are stuck in this mode unless they explicitly override the default. The Web Browser control does not use the version of Internet Explorer installed on the system (IE 10 on my Win8 machine) but uses IE 7 mode for 'compatibility' with old applications. If you are importing your blog's CSS, that may suck if you're using rich HTML 5 and CSS 3 formatting.

    Hack the Registry to get Live Writer to render using IE 9 or 10

    In order to get Live Writer (or any other application that uses the Web Browser Control, for that matter) to render with a more recent engine, you can apply a registry hack that overrides the Web Browser Control engine usage for a specific application. I wrote about this in detail in a previous blog post a couple of years back. Here's how you can set up Windows Live Writer to render your CSS 3 by making a change in your registry. The above is for setup on a 64-bit machine, where I configure Live Writer, which is a 32-bit application, to use IE 10 rendering. The keys set are as follows:

    32-bit configuration on a 64-bit machine:
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
    Key: WindowsLiveWriter.exe
    Value: 9000 or 10000 (IE 9 or 10, respectively) (DWORD value)

    On a 32-bit only machine:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
    Key: WindowsLiveWriter.exe
    Value: 9000 or 10000 (IE 9 or 10, respectively) (DWORD value)

    Use decimal values of 9000, 10000 or 11000 to specify specific versions of Internet Explorer. This is a minor tweak, but it's nice to actually see my blog posts now with the proper CSS formatting intact. Notice the rounded borders and shadow on the code blocks, as well as the overflow-x and scrollbars that show up. In this particular case I can see what the code blocks actually look like at a specific resolution - much better than the old plain view, which just chopped things off at the end of the window frame. There are a few other elements that now show properly in the editor as well, including block quotes and note boxes that I occasionally use. It's minor stuff, but it makes the editing experience better yet and closer to the final result, so there are fewer republish operations than I previously had. Sweet! Note that this approach of putting an IE version override into the registry works with most applications that use the Web Browser control.
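    For reference, a minimal sketch of a .reg file for the 64-bit case above, assuming IE 10 (decimal 10000 is hex 2710):

      Windows Registry Editor Version 5.00

      ; Force Live Writer's Web Browser control into IE 10 rendering mode
      [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
      "WindowsLiveWriter.exe"=dword:00002710

    Double-click the file to import it, or change the dword to 00002328 (9000 decimal) for IE 9.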
    If you are using the Web Browser control in your own applications, it's a good idea to switch the browser to a more recent version so you can take advantage of HTML 5 and CSS 3 in your browser-displayed content, by automatically setting this flag in the registry or as part of the application's startup routine if no dedicated setup tool is used. At the very least you might set it to 9000 (IE 9), which supports most of the basic CSS 3 features and is a decent baseline that works for most Windows 7 and 8 machines. On machines running a version older than IE 9, the browser will fall back to IE 7 rendering and look bad, but at least more recent browsers will see an improved experience. I'm surprised that there aren't more vendors and third-party apps using this feature. You can see in my first screenshot that there are only very few entries in the registry key group on my machine - any other apps that use the Web Browser control are using IE 7. Go figure. Certainly Windows Live Writer should be writing this key into the registry automatically as part of installation to support this functionality out of the box, but alas, since it does not, this registry hack lets you get your way anyway...

    Resources:
    .reg files to register Live Writer Browser Emulation (set for IE 9)
    Specifying Internet Explorer Version for Applications
    SnagIt LiveWriter Plug-in
    Download Windows Live Writer
    Download Windows Live Writer with Chocolatey

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Live Writer  Windows

    Read the article

  • Boot failure : No DEFAULT or UI configuration directive found!

    - by Sven
    Hello... I've tried to install Ubuntu on a VMware virtual machine, but it never gets past the bootloader:

      ISOLINUX 4.01 debian-20100714 ETCD Copyright (C) 1994-2010 H. Peter Anvin et al
      No DEFAULT or UI configuration directive found!
      boot: _

    I've also tried to burn this .iso image to a CD and go from there, but I got the same problem. Some more details:

    VMware Server 2.0.2 Build 203138 on Windows 2008 R2
    ubuntu-10.10-desktop-i386.iso image used (downloaded yesterday evening)

    I have no clue why this is not booting! Does somebody have any ideas? Thanks, Sven

    Read the article

  • How can I configure the Aiptek T-6000U graphics tablet?

    - by mejpark
    I followed the AiptekTablet instructions on the Ubuntu Wiki to configure 11.04 for use with my graphics tablet. I installed the xserver-xorg-input-aiptek package and created two files with the options detailed on the Wiki page above:

      $ cat /lib/udev/rules.d/69-xserver-xorg-input-aiptek.rules
      ACTION!="add|change", GOTO="xorg_aiptek_end"
      KERNEL!="event[0-9]*", GOTO="xorg_aiptek_end"
      ATTRS{idVendor}=="08ca", ENV{x11_driver}="aiptek", SYMLINK+="input/aiptektablet"
      LABEL="xorg_aiptek_end"

      $ cat /usr/share/X11/xorg.conf.d/50-aiptek.conf
      Section "InputClass"
          Identifier "pen"
          MatchProduct "Aiptek|AIPTEK|aiptek"
          MatchDevicePath "/dev/input/event*"
          Driver "aiptek"
          Option "USB" "on"
          Option "Type" "stylus"
          Option "Mode" "absolute"
          Option "zMin" "0"
          Option "zMax" "511"
      EndSection

    The 50-aiptek.conf file was originally called 10-aiptek.conf as in the Wiki, but an Aiptek tablet installation help thread on the Ubuntu Forums suggested changing 10 to 50. Any ideas? Thank you.

    Read the article

  • CodePlex Daily Summary for Sunday, November 11, 2012

    Popular Releases

    ZXMAK2: Version 2.7.2.0: show extended rzx error info; fix reset lag for PROFI ULA 5.xx; fix reset behavior; fix PROFI ULA timings (thanks to solegstar); fix #FF port for PROFI ULA; add ATM710 memory module; add new predefined machine configs: ATM Turbo 2, PROFI 3.XX

    ???????: Monitor 2012-11-11: This is the first release.

    VidCoder: 1.4.5 Beta: Removed the old Advanced user interface and moved the x264 preset/profile/tune there instead. The functionality is still available through editing the options string. Added the ability to specify the H.264 level. Added the ability to choose VidCoder's interface language. If you are interested in translating, we can get VidCoder in your language! Updated WPF text rendering to use the better Display mode. Updated HandBrake core to SVN 5045. Removed logic that forced the .m4v extension in certain ...

    ImageGlass: Version 1.5: http://i1214.photobucket.com/albums/cc483/phapsuxeko/ImageGlass/1.png v1.5.4401.3015 Thumbnail bar: increase loading speed; thumbnail image with ratio; support personal customization: mouse up, mouse down, mouse hover, selected item...; scroll to show all items. Image viewer: zoom by scroll or selected rectangle; speed up loading; zoom to cursor point; new background design and customization; and others... v1.5.4430.483 Thumbnail bar: auto move scroll bar to selected image; show/hi...

    Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 002: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes: a fix for the Netflix example from Chapter 6 that was missing a service reference; a fix for the ImageHelper issue (images were not being saved), which was due to the buffer being inadequate and required streaming the writeable bitmap to a buffer first before encoding and saving.

    myCollections: Version 2.3.2.0: New in this version: added the TheGamesDB.net API for Games and NDS; added support for Windows Media Center, myMovies, XBMC, Dune HD, Mede8er and WD HDTV; added fast search options; added order by Artist/Album for music. You can now create covers and backgrounds for games, and update your ID3 tags with the info in myCollections. Fixed several providers. Performance improvements. New splash ...

    Draw: Draw 1.0: Drawing Pad

    Player Framework by Microsoft: Player Framework for Windows 8 (v1.0): IMPORTANT: list of breaking changes from preview 7. Ability to move the control panel or individual elements outside the media player (more info...). New Entertainment app theme for out-of-the-box support of the Windows 8 Entertainment app guidelines (more info...). VSIX reference names shortened, allowing you to see the plugin name in the "Add Reference" dialog without resizing. FreeWheel SmartXML now supports the new "Standard" event callback type. Other minor misc fixes and improvements. ADDITIONAL DOWNLOADS: Smo...

    WebSearch.Net: WebSearch.Net 3.1: WebSearch.Net is an open-source research platform that provides uniform data source access, data modeling, feature calculation, data mining, etc. It facilitates the experiments of web search researchers due to its high flexibility and extensibility. The platform can be used or extended by any language compatible with the .Net 2 framework, from C# (recommended) and VB.Net to C++ and Java. Thanks to the large coverage of knowledge in web search research, it is necessary to model the techniques and main...

    Umbraco CMS: Umbraco 4.10.0: NuGet: NuGet Blog. Read the release blog post for 4.10.0. What's new: MVC support; new request pipeline; many, many bugfixes (see the issue tracker for a complete list). Read the documentation for the MVC bits. Breaking changes: we have done all we can not to break backwards compatibility, but we had to make some minor breaking changes: removed the graphicHeadlineFormat config setting from umbracoSettings.config (an old relic from the 3.x days); U4-690 DynamicNode ChildrenAsList was fixed, altering it'...

    SQL Server Partitioned Table Framework: Partitioned Table Framework Release 1.0: SQL Server 2012 release

    SharePoint Manager 2013: SharePoint Manager 2013 Release ver 1.0.12.1106: SharePoint Manager 2013 Release (ver: 1.0.12.1106) is now ready for SharePoint 2013. The new version has an expanded view of the SharePoint object model and has been tested on SharePoint 2013 RTM. As a bonus, the new version is also available for SharePoint 2010 as a separate download.

    D3D9Client: D3D9Client R7: New release for Orbiter 2010-P1. Added horizon/sun angle for night-lights into the configuration file (default 10 deg); fixed some runway-lights-related bugs; added more configuration options for runway lights.

    Fiskalizacija za developere: FiskalizacijaDev 1.2: Version 1.2 is, above all, a response to the new version of the Technical Specification (v1.1) that was published a few days ago. Besides the changes tied to the (minor) modifications in the new Technical Documentation, we have extended the project with some additional features, most of which came from your suggestions - thanks :) New in v1.2: mismatched requirements (http://fiskalizacija.codeplex.com/workitem/645); sample project - amounts are multiplied by 100 (http://fiskalizacija.codeplex.c...

    MFCMAPI: October 2012 Release: Build: 15.0.0.1036. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64-bit builds will only work on a machine with Outlook 2010 64-bit installed. All other machines should use the 32-bit builds, regardless of the operating system. Facebook Badge

    DictationTool: DictationCool-WPF: Open a media file to start a new dictation. Open a dct file to continue a dictation. Compare your dictation with the original text if it exists. Save your dictation to a dct file, and restore it to continue later. Save the compared result to an html file.

    MCEBuddy 2.x: MCEBuddy 2.3.7: Changelog for 2.3.7 (32-bit and 64-bit): 1. Improved performance of the MP4 Fast and M4V Fast profiles (no deinterlacing, removed --decomb). 2. Improved priority handling. 3. Added support for pausing and resuming conversions. 4. Added support for falling back to the source directory if the network destination directory is unavailable. 5. MCEBuddy now installs ShowAnalyzer during installation. 6. Added support for the long description atom in iTunes.

    Dyanamic Reports (RDLC) - SharePoint 2010 Visual WebPart: Initial Release: This is an initial release.

    HTML Renderer: HTML Renderer 1.0.0.0 (3): Major performance improvement (http://theartofdev.wordpress.com/2012/10/25/how-i-optimized-html-renderer-and-fell-in-love-with-vs-profiler/). Minor fixes raised in the issue tracker and discussions.

    Window Manager: Window Manager 1.0: First release

    New Projects

    arteytex: this is a test
    blockworld: An implementation of a goal stack planner.
    Customer Note: Customer Note is a Windows Store application.
    Draw: ?????????????:??????、CAD??、????。
    Football Management: Football Management System is a web management system for football (soccer) leagues, teams and players.
    Hijri Converter API: This project is aimed at creating a simple RESTful API using VB and ASP.NET to do Hijri-to-Gregorian and Gregorian-to-Hijri conversion.
    httpclient?????????: httpclient?????????(1)??????????(2)?????????(3)??2012-11-06??,???????。
    Imagine Cup 2013: Development project for Imagine Cup 2013.
    MyAppReji: MyApp
    N2F Request: The N2F Request object is used to handle interactions between N2F and the global $_REQUEST variable, sanitizing any results which are returned.
    Orchard Metro Theme: Orchard Metro Theme is a clean and flexible multi-zone theme.
    Poker Clock And Goodies: poker w8
    ProjectASPReviewer: Review website for notebooks, tablets and smartphones.
    Prototype: It's about making a prototype for the final project.
    Prototype - 7COM0207: 7COM0207 web scripting module, Assignment 2.
    QuickToAD: QuickToAD is a foundational development project for the purpose of jump-starting data-driven application projects.
    Release Manager: Release Manager is a project to design and develop Windows-based release management software.
    ResW File Code Generator: A Visual Studio 2012 custom tool for generating a strongly typed helper class for accessing localized resources from a .ResW file.
    SEO Tools: This is a website containing some commonly used SEO tools. I have only added a blog ping utility at this time, but there is more to come.
    Thales communicator: A C# library that helps communicate with a Thales HSM.
    Trivial: A trivia framework: Trivial is a C# framework that helps you create custom trivia-like applications.

    Read the article

  • Webcenter 11g Patch Set 3 (PS3) - Partner Training - 19/Jan/11

    - by Claudia Costa
    Oracle WebCenter Suite is an integrated set of solutions based on market standards, aimed at building business applications, corporate portals, communities of interest, collaboration groups and social networks, consolidating web application systems, and enabling web 2.0 capabilities inside and outside an organization. The entire solution conforms to the main market standards and is based on a fully service-oriented architecture (SOA). Join this session to learn about the new features of WebCenter Suite 11g PS3.

    Agenda
    09.30 Welcome and Introduction
    09.40 Oracle & Enterprise 2.0 - João Borrego
    10.00 Introduction to Oracle WebCenter Suite as a User Experience Platform (UXP) - Monte Kluemper
    10.40 Coffee Break
    11.00 Building Rich UIs with Oracle WebCenter Portal - Monte Kluemper
    12.00 Interactive Q&A Session - João Borrego and Monte Kluemper
    12.30 Lunch
    14.00 Deep-dive into WebCenter Technical Architecture - Monte Kluemper
    15.30 End of Session

    Target: Sales and commercial teams (morning); technical and pre-sales teams (afternoon)

    Date and Location: January 19, Lagoas Park Hotel

    Seats for this workshop are limited. Please wait for an email confirming your registration. Registration: Email. For more information, please contact: Claudia Costa / 21 423 50 27

    Read the article

  • SQL SERVER – Powershell – Importing CSV File Into Database – Video

    - by pinaldave
    Laerte Junior is my very dear friend and a PowerShell expert. At my request he has agreed to share his PowerShell knowledge with us. Laerte Junior is a SQL Server MVP and, through his technology blog and Simple-Talk articles, an active member of the Microsoft community in Brazil. He is a skilled Principal Database Architect, Developer, and Administrator, specializing in SQL Server and PowerShell programming, with over 8 years of hands-on experience. He holds a degree in Computer Science, has been awarded a number of certifications (including MCDBA), and is an expert in SQL Server 2000 / SQL Server 2005 / SQL Server 2008 technologies. Let us read the blog post in his own words.

    I was reading an excellent post from my great friend Pinal about loading data from CSV files into SQL Server, SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video, and was honored to write another guest post on SQL Authority about the magic of PowerShell. The biggest thing at TechEd NA this year was PowerShell. Fellows, if you still don't know about it, you had better run. Remember that Core Servers for SQL Server are the future, and consequently so is the shell. You don't want to be left out of this, right? Let's see some PowerShell magic now.

    To start our tour, first we need to download two functions from PowerShell and SQL Server Master Jedi Chad Miller: Out-DataTable and Write-DataTable. Save them in a module and add it to your profile. In my case, the module is called functions.psm1. To have some data to play with, I created 10 csv files with the same content. I just put the SQL Server errorlog into a csv file and created 10 copies of it:

      #Just create a CSV with data to import, using the SQL Server errorlog
      [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
      $ServerInstance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $Env:ComputerName
      $ServerInstance.ReadErrorLog() | Export-Csv -Path "c:\SQLAuthority\ErrorLog.csv" -NoTypeInformation
      for ($Count = 1; $Count -le 10; $Count++) {
          Copy-Item "c:\SQLAuthority\ErrorLog.csv" "c:\SQLAuthority\ErrorLog$($Count).csv"
      }

    Now in my path c:\sqlauthority, I have 10 csv files. Next it is time to create a table. In my case, the SQL Server is called R2D2, the database is SQLServerRepository, and the table is CSV_SQLAuthority:

      CREATE TABLE [dbo].[CSV_SQLAuthority](
          [LogDate] [datetime] NULL,
          [Processinfo] [varchar](20) NULL,
          [Text] [varchar](MAX) NULL
      )

    Let's play a little bit. I want to import all csv files from the path into the table synchronously:

      #Importing synchronously
      $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
      $DataTable = Out-DataTable -InputObject $DataImport
      Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName CSV_SQLAuthority -Data $DataTable

    Very cool, right? Let's do it asynchronously, in the background, using PowerShell jobs:

      #If you want to do it all asynchronously
      Start-Job -Name 'ImportingAsynchronously' `
          -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
          -ScriptBlock {
              $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
              $DataTable = Out-DataTable -InputObject $DataImport
              Write-DataTable -ServerInstance "R2D2" `
                              -Database "SQLServerRepository" `
                              -TableName "CSV_SQLAuthority" `
                              -Data $DataTable
          }

    Oh, but what if I have csv files that are large in size and I want to import each one asynchronously? In this case, this is what should be done:

      Get-ChildItem "c:\SQLAuthority\*.csv" | % {
          Start-Job -Name "$($_)" `
              -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
              -ScriptBlock {
                  $DataImport = Import-Csv -Path $args[0]
                  $DataTable = Out-DataTable -InputObject $DataImport
                  Write-DataTable -ServerInstance "R2D2" `
                                  -Database "SQLServerRepository" `
                                  -TableName "CSV_SQLAuthority" `
                                  -Data $DataTable
              } -ArgumentList $_.FullName
      }

    How cool is that? Now for the fun stuff: let's schedule it in a SQL Server Agent job. If you are using SQL Server 2012, you can use the PowerShell job step; otherwise you need to use a CmdExec job step calling PowerShell.exe. We will use the second option. First, create a ps1 file called ImportCSV.ps1 with the script above and save it in a path. In my case, it is c:\temp\automation. Just add these lines at the end:

      Get-Job | Wait-Job | Out-Null
      Remove-Job -State Completed

    Why? See my post Dooh PowerShell Trick–Running Scripts That has Posh Jobs on a SQL Agent Job. Remember, this trick applies to ALL scripts that use PowerShell jobs under any kind of scheduling tool (SQL Server Agent, Windows Task Scheduler). Create a job called ImportCSV and a step called Step_ImportCSV, and choose CmdExec. Then you just need to schedule it or run it. I made a short video (with matching good background music) and you can see it at: That's it, guys. C'mon, join me in the #PowerShellLifeStyle. You will love it. If you want to check what we can do with PowerShell and SQL Server, don't miss Laerte Junior's LiveMeeting on July 18. You can find more information at: LiveMeeting VC PowerShell PASS–Troubleshooting SQL Server With PowerShell–English. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology, Video Tagged: Powershell
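    As a footnote to the CmdExec step above, a minimal sketch of what the job step command line could look like (the script path is the one used above; the -NoProfile and -ExecutionPolicy switches are my assumptions, not from the original post):

      powershell.exe -NoProfile -ExecutionPolicy Bypass -File "c:\temp\automation\ImportCSV.ps1"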

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say it does not work, some say it works, but... Actually, having implemented it many times, I belong to the latter group. Here is how.

    Caveat: Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g.

    Assumptions: The whole process should be done in one day. I assume that all domains and instances are started during setup; if you need to restart them on demand or on purpose, be sure to have proper start/stop scripts (I don't cover them here).

    Preparation: It goes without saying that you should always do a proper backup before you change anything in your production environment. By proper backup, I also mean a tested and verified restore process. If you have never dared to test it, do it now. It pays off.

    Requirements: For OAM 11g to work properly you need an LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try; in this case OID 11g is a good choice. During the installation and integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME.

    Installation vs Configuration: With OFM 11g, Oracle introduced a clear separation between installation of the binaries (the software) and configuration of the instances (the runtime). This is really great, as you can install all the software and create new instances when needed. In the following we adhere to this scheme: install the software first, then configure the instances later.

    Installation Steps: The Oracle documentation contains all the necessary steps for the installation of all pieces of software, but some hints help to avoid traps and pitfalls.

    Step 1 - The Database: Start the installation with the database. It is quite obvious, but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, install at least an Oracle 10.2.0.4 version. This database can be on a different host.

    Step 2 - The Repository Creation Utility: The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema.

    Step 3 - The Foundation: With OFM 11g, Oracle started to use WebLogic Server 11g (WLS) as the foundation for all OFM 11g installations. We therefore install it first. Depending on your operating system, it might be that no native installer is available. My approach to this dilemma is to use the WLS generic installer for all my installations. It does not include a JDK either, but if you have both for your platform you are ready to go.

    Step 3a - The JDK: To make things interesting, Oracle currently has two JDKs in its portfolio: the Sun JDK and the JRockit JDK.
    Both are available for a number of platforms. If you are lucky and both are available for your platform, install each in its own separate directory (and not in one of your ORACLE_HOMEs). You can then use whichever you like later.

    Step 3b - Install WLS for OID and OAM: With the JDK installed, we start the generic installer with java -jar wls_generic.jar. STOP! Before you do this, check the Java version first. It should be 1.6.0_18 or later and not the GCC one (some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME.

    Step 4 - Install OID: Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and select the "install only" option. This installs the software only and does not configure the instance. Use the IAM_HOME as the target directory.

    Step 5 - Install SOA Suite: The IAM 11g suite uses the BPEL component of SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries; configuration will be done later.

    Step 6 - Install OAM: Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software; the instance will be created later.

    Step 7 - Backup the Installation: At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good for when you need to go back to this point.

    Step 8 - Configure OID: The software is installed, and now we need instances to run it. This process is called configuration. For OID, use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly. If you encounter issues, check the Oracle Support site for help. This configuration also starts the OID instance.

    Step 9 - Install the Oracle AS SSO Schema: Before we install the Forms software, we need to install the Oracle AS SSO schema into the database and OID. This is a rather dangerous procedure, but it is fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go; do not reboot your host during the whole procedure. As a precaution, make a backup of the OID instance before you start. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps, and you will succeed with a working solution.

    Step 10 - Configure OAM: Reached this step? Great. You are ready to create an OAM instance. Use $IAM_HOME/iam/common/bin/config.sh for this. It will open the WLS Domain Creation Wizard and ask for the libraries to be installed. You should at least select the "OAM with Database repository" item. The configuration will also start the OAM instance.

    Step 11 - Install WLS for Forms 11g: It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages, so we do another WLS installation in another ORACLE_HOME. The same considerations as in step 3b apply.
We call this one FORMS_HOME. Step 12 Install Forms In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is an install-only step. Configuration starts with the next step. Step 13 Configure Forms To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and start them automatically. Step 14 Setup your Forms SSO Environment Once you have implemented and tested your Forms 11g instance, you can configure it for SSO. Yes, this requires the old Oracle AS SSO solution: OIDDAS for creating and assigning users, and SSO to set up your partner applications. In this step you should consider creating every user necessary for use within the environment. When done, do not forget to test it. Step 15 Migrate the SSO Repository Since the final goal is to get rid of the old SSO implementation, we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua or ua.bat or ua.cmd) on the operating system level in $IAM_HOME/bin. Once finished, this wizard will create new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/. Note: At the time of this writing, this step only works if everything is on the same host (i.e. OID, OAM, etc.). This restriction might be lifted in later releases. Step 16 Change your OHS sso.conf and shut down OC4J_SECURITY In Step 14 we verified that SSO for our Forms environment works fine. Now we shut the old system down and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf to osso.conf.10g. Now we change the moduleconf/mod_osso.conf to point to the new osso.conf file. Copy the new osso.conf file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS and test Forms by using the same Forms links. OAM should now kick in and show the login dialog to ask for your user credentials. Done. Now your Forms environment is successfully integrated with OAM 11g. Enjoy. What's Next? This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on. References Nearly everything is documented. Use the documentation! Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1 Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapter 11-14 Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10   

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspx Microsoft just released a bunch of new features for Azure on the 22nd, and the one I was most interested in is DocumentDB, a document NoSQL database service on the cloud.   Quick Look at DocumentDB We can try DocumentDB from the new Azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In the resource group section we can select which region our DocumentDB will be located in. As with other Azure services, select the same location as the consumers of the DocumentDB, for example the website, web services, etc. After several minutes the DocumentDB will be ready. Click the KEYS button and we can find the URI and primary key, which will be used when connecting. Now let's open Visual Studio and try to use the DocumentDB we have just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelease" in the NuGet Package Manager window, since this library has not yet been released. Next we will create a new database and document collection under our DocumentDB account. The code below creates an instance of DocumentClient with the URI and primary key we just copied from the Azure portal, and creates a database and collection. It also prints the database and collection link strings, which will be used later to insert and query documents.

static void Main(string[] args)
{
    var endpoint = new Uri("https://shx.documents.azure.com:443/");
    var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

    var client = new DocumentClient(endpoint, key);
    Run(client).Wait();

    Console.WriteLine("done");
    Console.ReadKey();
}

static async Task Run(DocumentClient client)
{
    var database = new Database() { Id = "testdb" };
    database = await client.CreateDatabaseAsync(database);
    Console.WriteLine("database link = {0}", database.SelfLink);

    var collection = new DocumentCollection() { Id = "testcol" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
    Console.WriteLine("collection link = {0}", collection.SelfLink);
}

Below is the result from the console window. We need to copy the collection link string for future usage. Now if we go back to the portal we will find a database listed with the name we specified in the code. Next we will insert a document into the database and collection we have just created. In the code below we pasted the collection link copied in the previous step, and we create a dynamic object with several properties defined. As you can see, we can add normal properties containing strings or integers, and we can also add complex properties, for example an array, a dictionary or an object reference, as long as they can be serialized to JSON. 
static void Main(string[] args)
{
    var endpoint = new Uri("https://shx.documents.azure.com:443/");
    var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

    var client = new DocumentClient(endpoint, key);

    // collection link pasted from the result in the previous demo
    var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

    // document we are going to insert into the database
    dynamic doc = new ExpandoObject();
    doc.firstName = "Shaun";
    doc.lastName = "Xu";
    doc.roles = new string[] { "developer", "trainer", "presenter", "father" };

    // insert the document
    InsertADoc(client, collectionLink, doc).Wait();

    Console.WriteLine("done");
    Console.ReadKey();
}

The insert code is very simple, as below: just provide the collection link and the object we are going to insert.

static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc)
{
    var document = await client.CreateDocumentAsync(collectionLink, doc);
    Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented));
}

Below is the result after the object has been inserted. Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us to retrieve all documents in it.

static void Main(string[] args)
{
    var endpoint = new Uri("https://shx.documents.azure.com:443/");
    var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

    var client = new DocumentClient(endpoint, key);

    var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

    SelectDocs(client, collectionLink);

    Console.WriteLine("done");
    Console.ReadKey();
}

static void SelectDocs(DocumentClient client, string collectionLink)
{
    var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList();
    foreach (var doc in docs)
    {
        Console.WriteLine(doc);
    }
}

Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attached some properties we didn't specify, such as "_rid", "_ts", "_self" etc., which are controlled by the service.   DocumentDB Benefit DocumentDB is a document NoSQL database service. Different from a traditional relational database, a document database is truly schema-free. In a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub documents are retrieved at the same time. This means you don't need to join other tables, as you would when using a traditional database. A document database is very useful when we build high performance systems with hierarchical data structures. For example, assume we need to build a blog system; there will be many blog posts, and each of them contains the content and comments. A comment can be commented on as well. If we were using a traditional database, let's say SQL Server, the database schema might be defined as below. When we need to display a post, we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field. But if we were using DocumentDB, all we need to do is save the post as a document with a list containing all comments. 
Under a comment, all sub comments are nested as a list inside it. When we display this post, we just need to query the post document; the content and all comments will be loaded in the proper structure.

{
  "id": "xxxxx-xxxxx-xxxxx-xxxxx",
  "title": "xxxxx",
  "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
  "postedOn": "08/25/2014 13:55",
  "comments":
  [
    {
      "id": "xxxxx-xxxxx-xxxxx-xxxxx",
      "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
      "commentedOn": "08/25/2014 14:00",
      "commentedBy": "xxx"
    },
    {
      "id": "xxxxx-xxxxx-xxxxx-xxxxx",
      "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
      "commentedOn": "08/25/2014 14:10",
      "commentedBy": "xxx",
      "comments":
      [
        {
          "id": "xxxxx-xxxxx-xxxxx-xxxxx",
          "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
          "commentedOn": "08/25/2014 14:18",
          "commentedBy": "xxx",
          "comments":
          [
            {
              "id": "xxxxx-xxxxx-xxxxx-xxxxx",
              "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
              "commentedOn": "08/25/2014 18:22",
              "commentedBy": "xxx"
            }
          ]
        },
        {
          "id": "xxxxx-xxxxx-xxxxx-xxxxx",
          "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
          "commentedOn": "08/25/2014 15:02",
          "commentedBy": "xxx"
        }
      ]
    },
    {
      "id": "xxxxx-xxxxx-xxxxx-xxxxx",
      "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
      "commentedOn": "08/25/2014 14:30",
      "commentedBy": "xxx"
    }
  ]
}

  DocumentDB vs. Table Storage DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is "when should we use DocumentDB rather than Table Storage". Here are some ideas from me and some MVPs. First of all, they are different kinds of NoSQL database: DocumentDB is a document database, while Table Storage is a key-value database. Second, Table Storage is cheaper. DocumentDB supports scale-out from one capacity unit to five during the preview period, and each capacity unit provides 10GB of local SSD storage. The price is $0.73/day, including a 50% discount. For the storage service, the highest price is $0.061/GB, which is almost 10% of DocumentDB's. Third, Table Storage provides local replication, geo-replication and read-access geo-replication, while DocumentDB doesn't support these yet. Fourth, there is a local emulator for Table Storage but none for DocumentDB; we have to connect to the DocumentDB in the cloud when developing locally. But DocumentDB supports some cool features that Table Storage doesn't have. It supports stored procedures, triggers and user-defined functions. It supports rich indexing, while Table Storage only supports indexing on the partition key and row key. It supports transactions; Table Storage does as well, but restricted to the Entity Group Transaction scope. And last, Table Storage is GA while DocumentDB is still in preview.   Summary In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to interact with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. Then I explained the concept and benefit of using a document database, and compared it with Table Storage.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
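One small addendum to the query demo above: besides retrieving everything, CreateDocumentQuery can also take a SQL-like filter string. The sketch below assumes the preview SDK's string-query overload and the collection link from the demo, and the helper name is my own; treat it as illustrative rather than the post's own code.

static void SelectByFirstName(DocumentClient client, string collectionLink, string firstName)
{
    // SQL-like grammar of the preview service: "root" refers to each document.
    // The filter is built by plain string formatting here, so only use trusted input.
    var query = string.Format("SELECT * FROM root r WHERE r.firstName = '{0}'", firstName);
    var docs = client.CreateDocumentQuery(collectionLink + "docs/", query).ToList();
    foreach (var doc in docs)
    {
        Console.WriteLine(doc);
    }
}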

    Read the article

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours.  Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums.  Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same: A Programmer’s Tale A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in).  This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do".  He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques.  They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the “elite” in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario.  Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers. Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers. I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of “developer” that has helped to give our industry a bad name.  Here is why: One Who Knows Nothing, Understands Nothing On one hand, you have the cognoscenti of the .NET development world.  Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers that are pretty much our version of celebrities.  These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with.  They tout the virtues of the SOLID principles, or of using TDD/BDD, or using a mature ORM like NHibernate, Subsonic or even Entity Framework. On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss.  Joe's been with Initrode for 10 years, starting as the company’s very first programmer and over the years building up a little fiefdom of his own until at present he’s in charge of all Initrode’s software development.  Joe writes code the same way he always has, without bothering to learn much, if anything.  
He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrewed solution has never occurred to him.  He doesn't understand TDD and considers “testing” to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works.  He doesn't really understand SOLID, and he doesn't care to.  He's worked as a programmer for years, and that's all that counts.  Right?  WRONG. Who would you rather trust?  Someone with years of experience and who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability?  Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way", but someone like Jeremy Miller or Ayende Rahien has years of experience at companies just like Initrode; THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way". Here's another way of thinking about it: If you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA or Barack Obama?  One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice. NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field.  Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice. DIY Considered Harmful I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside its own little box - no useful skill outside of the small pond.  If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy.  However if, like most of us, you want to advance your career outside a very narrow space you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else because you are aware of alternative and, in almost all cases, better tools for the job.  A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/a well-known ORM versus some in-house one knows better than a senior developer with 20 years experience who doesn't understand any of that, plain and simple.  Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve. 
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a “junior” could know more than them; after all, they’ve spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software.  And here comes a newbie who hasn’t spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as “proof” that he’s good, or that he might have to learn something new to improve; in most cases the problem is Joe Everyman, and by extension Initrode itself, has a mentality of just being “good enough”, and mediocrity is the rule of the day. A Thorn Bush is No Place for a Phoenix My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say are the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company.  Find a company where they DO care about quality, and craftsmanship, otherwise you will never be happy.  There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care to craftsmanship.  In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman because he won't listen; he might even get upset that someone is trying to "upstage" him and fire the newbie, and replace someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question. Find a company that has people smart enough to listen to the "best and brightest", and be happy.  Do not, I repeat, DO NOT waste away in a job working for ignorant people.  At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional.  When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity.  Work for a company that uses REAL software engineering techniques and really cares about craftsmanship.  The biggest issue affecting our career, and the reason software development has never been the respected, white-collar career it was meant to be, is because hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are.  These modern day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them. To sum this up another way: If you surround yourself with learned people, you will learn.  Surround yourself with ignorant people who can't, as the saying goes, see the forest for the trees, and you'll learn nothing of any real value.  
There is more to software development than just writing code, and the end goal should not be just "shipping software", it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one.  An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line.  This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and less Joe Everymans who are unwilling to adapt or foster new ways of thinking. Conclusion - I Cast “Protection from Fire” I am fairly certain this post will spark some controversy and might even invite the flames.  Please keep in mind these are opinions and nothing more.  A little healthy rant and subsequent flamewar can be good for the soul once in a while.  To paraphrase The Godfather: It helps to get rid of the bad blood.

    Read the article

  • Silverlight Cream for November 08, 2011 -- #1165

    - by Dave Campbell
In this Issue: Brian Noyes, Michael Crump, WindowsPhoneGeek, Erno de Weerd, Jesse Liberty, Derik Whittaker, Sumit Dutta, Asim Sajjad, Dhananjay Kumar, Kunal Chowdhury, and Beth Massi. Above the Fold: Silverlight: "Working with Prism 4 Part 1: Getting Started" Brian Noyes WP7: "Getting Started with the Coding4Fun toolkit Tile Control" WindowsPhoneGeek LightSwitch: "How to Connect to and Diagram your SQL Express Database in Visual Studio LightSwitch" Beth Massi Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up From SilverlightCream.com: Working with Prism 4 Part 1: Getting Started Brian Noyes has a series starting at SilverlightShow about Prism 4 ... this is the first one, so a good time to jump in and pick up on an intro and basic info about Prism plus building your first Prism app. 10 Laps around Silverlight 5 (Part 5 of 10) Michael Crump has Part 5 of his 10-part Silverlight 5 investigation up at SilverlightShow talking about all the various text features added in Silverlight 5 Beta: Text Tracking and Leading, Linked and MultiColumn, OpenType, etc. Getting Started with the Coding4Fun toolkit Tile Control WindowsPhoneGeek takes on the Tile control from the Coding4Fun toolkit... as usual, great tutorial... diagrams, code, explanation Using AppHarbor, Bitbucket and Mercurial with ASP.NET and Silverlight – Part 2 CouchDB, Cloudant and Hammock Erno de Weerd has Part 2 of his trilogy and he's trying to beat David Anson for the long title record :) ... in this episode, he's adding in cloud storage to the mix in a 35-step tutorial. Background Audio Jesse Liberty's talking about background audio... and no, not the Muzak in the elevator (do they still have that?) ... he's talking about the WP7.1 BackgroundAudioPlayer Using the ToggleSwitch in WinRT/Metro (for C#) Derik Whittaker shows off the ToggleSwitch for WinRT/Metro... not a lot to be said about it, but he says it all :) Part 19 - Windows Phone 7 - Access Phone Contacts Sumit Dutta has Part 19! of his WP7 series up... talking today about getting a phone number from the directory using the PhoneNumberChooserTask ContextMenu using MVVM Asim Sajjad shows how to make the Context Menu ViewModel friendly in this short tutorial. Code to make call in Windows Phone 7 Dhananjay Kumar's latest WP7 post explains how to make a call programmatically using the PhoneCallTask launcher. Silverlight Page Navigation Framework - Basic Concept Kunal Chowdhury has a 3-part tutorial series on Silverlight Navigation up. This is the first in the series, and he hits the basics... what constitutes a Page, and how to get started with the navigation framework. How to Connect to and Diagram your SQL Express Database in Visual Studio LightSwitch Beth Massi's latest LightSwitch post is on using the Data Designer to easily create and model database tables... during development this is in SQL Express, but it can be deployed to the SQL Server database of your choice Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring, and b) how to fix it. In the hope that someone else stumbles across this problem too, I'm hoping someone can help me find the solution here. I'm getting a configure: error: newly created file is older than distributed files! error when trying to compile Ruby Enterprise, like below, when I run the installer, and the solutions offered on the forums (checking the time, and touching the files to update the time associated with them) don't seem to be helping here. What steps can I take to work out the cause of this problem?

[vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer
Welcome to the Ruby Enterprise Edition installer
This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10.
Don't worry, none of your system files will be touched if you don't want them to, so there is no risk that things will screw up.
You can expect this from the installation process:
1. Ruby Enterprise Edition will be compiled and optimized for speed for this system.
2. Ruby on Rails will be installed for Ruby Enterprise Edition.
3. You will learn how to tell Phusion Passenger to use Ruby Enterprise Edition instead of regular Ruby.
Press Enter to continue, or Ctrl-C to abort.
Checking for required software...
* C compiler... found at /usr/bin/gcc
* C++ compiler... found at /usr/bin/g++
* The 'make' tool... found at /usr/bin/make
* Zlib development headers... found
* OpenSSL development headers... found
* GNU Readline development headers... found
--------------------------------------------
Target directory
Where would you like to install Ruby Enterprise Edition to? (All Ruby Enterprise Edition files will be put inside that directory.) [/opt/ruby-enterprise] :
--------------------------------------------
Compiling and optimizing the memory allocator for Ruby Enterprise Edition
In the mean time, feel free to grab a cup of coffee.
./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... configure: error: newly created file is older than distributed files!
Check your system clock

This is a virtual machine running on VirtualBox, and the time of the host and the virtual machine are identical and up to date. I've also tried running this after updating the time with an NTP client, to no avail. I tried this after reading this post here of someone having a similar problem:

[vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date
Tue Apr 27 08:09:05 BST 2010

The other approach I've tried is to touch the top level files in the build folder like suggested here, but this hasn't worked either (and to be honest, I'm not sure why it would have worked anyway):

[vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/*

I'm not sure what I can do next here - the problem seems to be the bash configure script that returns the error: newly created file is older than distributed files!, at line 2214:

{ echo "$as_me:$LINENO: checking whether build environment is sane" >&5
echo $ECHO_N "checking whether build environment is sane... $ECHO_C" >&6; }
# Just in case
sleep 1
echo timestamp > conftest.file
# Do `set' in a subshell so we don't clobber the current shell's
# arguments. Must try -L first in case configure is actually a
# symlink; some systems play weird games with the mod time of symlinks
# (eg FreeBSD returns the mod time of the symlink's containing
# directory).
if (
   set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null`
   if test "$*" = "X"; then
      # -L didn't work.
      set X `ls -t $srcdir/configure conftest.file`
   fi
   rm -f conftest.file
   if test "$*" != "X $srcdir/configure conftest.file" \
      && test "$*" != "X conftest.file $srcdir/configure"; then
      # If neither matched, then we have a broken ls. This can happen
      # if, for instance, CONFIG_SHELL is bash and it inherits a
      # broken ls alias from the environment. This has actually
      # happened. Such a system could not be considered "sane".
      { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5
echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;}
      { (exit 1); exit 1; }; }
   fi

   ### PROBLEM LINE ####
   # this line is the problem line - this test sometimes passes and sometimes
   # fails, and I can't see a pattern that determines when it will pass or not.
   test "$2" = conftest.file
   )
then
   # Ok.
   :
else
   { { echo "$as_me:$LINENO: error: newly created file is older than distributed files!
Check your system clock" >&5
echo "$as_me: error: newly created file is older than distributed files!
Check your system clock" >&2;}
   { (exit 1); exit 1; }; }
fi

The thing that makes this really frustrating is that this script works sometimes: when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough to make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks

    Read the article

  • How to name the largest mathematical unit of measure after the yotta? "Hella", proposes an American student

    How to name the largest mathematical unit of measure after the yotta? "Hella", proposes an American student. Austin Sendek is a Californian student enrolled in a physics program. And, for a few weeks now, a question has been haunting him: what name should be given to the unit of measure that comes after mega, giga, etc.? The International System of Units has a series of prefixes to designate the mathematical scale of the unit being measured. There is mega, then giga, then tera, and finally peta. To date, the largest unit of measure is the yotta, which is equivalent to 10 to the power of 24. The next prefix, if we follow the logic of the existing ones, should denote 10 to the power of 27. Problem: at the...

    Read the article

  • FY12 Partner Kickoff – Are you Ready?

    - by user715249
Want to know what Oracle has up its sleeve for FY12? Join us on June 28th (or 29th depending on your region) as we kick off the new year and fill you in on our latest plans. This live, interactive session will be hosted by Judson Althoff and he’s bringing some top Oracle executives into the studio with him. Take a peek below as Lydia Smyers, Oracle WW A&C vice president, talks about this upcoming event. As if that weren’t enough, Oracle President Mark Hurd will update you on his focus for partners in FY12 and OPN will be making a special announcement for our ISV partners highlighting some exciting new offerings on how we will go to market together.  But wait, there’s more – you’ll also hear from Oracle product executives and regional sales leaders outlining their priorities for the upcoming year.  Phew – We’re tired just thinking about all of the great content that will be shared on June 28th/29th.  There are a lot of exciting announcements in store for you later this month so tune in for the latest updates! Register now to get your name on ‘the list’, The OPN Communications Team

    Read the article

  • If my team has low skill, should I lower the skill of my code?

    - by Florian Margaine
For example, there is a common snippet in JS to get a default value: function f(x) { x = x || 10; } This kind of snippet is not easily understood by all the members of my team, their JS level being low. Should I avoid using this trick, then? It makes the code less readable to my peers, but more readable than the following to any experienced JS dev: function f(x) { if (!x) { x = 10; } } Sure, if I use this trick and a colleague sees it, then they can learn something. But more often they see it as "trying to be clever". So, should I lower the level of my code if my teammates have a lower level than me?

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
Author: Moazzam Chaudry Continuing from Friday's blog on 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details around the customers. It was a tremendous honor to be in a single room of winners. We only wish we could have had more time to share stories from all the winners.  We received great insight from all the innovative solutions that our customers deploy and would like to share them broadly, so that others can benefit from best practices. There was a customer panel session joined by Ingersoll Rand, Nike and Motability, and here is what was discussed: Barry Bonar, Enterprise Architect from Ingersoll Rand, shared details around their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solution enabled their business transformation to increase decision-making, speed and efficiency, resulting in 40% reduced IT spend, 41X faster response time and huge cost savings. Ashok Balakrishnan, Architect from Nike, shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities. This helps them compete, collaborate and compare athletic data over time. Lastly, Ashley Doodly, Head of IT from Motability, shared details around their solution comprised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. 10's of minutes and tremendous improvement in throughput that increased up to 50%.  This year's winners by category are: Oracle Exalogic Customer Results using Fusion Middleware Netshoes ATG on Exalogic: 6X Reduced H/W foot print, 6.2X increased throughput and 3 weeks time to market Claro Part of America Movil, running mission critical Java Application on Exalogic with 35X Faster Java response time, 5X Throughput Underwriters Laboratories Exalogic as an Apps Consolidation platform to power tremendous growth Ingersoll Rand EBS on Exalogic: Up to 40% Reduction in overall IT budget, 3x reduced foot print Oracle Cloud Application Foundation Customer Results using Fusion Middleware  Mazda Motor Corporation Tuxedo ART Batch runtime environment to migrate their batch apps on new open environment and reduce main frame cost. HOTELBEDS Technology Open Source to WebLogic transformation Globalia Corporation Introduced Oracle Coherence to fully reengineer DTH system and provide multiple business and technical benefits Nike Nike+, digital sports platform, has 8M users and is expecting an 5X increase in users, many of who will carry multiple devices that frequently sync data with the Digital Sport platform Comcast Corporation The solution is expected to increase availability, continuity, performance, and simplify and make the code at the application layer more flexible. Oracle SOA and Oracle BPM Customer Results using Fusion Middleware NTT Docomo Network traffic solution based on Oracle event processing and coherence - massive in scale: 12M users (50M in future) - 800,000 events/sec. Schneider National, Inc. SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM & EBS.  Platform runs 60M trans/day and  50 million composite SOA instances per day across 10G and 11G Amadeus Oracle BPM solution: Business Rules and processes vary across local (80), regional (~10) and corporate approval process. Up to 10 levels of approval. 
Plans to deploy across 20+ markets Navitar SOA solution integrates a fully non-Oracle legacy application/ERP environment using Oracle’s SOA Suite and Oracle AIA Foundation Pack. Motability Uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process Oracle WebCenter Customer Results using Fusion Middleware  News Limited Single platform running websites for 50% of Australia's newspapers University of Louisville “Facebook for Medicine”: Oracle Webcenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues. Expecting annualized ROI of 277% China Mobile Jiangsu Company portal (25k users) to drive collaboration & productivity Life Technologies Portal for remotely monitoring & repairing biotech instruments LA Dept. of Water & Power Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6million users Oracle Identity Management Customer Results using Fusion Middleware Education Testing Service Identity Management platform for provisioning & SSO of 6 million GRE, GMAT, TOEFL customers Avea Oracle Identity Manager allowing call center personnel to quickly change Identity Profile to handle varying call loads based on a user self service interface. Decreased Admin Cost by 30% Oracle Data Integration Customer Results using Fusion Middleware Raymond James Near real-time integration for improved systems (throughput & performance) and enhanced operational flexibility in a 24 X 7 environment Wm Morrison Supermarkets Electronic Point of Sale integration handling over 80 million transactions a day in near real time (15 min intervals) Oracle Application Development Framework and Oracle Fusion Development Customer Results using Fusion Middleware Qualcomm Incorporated Solution providing  immediate business value enabling a self-service model necessary for growing the new customer base, an increase in customer satisfaction, reduced “time-to-deliver” Micros Systems, Inc. ADF, SOA Suite, WebCenter  enables services that include managing distribution of hotel rooms availability and rates to channels such as Hotel Web-site, Expedia, etc. Marfin Egnatia Bank A new web 2.0 UI provides a much richer experience through the ADF solution with the end result being one of boosting end-user productivity    Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics) Customer Results using Fusion Middleware INC Research Self-service customer portal delivering 5–10% of the overall revenue - expected to grow fast with the BI solution Experian Reduction in Time to Complete the Financial Close Process Hologic Inc Solution, saving months of decision-making uncertainty! We look forward to seeing many more innovative nominations. The nomination process for 2013 begins in April 2013.    Additional Information: Blog: Oracle WebCenter Award Winners Blog: Oracle Identity Management Winners Blog: Oracle Exalogic Winners Blog: SOA, BPM and Data Integration will feature award winners in their respective areas this week Subscribe to our regular Fusion Middleware Newsletter Follow us on Twitter and Facebook

    Read the article

  • SilverlightShow for Jan 17-23, 2011

    - by Dave Campbell
Check out the Top Five most popular news at SilverlightShow for Jan 17-23, 2011. Jesse Liberty's list of a dozen absolutely essential utilities for programmers took first place in last week's SilverlightShow top news. Also among the most visited news is the collection of top picks from the "Above the Fold" list on SilverlightCream. Here's SilverlightShow's top 5 news for last week: 12 Essential Utilities For Programmers 10 things that can be improved in Silverlight A Silverlight Sample Built with Self-Tracking Entities and WCF Services Exploring Ribbon Control for Silverlight (Part - 2) SilverlightCream top picks for January 10-16, 2011 Visit and bookmark SilverlightShow. Stay in the 'Light

    Read the article

  • Monitoring SQL Server Agent job run times

    - by okeofs
Introduction A few months back, I was asked how long a particular nightly process took to run. It was a super question, and the one thing that struck me was that there were a plethora of factors affecting the processing time. This said, I developed a query to ascertain process run times and the average nightly run times, and applied some KPIs to the end query, the end goal being to enable me to quickly detect anomalies and processes that are running beyond their normal times. As many of you are aware, most of the necessary data for this type of query lies within the MSDB database. The core portion of the query is shown below.

select sj.name, sh.run_date, sh.run_duration,
    case
        when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
        when len(sh.run_duration) = 5 then '0' + convert(varchar(8), sh.run_duration)
        when len(sh.run_duration) = 4 then '00' + convert(varchar(8), sh.run_duration)
        when len(sh.run_duration) = 3 then '000' + convert(varchar(8), sh.run_duration)
        when len(sh.run_duration) = 2 then '0000' + convert(varchar(8), sh.run_duration)
        when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
    end as tt
from dbo.sysjobs sj with (nolock)
inner join dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id
where sj.name = 'My Agent Job'
and sh.[Message] like '%The job%'

Run_date and run_duration are obvious fields. The field 'Name' is the name of the job that we wish to follow. The only major challenge was that the format of the run duration was not as 'user friendly' as I would have liked. As an example, a run duration of 1 hour, 10 minutes and 3 seconds would be displayed as 11003, whereas I wanted it displayed in a more user friendly manner as 01:10:03. In order to achieve this effect, we need to add leading zeros to the run_duration based upon the case logic shown above. At this point what we need to do is add colons: one between the hours and minutes and one between the minutes and seconds. To achieve this I nested the query shown above (in purple) within a 'super' query. Thus the run time ([Run Time]) is constructed by concatenating a series of substrings (see below in blue).

select run_date,
    substring(convert(varchar(20), tt), 1, 2) + ':' +
    substring(convert(varchar(20), tt), 3, 2) + ':' +
    substring(convert(varchar(20), tt), 5, 2) as [run_time]
from (
    select sj.name, sh.run_date, sh.run_duration,
        case
            when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 5 then '0' + convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 4 then '00' + convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 3 then '000' + convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 2 then '0000' + convert(varchar(8), sh.run_duration)
            when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
        end as tt
    from dbo.sysjobs sj with (nolock)
    inner join dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id
    where sj.name = 'My Agent Job'
    and sh.[Message] like '%The job%'
) a

Now that I had each nightly run time in hours, minutes and seconds (01:10:03), I decided that it would be very productive to calculate a rolling run time average. To do this, I decided to do the calculations in base units of seconds. This said, I encapsulated the query shown above within a further 'super' query (see the code in RED below). The astute reader will note that I used implied casting from integer to string, which is not the best method to use; however, it works. 
This said, if I were constructing the query again I would definitely do an explicit convert. To recap: I now have a key field of '1', each and every applicable run date, and the total number of SECONDS that the process ran for each run date, all of this data within the #rawdata1 temporary table.

select 1 as keyy, run_date,
    (substring(b.run_time, 1, 2) * 3600) + (substring(b.run_time, 4, 2) * 60) + (substring(b.run_time, 7, 2)) as run_time_in_Seconds,
    run_time
into #rawdata1
from (
    select run_date,
        substring(convert(varchar(20), tt), 1, 2) + ':' +
        substring(convert(varchar(20), tt), 3, 2) + ':' +
        substring(convert(varchar(20), tt), 5, 2) as [run_time]
    from (
        select sj.name, sh.run_date, sh.run_duration,
            case
                when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
                when len(sh.run_duration) = 5 then '0' + convert(varchar(8), sh.run_duration)
                when len(sh.run_duration) = 4 then '00' + convert(varchar(8), sh.run_duration)
                when len(sh.run_duration) = 3 then '000' + convert(varchar(8), sh.run_duration)
                when len(sh.run_duration) = 2 then '0000' + convert(varchar(8), sh.run_duration)
                when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
            end as tt
        from dbo.sysjobs sj with (nolock)
        inner join dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id
        where sj.name = 'My Agent Job'
        and sh.[Message] like '%The job%'
    ) a
) b

Calculating the average run time We now select each run time in seconds from #rawdata1 and place the values into another temporary table called #rawdata2. Once again we create a 'key', a hardwired '1'.

select 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1

The purpose of doing so is to make the average time AVG() available to the query immediately without having to do adverse grouping. Applying KPI Logic At this point, we shall apply some logic to determine whether processing times are within the norms. We do this by applying colour names. Obviously, this example is a super one for SSRS and traffic light icons.

select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds,
    Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,
    case
        when Convert(decimal(10,1), rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) <= 1.2 then 'Green'
        when Convert(decimal(10,1), rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) < 1.4 then 'Yellow'
        else 'Red'
    end as [color],

Calculating the Average Run Time in Hours, Minutes and Seconds, and the end of the query. 
case
    when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds) / (3600))) = 1
        then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds) / (3600))
    else convert(varchar(2), Avg(rd2.run_time_in_Seconds) / (3600))
end + ':' +
case
    when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds) % (3600) / 60)) = 1
        then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds) % (3600) / 60)
    else convert(varchar(2), Avg(rd2.run_time_in_Seconds) % (3600) / 60)
end + ':' +
case
    when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds) % 60)) = 1
        then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds) % 60)
    else convert(varchar(2), Avg(rd2.run_time_in_Seconds) % 60)
end as [Average Run Time HH:MM:SS]
from #rawdata2 rd2
inner join #rawdata1 rd1 on rd1.keyy = rd2.keyy
group by run_date, rd1.run_time, rd1.run_time_in_Seconds
order by run_date desc

The complete code example

use msdb
go
/*
drop table #rawdata1
drop table #rawdata2
go
*/
select 1 as keyy, run_date,
    (substring(b.run_time, 1, 2) * 3600) + (substring(b.run_time, 4, 2) * 60) + (substring(b.run_time, 7, 2)) as run_time_in_Seconds,
    run_time
into #rawdata1
from (
    select run_date,
        substring(convert(varchar(20), tt), 1, 2) + ':' +
        substring(convert(varchar(20), tt), 3, 2) + ':' +
        substring(convert(varchar(20), tt), 5, 2) as [run_time]
    from (
        select name, run_date, run_duration,
            case
                when len(run_duration) = 6 then convert(varchar(8), run_duration)
                when len(run_duration) = 5 then '0' + convert(varchar(8), run_duration)
                when len(run_duration) = 4 then '00' + convert(varchar(8), run_duration)
                when len(run_duration) = 3 then '000' + convert(varchar(8), run_duration)
                when len(run_duration) = 2 then '0000' + convert(varchar(8), run_duration)
                when len(run_duration) = 1 then '00000' + convert(varchar(8), run_duration)
            end as tt
        from dbo.sysjobs sj with (nolock)
        inner join dbo.sysjobHistory sh with (nolock) on sj.job_id = sh.job_id
        where name = 'My Agent Job'
        and [Message] like '%The job%'
    ) a
) b

select 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1

select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds,
    Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,
    case
        when Convert(decimal(10,1), rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) <= 1.2 then 'Green'
        when Convert(decimal(10,1), rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) < 1.4 then 'Yellow'
        else 'Red'
    end as [color],
    case
        when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds) / (3600))) = 1
            then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds) / (3600))
        else convert(varchar(2), Avg(rd2.run_time_in_Seconds) / (3600))
    end + ':' +
    case
        when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds) % (3600) / 60)) = 1
            then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds) % (3600) / 60)
        else convert(varchar(2), Avg(rd2.run_time_in_Seconds) % (3600) / 60)
    end + ':' +
    case
        when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds) % 60)) = 1
            then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds) % 60)
        else convert(varchar(2), Avg(rd2.run_time_in_Seconds) % 60)
    end as [Average Run Time HH:MM:SS]
from #rawdata2 rd2
inner join #rawdata1 rd1 on rd1.keyy = rd2.keyy
group by run_date, rd1.run_time, rd1.run_time_in_Seconds
order by run_date desc

    Read the article

  • C# window application : How to validate mobile no.

    - by SAMIR BHOGAYTA
// First: simple method (key-press handler)
private void textBox1_KeyPress(object sender, KeyPressEventArgs e)
{
    if (char.IsDigit(e.KeyChar) == true)
    {
        // the original omitted the comparison operator here; ">=" is the
        // likely intent, blocking input beyond 10 digits
        if (textBox1.Text.Length >= 10)
        {
            MessageBox.Show("Invalid Indian Mobile Number !!");
            txtPhone.Focus();
        }
    }
}

// With the help of JavaScript
function phone_validate(phone) {
    // the regex literal needs the enclosing slashes
    var phoneReg = /^((\+)?(\d{2}[-]))?(\d{10}){1}?$/;
    if (phoneReg.test(phone) == false) {
        alert("Phone number is not yet valid.");
    } else {
        alert("You have entered a valid phone number!");
    }
}
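As an alternative to validating on every key press, the whole value can also be checked once in C# with a regular expression, for example on a button click or in the control's Validating event. The sketch below is my own illustration, not from the original post: the pattern (10 digits with an optional "+91-" prefix) and the class and method names are assumptions, so adjust them to your own numbering rules.

using System.Text.RegularExpressions;

public static class MobileValidator
{
    // Assumed pattern: a 10-digit Indian mobile number, optionally
    // prefixed with the "+91-" country code.
    private static readonly Regex MobilePattern =
        new Regex(@"^(\+91-)?\d{10}$", RegexOptions.Compiled);

    public static bool IsValidMobile(string input)
    {
        return !string.IsNullOrEmpty(input) && MobilePattern.IsMatch(input);
    }
}

// Usage (sketch):
// if (!MobileValidator.IsValidMobile(txtPhone.Text))
//     MessageBox.Show("Invalid Indian Mobile Number !!");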

    Read the article

  • C#: Why Decorate When You Can Intercept

    - by James Michael Hare
We've all heard of the old Decorator Design Pattern (here) or used it at one time or another, either directly or indirectly.  A decorator is a class that wraps a given abstract class or interface and presents the same (or a superset) public interface, but "decorated" with additional functionality.   As a really simplistic example, consider the System.IO.BufferedStream, which is itself a descendant of System.IO.Stream and wraps the given stream with buffering logic while still presenting System.IO.Stream's public interface:

Stream buffStream = new BufferedStream(rawStream);

Now, let's take a look at a custom-code example.  Let's say that we have a class in our data access layer that retrieves a list of products from a database:

// a class that handles our CRUD operations for products
public class ProductDao
{
    ...

    // a method that would retrieve all available products
    public IEnumerable<Product> GetAvailableProducts()
    {
        var results = new List<Product>();

        // must create the connection
        using (var con = _factory.CreateConnection())
        {
            con.ConnectionString = _productsConnectionString;
            con.Open();

            // create the command
            using (var cmd = _factory.CreateCommand())
            {
                cmd.Connection = con;
                cmd.CommandText = _getAllProductsStoredProc;
                cmd.CommandType = CommandType.StoredProcedure;

                // get a reader and pass back all results
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        results.Add(new Product
                        {
                            Name = reader["product_name"].ToString(),
                            ...
                        });
                    }
                }
            }
        }

        return results;
    }
}

Yes, you could use EF or any of a myriad of other choices for this sort of thing, but the germane point is that you have some operation that takes a non-trivial amount of time.  What if, during the production day, I notice that my application is performing slowly and I want to see how much of that slowness is in the query versus my code?  Well, I could easily wrap the logic block in a System.Diagnostics.Stopwatch and log the results to log4net or another logging flavor of choice:

// a class that handles our CRUD operations for products
public class ProductDao
{
    private static readonly ILog _log = LogManager.GetLogger(typeof(ProductDao));
    ...

    // a method that would retrieve all available products
    public IEnumerable<Product> GetAvailableProducts()
    {
        var results = new List<Product>();
        var timer = Stopwatch.StartNew();

        // must create the connection
        using (var con = _factory.CreateConnection())
        {
            con.ConnectionString = _productsConnectionString;

            // and all that other DB code...
            ...
        }

        timer.Stop();

        if (timer.ElapsedMilliseconds > 5000)
        {
            _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                timer.ElapsedMilliseconds);
        }

        return results;
    }
}

To my eye, this is very ugly.  It violates the Single Responsibility Principle (SRP), which says that a class should only ever have one responsibility, where responsibility is often defined as a reason to change.  
    This class (and in particular this method) has two reasons to change:

    - If the method of retrieving products changes.
    - If the method of logging changes.

    Well, we could "simplify" this using the Decorator Design Pattern (here). If we followed the pattern to the letter, we'd need to create a base decorator that implements the DAO's public interface and forwards to the wrapped instance. So let's assume we break out the ProductDao interface into IProductDao using your refactoring tool of choice (Resharper is great for this). Now, ProductDao will implement IProductDao and get rid of all logging logic:

        public class ProductDao : IProductDao
        {
            // this reverts back to the original version except for the interface added
        }

    And we create the base decorator that also implements the interface and forwards all calls (the methods are virtual so that derived decorators can override just the ones they care about):

        public class ProductDaoDecorator : IProductDao
        {
            private readonly IProductDao _wrappedDao;

            // constructor takes the dao to wrap
            public ProductDaoDecorator(IProductDao wrappedDao)
            {
                _wrappedDao = wrappedDao;
            }

            ...

            // and then all methods just forward their calls
            public virtual IEnumerable<Product> GetAvailableProducts()
            {
                return _wrappedDao.GetAvailableProducts();
            }
        }

    This defines our base decorator; then we can create decorators that add items of interest, and for any methods we don't decorate we'll get the default behavior, which just forwards the call to the wrapped instance in the base decorator:

        public class TimedThresholdProductDaoDecorator : ProductDaoDecorator
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(TimedThresholdProductDaoDecorator));

            public TimedThresholdProductDaoDecorator(IProductDao wrappedDao) :
                base(wrappedDao)
            {
            }

            ...

            public override IEnumerable<Product> GetAvailableProducts()
            {
                var timer = Stopwatch.StartNew();

                var results = base.GetAvailableProducts();

                timer.Stop();

                if (timer.ElapsedMilliseconds > 5000)
                {
                    _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                        timer.ElapsedMilliseconds);
                }

                return results;
            }
        }

    Well, it's a bit better. Now the logging is in its own class, and the database logic is in its own class. But we've essentially multiplied the number of classes: we now have 3 classes and one interface! Now if you want to do that same logging decoration on all your DAOs, imagine the code bloat! Sure, you can simplify and avoid creating the base decorator, or chuck it all and just inherit directly. But regardless, all of these have the problem of tying the logging logic into the code itself.

    Enter the interceptors. Things like this are, to me, a perfect example of when it's good to write an interceptor using your class library of choice. Sure, you could design your own perfectly generic decorator with delegates and all that (a rough sketch of that alternative follows below), but personally I'm a big fan of Castle's Dynamic Proxy (here), which is actually used by many projects including Moq.
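    For comparison, here is what that hand-rolled, delegate-based alternative might look like. This is only a minimal sketch, not code from the original article: the TimedCall name, its parameters, and the logging callback are all illustrative.

        using System;
        using System.Diagnostics;

        // A hand-rolled alternative to a decorator hierarchy: wrap any
        // Func<T> in timing logic via delegates instead of a class per DAO.
        public static class TimedCall
        {
            // Runs the operation, invoking onSlowCall with the elapsed
            // milliseconds if the call exceeds the threshold.
            public static T Run<T>(Func<T> operation, TimeSpan threshold, Action<long> onSlowCall)
            {
                var timer = Stopwatch.StartNew();
                var result = operation();
                timer.Stop();

                if (timer.Elapsed > threshold)
                {
                    onSlowCall(timer.ElapsedMilliseconds);
                }

                return result;
            }
        }

    Usage would be something like:

        var products = TimedCall.Run(
            () => dao.GetAvailableProducts(),
            TimeSpan.FromSeconds(5),
            ms => _log.WarnFormat("Long query took {0} ms", ms));

    This avoids the class explosion, but it still litters every call site with timing concerns, which is exactly the problem interceptors solve.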
    What DynamicProxy allows you to do is intercept calls into any object by wrapping it on the fly with a proxy that intercepts each method and lets you add functionality. Essentially, the code would now look like this using DynamicProxy:

        // Note: I like hiding DynamicProxy behind the scenes so users
        // don't have to explicitly add a reference to Castle's libraries.
        public static class TimeThresholdInterceptor
        {
            // Our logging handle
            private static readonly ILog _log = LogManager.GetLogger(typeof(TimeThresholdInterceptor));

            // Handle to Castle's proxy generator
            private static readonly ProxyGenerator _generator = new ProxyGenerator();

            // generic form for those who prefer it
            public static TInterface Create<TInterface>(object target, TimeSpan threshold)
            {
                return (TInterface)Create(typeof(TInterface), target, threshold);
            }

            // form that uses the type instead
            public static object Create(Type interfaceType, object target, TimeSpan threshold)
            {
                return _generator.CreateInterfaceProxyWithTarget(interfaceType, target,
                    new TimedThreshold(threshold));
            }

            // The interceptor that is created to intercept the interface calls.
            // Hidden as a private inner class so we're not exposing Castle libraries.
            private class TimedThreshold : IInterceptor
            {
                // The threshold as a positive timespan that triggers a log message.
                private readonly TimeSpan _threshold;

                // interceptor constructor
                public TimedThreshold(TimeSpan threshold)
                {
                    _threshold = threshold;
                }

                // Intercept functor for each method invocation
                public void Intercept(IInvocation invocation)
                {
                    // time the method invocation
                    var timer = Stopwatch.StartNew();

                    // the Castle magic that tells the method to go ahead
                    invocation.Proceed();

                    timer.Stop();

                    // check if threshold is exceeded
                    if (timer.Elapsed > _threshold)
                    {
                        _log.WarnFormat("Long execution in {0} took {1} ms",
                            invocation.Method.Name,
                            timer.ElapsedMilliseconds);
                    }
                }
            }
        }

    Yes, it's a bit longer, but notice that:

    - This class ONLY deals with logging long method calls; there are no DAO interface leftovers.
    - This class can be used to time ANY class that has an interface or virtual methods.

    Personally, I like to wrap and hide the usage of DynamicProxy and IInterceptor so that anyone who uses this class doesn't need to know to add a Castle library reference. As far as they are concerned, they're using my interceptor. If I switch to a new library when a better one comes along, they're insulated.

    Now, all we have to do to use this is to tell it to wrap our ProductDao and it does the rest:

        // wraps a new ProductDao with a timing interceptor with a threshold of 5 seconds
        IProductDao dao = TimeThresholdInterceptor.Create<IProductDao>(new ProductDao(), TimeSpan.FromSeconds(5));

    Automatic decoration of all methods! You can even refine the proxy so that it only intercepts certain methods (a sketch of one way to do that follows below). This is ideal for so many things. These are just some of the interceptors we've dreamed up and use:

    - Log parameters and returns of methods to XML for auditing.
    - Block invocations to methods and return a default value (stubbing).
    - Throw an exception if certain methods are called (good for blocking access to deprecated methods).
    - Log the entrance and exit of a method and its duration.
    - Log a message if a method takes more than a given time threshold to execute.

    Whether you use DynamicProxy or some other technology, I hope you see the benefits this adds.
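    As an illustration of restricting which methods get intercepted, here is a sketch using DynamicProxy's IProxyGenerationHook. This is an assumption-laden sketch rather than code from the article: it relies on the IProxyGenerationHook interface and the CreateInterfaceProxyWithTarget overload that accepts ProxyGenerationOptions, and GetMethodsOnlyHook is a made-up name.

        using System;
        using System.Reflection;
        using Castle.DynamicProxy;

        // Hook that tells DynamicProxy which methods to proxy; here only
        // methods whose names start with "Get" will be intercepted.
        public class GetMethodsOnlyHook : IProxyGenerationHook
        {
            public bool ShouldInterceptMethod(Type type, MethodInfo methodInfo)
            {
                return methodInfo.Name.StartsWith("Get");
            }

            public void NonProxyableMemberNotification(Type type, MemberInfo memberInfo)
            {
                // no-op: non-proxyable members are simply skipped
            }

            public void MethodsInspected()
            {
                // no-op: called once all methods have been inspected
            }
        }

    You would then pass the hook in via new ProxyGenerationOptions(new GetMethodsOnlyHook()) to the CreateInterfaceProxyWithTarget overload that accepts options; any method that doesn't match goes straight to the target without interception.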
    Does it completely eliminate all need for the Decorator pattern? No, there may still be cases where you want to decorate a particular class with functionality that doesn't apply to the world at large. But for all those cases where you are using Decorator to add functionality that's truly generic, I strongly suggest you give this a try!
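    As a follow-on, many teams let an IoC container apply interceptors so that callers never construct proxies by hand. The sketch below is speculative and assumes Castle Windsor's fluent registration API; it also assumes TimedThreshold has been made a public class with a parameterless constructor, unlike the private inner class above.

        using Castle.DynamicProxy;
        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        // Registers the timing interceptor and the DAO so that every
        // resolved IProductDao comes back already wrapped in the proxy.
        public static class ContainerSetup
        {
            public static IWindsorContainer Build()
            {
                var container = new WindsorContainer();

                container.Register(
                    Component.For<IInterceptor>()
                             .ImplementedBy<TimedThreshold>()
                             .Named("timing"),
                    Component.For<IProductDao>()
                             .ImplementedBy<ProductDao>()
                             .Interceptors(InterceptorReference.ForName("timing"))
                             .Anywhere);

                return container;
            }
        }

    With that in place, var dao = container.Resolve<IProductDao>(); hands back a timed proxy, and the timing policy lives entirely in configuration.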

    Read the article

  • What micro web-framework has the lowest overhead but includes templating

    - by Simon Martin
    I want to rewrite a simple, small (10 page) website, and besides a contact form it could be written in pure HTML. It is currently built with classic ASP and Dreamweaver templates. The reason I'm not simply writing 10 HTML pages is that I want to keep the layout all in one place, so I would need either includes or a master page. I don't want to use Dreamweaver templates, or batch processing (like org-mode), because I want to be able to edit using Notepad (or Visual Studio), since occasionally I might need to edit a file on the server (Go Daddy's IIS admin interface will let me edit text). I don't want to use ASP.NET MVC or WebForms (which I use in my day job) because I don't need all the overhead they bring when essentially I'm serving up 9 static files, 1 contact form and 1 list of clubs (that I aim to filter with jQuery). The shared hosting package I have on Go Daddy also seems to take a long time to spin up when serving aspx files.

    Currently the clubs page is driven from an MS SQL database that I try to keep up to date by manually checking the dojo locator on the main HQ pages and editing the entries myself, which is again way over the top. I aim to get a text file with the club details (probably in JSON or XML format) and use that as the source for the clubs page. There will need to be a bit of programming for this, as the HQ site is unable to provide an extract / feed, so something will have to scrape the site periodically to update my clubs persistence file. I'd like that to be automated, but I'm happy to have it triggered on a visit to the clubs page so I don't need to worry about scheduling a job. I would probably have a separate process that updates the persistence file and has nothing to do with the rest of the site (a sketch of such a process appears after the requirements below).

    Ideally I'd like to use Mercurial (or git) to publish. I know Bitbucket (and GitHub) both serve static-page sites, so they wouldn't work in this scenario (dynamic pages and a contact form), but that's the model I'd like to use if there is such a thing.

    My requirements are:

    - A simple templating system: one place to define headers, footers, menus etc. that can be edited using just Notepad.
    - A very minimal / lightweight framework; I don't need a monster for 10 pages.
    - It must run either on IIS7 (shared Go Daddy Windows hosting) or another free host.
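    A minimal sketch of the separate scrape-and-persist process described above. Everything here is a placeholder assumption rather than something from the question: the HQ locator URL, the output path, and the parsing stub would all need to be filled in against the real HQ pages.

        using System;
        using System.IO;
        using System.Net;

        // A stand-alone job that scrapes the HQ dojo locator and rewrites
        // the local clubs persistence file that the clubs page reads.
        public class ClubScraper
        {
            // Hypothetical URL and output path; replace with the real ones.
            private const string HqLocatorUrl = "http://example.org/dojo-locator";
            private const string OutputPath = @"App_Data\clubs.json";

            public static void Main()
            {
                using (var client = new WebClient())
                {
                    string html = client.DownloadString(HqLocatorUrl);

                    // Parsing is stubbed out; a real version might use
                    // something like the HtmlAgilityPack to pull club
                    // names and addresses out of the locator markup.
                    string json = ExtractClubsAsJson(html);

                    File.WriteAllText(OutputPath, json);
                }
            }

            private static string ExtractClubsAsJson(string html)
            {
                // Placeholder: an empty JSON array until the real HQ
                // page structure is known.
                return "[]";
            }
        }

    Run on a schedule or triggered from the clubs page, this keeps the scraping completely separate from serving the site, as the question suggests.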

    Read the article

  • Cannot mount Android phone in Ubuntu and sync with Banshee

    - by Brett Alton
    I can't get my LG Optimus One to sync with Banshee. I read somewhere that the root needs to have an empty file called '.is_audio_player'. I did that and it still doesn't mount. I ran dmesg, however, and it appears that the card is unmounting before I even have a chance to run Banshee.

        [ 7250.321359] usb 1-1.4: new high speed USB device using ehci_hcd and address 10
        [ 7250.444795] scsi12 : usb-storage 1-1.4:1.0
        [ 7251.567946] scsi 12:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0
        [ 7251.568839] sd 12:0:0:0: Attached scsi generic sg3 type 0
        [ 7252.232433] sd 12:0:0:0: [sdc] 15564800 512-byte logical blocks: (7.96 GB/7.42 GiB)
        [ 7252.233299] sd 12:0:0:0: [sdc] Write Protect is off
        [ 7252.233306] sd 12:0:0:0: [sdc] Mode Sense: 03 00 00 00
        [ 7252.233309] sd 12:0:0:0: [sdc] Assuming drive cache: write through
        [ 7252.235658] sd 12:0:0:0: [sdc] Assuming drive cache: write through
        [ 7252.235666] sdc: sdc1
        [ 7252.239132] sd 12:0:0:0: [sdc] Assuming drive cache: write through
        [ 7252.239140] sd 12:0:0:0: [sdc] Attached SCSI removable disk
        [ 7272.573437] usb 1-1.4: USB disconnect, address 10

    Suggestions?

    Read the article

< Previous Page | 247 248 249 250 251 252 253 254 255 256 257 258  | Next Page >