Search Results

Search found 27521 results on 1101 pages for '2012 memory manager kb'.


  • SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS

    - by pinaldave
    SQL Server Management Studio 2008 is a wonderful tool with many different features. Many times, an average user does not use them simply because he or she is not aware of these features. Today, we will learn one such feature. SSMS ships with many built-in performance and activity reports, but we do not use them to their full potential. Let us see how we can access these standard reports: Connect to the SQL Server node >> Right-click on it >> Go to Reports >> Click on Standard Reports >> Pick any report. You can see that many of the reports an average user needs right away are available there. Let me list all the reports available:
      Server Dashboard
      Configuration Changes History
      Schema Changes History
      Scheduler Health
      Memory Consumption
      Activity – All Blocking Transactions
      Activity – All Cursors
      Activity – All Sessions
      Activity – Top Sessions
      Activity – Dormant Sessions
      Activity – Top Connections
      Top Transactions by Age
      Top Transactions by Blocked Transactions Count
      Top Transactions by Locks Count
      Performance – Batch Execution Statistics
      Performance – Object Execution Statistics
      Performance – Top Queries by Average CPU Time
      Performance – Top Queries by Average IO
      Performance – Top Queries by Total CPU Time
      Performance – Top Queries by Total IO
      Service Broker Statistics
      Transaction Log Shipping Status
    In fact, when you look at the above list, it is fairly clear that these are well-thought-out, commonly needed reports that are available out of the box in SQL Server 2008. Let us run a couple of them and observe their results: Performance – Top Queries by Total CPU Time, and Memory Consumption. There are options for custom reports as well, which we can configure; we will learn about them in some other post. Additionally, you can right-click on a report and export it to Excel or PDF. I think this tool can really help those who are just looking for some quick details. Do any of you use this feature? Does it have limitations, or would you like to see more features? Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
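    Under the hood, these standard reports are essentially queries against the server's dynamic management views, so similar data can be pulled programmatically. The following is only a rough sketch (not part of the original post; the ODBC driver name and connection string are assumptions you would adjust) of how the "Top Queries by Total CPU Time" report can be approximated from sys.dm_exec_query_stats:

      # Sketch: approximate SSMS's "Top Queries by Total CPU Time" report by
      # querying the DMVs directly. Assumes pyodbc and a reachable SQL Server.
      import pyodbc

      CONN_STR = (
          "DRIVER={ODBC Driver 17 for SQL Server};"   # assumed driver name
          "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
      )

      QUERY = """
      SELECT TOP 10
             qs.total_worker_time AS total_cpu_time_us,
             qs.execution_count,
             SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                       ((CASE qs.statement_end_offset
                             WHEN -1 THEN DATALENGTH(st.text)
                             ELSE qs.statement_end_offset
                         END - qs.statement_start_offset) / 2) + 1) AS query_text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_worker_time DESC;
      """

      with pyodbc.connect(CONN_STR) as conn:
          # Each row: cumulative CPU time (microseconds), execution count, statement text.
          for cpu_us, execs, text in conn.cursor().execute(QUERY):
              print(f"{cpu_us:>14} us  {execs:>10} execs  {text[:80]!r}")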


  • Apache segfault glibc segfault

    - by tester
    I keep getting (about every 5-6 hours) this segfault in apache: [Tue Jun 26 12:43:10 2012] [notice] child pid 26810 exit signal Aborted (6) *** glibc detected *** /usr/sbin/apache2: free(): invalid pointer: 0xb68c2628 *** ======= Backtrace: ========= /lib/i386-linux-gnu/libc.so.6(+0x6ff22)[0xb75aef22] /lib/i386-linux-gnu/libc.so.6(+0x70bc2)[0xb75afbc2] /lib/i386-linux-gnu/libc.so.6(cfree+0x6d)[0xb75b2cad] /usr/lib/apache2/modules/libphp5.so(destroy_zend_class+0x228)[0xb5d40518] /usr/lib/apache2/modules/libphp5.so(zend_hash_clean+0x77)[0xb5d58957] /usr/lib/php5/220100525+lfs/apc.so(apc_interned_strings_shutdown+0x32)[0xb64930b2] /usr/lib/apache2/modules/libphp5.so(+0x318ff0)[0xb5d56ff0] /usr/lib/apache2/modules/libphp5.so(zend_hash_graceful_reverse_destroy+0x27)[0xb5d58a67] /usr/lib/apache2/modules/libphp5.so(zend_destroy_modules+0x3c)[0xb5d506cc] /usr/lib/apache2/modules/libphp5.so(+0x30c743)[0xb5d4a743] /usr/lib/apache2/modules/libphp5.so(php_module_shutdown+0x42)[0xb5ce5172] /usr/lib/apache2/modules/libphp5.so(php_module_shutdown_wrapper+0x17)[0xb5ce5257] /usr/lib/apache2/modules/libphp5.so(+0x3bebe1)[0xb5dfcbe1] /usr/lib/libapr-1.so.0(+0x19846)[0xb76f2846] /usr/lib/libapr-1.so.0(apr_pool_destroy+0x52)[0xb76f19ec] /usr/sbin/apache2(+0x4ccee)[0xb77eccee] ======= Memory map: ======== b2e18000-b2e2c000 rw-s 00000000 00:04 8841030 /dev/zero (deleted) b2e2c000-b2eaa000 rw-s 00000000 00:04 8841029 /dev/zero (deleted) b2eaa000-b2eab000 ---p 00000000 00:00 0 b2eab000-b36ab000 rw-p 00000000 00:00 0 b5900000-b5921000 rw-p 00000000 00:00 0 b5921000-b5a00000 ---p 00000000 00:00 0 b5a3e000-b60bd000 r-xp 00000000 ca:00 44137 /usr/lib/apache2/modules/libphp5.so b60bd000-b611e000 r--p 0067f000 ca:00 44137 /usr/lib/apache2/modules/libphp5.so b611e000-b6123000 rw-p 006e0000 ca:00 44137 /usr/lib/apache2/modules/libphp5.so b6123000-b6142000 rw-p 00000000 00:00 0 b6142000-b6147000 r-xp 00000000 ca:00 24570 /lib/i386-linux-gnu/libnss_dns-2.13.so b6147000-b6148000 r--p 00004000 ca:00 24570 /lib/i386-linux-gnu/libnss_dns-2.13.so b6148000-b6149000 rw-p 00005000 ca:00 24570 /lib/i386-linux-gnu/libnss_dns-2.13.so b6149000-b6175000 rw-p 00000000 00:00 0 b6175000-b6180000 r-xp 00000000 ca:00 24572 /lib/i386-linux-gnu/libnss_files-2.13.so b6180000-b6181000 r--p 0000a000 ca:00 24572 /lib/i386-linux-gnu/libnss_files-2.13.so b6181000-b6182000 rw-p 0000b000 ca:00 24572 /lib/i386-linux-gnu/libnss_files-2.13.so b6182000-b618c000 r-xp 00000000 ca:00 24576 /lib/i386-linux-gnu/libnss_nis-2.13.so b618c000-b618d000 r--p 00009000 ca:00 24576 /lib/i386-linux-gnu/libnss_nis-2.13.so b618d000-b618e000 rw-p 0000a000 ca:00 24576 /lib/i386-linux-gnu/libnss_nis-2.13.so b618e000-b6196000 r-xp 00000000 ca:00 24562 /lib/i386-linux-gnu/libnss_compat-2.13.so b6196000-b6197000 r--p 00007000 ca:00 24562 /lib/i386-linux-gnu/libnss_compat-2.13.so b6197000-b6198000 rw-p 00008000 ca:00 24562 /lib/i386-linux-gnu/libnss_compat-2.13.so b6198000-b6270000 rw-p 00000000 00:00 0 b6270000-b6274000 rw-p 00000000 00:00 0 b6468000-b6474000 rw-p 00000000 00:00 0 b6475000-b6479000 rw-p 00000000 00:00 0 b6479000-b649a000 r-xp 00000000 ca:00 65670 /usr/lib/php5/220100525+lfs/apc.so b649a000-b649b000 r--p 00021000 ca:00 65670 /usr/lib/php5/220100525+lfs/apc.so b649b000-b649c000 rw-p 00022000 ca:00 65670 /usr/lib/php5/220100525+lfs/apc.so b649c000-b64a1000 rw-p 00000000 00:00 0 b64a1000-b64a6000 rw-p 00000000 00:00 0 b64a7000-b64aa000 rw-p 00000000 00:00 0 b64aa000-b64af000 rw-p 00000000 00:00 0 b64b0000-b64b3000 rw-p 00000000 00:00 0 b64bf000-b64c4000 rw-p 
00000000 00:00 0 b64c4000-b64c9000 rw-p 00000000 00:00 0 b64c9000-b64cc000 rw-p 00000000 00:00 0 b64cd000-b64cf000 rw-p 00000000 00:00 0 b64ea000-b64fd000 r-xp 00000000 ca:00 24598 /lib/i386-linux-gnu/libresolv-2.13.so b64fd000-b64fe000 r--p 00012000 ca:00 24598 /lib/i386-linux-gnu/libresolv-2.13.so b64fe000-b64ff000 rw-p 00013000 ca:00 24598 /lib/i386-linux-gnu/libresolv-2.13.so b64ff000-b6501000 rw-p 00000000 00:00 0 b650e000-b652a000 r-xp 00000000 ca:00 22450 /lib/i386-linux-gnu/libgcc_s.so.1 b652a000-b652b000 r--p 0001b000 ca:00 22450 /lib/i386-linux-gnu/libgcc_s.so.1 b652b000-b652c000 rw-p 0001c000 ca:00 22450 /lib/i386-linux-gnu/libgcc_s.so.1 b652c000-b6534000 rw-p 00000000 00:00 0 b65dd000-b65df000 rw-p 00000000 00:00 0 b67ad000-b67c2000 r-xp 00000000 ca:00 22063 /lib/i386-linux-gnu/libnsl-2.13.so b67c2000-b67c3000 r--p 00015000 ca:00 22063 /lib/i386-linux-gnu/libnsl-2.13.so b67c3000-b67c4000 rw-p 00016000 ca:00 22063 /lib/i386-linux-gnu/libnsl-2.13.so b67c4000-b67c6000 rw-p 00000000 00:00 0 b67c6000-b67ee000 r-xp 00000000 ca:00 21904 /lib/i386-linux-gnu/libm-2.13.so b67ee000-b67ef000 r--p 00028000 ca:00 21904 /lib/i386-linux-gnu/libm-2.13.so b67ef000-b67f0000 rw-p 00029000 ca:00 21904 /lib/i386-linux-gnu/libm-2.13.so b67f0000-b67f7000 r-xp 00000000 ca:00 24600 /lib/i386-linux-gnu/librt-2.13.so b67f7000-b67f8000 r--p 00006000 ca:00 24600 /lib/i386-linux-gnu/librt-2.13.so b67f8000-b67f9000 rw-p 00007000 ca:00 24600 /lib/i386-linux-gnu/librt-2.13.so b6886000-b69af000 rw-p 00000000 00:00 0 b69af000-b6b3c000 r-xp 00000000 ca:00 23592 /lib/i386-linux-gnu/libcrypto.so.1.0.0 b6b3c000-b6b4a000 r--p 0018d000 ca:00 23592 /lib/i386-linux-gnu/libcrypto.so.1.0.0 b6b4a000-b6b50000 rw-p 0019b000 ca:00 23592 /lib/i386-linux-gnu/libcrypto.so.1.0.0 b6b50000-b6b53000 rw-p 00000000 00:00 0 b6b53000-b6b9b000 r-xp 00000000 ca:00 23621 /lib/i386-linux-gnu/libssl.so.1.0.0 b6b9b000-b6b9d000 r--p 00047000 ca:00 23621 /lib/i386-linux-gnu/libssl.so.1.0.0 b6b9d000-b6ba0000 rw-p 00049000 ca:00 23621 /lib/i386-linux-gnu/libssl.so.1.0.0 b6ba0000-b6c7e000 r-xp 00000000 ca:00 9878 /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16 b6c7e000-b6c7f000 ---p 000de000 ca:00 9878 /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16 b6c7f000-b6c83000 r--p 000de000 ca:00 9878 /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16 b6c83000-b6c84000 rw-p 000e2000 ca:00 9878 /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16 b6c84000-b6c8b000 rw-p 00000000 00:00 0 b6c93000-b6cd4000 rw-p 00000000 00:00 0 b6cd4000-b6ce0000 rw-p 00000000 00:00 0 b6cea000-b6cef000 r-xp 00000000 ca:00 45178 /usr/lib/apache2/modules/mod_status.so b6cef000-b6cf0000 r--p 00004000 ca:00 45178 /usr/lib/apache2/modules/mod_status.so b6cf0000-b6cf1000 rw-p 00005000 ca:00 45178 /usr/lib/apache2/modules/mod_status.so b6cf1000-b6d19000 r-xp 00000000 ca:00 45175 /usr/lib/apache2/modules/mod_ssl.so b6d19000-b6d1a000 ---p 00028000 ca:00 45175 /usr/lib/apache2/modules/mod_ssl.so b6d1a000-b6d1b000 r--p 00028000 ca:00 45175 /usr/lib/apache2/modules/mod_ssl.so b6d1b000-b6d1c000 rw-p 00029000 ca:00 45175 /usr/lib/apache2/modules/mod_ssl.so b6d1c000-b6d1e000 rw-p 00000000 00:00 0 b6d1e000-b6d20000 r-xp 00000000 ca:00 45166 /usr/lib/apache2/modules/mod_setenvif.so b6d20000-b6d21000 r--p 00001000 ca:00 45166 /usr/lib/apache2/modules/mod_setenvif.so b6d21000-b6d22000 rw-p 00002000 ca:00 45166 /usr/lib/apache2/modules/mod_setenvif.so b6d22000-b6d30000 r-xp 00000000 ca:00 45195 /usr/lib/apache2/modules/mod_rewrite.so b6d30000-b6d31000 r--p 0000e000 ca:00 45195 /usr/lib/apache2/modules/mod_rewrite.so 
b6d31000-b6d32000 rw-p 0000f000 ca:00 45195 /usr/lib/apache2/modules/mod_rewrite.so b6d32000-b6d45000 r-xp 00000000 ca:00 45168 /usr/lib/apache2/modules/mod_proxy.so b6d45000-b6d46000 r--p 00012000 ca:00 45168 /usr/lib/apache2/modules/mod_proxy.so b6d46000-b6d47000 rw-p 00013000 ca:00 45168 /usr/lib/apache2/modules/mod_proxy.so b6d47000-b6d4e000 r-xp 00000000 ca:00 9904 /usr/lib/i386-linux-gnu/libkrb5support.so.0.1 b6d4e000-b6d4f000 r--p 00006000 ca:00 9904 /usr/lib/i386-linux-gnu/libkrb5support.so.0.1 b6d4f000-b6d50000 rw-p 00007000 ca:00 9904 /usr/lib/i386-linux-gnu/libkrb5support.so.0.1 b6d50000-b6e97000 r-xp 00000000 ca:00 3416 /usr/lib/libxml2.so.2.7.8 b6e97000-b6e9b000 r--p 00147000 ca:00 3416 /usr/lib/libxml2.so.2.7.8 b6e9b000-b6e9c000 rw-p 0014b000 ca:00 3416 /usr/lib/libxml2.so.2.7.8 b6e9c000-b6e9d000 rw-p 00000000 00:00 0 b6e9d000-b6ec4000 r-xp 00000000 ca:00 12282 /usr/lib/i386-linux-gnu/libk5crypto.so.3.1 b6ec4000-b6ec5000 r--p 00026000 ca:00 12282 /usr/lib/i386-linux-gnu/libk5crypto.so.3.1 b6ec5000-b6ec6000 rw-p 00027000 ca:00 12282 /usr/lib/i386-linux-gnu/libk5crypto.so.3.1 b6ec6000-b6f88000 r-xp 00000000 ca:00 13335 /usr/lib/i386-linux-gnu/libkrb5.so.3.3 b6f88000-b6f8e000 r--p 000c1000 ca:00 13335 /usr/lib/i386-linux-gnu/libkrb5.so.3.3 b6f8e000-b6f8f000 rw-p 000c7000 ca:00 13335 /usr/lib/i386-linux-gnu/libkrb5.so.3.3 b6f8f000-b6fca000 r-xp 00000000 ca:00 9854 /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2.2 b6fca000-b6fcb000 ---p 0003b000 ca:00 9854 /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2.2 b6fcb000-b6fcc000 r--p 0003b000 ca:00 9854 /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2.2 b6fcc000-b6fcd000 rw-p 0003c000 ca:00 9854 /usr/lib/i386-linux-gnu/libgssapi_krb5.so.2.2 b6fcd000-b6fdc000 r-xp 00000000 ca:00 21797 /lib/libbz2.so.1.0.4 b6fdc000-b6fdd000 r--p 0000e000 ca:00 21797 /lib/libbz2.so.1.0.4 b6fdd000-b6fde000 rw-p 0000f000 ca:00 21797 /lib/libbz2.so.1.0.4 b6fde000-b702a000 r-xp 00000000 ca:00 2505 /usr/lib/libqdbm.so.14.13.0 b702a000-b702b000 r--p 0004c000 ca:00 2505 /usr/lib/libqdbm.so.14.13.0 b702b000-b702c000 rw-p 0004d000 ca:00 2505 /usr/lib/libqdbm.so.14.13.0 b702c000-b71aa000 r-xp 00000000 ca:00 10201 /usr/lib/i386-linux-gnu/libdb-4.8.so b71aa000-b71ac000 r--p 0017d000 ca:00 10201 /usr/lib/i386-linux-gnu/libdb-4.8.so b71ac000-b71ad000 rw-p 0017f000 ca:00 10201 /usr/lib/i386-linux-gnu/libdb-4.8.so b71ad000-b71f7000 r-xp 00000000 ca:00 23521 /lib/libssl.so.0.9.8 b71f7000-b71f8000 r--p 0004a000 ca:00 23521 /lib/libssl.so.0.9.8 b71f8000-b71fb000 rw-p 0004b000 ca:00 23521 /lib/libssl.so.0.9.8 b71fb000-b7359000 r-xp 00000000 ca:00 835379 /lib/libcrypto.so.0.9.8 b7359000-b735a000 ---p 0015e000 ca:00 835379 /lib/libcrypto.so.0.9.8 b735a000-b7362000 r--p 0015e000 ca:00 835379 /lib/libcrypto.so.0.9.8 b7362000-b7371000 rw-p 00166000 ca:00 835379 /lib/libcrypto.so.0.9.8 b7371000-b7374000 rw-p 00000000 00:00 0 b7374000-b73ba000 r-xp 00000000 ca:00 2503 /usr/lib/libonig.so.2.0.0 b73ba000-b73bd000 rw-p 00045000 ca:00 2503 /usr/lib/libonig.so.2.0.0 b73be000-b73c0000 rw-p 00000000 00:00 0 b73c0000-b73c7000 r-xp 00000000 ca:00 45171 /usr/lib/apache2/modules/mod_proxy_http.so b73c7000-b73c8000 r--p 00006000 ca:00 45171 /usr/lib/apache2/modules/mod_proxy_http.so b73c8000-b73c9000 rw-p 00007000 ca:00 45171 /usr/lib/apache2/modules/mod_proxy_http.so b73c9000-b73dc000 r-xp 00000000 ca:00 22461 /lib/i386-linux-gnu/libz.so.1.2.3.4 b73dc000-b73dd000 r--p 00012000 ca:00 22461 /lib/i386-linux-gnu/libz.so.1.2.3.4 b73dd000-b73de000 rw-p 00013000 ca:00 22461 /lib/i386-linux-gnu/libz.so.1.2.3.4 
b73de000-b73e3000 rw-p 00000000 00:00 0 b73e3000-b73ea000 r-xp 00000000 ca:00 45188 /usr/lib/apache2/modules/mod_negotiation.so b73ea000-b73eb000 r--p 00006000 ca:00 45188 /usr/lib/apache2/modules/mod_negotiation.so b73eb000-b73ec000 rw-p 00007000 ca:00 45188 /usr/lib/apache2/modules/mod_negotiation.so b73ec000-b73f1000 rw-p 00000000 00:00 0 b73f2000-b73f5000 r-xp 00000000 ca:00 45149 /usr/lib/apache2/modules/mod_reqtimeout.so b73f5000-b73f6000 r--p 00002000 ca:00 45149 /usr/lib/apache2/modules/mod_reqtimeout.so b73f6000-b73f7000 rw-p 00003000 ca:00 45149 /usr/lib/apache2/modules/mod_reqtimeout.so b73f7000-b73fc000 rw-p 00000000 00:00 0 b73fc000-b73fe000 rw-p 00000000 00:00 0 b73fe000-b7400000 r-xp 00000000 ca:00 22437 /lib/i386-linux-gnu/libkeyutils.so.1.3 b7400000-b7401000 r--p 00001000 ca:00 22437 /lib/i386-linux-gnu/libkeyutils.so.1.3 b7401000-b7402000 rw-p 00002000 ca:00 22437 /lib/i386-linux-gnu/libkeyutils.so.1.3 b7402000-b7407000 rw-p 00000000 00:00 0 b7407000-b7409000 r-xp 00000000 ca:00 22344 /lib/i386-linux-gnu/libcom_err.so.2.1 b7409000-b740a000 r--p 00001000 ca:00 22344 /lib/i386-linux-gnu/libcom_err.so.2.1 b740a000-b740b000 rw-p 00002000 ca:00 22344 /lib/i386-linux-gnu/libcom_err.so.2.1 b740b000-b7410000 rw-p 00000000 00:00 0 b7411000-b7413000 rw-p 00000000 00:00 0 b7413000-b7416000 rw-p 00000000 00:00 0 b7416000-b7418000 rw-p 00000000 00:00 0 b7418000-b741c000 r-xp 00000000 ca:00 45176 /usr/lib/apache2/modules/mod_mime.so b741c000-b741d000 r--p 00003000 ca:00 45176 /usr/lib/apache2/modules/mod_mime.so b741d000-b741e000 rw-p 00004000 ca:00 45176 /usr/lib/apache2/modules/mod_mime.so b741e000-b7422000 r-xp 00000000 ca:00 45162 /usr/lib/apache2/modules/mod_headers.so b7422000-b7423000 r--p 00003000 ca:00 45162 /usr/lib/apache2/modules/mod_headers.so b7423000-b7424000 rw-p 00004000 ca:00 45162 /usr/lib/apache2/modules/mod_headers.so b7424000-b7426000 r-xp 00000000 ca:00 45161 /usr/lib/apache2/modules/mod_expires.so b7426000-b7427000 r--p 00001000 ca:00 45161 /usr/lib/apache2/modules/mod_expires.so b7427000-b7428000 rw-p 00002000 ca:00 45161 /usr/lib/apache2/modules/mod_expires.so b7428000-b742a000 r-xp 00000000 ca:00 45189 /usr/lib/apache2/modules/mod_dir.so b742a000-b742b000 r--p 00001000 ca:00 45189 /usr/lib/apache2/modules/mod_dir.so b742b000-b742c000 rw-p 00002000 ca:00 45189 /usr/lib/apache2/modules/mod_dir.so b742c000-b742e000 rw-p 00000000 00:00 0 b742f000-b7430000 r-xp 00000000 ca:00 45158 /usr/lib/apache2/modules/mod_env.so b7430000-b7431000 r--p 00000000 ca:00 45158 /usr/lib/apache2/modules/mod_env.so b7431000-b7432000 rw-p 00001000 ca:00 45158 /usr/lib/apache2/modules/mod_env.so b7432000-b7437000 rw-p 00000000 00:00 0 b7437000-b743c000 r-xp 00000000 ca:00 45155 /usr/lib/apache2/modules/mod_deflate.so b743c000-b743d000 r--p 00004000 ca:00 45155 /usr/lib/apache2/modules/mod_deflate.so b743d000-b743e000 rw-p 00005000 ca:00 45155 /usr/lib/apache2/modules/mod_deflate.so b743e000-b7443000 rw-p 00000000 00:00 0 b7443000-b7448000 r-xp 00000000 ca:00 45184 /usr/lib/apache2/modules/mod_cgi.so b7448000-b7449000 r--p 00004000 ca:00 45184 /usr/lib/apache2/modules/mod_cgi.so b7449000-b744a000 rw-p 00005000 ca:00 45184 /usr/lib/apache2/modules/mod_cgi.so b744a000-b744f000 rw-p 00000000 00:00 0 b744f000-b7457000 r-xp 00000000 ca:00 45179 /usr/lib/apache2/modules/mod_autoindex.so b7457000-b7458000 r--p 00007000 ca:00 45179 /usr/lib/apache2/modules/mod_autoindex.so b7458000-b7459000 rw-p 00008000 ca:00 45179 /usr/lib/apache2/modules/mod_autoindex.so b7459000-b745e000 rw-p 00000000 00:00 
0 b745e000-b745f000 r-xp 00000000 ca:00 45136 /usr/lib/apache2/modules/mod_authz_user.so b745f000-b7460000 r--p 00000000 ca:00 45136 /usr/lib/apache2/modules/mod_authz_user.so b7460000-b7461000 rw-p 00001000 ca:00 45136 /usr/lib/apache2/modules/mod_authz_user.so b7461000-b7466000 rw-p 00000000 00:00 0 b7466000-b7468000 r-xp 00000000 ca:00 45134 /usr/lib/apache2/modules/mod_authz_host.so b7468000-b7469000 r--p 00001000 ca:00 45134 /usr/lib/apache2/modules/mod_authz_host.so b7469000-b746a000 rw-p 00002000 ca:00 45134 /usr/lib/apache2/modules/mod_authz_host.so b746a000-b746f000 rw-p 00000000 00:00 0 b746f000-b7471000 r-xp 00000000 ca:00 45135 /usr/lib/apache2/modules/mod_authz_groupfile.so b7471000-b7472000 r--p 00001000 ca:00 45135 /usr/lib/apache2/modules/mod_authz_groupfile.so b7472000-b7473000 rw-p 00002000 ca:00 45135 /usr/lib/apache2/modules/mod_authz_groupfile.so b7473000-b7478000 rw-p 00000000 00:00 0 b7478000-b7479000 r-xp 00000000 ca:00 45140 /usr/lib/apache2/modules/mod_authz_default.so b7479000-b747a000 r--p 00000000 ca:00 45140 /usr/lib/apache2/modules/mod_authz_default.so b747a000-b747b000 rw-p 00001000 ca:00 45140 /usr/lib/apache2/modules/mod_authz_default.so b747b000-b7480000 rw-p 00000000 00:00 0 b7480000-b7481000 r-xp 00000000 ca:00 44436 /usr/lib/apache2/modules/mod_authn_file.so b7481000-b7482000 ---p 00001000 ca:00 44436 /usr/lib/apache2/modules/mod_authn_file.so b7482000-b7483000 r--p 00001000 ca:00 44436 /usr/lib/apache2/modules/mod_authn_file.so b7483000-b7484000 rw-p 00002000 ca:00 44436 /usr/lib/apache2/modules/mod_authn_file.so b7484000-b7489000 rw-p 00000000 00:00 0 b7489000-b748b000 r-xp 00000000 ca:00 45141 /usr/lib/apache2/modules/mod_auth_basic.so b748b000-b748c000 r--p 00001000 ca:00 45141 /usr/lib/apache2/modules/mod_auth_basic.so b748c000-b748d000 rw-p 00002000 ca:00 45141 /usr/lib/apache2/modules/mod_auth_basic.so b748d000-b7492000 rw-p 00000000 00:00 0 b7492000-b7495000 r-xp 00000000 ca:00 45194 /usr/lib/apache2/modules/mod_alias.so b7495000-b7496000 r--p 00002000 ca:00 45194 /usr/lib/apache2/modules/mod_alias.so b7496000-b7497000 rw-p 00003000 ca:00 45194 /usr/lib/apache2/modules/mod_alias.so b7497000-b74d8000 rw-p 00000000 00:00 0 b74d8000-b74db000 r-xp 00000000 ca:00 21902 /lib/i386-linux-gnu/libdl-2.13.so b74db000-b74dc000 r--p 00002000 ca:00 21902 /lib/i386-linux-gnu/libdl-2.13.so b74dc000-b74dd000 rw-p 00003000 ca:00 21902 /lib/i386-linux-gnu/libdl-2.13.so b74dd000-b74de000 rw-p 00000000 00:00 0 b74de000-b74e2000 r-xp 00000000 ca:00 22401 /lib/i386-linux-gnu/libuuid.so.1.3.0 b74e2000-b74e3000 r--p 00003000 ca:00 22401 /lib/i386-linux-gnu/libuuid.so.1.3.0 b74e3000-b74e4000 rw-p 00004000 ca:00 22401 /lib/i386-linux-gnu/libuuid.so.1.3.0 b74e4000-b750a000 r-xp 00000000 ca:00 22420 /lib/i386-linux-gnu/libexpat.so.1.5.2 b750a000-b750b000 ---p 00026000 ca:00 22420 /lib/i386-linux-gnu/libexpat.so.1.5.2 b750b000-b750d000 r--p 00026000 ca:00 22420 /lib/i386-linux-gnu/libexpat.so.1.5.2 b750d000-b750e000 rw-p 00028000 ca:00 22420 /lib/i386-linux-gnu/libexpat.so.1.5.2 b750e000-b7516000 r-xp 00000000 ca:00 21889 /lib/i386-linux-gnu/libcrypt-2.13.so b7516000-b7517000 r--p 00007000 ca:00 21889 /lib/i386-linux-gnu/libcrypt-2.13.so b7517000-b7518000 rw-p 00008000 ca:00 21889 /lib/i386-linux-gnu/libcrypt-2.13.so b7518000-b753f000 rw-p 00000000 00:00 0 b753f000-b76b7000 r-xp 00000000 ca:00 21864 /lib/i386-linux-gnu/libc-2.13.so b76b7000-b76b9000 r--p 00178000 ca:00 21864 /lib/i386-linux-gnu/libc-2.13.so b76b9000-b76ba000 rw-p 0017a000 ca:00 21864 
/lib/i386-linux-gnu/libc-2.13.so b76ba000-b76bd000 rw-p 00000000 00:00 0 b76bd000-b76d4000 r-xp 00000000 ca:00 24594 /lib/i386-linux-gnu/libpthread-2.13.so b76d4000-b76d5000 r--p 00016000 ca:00 24594 /lib/i386-linux-gnu/libpthread-2.13.so b76d5000-b76d6000 rw-p 00017000 ca:00 24594 /lib/i386-linux-gnu/libpthread-2.13.so b76d6000-b76d9000 rw-p 00000000 00:00 0 b76d9000-b770c000 r-xp 00000000 ca:00 6233 /usr/lib/libapr-1.so.0.4.5 b770c000-b770d000 r--p 00032000 ca:00 6233 /usr/lib/libapr-1.so.0.4.5 b770d000-b770e000 rw-p 00033000 ca:00 6233 /usr/lib/libapr-1.so.0.4.5 b770e000-b772f000 r-xp 00000000 ca:00 6236 /usr/lib/libaprutil-1.so.0.3.12 b772f000-b7730000 r--p 00020000 ca:00 6236 /usr/lib/libaprutil-1.so.0.3.12 b7730000-b7731000 rw-p 00021000 ca:00 6236 /usr/lib/libaprutil-1.so.0.3.12 b7731000-b776e000 r-xp 00000000 ca:00 22336 /lib/i386-linux-gnu/libpcre.so.3.12.1 b776e000-b776f000 r--p 0003c000 ca:00 22336 /lib/i386-linux-gnu/libpcre.so.3.12.1 b776f000-b7770000 rw-p 0003d000 ca:00 22336 /lib/i386-linux-gnu/libpcre.so.3.12.1 b7770000-b7780000 rw-p 00000000 00:00 0 b7780000-b779e000 r-xp 00000000 ca:00 21844 /lib/i386-linux-gnu/ld-2.13.so b779e000-b779f000 r--p 0001d000 ca:00 21844 /lib/i386-linux-gnu/ld-2.13.so b779f000-b77a0000 rw-p 0001e000 ca:00 21844 /lib/i386-linux-gnu/ld-2.13.so b77a0000-b7803000 r-xp 00000000 ca:00 44432 /usr/lib/apache2/mpm-prefork/apache2 b7803000-b7805000 r--p 00063000 ca:00 44432 /usr/lib/apache2/mpm-prefork/apache2 b7805000-b7807000 rw-p 00065000 ca:00 44432 /usr/lib/apache2/mpm-prefork/apache2 b7807000-b780a000 rw-p 00000000 00:00 0 b7a17000-b7a55000 rw-p 00000000 00:00 0 [heap] b7a55000-b7b9f000 rw-p 00000000 00:00 0 [heap] b7b9f000-b7c1a000 rw-p 00000000 00:00 0 [heap] bf9a1000-bf9c2000 rw-p 00000000 00:00 0 [stack] f57fe000-f57ff000 r-xp 00000000 00:00 0 [vdso] [Tue Jun 26 13:15:10 2012] [notice] child pid 26840 exit signal Aborted (6) Sometimes it recovers, but sometimes it kills the server. It's unclear to me what glibc is doing to crash.. can anyone decipher what's crashing in this error log?
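    A practical first step in deciphering frames such as /usr/lib/apache2/modules/libphp5.so(+0x318ff0) is to translate the raw backtrace addresses into file offsets inside the matching .so, using the memory map glibc printed, and then hand those offsets to addr2line against a binary with debug symbols. The following is only a rough sketch (the maps.txt file name is an assumption; the sample address is the destroy_zend_class frame from the backtrace above):

      # Sketch: map a backtrace address onto the module and file offset it falls in,
      # using the "Memory map" section glibc prints after the abort. Each map line
      # looks like:
      #   b5a3e000-b60bd000 r-xp 00000000 ca:00 44137 /usr/lib/apache2/modules/libphp5.so
      import re

      MAP_LINE = re.compile(
          r"([0-9a-f]+)-([0-9a-f]+)\s+\S+\s+([0-9a-f]+)\s+\S+\s+\S+\s*(.*)")

      def load_maps(path):
          maps = []
          with open(path) as fh:
              for line in fh:
                  m = MAP_LINE.match(line.strip())
                  if m:
                      start, end, offset, name = m.groups()
                      maps.append((int(start, 16), int(end, 16), int(offset, 16), name))
          return maps

      def resolve(addr, maps):
          """Return (module, file offset) for an absolute address, or None."""
          for start, end, offset, name in maps:
              if start <= addr < end:
                  return name, addr - start + offset
          return None

      maps = load_maps("maps.txt")        # assumed: the map text above saved to a file
      hit = resolve(0xb5d40518, maps)     # address of the destroy_zend_class frame
      if hit:
          module, off = hit
          # Then run: addr2line -e <module> -f -C 0x<offset>  (needs debug symbols)
          print(f"{module} + {off:#x}")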


  • Oracle Insurance Unveils Next Generation of Enterprise Document Automation: Oracle Documaker Enterprise Edition

    - by helen.pitts(at)oracle.com
    Oracle today announced the introduction of Oracle Documaker Enterprise Edition, the next generation of the company's market-leading Enterprise Document Automation (EDA) solution for dynamically creating, managing and delivering adaptive enterprise communications across multiple channels. "Insurers and other organizations need enterprise document automation that puts the power to manage the complete document lifecycle in the hands of the business user," said Srini Venkatasanthanam, vice president, Product Strategy, Oracle Insurance, in the press release. "Built with features such as rules-based configurability and interactive processing, Oracle Documaker Enterprise Edition makes possible an adaptive approach to enterprise document automation - documents when, where and in the form they're needed." Key enhancements in Oracle Documaker Enterprise Edition include:
    Documaker Interactive, the newly renamed and redesigned Web-based iDocumaker module. Documaker Interactive enables users to quickly and interactively create and assemble compliant communications such as policy and claims correspondence directly from their desktops. Users benefit from built-in accelerators and rules-based configurability, pre-configured content, as well as embedded workflow leveraging Oracle BPEL Process Manager.
    Documaker Factory, which helps enterprises reduce cost and improve operational efficiency through better management of their enterprise publishing operations. Dashboards, analytics, reporting and an administrative console provide insurers with greater insight and centralized control over document production, allowing them to better adapt their resources based on business demands.
    Other enhancements include: enhanced business user empowerment; additional multi-language localization capabilities; and benefits from the use of powerful Oracle technologies such as the Oracle Application Development Framework for all interfaces and Oracle Universal Content Management (Oracle UCM) for enterprise content management.
    Drive Competitive Advantage and Growth: Deb Smallwood, founder of SMA (Strategy Meets Action), a leading insurance industry analyst and consulting firm, and co-author of 3CM in Insurance: Customer Communications and Content Management, published last month, noted in the press release that "maximum value can be gained from investments when Enterprise Document Automation (EDA) is viewed holistically and all forms of communication and all types of information are integrated across the entire enterprise." "Insurers that choose an approach that takes all communications, both structured and unstructured data, coming into the company from a wide range of channels, and then create seamless flows of information will have a real competitive advantage," Smallwood said. "This capability will soon become essential for selling, servicing, and ultimately driving growth through new business and retention."
    Learn More: Click here to watch a short flash demo that demonstrates the real business value offered by Oracle Documaker Enterprise Edition. You can also see how an insurance company can use Oracle Documaker Enterprise Edition to dynamically create, manage and publish adaptive enterprise content throughout the insurance business lifecycle for delivery across multiple channels by visiting Alamere Insurance, a fictional model insurance company created by Oracle to showcase how Oracle applications can be leveraged within the insurance enterprise. 
Meet Our Newest Oracle Insurance Blogger: I'm pleased to introduce our newest Oracle Insurance blogger, Susanne Hale. Susanne, who manages product marketing for Oracle Insurance EDA solutions, will be sharing insights about this topic along with examples of how our customers are transforming their enterprise communications using Oracle Documaker Enterprise Edition in future Oracle Insurance blog entries. Helen Pitts is senior product marketing manager for Oracle Insurance.


  • Agile Development

    - by James Oloo Onyango
    A lot of literature has been, and is being, written about agile development and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, I have found Robert C. Martin's "A Satire Of Two Companies" to be both the most concise and thorough! Enjoy the read! Rufus Inc Project Kick Off Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting. "We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market. "We must have this new project up and working by fourth quarter, October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.   His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?" You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts.   "Sir, we can't tell you how long the analysis will take until we have some requirements." "The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?" Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?" "Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mms are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere.   "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?" Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months."   "I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle. BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working. 
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your crossfunctional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."   Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"   You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.   Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.   After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describes a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.   At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159- page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.   You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return? 
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.   Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.   While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked.   The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.   On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3. **** And then, a miracle happens.   **** On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."   As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer. They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)   As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawedno, useless; no, worse than useless. 
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it."   So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.     The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrinkwrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week." Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagrams are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association. Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."   You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it." Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents. 
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more, preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"   You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements. Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek.   Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing, the marketing folks are complaining that the product isn't anything like they specified, and the liquor store won't accept your credit card anymore. Something has to give.    On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1."   As he leaves the conference room, he is heard to mutter: "Empowerment, bah!" * * * Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done?" he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.   
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.   In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.     Rupert Industries: Project Alpha   Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."   "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."   Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.   * * *   "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second" you respond. "I need to be clear about this." 
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.   During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.   At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."   A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.   The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.   As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.   At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.   * * *   The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfectweek for every 3 real-weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."   Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. 
I know the drill by now." Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."   You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."   "So," says Jane, "can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."   So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.   Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation," says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."   "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."   One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is better to let the developers volunteer for tasks than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.   The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks.   Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.   "I was worried that that might happen," you say. "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.   So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper," says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.   The negotiation is not painless. 
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."
In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels it can achieve in the next 3 weeks.
And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things have quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements, Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour.
"I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."
* * *
The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!"
"Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replied Elaine, "It doesn't do anything at all; there is no code."
"So, consider our task; can you think of something the code should do?" "Sure," Elaine said with youthful assurance, "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughed Elaine. "I think we'd have to get the database object from some registry and call the Connect() method." "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hey!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.
As development proceeded, the developers changed partners once or twice a day. 
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.
Whenever a pair finished something significant, whether a whole task or simply an important part of a task, they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.
The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.
Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"
You grimace. You'd been waiting for a good time to mention this to Jane, but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."
Jane eyes you with suspicion. The stress of last Monday's negotiations had still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."
You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."
"Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.
Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."
Painful though the last iteration was, it had calibrated your velocity numbers. 
The next iteration went much better. Not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.   By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.   The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.   The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.   Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.   Indeed, things are looking up!

    Read the article

  • HTC to launch Windows 7 phone in India

    - by samsudeen
It is good news for Indian smart phone users, as the wait for Windows 7 mobile is finally over. The Taiwanese mobile giant HTC is all set to release its Windows 7 based smartphone series in India from January. HTC HD7 and HTC Mozart, the two smart phones running the Windows 7 OS, started appearing on the HTC India website last week. Though Flipkart (an Indian online e-commerce website) started taking pre-orders for the HTC HD7 a month ago, the buzz picked up last week after the introduction of the HTC Mozart. The complete feature comparison between the two smart phones is given below.

Feature Comparison (HTC Mozart / HTC HD7)
OS: Microsoft Windows 7 / Microsoft Windows 7
Processor: Qualcomm Snapdragon QSD 8250 1 GHz CPU / Qualcomm Snapdragon QSD 8250 1 GHz CPU
Camera: 8 megapixel with Xenon flash / 5 MP, 2592 x 1944 pixels, autofocus, dual-LED flash
Display: 480 x 800 pixels, 3.7 inches / 480 x 800 pixels, 4.3 inches
Body: 11.9 mm thick, weighs 130 g / 11.2 mm thick, weighs 162 g
Bluetooth: 2.1 / 2.1
Internal storage: 8 GB / 8 GB
Memory: 512 MB ROM and 576 MB RAM / 512 MB ROM and 576 MB RAM
3G: HSDPA 7.2 Mbps and HSUPA 2 Mbps / HSDPA 7.2 Mbps and HSUPA 2 Mbps
Wi-Fi: 802.11 b/g/n / 802.11 b/g/n
Connector: Micro-USB / Micro-USB
Audio: 3.5 mm audio jack / 3.5 mm audio jack
GPS: GPS antenna / GPS antenna
Battery: Standard battery, Li-Po 1300 mAh / Standard battery, Li-Ion 1230 mAh
Standby: 360 h (2G), up to 435 h (3G) / Up to 310 h (2G), up to 320 h (3G)
Talk time: Up to 6 h 40 min (2G), 5 h 30 min (3G) / Up to 6 h 20 min (2G), 5 h 20 min (3G)

Estimated Price: The HTC HD7 is priced between INR 27,855 and 32,000. Though the price of the HTC Mozart has not been officially announced, it is estimated to be around INR 30,000.

Where to Buy: The Windows 7 phones are not yet available directly in stores, but most of the leading mobile stores are taking pre-orders. Some of the online store links are given below. Flipkart UniverCell

This article, titled HTC to launch Windows 7 phone in India, was originally published at Tech Dreams.

    Read the article

  • SQL SERVER – Size of Index Table for Each Index – Solution 2

    - by pinaldave
Earlier I ran a puzzle where I asked a question about the size of the index table for each index in a database, over here: SQL SERVER – Size of Index Table – A Puzzle to Find Index Size for Each Index on Table. I received a good number of answers and blogged about them here: SQL SERVER – Size of Index Table for Each Index – Solution. As a comment on that blog I received another very interesting suggestion, and it provides a near-accurate answer to the original question. Many thanks to Rama Mathanmohan for providing this wonderful solution.

SELECT OBJECT_NAME(i.OBJECT_ID) AS TableName,
    i.name AS IndexName,
    i.index_id AS IndexID,
    8 * SUM(a.used_pages) AS 'Indexsize(KB)'
FROM sys.indexes AS i
JOIN sys.partitions AS p
    ON p.OBJECT_ID = i.OBJECT_ID
    AND p.index_id = i.index_id
JOIN sys.allocation_units AS a
    ON a.container_id = p.partition_id
GROUP BY i.OBJECT_ID, i.index_id, i.name
ORDER BY OBJECT_NAME(i.OBJECT_ID), i.index_id

Let me know if you have any better script for the same. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Data Storage, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Dynamic Unpivot : SSIS Nugget

    - by jamiet
A question on the SSIS forum earlier today asked: "I need to dynamically unpivot some set of columns in my source file. Every month there is one new column and its set of values. I want to unpivot it without editing my SSIS packages that are already deployed." Let's be clear about what we mean by Unpivot. It is a normalisation technique that basically converts columns into rows. By way of example, it converts something like this:

AccountCode  Jan     Feb     Mar
AC1          100.00  150.00  125.00
AC2          45.00   75.50   90.00

into something like this:

AccountCode  Month  Amount
AC1          Jan    100.00
AC1          Feb    150.00
AC1          Mar    125.00
AC2          Jan    45.00
AC2          Feb    75.50
AC2          Mar    90.00

The Unpivot transformation in SSIS is perfectly capable of carrying out the operation defined in this example; however, in the case outlined in the aforementioned forum thread the problem was a little bit different. I interpreted it to mean that the number of columns could change, and in that scenario the Unpivot transformation (and indeed the SSIS dataflow in general) is rendered useless because it expects that the number of columns will not change from what is specified at design time. There is a workaround, however. Assuming all of the columns that CAN exist will appear at the end of the rows, we can (1) import all of the columns in the file as just a single column, (2) use a script component to loop over all the values in that "column", and (3) output each one as a row all of its own. Let's go over that in a bit more detail.

I've prepared a data file containing the data that we want to unpivot; it shows some customers and their mythical shopping lists (it has column names in the first row). We use a Flat File Connection Manager to specify the format of our data file to SSIS, and a Flat File Source Adapter to put it into the dataflow (no need for a screenshot of that one – it's very basic). Notice that the values that we want to unpivot all exist in a column called [Groceries]. Now onto the script component, where the real work goes on, although the code is pretty simple; a sketch of the kind of logic involved is shown after this post.

Running the package with some data viewers attached shows that we have successfully pulled out all of the values into a row of their own, thus accomplishing the dynamic unpivot that the forum poster was after. If you want to run the demo for yourself then I have uploaded the demo package and source file to my SkyDrive: http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100529/Dynamic%20Unpivot.zip Simply extract the two files into a folder, make sure the Connection Manager is pointing to the file, and execute! Hope this is useful. @Jamiet
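Since the script component itself only appears as a screenshot in the original post, here is a minimal sketch of the kind of code it contains. The column names (Customer, Groceries, Item), the comma delimiter, and the asynchronous Output0 shape are assumptions made for illustration; the actual demo package may differ.

// Sketch of an SSIS script component (transformation) that performs a dynamic unpivot.
// Assumptions: the input exposes Customer and Groceries columns, Groceries holds a
// comma-delimited list of values, and Output0 is asynchronous with Customer and Item columns.
using System;

public class ScriptMain : UserComponent
{
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        if (Row.Groceries_IsNull)
            return;

        // Split the single catch-all column into its individual values...
        string[] values = Row.Groceries.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);

        // ...and emit one output row per value: columns become rows.
        foreach (string value in values)
        {
            Output0Buffer.AddRow();
            Output0Buffer.Customer = Row.Customer;
            Output0Buffer.Item = value.Trim();
        }
    }
}

Because Output0 is asynchronous, the component can produce as many rows per input row as there are values, which is what makes the approach immune to a new column appearing each month.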

    Read the article

  • SQL SERVER – Find Most Active Database in SQL Server – DMV dm_io_virtual_file_stats

    - by pinaldave
A few days ago, I wrote about SQL SERVER – Find Current Location of Data and Log File of All the Database. There was a very interesting conversation in the comments by blog readers. Blog reader and SQL expert Sreedhar presented a very interesting DMV query which lists the most active databases in SQL Server. For quick reference he has included the size on disk in KB, MB and GB as well.

SELECT DB_NAME(mf.database_id) AS databaseName,
    name AS File_LogicalName,
    CASE
        WHEN type_desc = 'LOG' THEN 'Log File'
        WHEN type_desc = 'ROWS' THEN 'Data File'
        ELSE type_desc
    END AS File_type_desc,
    mf.physical_name,
    num_of_reads,
    num_of_bytes_read,
    io_stall_read_ms,
    num_of_writes,
    num_of_bytes_written,
    io_stall_write_ms,
    io_stall,
    size_on_disk_bytes,
    size_on_disk_bytes / 1024 AS size_on_disk_KB,
    size_on_disk_bytes / 1024 / 1024 AS size_on_disk_MB,
    size_on_disk_bytes / 1024 / 1024 / 1024 AS size_on_disk_GB
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS divfs
JOIN sys.master_files AS mf
    ON mf.database_id = divfs.database_id
    AND mf.FILE_ID = divfs.FILE_ID
ORDER BY num_of_reads DESC

If you like to read about and practice with DMVs, I suggest reading the blog of my very good friend Glenn Berry. He is a DMV expert. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Best Practices for Building a Virtualized SPARC Computing Environment

    - by Scott Elvington
Oracle just published Best Practices for Building a Virtualized SPARC Computing Environment, a white paper that provides guidance on the complete hardware and software stack for deploying and managing your physical and virtual SPARC infrastructure. The solution is based on Oracle SPARC T4 servers, Oracle Solaris 11 with Oracle VM for SPARC 2.2, Sun ZFS storage appliances, Sun 10GbE 72 port switches and Oracle Enterprise Manager Ops Center 12c. The paper emphasizes the value and importance of planning the resources (compute, network and storage) that will comprise the virtualized environment to achieve the desired capacity, performance and availability characteristics. The document also details numerous operational best practices that will help you deliver on those characteristics with unique capabilities provided by Enterprise Manager Ops Center including policy-based guest placement, pool resource balancing and automated guest recovery in the event of server failure. Plenty of references to supplementary documentation are included to help point you to additional resources. Whether you're building the first stages of your private cloud or a general-purpose virtualized SPARC computing environment, these documented best practices will help ensure success. Please join Phil Bullinger and Steve Wilson from Oracle to learn more about breakthrough efficiency in private cloud infrastructure and how SPARC based virtualization can help you get started on your cloud journey.

    Read the article

  • The best algorithm enhancing alpha-beta?

    - by Risa
I'm studying AI. My teacher gave us the source code of a chess-like game and asked us to enhance it. My exercise is to improve the alpha-beta algorithm implemented in that game. The programmer already uses transposition tables and MTD(f) with alpha-beta+memory (MTD(f) is the best algorithm I know of by far). So is there any better algorithm for enhancing alpha-beta search, or a good way to implement MTD(f) when coding a game?
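For readers who have not met the technique mentioned in the question, MTD(f) drives a series of zero-window alpha-beta searches and relies on a transposition table ("alpha-beta with memory") so that successive probes reuse earlier work. The following is a minimal sketch of the driver loop in the spirit of Plaat's formulation; the alphaBetaWithMemory delegate and the state type are placeholders standing in for whatever the course code already provides, not code from any actual game engine.

using System;

public static class MtdfSketch
{
    // Driver loop for MTD(f): repeated zero-window alpha-beta probes that converge
    // on the minimax value. alphaBetaWithMemory(state, alpha, beta, depth) must be
    // an alpha-beta search backed by a transposition table.
    public static int MtdF<TState>(TState root, int firstGuess, int depth,
                                   Func<TState, int, int, int, int> alphaBetaWithMemory)
    {
        int g = firstGuess;
        int upperBound = int.MaxValue;
        int lowerBound = int.MinValue;

        while (lowerBound < upperBound)
        {
            // Probe with a null window placed just above the current lower bound.
            int beta = (g == lowerBound) ? g + 1 : g;
            g = alphaBetaWithMemory(root, beta - 1, beta, depth);

            if (g < beta)
                upperBound = g;   // fail low: g is a new upper bound
            else
                lowerBound = g;   // fail high: g is a new lower bound
        }
        return g;
    }
}

In practice this sits inside an iterative-deepening loop, with each depth's result fed back in as the next depth's firstGuess; that is usually where most of the speed-up over plain alpha-beta comes from.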

    Read the article

  • MySQL Syslog Audit Plugin

    - by jonathonc
    This post shows the construction process of the Syslog Audit plugin that was presented at MySQL Connect 2012. It is based on an environment that has the appropriate development tools enabled including gcc,g++ and cmake. It also assumes you have downloaded the MySQL source code (5.5.16 or higher) and have compiled and installed the system into the /usr/local/mysql directory ready for use.  The information provided below is designed to show the different components that make up a plugin, and specifically an audit type plugin, and how it comes together to be used within the MySQL service. The MySQL Reference Manual contains information regarding the plugin API and how it can be used, so please refer there for more detailed information. The code in this post is designed to give the simplest information necessary, so handling every return code, managing race conditions etc is not part of this example code. Let's start by looking at the most basic implementation of our plugin code as seen below: /*    Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.    Author:  Jonathon Coombes    Licence: GPL    Description: An auditing plugin that logs to syslog and                 can adjust the loglevel via the system variables. */ #include <stdio.h> #include <string.h> #include <mysql/plugin_audit.h> #include <syslog.h> There is a commented header detailing copyright/licencing and meta-data information and then the include headers. The two important include statements for our plugin are the syslog.h plugin, which gives us the structures for syslog, and the plugin_audit.h include which has details regarding the audit specific plugin api. Note that we do not need to include the general plugin header plugin.h, as this is done within the plugin_audit.h file already. To implement our plugin within the current implementation we need to add it into our source code and compile. > cd /usr/local/src/mysql-5.5.28/plugin > mkdir audit_syslog > cd audit_syslog A simple CMakeLists.txt file is created to manage the plugin compilation: MYSQL_ADD_PLUGIN(audit_syslog audit_syslog.cc MODULE_ONLY) Run the cmake  command at the top level of the source and then you can compile the plugin using the 'make' command. This results in a compiled audit_syslog.so library, but currently it is not much use to MySQL as there is no level of api defined to communicate with the MySQL service. Now we need to define the general plugin structure that enables MySQL to recognise the library as a plugin and be able to install/uninstall it and have it show up in the system. The structure is defined in the plugin.h file in the MySQL source code.  
/*   Plugin library descriptor */ mysql_declare_plugin(audit_syslog) {   MYSQL_AUDIT_PLUGIN,           /* plugin type                    */   &audit_syslog_descriptor,     /* descriptor handle               */   "audit_syslog",               /* plugin name                     */   "Author Name",                /* author                          */   "Simple Syslog Audit",        /* description                     */   PLUGIN_LICENSE_GPL,           /* licence                         */   audit_syslog_init,            /* init function     */   audit_syslog_deinit,          /* deinit function */   0x0001,                       /* plugin version                  */   NULL,                         /* status variables        */   NULL,                         /* system variables                */   NULL,                         /* no reserves                     */   0,                            /* no flags                        */ } mysql_declare_plugin_end; The general plugin descriptor above is standard for all plugin types in MySQL. The plugin type is defined along with the init/deinit functions and interface methods into the system for sharing information, and various other metadata information. The descriptors have an internally recognised version number so that plugins can be matched against the api on the running server. The other details are usually related to the type-specific methods and structures to implement the plugin. Each plugin has a type-specific descriptor as well which details how the plugin is implemented for the specific purpose of that plugin type. /*   Plugin type-specific descriptor */ static struct st_mysql_audit audit_syslog_descriptor= {   MYSQL_AUDIT_INTERFACE_VERSION,                        /* interface version    */   NULL,                                                 /* release_thd function */   audit_syslog_notify,                                  /* notify function      */   { (unsigned long) MYSQL_AUDIT_GENERAL_CLASSMASK |                     MYSQL_AUDIT_CONNECTION_CLASSMASK }  /* class mask           */ }; In this particular case, the release_thd function has not been defined as it is not required. The important method for auditing is the notify function which is activated when an event occurs on the system. The notify function is designed to activate on an event and the implementation will determine how it is handled. For the audit_syslog plugin, the use of the syslog feature sends all events to the syslog for recording. The class mask allows us to determine what type of events are being seen by the notify function. There are currently two major types of event: 1. General Events: This includes general logging, errors, status and result type events. This is the main one for tracking the queries and operations on the database. 2. Connection Events: This group is based around user logins. It monitors connections and disconnections, but also if somebody changes user while connected. With most audit plugins, the principle behind the plugin is to track changes to the system over time and counters can be an important part of this process. The next step is to define and initialise the counters that are used to track the events in the service. There are 3 counters defined in total for our plugin - the # of general events, the # of connection events and the total number of events.  
static volatile int total_number_of_calls; /* Count MYSQL_AUDIT_GENERAL_CLASS event instances */ static volatile int number_of_calls_general; /* Count MYSQL_AUDIT_CONNECTION_CLASS event instances */ static volatile int number_of_calls_connection; The init and deinit functions for the plugin are there to be called when the plugin is activated and when it is terminated. These offer the best option to initialise the counters for our plugin: /*  Initialize the plugin at server start or plugin installation. */ static int audit_syslog_init(void *arg __attribute__((unused))) {     openlog("mysql_audit:",LOG_PID|LOG_PERROR|LOG_CONS,LOG_USER);     total_number_of_calls= 0;     number_of_calls_general= 0;     number_of_calls_connection= 0;     return(0); } The init function does a call to openlog to initialise the syslog functionality. The parameters are the service to log under ("mysql_audit" in this case), the syslog flags and the facility for the logging. Then each of the counters are initialised to zero and a success is returned. If the init function is not defined, it will return success by default. /*  Terminate the plugin at server shutdown or plugin deinstallation. */ static int audit_syslog_deinit(void *arg __attribute__((unused))) {     closelog();     return(0); } The deinit function will simply close our syslog connection and return success. Note that the syslog functionality is part of the glibc libraries and does not require any external factors.  The function names are what we define in the general plugin structure, so these have to match otherwise there will be errors. The next step is to implement the event notifier function that was defined in the type specific descriptor (audit_syslog_descriptor) which is audit_syslog_notify. /* Event notifier function */ static void audit_syslog_notify(MYSQL_THD thd __attribute__((unused)), unsigned int event_class, const void *event) { total_number_of_calls++; if (event_class == MYSQL_AUDIT_GENERAL_CLASS) { const struct mysql_event_general *event_general= (const struct mysql_event_general *) event; number_of_calls_general++; syslog(audit_loglevel,"%lu: User: %s Command: %s Query: %s\n", event_general->general_thread_id, event_general->general_user, event_general->general_command, event_general->general_query ); } else if (event_class == MYSQL_AUDIT_CONNECTION_CLASS) { const struct mysql_event_connection *event_connection= (const struct mysql_event_connection *) event; number_of_calls_connection++; syslog(audit_loglevel,"%lu: User: %s@%s[%s] Event: %d Status: %d\n", event_connection->thread_id, event_connection->user, event_connection->host, event_connection->ip, event_connection->event_subclass, event_connection->status ); } }   In the case of an event, the notifier function is called. The first step is to increment the total number of events that have occurred in our database.The event argument is then cast into the appropriate event structure depending on the class type, of general event or connection event. The event type counters are incremented and details are sent via the syslog() function out to the system log. There are going to be different line formats and information returned since the general events have different data compared to the connection events, even though some of the details overlap, for example, user, thread id, host etc. On compiling the code now, there should be no errors and the resulting audit_syslog.so can be loaded into the server and ready to use. 
Log into the server and type:

mysql> INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so';

This will install the plugin and will start updating the syslog immediately. Note that the audit plugin attaches to the immediate thread and cannot be uninstalled while that thread is active. This means that you cannot run the UNINSTALL command until you log into a different connection (thread) on the server. Once the plugin is loaded, the system log will show output such as the following:

Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'
Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'
Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: show tables
Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: show tables
Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: select * from t1
Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: select * from t1

It appears that two of each event are being shown, but in actuality, these are two separate event types - the result event and the status event. This could be refined further by changing the audit_syslog_notify function to handle the different event sub-types in a different manner.

So far, it seems that the logging is working with events showing up in the syslog output. The issue now is that the counters created earlier to track the number of events by type are not accessible when the plugin is being run. Instead there needs to be a way to expose the plugin specific information to the service and vice versa. This could be done via the information_schema plugin api, but for something as simple as counters, the obvious choice is the system status variables. This is done using the standard structure and the declaration:

/* Plugin status variables for SHOW STATUS */
static struct st_mysql_show_var audit_syslog_status[]=
{
  { "Audit_syslog_total_calls", (char *) &total_number_of_calls, SHOW_INT },
  { "Audit_syslog_general_events", (char *) &number_of_calls_general, SHOW_INT },
  { "Audit_syslog_connection_events", (char *) &number_of_calls_connection, SHOW_INT },
  { 0, 0, SHOW_INT }
};

The structure is simply the name that will be displayed in the mysql service, the address of the associated variable, and the data type being used for the counter. It is finished with a blank structure to show that there are no more variables. Remember that status variables may have the same name as variables from other plugins, so it is considered appropriate to add the plugin name at the start of the status variable name to avoid confusion. Looking at the status variables in the mysql client shows something like the following:

mysql> show global status like "audit%";
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| Audit_syslog_connection_events | 1     |
| Audit_syslog_general_events    | 2     |
| Audit_syslog_total_calls       | 3     |
+--------------------------------+-------+
3 rows in set (0.00 sec)

The final connectivity piece for the plugin is to allow the interactive change of the logging level between the plugin and the system. 
This requires the ability to send changes via the mysql service through to the plugin. This is done using the system variables interface and defining a single variable to keep track of the active logging level for the facility. /* Plugin system variables for SHOW VARIABLES */ static MYSQL_SYSVAR_STR(loglevel, audit_loglevel,                         PLUGIN_VAR_RQCMDARG,                         "User can specify the log level for auditing",                         audit_loglevel_check, audit_loglevel_update, "LOG_NOTICE"); static struct st_mysql_sys_var* audit_syslog_sysvars[] = {     MYSQL_SYSVAR(loglevel),     NULL }; So now the system variable 'loglevel' is defined for the plugin and associated to the global variable 'audit_loglevel'. The check or validation function is defined to make sure that no garbage values are attempted in the update of the variable. The update function is used to save the new value to the variable. Note that the audit_syslog_sysvars structure is defined in the general plugin descriptor to associate the link between the plugin and the system and how much they interact. Next comes the implementation of the validation function and the update function for the system variable. It is worth noting that if you have a simple numeric such as integers for the variable types, the validate function is often not required as MySQL will handle the automatic check and validation of simple types. /* longest valid value */ #define MAX_LOGLEVEL_SIZE 100 /* hold the valid values */ static const char *possible_modes[]= { "LOG_ERROR", "LOG_WARNING", "LOG_NOTICE", NULL };  static int audit_loglevel_check(     THD*                        thd,    /*!< in: thread handle */     struct st_mysql_sys_var*    var,    /*!< in: pointer to system                                         variable */     void*                       save,   /*!< out: immediate result                                         for update function */     struct st_mysql_value*      value)  /*!< in: incoming string */ {     char buff[MAX_LOGLEVEL_SIZE];     const char *str;     const char **found;     int length;     length= sizeof(buff);     if (!(str= value->val_str(value, buff, &length)))         return 1;     /*         We need to return a pointer to a locally allocated value in "save".         Here we pick to search for the supplied value in an global array of         constant strings and return a pointer to one of them.         The other possiblity is to use the thd_alloc() function to allocate         a thread local buffer instead of the global constants.     */     for (found= possible_modes; *found; found++)     {         if (!strcmp(*found, str))         {             *(const char**)save= *found;             return 0;         }     }     return 1; } The validation function is simply to take the value being passed in via the SET GLOBAL VARIABLE command and check if it is one of the pre-defined values allowed  in our possible_values array. If it is found to be valid, then the value is assigned to the save variable ready for passing through to the update function. 
static void audit_loglevel_update(     THD*                        thd,        /*!< in: thread handle */     struct st_mysql_sys_var*    var,        /*!< in: system variable                                             being altered */     void*                       var_ptr,    /*!< out: pointer to                                             dynamic variable */     const void*                 save)       /*!< in: pointer to                                             temporary storage */ {     /* assign the new value so that the server can read it */     *(char **) var_ptr= *(char **) save;     /* assign the new value to the internal variable */     audit_loglevel= *(char **) save; } Since all the validation has been done already, the update function is quite simple for this plugin. The first part is to update the system variable pointer so that the server can read the value. The second part is to update our own global plugin variable for tracking the value. Notice that the save variable is passed in as a void type to allow handling of various data types, so it must be cast to the appropriate data type when assigning it to the variables. Looking at how the latest changes affect the usage of the plugin and the interaction within the server shows: mysql> show global variables like "audit%"; +-----------------------+------------+ | Variable_name         | Value      | +-----------------------+------------+ | audit_syslog_loglevel | LOG_NOTICE | +-----------------------+------------+ 1 row in set (0.00 sec) mysql> set global audit_syslog_loglevel="LOG_ERROR"; Query OK, 0 rows affected (0.00 sec) mysql> show global status like "audit%"; +--------------------------------+-------+ | Variable_name                  | Value | +--------------------------------+-------+ | Audit_syslog_connection_events | 1     | | Audit_syslog_general_events    | 11    | | Audit_syslog_total_calls       | 12    | +--------------------------------+-------+ 3 rows in set (0.00 sec) mysql> show global variables like "audit%"; +-----------------------+-----------+ | Variable_name         | Value     | +-----------------------+-----------+ | audit_syslog_loglevel | LOG_ERROR | +-----------------------+-----------+ 1 row in set (0.00 sec)   So now we have a plugin that will audit the events on the system and log the details to the system log. It allows for interaction to see the number of different events within the server details and provides a mechanism to change the logging level interactively via the standard system methods of the SET command. A more complex auditing plugin may have more detailed code, but each of the above areas is what will be involved and simply expanded on to add more functionality. With the above skeleton code, it is now possible to create your own audit plugins to implement your own auditing requirements. If, however, you are not of the coding persuasion, then you could always consider the option of the MySQL Enterprise Audit plugin that is available to purchase.

    Read the article

  • Debian squeeze keyboard and touchpad not working / detected on laptop

    - by Esa
    They work before gdm3 starts. a connected mouse also stops working, but functions after removal and re-plug. no xorg.conf. log doesn't show any loading of drivers for kbd/touchpad [ 33.783] X.Org X Server 1.10.4 Release Date: 2011-08-19 [ 33.783] X Protocol Version 11, Revision 0 [ 33.783] Build Operating System: Linux 3.0.0-1-amd64 x86_64 Debian [ 33.783] Current Operating System: Linux sus 3.2.0-0.bpo.2-amd64 #1 SMP Sun Mar 25 10:33:35 UTC 2012 x86_64 [ 33.783] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-0.bpo.2-amd64 root=UUID=8686f840-d165-4d1e-b995-2ebbd94aa3d2 ro quiet [ 33.783] Build Date: 28 August 2011 09:39:43PM [ 33.783] xorg-server 2:1.10.4-1~bpo60+1 (Cyril Brulebois <[email protected]>) [ 33.783] Current version of pixman: 0.16.4 [ 33.783] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 33.783] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 33.783] (==) Log file: "/var/log/Xorg.0.log", Time: Wed Mar 28 09:34:04 2012 [ 33.837] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 33.936] (==) No Layout section. Using the first Screen section. [ 33.936] (==) No screen section available. Using defaults. [ 33.936] (**) |-->Screen "Default Screen Section" (0) [ 33.936] (**) | |-->Monitor "<default monitor>" [ 33.936] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 33.936] (==) Automatically adding devices [ 33.936] (==) Automatically enabling devices [ 34.164] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 34.164] Entry deleted from font path. [ 34.226] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/75dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, /usr/share/fonts/X11/75dpi, /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, built-ins [ 34.226] (==) ModulePath set to "/usr/lib/xorg/modules" [ 34.226] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 34.226] (II) Loader magic: 0x7d3ae0 [ 34.226] (II) Module ABI versions: [ 34.226] X.Org ANSI C Emulation: 0.4 [ 34.226] X.Org Video Driver: 10.0 [ 34.226] X.Org XInput driver : 12.2 [ 34.226] X.Org Server Extension : 5.0 [ 34.227] (--) PCI:*(0:1:5:0) 1002:9712:103c:1661 rev 0, Mem @ 0xd0000000/268435456, 0xf1400000/65536, 0xf1300000/1048576, I/O @ 0x00008000/256 [ 34.227] (--) PCI: (0:2:0:0) 1002:6760:103c:1661 rev 0, Mem @ 0xe0000000/268435456, 0xf0300000/131072, I/O @ 0x00004000/256, BIOS @ 0x????????/131072 [ 34.227] (II) Open ACPI successful (/var/run/acpid.socket) [ 34.227] (II) LoadModule: "extmod" [ 34.249] (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so [ 34.277] (II) Module extmod: vendor="X.Org Foundation" [ 34.277] compiled for 1.10.4, module version = 1.0.0 [ 34.277] Module class: X.Org Server Extension [ 34.277] ABI class: X.Org Server Extension, version 5.0 [ 34.277] (II) Loading extension SELinux [ 34.277] (II) Loading extension MIT-SCREEN-SAVER [ 34.277] (II) Loading extension XFree86-VidModeExtension [ 34.277] (II) Loading extension XFree86-DGA [ 34.277] (II) Loading extension DPMS [ 34.277] (II) Loading extension XVideo [ 34.277] (II) Loading extension XVideo-MotionCompensation [ 34.277] (II) Loading extension X-Resource [ 34.277] (II) LoadModule: "dbe" [ 34.277] (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so [ 34.299] (II) Module dbe: vendor="X.Org Foundation" [ 34.299] compiled for 1.10.4, module version = 1.0.0 [ 34.299] Module class: X.Org Server Extension [ 34.299] ABI class: X.Org Server Extension, version 5.0 [ 34.299] (II) Loading extension DOUBLE-BUFFER [ 34.299] (II) LoadModule: "glx" [ 34.299] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 34.477] (II) Module glx: vendor="X.Org Foundation" [ 34.477] compiled for 1.10.4, module version = 1.0.0 [ 34.477] ABI class: X.Org Server Extension, version 5.0 [ 34.477] (==) AIGLX enabled [ 34.477] (II) Loading extension GLX [ 34.477] (II) LoadModule: "record" [ 34.478] (II) Loading /usr/lib/xorg/modules/extensions/librecord.so [ 34.481] (II) Module record: vendor="X.Org Foundation" [ 34.481] compiled for 1.10.4, module version = 1.13.0 [ 34.481] Module class: X.Org Server Extension [ 34.481] ABI class: X.Org Server Extension, version 5.0 [ 34.481] (II) Loading extension RECORD [ 34.481] (II) LoadModule: "dri" [ 34.481] (II) Loading /usr/lib/xorg/modules/extensions/libdri.so [ 34.512] (II) Module dri: vendor="X.Org Foundation" [ 34.512] compiled for 1.10.4, module version = 1.0.0 [ 34.512] ABI class: X.Org Server Extension, version 5.0 [ 34.512] (II) Loading extension XFree86-DRI [ 34.512] (II) LoadModule: "dri2" [ 34.512] (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so [ 34.515] (II) Module dri2: vendor="X.Org Foundation" [ 34.515] compiled for 1.10.4, module version = 1.2.0 [ 34.515] ABI class: X.Org Server Extension, version 5.0 [ 34.515] (II) Loading extension DRI2 [ 34.515] (==) Matched ati as autoconfigured driver 0 [ 34.515] (==) Matched vesa as autoconfigured driver 1 [ 34.515] (==) Matched fbdev as autoconfigured driver 2 [ 34.515] (==) Assigned the driver to the xf86ConfigLayout [ 34.515] (II) LoadModule: "ati" [ 34.706] (II) Loading /usr/lib/xorg/modules/drivers/ati_drv.so [ 34.724] (II) Module ati: vendor="X.Org Foundation" [ 34.724] compiled for 1.10.3, module version = 6.14.2 [ 34.724] Module class: X.Org Video Driver [ 34.724] ABI class: X.Org Video Driver, version 10.0 [ 34.724] (II) LoadModule: "radeon" [ 34.725] (II) Loading 
/usr/lib/xorg/modules/drivers/radeon_drv.so [ 34.923] (II) Module radeon: vendor="X.Org Foundation" [ 34.923] compiled for 1.10.3, module version = 6.14.2 [ 34.923] Module class: X.Org Video Driver [ 34.923] ABI class: X.Org Video Driver, version 10.0 [ 34.945] (II) LoadModule: "vesa" [ 34.945] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 34.988] (II) Module vesa: vendor="X.Org Foundation" [ 34.988] compiled for 1.10.3, module version = 2.3.0 [ 34.988] Module class: X.Org Video Driver [ 34.988] ABI class: X.Org Video Driver, version 10.0 [ 34.988] (II) LoadModule: "fbdev" [ 34.988] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 35.020] (II) Module fbdev: vendor="X.Org Foundation" [ 35.020] compiled for 1.10.3, module version = 0.4.2 [ 35.020] ABI class: X.Org Video Driver, version 10.0 [ 35.020] (II) RADEON: Driver for ATI Radeon chipsets: <snip> [ 35.023] (II) VESA: driver for VESA chipsets: vesa [ 35.023] (II) FBDEV: driver for framebuffer: fbdev [ 35.023] (++) using VT number 7 [ 35.033] (II) Loading /usr/lib/xorg/modules/drivers/radeon_drv.so [ 35.033] (II) [KMS] Kernel modesetting enabled. [ 35.033] (WW) Falling back to old probe method for vesa [ 35.034] (WW) Falling back to old probe method for fbdev [ 35.034] (II) Loading sub module "fbdevhw" [ 35.034] (II) LoadModule: "fbdevhw" [ 35.034] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 35.185] (II) Module fbdevhw: vendor="X.Org Foundation" [ 35.185] compiled for 1.10.4, module version = 0.0.2 [ 35.185] ABI class: X.Org Video Driver, version 10.0 [ 35.288] (II) RADEON(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32 [ 35.288] (==) RADEON(0): Depth 24, (--) framebuffer bpp 32 [ 35.288] (II) RADEON(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps) [ 35.288] (==) RADEON(0): Default visual is TrueColor [ 35.288] (==) RADEON(0): RGB weight 888 [ 35.288] (II) RADEON(0): Using 8 bits per RGB (8 bit DAC) [ 35.288] (--) RADEON(0): Chipset: "ATI Mobility Radeon HD 4200" (ChipID = 0x9712) [ 35.288] (II) RADEON(0): PCI card detected [ 35.288] drmOpenDevice: node name is /dev/dri/card0 [ 35.288] drmOpenDevice: open result is 9, (OK) [ 35.288] drmOpenByBusid: Searching for BusID pci:0000:01:05.0 [ 35.288] drmOpenDevice: node name is /dev/dri/card0 [ 35.288] drmOpenDevice: open result is 9, (OK) [ 35.288] drmOpenByBusid: drmOpenMinor returns 9 [ 35.288] drmOpenByBusid: drmGetBusid reports pci:0000:01:05.0 [ 35.288] (II) Loading sub module "exa" [ 35.288] (II) LoadModule: "exa" [ 35.288] (II) Loading /usr/lib/xorg/modules/libexa.so [ 35.335] (II) Module exa: vendor="X.Org Foundation" [ 35.335] compiled for 1.10.4, module version = 2.5.0 [ 35.335] ABI class: X.Org Video Driver, version 10.0 [ 35.335] (II) RADEON(0): KMS Color Tiling: disabled [ 35.335] (II) RADEON(0): KMS Pageflipping: enabled [ 35.335] (II) RADEON(0): SwapBuffers wait for vsync: enabled [ 35.360] (II) RADEON(0): Output VGA-0 has no monitor section [ 35.360] (II) RADEON(0): Output LVDS has no monitor section [ 35.364] (II) RADEON(0): Output HDMI-0 has no monitor section [ 35.388] (II) RADEON(0): EDID for output VGA-0 [ 35.388] (II) RADEON(0): EDID for output LVDS [ 35.388] (II) RADEON(0): Manufacturer: LGD Model: 2ac Serial#: 0 [ 35.388] (II) RADEON(0): Year: 2010 Week: 0 [ 35.388] (II) RADEON(0): EDID Version: 1.3 [ 35.388] (II) RADEON(0): Digital Display Input [ 35.388] (II) RADEON(0): Max Image Size [cm]: horiz.: 34 vert.: 19 [ 35.388] (II) RADEON(0): Gamma: 2.20 [ 35.388] (II) RADEON(0): 
No DPMS capabilities specified [ 35.388] (II) RADEON(0): Supported color encodings: RGB 4:4:4 YCrCb 4:4:4 [ 35.388] (II) RADEON(0): First detailed timing is preferred mode [ 35.388] (II) RADEON(0): redX: 0.616 redY: 0.371 greenX: 0.355 greenY: 0.606 [ 35.388] (II) RADEON(0): blueX: 0.152 blueY: 0.100 whiteX: 0.313 whiteY: 0.329 [ 35.388] (II) RADEON(0): Manufacturer's mask: 0 [ 35.388] (II) RADEON(0): Supported detailed timing: [ 35.388] (II) RADEON(0): clock: 69.3 MHz Image Size: 344 x 194 mm [ 35.388] (II) RADEON(0): h_active: 1366 h_sync: 1398 h_sync_end 1430 h_blank_end 1486 h_border: 0 [ 35.388] (II) RADEON(0): v_active: 768 v_sync: 770 v_sync_end 774 v_blanking: 782 v_border: 0 [ 35.388] (II) RADEON(0): LG Display [ 35.388] (II) RADEON(0): LP156WH2-TLQB [ 35.388] (II) RADEON(0): EDID (in hex): [ 35.388] (II) RADEON(0): 00ffffffffffff0030e4ac0200000000 [ 35.388] (II) RADEON(0): 00140103802213780ac1259d5f5b9b27 [ 35.388] (II) RADEON(0): 19505400000001010101010101010101 [ 35.388] (II) RADEON(0): 010101010101121b567850000e302020 [ 35.388] (II) RADEON(0): 240058c2100000190000000000000000 [ 35.388] (II) RADEON(0): 00000000000000000000000000fe004c [ 35.388] (II) RADEON(0): 4720446973706c61790a2020000000fe [ 35.388] (II) RADEON(0): 004c503135365748322d544c514200c1 [ 35.388] (II) RADEON(0): Printing probed modes for output LVDS [ 35.388] (II) RADEON(0): Modeline "1366x768"x59.6 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz) [ 35.388] (II) RADEON(0): Modeline "1280x720"x59.9 74.50 1280 1344 1472 1664 720 723 728 748 -hsync +vsync (44.8 kHz) [ 35.388] (II) RADEON(0): Modeline "1152x768"x59.8 71.75 1152 1216 1328 1504 768 771 781 798 -hsync +vsync (47.7 kHz) [ 35.388] (II) RADEON(0): Modeline "1024x768"x59.9 63.50 1024 1072 1176 1328 768 771 775 798 -hsync +vsync (47.8 kHz) [ 35.388] (II) RADEON(0): Modeline "800x600"x59.9 38.25 800 832 912 1024 600 603 607 624 -hsync +vsync (37.4 kHz) [ 35.388] (II) RADEON(0): Modeline "848x480"x59.7 31.50 848 872 952 1056 480 483 493 500 -hsync +vsync (29.8 kHz) [ 35.388] (II) RADEON(0): Modeline "720x480"x59.7 26.75 720 744 808 896 480 483 493 500 -hsync +vsync (29.9 kHz) [ 35.388] (II) RADEON(0): Modeline "640x480"x59.4 23.75 640 664 720 800 480 483 487 500 -hsync +vsync (29.7 kHz) [ 35.392] (II) RADEON(0): EDID for output HDMI-0 [ 35.392] (II) RADEON(0): Output VGA-0 disconnected [ 35.392] (II) RADEON(0): Output LVDS connected [ 35.392] (II) RADEON(0): Output HDMI-0 disconnected [ 35.392] (II) RADEON(0): Using exact sizes for initial modes [ 35.392] (II) RADEON(0): Output LVDS using initial mode 1366x768 [ 35.392] (II) RADEON(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated. 
[ 35.392] (II) RADEON(0): mem size init: gart size :1fdff000 vram size: s:10000000 visible:fba0000 [ 35.392] (II) RADEON(0): EXA: Driver will allow EXA pixmaps in VRAM [ 35.392] (==) RADEON(0): DPI set to (96, 96) [ 35.392] (II) Loading sub module "fb" [ 35.392] (II) LoadModule: "fb" [ 35.392] (II) Loading /usr/lib/xorg/modules/libfb.so [ 35.492] (II) Module fb: vendor="X.Org Foundation" [ 35.492] compiled for 1.10.4, module version = 1.0.0 [ 35.492] ABI class: X.Org ANSI C Emulation, version 0.4 [ 35.492] (II) Loading sub module "ramdac" [ 35.492] (II) LoadModule: "ramdac" [ 35.492] (II) Module "ramdac" already built-in [ 35.492] (II) UnloadModule: "vesa" [ 35.492] (II) Unloading vesa [ 35.492] (II) UnloadModule: "fbdev" [ 35.492] (II) Unloading fbdev [ 35.492] (II) UnloadModule: "fbdevhw" [ 35.492] (II) Unloading fbdevhw [ 35.492] (--) Depth 24 pixmap format is 32 bpp [ 35.492] (II) RADEON(0): [DRI2] Setup complete [ 35.492] (II) RADEON(0): [DRI2] DRI driver: r600 [ 35.492] (II) RADEON(0): Front buffer size: 4224K [ 35.492] (II) RADEON(0): VRAM usage limit set to 228096K [ 35.615] (==) RADEON(0): Backing store disabled [ 35.615] (II) RADEON(0): Direct rendering enabled [ 35.658] (II) RADEON(0): Setting EXA maxPitchBytes [ 35.658] (II) EXA(0): Driver allocated offscreen pixmaps [ 35.658] (II) EXA(0): Driver registered support for the following operations: [ 35.658] (II) Solid [ 35.658] (II) Copy [ 35.658] (II) Composite (RENDER acceleration) [ 35.658] (II) UploadToScreen [ 35.658] (II) DownloadFromScreen [ 35.687] (II) RADEON(0): Acceleration enabled [ 35.687] (==) RADEON(0): DPMS enabled [ 35.687] (==) RADEON(0): Silken mouse enabled [ 35.721] (II) RADEON(0): Set up textured video [ 35.721] (II) RADEON(0): RandR 1.2 enabled, ignore the following RandR disabled message. 
[ 35.721] (--) RandR disabled [ 35.721] (II) Initializing built-in extension Generic Event Extension [ 35.721] (II) Initializing built-in extension SHAPE [ 35.721] (II) Initializing built-in extension MIT-SHM [ 35.721] (II) Initializing built-in extension XInputExtension [ 35.721] (II) Initializing built-in extension XTEST [ 35.721] (II) Initializing built-in extension BIG-REQUESTS [ 35.721] (II) Initializing built-in extension SYNC [ 35.721] (II) Initializing built-in extension XKEYBOARD [ 35.721] (II) Initializing built-in extension XC-MISC [ 35.721] (II) Initializing built-in extension SECURITY [ 35.721] (II) Initializing built-in extension XINERAMA [ 35.721] (II) Initializing built-in extension XFIXES [ 35.721] (II) Initializing built-in extension RENDER [ 35.721] (II) Initializing built-in extension RANDR [ 35.721] (II) Initializing built-in extension COMPOSITE [ 35.721] (II) Initializing built-in extension DAMAGE [ 35.721] (II) SELinux: Disabled on system [ 35.982] (II) AIGLX: enabled GLX_MESA_copy_sub_buffer [ 35.982] (II) AIGLX: enabled GLX_INTEL_swap_event [ 35.982] (II) AIGLX: enabled GLX_SGI_swap_control and GLX_MESA_swap_control [ 35.982] (II) AIGLX: enabled GLX_SGI_make_current_read [ 35.982] (II) AIGLX: GLX_EXT_texture_from_pixmap backed by buffer objects [ 35.982] (II) AIGLX: Loaded and initialized /usr/lib/dri/r600_dri.so [ 35.982] (II) GLX: Initialized DRI2 GL provider for screen 0 [ 35.999] (II) RADEON(0): Setting screen physical size to 361 x 203 [ 43.896] (II) RADEON(0): EDID vendor "LGD", prod id 684 [ 43.896] (II) RADEON(0): Printing DDC gathered Modelines: [ 43.896] (II) RADEON(0): Modeline "1366x768"x0.0 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz) [ 43.924] (II) RADEON(0): EDID vendor "LGD", prod id 684 [ 43.924] (II) RADEON(0): Printing DDC gathered Modelines: [ 43.924] (II) RADEON(0): Modeline "1366x768"x0.0 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz) [ 43.988] (II) RADEON(0): EDID vendor "LGD", prod id 684 [ 43.988] (II) RADEON(0): Printing DDC gathered Modelines: [ 43.988] (II) RADEON(0): Modeline "1366x768"x0.0 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz) [ 67.375] (II) config/udev: Adding input device Logitech USB Optical Mouse (/dev/input/event1) [ 67.376] (**) Logitech USB Optical Mouse: Applying InputClass "evdev pointer catchall" [ 67.376] (II) LoadModule: "evdev" [ 67.376] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so [ 67.392] (II) Module evdev: vendor="X.Org Foundation" [ 67.392] compiled for 1.10.3, module version = 2.6.0 [ 67.392] Module class: X.Org XInput Driver [ 67.392] ABI class: X.Org XInput driver, version 12.2 [ 67.392] (II) Using input driver 'evdev' for 'Logitech USB Optical Mouse' [ 67.392] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so [ 67.392] (**) Logitech USB Optical Mouse: always reports core events [ 67.392] (**) Logitech USB Optical Mouse: Device: "/dev/input/event1" [ 67.392] (--) Logitech USB Optical Mouse: Found 12 mouse buttons [ 67.392] (--) Logitech USB Optical Mouse: Found scroll wheel(s) [ 67.392] (--) Logitech USB Optical Mouse: Found relative axes [ 67.392] (--) Logitech USB Optical Mouse: Found x and y relative axes [ 67.392] (II) Logitech USB Optical Mouse: Configuring as mouse [ 67.392] (II) Logitech USB Optical Mouse: Adding scrollwheel support [ 67.392] (**) Logitech USB Optical Mouse: YAxisMapping: buttons 4 and 5 [ 67.392] (**) Logitech USB Optical Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 [ 
67.392] (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:13.0/usb5/5-1/5-1:1.0/input/input14/event1" [ 67.392] (II) XINPUT: Adding extended input device "Logitech USB Optical Mouse" (type: MOUSE) [ 67.392] (II) Logitech USB Optical Mouse: initialized for relative axes. [ 67.392] (**) Logitech USB Optical Mouse: (accel) keeping acceleration scheme 1 [ 67.392] (**) Logitech USB Optical Mouse: (accel) acceleration profile 0 [ 67.392] (**) Logitech USB Optical Mouse: (accel) acceleration factor: 2.000 [ 67.392] (**) Logitech USB Optical Mouse: (accel) acceleration threshold: 4 [ 67.392] (II) config/udev: Adding input device Logitech USB Optical Mouse (/dev/input/mouse0) [ 67.392] (II) No input driver/identifier specified (ignoring) [ 78.692] (II) Logitech USB Optical Mouse: Close [ 78.692] (II) UnloadModule: "evdev" [ 78.692] (II) Unloading evdev

    Read the article

  • Looking ahead at 2011-with Gartner

    - by andrea.mulder
    Speaking of forecasting the future. Gartner highlighted the top 10 technologies and trends that will be strategic for most organizations in 2011. While Gartner's predictions are not specific to CRM, you just cannot help but notice some of the common themes in store for 2011. The top 10 strategic technologies for 2011 include: Cloud Computing; Mobile Applications and Media Tablets; Social Communications and Collaborations; Video; Next Generation Analytics; Social Analytics; Context-Aware Computing; Storage Class Memory; Ubiquitous Computing; and Fabric-Based Infrastructure and Computers.

    Read the article

  • Google Chrome Extensions: Launch Event (part 6)

    Video footage from the Google Chrome Extensions launch event on 12/09/09. Nick Baum, product manager for Google Chrome's extension system, presents the gallery approval process, gives tips to extension developers on how to make their extensions successful, and discusses the team's short-term plans. (From GoogleDevelopers; running time 08:42.)

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1-to-1 mapping between entity and table. You can do more complex mapping, but it becomes very messy and won't perform. What other options do you have? The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of BizTalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice. You can write nice LINQ queries instead of the XmlReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with. No streaming. Your code would look like this. We create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement:

    XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");
    foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer")
                           from order in customer.Elements("Orders").Elements("Order")
                           select new { Account = customer.Attribute("AccountNumber").Value
                                      , OrderDate = order.Attribute("OrderDate").Value }
                           ))
    {
        Output0Buffer.AddRow();
        Output0Buffer.AccountNumber = xdata.Account;
        Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
    }

    As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty (http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx), which show you how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that causes the pointer in the XmlReader to progress past the start of the element and thus skip elements:

    foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml", "Customer")
                           from order in customer.Elements("Orders").Elements("Order")
                           select new { Account = customer.Attribute("AccountNumber").Value
                                      , OrderDate = order.Attribute("OrderDate").Value }
                           ))
    {
        Output0Buffer.AddRow();
        Output0Buffer.AccountNumber = xdata.Account;
        Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
    }

    These look very similar, and they are; the key element is the method we are calling, StreamReader. This method is what gives us streaming: what it does is return an enumerable list of elements, and because of the way that LINQ works this results in the data being streamed in.
    static IEnumerable<XElement> StreamReader(String filename, string elementName)
    {
        using (XmlReader xr = XmlReader.Create(filename))
        {
            xr.MoveToContent();
            while (xr.Read()) //Reads the first element
            {
                while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)
                {
                    XElement node = (XElement)XElement.ReadFrom(xr);
                    yield return node;
                }
            }
            xr.Close();
        }
    }

    This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element, and then the inner while loop checks to see if the current element is the type we want. If not, we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub-elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially, once one element has been read we need to check if we are still on the same element type and name (the inner loop). This was Mike's mistake: if we called .Read again we would advance the XmlReader beyond the start of the element, and so the ReadFrom method wouldn't work. So with the code above you can use whatever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.

    Read the article

  • 12.10 update breaks NFS mount

    - by TarekEldeeb
    I've just upgraded to the latest 12.10 beta. Rebooted twice. The problem is with the NFS folders not mounting, here's a verbose log. # mount -v myserver:/nfs_shared/tools /tools/ mount: no type was given - I'll assume nfs because of the colon mount.nfs: timeout set for Mon Oct 1 11:42:28 2012 mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 
'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109' mount.nfs: mount(2): No such file or directory mount.nfs: trying text-based options 'addr=192.168.0.22' mount.nfs: prog 100003, trying vers=3, prot=6 mount.nfs: portmap query retrying: RPC: Timed out mount.nfs: prog 100003, trying vers=3, prot=17 mount.nfs: portmap query failed: RPC: Timed out mount.nfs: Connection timed out My IP is 109, the server is 22. Just before the last upgrade, I was able to mount normally. Other PCs on my network are able to mount too, it's not a server issue. How can I analyse the problem? Certain log files? Any help?

    Read the article

  • Podcast Show Notes: The Red Room Interview – Part 1

    - by Bob Rhubart
      The latest OTN Arch2Arch podcast is Part 1 of a three-part series featuring a discussion of a broad range of SOA  issues with three members of the small army of contributors to The Red Room Blog, now part of the OJam.biz site, the Australia-New Zealand outpost of the global Oracle community. The panelists for this program are: Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware LinkedIn | Twitter | Blog Richard Ward - SOA Channel Development Manager at Oracle LinkedIn | Blog Mervin Chiang - Consulting Principal at Leonardo Consulting LinkedIn | Twitter | Blog (You can also follow the Red Room itself on Twitter: @OracleRedRoom.) The genesis of this interview goes back to 2009, and the original Red Room blog, on which Sean, Richard, Mervin, and other Red Roomers published a 10-part series of posts that, taken together, form a kind of SOA best-practices guide, presented in an irreverent style that is rare in a lot of technical writing. It was on the basis of their expertise and irreverence that I wanted to get a few of the Red Room bloggers on an Arch2Arch podcast.  Easier said than done. Trying to schedule a group interview with very busy people on the other side of world (they’re actually 15 hours in the future, relative to my location) is not a simple process. The conversations about getting some of the Red Room people on the program began in the summer of 2009. The interview finally happened at 5:30 PM EDT on Tuesday March 30, 2010, which for the panelists, located in Australia, was 8:30 AM on Wednesday March 31, 2010. I was waiting for dinner, and Sean, Richard, and Mervin were waiting for breakfast. But the call went off without a hitch, and the panelists carried on a great discussion of SOA issues. Listen to Part 1 Many thanks to Gareth Llewellyn for his help in putting this together. SOA Best Practices Here’s a complete list of the posts in the original 10-part Red Room series: SOA is Dead. Long Live SOA by Sean Boiling Are you doing SOP’s instead of SOA? by Saul Cunningham All The President's SOA by Sean Boiling SOA – Pay Now or Pay Dearly by Richard Ward SOA where are the skills? by Richard Ward Project Management Pitfalls within SOA by Anton Gouws Viewing SOA as a project instead of an architecture by Saul Cunningham Kiss and Tell by Sean Boiling Failure to implement and adhere to SOA Governance by Mervin Chiang Ten Out Of Ten by Sean Boiling Parts 2 of the Red Room Interview will be available next week, followed by Part 3, so stay tuned: RSS Change in the Wind Beginning with next week’s program, the OTN Arch2Arch Podcast will be rechristened as the OTN ArchBeat Podcast, to better align with this blog. The transformation will be painless – you won’t feel a thing.   del.icio.us Tags: otn,oracle,Archbeat,Arch2Arch,soa,service oriented architecture,podcast Technorati Tags: otn,oracle,Archbeat,Arch2Arch,soa,service oriented architecture,podcast

    Read the article

  • SQL SERVER – Transcript of Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
    Recently I wrote a blog post about Learning SQL Server Performance: Indexing Basics, and I received lots of requests to share some insight into the course. Here is a 200-second interview of Vinod Kumar that I took right after completing the course. We have a few free codes to watch the course; please leave your comment at http://facebook.com/SQLAuth and we will send the code to a few of the first ones. Many people said they would like to read the transcript of the video, so I have generated it here. Pinal: Vinod, we recently released this course, SQL Server Indexing. It is about performance tuning. So tell me – how do indexes help performance? Vinod: I think what happens in the industry when it comes to performance is that developers and DBAs look at indexes first.  So that's the first step for any performance tuning exercise; indexing is one of the most critical aspects and it is important to learn it the right way. Pinal: Correct. So what you mean to say is that if you know indexing you can pretty much tune any server and query. Vinod: So I might contradict my previous statement now. Indexing is usually a stepping stone but it does not lead you to the end. But it's good to start with indexing, and there are lots of nuances to indexing that you need to understand, like how SQL Server uses indexing and how performance can improve because of the strategies that you have made. Pinal: But now I'm confused. First you said indexes are good, and then you said that indexes can degrade your performance.  So what is this course about?  I mean how does this course really make an impact? Vinod: OK – so from the course perspective, what we are trying to do is give you a capsule which gives you a good start. Every journey needs a beginning; you need that first step.  This course is that first step in understanding. This is the most basic, fundamental course that we have tried to attack. This is the fundamentals of indexing, some of the key things that you must know about indexing.   Some of the basics of indexing are lesser known, and so I think this course is geared towards each and every one of you out there who wants to understand a little bit more about indexing. Pinal: So what I understand is that if I enrolled in this course I will have a minimum understanding about indexing when dealing with performance tuning.  Right? Vinod: Exactly. In this course we have tried to give you a nice summary. We are talking about clustered indexing, non-clustered indexing, too many indexes, too few indexes, over-indexing, under-indexing, duplicate indexing, columnstore indexing, with SQL Server 2012. There's lots to learn. Pinal: You can see the URL [http://bit.ly/sql-index] of the course on the screen. Go ahead, attend, and let us know what you think about it. Thank you. Vinod: Thank you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video
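    (A small aside, not from the course itself: the sketch below illustrates the clustered/non-clustered basics Vinod mentions. The table, columns, and index names are hypothetical examples.)

    -- Hypothetical table: the primary key creates the clustered index, which defines the physical order of the rows
    CREATE TABLE dbo.CustomerOrder
    (
        OrderID    INT IDENTITY(1, 1) NOT NULL,
        CustomerID INT      NOT NULL,
        OrderDate  DATETIME NOT NULL,
        TotalDue   MONEY    NOT NULL,
        CONSTRAINT PK_CustomerOrder PRIMARY KEY CLUSTERED (OrderID)
    );

    -- Non-clustered index to support lookups by customer; INCLUDE covers the query below and avoids key lookups
    CREATE NONCLUSTERED INDEX IX_CustomerOrder_CustomerID
        ON dbo.CustomerOrder (CustomerID)
        INCLUDE (OrderDate, TotalDue);

    -- This query can now seek on IX_CustomerOrder_CustomerID instead of scanning the clustered index
    SELECT OrderDate, TotalDue
    FROM dbo.CustomerOrder
    WHERE CustomerID = 42;

    -- One way to investigate the "too many / too few indexes" question: reads vs. writes per index
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
        ON i.object_id = s.object_id
       AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID();

    An index showing many user_updates but few seeks or scans is being paid for on every write without earning its keep on reads – exactly the over-indexing trade-off mentioned in the interview.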

    Read the article

  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
    OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to batch is the worst performing. Simply put, it is a lie, or more accurately, we don't understand what these figures mean. Consider the two simple queries below:

    SELECT *
    FROM Person.BusinessEntity
    JOIN Person.BusinessEntityAddress
        ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID
    go
    SELECT *
    FROM Sales.SalesOrderDetail
    JOIN Sales.SalesOrderHeader
        ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    After executing these and looking at the plans, I see a 13% / 87% split. But 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths (though you are more than welcome to), that's a 90.2% / 9.8% split. Close, but no cigar. Let's try a different tack. Looking at the execution plan, the "Estimated Subtree Cost" of query 1 is 0.29449 and of query 2 it's 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97%; round those and those are the figures we are after. But what is the worrying word there? "Estimated". So these are not "actual" execution costs, but what's the problem in comparing the estimated costs to derive a meaning of "Most Costly"? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit. Let's modify the second query to also show the total number of lines on each order:

    SELECT *, COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID)
    FROM Sales.SalesOrderDetail
    JOIN Sales.SalesOrderHeader
        ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    The split in percentages is now 6% / 94%, and the Profiler metrics show even more of a discrepancy. Estimates can be out compared to actuals for a whole host of reasons; scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan. It always estimates to 0 (well, a very small number). Take for instance the following UDF:

    Create Function dbo.udfSumSalesForCustomer(@CustomerId integer)
    returns money
    as
    begin
        Declare @Sum money
        Select @Sum = SUM(SalesOrderHeader.TotalDue)
        from Sales.SalesOrderHeader
        where CustomerID = @CustomerId
        return @Sum
    end

    If we have two statements, one that fires the UDF and another that doesn't:

    Select CustomerID from Sales.Customer order by CustomerID
    go
    Select CustomerID, dbo.udfSumSalesForCustomer(Customer.CustomerID) from Sales.Customer order by CustomerID

    The cost relative to the batch is a 50/50 split, but there has to be an actual cost of firing the UDF. Indeed, Profiler shows us nowhere even remotely near 50/50!!!! Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same 'cost' too:

    SELECT SalesOrderDetailID, SalesOrderId,
           SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding)
    from Sales.SalesOrderdetail
    go
    SELECT SalesOrderDetailID, SalesOrderId,
           SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid Rows unbounded preceding)
    from Sales.SalesOrderdetail

    By now it won't be a great surprise that the Profiler trace reads are more than a *tiny* bit different.
    So, moral of the story: percentage relative to batch can give a rough ‘finger in the air’ measurement, but don't rely on it as fact.
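    (As a footnote, and not part of the original post: here is a minimal sketch of one way to capture the actual figures instead of trusting the estimated percentages. It assumes the same AdventureWorks sample tables used above and uses SET STATISTICS IO/TIME rather than a Profiler trace.)

    -- Report actual logical reads and CPU/elapsed time per statement in the Messages tab
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT *
    FROM Person.BusinessEntity
    JOIN Person.BusinessEntityAddress
        ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID;

    SELECT *
    FROM Sales.SalesOrderDetail
    JOIN Sales.SalesOrderHeader
        ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

    -- Alternatively, compare actual worker time and reads already recorded in the plan cache
    SELECT TOP (10)
           qs.execution_count,
           qs.total_worker_time,
           qs.total_logical_reads,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;

    Putting these numbers next to the plan's "cost relative to batch" percentages makes the estimate-versus-actual gap described above easy to verify for yourself.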

    Read the article

  • Refreshing Your PC Won’t Help: Why Bloatware is Still a Problem on Windows 8

    - by Chris Hoffman
    Bloatware is still a big problem on new Windows 8 and 8.1 PCs. Some websites will tell you that you can easily get rid of manufacturer-installed bloatware with Windows 8′s Reset feature, but they’re generally wrong. This junk software often turns the process of powering on your new PC from what could be a delightful experience into a tedious slog, forcing you to spend hours cleaning up your new PC before you can enjoy it. Why Refreshing Your PC (Probably) Won’t Help Manufacturers install software along with Windows on their new PCs. In addition to hardware drivers that allow the PC’s hardware to work properly, they install more questionable things like trial antivirus software and other nagware. Much of this software runs at boot, cluttering the system tray and slowing down boot times, often dramatically. Software companies pay computer manufacturers to include this stuff. It’s installed to make the PC manufacturer money at the cost of making the Windows computer worse for actual users. Windows 8 includes “Refresh Your PC” and “Reset Your PC” features that allow Windows users to quickly get their computers back to a fresh state. It’s essentially a quick, streamlined way of reinstalling Windows.  If you install Windows 8 or 8.1 yourself, the Refresh operation will give your PC a clean Windows system without any additional third-party software. However, Microsoft allows computer manufacturers to customize their Refresh images. In other words, most computer manufacturers will build their drivers, bloatware, and other system customizations into the Refresh image. When you Refresh your computer, you’ll just get back to the factory-provided system complete with bloatware. It’s possible that some computer manufacturers aren’t building bloatware into their refresh images in this way. It’s also possible that, when Windows 8 came out, some computer manufacturer didn’t realize they could do this and that refreshing a new PC would strip the bloatware. However, on most Windows 8 and 8.1 PCs, you’ll probably see bloatware come back when you refresh your PC. It’s easy to understand how PC manufacturers do this. You can create your own Refresh images on Windows 8 and 8.1 with just a simple command, replacing Microsoft’s image with a customized one. Manufacturers can install their own refresh images in the same way. Microsoft doesn’t lock down the Refresh feature. Desktop Bloatware is Still Around, Even on Tablets! Not only is typical Windows desktop bloatware not gone, it has tagged along with Windows as it moves to new form factors. Every Windows tablet currently on the market — aside from Microsoft’s own Surface and Surface 2 tablets — runs on a standard Intel x86 chip. This means that every Windows 8 and 8.1 tablet you see in stores has a full desktop with the capability to run desktop software. Even if that tablet doesn’t come with a keyboard, it’s likely that the manufacturer has preinstalled bloatware on the tablet’s desktop. Yes, that means that your Windows tablet will be slower to boot and have less memory because junk and nagging software will be on its desktop and in its system tray. Microsoft considers tablets to be PCs, and PC manufacturers love installing their bloatware. If you pick up a Windows tablet, don’t be surprised if you have to deal with desktop bloatware on it. Microsoft Surfaces and Signature PCs Microsoft is now selling their own Surface PCs that they built themselves — they’re now a “devices and services” company after all, not a software company. 
One of the nice things about Microsoft’s Surface PCs is that they’re free of the typical bloatware. Microsoft won’t take money from Norton to include nagging software that worsens the experience. If you pick up a Surface device that provides Windows 8.1 and 8 as Microsoft intended it — or install a fresh Windows 8.1 or 8 system — you won’t see any bloatware. Microsoft is also continuing their Signature program. New PCs purchased from Microsoft’s official stores are considered “Signature PCs” and don’t have the typical bloatware. For example, the same laptop could be full of bloatware in a traditional computer store and clean, without the nasty bloatware when purchased from a Microsoft Store. Microsoft will also continue to charge you $99 if you want them to remove your computer’s bloatware for you — that’s the more questionable part of the Signature program. Windows 8 App Bloatware is an Improvement There’s a new type of bloatware on new Windows 8 systems, which is thankfully less harmful. This is bloatware in the form of included “Windows 8-style”, “Store-style”, or “Modern” apps in the new, tiled interface. For example, Amazon may pay a computer manufacturer to include the Amazon Kindle app from the Windows Store. (The manufacturer may also just receive a cut of book sales for including it. We’re not sure how the revenue sharing works — but it’s clear PC manufacturers are getting money from Amazon.) The manufacturer will then install the Amazon Kindle app from the Windows Store by default. This included software is technically some amount of clutter, but it doesn’t cause the problems older types of bloatware does. It won’t automatically load and delay your computer’s startup process, clutter your system tray, or take up memory while you’re using your computer. For this reason, a shift to including new-style apps as bloatware is a definite improvement over older styles of bloatware. Unfortunately, this type of bloatware has not replaced traditional desktop bloatware, and new Windows PCs will generally have both. Windows RT is Immune to Typical Bloatware, But… Microsoft’s Windows RT can’t run Microsoft desktop software, so it’s immune to traditional bloatware. Just as you can’t install your own desktop programs on it, the Windows RT device’s manufacturer can’t install their own desktop bloatware. While Windows RT could be an antidote to bloatware, this advantage comes at the cost of being able to install any type of desktop software at all. Windows RT has also seemingly failed — while a variety of manufacturers came out with their own Windows RT devices when Windows 8 was first released, they’ve all since been withdrawn from the market. Manufacturers who created Windows RT devices have criticized it in the media and stated they have no plans to produce any future Windows RT devices. The only Windows RT devices still on the market are Microsoft’s Surface (originally named Surface RT) and Surface 2. Nokia is also coming out with their own Windows RT tablet, but they’re in the process of being purchased by Microsoft. In other words, Windows RT just isn’t a factor when it comes to bloatware — you wouldn’t get a Windows RT device unless you purchased a Surface, but those wouldn’t come with bloatware anyway. Removing Bloatware or Reinstalling Windows 8.1 While bloatware is still a problem on new Windows systems and the Refresh option probably won’t help you, you can still eliminate bloatware in the traditional way. 
Bloatware can be uninstalled from the Windows Control Panel or with a dedicated removal tool like PC Decrapifier, which tries to automatically uninstall the junk for you. You can also do what Windows geeks have always tended to do with new computers — reinstall Windows 8 or 8.1 from scratch with installation media from Microsoft. You’ll get a clean Windows system and you can install only the hardware drivers and other software you need. Unfortunately, bloatware is still a big problem for Windows PCs. Windows 8 tries to do some things to address bloatware, but it ultimately comes up short. Most Windows PCs sold in most stores to most people will still have the typical bloatware slowing down the boot process, wasting memory, and adding clutter. Image Credit: LG on Flickr, Intel Free Press on Flickr, Wilson Hui on Flickr, Intel Free Press on Flickr, Vernon Chan on Flickr     

    Read the article

  • The Unspoken Truth About Managing Geeks

    - by Malcolm Anderson
    Late last year, Jeff Ello wrote a great article for CIO magazine entitled "The Unspoken Truth About Managing Geeks" (http://www.cio.com/article/501697/The_Unspoken_Truth_About_Managing_Geeks). If you are a non-geek managing geeks you will find this article enlightening. It doesn't provide much in the way of solutions, but it does show you how you can stop digging the hole that you're in deeper than it already is. In the event that you are a geek with a manager who just doesn't get it, then just print out this sleek little 4-page article and drop it in your manager's in-basket.

    Read the article

  • Android SDK PATH Error: doesn't find home directory [migrated]

    - by THeo Oliveira
    I use Ubuntu 12.10 and I tried to follow this guide, Install android sdk and eclipse in Ubuntu 12.04, but I keep getting this error: Error: The AVD manager normally uses the user's profile directory to store AVD files. However it failed to find the default profile directory. To fix this, please set the environment variable ANDROID_SDK_HOME to a valid path such as "~". Any ideas how I can fix this?

    Read the article

  • Visual Studio 2010 Service Pack 1 And .NET Framework 4.0 Update

    - by Paulo Morgado
    As announced by Jason Zander in his blog post, Visual Studio 2010 Service Pack 1 has been available for download to MSDN subscribers since March 8 and to the general public since March 10. Brian Harry provides information related to TFS, and S. "Soma" Somasegar provides information on the latest Visual Studio 2010 enhancements. With this service pack for Visual Studio, an update to the .NET Framework 4.0 is also released. For detailed information about these releases, please refer to the corresponding KB articles: Update for Microsoft .NET Framework 4; Description of Visual Studio 2010 Service Pack 1. Update: When I was upgrading from the Beta to the final release on Windows 7 Enterprise 64-bit, the installation hung with "Returning IDCANCEL. INSTALLMESSAGE_WARNING [Warning 1946. Property 'System.AppUserModel.ExcludeFromShowInNewInstall' for shortcut 'Manage Help Settings - ENU.lnk' could not be set.]". Canceling the installation didn't work and I had to kill the setup.exe process. When reapplying it, rollbacks were reported, so I reapplied it again – this time with success.

    Read the article

  • Java Spotlight Episode 57: Live From #Devoxx - Ben Evans and Martijn Verburg of the London JUG with Yara Senger of SouJava

    - by Roger Brinkley
    Tweet Live from Devoxx 11: an interview with Ben Evans and Martijn Verburg from the London JUG along with Yara Senger from the SouJava JUG on the JCP Executive Committee Elections, JSR 348, and the Adopt-a-JSR program. Both the London JUG and SouJava JUG are JCP Standard Edition Executive Committee Members. Joining us this week on the Java All Star Developer Panel are Geertjan Wielenga, Principal Product Manager in Oracle Developer Tools; Stephen Chin, Java Champion and JavaFX expert; and Antonio Goncalves, Paris JUG leader. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes – News: NetBeans 7.1; JDK 7 upgrade tools; NetBeans First Patch Program; OpenJFX approved as an OpenJDK project; Devoxx France April 18-20, 2012. Events: Nov 22-25, OTN Developer Days in the Nordics; Nov 22-23, Goto Conference, Prague; Dec 6-8, JavaOne Brazil, Sao Paulo. Feature interview: Ben Evans has lived in "Interesting Times" in technology – he was the lead performance testing engineer for the Google IPO, worked on the initial UK trials of 3G networks with BT, built award-winning websites for some of Hollywood's biggest hits of the 90s, rearchitected and reimagined technology helping some of the most vulnerable people in the UK, and has worked on everything from some of the UK's very first ecommerce sites through to multi-billion dollar currency trading systems. He helps to run the London Java Community and represents the JUG on the Java SE/EE Executive Committee. His first book, "The Well-Grounded Java Developer" (with Martijn Verburg), has just been published by Manning. Martijn Verburg (aka 'the Diabolical Developer') herds cats in the Java/open source communities and is constantly humbled by the creative power to be found there. Currently he resides in London, where he co-leads the London JUG (a JCP EC member), runs a couple of open source projects, and drinks too much beer at his local pub. You can find him online moderating at the JavaRanch or discussing (ranting?) subjects on the Programmers Stack Exchange site. Most recently he has become a regular speaker at conferences on Java, open source, and software development, and has recently wrapped up his first Manning title, "The Well-Grounded Java Developer", with his co-author Ben Evans. Yara Senger is a partner and director of teacher education at Globalcode. She graduated from the University of Sao Paulo, Sao Carlos, and has significant experience in Brazil and abroad in developing critical Java solutions. She is the co-creator of the Java Academy and Web Developer Academy programs, accumulating over 1000 hours in the classroom teaching Java. She currently serves as the President of SouJava. In this interview Ben, Martijn, and Yara talk about the JCP Executive Committee Elections, JSR 348, and the Adopt-a-JSR program. Mail Bag What's Cool Show Transcripts Transcript for this show is available here when available.

    Read the article

< Previous Page | 561 562 563 564 565 566 567 568 569 570 571 572  | Next Page >