Search Results

Search found 6638 results on 266 pages for 'cache coherence'.


  • Varnish "FetchError no backend connection" error

    - by clueless-anon
    Varnishlog output:

        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1340829925 1.0
        12 SessionOpen c 79.124.74.11 3063 :80
        12 SessionClose c EOF
        12 StatSess c 79.124.74.11 3063 0 1 0 0 0 0 0 0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1340829928 1.0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1340829931 1.0
        12 SessionOpen c 108.62.115.226 46211 :80
        12 ReqStart c 108.62.115.226 46211 467185881
        12 RxRequest c GET
        12 RxURL c /
        12 RxProtocol c HTTP/1.0
        12 RxHeader c User-Agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)
        12 RxHeader c Host: www.mysite.com
        12 VCL_call c recv lookup
        12 VCL_call c hash
        12 Hash c /
        12 Hash c www.mysite.com
        12 VCL_return c hash
        12 VCL_call c miss fetch
        12 FetchError c no backend connection
        12 VCL_call c error deliver
        12 VCL_call c deliver deliver
        12 TxProtocol c HTTP/1.1
        12 TxStatus c 503
        12 TxResponse c Service Unavailable
        12 TxHeader c Server: Varnish
        12 TxHeader c Content-Type: text/html; charset=utf-8
        12 TxHeader c Retry-After: 5
        12 TxHeader c Content-Length: 418
        12 TxHeader c Accept-Ranges: bytes
        12 TxHeader c Date: Wed, 27 Jun 2012 20:45:31 GMT
        12 TxHeader c X-Varnish: 467185881
        12 TxHeader c Age: 1
        12 TxHeader c Via: 1.1 varnish
        12 TxHeader c Connection: close
        12 Length c 418
        12 ReqEnd c 467185881 1340829931.192433119 1340829931.891024113 0.000051022 0.698516846 0.000074035
        12 SessionClose c error
        12 StatSess c 108.62.115.226 46211 1 1 1 0 0 0 256 418
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1340829934 1.0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1340829937 1.0

    netstat -tlnp:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address     Foreign Address  State   PID/Program name
        tcp   0      0      0.0.0.0:8080      0.0.0.0:*        LISTEN  3086/nginx
        tcp   0      0      0.0.0.0:80        0.0.0.0:*        LISTEN  1915/varnishd
        tcp   0      0      0.0.0.0:22        0.0.0.0:*        LISTEN  1279/sshd
        tcp   0      0      127.0.0.2:25      0.0.0.0:*        LISTEN  3195/sendmail: MTA:
        tcp   0      0      127.0.0.2:6082    0.0.0.0:*        LISTEN  1914/varnishd
        tcp   0      0      127.0.0.2:9000    0.0.0.0:*        LISTEN  1317/php-fpm.conf)
        tcp   0      0      127.0.0.2:3306    0.0.0.0:*        LISTEN  1192/mysqld
        tcp   0      0      127.0.0.2:587     0.0.0.0:*        LISTEN  3195/sendmail: MTA:
        tcp   0      0      127.0.0.2:11211   0.0.0.0:*        LISTEN  3072/memcached
        tcp6  0      0      :::8080           :::*             LISTEN  3086/nginx
        tcp6  0      0      :::80             :::*             LISTEN  1915/varnishd
        tcp6  0      0      :::22             :::*             LISTEN  1279/sshd

    /etc/nginx/sites-enabled/default:

        server {
            listen 8080; ## listen for ipv4; this line is default and implied
            listen [::]:8080 default ipv6only=on; ## listen for ipv6

            root /usr/share/nginx/www;
            index index.html index.htm index.php;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ /index.html;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.2;
                deny all;
            }

            location /images {
                root /usr/share;
                autoindex off;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                fastcgi_pass 127.0.0.2:9000;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

    /etc/nginx/sites-enabled/www.mysite.com.vhost:

        server {
            listen 8080;
            server_name www.mysite.com mysite.com.net;
            root /var/www/www.mysite.com/web;

            if ($http_host != "www.mysite.com") {
                rewrite ^ http://www.mysite.com$request_uri permanent;
            }

            index index.php index.html;

            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }

            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }

            # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
            location ~ /\. {
                deny all;
                access_log off;
                log_not_found off;
            }

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
                expires max;
                log_not_found off;
            }

            location ~ \.php$ {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.2:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }

            include /var/www/www.mysite.com/web/nginx.conf;

            location ~ /nginx.conf {
                deny all;
                access_log off;
                log_not_found off;
            }
        }

    /etc/varnish/default.vcl:

        # This is a basic VCL configuration file for varnish. See the vcl(7)
        # man page for details on VCL syntax and semantics.
        #
        # Default backend definition. Set this to point to your content
        # server.
        #
        backend default {
            .host = "127.0.0.2";
            .port = "8080";
            # .connect_timeout = 600s;
            # .first_byte_timeout = 600s;
            # .between_bytes_timeout = 600s;
            # .max_connections = 800;
        }

    Note: uncommenting the last four options in default.vcl made no difference.

    cat /etc/default/varnish:

        # Configuration file for varnish
        #
        # /etc/init.d/varnish expects the variables $DAEMON_OPTS, $NFILES and $MEMLOCK
        # to be set from this shell script fragment.
        #
        # Should we start varnishd at boot? Set to "yes" to enable.
        START=yes

        # Maximum number of open files (for ulimit -n)
        NFILES=131072

        # Maximum locked memory size (for ulimit -l)
        # Used for locking the shared memory log in memory. If you increase log size,
        # you need to increase this number as well
        MEMLOCK=82000

        # Default varnish instance name is the local nodename. Can be overridden with
        # the -n switch, to have more instances on a single server.
        INSTANCE=$(uname -n)

        # This file contains 4 alternatives, please use only one.

        ## Alternative 1, Minimal configuration, no VCL
        #
        # Listen on port 6081, administration on localhost:6082, and forward to
        # content server on localhost:8080. Use a 1GB fixed-size cache file.
        #
        # DAEMON_OPTS="-a :6081 \
        #              -T localhost:6082 \
        #              -b localhost:8080 \
        #              -u varnish -g varnish \
        #              -S /etc/varnish/secret \
        #              -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"

        ## Alternative 2, Configuration with VCL
        #
        # Listen on port 6081, administration on localhost:6082, and forward to
        # one content server selected by the vcl file, based on the request. Use a 1GB
        # fixed-size cache file.
        #
        DAEMON_OPTS="-a :80 \
                     -T 127.0.0.2:6082 \
                     -f /etc/varnish/default.vcl \
                     -S /etc/varnish/secret \
                     -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"

    If you need any other info, let me know. I'm all out of clues as to what the problem is.
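    Two checks worth making, sketched under the assumption that the VCL above is the one actually loaded. First, confirm nginx answers on the exact address and port the backend definition points at. Second, give the backend a health probe so Varnish tells you when (and why) it considers the backend down; this is Varnish 2.1/3.x probe syntax, a diagnostic sketch rather than a known fix:

        # Does the backend answer where Varnish expects it?
        curl -v -H 'Host: www.mysite.com' http://127.0.0.2:8080/

        # /etc/varnish/default.vcl -- same backend, with an explicit health probe
        backend default {
            .host = "127.0.0.2";
            .port = "8080";
            .probe = {
                .url = "/";         # page the probe fetches
                .interval = 5s;     # probe every 5 seconds
                .timeout = 1s;      # fail the probe after 1 second
                .window = 5;        # judge health over the last 5 probes
                .threshold = 3;     # at least 3 of 5 must succeed
            }
        }

    With a probe in place, "varnishadm -T 127.0.0.2:6082 -S /etc/varnish/secret debug.health" (Varnish 2/3) reports the backend state, which narrows the problem to either plain connectivity or timeouts under load.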


  • How to use sudo with WinSCP and ProFTPd?

    - by Gaia
    I need to run the SFTP fileserver binary as root, but direct root login is not allowed. In WinSCP, if I use "default" for the SFTP server protocol option, everything works as expected. Following the instructions for sudo in WinSCP, I tried using "sudo /usr/sbin/proftpd" (which works on the command line without any prompts), but it brings up "Cannot initialize SFTP protocol. Is the host running a SFTP server?" How do I use sudo with WinSCP and ProFTPd?

    Environment: WinSCP 4.3.7 GUI, protocol SFTP-3; CentOS 6.2; Webmin/Virtualmin (current version). PS: only certificate-based login is allowed.

        . 2012-06-17 11:05:56.998 --------------------------------------------------------------------------
        . 2012-06-17 11:05:56.998 WinSCP Version 4.3.7 (Build 1679) (OS 6.1.7601 Service Pack 1)
        . 2012-06-17 11:05:56.998 Configuration: HKEY_CURRENT_USER\Software\Martin Prikryl\WinSCP 2\
        . 2012-06-17 11:05:56.999 Login time: Sunday, June 17, 2012 11:05:56 AM
        . 2012-06-17 11:05:56.999 --------------------------------------------------------------------------
        . 2012-06-17 11:05:56.999 Session name: KVM1 (Modified stored session)
        . 2012-06-17 11:05:57.047 Host name: mykvm.com (Port: 22)
        . 2012-06-17 11:05:57.048 User name: adminuser (Password: No, Key file: Yes)
        . 2012-06-17 11:05:57.048 Tunnel: No
        . 2012-06-17 11:05:57.048 Transfer Protocol: SFTP (SCP)
        . 2012-06-17 11:05:57.048 Ping type: -, Ping interval: 30 sec; Timeout: 15 sec
        . 2012-06-17 11:05:57.048 Proxy: none
        . 2012-06-17 11:05:57.048 SSH protocol version: 2; Compression: Yes
        . 2012-06-17 11:05:57.048 Bypass authentication: No
        . 2012-06-17 11:05:57.048 Try agent: Yes; Agent forwarding: No; TIS/CryptoCard: No; KI: Yes; GSSAPI: No
        . 2012-06-17 11:05:57.048 Ciphers: aes,blowfish,3des,WARN,arcfour,des; Ssh2DES: No
        . 2012-06-17 11:05:57.048 SSH Bugs: -,-,-,-,-,-,-,-,-
        . 2012-06-17 11:05:57.048 SFTP Bugs: -,-
        . 2012-06-17 11:05:57.048 Return code variable: Autodetect; Lookup user groups: Yes
        . 2012-06-17 11:05:57.048 Shell: default
        . 2012-06-17 11:05:57.048 EOL: 0, UTF: 2
        . 2012-06-17 11:05:57.048 Clear aliases: Yes, Unset nat.vars: Yes, Resolve symlinks: Yes
        . 2012-06-17 11:05:57.048 LS: ls -la, Ign LS warn: Yes, Scp1 Comp: No
        . 2012-06-17 11:05:57.048 Local directory: default, Remote directory: home, Update: No, Cache: Yes
        . 2012-06-17 11:05:57.048 Cache directory changes: Yes, Permanent: Yes
        . 2012-06-17 11:05:57.048 DST mode: 1
        . 2012-06-17 11:05:57.048 --------------------------------------------------------------------------
        . 2012-06-17 11:05:57.113 Looking up host "mykvm.com"
        . 2012-06-17 11:05:57.132 Connecting to xxx.xxx.128.59 port 22
        . 2012-06-17 11:05:57.499 Server version: SSH-2.0-OpenSSH_5.3
        . 2012-06-17 11:05:57.499 Using SSH protocol version 2
        . 2012-06-17 11:05:57.499 We claim version: SSH-2.0-WinSCP_release_4.3.7
        . 2012-06-17 11:05:57.679 Server supports delayed compression; will try this later
        . 2012-06-17 11:05:57.679 Doing Diffie-Hellman group exchange
        . 2012-06-17 11:05:58.077 Doing Diffie-Hellman key exchange with hash SHA-1
        . 2012-06-17 11:05:58.498 Host key fingerprint is:
        . 2012-06-17 11:05:58.498 ssh-rsa 2048 bd:e4:34:b1:d4:69:d6:4e:e4:26:04:8b:b7:b3:de:c3
        . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR client->server encryption
        . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 client->server MAC algorithm
        . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR server->client encryption
        . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 server->client MAC algorithm
        . 2012-06-17 11:05:58.922 Reading private key file "D:\id_rsa.ppk"
        ! 2012-06-17 11:05:58.924 Using username "adminuser".
        . 2012-06-17 11:05:59.550 Offered public key
        . 2012-06-17 11:05:59.743 Offer of public key accepted
        ! 2012-06-17 11:05:59.743 Authenticating with public key "masterkey for admin"
        . 2012-06-17 11:05:59.764 Prompt (3, SSH key passphrase, , Passphrase for key "masterkey for admin": )
        . 2012-06-17 11:06:02.938 Sent public key signature
        . 2012-06-17 11:06:03.352 Access granted
        . 2012-06-17 11:06:03.352 Initiating key re-exchange (enabling delayed compression)
        . 2012-06-17 11:06:03.765 Doing Diffie-Hellman group exchange
        . 2012-06-17 11:06:03.955 Doing Diffie-Hellman key exchange with hash SHA-1
        . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR client->server encryption
        . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 client->server MAC algorithm
        . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) compression
        . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR server->client encryption
        . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 server->client MAC algorithm
        . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) decompression
        . 2012-06-17 11:06:04.839 Opened channel for session
        . 2012-06-17 11:06:05.247 Started a shell/command
        . 2012-06-17 11:06:05.253 --------------------------------------------------------------------------
        . 2012-06-17 11:06:05.253 Using SFTP protocol.
        . 2012-06-17 11:06:05.253 Doing startup conversation with host.
        > 2012-06-17 11:06:05.259 Type: SSH_FXP_INIT, Size: 5, Number: -1
        . 2012-06-17 11:06:05.354 Server sent command exit status 0
        . 2012-06-17 11:06:05.354 Disconnected: All channels closed
        * 2012-06-17 11:06:05.380 (ESshFatal) Connection has been unexpectedly closed. Server sent command exit status 0.
        * 2012-06-17 11:06:05.380 Cannot initialize SFTP protocol. Is the host running a SFTP server?
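    One plausible reading of the log: proftpd launched as "sudo /usr/sbin/proftpd" daemonizes and exits immediately (hence "Server sent command exit status 0"), leaving nothing speaking SFTP on stdin/stdout for WinSCP to talk to. The usual sudo-over-SFTP setup instead runs the stock OpenSSH sftp-server binary under sudo. A minimal sketch, assuming the binary lives at /usr/libexec/openssh/sftp-server (the default path on CentOS 6; adjust if yours differs):

        # /etc/sudoers -- edit with visudo; both lines are assumptions to verify.
        # !requiretty is needed because the sudo runs without a terminal,
        # and NOPASSWD because there is nowhere to type a password.
        Defaults:adminuser !requiretty
        adminuser ALL=(root) NOPASSWD: /usr/libexec/openssh/sftp-server

    Then, in the WinSCP session's SFTP settings (the same place "default" was set), set the SFTP server command to "sudo /usr/libexec/openssh/sftp-server".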


  • How to diagnose failing 6Gbps SATA connection?

    - by whitequark
    I have a Samsung RC530 notebook and an OCZ Vertex 3 6Gbps SATA SSD working in AHCI mode.

        # dmesg | grep DMI
        SAMSUNG ELECTRONICS CO., LTD. RC530/RC730/RC530/RC730, BIOS 03WD.M008.20110927.PSA 09/27/2011

        # lspci -nn
        00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 04)

        # sdparm -a /dev/sda
        /dev/sda: ATA OCZ-VERTEX3 2.15

    At boot, the following messages are present in dmesg (I am running Debian wheezy @ Linux 3.2.8):

        # dmesg | grep -iE '(ata|ahci)'
        [ 5.179783] ahci 0000:00:1f.2: version 3.0
        [ 5.179802] ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19
        [ 5.179864] ahci 0000:00:1f.2: irq 42 for MSI/MSI-X
        [ 5.195424] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x5 impl SATA mode
        [ 5.195429] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ems apst
        [ 5.195436] ahci 0000:00:1f.2: setting latency timer to 64
        [ 5.204035] scsi0 : ahci
        [ 5.204301] scsi1 : ahci
        [ 5.204447] scsi2 : ahci
        [ 5.204592] scsi3 : ahci
        [ 5.204682] scsi4 : ahci
        [ 5.204799] scsi5 : ahci
        [ 5.204917] ata1: SATA max UDMA/133 abar m2048@0xf7c06000 port 0xf7c06100 irq 42
        [ 5.204920] ata2: DUMMY
        [ 5.204923] ata3: SATA max UDMA/133 abar m2048@0xf7c06000 port 0xf7c06200 irq 42
        [ 5.204924] ata4: DUMMY
        [ 5.204926] ata5: DUMMY
        [ 5.204927] ata6: DUMMY
        [ 5.523039] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
        [ 5.525911] ata3.00: ATAPI: TSSTcorp CDDVDW SN-208BB, SC00, max UDMA/100
        [ 5.531006] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        [ 5.533703] ata3.00: configured for UDMA/100
        [ 5.542790] ata1.00: ATA-8: OCZ-VERTEX3, 2.15, max UDMA/133
        [ 5.542800] ata1.00: 117231408 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
        [ 5.552751] ata1.00: configured for UDMA/133
        [ 5.553050] scsi 0:0:0:0: Direct-Access ATA OCZ-VERTEX3 2.15 PQ: 0 ANSI: 5
        [ 5.559621] scsi 2:0:0:0: CD-ROM TSSTcorp CDDVDW SN-208BB SC00 PQ: 0 ANSI: 5
        [ 5.564059] sd 0:0:0:0: [sda] 117231408 512-byte logical blocks: (60.0 GB/55.8 GiB)
        [ 5.564127] sd 0:0:0:0: [sda] Write Protect is off
        [ 5.564131] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        [ 5.564158] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 5.564582] sda: sda1
        [ 5.564810] sd 0:0:0:0: [sda] Attached SCSI disk
        [ 5.572006] sr0: scsi3-mmc drive: 16x/24x writer dvd-ram cd/rw xa/form2 cdda tray
        [ 5.572010] cdrom: Uniform CD-ROM driver Revision: 3.20
        [ 5.572189] sr 2:0:0:0: Attached scsi CD-ROM sr0
        [ 6.717181] ata1.00: exception Emask 0x50 SAct 0x1 SErr 0x280900 action 0x6 frozen
        [ 6.717238] ata1.00: irq_stat 0x08000000, interface fatal error
        [ 6.717291] ata1: SError: { UnrecovData HostInt 10B8B BadCRC }
        [ 6.717342] ata1.00: failed command: READ FPDMA QUEUED
        [ 6.717395] ata1.00: cmd 60/50:00:20:39:58/00:00:00:00:00/40 tag 0 ncq 40960 in
        [ 6.717396]          res 40/00:00:20:39:58/00:00:00:00:00/40 Emask 0x50 (ATA bus error)
        [ 6.717503] ata1.00: status: { DRDY }
        [ 6.717553] ata1: hard resetting link
        [ 7.033417] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        [ 7.055234] ata1.00: configured for UDMA/133
        [ 7.055262] ata1: EH complete
        [ 7.147280] ata1.00: exception Emask 0x10 SAct 0xf8 SErr 0x280100 action 0x6 frozen
        [ 7.147340] ata1.00: irq_stat 0x08000000, interface fatal error
        [ 7.147393] ata1: SError: { UnrecovData 10B8B BadCRC }
        [ 7.147460] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.147529] ata1.00: cmd 60/08:18:88:17:41/00:00:02:00:00/40 tag 3 ncq 4096 in
        [ 7.147531]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.147691] ata1.00: status: { DRDY }
        [ 7.147754] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.147821] ata1.00: cmd 60/00:20:f8:42:4c/01:00:02:00:00/40 tag 4 ncq 131072 in
        [ 7.147822]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.147977] ata1.00: status: { DRDY }
        [ 7.148036] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.148100] ata1.00: cmd 60/50:28:f8:43:4c/00:00:02:00:00/40 tag 5 ncq 40960 in
        [ 7.148101]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.148255] ata1.00: status: { DRDY }
        [ 7.148315] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.148379] ata1.00: cmd 60/00:30:50:98:64/01:00:02:00:00/40 tag 6 ncq 131072 in
        [ 7.148380]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.148534] ata1.00: status: { DRDY }
        [ 7.148593] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.148657] ata1.00: cmd 60/00:38:50:99:64/01:00:02:00:00/40 tag 7 ncq 131072 in
        [ 7.148658]          res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.148813] ata1.00: status: { DRDY }
        [ 7.148875] ata1: hard resetting link
        [ 7.464842] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        [ 7.486794] ata1.00: configured for UDMA/133
        [ 7.486822] ata1: EH complete
        [ 7.546395] ata1.00: exception Emask 0x10 SAct 0x2f SErr 0x280100 action 0x6 frozen
        [ 7.546470] ata1.00: irq_stat 0x08000000, interface fatal error
        [ 7.546531] ata1: SError: { UnrecovData 10B8B BadCRC }
        [ 7.546588] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.546648] ata1.00: cmd 60/00:00:e0:4b:61/01:00:02:00:00/40 tag 0 ncq 131072 in
        [ 7.546649]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.546794] ata1.00: status: { DRDY }
        [ 7.546847] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.546906] ata1.00: cmd 60/00:08:90:2f:48/01:00:02:00:00/40 tag 1 ncq 131072 in
        [ 7.546907]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.547053] ata1.00: status: { DRDY }
        [ 7.547106] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.547165] ata1.00: cmd 60/00:10:90:30:48/01:00:02:00:00/40 tag 2 ncq 131072 in
        [ 7.547166]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.547310] ata1.00: status: { DRDY }
        [ 7.547363] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.547422] ata1.00: cmd 60/00:18:50:c7:64/01:00:02:00:00/40 tag 3 ncq 131072 in
        [ 7.547423]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.547568] ata1.00: status: { DRDY }
        [ 7.547621] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.547681] ata1.00: cmd 60/00:28:e0:4c:61/01:00:02:00:00/40 tag 5 ncq 131072 in
        [ 7.547682]          res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.547825] ata1.00: status: { DRDY }
        [ 7.547882] ata1: hard resetting link
        [ 7.864408] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
        [ 7.886351] ata1.00: configured for UDMA/133
        [ 7.886375] ata1: EH complete
        [ 7.890012] ata1: limiting SATA link speed to 3.0 Gbps
        [ 7.890016] ata1.00: exception Emask 0x10 SAct 0x7 SErr 0x280100 action 0x6 frozen
        [ 7.890093] ata1.00: irq_stat 0x08000000, interface fatal error
        [ 7.890152] ata1: SError: { UnrecovData 10B8B BadCRC }
        [ 7.890210] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.890272] ata1.00: cmd 60/00:00:90:33:48/01:00:02:00:00/40 tag 0 ncq 131072 in
        [ 7.890273]          res 40/00:10:e0:4f:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.890418] ata1.00: status: { DRDY }
        [ 7.890472] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.890530] ata1.00: cmd 60/00:08:90:34:48/01:00:02:00:00/40 tag 1 ncq 131072 in
        [ 7.890531]          res 40/00:10:e0:4f:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.890672] ata1.00: status: { DRDY }
        [ 7.890724] ata1.00: failed command: READ FPDMA QUEUED
        [ 7.890781] ata1.00: cmd 60/78:10:e0:4f:61/00:00:02:00:00/40 tag 2 ncq 61440 in
        [ 7.890782]          res 40/00:10:e0:4f:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error)
        [ 7.890925] ata1.00: status: { DRDY }
        [ 7.890981] ata1: hard resetting link
        [ 8.208021] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
        [ 8.230100] ata1.00: configured for UDMA/133
        [ 8.230124] ata1: EH complete

    It looks like the SATA interface tries to use a 6Gbps link, fails miserably, and Linux falls back to 3Gbps. This is somewhat fine for me, as the system boots successfully each time and works under high load (cd linux-3.2.8; make -j16). I've also run memtest86+ and it did not find any errors. What concerns me more is that GRUB sometimes takes a long time to load its images and/or fails to load itself completely. The failure is consistent and probabilistic: each time I boot, there is a certain chance of failure. Actually, I have a slight suspicion about the cause. Look at the cabling: what kind of engineer does it this way? Even 1Gbps Ethernet hardly tolerates cables bent at a sharp angle, and here we have 6Gbps SATA. How could I determine and fix the cause of the errors, and/or switch the link to 3Gbps mode permanently?
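    The BadCRC / 10B8B decode errors in the SError lines are link-level symptoms, which is consistent with the cabling suspicion, so reseating or replacing the cable is the cheap first test. For pinning the link at 3.0 Gbps permanently instead of waiting for the error handler to downshift on every boot, libata accepts a per-port force parameter on the kernel command line. A sketch, assuming the SSD stays on the first port (ata1, as in the log above):

        # /etc/default/grub -- append the parameter, then run update-grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=1:3.0Gbps"

    The "1:" prefix scopes the limit to ata1, so the optical drive on ata3 is unaffected.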


  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

        kaefert@blechmobil:~$ lsusb -s 2:3
        Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there is some problem that prevents the disk from being mounted:

        kaefert@blechmobil:~$ dmesg
        ...
        [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd
        [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320
        [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
        [ 113.217790] usb 2-1: Product: Expansion Desk
        [ 113.217792] usb 2-1: Manufacturer: Seagate
        [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K
        [ 113.435404] usbcore: registered new interface driver uas
        [ 113.455315] Initializing USB Mass Storage driver...
        [ 113.468051] scsi5 : usb-storage 2-1:1.0
        [ 113.468180] usbcore: registered new interface driver usb-storage
        [ 113.468182] USB Mass Storage support registered.
        [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6
        [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off
        [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
        [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.501649] sdb: sdb1
        [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
        [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
        [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted!
        ...

    So I fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

        e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, it sadly doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925 ). I started this whole thing on Sunday evening (2012-11-04 22:00), so about 48 hours ago. This is what htop says about it now (2012-11-06 19:00):

        PID  USER  PRI  NI  VIRT   RES    SHR  S  CPU%  MEM%  TIME+     Command
        3704 root   39  19  1560M  1166M  768  R  98.0  19.5  42h56:43  e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slowly, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613 , where they write that it's a good idea to check whether the disk is just that slow, because it might be damaged. I think these outputs show that this is not the case here:

        kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads:   3562 MB in 2.00 seconds = 1783.29 MB/sec
         Timing buffered disk reads:  82 MB in 3.01 seconds = 27.26 MB/sec

        kaefert@blechmobil:~$ sudo hdparm /dev/sdb
        /dev/sdb:
         multcount = 0 (off)
         readonly  = 0 (off)
         readahead = 256 (on)
         geometry  = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop, or this:

        kaefert@blechmobil:~$ iostat -x
        Linux 3.2.0-2-amd64 (blechmobil)  2012-11-06  _x86_64_  (2 CPU)

        avg-cpu: %user  %nice  %system  %iowait  %steal  %idle
                 14,24  47,81    14,63     0,95    0,00  22,37

        Device:  rrqm/s  wrqm/s   r/s   w/s   rkB/s   wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
        sda        0,59    8,29  2,42  5,14   43,17  160,17     53,75      0,30  39,80     8,72    54,42   3,95   2,99
        sdb      137,54    5,48  9,23  0,20  587,07   22,73    129,35      0,07   7,70     7,51    16,18   2,17   2,04

    Now I researched a little on how to find out what e2fsck is doing with all that processor time, and found the tool strace, which gives me this:

        kaefert@blechmobil:~$ sudo strace -p3704
        lseek(4, 41026998272, SEEK_SET) = 41026998272
        write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
        lseek(4, 48404766720, SEEK_SET) = 48404766720
        read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
        lseek(4, 41027002368, SEEK_SET) = 41027002368
        write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
        lseek(4, 48404770816, SEEK_SET) = 48404770816
        read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
        lseek(4, 41027006464, SEEK_SET) = 41027006464
        write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
        lseek(4, 48404774912, SEEK_SET) = 48404774912
        read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
        ^CProcess 3704 detached

    There are around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot. And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes; with this being a 3 terabyte disk, that would give me 134 days to go, if the speed stays constant and e2fsck scans the disk completely, and only once. Do you have some advice for me?

    I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without reformatting it. I don't think the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06 23:00); now it looks like this:

        lseek(4, 1419860611072, SEEK_SET) = 1419860611072
        read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
        lseek(4, 43018145792, SEEK_SET) = 43018145792
        write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
        lseek(4, 1419860615168, SEEK_SET) = 1419860615168
        read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
        lseek(4, 43018149888, SEEK_SET) = 43018149888
        write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
        lseek(4, 1419860619264, SEEK_SET) = 1419860619264
        read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
        lseek(4, 43018153984, SEEK_SET) = 43018153984
        write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the numbers in the lseek lines before the reads, like 1419860619264, are already a lot bigger, standing for 1.29 terabytes if those numbers are bytes. Progress doesn't seem to be linear on a large scale; maybe there are only some areas that need work, with big gaps between them.

    UPDATE 2: Okay, big disappointment, the numbers are back to very small again (2012-11-07 07:20):

        lseek(4, 52174548992, SEEK_SET) = 52174548992
        read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096
        lseek(4, 46603526144, SEEK_SET) = 46603526144
        write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096

    So either e2fsck goes over the data multiple times, or it just hops back and forth multiple times, or my assumption that those numbers are bytes is wrong.

    UPDATE 3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can run testdisk while e2fsck is running, I tried that, though without much success. When asking testdisk to display the data of my partition, this is what I get:

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        1 P Linux 0 4 5 45600 40 8 732566272
        Can't open filesystem. Filesystem seems damaged.

    And this is what strace currently gives me (2012-11-07 10:30):

        lseek(4, 212460343296, SEEK_SET) = 212460343296
        read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096
        lseek(4, 47347830784, SEEK_SET) = 47347830784
        write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096

    (times are in CET)
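    Two quick checks are worth making before committing to a 134-day wait; this is a sketch, assuming the device and offsets quoted above (and note that restarting e2fsck of course throws away the work done so far):

        # Re-run with a progress indicator so advancement is visible
        # (-C 0 writes a progress bar to stdout; standard e2fsck option)
        sudo e2fsck -f -y -C 0 /dev/sdb1

        # Sanity-check the "are those lseek offsets bytes?" theory by comparing
        # the latest offset against the partition size
        TOTAL=$(sudo blockdev --getsize64 /dev/sdb1)
        OFFSET=212460343296   # from the latest strace sample above
        echo "scale=4; 100 * $OFFSET / $TOTAL" | bc   # percent of device, if bytes

    If the percentage jumps around the way the strace samples suggest, the offsets are probably alternating between the data being checked (the reads) and e2fsck's own bookkeeping writes, rather than a single linear pass.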


  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
    I'm setting up a couple of dedicated servers and having problems setting up my nameservers properly. One of these is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5 with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM.

    So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end. Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve.

    Things I have tried:
    - Going through the standard procedures to set up DNS in WHM
    - Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through the WHM procedures for setting up DNS
    - Clearing all DNS information and setting up BIND via the shell (completely outside of cPanel), using my config and zone files from the LEMP server as a template

    named runs just fine, but nothing is resolving. When I "dig any example.com" I get a SERVFAIL message. nslookups return no information. Here are my config and zone files.

    named.conf:

        controls {
            inet 127.0.0.1 allow { localhost; } keys { coretext-key; };
        };

        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            // Those options should be used carefully because they disable port
            // randomization
            // query-source port 53;
            // query-source-v6 port 53;
            allow-query { any; };
            allow-query-cache { any; };
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        view "localhost_resolver" {
            match-clients { 127.0.0.0/24; };
            match-destinations { localhost; };
            recursion yes;
            //zone "." IN {
            //    type hint;
            //    file "/var/named/named.ca";
            //};
            include "/etc/named.rfc1912.zones";
        };

        view "internal" {
            /* This view will contain zones you want to serve only to "internal"
               clients that connect via your directly attached LAN interfaces - "localnets". */
            match-clients { localnets; };
            match-destinations { localnets; };
            recursion yes;
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            // include "/var/named/named.rfc1912.zones";
            // you should not serve your rfc1912 names to non-localhost clients.
            // These are your "authoritative" internal zones, and would probably
            // also be included in the "localhost_resolver" view above:
            zone "example.com" {
                type master;
                file "data/db.example.com";
            };
            zone "3.2.1.in-addr.arpa" {
                type master;
                file "data/db.1.2.3";
            };
        };

        view "external" {
            /* This view will contain zones you want to serve only to "external"
               clients that have addresses that are not on your directly attached
               LAN interface subnets: */
            match-clients { any; };
            match-destinations { any; };
            recursion no;
            // you'd probably want to deny recursion to external clients, so you don't
            // end up providing free DNS service to all takers
            allow-query-cache { none; };
            // Disable lookups for any cached data and root hints
            // all views must contain the root hints zone:
            //include "/etc/named.rfc1912.zones";
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            zone "example.com" {
                type master;
                file "data/db.example.com";
            };
            zone "3.2.1.in-addr.arpa" {
                type master;
                file "data/db.1.2.3";
            };
        };

        include "/etc/rndc.key";

    db.example.com:

        $TTL 1D
        ;
        ; Zone file for example.com
        ;
        ; Mandatory minimum for a working domain
        ;
        @       IN      SOA     ns1.example.com. contact.example.com. (
                                2011042905      ; serial
                                8H              ; refresh
                                2H              ; retry
                                4W              ; expire
                                1D )            ; default_ttl
                        NS      ns1.example.com.
                        NS      ns2.example.com.
        ns1             A       1.2.3.4
        ns2             A       1.2.3.5
        example.com.    A       1.2.3.4
        localhost       A       127.0.0.1
        www             CNAME   example.com.
        mail            CNAME   example.com.
        ;

    db.1.2.3:

        $TTL 1D
        $ORIGIN 3.2.1.in-addr.arpa.
        @       IN      SOA     ns1.example.com contact.example.com. (
                                2011042908 ;
                                8H ;
                                2H ;
                                4W ;
                                1D ; )
                        NS      ns1.example.com.
                        NS      ns2.example.com.
        4       PTR     hostname.example.com.
        5       PTR     hostname.example.com.
        ;

    Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong; then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spends his whole shift asking all the same questions the last guy asked.

    So, in summary:
    - Nameservers, with IPs, are correctly registered with the domain registrar
    - named is configured and running
    - ...and must not be configured correctly, because nothing resolves.

    Any help would be great. I changed the domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks!

    UPDATE: I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with the two public IPs that named is listening on.

    resolv.conf:

        search www.example.com example.com
        nameserver 127.0.0.1
        nameserver 7.8.9.10 ;Was in here by default, authoritative nameserver of hosting company
        nameserver 1.2.3.4 ;Public IP #1
        nameserver 1.2.3.5 ;Public IP #2

    Now when I dig example.com from the host, it resolves. If I try to dig from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.
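    Given that queries succeed on the host but fail remotely, it may help to separate "named isn't reachable from outside" from "the external view isn't serving the zone". A sketch of both checks, using the generic placeholder IPs from the files above:

        # From a remote machine: ask this server directly, with no recursion,
        # bypassing /etc/resolv.conf entirely
        dig @1.2.3.4 example.com SOA +norecurse

        # On the server: confirm named is listening on the public address,
        # not just loopback
        netstat -lnup | grep ':53'

        # And confirm port 53 isn't filtered by the host firewall
        iptables -L -n | grep 53

    A timeout on the first command usually points at listening addresses or the firewall; a SERVFAIL from it points at the zone/view configuration (for example, which view the remote client actually matches).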


  • Rails 2 and Nginx: https pages can't load css or js (but will load graphics)

    - by Max Williams
    ADMISSION: I've posted this same question on Stack Overflow, before realising it's probably better suited to Super User. But it kind of depends on the answer: if it turns out to be a problem in my nginx config, it's definitely Super User; if it turns out to be a problem in my Rails config (or code), then it's arguably Stack Overflow.

    I'm adding some https pages to my Rails site. In order to test it locally, I'm running my site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except the JavaScript and CSS files all fail to load: looking in the Network tab in Chrome's web tools, I can see that it is trying to load them via an https URL. E.g., one of the non-working file URLs is https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216

    I have these set up (or at least think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for CSS and JS files. If I click on this URL in the Network tab, it takes me to the URL above, which redirects to the http version. So the redirect seems to be working in some sense, but not when the files are loaded by an https page. Like I say, I thought I had this covered by the second try_files directive in my config below, but maybe not. Can anyone see what I'm doing wrong? Thanks, Max

    Here's my nginx config; sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block:

        server {
            listen 443 ssl;
            keepalive_timeout 70;

            ssl_certificate     /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt;
            ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key;
            ssl_session_cache   shared:SSL:10m;
            ssl_session_timeout 10m;
            ssl_protocols       SSLv3 TLSv1;
            ssl_ciphers         RC4:HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;

            server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
            root /home/max/work/charanga/elearn_container/elearn;

            # ensure that we serve css, js, other statics when requested
            # as SSL, but if the files don't exist (i.e. any non /basket controller)
            # then redirect to the non-https version
            location / {
                try_files $uri @non-ssl-redirect;
            }

            # securely serve everything under /basket (/basket/checkout etc)
            # we need general too, because of the email/username checking
            location ~ ^/(basket|general|cmw/account/check_username_availability) {
                # make sure cached copies are revalidated once they're stale
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";

                # this serves Rails static files that exist without running
                # other rewrite tests
                try_files $uri @rails-ssl;
                expires 1h;
            }

            location @non-ssl-redirect {
                return 301 http://$host$request_uri;
            }

            location @rails-ssl {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 180;
                proxy_next_upstream off;
                proxy_pass http://127.0.0.1:3000;
                expires 0d;
            }
        }

        #upstream elrs {
        #    server 127.0.0.1:3000;
        #}

        server {
            listen 80;
            server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
            root /home/max/work/charanga/elearn_container/elearn;
            access_log /home/max/work/charanga/elearn_container/elearn/log/access.log;
            error_log  /home/max/work/charanga/elearn_container/elearn/log/error.log debug;
            client_max_body_size 50M;
            index index.html index.htm;

            # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6
            # (i.e. those *without* SV1 in their user-agent string)
            gzip on;
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_comp_level 6;
            gzip_proxied any;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html

            # make sure gzip does not lose large gzipped js or css files
            # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl
            gzip_buffers 16 8k;

            # Disable gzip for certain browsers.
            #gzip_disable "MSIE [1-6].(?!.*SV1)";
            gzip_disable "MSIE [1-6]";

            # blank gif like it's 1995
            location = /images/blank.gif {
                empty_gif;
            }

            # don't serve files beginning with dots
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            # we don't care if these are missing
            location = /robots.txt  { log_not_found off; }
            location = /favicon.ico { log_not_found off; }
            location ~ affiliate.xml { log_not_found off; }
            location ~ copyright.xml { log_not_found off; }

            # convert urls with multiple slashes to a single /
            if ($request ~ /+ ) {
                rewrite ^(/)+(.*) /$2 break;
            }

            # X-Accel-Redirect
            # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead
            location /zips {
                internal;
                root /var/www/apps/e_learning_resource/shared/assets;
            }

            location /tmp {
                internal;
                root /;
            }

            location /mnt {
                root /;
            }

            # resource library thumbnails should be served as usual
            location ~ ^/resource_library/.*/*thumbnail.jpg$ {
                if (!-f $request_filename) {
                    rewrite ^(.*)$ /images/no-thumb.png break;
                }
                expires 1m;
            }

            # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here
            location ~ "lesson viewer.dcr" {
                rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break;
            }

            # we need this rule so we don't serve the older lessonviewer when the rule below is matched
            location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf {
                rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break;
            }

            location ~ v6lessonViewer.swf {
                rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break;
            }

            location ~ lessonViewer.swf {
                rewrite ^(.*)$ /assets/players/lessonViewer.swf break;
            }

            location ~ lgn111.dat {
                empty_gif;
            }

            # try to get autocomplete school names from memcache first, then
            # fall back to rails when we can't
            location /schools/autocomplete {
                set $memcached_key $uri?q=$arg_q;
                memcached_pass 127.0.0.1:11211;
                default_type text/html;
                error_page 404 =200 @rails; # 404 not really! Hand off to rails
            }

            location / {
                # make sure cached copies are revalidated once they're stale
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";

                # this serves Rails static files that exist without running other rewrite tests
                try_files $uri @rails;
                expires 1h;
            }

            location @rails {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 180;
                proxy_next_upstream off;
                proxy_pass http://127.0.0.1:3000;
                expires 0d;
            }
        }
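    One likely explanation for the symptom (images load, CSS/JS don't): browsers treat stylesheets and scripts as *active* mixed content and block them on an https page, while images are *passive* content and are merely warned about. If that's what's happening here, redirecting assets to http can never work for CSS/JS, and the fix is to serve them over SSL too. A sketch of a regex location that could be added inside the SSL server block above (same document root assumed; not the poster's confirmed fix):

        # serve static assets directly over SSL instead of bouncing to http,
        # falling back to the Rails app if the file doesn't exist on disk
        location ~* \.(css|js|jpg|jpeg|png|gif|ico)$ {
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            try_files $uri @rails-ssl;
            expires 1h;
        }

    Note that among regex locations nginx uses the first match in config order, so where this block sits relative to the existing ~ ^/(basket|...) location matters.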


  • Linux server is only using 60% of memory, then swapping

    - by Kamil Kisiel
    I've got a Linux server that's running our bacula backup system. The machine is grinding like mad because it's going heavily into swap. The problem is, it's only using 60% of its physical memory! Here's the output from free -m:

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3949       2356       1593          0          0          1
        -/+ buffers/cache:       2354       1595
        Swap:         7629       1804       5824

    and some sample output from vmstat 1:

        procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
         r  b    swpd    free  buff   cache   si    so     bi    bo    in     cs us sy id wa st
         0  2 1843536 1634512     0    4188   54    13   2524   666     2      1  1  1 89  9  0
         1 11 1845916 1640724     0     388 2700  4816 221880  4879 14409 170721  4  3 63 30  0
         0  9 1846096 1643952     0       0 4956   756 174832   804 12357 159306  3  4 63 30  0
         0 11 1846104 1643532     0       0 4916   540 174320   580 10609 139960  3  4 64 29  0
         0  4 1846084 1640272     0    2336 4080   524 140408   548  9331 118287  3  4 63 30  0
         0  8 1846104 1642096     0    1488 2940   432 102516   457  7023  82230  2  4 65 29  0
         0  5 1846104 1642268     0    1276 3704   452 126520   452  9494 119612  3  5 65 27  0
         3 12 1846104 1641528     0     328 6092   608 187776   636  8269 113059  4  3 64 29  0
         2  2 1846084 1640960     0     724 5948     0 111480     0  7751 116370  4  4 63 29  0
         0  4 1846100 1641484     0     404 4144  1476 125760  1500 10668 105358  2  3 71 25  0
         0 13 1846104 1641932     0       0 5872   828 153808   840 10518 128447  3  4 70 22  0
         0  8 1846096 1639172     0    3164 3556   556  74884   580  5082  65362  2  2 73 23  0
         1  4 1846080 1638676     0     396 4512    28  50928    44  2672  38277  2  2 80 16  0
         0  3 1846080 1628808     0    7132 2636     0  28004     8  1358  14090  0  1 78 20  0
         0  2 1844728 1618552     0   11140 7680     0  12740     8   763   2245  0  0 82 18  0
         0  2 1837764 1532056     0  101504 2952     0  95644    24   802   3817  0  1 87 12  0
         0 11 1842092 1633324     0    4416 1748 10900 143144 11024  6279 134442  3  3 70 24  0
         2  6 1846104 1642756     0       0 4768   468  78752   468  4672  60141  2  2 76 20  0
         1 12 1846104 1640792     0     236 4752   440 140712   464  7614  99593  3  5 58 34  0
         0  3 1846084 1630368     0    6316 5104     0  20336     0  1703  22424  1  1 72 26  0
         2 17 1846104 1638332     0    3168 4080  1720 211960  1744 11977 155886  3  4 65 28  0
         1 10 1846104 1640800     0     132 4488   556 126016   584  8016 106368  3  4 63 29  0
         0 14 1846104 1639740     0    2248 3436   428 114188   452  7030  92418  3  3 59 35  0
         1  6 1846096 1639504     0    1932 5500   436 141412   460  8261 112210  4  4 63 29  0
         0 10 1846104 1640164     0    3052 4028   448 147684   472  7366 109554  4  4 61 30  0
         0 10 1846100 1641040     0    2332 4952   632 147452   664  8767 118384  3  4 63 30  0
         4  8 1846084 1641092     0     664 4948   276 152264   292  6448  98813  5  5 62 28  0

    Furthermore, the output of top sorted by CPU time seems to support the theory that swap is what's bogging down the system:

        top - 09:05:32 up 37 days, 23:24, 1 user, load average: 9.75, 8.24, 7.12
        Tasks: 173 total, 1 running, 172 sleeping, 0 stopped, 0 zombie
        Cpu(s): 1.6%us, 1.4%sy, 0.0%ni, 76.1%id, 20.6%wa, 0.1%hi, 0.2%si, 0.0%st
        Mem:  4044632k total, 2405628k used, 1639004k free,    0k buffers
        Swap: 7812492k total, 1851852k used, 5960640k free,  436k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+   TIME COMMAND
         4174 root     17   0 63156  176   56 S    8  0.0   2138:52  35,38 bacula-fd
         4185 root     17   0 63352  284  104 S    6  0.0   1709:25  28,29 bacula-sd
          240 root     15   0     0    0    0 D    3  0.0 831:55.19 831:55 kswapd0
         2852 root     10  -5     0    0    0 S    1  0.0 126:35.59 126:35 xfsbufd
         2849 root     10  -5     0    0    0 S    0  0.0 119:50.94 119:50 xfsbufd
         1364 root     10  -5     0    0    0 S    0  0.0 117:05.39 117:05 xfsbufd
           21 root     10  -5     0    0    0 S    1  0.0  48:03.44  48:03 events/3
         6940 postgres 16   0 43596    8    8 S    0  0.0  46:50.35  46:50 postmaster
         1342 root     10  -5     0    0    0 S    0  0.0  23:14.34  23:14 xfsdatad/4
         5415 root     17   0 1770m  108   48 S    0  0.0  15:03.74  15:03 bacula-dir
           23 root     10  -5     0    0    0 S    0  0.0  13:09.71  13:09 events/5
         5604 root     17   0 1216m  500  200 S    0  0.0  12:38.20  12:38 java
         5552 root     16   0 1194m  580  248 S    0  0.0  11:58.00  11:58 java

    Here's the same sorted by virtual memory image size:

        top - 09:08:32 up 37 days, 23:27, 1 user, load average: 8.43, 8.26, 7.32
        Tasks: 173 total, 1 running, 172 sleeping, 0 stopped, 0 zombie
        Cpu(s): 3.6%us, 3.4%sy, 0.0%ni, 62.2%id, 30.2%wa, 0.2%hi, 0.3%si, 0.0%st
        Mem:  4044632k total, 2404212k used, 1640420k free,    0k buffers
        Swap: 7812492k total, 1852548k used, 5959944k free,  100k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+   TIME COMMAND
         5415 root     17   0 1770m   56   44 S    0  0.0 15:03.78  15:03 bacula-dir
         5604 root     17   0 1216m  492  200 S    0  0.0 12:38.30  12:38 java
         5552 root     16   0 1194m  476  200 S    0  0.0 11:58.20  11:58 java
         4598 root     16   0  117m   44   44 S    0  0.0  0:13.37   0:13 eventmond
         9614 gdm      16   0 93188    0    0 S    0  0.0  0:00.30   0:00 gdmgreeter
         5527 root     17   0 78716    0    0 S    0  0.0  0:00.30   0:00 gdm
         4185 root     17   0 63352  284  104 S   20  0.0  1709:52  28,29 bacula-sd
         4174 root     17   0 63156  208   88 S   24  0.0  2139:25  35,39 bacula-fd
        10849 postgres 18   0 54740  216  108 D    0  0.0  0:31.40   0:31 postmaster
         6661 postgres 17   0 49432    0    0 S    0  0.0  0:03.50   0:03 postmaster
         5507 root     15   0 47980    0    0 S    0  0.0  0:00.00   0:00 gdm
         6940 postgres 16   0 43596   16   16 S    0  0.0 46:51.39  46:51 postmaster
         5304 postgres 16   0 40580  132   88 S    0  0.0  6:21.79   6:21 postmaster
         5301 postgres 17   0 40448   24   24 S    0  0.0  0:32.17   0:32 postmaster
        11280 root     16   0 40288   28   28 S    0  0.0  0:00.11   0:00 sshd
         5534 root     17   0 37580    0    0 S    0  0.0  0:56.18   0:56 X
        30870 root     30  15 31668   28   28 S    0  0.0  1:13.38   1:13 snmpd
         5305 postgres 17   0 30628   16   16 S    0  0.0  0:11.60   0:11 postmaster
        27403 postfix  17   0 30248    0    0 S    0  0.0  0:02.76   0:02 qmgr
        10815 postfix  15   0 30208   16   16 S    0  0.0  0:00.02   0:00 pickup
         5306 postgres 16   0 29760   20   20 S    0  0.0  0:52.89   0:52 postmaster
         5302 postgres 17   0 29628   64   32 S    0  0.0  1:00.64   1:00 postmaster

    I've tried tuning the swappiness kernel parameter to both high and low values, but nothing appears to change the behavior here. I'm at a loss to figure out what's going on. How can I find out what's causing this?

    Update: The system is a fully 64-bit system, so there should be no question of memory limitations due to 32-bit issues.

    Update 2: As I mentioned in the original question, I've already tried tuning swappiness to all sorts of values, including 0. The result is always the same, with approximately 1.6 GB of memory remaining unused.

    Update 3: Added top output to the above info.
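    top's per-process columns won't say who actually owns the swapped pages; summing the Swap: fields in each process's /proc/<pid>/smaps will, on kernels new enough to expose that field (roughly 2.6.34 and later, which is an assumption worth checking on this machine). A rough sketch:

        # Sum per-mapping Swap: entries for every process; needs root.
        for d in /proc/[0-9]*; do
            swap=$(awk '/^Swap:/ {s += $2} END {print s+0}' "$d/smaps" 2>/dev/null)
            if [ "${swap:-0}" -gt 0 ]; then
                name=$(awk '/^Name:/ {print $2; exit}' "$d/status" 2>/dev/null)
                echo "$swap kB  pid $(basename "$d")  $name"
            fi
        done | sort -rn | head

    If the totals don't come close to the ~1.8 GB shown in free, the swapped pages likely belong to processes that have since exited, which would point the investigation at what kswapd was evicting rather than at any single resident process.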


  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
    SOLUTIONS FOCUS: The Benefits of Oracle Exastack

    Paul Thompson, Director, Alliances and Solutions Partner Programs, Oracle EMEA Alliances & Channels

    RESOURCES:
    - Oracle PartnerNetwork (OPN)
    - Oracle Exastack Program
    - Oracle Exastack Ready
    - Oracle Exastack Optimized
    - Oracle Exastack Labs and Enablement Resources
    - Oracle Exastack Labs Video Tour

    Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces the costs associated with purchasing, deploying, and supporting IT environments for our customers and partners.

    In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower-cost, high-performance infrastructure for database and application workloads across on-premise and cloud-based environments.

    Ready and Optimized

    Oracle partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems.

    Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance.

    Deliver higher value to customers

    Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible.

    Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster.

    Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners.

    It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit. Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single-vendor solution, clear ownership with no finger-pointing, and the peace of mind of the Oracle Support Centre underpinning the entire product stack.

    Next steps

    Every OPN member with packaged applications should seriously consider taking steps to become Exastack Ready, or Exastack Optimized, at the first opportunity. The first step down the track is to talk to an expert on the OPN Portal or at the Oracle Partner Business Center, or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) is posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV-specific enablement resources.

    "Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority."

    More OpenWorld 2011 highlights for Oracle partners and customers:
    - Oracle Application Testing Suite 9.3, an application testing solution for Web, SOA and Oracle Applications
    - Oracle Application Express Release 4.1, improving the development of database-centric Web 2.0 applications and reports
    - Oracle Unified Directory 11g, helping customers manage the critical identity information that drives their business applications
    - Oracle SOA Suite for healthcare integration
    - Oracle Enterprise Pack for Eclipse 11g, demonstrating continued commitment to the developer and open source communities
    - Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid
    - Oracle Process Accelerators, helping to simplify and accelerate time-to-value for customers' business process management initiatives
    - Oracle's JD Edwards EnterpriseOne on the iPad, meeting the increasingly mobile demands of today's workforces
    - Oracle CRM On Demand Release 19 Innovation Pack, introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience
    - Oracle's Primavera Portfolio Management 9, for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy
    - Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition, helping customers manage their long-term investment in enterprise-wide business applications
    - New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for financial institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics
    - Oracle Utilities Network Management System 1.11, offering new modelling and analysis features to improve distribution-grid management for electric utilities
    - Oracle Communications Network Charging and Control 4.4, helping communications service providers (CSPs) offer their customers more flexible charging options
    - Plus many, many more technology announcements, enhancements, momentum news and community updates

    Oracle OpenWorld 2012

    A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 October to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available!

    Read the article

  • New OFM versions released SOA Suite 11.1.1.4 &amp; BPM 11.1.1.4 &amp; JDeveloper 11.1.1.4 WebLogic on JRockit 10.3.4 feedback from the community

    - by Jürgen Kress
    Oracle SOA Suite 11g Installations

This is the latest release of the Oracle SOA Suite 11g. Please see the Documentation tab for Release Notes, Installation Guides and other release-specific information. Please also see the List of New Features and Samples provided for this release.

Release 11gR1 (11.1.1.4.0): Microsoft Windows (32-bit JVM) | Linux (32-bit JVM) | Generic

Oracle JDeveloper 11g Rel 1 (11.1.1.x) (JDeveloper + ADF): integrated development environment certified on Windows, Linux, and Macintosh. The license is free (read the Pricing FAQ). Studio Edition for Windows (1.2 GB) | Studio Edition for Linux (1.3 GB) | See All | See Additional Development Tools

Oracle WebLogic Server 11g Rel 1 (10.3.4) Installers: the WebLogic Server installers include Oracle Coherence and Oracle Enterprise Pack for Eclipse, and support development with other Fusion Middleware products. The zip includes WebLogic Server only and is intended for WebLogic Server development only. Linux x86 (1.1 GB) | Windows x86 (1 GB) | Zip for Windows x86, Linux x86, Mac OS X (316 MB) | See All

Oracle WebLogic Server 11gR1 (10.3.4) on JRockit Virtual Edition Download

For additional downloads please visit the Oracle Fusion Middleware Products Update Center. Share your feedback with the @soacommunity on Twitter:

SOASimone (Simone Geib): SOA Suite 11gR1 (11.1.1.4.0) has just been released: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html
gschmutz: My new blog post: WebLogic Server, JDev, SOA, BPM, OSB and CEP 11.1.1.4 (PS3) available! - http://tinyurl.com/4negnpn
simon_haslam (Simon Haslam): I'm very pleased to see WLS 10.3.4 for JRockit VE launched at the same time as the rest of PS3 http://j.mp/gl1nQm (32bit anyway)
lucasjellema (Lucas Jellema): See http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/156082.xml for PS3 extension downloads BPM, SOA Editor, WebCenter
demed: List of new features in @OracleSOA 11gR1 PS3: http://bit.ly/fVRwsP is not extremely long but huge release by # of bugs fixed. Go!
biemond (Edwin Biemond): WebLogic 10.3.4 new features http://bit.ly/f7L1Eu Exalogic Elastic Cloud, JPA2, Maven plugin, OWSM policies on WebLogic SCA applications
JDeveloper (JDeveloper & ADF): JDeveloper and Oracle ADF 11g Release 1 Patch Set 3 (11.1.1.4.0): New Features and Bug Fixes http://bit.ly/feghnY
simon_haslam (Simon Haslam): WebLogic Server 10.3.4 (i.e. 11gR1 PS3) available now too http://bit.ly/eeysZ2
JDeveloper (JDeveloper & ADF): Share your impressions on the new JDeveloper 11g Patchset 3 release that came out today!
Download it here: http://bit.ly/dogRN8
VikasAatOracle (Vikas Anand): SOA Suite 11gR1PS3 is Hotpluggable ... see list of features that @Demed posted .. #soa #soacommunity

New versions of Oracle Fusion Middleware 11g R1 (11.1.1.4.x) include:
- Oracle WebLogic Server 11g R1 (10.3.4)
- Oracle SOA Suite 11g R1 (11.1.1.4.0)
- Oracle Business Process Management 11g R1 (11.1.1.4.0)
- Oracle Complex Event Processing 11g R1 (11.1.1.4.0)
- Oracle Application Integration Architecture Foundation Pack 11g R1 (11.1.1.4.0)
- Oracle Service Bus 11g R1 (11.1.1.4.0)
- Oracle Enterprise Repository 11g R1 (11.1.1.4.0)
- Oracle Identity Management 11g R1 (11.1.1.4.0)
- Oracle Enterprise Content Management 11g R1 (11.1.1.4.0)
- Oracle WebCenter 11g R1 (11.1.1.4.0) - coming soon
- Oracle Forms, Reports, Portal & Discoverer 11g R1 (11.1.1.4.0)
- Oracle Repository Creation Utility 11g R1 (11.1.1.4.0)
- Oracle JDeveloper & Application Development Runtime 11g R1 (11.1.1.4.0)

Resources: Download (OTN) | Certification | Documentation

New Features in Oracle SOA Suite 11g Release 1 (11.1.1.4.0) - Updated: January 2011 - Go to Oracle SOA Suite 11g Doc

Introduction: Oracle SOA Suite 11gR1 (11.1.1.4.0) includes both bug fixes and the new features listed below - click on the title of each feature for more details. Downloads, documentation links and more information on the Oracle SOA Suite are available on the SOA Suite OTN page and, as always, we welcome your feedback on the SOA OTN forum.

New in Oracle SOA Suite in this release:

BPEL Component
- BPEL 2.0 support in JDeveloper: the BPEL editor in JDeveloper now generates BPEL 2.0 code and introduces several new activities.
- Augmented XML variable auto-initialization capabilities: the XML variable auto-initialization capabilities have been enhanced to support two additional use cases: initializing the to-spec node if it does not exist at assignment time, and initializing array elements.
- New Assign Activity dialog: the new Assign Activity supports the same drag & drop paradigm used for the XSLT mapper, greatly streamlining the task of assigning multiple variables.

Mediator Component
- Time window parameter for the resequencer: this new parameter lets users initiate a best-effort resequencing based on a time window rather than a number of messages.
- Support for attachments in the Mediator assign dialog: the Mediator assign dialog now supports attachments, enabling use of the Mediator to transmit attachments even if source and target schemas are different.

Adapters & Bindings
- ChunkSize property added to the File Adapter header properties: the ChunkSize property of the File Adapter is now available as a header property, allowing in-process modification of the value for this property.
- Improved support for distributed WLS JMS topics through automatic rebalancing of listeners: the JMS Adapter has been enhanced to subscribe to administrative events from WLS JMS. Based on these events, it dynamically rebalances listeners when there are changes to the members of a local or remote WLS JMS distributed destination.
- JDeveloper configuration wizard for custom JCA adapters: a new wizard is available in JDeveloper to configure custom-built adapters.

Administration & Enterprise Manager
- Enhanced purging capabilities to manage database growth: historical instance data can now be purged using three different strategies: batch script, scheduled batch script or data partitioning.
- Asynchronous bulk instance deletion in Enterprise Manager: bulk deletion of instances in Enterprise Manager now executes as an asynchronous operation, returning control to the user as soon as the action has been submitted and acknowledged.

B2B
- Ability to schedule partner downtime: this feature allows trading partners to notify each other about planned downtime and to delay delivery of messages during that period.
- Message sequencing: B2B now supports both inbound and outbound message sequencing.
- Simplified BAM integration with B2B: B2B ships with various pre-configured artifacts to simplify monitoring in BAM.
- Instance Message Java API for B2B: the new instance message Java API supports programmatic access to B2B instance message data.

Oracle Service Bus (OSB)
- Certification of the File and FTP JCA Adapters: the File and FTP JCA adapters are now certified for use with Oracle Service Bus (in addition to the native transports).
- Security enhancements: Oracle Service Bus now supports SAML 2.0 as well as the OWSM authorization policies. Check the Oracle Service Bus 11.1.1.4 Release Notes for a complete list of new features.

Installation, Hot-Pluggability & Certifications
- Ability to run Oracle SOA Suite on IBM WebSphere Application Server: Oracle SOA Suite can now be deployed on IBM WebSphere Application Server Network Deployment (ND) 7.0.11 and IBM WebSphere Application Server 7.0.11.
- Single JVM developer installation template: Oracle SOA Suite can now be targeted to the WebLogic admin server - there is no requirement to also have a managed server. This topology is intended to minimize the memory footprint of development environments. This is in addition to the list of supported browsers, operating systems and databases already certified in prior releases.

Complex Event Processing (CEP)
- IDE enhancements: this release introduces several enhancements to the development IDE, such as adapter wizards and an event-type repository.
- CQL enhancements: CQL enhancements include JDBC data cartridges and parametrized queries.
- Tracing and injecting events in the Event Processing Network (EPN): in the development environment you can now trace and inject events. Check the Oracle CEP 11.1.1.4 Release Notes for a complete list of new features.

SOA Suite page on OTN

For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Suite 11.1.1.4,JDeveloper 11.1.1.4,WebLogic 10.3.4,JRockit 10.3.4,SOA Community,Oracle,OPN,SOA,Simone Geib,Guido Schmutz,Edwin Biemond,Lucas Jellema,Simon Haslam,Demed,Vikas Anand,Jürgen Kress

    Read the article

  • CEN/CENELEC Lacks Perspective

    - by trond-arne.undheim
    Over the last few months, two of the European Standardization Organizations (ESOs), CEN and CENELEC, have circulated an unfortunate position statement distorting the facts around fora and consortia. For the benefit of outsiders to this debate, let's just say that this debate concerns whether and how the EU should recognize standards and specifications from certain fora and consortia, based on a process evaluating the openness and transparency of such deliverables. The topic is complex, and somewhat confusing even to insiders, but nevertheless crucial to the European economy. As far as I can judge, their positions are not based on facts. This is unfortunate.

For the benefit of clarity, here are some of the observations they make:

a) "Most consortia are in essence driven by technology companies making hardware and software solutions, by definition very few of the largest ones are European-based."
b) "Most consortia lack a European presence, relevant Committees, even those that are often cited as having stronger links with Europe, seem to lack an overall, inclusive set of participants."
c) "Recognising specific consortia specifications will not resolve any concrete problems of interoperability for public authorities; interoperability depends on stringing together a range of specifications (from formal global bodies or consortia alike)."
d) "Consortia already have the option to have their specifications adopted by the international formal standards bodies and many more exercise this than the two that seem to be campaigning for European recognition. Such specifications can then also be adopted as European standards."
e) "Consortium specifications completely lack any process to take due and balanced account of requirements at national level - this is not important for technologies but can be a critical issue when discussing cross-border issues within the EU such as eGovernment, eHealth and so on."
f) "The proposed recognition will not lead to standstill on national or European activities, nor to the adoption of the specifications as national standards in the CEN and CENELEC members (usually in their official national languages), nor to withdrawal of conflicting national standards. A big asset of the European standardization system is its coherence and lack of fragmentation."
g) "We always miss concrete and specific examples of where consortia referencing are supposed to be helpful."

First of all, note that ETSI, the third ESO, did not join the position. The reason is, of course, that ETSI, beyond being an ESO, also has a global perspective and, moreover, does consider reality.

Secondly, having produced arguments a) to g), CEN/CENELEC has the audacity to call a meeting on Friday 25 February entitled "ICT standardization - improving collaboration in Europe". This sounds very nice, but they have not set the stage for constructive debate. Rather, they demonstrate a striking lack of vision and lack of perspective. I will back this up with three facts, and leave it there.

1. Since the 1980s, global industry fora and consortia such as IETF, W3C and OASIS have emerged as world-leading ICT standards development organizations with excellent procedures for openness and transparency in all phases of standards development, ex post and ex ante.
- Practically no ICT system can be built without using fora and consortia standards (FCS).
- Without using FCS, neither the Internet, upon which the EU economy depends, nor EU institutions would operate.
- FCS are of high relevance for achieving and promoting interoperability and driving innovation.

2. FCS are complementary to the formally recognized standards organizations, including the ESOs.
- No work will be taken away from the ESOs should the EU recognize certain FCS.
- Each FCS would be evaluated on its merit and on the openness of the process that produced it. ESOs would, with other stakeholders, have a say.
- ESOs could potentially educate and assist European stakeholders to engage more actively and constructively with FCS.
- ETSI, also an ESO, seems to clearly recognize these facts.

3. Europe and its Member States have a strong voice in several of the most relevant global industry fora and consortia.
- W3C: W3C was founded in 1994 by an Englishman, Sir Tim Berners-Lee, in collaboration with CERN, the European research lab. In April 1995, INRIA (Institut National de Recherche en Informatique et Automatique) in France became the first European W3C host, and in 2003 ERCIM (European Research Consortium in Informatics and Mathematics), also based in France, took over the role of European W3C host from INRIA. Today, W3C has 326 Members, 40% of which are European. Government participation is also strong, and it could be increased - a development that is very much desired by W3C. Current members of the W3C Advisory Board include Ora Lassila (Nokia) and Charles McCathie Nevile (Opera). Nokia is a Finnish company, Opera is a Norwegian company. SAP's Claus von Riegen is an alumnus of the same Advisory Board.
- OASIS: its membership - 30% of which is European - represents the marketplace, reflecting a balance of providers, user companies, government agencies, and non-profit organizations. In particular, about 15% of OASIS members are governments or universities. Frederick Hirsch from Nokia, Claus von Riegen from SAP AG and Charles-H. Schulz from Ars Aperta are on the Board of Directors. Nokia is a Finnish company, SAP is a German company and Ars Aperta is a French company. The Chairman of the Board is Peter Brown, who is an Independent Consultant, an Austrian citizen AND an official of the European Parliament currently on long-term leave.
- IETF: oversight of its activities rests with the Internet Architecture Board (IAB), since 2007 chaired by Olaf Kolkman, a Dutch national who lives in Uithoorn, NL. Kolkman is director of NLnet Labs, a foundation chartered to develop open source software and open source standards for the Internet. Other IAB members include Marcelo Bagnulo, whose affiliation is the University Carlos III of Madrid, Spain, as well as Hannes Tschofenig from Nokia Siemens Networks. Nokia is a Finnish company. Siemens is a German company. Nokia Siemens is a European joint venture.
- Member States: at least 17 European Member States have developed Interoperability Frameworks that include FCS, according to the EU-funded National Interoperability Framework Observatory (see list and NIFO web site on IDABC). This also means they actively procure solutions using FCS, reference FCS in their policies and even in laws. Member State reps are free to engage in FCS, and many do. It would be nice if the EU adjusted to this reality.
- A huge number of European nationals work in the global IT industry, on European soil or elsewhere, whether in EU-registered companies or not.

CEN/CENELEC lacks perspective and has engaged in an effort to twist facts that is quite striking from a publicly funded organization.
I wish them all possible success with Friday's meeting but I fear all of the most important stakeholders will not be at the table. Not because they do not wish to collaborate, but because they just have been insulted. If they do show up, it would be a gracious move, almost beyond comprehension. While I do not expect CEN/CENELEC to line up perfectly in favor of fora and consortia, I think it would be to their benefit to stick to more palatable observations. Actually, I would suggest an apology, straightening out the facts. This works among friends and it works in an organizational context. Then, we can all move on. Standardization is important. Too important to ignore. Too important to distort. The European economy depends on it. We need CEN/CENELEC. It is an important organization. But CEN/CENELEC needs fora and consortia, too.

    Read the article

  • Is RTD Stateless or Stateful?

    - by [email protected]
    Yes.

A stateless service is one where each request is an independent transaction that can be processed by any of the servers in a cluster. A stateful service is one where state is kept in a server's memory from transaction to transaction, thus necessitating the proper routing of requests to the right server. The main advantage of stateless systems is simplicity of design. The main advantage of stateful systems is performance.

I'm often asked whether RTD is a stateless or stateful service, so I wanted to clarify this issue in depth so that RTD's architecture will be properly understood. The short answer is: "RTD can be configured as a stateless or stateful service."

The performance difference between stateless and stateful systems can be very significant, and while in a call center implementation it may be reasonable to use a pure stateless configuration, a web implementation that produces thousands of requests per second is practically impossible with a stateless configuration. RTD's performance is orders of magnitude better than most competing systems. RTD was architected from the ground up to achieve this performance. Features like automatic and dynamic compression of prediction models, automatic translation of metadata to machine code, lack of interpreted languages, and separation of model building from decisioning contribute to achieving this performance level. Because of this focus on performance, we decided to have RTD's default configuration work in a stateful manner. By being stateful, RTD requests are typically handled in a few milliseconds when repeated requests come to the same session.

Now, those readers who have participated in implementations of RTD know that RTD's architecture is also focused on reducing Total Cost of Ownership (TCO), with features like automatic model building, automatic time windows, automatic maintenance of database tables, automatic evaluation of data mining models, automatic management of models partitioned by channel, geography, etcetera, and hot swapping of configurations. How do you reconcile the need for a low TCO with the need for performance? How do you get the performance of a stateful system with the simplicity of a stateless system?

The answer is that you make the system behave like a stateless system to the exterior, but you let it automatically take advantage of situations where being stateful is better. For example, one of the advantages of stateless systems is that you can route a message to any server in a cluster, without worrying about sending it to the same server that was handling the session in previous messages. With an RTD stateful configuration you can still route the message to any server in the cluster, so from the point of view of the configuration of other systems, it is the same as a stateless service. The difference comes in performance, because if the message arrives at the right server, RTD can serve it without any external access to the session's state, thus tremendously reducing processing time. In typical implementations it is not rare to have high percentages of messages routed directly to the right server, while those that are not are easily handled by forwarding the messages to the right server. This architecture usually provides the best of both worlds: performance and simplicity of configuration.
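To picture the route-anywhere-then-forward behavior just described, here is a minimal Java sketch. This is not RTD's implementation - the class, the server list and the plain hash-based affinity scheme are all hypothetical - but it shows why a stateful cluster can still be addressed as if it were stateless:

import java.util.List;

// Hypothetical sketch: any node may accept a request, but a stable hash of
// the session key determines which server should actually hold the session.
public class SessionRouter {
    private final List<String> servers; // e.g. ["rtd1:8080", "rtd2:8080"]

    public SessionRouter(List<String> servers) {
        this.servers = servers;
    }

    // Stable mapping from session id to a preferred server.
    public String preferredServer(String sessionId) {
        int bucket = Math.floorMod(sessionId.hashCode(), servers.size());
        return servers.get(bucket);
    }

    // A request that lands on the "wrong" node is simply forwarded to the
    // preferred one, so callers can treat the cluster as stateless.
    public boolean shouldForward(String sessionId, String localServer) {
        return !preferredServer(sessionId).equals(localServer);
    }
}

In this toy scheme a caller never needs to know which node owns a session; the cluster absorbs the routing decision internally, which is the property the paragraph above describes.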
Configuring RTD as a pure stateless service

A pure stateless configuration requires session data to be persisted at the end of handling each and every message, and reloaded at the beginning of handling any new message. This is, of course, the root of the inefficiency of these configurations. This is also the reason why many "stateless" implementations actually do keep state to take advantage of a request coming back to the same server. Nevertheless, if the implementation requires a pure stateless decision service, this is easy to configure in RTD. The way to do it is:

- Mark every Integration Point to close the session at the end of processing the message
- In the Session entity, persist the session data on closing the session
- In the Session entity, check if a persisted version exists and load it

An excellent solution for persisting the session data is Oracle Coherence, which provides a high-performance, distributed cache that minimizes the performance impact of persisting and reloading the session (a minimal sketch of the Coherence option appears at the end of this post). Alternatively, the session can be persisted to a local database.

An interesting feature of the RTD stateless configuration is that it can cope with serializing concurrent requests for the same session. For example, if a web page produces two requests to the decision service, these requests could arrive concurrently and be handled by different servers. Most stateless implementations would have the two requests step on each other when saving the state, or fail one of the messages. When properly configured, RTD will make one message wait for the other before processing.

A Word on Context

Using the context of a customer interaction typically increases lift significantly. For example, offer success in a call center could double if the context of the call is taken into account. For this reason, it is important to utilize the contextual information in decision making. To make the contextual information available throughout a session, it needs to be persisted. When there is a well-defined owner of the information, there is no problem, because in case of a session restart the information can easily be retrieved. If there is no official owner of the information, then RTD can be configured to persist it.

Once again, RTD provides the flexibility to ensure high performance when it is adequate to allow for some loss of state in the rare cases of server failure. For example, in a heavy-use web site that serves 1000 pages per second, the navigation history may be stored in the in-memory session. In such sites it is typical that there is no OLTP system that stores all the navigation events, therefore if an RTD server were to fail, the navigation up to that point could be lost (note that a new session would be immediately established on one of the other servers). In most cases the loss of this navigation information would be acceptable, as it would happen rarely. If it is desired to save this information, RTD would persist it every time the visitor navigates to a new page. Note that this practice is preferred whether RTD is configured in a stateless or stateful manner.
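Picking up the Coherence option mentioned above, here is a minimal sketch of persisting and reloading session state through a Coherence NamedCache (the standard Coherence 3.x client API, with coherence.jar on the classpath). The cache name, the SessionData type and the hook methods are illustrative assumptions, not RTD product APIs:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Sketch: write session state to the grid on session close, read it back
// when a new message opens the session. "rtd-sessions" is an invented name.
public class CoherenceSessionStore {

    private final NamedCache cache = CacheFactory.getCache("rtd-sessions");

    // Hook for the "persist the session data on closing the session" step.
    public void persist(String sessionKey, SessionData data) {
        cache.put(sessionKey, data); // distributed in-memory write
    }

    // Hook for the "check if a persisted version exists and load it" step.
    public SessionData load(String sessionKey) {
        return (SessionData) cache.get(sessionKey); // null when no prior state
    }

    // Illustrative value object; it must be serializable to live in the grid.
    public static class SessionData implements java.io.Serializable {
        public String navigationHistory;
        public long lastUpdatedMs;
    }
}

Because the cache is distributed and in memory, the persist/reload round trip stays far cheaper than a database write per message, which is exactly why Coherence is called out as the preferred store for this configuration.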

    Read the article

  • 6 Facts About GlassFish Announcement

    - by Bruno.Borges
    Since Oracle announced the end of commercial support for future Oracle GlassFish Server versions, the Java EE world has started wondering what will happen to GlassFish Server Open Source Edition. Unfortunately, there's a lot of misleading information going around. So let me clarify some things with facts, not FUD.

Fact #1 - GlassFish Open Source Edition is not dead

GlassFish Server Open Source Edition will remain the reference implementation of Java EE. The current trunk is where an implementation for Java EE 8 will flourish, and this will become the future GlassFish 5.0. Calling "GlassFish is dead" does no good to the Java EE ecosystem. The GlassFish Community will remain strong towards the future of Java EE. Without a revenue-focused mindset, this might actually help the GlassFish community to shape the next version, free from any ties to commercial decisions.

Fact #2 - OGS support is not over

As I said before, GlassFish Server Open Source Edition will continue. The main change is that there will be no more future commercial releases of Oracle GlassFish Server. New and existing OGS 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. In parallel, I believe there's no other company in the Java EE business that offers commercial support for more than one build of a Java EE application server. This new direction can actually help customers and partners, simplifying decisions through commercial negotiations.

Fact #3 - WebLogic is not always more expensive than OGS

Oracle GlassFish Server ("OGS") is a build of GlassFish Server Open Source Edition bundled with a set of commercial features called GlassFish Server Control and license bundles such as Java SE Support. OGS has, at the time of this writing, a list price of US$ 5,000 per processor. Some bloggers are mentioning that WebLogic is more expensive than this.

Fact #3.1 - that is not necessarily the case. The initial edition of WebLogic is called "Standard Edition" and falls into a policy where some "Standard Edition" products are licensed on a per-socket basis. As of the current price list, that is US$ 10,000 per socket. If you do the math, you will realize that WebLogic SE can actually be significantly more cost effective than OGS, and a customer can save money if running on a CPU with 4 cores or more, for example. Quote from the price list: "When licensing Oracle programs with Standard Edition One or Standard Edition in the product name (with the exception of Java SE Support, Java SE Advanced, and Java SE Suite), a processor is counted equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket." For more details speak to your Oracle sales representative - this is clearly at list price, and every customer typically has a relationship with Oracle (as they do with other vendors), so different contractual details may apply. And although OGS has always been production-ready for Java EE applications, it is no secret that WebLogic has always been a more enterprise, mission-critical application server than OGS, since the BEA days.
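To make the per-processor versus per-socket arithmetic above concrete, here is a rough worked example at the list prices quoted. It assumes the commonly published 0.5 processor core factor for x86 chips - that factor and the prices vary, so check the current Oracle core factor table and your own contract before relying on numbers like these:

A 2-socket x86 server with 8 cores per socket (16 cores total):
- OGS at US$ 5,000 per processor: 16 cores x 0.5 core factor = 8 processors, so 8 x US$ 5,000 = US$ 40,000
- WebLogic SE at US$ 10,000 per socket: 2 sockets x US$ 10,000 = US$ 20,000

At 4 cores per socket the two options come out even (4 x 0.5 x US$ 5,000 = US$ 10,000 per socket), which is where the roughly-4-cores break-even point mentioned above comes from; beyond that, the per-socket model pulls ahead.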
Different editions of WLS provide features and upgrade options like the WebLogic Diagnostic Framework, Work Managers, Side by Side Deployment, bundled ADF and TopLink licenses, a bundled Web Tier (Oracle HTTP Server) license, Fusion Middleware stack support, Oracle DB integration features, Oracle RAC features (such as GridLink), Coherence management capabilities, Advanced HA (Whole Service Migration and Server Migration), Java Mission Control, Flight Recorder, Oracle JDK support, etc.

Fact #4 - There's no major vendor supporting community builds of Java EE app servers

There are no major vendors providing support for community builds of any open source application server. For example, IBM used to provide community support for builds of Apache Geronimo, but no longer does. Red Hat does not commercially support builds of WildFly and, if I remember correctly, never supported community builds of the former JBoss AS. Oracle has never commercially supported GlassFish Server Open Source Edition builds. Tomitribe appears to be the exception to the rule, offering commercial support for Apache TomEE.

Fact #5 - WebLogic and GlassFish share several Java EE implementations

It has been no secret that although GlassFish and WebLogic share some JSR implementations (as stated in The Aquarium announcement: JPA, JSF, WebSockets, CDI, Bean Validation, JAX-WS, JAXB, and WS-AT) and WebLogic understands GlassFish deployment descriptors, they are not from the same codebase.

Fact #6 - WebLogic is not for GlassFish what JBoss EAP is for WildFly

WebLogic is a closed-source offering. It is commercialized through a license-plus-support-fee model. OGS, although built from open source code, has had the same commercial model as WebLogic. Still, one cannot compare GlassFish/WebLogic to WildFly/JBoss EAP. It is simply not the same case, since Oracle has had two different products from different codebases. The comparison should be limited to GlassFish Open Source / Oracle GlassFish Server versus WildFly / JBoss EAP. But the message now is much clearer: Oracle will commercially support only the proprietary product WebLogic, and will invest in GlassFish Server Open Source Edition as the reference implementation for the Java EE platform and future Java EE 8, as a developer-friendly community distribution, encouraging community participation through Adopt a JSR and contributions to GlassFish.

In comparison, Oracle's decision has pretty much the same goal as when IBM killed support for WebSphere Community Edition, and as when Red Hat decided to change the name of the JBoss community edition to WildFly, simplifying and clarifying the marketing message and leaving the commercial field wide open to JBoss EAP only. Oracle can now, as other vendors have already been doing, focus on only one commercial offering. Some users are saying they will now move to WildFly, but it is important to note that Red Hat does not offer commercial support for WildFly builds. Although future JBoss EAP versions will come from the same codebase as WildFly, the builds will definitely not be the same, nor will they share 100% of their functionality and bug fixes. This means there will be no company running a WildFly build in production with support from Red Hat.

This discussion has also surfaced an important and interesting piece of information: Oracle offers a free-for-developers OTN License for WebLogic. For other environments this is different, but please note this is the same policy Red Hat applies to JBoss EAP, as stated in their download page and terms.
Oracle had the same policy for OGS.

TL;DR:
- GlassFish Server Open Source Edition isn't dead.
- Current and new OGS 2.x/3.x customers will continue to have support (respecting the Lifetime Support Policy).
- WebLogic is not necessarily more expensive than OGS.
- Oracle will focus on one commercially supported Java EE application server, just as other vendors limit themselves to supporting one build/product only.
- Community builds are hardly ever supported.
- Commercially supported builds of open source products are not exactly the same codebase as community builds.

What's next for GlassFish and the Java EE community? There are conversations in place to tackle some of the community desires, most of them stated by Markus Eisele in his blog post. We will keep you posted.

    Read the article

  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
    OOW 2013 is over and we're heading home, so it is time to lean back and reflect on the impressions we have from the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article, it was the biggest OOW ever. In parallel to the conference, the America's Cup took place in San Francisco and Oracle Team USA won. An amazing job by the team, and again congratulations from our side.

Back to the conference. The main topics for us were:
- Oracle SOA / BPM Suite 12c
- Adaptive Case Management (ACM)
- Big Data
- Fast Data
- Cloud
- Mobile

Below we go into a little more detail on the key takeaways regarding these points.

Oracle SOA / BPM Suite 12c

During the five days at OOW, first details of the upcoming major release of Oracle SOA Suite 12c and Oracle BPM Suite 12c were introduced. Some new key features are:
- Managed File Transfer (MFT) for transferring big files from a source to a target location
- Enhanced REST support through a new REST binding
- Introduction of a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce
- Enhanced analytics with BAM, which has been totally reengineered (the BAM Console now also runs in Firefox!)
- Introduction of templates (OSB pipelines, component templates, BPEL activity templates)
- EM as a single monitoring console
- OSB design-time integration into JDeveloper (really great!)
- Enterprise modeling capabilities in BPM Composer

These are only a few points from what is coming with 12c. We are really looking forward to the new release, because this seems to be really great stuff. The suite is becoming more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues on its way - very impressive.

Adaptive Case Management

Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, in showing the upcoming Case Management UI (which will be available in 11g with the next BPM Suite MLR patch), the roadmap, and the differences from traditional business process modeling. They were very busy during the conference because a lot of partners and customers were interested.

Big Data

Big Data is one of the current hype themes. Because of the huge amounts of data from different internal and external sources, handling that data becomes more and more challenging. Companies need to analyze their data to optimize their business, and the challenge is that the amount of data is growing daily! To store and analyze the data efficiently, it is necessary to have a scalable and flexible infrastructure. Here it is important that hardware and software are engineered to work together. Therefore several new features of Oracle Database 12c, like the new in-memory option, were presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and new processors, such as Oracle's new M6-32, were announced. The performance improvements when using one of these hardware components in connection with the improved software solutions were really impressive. For more details about this, please take a look at our previous blog post.
Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of:
- Oracle Big Data Appliance, preconfigured with Hadoop
- Oracle Exadata, which stores a huge amount of data efficiently to achieve optimal query performance
- Oracle Exalytics, a fast and scalable business analytics system

Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively, the analysis can be implemented directly in Hadoop using "R". In addition, Oracle BI tools can be used to analyze the data.

Fast Data

Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels at high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be analyzed for business-relevant patterns in real time, and those patterns must be identified efficiently and performantly. Here, in-memory grid solutions combining Oracle Coherence and Oracle Event Processing demonstrated very impressively how efficient real-time data processing can be. One example of a Fast Data solution shown during OOW was the analysis of Twitter streams regarding customer satisfaction: feeds with negative words like "bad" or "worse" were filtered, and after a defined threshold had been reached within a certain timeframe, a business event was triggered (a toy sketch of this pattern follows at the end of this post).

Cloud

Another key trend in the IT market is, of course, Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision - companies can focus on their real business while all of their applications are available via the Cloud. This also includes Oracle Database and Oracle WebLogic, so that companies can build, deploy and run their own applications within the cloud. Three different approaches were introduced:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS)

Using the IaaS approach, only the infrastructure components are managed in the Cloud. Customers are very flexible regarding memory, storage and the number of CPUs, because those parameters can be adjusted elastically. The PaaS approach means that, besides the infrastructure, the platforms necessary for running applications (such as databases or application servers) are also provided within the Cloud. Here customers can also decide whether installation and management of these components should be done by Oracle. The SaaS approach is the most complete one: all applications a company uses are managed in the Cloud. Oracle is planning to provide all of their applications, like ERP systems and HR applications, as Cloud services.

In conclusion, this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner. As you can see, our OOW days were very interesting. We collected much helpful information for our projects. The new innovations presented at the conference are great, and being part of this was even greater! We are looking forward to next year's conference!
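Returning to the Twitter example above: the sliding-window threshold pattern can be sketched in a few lines of plain Java. This is a toy illustration, not Oracle Event Processing code (in OEP the same logic would be expressed as a continuous query over a stream); the keyword list, window size and threshold are invented for the example:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy Fast Data pattern: count "negative" tweets inside a time window
// and fire a business event once a threshold is crossed.
public class SentimentMonitor {
    private static final List<String> NEGATIVE = List.of("bad", "worse");
    private static final long WINDOW_MS = 60_000; // 1-minute window (illustrative)
    private static final int THRESHOLD = 100;     // illustrative

    private final Deque<Long> hits = new ArrayDeque<>(); // timestamps of negative tweets

    public void onTweet(String text, long timestampMs) {
        String lower = text.toLowerCase();
        if (NEGATIVE.stream().anyMatch(lower::contains)) {
            hits.addLast(timestampMs);
        }
        // Evict events that have fallen out of the window.
        while (!hits.isEmpty() && timestampMs - hits.peekFirst() > WINDOW_MS) {
            hits.removeFirst();
        }
        if (hits.size() >= THRESHOLD) {
            fireBusinessEvent(hits.size());
            hits.clear(); // reset after alerting
        }
    }

    private void fireBusinessEvent(int count) {
        System.out.println("ALERT: " + count + " negative tweets in the last minute");
    }
}

An event-processing engine adds distribution, fault tolerance and a declarative query language on top of this idea, but the window-plus-threshold core is the same.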
Links: http://www.oracle.com/openworld/index.html http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013 SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cattleCrew,Sven Bernhard,OOW2013,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Oracle SOA Suite - Highlighted Travel and Transportation Customer References

    - by Bruce Tierney
    Next in this series on industry-specific highlights of Oracle SOA Suite customers is the travel and transportation industry. If you are in the travel or transportation industry, take a look at how these Oracle SOA Suite integration customers have addressed common business requirements to enable better customer service, lower costs, and deliver new business services.

For example, All Nippon Airways (ANA) has significantly lowered management costs associated with their hybrid on-premise/cloud ticketing system deployments for domestic and international flights. Their lead time for changes or new applications has been greatly reduced compared to their old mainframe-based systems, enabling ANA to rapidly develop new services in response to changing market needs. Another example is Schneider National, a leading provider of truckload logistics, and how they have integrated Oracle E-Business Suite, Siebel CRM, Oracle Transportation Management and customer applications using Oracle SOA Suite. Schneider National has 400 BPEL processes that generate over 60 million composite instances over five SOA clusters.

Take a deeper look into any of these case studies, videos, and Oracle Magazine articles that closely align with your industry:

Customers fly and airline succeeds with an IT transformation. Company: All Nippon Airways | Oracle or Profit Magazine Article | Travel and Transportation | Published on January 06, 2014
Any successful business must ensure ongoing customer satisfaction, respond to increased competition, and minimize costs. Running a successful airline in today's economic climate requires all of those things, as well a...

Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform. Company: Openmatics s.r.o. | Customer Snapshot | Automotive | Published on May 20, 2014
Openmatics uses Oracle WebCenter Portal and Oracle Application Development Framework as a foundation for Openmatics, a vehicle telematics service for next-generation fleet management. It integrated its own app shop wi...

Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate. Company: SFpark | Oracle or Profit Magazine Article | Professional Services | Published on August 01, 2012
Oracle Fusion Middleware is at the heart of a recently completed and very ambitious project to change how people handle the challenge of finding a parking space in San Francisco, California. "Parking is a universal is...

Globalia Corporación Empresarial Accelerates Hotel Bookings, Boosts Sales by 40% with In-Memory Data Grid Solution. Company: Globalia Corporación Empresarial S.A. | Customer Snapshot | Travel and Transportation | Published on April 29, 2013
Globalia Corporación Empresarial S.A.
deployed Oracle Coherence to reengineer the group's core system for hotel bookings, now serving booking requests involving 80 hotels within an average response time of 100 milliseconds...

Choice Hotels Uses Oracle SOA Suite and Oracle BPM Suite to Modernize Global IT Architecture. Company: Choice Hotels | Press Release | Travel and Transportation | Published on August 07, 2012
Choice Hotels International, one of the largest and most successful hotel franchises in the world, has implemented Oracle SOA Suite and Oracle BPM Suite.

Sascar Consolidates Fleet Management Infrastructure and Accelerates Customers' Data Access. Company: Sascar | Customer Case Study | Travel and Transportation | Published on February 07, 2014
Sascar used Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud and Oracle WebLogic Suite 11g to consolidate fleet management and perform real-time vehicle tracking 4x faster.

Directorate General of Civil Aviation Streamlines Key Aviation Applications Access, Improves Productivity and Reduces Maintenance Costs. Company: Directorate General of Civil Aviation (DGAC) | Customer Snapshot | Travel and Transportation | Published on May 24, 2013
With Oracle Fusion Middleware, the Directorate General of Civil Aviation (DGAC) provided its 12,500 employees a virtual office environment that integrates team workspaces, business applications, and e-mails within a n...

Schneider National Implements Next-Generation IT Infrastructure to Continue Leadership in Transportation and Logistics Industry. Company: Schneider National, Inc. | Customer Snapshot | Travel and Transportation | Published on February 26, 2013
Schneider National, Inc. deployed Oracle applications, Oracle Fusion Middleware, and Oracle development tools as the foundation for its next-generation IT environment, which is driving new levels of efficiency, profit...

DGAC Cuts Subscription Costs with Oracle. Company: DGAC | Video | Travel and Transportation | Published on October 31, 2012
Using Oracle WebCenter Portal, Oracle SOA Suite, and Oracle Exalogic, DGAC reduced the cost of newsletter subscriptions and provides its 12,500 employees a collaborative workspace portal.

Asiana Airlines Builds PIP System with Oracle Solutions. Company: Asiana Airlines | Video | Travel and Transportation | Published on July 26, 2012
With Oracle Exalogic and the Oracle SOA Suite, Asiana Airlines built a passenger service integrated platform providing various services such as integration between its interface and internal systems and a data wareho...

Choice Hotels Reduces Time to Market with Oracle WebCenter. Company: Choice Hotels | Video | Travel and Transportation | Published on April 11, 2014
Using Oracle WebCenter and Oracle SOA standardization, Choice Hotels consolidated multiple platforms, reduced IT dependency and realized tremendous benefits in total cost of ownership and faster time to market support...

An Interview with Schneider National's Judy Lemke. Company: Schneider National | Video | Travel and Transportation | Published on December 17, 2013
Judy Lemke talks with Mark Sunday about the challenges Schneider National faced and how they overcame them through a companywide transformational change.

For more details on these case studies, you can use this pre-filtered search on "Travel and Transportation" / "Middleware" / "Service Oriented Architecture" or browse on your own at www.oracle.com/customers

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed key enterprise requirements such as "broad and deep solutions", "running your mission critical applications" and an "automated and integrated set of capabilities". Let me walk you through some key cloud attributes such as "elasticity" and "self-service" and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions, and how our products have been specifically engineered to address enterprise cloud needs.

Cloud Elasticity and Enterprise Applications Requirements

Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of "just-in-time" serving of compute resources is well known for "stateless" mid-tier servers, such as web application servers and web servers, that just need another machine to start and run on. But what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as "stateless", and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where Cloud meets Grid Computing.

At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity - both vertically and horizontally - without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "Cloud Scale-Out" solution. This may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, but following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters and Automatic Storage Management, on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, the application just talks to a single database, and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
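As a small illustration of that single-database abstraction, here is what a client connection could look like with the Oracle JDBC thin driver: the application names one cluster-wide SCAN address and database service, and RAC places the session on an available instance. This is a sketch, not a reference configuration - host, port, service name and credentials are placeholders, and the Oracle JDBC driver is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: the application sees "one database" even though the service
// is backed by multiple RAC instances. All names below are placeholders.
public class RacConnectionSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//rac-scan.example.com:1521/SALES_SVC";
        try (Connection conn = DriverManager.getConnection(url, "app_user", "app_password")) {
            // Application code is identical whether the cluster has 2 nodes or 8;
            // adding an instance adds capacity without any code change.
            System.out.println("Connected via " + conn.getMetaData().getURL());
        }
    }
}

The point of the sketch is the URL: nothing in it names an individual database instance, which is why the cluster can grow or shrink underneath the application without touching application code.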
For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.

Assemble, Deploy and Manage Enterprise Applications for Cloud

Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, and performance and availability monitoring, are also automated using Oracle Enterprise Manager.

An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility of usage costs through "showback" or "chargeback". With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels "applications-to-disk" in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and set up, service delivery, operations management, metering and chargeback, etc. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud!

Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds as well as infrastructure, platform and application clouds. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

Summary

But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways.

- It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail.
- Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.
- Don't make the mistake of equating cloud with virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
- Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with the same kind of control and transparency that they are used to when using a public cloud service.
- Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and more so in a cloud environment. There are enormous costs associated with stitching a solution together out of disparate components, and even more in maintaining it.

Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • "Oracle ?????????" Oracle Days Tokyo 2012 ?????

    - by OTN-J Master
    Oracle Days Tokyo 2012 was held over two days, 30 and 31 October. Under the theme of simplified IT and unleashed innovation (Simplified IT, Unleash Innovation), the event brought the announcements and session content of Oracle OpenWorld 2012, held in San Francisco from 30 September to 4 October, to a Japanese audience.

The conference offered around 60 sessions across 13 tracks over the two days.

Day 1 centered on Oracle's technology and infrastructure story, including the engineered-systems strategy and Exadata.

Day 2 focused on business applications, middleware and Oracle Cloud, with sessions organized into six areas; the middleware track covered products including WebLogic Server 12c, Oracle Fusion Middleware 11g, Oracle Exalogic, Oracle Event Processing, Oracle Coherence, Oracle Tuxedo ART 12c, and Java, alongside database and business-intelligence sessions.

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is its built-in capture tool, which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is a semi-generic capture form. In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik’s FiddlerCore and the code for the demo provided here:

FiddlerCore Download
FiddlerCore on NuGet
Show me the Code (WebSurge integration code from GitHub)
Download the WinForms Sample Form
West Wind WebSurge (example implementation in live app)

Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details.

Integrating FiddlerCore

FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

PM> Install-Package FiddlerCore

The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form.

Capturing HTTP Content

Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actually start monitoring HTTP traffic. In the following code, directly lifted from WebSurge, I configure a few filter options on a form-level object from the user inputs shown on the form, by assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

void Start()
{
    if (tbIgnoreResources.Checked)
        CaptureConfiguration.IgnoreResources = true;
    else
        CaptureConfiguration.IgnoreResources = false;

    string strProcId = txtProcessId.Text;
    if (strProcId.Contains('-'))
        strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
    strProcId = strProcId.Trim();

    int procId = 0;
    if (!string.IsNullOrEmpty(strProcId))
    {
        if (!int.TryParse(strProcId, out procId))
            procId = 0;
    }
    CaptureConfiguration.ProcessId = procId;
    CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

    FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
    FiddlerApplication.Startup(8888, true, true, true);
}

The key lines for FiddlerCore are just the last two lines of code, which include the event hookup as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look at both the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:

private void FiddlerApplication_AfterSessionComplete(Session sess)
{
    // guard against missing request data before touching the session
    if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
        return;

    // Ignore HTTPS connect requests
    if (sess.RequestMethod == "CONNECT")
        return;

    if (CaptureConfiguration.ProcessId > 0)
    {
        if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
            return;
    }

    if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
    {
        if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
            return;
    }

    if (CaptureConfiguration.IgnoreResources)
    {
        string url = sess.fullUrl.ToLower();

        var extensions = CaptureConfiguration.ExtensionFilterExclusions;
        foreach (var ext in extensions)
        {
            if (url.Contains(ext))
                return;
        }

        var filters = CaptureConfiguration.UrlFilterExclusions;
        foreach (var urlFilter in filters)
        {
            if (url.Contains(urlFilter))
                return;
        }
    }

    string headers = sess.oRequest.headers.ToString();
    var reqBody = sess.GetRequestBodyAsString();

    // if you wanted to capture the response
    //string respHeaders = sess.oResponse.headers.ToString();
    //var respBody = sess.GetResponseBodyAsString();

    // replace the HTTP line to inject the full URL
    string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
    int at = headers.IndexOf("\r\n");
    if (at < 0)
        return;
    headers = firstLine + "\r\n" + headers.Substring(at + 1);

    string output = headers + "\r\n" +
                    (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                    Separator + "\r\n\r\n";

    BeginInvoke(new Action<string>((text) =>
    {
        txtCapture.AppendText(text);
        UpdateButtonStatus();
    }), output);
}

The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as to filter out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature. AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports, which is basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

Stopping the FiddlerCore Proxy

Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

void Stop()
{
    FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

    if (FiddlerApplication.IsStarted())
        FiddlerApplication.Shutdown();
}

As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
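As an aside, the hookup pattern for the other lifecycle events mentioned earlier is the same as for AfterSessionComplete. Here is a minimal sketch of my own – not from the original article, though it uses the same FiddlerCore API and assumes a using Fiddler; directive – that hooks BeforeRequest to log each request before it goes out:

// Illustrative only: BeforeRequest fires before the request is forwarded
// to the server, so the response objects are not yet populated here.
FiddlerApplication.BeforeRequest += (Session sess) =>
{
    Console.WriteLine(sess.RequestMethod + " " + sess.fullUrl);
};
FiddlerApplication.Startup(8888, true, true, true);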
Adding Fiddler Certificates with FiddlerCore

One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler with HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately, FiddlerCore itself includes the tools to register the Fiddler certificate directly.

Why does Fiddler need a Certificate in the first Place?

Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data on to its original destination. Fiddler injects itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge – is running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically, but it could be any port). For SSL, however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to the remote server, but when it hands the response back to the client it also has to act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on certificates. If you’re using the desktop version of Fiddler, you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu. This operation does several things:

It installs the Fiddler Root Certificate
It sets trust to this Root Certificate
A new client certificate is generated for each HTTPS site monitored

Certificate Installation with FiddlerCore

You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
Using CertMaker is straightforward, and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler root certificate:

public static bool InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            return false;

        if (!CertMaker.trustRootCert())
            return false;
    }

    return true;
}

public static bool UninstallCertificate()
{
    if (CertMaker.rootCertExists())
    {
        if (!CertMaker.removeFiddlerGeneratedCerts(true))
            return false;
    }

    return true;
}

InstallCertificate() works by first checking whether the root certificate is already installed and, if it isn’t, goes ahead and creates a new one. Creating the certificate is a two-step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up, since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. In other words, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well.

When to check for an installed Certificate

Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further, the trust operation pops up a message box, so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally, I like to make certificate installation explicit – just like Fiddler does – so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate. The menu options call the InstallCertificate() and UninstallCertificate() functions respectively – the experience is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate.
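The actual menu wiring appears only as a screenshot in the original post; a minimal hypothetical version (the handler and menu item names below are my own invention, not WebSurge’s) might look like:

// Hypothetical WinForms menu handlers calling the helpers above;
// the method and control names are illustrative only.
private void mnuInstallCertificate_Click(object sender, EventArgs e)
{
    if (!InstallCertificate())
        MessageBox.Show("Failed to install the Fiddler root certificate.");
}

private void mnuUninstallCertificate_Click(object sender, EventArgs e)
{
    if (!UninstallCertificate())
        MessageBox.Show("Failed to uninstall the Fiddler root certificate.");
}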
Once the cert is installed you can then capture SSL requests. There’s a gotcha, however…

Gotcha: FiddlerCore Certificates don’t stick by Default

When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate, and immediately after installation I was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

CertMaker and BCMakeCert create non-sticky Certificates

It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler root certificate. FiddlerCore, however, installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation of Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache the certificates in Fiddler’s internal preferences. A cache-aware version of InstallCertificate looks something like this:

public static bool InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            return false;

        if (!CertMaker.trustRootCert())
            return false;

        // cache the generated certificate and private key in the
        // application's own configuration so they survive a restart
        App.Configuration.UrlCapture.Cert =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
        App.Configuration.UrlCapture.Key =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
    }

    return true;
}

public static bool UninstallCertificate()
{
    if (CertMaker.rootCertExists())
    {
        if (!CertMaker.removeFiddlerGeneratedCerts(true))
            return false;
    }

    App.Configuration.UrlCapture.Cert = null;
    App.Configuration.UrlCapture.Key = null;

    return true;
}

In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store, which is set after a new certificate has been created. Likewise, I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used you also have to load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
I do this in the capture form’s constructor:

public FiddlerCapture(StressTestForm form)
{
    InitializeComponent();

    CaptureConfiguration = App.Configuration.UrlCapture;
    MainForm = form;

    // push the cached cert and key back into Fiddler's preferences
    // before any call to CertMaker.rootCertExists() is made
    if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
    {
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key",
            App.Configuration.UrlCapture.Key);
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert",
            App.Configuration.UrlCapture.Cert);
    }
}

This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

MakeCert provides sticky Certificates and the same functionality as Fiddler

But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It’s easier to integrate, and as long as you run on Windows and don’t need to support iOS or Android devices it’s simply easier to deal with. To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead, copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None and Copy to Output Directory to Copy if newer. At this point the CertMaker.dll reference has been removed from the project, and the CertMaker.dll and BCMakeCert.dll files have been removed from disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows).

Summary

FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up, because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol.
The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the time. While the book has tons of information in a very readable format, it’s unfortunately not a great reference: it’s hard to find things in the book, and because it’s not available online you can’t search the content electronically. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos!

Resources

FiddlerCore Download
FiddlerCore NuGet
Fiddler Capture Sample Form
Fiddler Capture Form in West Wind WebSurge (GitHub)
Eric Lawrence’s Fiddler Book

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP.

    Read the article

  • SQLAuthority News – SafePeak’s SQL Server Performance Contest – Winners

    - by pinaldave
    SafePeak, the vendor of a unique automated SQL performance acceleration and tuning product, announced the winners of their SQL Performance Contest 2011. The contest was quite unique: the writer of the best, most interesting and most community-liked “performance story” would win an expensive gadget. The judges were the community DBAs, who could participate by “Like”-ing stories and could also win expensive prizes. Robert Pearl, SQL MVP, was the contest supervisor. I liked most of the stories, so I contacted SafePeak, suggested taking part in the giveaway, and they gladly accepted. The winner of the best story is Jason Brimhall (USA), with a story about a proc with a fair amount of business logic. Congratulations Jason! The 3 participants who won the second prize of a $100 Amazon.com gift card are: Michael Corey (USA), Hakim Ali (USA) and Alex Bernal (USA). And the 5 participants who won a printed copy of a book of mine (SQL Wait Stats Joes 2 Pros: SQL Performance Tuning Techniques Using Wait Statistics, Types & Queues) are: Patrick Kansa (USA), Wagner Bianchi (USA), Riyas.V.K (India), Farzana Patwa (USA) and Wagner Crivelini (Brazil). The winners are welcome to send SafePeak their email address to receive the prizes (to “info ‘at’ safepeak.com”). The SafePeak team also asked me to welcome you all to keep sending stories – simply because they (and we all) like to read interesting stuff – as well as to send them ideas for future contests. You can do it from here: www.safepeak.com/SQL-Performance-Contest-2011/Submit-Story Congratulations to everybody! I found this very funny video about SafePeak: it looks like someone (maybe the vendor) played with videos once and created this non-commercial-like clip. SafePeak dynamic caching is an immediate plug-and-play performance acceleration and scalability solution for cloud, hosted and business SQL Server applications. By caching the result sets of queries and stored procedures in memory, while keeping those caches correct and up-to-date using unique patent-pending technology, SafePeak can fix the SQL performance problems and bottlenecks of most applications – most importantly, without actual code changes. By the way, I checked their website prior to this contest announcement and noticed that they are currently running a special end-of-year promotion giving discounts of 30% to 45%. Since the installation is quick and full testing can be done within a couple of days, those who have the need (performance problems) and some budget left over should hurry. A free fully functional trial is here: www.safepeak.com/download, while those that want to start with a quote should ping here: www.safepeak.com/quote. Good luck! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Why does my laptop resume immediately after suspend?

    - by Igor Zinov'yev
    I seem to be having some problem with suspend mode. Every time I try to suspend my laptop, it just locks the screen. Or maybe it successfully suspends just to resume only an instant after. What could cause such a behaviour? I'm running 32-bit Ubuntu 12.04 with the 3.2.0-25 kernel on a HP dv5-1178er Pavilion laptop (Intel Core 2 Duo). Here are the relevant log sections: kern.log: Jun 1 10:42:21 igor-laptop kernel: [ 2225.131171] PM: Syncing filesystems ... done. Jun 1 10:42:21 igor-laptop kernel: [ 2225.141222] PM: Preparing system for mem sleep Jun 1 10:42:21 igor-laptop kernel: [ 2225.141239] Freezing user space processes ... (elapsed 0.01 seconds) done. Jun 1 10:42:21 igor-laptop kernel: [ 2225.156171] Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done. Jun 1 10:42:21 igor-laptop kernel: [ 2225.172139] PM: Entering mem sleep Jun 1 10:42:21 igor-laptop kernel: [ 2225.172169] Suspending console(s) (use no_console_suspend to debug) Jun 1 10:42:21 igor-laptop kernel: [ 2225.172895] sd 0:0:0:0: [sda] Synchronizing SCSI cache Jun 1 10:42:21 igor-laptop kernel: [ 2225.181767] sd 0:0:0:0: [sda] Stopping disk Jun 1 10:42:21 igor-laptop kernel: [ 2225.251089] ene_ir 00:0a: wake-up capability enabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.251115] i8042 aux 00:09: wake-up capability disabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.251133] i8042 kbd 00:08: wake-up capability enabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.251286] jmb38x_ms 0000:06:00.3: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.252491] sdhci-pci 0000:06:00.1: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.264130] uhci_hcd 0000:00:1d.2: PCI INT D disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.264142] uhci_hcd 0000:00:1d.1: PCI INT B disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.264325] uhci_hcd 0000:00:1a.1: PCI INT B disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.288059] uhci_hcd 0000:00:1a.0: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.288097] uhci_hcd 0000:00:1d.3: PCI INT C disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.288135] uhci_hcd 0000:00:1d.0: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.316051] ehci_hcd 0000:00:1d.7: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.316068] ehci_hcd 0000:00:1a.7: PCI INT D disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.522872] PM: suspend of drv:sd dev:0:0:0:0 complete after 349.979 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.522901] PM: suspend of drv:scsi dev:target0:0:0 complete after 349.955 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.522927] PM: suspend of drv:scsi dev:host0 complete after 272.260 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.522969] ahci 0000:00:1f.2: BIOS update required for suspend/resume Jun 1 10:42:21 igor-laptop kernel: [ 2225.522976] pci_legacy_suspend(): ahci_pci_device_suspend+0x0/0x80 returns -5 Jun 1 10:42:21 igor-laptop kernel: [ 2225.522981] pm_op(): pci_pm_suspend+0x0/0x110 returns -5 Jun 1 10:42:21 igor-laptop kernel: [ 2225.522984] PM: suspend of drv:ahci dev:0000:00:1f.2 complete after 258.932 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.522987] PM: Device 0000:00:1f.2 failed to suspend async: error -5 Jun 1 10:42:21 igor-laptop kernel: [ 2225.576228] snd_hda_intel 0000:00:1b.0: PCI INT A disabled Jun 1 10:42:21 igor-laptop kernel: [ 2225.576270] ACPI handle has no context! 
Jun 1 10:42:21 igor-laptop kernel: [ 2225.592136] PM: suspend of drv:snd_hda_intel dev:0000:00:1b.0 complete after 327.889 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.592206] PM: Some devices failed to suspend Jun 1 10:42:21 igor-laptop kernel: [ 2225.592291] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592298] uhci_hcd 0000:00:1a.0: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592325] usb usb3: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592339] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592345] uhci_hcd 0000:00:1a.1: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592371] usb usb4: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592387] ehci_hcd 0000:00:1a.7: PCI INT D -> GSI 19 (level, low) -> IRQ 19 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592395] ehci_hcd 0000:00:1a.7: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592843] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592851] uhci_hcd 0000:00:1d.0: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592854] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592863] uhci_hcd 0000:00:1d.1: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592878] usb usb5: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592892] usb usb6: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592895] uhci_hcd 0000:00:1d.2: PCI INT D -> GSI 16 (level, low) -> IRQ 16 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592903] uhci_hcd 0000:00:1d.2: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592906] uhci_hcd 0000:00:1d.3: PCI INT C -> GSI 18 (level, low) -> IRQ 18 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592915] uhci_hcd 0000:00:1d.3: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592930] usb usb7: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592946] usb usb8: root hub lost power or was reset Jun 1 10:42:21 igor-laptop kernel: [ 2225.592949] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 20 (level, low) -> IRQ 20 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592957] ehci_hcd 0000:00:1d.7: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.592963] pci 0000:00:1e.0: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.597106] sd 0:0:0:0: [sda] Starting disk Jun 1 10:42:21 igor-laptop kernel: [ 2225.608138] snd_hda_intel 0000:00:1b.0: BAR 0: set to [mem 0xdf300000-0xdf303fff 64bit] (PCI address [0xdf300000-0xdf303fff]) Jun 1 10:42:21 igor-laptop kernel: [ 2225.608180] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0xf (was 0x100, writing 0x10b) Jun 1 10:42:21 igor-laptop kernel: [ 2225.608233] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x3 (was 0x0, writing 0x10) Jun 1 10:42:21 igor-laptop kernel: [ 2225.608248] snd_hda_intel 0000:00:1b.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100002) Jun 1 10:42:21 igor-laptop kernel: [ 2225.608299] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22 Jun 1 10:42:21 igor-laptop kernel: [ 2225.608313] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 
2225.608420] snd_hda_intel 0000:00:1b.0: irq 50 for MSI/MSI-X Jun 1 10:42:21 igor-laptop kernel: [ 2225.612095] firewire_ohci 0000:06:00.0: restoring config space at offset 0x1 (was 0x100000, writing 0x100006) Jun 1 10:42:21 igor-laptop kernel: [ 2225.612181] sdhci-pci 0000:06:00.1: restoring config space at offset 0x1 (was 0x100003, writing 0x100007) Jun 1 10:42:21 igor-laptop kernel: [ 2225.612211] sdhci-pci 0000:06:00.1: PCI INT A -> GSI 16 (level, low) -> IRQ 16 Jun 1 10:42:21 igor-laptop kernel: [ 2225.612225] sdhci-pci 0000:06:00.1: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.612296] jmb38x_ms 0000:06:00.3: restoring config space at offset 0x1 (was 0x100003, writing 0x100007) Jun 1 10:42:21 igor-laptop kernel: [ 2225.612326] jmb38x_ms 0000:06:00.3: PCI INT A -> GSI 16 (level, low) -> IRQ 16 Jun 1 10:42:21 igor-laptop kernel: [ 2225.612332] jmb38x_ms 0000:06:00.3: setting latency timer to 64 Jun 1 10:42:21 igor-laptop kernel: [ 2225.699170] PM: resume of drv:uvcvideo dev:2-4:1.0 complete after 101.965 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.699179] PM: resume of drv:uvcvideo dev:2-4:1.1 complete after 101.932 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.699186] PM: resume of drv: dev:ep_00 complete after 101.917 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.699197] PM: resume of drv: dev:ep_83 complete after 101.972 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716148] PM: resume of drv:hub dev:3-0:1.0 complete after 119.543 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716155] PM: resume of drv: dev:ep_00 complete after 119.544 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716161] PM: resume of drv:hub dev:5-0:1.0 complete after 119.420 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716168] PM: resume of drv: dev:ep_00 complete after 119.381 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716174] PM: resume of drv:hub dev:8-0:1.0 complete after 119.141 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716181] PM: resume of drv: dev:ep_00 complete after 119.104 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716186] PM: resume of drv: dev:ep_81 complete after 119.579 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716191] PM: resume of drv: dev:ep_81 complete after 119.427 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.716197] PM: resume of drv: dev:ep_81 complete after 119.143 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.747148] firewire_core: skipped bus generations, destroying all nodes Jun 1 10:42:21 igor-laptop kernel: [ 2225.776093] PM: resume of drv:hp_accel dev:HPQ0004:00 complete after 167.225 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.777243] i8042 kbd 00:08: wake-up capability disabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.777278] ene_ir 00:0a: wake-up capability disabled by ACPI Jun 1 10:42:21 igor-laptop kernel: [ 2225.820100] PM: resume of drv:hub dev:4-0:1.0 complete after 223.436 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820115] PM: resume of drv: dev:ep_00 complete after 223.444 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820123] PM: resume of drv: dev:ep_81 complete after 223.456 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820206] PM: resume of drv:hub dev:7-0:1.0 complete after 223.266 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820221] PM: resume of drv: dev:ep_81 complete after 223.260 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820238] PM: resume of drv: dev:ep_00 complete after 223.255 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820295] PM: resume of drv:hub dev:6-0:1.0 
complete after 223.453 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820302] PM: resume of drv: dev:ep_00 complete after 223.415 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.820321] PM: resume of drv: dev:ep_81 complete after 223.457 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2225.932108] usb 4-2: reset full-speed USB device number 2 using uhci_hcd Jun 1 10:42:21 igor-laptop kernel: [ 2226.086714] PM: resume of drv:usbhid dev:4-2:1.0 complete after 489.393 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.086728] PM: resume of drv: dev:ep_81 complete after 489.384 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.086745] PM: resume of drv: dev:ep_00 complete after 489.329 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.086753] PM: resume of drv:usbhid dev:4-2:1.1 complete after 489.384 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.086764] PM: resume of drv: dev:ep_82 complete after 489.373 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.180555] usb 7-2: reset full-speed USB device number 2 using uhci_hcd Jun 1 10:42:21 igor-laptop kernel: [ 2226.244858] firewire_core: rediscovered device fw0 Jun 1 10:42:21 igor-laptop kernel: [ 2226.335066] btusb 7-2:1.0: no reset_resume for driver btusb? Jun 1 10:42:21 igor-laptop kernel: [ 2226.335068] btusb 7-2:1.1: no reset_resume for driver btusb? Jun 1 10:42:21 igor-laptop kernel: [ 2226.432082] usb 6-1: reset full-speed USB device number 2 using uhci_hcd Jun 1 10:42:21 igor-laptop kernel: [ 2226.578280] PM: resume of drv:nvidia dev:0000:01:00.0 complete after 985.301 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584296] PM: resume of drv:usb dev:7-2:1.0 complete after 986.693 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584308] PM: resume of drv: dev:ep_00 complete after 986.452 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584311] PM: resume of drv:usb dev:7-2:1.1 complete after 986.616 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584315] PM: resume of drv:usb dev:7-2:1.3 complete after 986.483 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584320] PM: resume of drv:usb dev:7-2:1.2 complete after 986.556 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584328] PM: resume of drv: dev:ep_03 complete after 986.588 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584331] PM: resume of drv: dev:ep_81 complete after 986.704 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584334] PM: resume of drv: dev:ep_83 complete after 986.617 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584337] PM: resume of drv: dev:ep_82 complete after 986.688 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584340] PM: resume of drv: dev:ep_02 complete after 986.667 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584344] PM: resume of drv: dev:ep_84 complete after 986.558 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.584352] PM: resume of drv: dev:ep_04 complete after 986.542 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590883] PM: resume of drv: dev:ep_00 complete after 993.327 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590887] PM: resume of drv:usb dev:6-1:1.0 complete after 993.424 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590927] PM: resume of drv: dev:ep_82 complete after 993.395 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590934] PM: resume of drv: dev:ep_81 complete after 993.426 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.590940] PM: resume of drv: dev:ep_01 complete after 993.456 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.592450] PM: resume of drv:sd dev:0:0:0:0 complete after 995.343 msecs Jun 1 10:42:21 igor-laptop kernel: [ 
2226.592461] PM: resume of drv:scsi_disk dev:0:0:0:0 complete after 802.688 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.592472] PM: resume of drv:scsi_device dev:0:0:0:0 complete after 995.324 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.600339] PM: resume of devices complete after 1008.129 msecs Jun 1 10:42:21 igor-laptop kernel: [ 2226.601293] PM: resume devices took 1.008 seconds Jun 1 10:42:21 igor-laptop kernel: [ 2226.601330] PM: Finishing wakeup. Jun 1 10:42:21 igor-laptop kernel: [ 2226.601332] Restarting tasks ... done. Jun 1 10:42:21 igor-laptop kernel: [ 2226.625660] video LNXVIDEO:01: Restoring backlight state Jun 1 10:42:22 igor-laptop kernel: [ 2227.478921] iwlwifi 0000:02:00.0: L1 Disabled; Enabling L0S Jun 1 10:42:22 igor-laptop kernel: [ 2227.481981] iwlwifi 0000:02:00.0: Radio type=0x1-0x2-0x0 Jun 1 10:42:22 igor-laptop kernel: [ 2227.527727] ADDRCONF(NETDEV_UP): wlan0: link is not ready Jun 1 10:42:22 igor-laptop kernel: [ 2227.532468] r8169 0000:03:00.0: eth0: link down Jun 1 10:42:22 igor-laptop kernel: [ 2227.533967] ADDRCONF(NETDEV_UP): eth0: link is not ready pm_suspend.log: Fri Jun 1 10:42:14 MSK 2012: Running hooks for suspend. Running hook /usr/lib/pm-utils/sleep.d/000kernel-change suspend suspend: /usr/lib/pm-utils/sleep.d/000kernel-change suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/00logging suspend suspend: Linux igor-laptop 3.2.0-25-generic #40-Ubuntu SMP Wed May 23 20:33:05 UTC 2012 i686 i686 i386 GNU/Linux Module Size Used by pci_stub 12550 1 vboxpci 22882 0 vboxnetadp 13328 0 vboxnetflt 27211 0 vboxdrv 252189 3 vboxpci,vboxnetadp,vboxnetflt dm_crypt 22528 0 snd_hda_codec_hdmi 31775 1 snd_hda_codec_idt 60251 1 arc4 12473 2 hp_wmi 13652 0 sparse_keymap 13658 1 hp_wmi rfcomm 38139 12 snd_hda_intel 32765 5 snd_hda_codec 109562 3 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec bnep 17830 2 btusb 17912 2 bluetooth 158438 23 rfcomm,bnep,btusb joydev 17393 0 parport_pc 32114 0 snd_pcm 80845 4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec ppdev 12849 0 uvcvideo 67203 0 binfmt_misc 17292 1 videodev 86588 1 uvcvideo snd_seq_midi 13132 0 snd_rawmidi 25424 1 snd_seq_midi nvidia 10958194 43 snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event ir_lirc_codec 12739 0 lirc_dev 18700 1 ir_lirc_codec snd_timer 28931 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq ir_mce_kbd_decoder 12681 0 ir_sony_decoder 12462 0 ir_jvc_decoder 12459 0 ir_rc6_decoder 12459 0 psmouse 87213 0 ir_rc5_decoder 12459 0 serio_raw 13027 0 iwlwifi 287934 0 rc_rc6_mce 12454 0 ir_nec_decoder 12459 0 ene_ir 18019 0 rc_core 21263 10 ir_lirc_codec,ir_mce_kbd_decoder,ir_sony_decoder,ir_jvc_decoder,ir_rc6_decoder,ir_rc5_decoder,rc_rc6_mce,ir_nec_decoder,ene_ir mac80211 436455 1 iwlwifi snd 62064 19 snd_hda_codec_hdmi,snd_hda_codec_idt,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device cfg80211 178679 2 iwlwifi,mac80211 hp_accel 25728 0 lis3lv02d 19268 1 hp_accel input_polldev 13648 1 lis3lv02d mac_hid 13077 0 wmi 18744 1 hp_wmi jmb38x_ms 17406 0 soundcore 14635 1 snd snd_page_alloc 14115 2 snd_hda_intel,snd_pcm memstick 15857 1 jmb38x_ms firewire_sbp2 18346 0 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp vesafb 13516 1 usbhid 41906 0 hid 77367 1 usbhid firewire_ohci 40180 0 firewire_core 56906 2 firewire_sbp2,firewire_ohci crc_itu_t 12627 1 firewire_core sdhci_pci 18324 0 sdhci 28241 1 sdhci_pci r8169 56321 0 video 19068 0 total used free shared 
buffers cached Mem: 3095544 2364260 731284 0 159020 1280240 -/+ buffers/cache: 925000 2170544 Swap: 1718916 0 1718916 /usr/lib/pm-utils/sleep.d/00logging suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/00powersave suspend suspend: /usr/lib/pm-utils/sleep.d/00powersave suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/01PulseAudio suspend suspend: Welcome to PulseAudio! Use "help" for usage information. >>> >>> Welcome to PulseAudio! Use "help" for usage information. >>> >>> Welcome to PulseAudio! Use "help" for usage information. >>> >>> /usr/lib/pm-utils/sleep.d/01PulseAudio suspend suspend: success. Running hook /etc/pm/sleep.d/10_grub-common suspend suspend: /etc/pm/sleep.d/10_grub-common suspend suspend: success. Running hook /etc/pm/sleep.d/10_unattended-upgrades-hibernate suspend suspend: /etc/pm/sleep.d/10_unattended-upgrades-hibernate suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/55NetworkManager suspend suspend: Having NetworkManager put all interaces to sleep...Failed. /usr/lib/pm-utils/sleep.d/55NetworkManager suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/60_wpa_supplicant suspend suspend: Failed to connect to wpa_supplicant - wpa_ctrl_open: No such file or directory /usr/lib/pm-utils/sleep.d/60_wpa_supplicant suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/75modules suspend suspend: /usr/lib/pm-utils/sleep.d/75modules suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/90clock suspend suspend: /usr/lib/pm-utils/sleep.d/90clock suspend suspend: not applicable. Running hook /usr/lib/pm-utils/sleep.d/94cpufreq suspend suspend: /usr/lib/pm-utils/sleep.d/94cpufreq suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/95anacron suspend suspend: stop: Unknown instance: /usr/lib/pm-utils/sleep.d/95anacron suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/95hdparm-apm suspend suspend: /usr/lib/pm-utils/sleep.d/95hdparm-apm suspend suspend: not applicable. Running hook /usr/lib/pm-utils/sleep.d/95led suspend suspend: /usr/lib/pm-utils/sleep.d/95led suspend suspend: not applicable. Running hook /usr/lib/pm-utils/sleep.d/98video-quirk-db-handler suspend suspend: nVidia binary video drive detected, not using quirks. /usr/lib/pm-utils/sleep.d/98video-quirk-db-handler suspend suspend: success. Running hook /usr/lib/pm-utils/sleep.d/99video suspend suspend: kernel.acpi_video_flags = 0 /usr/lib/pm-utils/sleep.d/99video suspend suspend: success. Running hook /etc/pm/sleep.d/novatel_3g_suspend suspend suspend: /etc/pm/sleep.d/novatel_3g_suspend suspend suspend: success. Fri Jun 1 10:42:19 MSK 2012: performing suspend Fri Jun 1 10:42:21 MSK 2012: Awake. Fri Jun 1 10:42:21 MSK 2012: Running hooks for resume Running hook /etc/pm/sleep.d/novatel_3g_suspend resume suspend: /etc/pm/sleep.d/novatel_3g_suspend resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/99video resume suspend: /usr/lib/pm-utils/sleep.d/99video resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/98video-quirk-db-handler resume suspend: /usr/lib/pm-utils/sleep.d/98video-quirk-db-handler resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/95led resume suspend: /usr/lib/pm-utils/sleep.d/95led resume suspend: not applicable. 
Running hook /usr/lib/pm-utils/sleep.d/95hdparm-apm resume suspend: /dev/sda: setting Advanced Power Management level to 0xfe (254) APM_level = 254 /dev/sda: setting Advanced Power Management level to 0xfe (254) APM_level = 254 /usr/lib/pm-utils/sleep.d/95hdparm-apm resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/95anacron resume suspend: /usr/lib/pm-utils/sleep.d/95anacron resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/94cpufreq resume suspend: /usr/lib/pm-utils/sleep.d/94cpufreq resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/90clock resume suspend: /usr/lib/pm-utils/sleep.d/90clock resume suspend: not applicable. Running hook /usr/lib/pm-utils/sleep.d/75modules resume suspend: Reloaded unloaded modules. /usr/lib/pm-utils/sleep.d/75modules resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/60_wpa_supplicant resume suspend: Failed to connect to wpa_supplicant - wpa_ctrl_open: No such file or directory /usr/lib/pm-utils/sleep.d/60_wpa_supplicant resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/55NetworkManager resume suspend: Having NetworkManager wake interfaces back up...Failed. /usr/lib/pm-utils/sleep.d/55NetworkManager resume suspend: success. Running hook /etc/pm/sleep.d/10_unattended-upgrades-hibernate resume suspend: /etc/pm/sleep.d/10_unattended-upgrades-hibernate resume suspend: success. Running hook /etc/pm/sleep.d/10_grub-common resume suspend: /etc/pm/sleep.d/10_grub-common resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/01PulseAudio resume suspend: Welcome to PulseAudio! Use "help" for usage information. >>> >>> Welcome to PulseAudio! Use "help" for usage information. >>> >>> Welcome to PulseAudio! Use "help" for usage information. >>> >>> /usr/lib/pm-utils/sleep.d/01PulseAudio resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/00powersave resume suspend: /usr/lib/pm-utils/sleep.d/00powersave resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/00logging resume suspend: /usr/lib/pm-utils/sleep.d/00logging resume suspend: success. Running hook /usr/lib/pm-utils/sleep.d/000kernel-change resume suspend: /usr/lib/pm-utils/sleep.d/000kernel-change resume suspend: success. Fri Jun 1 10:42:22 MSK 2012: Finished.

    Read the article

  • Why do apache2 upgrades remove and not re-install libapache2-mod-php5?

    - by nutznboltz
    We repeatedly see that when an apache2 update arrives and is installed, it causes the libapache2-mod-php5 package to be removed and does not subsequently re-install it automatically. We must then re-install libapache2-mod-php5 manually in order to restore functionality to our web server. Please see the following GitHub gist; it is a contiguous section of our server's dpkg.log showing the November 14, 2011 update to apache2: https://gist.github.com/1368361 It includes:

2011-11-14 11:22:18 remove libapache2-mod-php5 5.3.2-1ubuntu4.10 5.3.2-1ubuntu4.10

Is this a known issue? Do other people see this too? I could not find any Launchpad bug reports about it. Platform details:

$ lsb_release -ds
Ubuntu 10.04.3 LTS
$ uname -srvm
Linux 2.6.38-12-virtual #51~lucid1-Ubuntu SMP Thu Sep 29 20:27:50 UTC 2011 x86_64
$ dpkg -l | awk '/ii.*apache/ {print $2 " " $3 }'
apache2 2.2.14-5ubuntu8.7
apache2-mpm-prefork 2.2.14-5ubuntu8.7
apache2-utils 2.2.14-5ubuntu8.7
apache2.2-bin 2.2.14-5ubuntu8.7
apache2.2-common 2.2.14-5ubuntu8.7
libapache2-mod-authnz-external 3.2.4-2+squeeze1build0.10.04.1
libapache2-mod-php5 5.3.2-1ubuntu4.10

Thanks.

At a high level the update process looks like:

package package_name do
  action :upgrade
  case node[:platform]
  when 'centos', 'redhat', 'scientific'
    options '--disableplugin=fastestmirror'
  when 'ubuntu'
    options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"'
  end
end

But at a lower level:

def install_package(name, version)
  run_command_with_systems_locale(
    :command => "apt-get -q -y#{expand_options(@new_resource.options)} install #{name}=#{version}",
    :environment => {
      "DEBIAN_FRONTEND" => "noninteractive"
    }
  )
end

def upgrade_package(name, version)
  install_package(name, version)
end

So Chef is using "install" to do "update". This sort of moves the question around to "how does apt-get safe-upgrade remember to re-install libapache2-mod-php5?" The exact sequence of packages that triggered this was:

apache2
apache2-mpm-prefork
apache2-mpm-worker
apache2-utils
apache2.2-bin
apache2.2-common

But the code is attempting to run checks to make sure the packages in that list are installed already before attempting to "upgrade" them.

case node[:platform]
when 'debian', 'centos', 'fedora', 'redhat', 'scientific', 'ubuntu'
  # first primitive way is to define the updates in the recipe
  # data bags will be used later
  %w/ apache2 apache2-mpm-prefork apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common /.each{ |package_name|
    Chef::Log.debug("is #{package_name} among local packages available for changes?")
    next unless node[:packages][:changes].keys.include?(package_name)

    Chef::Log.debug("is #{package_name} available for upgrade?")
    next unless node[:packages][:changes][package_name][:action] == 'upgrade'

    package package_name do
      action :upgrade
      case node[:platform]
      when 'centos', 'redhat', 'scientific'
        options '--disableplugin=fastestmirror'
      when 'ubuntu'
        options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"'
      end
    end

    tag('upgraded')
  }

  # after upgrading everything, run yum cache updater
  if tagged?('upgraded')
    # Remove old orphaned dependencies and kernel images and kernel headers etc.
    # Remove cached deb files.
    case node[:platform]
    when 'ubuntu'
      execute 'apt-get -y autoremove'
      execute 'apt-get clean'
    # Re-check what updates are available soon.
    when 'centos', 'fedora', 'redhat', 'scientific'
      node[:packages][:last_time_we_looked_at_yum] = 0
    end
    untag('upgraded')
  end
end

But it's clear that it fails, since the dpkg.log has

2011-11-14 11:22:25 install apache2-mpm-worker 2.2.14-5ubuntu8.7

on a system which does not currently have apache2-mpm-worker installed. I will have to discuss this with the author. Thanks again.

    Read the article

  • NHibernate 2 Beginner's Guide Review

    - by Ricardo Peres
    OK, here's the review I promised a while ago. This is a beginner's introduction to NHibernate, so if you already have some experience with NHibernate, you will notice it lacks a lot of concepts and information. It starts with a good description of NHibernate and why we would use it. It goes on to describe basic mapping scenarios with primary keys generated by the HiLo or Identity algorithms, without actually explaining why we would choose one over the other. As for mapping, the book talks about XML mappings and provides a simple example of Fluent NHibernate, comparing it to its XML counterpart. When it comes to relations, it covers one-to-many/many-to-one and many-to-many, but not one-to-one relations, and only talks briefly about lazy loading, which is, IMO, an important concept. Only Bags are described, not any of the other collection types. The log4net configuration description gets its own chapter, which I find excessive. The chapter on configuration merely lists the most common properties for configuring NHibernate, both in XML and in code. Querying only covers loading by ID (using Get, not Load) and the Criteria API, for which a paging example is presented as well as some common filtering options (property equals/like/between; no examples on conjunction/disjunction, however). There's a chapter fully dedicated to ASP.NET, which explains how we can use NHibernate in web applications; it basically talks about ASP.NET concepts, though. Following it, another chapter explains how we can build our own ASP.NET providers (Membership, Role) using NHibernate. The available entity generators for NHibernate are listed and evaluated in a chapter of their own; the list is fine (CodeSmith, nhib-gen, AjGenesis, Visual NHibernate, MyGeneration, NGen, NHModeler, Microsoft T4 (?) and hbm2net), and examples are provided whenever possible. However, I have some problems with some of the evaluations: for example, Visual NHibernate scores 5 out of 5 on Visual Studio integration, which simply does not exist! I suspect the author means to say that it can be launched from inside Visual Studio, but then, what can't? Finally, there's a chapter I really don't understand. It seems like a bag where a lot of things are thrown in, like NHibernate Burrow (which actually isn't explained at all), Blog.Net components, CSS template conversion and web.config settings related to the maximum request length for file uploads, ending with XML configuration with the help of GhostDoc. Like I said, the book is only good for absolute beginners; it does a fair job of explaining the very basics, but lacks a lot of not-so-basic concepts. Among other things, it lacks:

Inheritance mapping strategies (table per class hierarchy, table per class, table per concrete class)
Load versus Get usage
Other useful ISession methods
First level cache (Identity Map pattern)
Collection types other than Bag (Set, List, Map, IdBag, etc.)
Fetch options
User Types
Filters
Named queries
LINQ examples
HQL examples

And that's it! I hope you find this review useful. The link to the book site is https://www.packtpub.com/nhibernate-2-x-beginners-guide/book
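To make the Load versus Get distinction from the list above concrete, here is a minimal sketch of my own – not from the book – assuming a mapped Customer entity and an open ISession named session:

// Get() queries the database (or cache) immediately and
// returns null when no row exists for the given id.
Customer c1 = session.Get<Customer>(42);
if (c1 == null)
{
    // the entity definitely doesn't exist
}

// Load() returns an uninitialized proxy without hitting the database;
// if no row exists, an ObjectNotFoundException is only thrown when a
// property other than the id is first accessed.
Customer c2 = session.Load<Customer>(42);
string name = c2.Name; // the database hit happens here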

    Read the article

  • Getting started with Blocks and namespaces - Enterprise Library 5.0 Tutorial Part 2

    This is my second post in this series. In the first blog post I explained how to install Enterprise Library 5.0 and provided links to various resources. Enterprise Library is divided into various blocks. Simply put, a block is a ready-made solution for a particular problem that is common across applications. So instead of implementing these common concerns ourselves in every application, we can reuse these fully tested and extensible blocks, which increases productivity and extensibility, as the blocks are built with good design principles and patterns. The major blocks of Enterprise Library 5.0 are as follows:
    - Core infrastructure
    - Functional Application Blocks: Caching, Data, Exception Handling, Logging, Security, Cryptography, Validation
    - Wiring Application Blocks: Unity, Policy Injection/Interception
    Each block resides in its own assembly, with some extra assemblies for common infrastructure. The assemblies are as follows:
    Microsoft.Practices.EnterpriseLibrary.Caching.Cryptography.dll
    Microsoft.Practices.EnterpriseLibrary.Caching.Database.dll
    Microsoft.Practices.EnterpriseLibrary.Caching.dll
    Microsoft.Practices.EnterpriseLibrary.Common.dll
    Microsoft.Practices.EnterpriseLibrary.Configuration.Design.HostAdapter.dll
    Microsoft.Practices.EnterpriseLibrary.Configuration.Design.HostAdapterV5.dll
    Microsoft.Practices.EnterpriseLibrary.Configuration.DesignTime.dll
    Microsoft.Practices.EnterpriseLibrary.Configuration.EnvironmentalOverrides.dll
    Microsoft.Practices.EnterpriseLibrary.Data.dll
    Microsoft.Practices.EnterpriseLibrary.Data.SqlCe.dll
    Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.dll
    Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.dll
    Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF.dll
    Microsoft.Practices.EnterpriseLibrary.Logging.Database.dll
    Microsoft.Practices.EnterpriseLibrary.Logging.dll
    Microsoft.Practices.EnterpriseLibrary.PolicyInjection.dll
    Microsoft.Practices.EnterpriseLibrary.Security.Cache.CachingStore.dll
    Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.dll
    Microsoft.Practices.EnterpriseLibrary.Security.dll
    Microsoft.Practices.EnterpriseLibrary.Validation.dll
    Microsoft.Practices.EnterpriseLibrary.Validation.Integration.AspNet.dll
    Microsoft.Practices.EnterpriseLibrary.Validation.Integration.WCF.dll
    Microsoft.Practices.EnterpriseLibrary.Validation.Integration.WinForms.dll
    Microsoft.Practices.ServiceLocation.dll
    Microsoft.Practices.Unity.Configuration.dll
    Microsoft.Practices.Unity.dll
    Microsoft.Practices.Unity.Interception.dll
    Enterprise Library Configuration Tool
    In addition to these assemblies you get the configuration tool "EntLibConfig-32.exe". If you are targeting your application at the .NET 4.0 framework, you need to use "EntLibConfig.NET4.exe" instead. Optionally, you can install the Visual Studio 2008 and Visual Studio 2010 add-ins while installing Enterprise Library, so that you can invoke the Enterprise Library configuration tool from Visual Studio by right-clicking on an "app.config" or "web.config" file, as shown below. I suggest you download the documentation from Codeplex, released in May 2010; it is about 3 MB. You can also find the issue tracker, which lists the issues/bugs people are currently discussing around Enterprise Library, and a discussion link that takes you to the community site where you can post your questions. In my next blog post, I will cover each block in more detail.
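    To give a first taste before the next post digs into each block, here is a minimal hedged C# sketch that resolves objects from two blocks through EnterpriseLibraryContainer, the Enterprise Library 5.0 resolution mechanism; the "MyDb" connection string name and the Orders table are hypothetical and would come from your own configuration:

    using System;
    using System.Data;
    using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
    using Microsoft.Practices.EnterpriseLibrary.Data;
    using Microsoft.Practices.EnterpriseLibrary.Logging;

    class BlocksDemo
    {
        static void Main()
        {
            // Resolve the Data Access block's Database object for the
            // "MyDb" connection string defined in app.config (hypothetical name).
            Database db = EnterpriseLibraryContainer.Current.GetInstance<Database>("MyDb");
            int orderCount = (int)db.ExecuteScalar(CommandType.Text, "SELECT COUNT(*) FROM Orders");

            // Resolve the Logging block's LogWriter and write an entry
            // through whatever listeners the configuration defines.
            LogWriter log = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
            log.Write("Order count: " + orderCount);
        }
    }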

    Read the article

  • Errors when installing Open Office

    - by user109036
    I followed the first set of instructions on this page to install Open Office: How to install Open Office? However, the last step which says to change the CHMOD of a folder, I got an error saying that the directory does not exist. Open Office now appears in my Ubuntu start menu, but clicking on it does nothing. I tried a reboot. Below is what I could copy from my terminal. I am running the latest Ubuntu. I have not uninstalled Libreoffice as suggested somewhere. The reason is that in the Ubuntu software centre, Libre office appears to be made up of several components and I don't know which ones to remove (or all maybe?). They are Libreoffice Draw, Math, Writer, Calc. After this operation, 480 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-lib all 6b24-1.11.5-0ubuntu1~12.10.1 [6,135 kB] Get:2 http://ppa.launchpad.net/upubuntu-com/office/ubuntu/ quantal/main openoffice amd64 3.4~oneiric [321 MB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ca-certificates-java all 20120721 [13.2 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main tzdata-java all 2012e-0ubuntu2 [140 kB] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main java-common all 0.43ubuntu3 [61.7 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-headless amd64 6b24-1.11.5-0ubuntu1~12.10.1 [25.4 MB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgif4 amd64 4.1.6-9.1ubuntu1 [31.3 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre amd64 6b24-1.11.5-0ubuntu1~12.10.1 [234 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java all 0.30.4-0ubuntu4 [29.8 kB] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java-jni amd64 0.30.4-0ubuntu4 [31.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xorg-sgml-doctools all 1:1.10-1 [12.0 kB] Get:12 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-core-dev all 7.0.23-1 [744 kB] Get:13 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libice-dev amd64 2:1.0.8-2 [57.6 kB] Get:14 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0 amd64 0.3-3 [3,258 B] Get:15 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0-dev amd64 0.3-3 [2,866 B] Get:16 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libsm-dev amd64 2:1.2.1-2 [19.9 kB] Get:17 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau-dev amd64 1:1.0.7-1 [10.2 kB] Get:18 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp-dev amd64 1:1.1.1-1 [26.9 kB] Get:19 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-input-dev all 2.2-1 [133 kB] Get:20 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-kb-dev all 1.0.6-2 [269 kB] Get:21 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xtrans-dev all 1.2.7-1 [84.3 kB] Get:22 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1-dev amd64 1.8.1-1ubuntu1 [82.6 kB] Get:23 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-dev amd64 2:1.5.0-1 [912 kB] Get:24 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-doc all 2:1.5.0-1 [2,460 kB] Get:25 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxt-dev amd64 1:1.1.3-1 [492 kB] Get:26 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ttf-dejavu-extra all 2.33-2ubuntu1 [3,420 kB] Get:27 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-cacao amd64 6b24-1.11.5-0ubuntu1~12.10.1 [417 kB] 
Get:28 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-jamvm amd64 6b24-1.11.5-0ubuntu1~12.10.1 [581 kB] Get:29 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx-common all 1.3-1ubuntu1.1 [617 kB] Get:30 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx amd64 1.3-1ubuntu1.1 [16.2 kB] Get:31 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jdk amd64 6b24-1.11.5-0ubuntu1~12.10.1 [11.1 MB] Fetched 374 MB in 9min 18s (671 kB/s) Extract templates from packages: 100% Selecting previously unselected package openjdk-6-jre-lib. (Reading database ... 143191 files and directories currently installed.) Unpacking openjdk-6-jre-lib (from .../openjdk-6-jre-lib_6b24-1.11.5-0ubuntu1~12.10.1_all.deb) ... Selecting previously unselected package ca-certificates-java. Unpacking ca-certificates-java (from .../ca-certificates-java_20120721_all.deb) ... Selecting previously unselected package tzdata-java. Unpacking tzdata-java (from .../tzdata-java_2012e-0ubuntu2_all.deb) ... Selecting previously unselected package java-common. Unpacking java-common (from .../java-common_0.43ubuntu3_all.deb) ... Selecting previously unselected package openjdk-6-jre-headless:amd64. Unpacking openjdk-6-jre-headless:amd64 (from .../openjdk-6-jre-headless_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libgif4:amd64. Unpacking libgif4:amd64 (from .../libgif4_4.1.6-9.1ubuntu1_amd64.deb) ... Selecting previously unselected package openjdk-6-jre:amd64. Unpacking openjdk-6-jre:amd64 (from .../openjdk-6-jre_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libatk-wrapper-java. Unpacking libatk-wrapper-java (from .../libatk-wrapper-java_0.30.4-0ubuntu4_all.deb) ... Selecting previously unselected package libatk-wrapper-java-jni:amd64. Unpacking libatk-wrapper-java-jni:amd64 (from .../libatk-wrapper-java-jni_0.30.4-0ubuntu4_amd64.deb) ... Selecting previously unselected package xorg-sgml-doctools. Unpacking xorg-sgml-doctools (from .../xorg-sgml-doctools_1%3a1.10-1_all.deb) ... Selecting previously unselected package x11proto-core-dev. Unpacking x11proto-core-dev (from .../x11proto-core-dev_7.0.23-1_all.deb) ... Selecting previously unselected package libice-dev:amd64. Unpacking libice-dev:amd64 (from .../libice-dev_2%3a1.0.8-2_amd64.deb) ... Selecting previously unselected package libpthread-stubs0:amd64. Unpacking libpthread-stubs0:amd64 (from .../libpthread-stubs0_0.3-3_amd64.deb) ... Selecting previously unselected package libpthread-stubs0-dev:amd64. Unpacking libpthread-stubs0-dev:amd64 (from .../libpthread-stubs0-dev_0.3-3_amd64.deb) ... Selecting previously unselected package libsm-dev:amd64. Unpacking libsm-dev:amd64 (from .../libsm-dev_2%3a1.2.1-2_amd64.deb) ... Selecting previously unselected package libxau-dev:amd64. Unpacking libxau-dev:amd64 (from .../libxau-dev_1%3a1.0.7-1_amd64.deb) ... Selecting previously unselected package libxdmcp-dev:amd64. Unpacking libxdmcp-dev:amd64 (from .../libxdmcp-dev_1%3a1.1.1-1_amd64.deb) ... Selecting previously unselected package x11proto-input-dev. Unpacking x11proto-input-dev (from .../x11proto-input-dev_2.2-1_all.deb) ... Selecting previously unselected package x11proto-kb-dev. Unpacking x11proto-kb-dev (from .../x11proto-kb-dev_1.0.6-2_all.deb) ... Selecting previously unselected package xtrans-dev. Unpacking xtrans-dev (from .../xtrans-dev_1.2.7-1_all.deb) ... Selecting previously unselected package libxcb1-dev:amd64. 
Unpacking libxcb1-dev:amd64 (from .../libxcb1-dev_1.8.1-1ubuntu1_amd64.deb) ... Selecting previously unselected package libx11-dev:amd64. Unpacking libx11-dev:amd64 (from .../libx11-dev_2%3a1.5.0-1_amd64.deb) ... Selecting previously unselected package libx11-doc. Unpacking libx11-doc (from .../libx11-doc_2%3a1.5.0-1_all.deb) ... Selecting previously unselected package libxt-dev:amd64. Unpacking libxt-dev:amd64 (from .../libxt-dev_1%3a1.1.3-1_amd64.deb) ... Selecting previously unselected package ttf-dejavu-extra. Unpacking ttf-dejavu-extra (from .../ttf-dejavu-extra_2.33-2ubuntu1_all.deb) ... Selecting previously unselected package icedtea-6-jre-cacao:amd64. Unpacking icedtea-6-jre-cacao:amd64 (from .../icedtea-6-jre-cacao_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-6-jre-jamvm:amd64. Unpacking icedtea-6-jre-jamvm:amd64 (from .../icedtea-6-jre-jamvm_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-netx-common. Unpacking icedtea-netx-common (from .../icedtea-netx-common_1.3-1ubuntu1.1_all.deb) ... Selecting previously unselected package icedtea-netx:amd64. Unpacking icedtea-netx:amd64 (from .../icedtea-netx_1.3-1ubuntu1.1_amd64.deb) ... Selecting previously unselected package openjdk-6-jdk:amd64. Unpacking openjdk-6-jdk:amd64 (from .../openjdk-6-jdk_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package openoffice. Unpacking openoffice (from .../openoffice_3.4~oneiric_amd64.deb) ... Processing triggers for doc-base ... Processing 2 added doc-base files... Processing triggers for man-db ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for fontconfig ... Processing triggers for gnome-icon-theme ... Processing triggers for shared-mime-info ... Setting up tzdata-java (2012e-0ubuntu2) ... Setting up java-common (0.43ubuntu3) ... Setting up libgif4:amd64 (4.1.6-9.1ubuntu1) ... Setting up xorg-sgml-doctools (1:1.10-1) ... Setting up x11proto-core-dev (7.0.23-1) ... Setting up libice-dev:amd64 (2:1.0.8-2) ... Setting up libpthread-stubs0:amd64 (0.3-3) ... Setting up libpthread-stubs0-dev:amd64 (0.3-3) ... Setting up libsm-dev:amd64 (2:1.2.1-2) ... Setting up libxau-dev:amd64 (1:1.0.7-1) ... Setting up libxdmcp-dev:amd64 (1:1.1.1-1) ... Setting up x11proto-input-dev (2.2-1) ... Setting up x11proto-kb-dev (1.0.6-2) ... Setting up xtrans-dev (1.2.7-1) ... Setting up libxcb1-dev:amd64 (1.8.1-1ubuntu1) ... Setting up libx11-dev:amd64 (2:1.5.0-1) ... Setting up libx11-doc (2:1.5.0-1) ... Setting up libxt-dev:amd64 (1:1.1.3-1) ... Setting up ttf-dejavu-extra (2.33-2ubuntu1) ... Setting up icedtea-netx-common (1.3-1ubuntu1.1) ... Setting up openjdk-6-jre-lib (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up openjdk-6-jre-headless:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java to provide /usr/bin/java (java) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/orbd to provide /usr/bin/orbd (orbd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/servertool to provide /usr/bin/servertool (servertool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode Setting up ca-certificates-java (20120721) ... Adding debian:Deutsche_Telekom_Root_CA_2.pem Adding debian:Comodo_Trusted_Services_root.pem Adding debian:Certum_Trusted_Network_CA.pem Adding debian:thawte_Primary_Root_CA_-_G2.pem Adding debian:UTN_USERFirst_Hardware_Root_CA.pem Adding debian:AddTrust_Low-Value_Services_Root.pem Adding debian:Microsec_e-Szigno_Root_CA.pem Adding debian:SwissSign_Silver_CA_-_G2.pem Adding debian:ComSign_Secured_CA.pem Adding debian:Buypass_Class_2_CA_1.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certum_Root_CA.pem Adding debian:AddTrust_External_Root.pem Adding debian:Chambers_of_Commerce_Root_-_2008.pem Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Visa_eCommerce_Root.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_3.pem Adding debian:AC_Raíz_Certicámara_S.A..pem Adding debian:NetLock_Arany_=Class_Gold=_Fotanúsítvány.pem Adding debian:Taiwan_GRCA.pem Adding debian:Camerfirma_Chambers_of_Commerce_Root.pem Adding debian:Juur-SK.pem Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem Adding debian:XRamp_Global_CA_Root.pem Adding debian:Security_Communication_RootCA2.pem Adding debian:AddTrust_Qualified_Certificates_Root.pem Adding debian:NetLock_Qualified_=Class_QA=_Root.pem Adding debian:TC_TrustCenter_Class_2_CA_II.pem Adding debian:DST_ACES_CA_X6.pem Adding debian:thawte_Primary_Root_CA.pem Adding debian:thawte_Primary_Root_CA_-_G3.pem Adding debian:GeoTrust_Universal_CA_2.pem Adding debian:ACEDICOM_Root.pem Adding debian:Security_Communication_EV_RootCA1.pem Adding debian:America_Online_Root_Certification_Authority_2.pem Adding debian:TC_TrustCenter_Universal_CA_I.pem Adding debian:SwissSign_Platinum_CA_-_G2.pem Adding debian:Global_Chambersign_Root_-_2008.pem Adding debian:SecureSign_RootCA11.pem Adding debian:GeoTrust_Global_CA_2.pem Adding debian:Buypass_Class_3_CA_1.pem Adding debian:Baltimore_CyberTrust_Root.pem Adding debian:UbuntuOne-Go_Daddy_Class_2_CA.pem Adding debian:Equifax_Secure_eBusiness_CA_1.pem Adding debian:SwissSign_Gold_CA_-_G2.pem Adding debian:AffirmTrust_Premium_ECC.pem Adding 
debian:TC_TrustCenter_Universal_CA_III.pem Adding debian:ca.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.pem Adding debian:NetLock_Express_=Class_C=_Root.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem Adding debian:Firmaprofesional_Root_CA.pem Adding debian:Comodo_Secure_Services_root.pem Adding debian:cacert.org.pem Adding debian:GeoTrust_Primary_Certification_Authority.pem Adding debian:RSA_Security_2048_v3.pem Adding debian:Staat_der_Nederlanden_Root_CA.pem Adding debian:Cybertrust_Global_Root.pem Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem Adding debian:TDC_OCES_Root_CA.pem Adding debian:A-Trust-nQual-03.pem Adding debian:Equifax_Secure_CA.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_1.pem Adding debian:GeoTrust_Global_CA.pem Adding debian:Starfield_Class_2_CA.pem Adding debian:ApplicationCA_-_Japanese_Government.pem Adding debian:Swisscom_Root_CA_1.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Camerfirma_Global_Chambersign_Root.pem Adding debian:QuoVadis_Root_CA_3.pem Adding debian:QuoVadis_Root_CA.pem Adding debian:Comodo_AAA_Services_root.pem Adding debian:ComSign_CA.pem Adding debian:AddTrust_Public_Services_Root.pem Adding debian:DigiCert_Assured_ID_Root_CA.pem Adding debian:UTN_DATACorp_SGC_Root_CA.pem Adding debian:CA_Disig.pem Adding debian:E-Guven_Kok_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:GlobalSign_Root_CA_-_R3.pem Adding debian:QuoVadis_Root_CA_2.pem Adding debian:Entrust_Root_Certification_Authority.pem Adding debian:GTE_CyberTrust_Global_Root.pem Adding debian:ValiCert_Class_1_VA.pem Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G2.pem Adding debian:spi-ca-2003.pem Adding debian:America_Online_Root_Certification_Authority_1.pem Adding debian:AffirmTrust_Premium.pem Adding debian:Sonera_Class_1_Root_CA.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certplus_Class_2_Primary_CA.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_2.pem Adding debian:Network_Solutions_Certificate_Authority.pem Adding debian:Go_Daddy_Class_2_CA.pem Adding debian:StartCom_Certification_Authority.pem Adding debian:Hongkong_Post_Root_CA_1.pem Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem Adding debian:Thawte_Premium_Server_CA.pem Adding debian:EBG_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_1.pem Adding debian:NetLock_Business_=Class_B=_Root.pem Adding debian:Microsec_e-Szigno_Root_CA_2009.pem Adding debian:DigiCert_Global_Root_CA.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G4.pem Adding debian:IGC_A.pem Adding debian:TWCA_Root_Certification_Authority.pem Adding debian:S-TRUST_Authentication_and_Encryption_Root_CA_2005_PN.pem Adding debian:VeriSign_Universal_Root_Certification_Authority.pem Adding debian:DST_Root_CA_X3.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority.pem Adding debian:Root_CA_Generalitat_Valenciana.pem Adding debian:UTN_USERFirst_Email_Root_CA.pem Adding debian:ssl-cert-snakeoil.pem Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G3.pem Adding debian:Certinomis_-_Autorité_Racine.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority.pem 
Adding debian:TDC_Internet_Root_CA.pem Adding debian:UbuntuOne-ValiCert_Class_2_VA.pem Adding debian:AffirmTrust_Commercial.pem Adding debian:spi-cacert-2008.pem Adding debian:Izenpe.com.pem Adding debian:EC-ACC.pem Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem Adding debian:COMODO_ECC_Certification_Authority.pem Adding debian:CNNIC_ROOT.pem Adding debian:NetLock_Notary_=Class_A=_Root.pem Adding debian:Equifax_Secure_eBusiness_CA_2.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Secure_Global_CA.pem Adding debian:UbuntuOne-Go_Daddy_CA.pem Adding debian:GeoTrust_Universal_CA.pem Adding debian:Wells_Fargo_Root_CA.pem Adding debian:Thawte_Server_CA.pem Adding debian:WellsSecure_Public_Root_Certificate_Authority.pem Adding debian:TC_TrustCenter_Class_3_CA_II.pem Adding debian:COMODO_Certification_Authority.pem Adding debian:Equifax_Secure_Global_eBusiness_CA.pem Adding debian:Security_Communication_Root_CA.pem Adding debian:GlobalSign_Root_CA_-_R2.pem Adding debian:TÜBITAK_UEKAE_Kök_Sertifika_Hizmet_Saglayicisi_-_Sürüm_3.pem Adding debian:Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.pem Adding debian:certSIGN_ROOT_CA.pem Adding debian:RSA_Root_Certificate_1.pem Adding debian:ePKI_Root_Certification_Authority.pem Adding debian:Entrust.net_Secure_Server_CA.pem Adding debian:OISTE_WISeKey_Global_Root_GA_CA.pem Adding debian:Sonera_Class_2_Root_CA.pem Adding debian:Certigna.pem Adding debian:AffirmTrust_Networking.pem Adding debian:ValiCert_Class_2_VA.pem Adding debian:GlobalSign_Root_CA.pem Adding debian:Staat_der_Nederlanden_Root_CA_-_G2.pem Adding debian:SecureTrust_CA.pem done. Setting up openjdk-6-jre:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/policytool to provide /usr/bin/policytool (policytool) in auto mode Setting up libatk-wrapper-java (0.30.4-0ubuntu4) ... Setting up icedtea-6-jre-cacao:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-6-jre-jamvm:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-netx:amd64 (1.3-1ubuntu1.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode Setting up openjdk-6-jdk:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/appletviewer to provide /usr/bin/appletviewer (appletviewer) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/extcheck to provide /usr/bin/extcheck (extcheck) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/idlj to provide /usr/bin/idlj (idlj) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jar to provide /usr/bin/jar (jar) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jarsigner to provide /usr/bin/jarsigner (jarsigner) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javadoc to provide /usr/bin/javadoc (javadoc) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javah to provide /usr/bin/javah (javah) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javap to provide /usr/bin/javap (javap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jconsole to provide /usr/bin/jconsole (jconsole) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jdb to provide /usr/bin/jdb (jdb) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jhat to provide /usr/bin/jhat (jhat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jinfo to provide /usr/bin/jinfo (jinfo) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jmap to provide /usr/bin/jmap (jmap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jps to provide /usr/bin/jps (jps) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jrunscript to provide /usr/bin/jrunscript (jrunscript) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jsadebugd to provide /usr/bin/jsadebugd (jsadebugd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstack to provide /usr/bin/jstack (jstack) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstat to provide /usr/bin/jstat (jstat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstatd to provide /usr/bin/jstatd (jstatd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/native2ascii to provide /usr/bin/native2ascii (native2ascii) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/rmic to provide /usr/bin/rmic (rmic) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/schemagen to provide /usr/bin/schemagen (schemagen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/serialver to provide /usr/bin/serialver (serialver) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsgen to provide /usr/bin/wsgen (wsgen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsimport to provide /usr/bin/wsimport (wsimport) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/xjc to provide /usr/bin/xjc (xjc) in auto mode Setting up openoffice (3.4~oneiric) ... Setting up libatk-wrapper-java-jni:amd64 (0.30.4-0ubuntu4) ... Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place philip@X301-2:~$ sudo apt-get install libxrandr2:i386 libxinerama1:i386 Reading package lists... Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: linux-headers-3.5.0-17 Use 'apt-get autoremove' to remove it. The following extra packages will be installed: gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxrender1:i386 Suggested packages: glibc-doc:i386 locales:i386 The following NEW packages will be installed gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxinerama1:i386 libxrandr2:i386 libxrender1:i386 0 upgraded, 11 newly installed, 0 to remove and 93 not upgraded. Need to get 4,936 kB of archives. After this operation, 11.9 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal/main gcc-4.7-base i386 4.7.2-2ubuntu1 [15.5 kB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libc6 i386 2.15-0ubuntu20 [3,940 kB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgcc1 i386 1:4.7.2-2ubuntu1 [53.5 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau6 i386 1:1.0.7-1 [8,582 B] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp6 i386 1:1.1.1-1 [13.1 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1 i386 1.8.1-1ubuntu1 [48.7 kB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-6 i386 2:1.5.0-1 [776 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxext6 i386 2:1.3.1-2 [33.9 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxinerama1 i386 2:1.1.2-1 [8,118 B] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrender1 i386 1:0.9.7-1 [20.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrandr2 i386 2:1.4.0-1 [18.8 kB] Fetched 4,936 kB in 30s (161 kB/s) Preconfiguring packages ... Selecting previously unselected package gcc-4.7-base:i386. (Reading database ... 146005 files and directories currently installed.) Unpacking gcc-4.7-base:i386 (from .../gcc-4.7-base_4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libc6:i386. Unpacking libc6:i386 (from .../libc6_2.15-0ubuntu20_i386.deb) ... Selecting previously unselected package libgcc1:i386. Unpacking libgcc1:i386 (from .../libgcc1_1%3a4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libxau6:i386. Unpacking libxau6:i386 (from .../libxau6_1%3a1.0.7-1_i386.deb) ... Selecting previously unselected package libxdmcp6:i386. Unpacking libxdmcp6:i386 (from .../libxdmcp6_1%3a1.1.1-1_i386.deb) ... Selecting previously unselected package libxcb1:i386. Unpacking libxcb1:i386 (from .../libxcb1_1.8.1-1ubuntu1_i386.deb) ... Selecting previously unselected package libx11-6:i386. Unpacking libx11-6:i386 (from .../libx11-6_2%3a1.5.0-1_i386.deb) ... Selecting previously unselected package libxext6:i386. Unpacking libxext6:i386 (from .../libxext6_2%3a1.3.1-2_i386.deb) ... Selecting previously unselected package libxinerama1:i386. Unpacking libxinerama1:i386 (from .../libxinerama1_2%3a1.1.2-1_i386.deb) ... Selecting previously unselected package libxrender1:i386. Unpacking libxrender1:i386 (from .../libxrender1_1%3a0.9.7-1_i386.deb) ... Selecting previously unselected package libxrandr2:i386. Unpacking libxrandr2:i386 (from .../libxrandr2_2%3a1.4.0-1_i386.deb) ... 
Setting up gcc-4.7-base:i386 (4.7.2-2ubuntu1) ... Setting up libc6:i386 (2.15-0ubuntu20) ... Setting up libgcc1:i386 (1:4.7.2-2ubuntu1) ... Setting up libxau6:i386 (1:1.0.7-1) ... Setting up libxdmcp6:i386 (1:1.1.1-1) ... Setting up libxcb1:i386 (1.8.1-1ubuntu1) ... Setting up libx11-6:i386 (2:1.5.0-1) ... Setting up libxext6:i386 (2:1.3.1-2) ... Setting up libxinerama1:i386 (2:1.1.2-1) ... Setting up libxrender1:i386 (1:0.9.7-1) ... Setting up libxrandr2:i386 (2:1.4.0-1) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place $ sudo chmod a+rx /opt/openoffice.org3/share/uno_packages/cache/uno_packages chmod: cannot access `/opt/openoffice.org3/share/uno_packages/cache/uno_packages': No such file or directory

    Read the article

  • Netbeans doesn't start, Java fatal error, unless sudo

    - by elect
    Fresh 13.10 64b Openjdk 6 is there, I just installed Netbeans 7.01 from the repo, but it doesn't work, I open then a console elect@elect-desktop:~$ netbeans # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007faebdf79325, pid=5251, tid=140388628424448 # # JRE version: 6.0_27-b27 # Java VM: OpenJDK 64-Bit Server VM (20.0-b12 mixed mode linux-amd64 compressed oops) # Derivative: IcedTea6 1.12.6 # Distribution: Ubuntu Saucy Salamander (development branch), package 6b27-1.12.6-1ubuntu2 # Problematic frame: # C [libgobject-2.0.so.0+0x14325] g_cclosure_marshal_BOOLEAN__BOXED_BOXEDv+0x985 # # An error report file with more information is saved as: # /home/elect/hs_err_pid5251.log [thread 140386948781824 also had an error] # # If you would like to submit a bug report, please include # instructions how to reproduce the bug and visit: # https://bugs.launchpad.net/ubuntu/+source/openjdk-6/ # /usr/share/netbeans/7.0.1/bin/../platform/lib/nbexec: line 548: 5251 Aborted (core dumped) "/usr/lib/jvm/java-6-openjdk-amd64/bin/java" -Djdk.home="/usr/lib/jvm/java-6-openjdk-amd64" -Djava.library.path=/usr/lib/jni -classpath "/usr/share/netbeans/7.0.1/platform/lib/boot.jar:/usr/share/netbeans/7.0.1/platform/lib/org-openide-modules.jar:/usr/share/netbeans/7.0.1/platform/lib/org-openide-util.jar:/usr/share/netbeans/7.0.1/platform/lib/org-openide-util-lookup.jar:/usr/lib/jvm/java-6-openjdk-amd64/lib/dt.jar:/usr/lib/jvm/java-6-openjdk-amd64/lib/tools.jar" -Dnetbeans.system_http_proxy="DIRECT" -Dnetbeans.system_http_non_proxy_hosts="" -Dnetbeans.dirs="/usr/share/netbeans/7.0.1/nb:/usr/share/netbeans/7.0.1/bin/../ergonomics:/usr/share/netbeans/7.0.1/ide:/usr/share/netbeans/7.0.1/java:/usr/share/netbeans/7.0.1/bin/../xml:/usr/share/netbeans/7.0.1/apisupport:/usr/share/netbeans/7.0.1/bin/../webcommon:/usr/share/netbeans/7.0.1/bin/../websvccommon:/usr/share/netbeans/7.0.1/bin/../enterprise:/usr/share/netbeans/7.0.1/bin/../mobility:/usr/share/netbeans/7.0.1/bin/../profiler:/usr/share/netbeans/7.0.1/bin/../ruby:/usr/share/netbeans/7.0.1/bin/../python:/usr/share/netbeans/7.0.1/bin/../php:/usr/share/netbeans/7.0.1/bin/../visualweb:/usr/share/netbeans/7.0.1/bin/../soa:/usr/share/netbeans/7.0.1/bin/../identity:/usr/share/netbeans/7.0.1/bin/../uml:/usr/share/netbeans/7.0.1/harness:/usr/share/netbeans/7.0.1/bin/../cnd:/usr/share/netbeans/7.0.1/bin/../dlight:/usr/share/netbeans/7.0.1/bin/../groovy:/usr/share/netbeans/7.0.1/bin/../extra:/usr/share/netbeans/7.0.1/bin/../javafx:/usr/share/netbeans/7.0.1/bin/../javacard:" -Dnetbeans.home="/usr/share/netbeans/7.0.1/platform" '-Dnetbeans.importclass=org.netbeans.upgrade.AutoUpgrade' '-Dnetbeans.accept_license_class=org.netbeans.license.AcceptLicense' '-XX:MaxPermSize=384m' '-Xmx768m' '-client' '-Xss2m' '-Xms32m' '-XX:PermSize=32m' '-Dapple.laf.useScreenMenuBar=true' '-Dapple.awt.graphics.UseQuartz=true' '-Dsun.java2d.noddraw=true' '-Dsun.java2d.pmoffscreen=false' -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="/home/elect/.netbeans/7.0/var/log/heapdump.hprof" org.netbeans.Main --userdir "/home/elect/.netbeans/7.0" "--branding" "nb" 0<&0 Looking around, the second answer, here Vigintas Labakojis, points out something regarding permission, I just try sudo netbeans, it works.. Then I look for the ~/.cache/netbeans/ I dont have, I have instead ~/.netbeans/ Then I run his commands on those folder, it doesn't work.. It must be something else, do you have any idea? In any case, my log /home/elect/hs_err_pid5251.log is here

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from it and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time), and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's no longer a matter of 'the slowness of the application is caused by the O/R mapper'. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), that 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step or make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. If I solely look at the Linq query and the code consuming the resultset of 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": a part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you optimize that part away completely. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always involve making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page takes several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.
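    Before reaching for a profiler, it can help to bracket the suspected task with a rough timer, so you know you've isolated one code path with a clear begin and end; a minimal C# sketch, where FetchOrdersForCustomer is a hypothetical stand-in for whatever task is under suspicion:

    using System;
    using System.Diagnostics;

    static class IsolationTimer
    {
        static void Main()
        {
            // Bracket exactly one task, so the measurement maps
            // onto a single code path from begin to end.
            var sw = Stopwatch.StartNew();
            FetchOrdersForCustomer(42); // hypothetical task under suspicion
            sw.Stop();
            Console.WriteLine("Task took {0} ms", sw.ElapsedMilliseconds);
        }

        static void FetchOrdersForCustomer(int customerId)
        {
            // stand-in for the real data-access code being isolated
        }
    }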
    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, web service, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a share of the total time it takes to complete a task, e.g. loading a web page with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze it and track the path it follows through your application. Verify what the code does along the way and whether it's correct. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet; we're just analyzing. If your analysis reveals a big architectural mistake, it's perhaps a good idea to rethink the architecture at this point. For the rest, analyzing means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good way to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and overview of what really happens: humans aren't good code interpreters, computers are. We therefore need tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. Two different kinds of tools are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code; they thus reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter. As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most of the data is produced by code in the area under investigation. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and at the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and database parts contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor offers one. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to transport across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other activity behind the scenes as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now first have to look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just one query executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

    Interpreting means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected, and there is room for improvement in the area which contributes the most time, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation changes.

    Fix

    After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
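    Since the fix step above mentions caching as a memory-for-speed compromise, here is a minimal hedged C# sketch of that idea; the Country type, the LoadCountry stand-in and the absence of any invalidation policy are all simplifications of what a real application would need:

    using System.Collections.Concurrent;

    class Country
    {
        public string Code { get; set; }
    }

    class CountryCache
    {
        // Trades memory for speed: each country is fetched at most once
        // per process lifetime instead of once per request.
        private readonly ConcurrentDictionary<string, Country> _cache =
            new ConcurrentDictionary<string, Country>();

        public Country Get(string code)
        {
            return _cache.GetOrAdd(code, c => LoadCountry(c));
        }

        private Country LoadCountry(string code)
        {
            // stand-in for the real O/R mapper fetch
            return new Country { Code = code };
        }
    }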
    Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly against the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

    - SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers together with the orders (see the sketch after this list). SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

    - Prefetch paths using many path nodes, or sorting, or limiting: an eager loading problem. Prefetch paths can help with performance, but as one query is executed per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client into the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    - Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

    - Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch server-side paging on in the datasourcecontrol used. In this case, paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do the DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the data-reader, so it won't fetch all rows even if the query suggests it does.

    - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    - Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or GetScalar functionality of LLBLGen Pro, so the data is consumed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse it in-memory to calculate a value.

    - Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T>, not on the IQueryable<T> (see the sketch after this list). Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

    - Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and can thus be removed from the cache.

    - Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.
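    As referenced in the list above, here is a minimal hedged C# sketch of the SELECT N+1 item and the .ToList() item side by side. It uses an Entity Framework-style DbContext and Include() purely for illustration (the article's own examples use LLBLGen Pro prefetch paths); the MyContext, Order and Customer types are hypothetical:

    using System;
    using System.Data.Entity; // EF-style Include(), for illustration only
    using System.Linq;

    class Customer
    {
        public int Id { get; set; }
        public string CompanyName { get; set; }
        public string Country { get; set; }
    }

    class Order
    {
        public int Id { get; set; }
        public virtual Customer Customer { get; set; } // virtual enables lazy loading
    }

    class MyContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
        public DbSet<Customer> Customers { get; set; }
    }

    static class CommonProblemsDemo
    {
        static void Run(MyContext ctx)
        {
            // SELECT N+1: one query for the orders, plus one lazy-load
            // query per order when Customer.CompanyName is touched.
            foreach (var order in ctx.Orders)
                Console.WriteLine(order.Customer.CompanyName);

            // Eager-loaded alternative: customers are fetched together
            // with the orders in a single joined query.
            foreach (var order in ctx.Orders.Include(o => o.Customer))
                Console.WriteLine(order.Customer.CompanyName);

            // .ToList() pitfall: the filter runs in memory over all
            // customers, because the query is built on IEnumerable<T>.
            var bad = from c in ctx.Customers.ToList()
                      where c.Country == "Norway"
                      select c;

            // Without .ToList() the filter stays on IQueryable<T> and is
            // translated to SQL, so only matching rows cross the network.
            var good = from c in ctx.Customers
                       where c.Country == "Norway"
                       select c;
        }
    }

    An O/R mapper profiler would show the first loop as a burst of near-identical SELECTs, which is exactly the signature described above.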
    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key to success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

    Read the article

< Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >