Search Results

Search found 12376 results on 496 pages for 'active pattern'.


  • What is hogging my connection?

    - by SF.
    At times it seems like dozens, if not hundreds, of root-owned HTTP connections spring up. This is not much of a problem on LAN or WLAN, as each of them seems to transfer very little, but if I use a GPRS link, my ping times go into minutes (seriously, 80000 ms is not infrequent!) and all connections grind to a halt until these end. This usually lasts some 15 minutes and ends about when I start troubleshooting it for real. I've managed to capture a fragment of NetHogs output:

        NetHogs version 0.8.0
        PID    USER  PROGRAM                                   DEV   SENT   RECEIVED
        ?      root  37.209.147.180:59854-141.101.114.59:80          0.013  0.000 KB/sec
        ?      root  37.209.147.180:59853-141.101.114.59:80          0.000  0.000 KB/sec
        ?      root  37.209.147.180:52804-173.194.70.95:80           0.000  0.000 KB/sec
        1954   bw    /home/bw/.dropbox-dist/dropbox            ppp0  0.000  0.000 KB/sec
        ?      root  37.209.147.180:59851-141.101.114.59:80          0.000  0.000 KB/sec
        ?      root  37.209.147.180:59850-141.101.114.59:80          0.000  0.000 KB/sec
        ?      root  37.209.147.180:52801-173.194.70.95:80           0.000  0.000 KB/sec
        13301  bw    /usr/lib/firefox/firefox                  ppp0  0.000  0.000 KB/sec
        ?      root  unknown TCP                                     0.000  0.000 KB/sec

    Unfortunately, it doesn't display the owning process for these connections. Does anyone recognize these addresses, or can anyone suggest how to troubleshoot this further or disable whatever is doing it? Is it some automatic update or something like that?

    EDIT: per request, netstat -n – numeric output, for the obvious reason that plain netstat won't ever finish, as its DNS requests are hogged just the same.

        netstat -n
        Active Internet connections (w/o servers)
        Proto Recv-Q Send-Q Local Address         Foreign Address        State
        tcp   0    1    93.154.166.62:51314      198.252.206.16:80      FIN_WAIT1
        tcp   0    1    37.209.147.180:44098     198.252.206.16:80      FIN_WAIT1
        tcp   0    1    37.209.147.180:59855     141.101.114.59:80      FIN_WAIT1
        tcp   1    0    192.168.43.224:38237     213.189.45.39:443      CLOSE_WAIT
        tcp   1    0    93.154.146.186:35167     75.101.152.29:80       CLOSE_WAIT
        tcp   1    0    192.168.43.224:32939     199.15.160.100:80      CLOSE_WAIT
        tcp   1    0    192.168.43.224:55619     63.245.217.207:443     CLOSE_WAIT
        tcp   1    0    93.154.146.186:60210     75.101.152.29:443      CLOSE_WAIT
        tcp   1    0    192.168.43.224:32944     199.15.160.100:80      CLOSE_WAIT
        tcp   0    1    37.209.147.180:52804     173.194.70.95:80       FIN_WAIT1
        tcp   1    0    93.154.146.186:46606     23.21.151.181:80       CLOSE_WAIT
        tcp   1    0    93.154.146.186:52619     107.22.246.76:80       CLOSE_WAIT
        tcp   415  0    93.154.146.186:36156     82.112.106.104:80      CLOSE_WAIT
        tcp   1    0    93.154.146.186:50352     107.22.246.76:443      CLOSE_WAIT
        tcp   1    0    192.168.43.224:55000     213.189.45.44:443      CLOSE_WAIT
        tcp   0    1    37.209.147.180:59853     141.101.114.59:80      FIN_WAIT1
        tcp   1    0    192.168.43.224:32937     199.15.160.100:80      CLOSE_WAIT
        tcp   1    0    192.168.43.224:56055     93.184.221.40:80       CLOSE_WAIT
        tcp   415  0    93.154.146.186:36155     82.112.106.104:80      CLOSE_WAIT
        tcp   0    1    37.209.147.180:44097     198.252.206.16:80      FIN_WAIT1
        tcp   1    0    93.154.146.186:35166     75.101.152.29:80       CLOSE_WAIT
        tcp   1    0    192.168.43.224:32943     199.15.160.100:80      CLOSE_WAIT
        tcp   1    0    93.154.146.186:46607     23.21.151.181:80       CLOSE_WAIT
        tcp   1    0    93.154.146.186:36422     23.21.151.181:443      CLOSE_WAIT
        tcp   1    0    192.168.43.224:36081     93.184.220.148:80      CLOSE_WAIT
        tcp   1    0    192.168.43.224:44462     213.189.45.29:443      CLOSE_WAIT
        tcp   1    0    192.168.43.224:32938     199.15.160.100:80      CLOSE_WAIT
        tcp   1    0    93.154.146.186:36419     23.21.151.181:443      CLOSE_WAIT
        tcp   0    497  93.154.166.62:51313      198.252.206.16:80      FIN_WAIT1
        tcp   0    1    37.209.147.180:59851     141.101.114.59:80      FIN_WAIT1
        tcp   0    1    37.209.147.180:44095     198.252.206.16:80      FIN_WAIT1
        tcp   1    0    93.154.146.186:46611     23.21.151.181:80       CLOSE_WAIT
        tcp   1    0    192.168.43.224:38236     213.189.45.39:443      CLOSE_WAIT
        tcp   0    171  37.209.147.180:45341     173.194.113.146:443    ESTABLISHED
        tcp   0    1    37.209.147.180:52801     173.194.70.95:80       FIN_WAIT1
        tcp   1    0    192.168.43.224:36080     93.184.220.148:80      CLOSE_WAIT
        tcp   0    1    37.209.147.180:59856     141.101.114.59:80      FIN_WAIT1
        tcp   0    1    37.209.147.180:44096     198.252.206.16:80      FIN_WAIT1
        tcp   0    1    93.154.166.62:57471      108.160.162.49:80      FIN_WAIT1
        tcp   0    1    37.209.147.180:59854     141.101.114.59:80      FIN_WAIT1
        tcp   0    171  37.209.147.180:45340     173.194.113.146:443    ESTABLISHED
        tcp   0    168  37.209.147.180:45334     173.194.113.146:443    FIN_WAIT1
        tcp   1    0    93.154.146.186:46609     23.21.151.181:80       CLOSE_WAIT
        tcp   0    1248 93.154.166.62:58270      64.251.23.59:443       FIN_WAIT1
        tcp   0    1    37.209.147.180:59850     141.101.114.59:80      FIN_WAIT1
        tcp   1    0    93.154.146.186:35181     75.101.152.29:80       CLOSE_WAIT
        tcp   232  0    93.154.172.168:46384     198.252.206.25:80      ESTABLISHED
        tcp   1    0    93.154.146.186:52618     107.22.246.76:80       CLOSE_WAIT
        tcp   1    0    93.154.172.168:36298     173.194.69.95:443      CLOSE_WAIT
        tcp   1    0    93.154.146.186:60209     75.101.152.29:443      CLOSE_WAIT
        tcp   0    168  37.209.147.180:45335     173.194.113.146:443    FIN_WAIT1
        tcp   415  0    93.154.146.186:36157     82.112.106.104:80      CLOSE_WAIT
        tcp   1    0    192.168.43.224:36082     93.184.220.148:80      CLOSE_WAIT
        tcp   1    0    192.168.43.224:32942     199.15.160.100:80      CLOSE_WAIT
        tcp   1    0    93.154.146.186:50350     107.22.246.76:443      CLOSE_WAIT
        tcp   1    0    192.168.43.224:32941     199.15.160.100:80      CLOSE_WAIT
        tcp   0    534  37.209.147.180:44089     198.252.206.16:80      FIN_WAIT1
        tcp   1    0    93.154.146.186:46608     23.21.151.181:80       CLOSE_WAIT
        tcp   1    0    93.154.146.186:46612     23.21.151.181:80       CLOSE_WAIT
        udp   0    0    37.209.147.180:49057     193.41.112.14:53       ESTABLISHED
        udp   0    0    37.209.147.180:51631     193.41.112.18:53       ESTABLISHED
        udp   0    0    37.209.147.180:34827     193.41.112.18:53       ESTABLISHED
        udp   0    0    37.209.147.180:35908     193.41.112.14:53       ESTABLISHED
        udp   0    0    37.209.147.180:44106     193.41.112.14:53       ESTABLISHED
        udp   0    0    37.209.147.180:42184     193.41.112.14:53       ESTABLISHED
        udp   0    0    37.209.147.180:54485     193.41.112.14:53       ESTABLISHED
        udp   0    0    37.209.147.180:42216     193.41.112.18:53       ESTABLISHED
        udp   0    0    37.209.147.180:51961     193.41.112.14:53       ESTABLISHED
        udp   0    0    37.209.147.180:48412     193.41.112.14:53       ESTABLISHED

    The interesting lines from ping got lost, but the summary over the past few hours is:

        --- 8.8.8.8 ping statistics ---
        107459 packets transmitted, 104376 received, +22 duplicates, 2% packet loss, time 195427362ms
        rtt min/avg/max/mdev = 24.822/528.132/90538.257/2519.263 ms, pipe 90

    EDIT: Per request: it happened again; a reboot didn't help, but it did clean up all the "hanging" processes.
    Currently netstat shows:

        bw@pony:/var/log$ netstat -n -t
        Active Internet connections (w/o servers)
        Proto Recv-Q Send-Q Local Address         Foreign Address        State
        tcp   0    0    93.154.188.68:42767      74.125.239.143:443     TIME_WAIT
        tcp   0    0    93.154.188.68:50270      173.194.69.189:443     ESTABLISHED
        tcp   0    0    93.154.188.68:45250      190.93.244.58:80       TIME_WAIT
        tcp   0    0    93.154.188.68:53488      173.194.32.198:80      ESTABLISHED
        tcp   0    0    93.154.188.68:53490      173.194.32.198:80      ESTABLISHED
        tcp   0    159  93.154.188.68:42741      74.125.239.143:443     LAST_ACK
        tcp   0    0    93.154.188.68:45808      198.252.206.25:80      ESTABLISHED
        tcp   0    0    93.154.188.68:52449      173.194.32.199:443     ESTABLISHED
        tcp   0    0    93.154.188.68:52600      173.194.32.199:443     TIME_WAIT
        tcp   0    0    93.154.188.68:50300      173.194.69.189:443     TIME_WAIT
        tcp   0    0    93.154.188.68:45253      190.93.244.58:80       TIME_WAIT
        tcp   0    0    93.154.188.68:46252      173.194.32.204:443     ESTABLISHED
        tcp   0    0    93.154.188.68:45246      190.93.244.58:80       ESTABLISHED
        tcp   0    0    93.154.188.68:47064      173.194.113.143:443    ESTABLISHED
        tcp   0    0    93.154.188.68:34484      173.194.69.95:443      ESTABLISHED
        tcp   0    0    93.154.188.68:45252      190.93.244.58:80       TIME_WAIT
        tcp   0    0    93.154.188.68:54290      173.194.32.202:443     ESTABLISHED
        tcp   0    0    93.154.188.68:47063      173.194.113.143:443    ESTABLISHED
        tcp   0    0    93.154.188.68:53469      173.194.32.198:80      TIME_WAIT
        tcp   0    0    93.154.188.68:45242      190.93.244.58:80       TIME_WAIT
        tcp   0    0    93.154.188.68:53468      173.194.32.198:80      ESTABLISHED
        tcp   0    0    93.154.188.68:50299      173.194.69.189:443     TIME_WAIT
        tcp   0    0    93.154.188.68:42764      74.125.239.143:443     TIME_WAIT
        tcp   0    0    93.154.188.68:45256      190.93.244.58:80       TIME_WAIT
        tcp   0    0    93.154.188.68:58047      108.160.162.105:80     ESTABLISHED
        tcp   0    0    93.154.188.68:45249      190.93.244.58:80       TIME_WAIT
        tcp   0    0    93.154.188.68:50297      173.194.69.189:443     TIME_WAIT
        tcp   0    0    93.154.188.68:53470      173.194.32.198:80      ESTABLISHED
        tcp   0    0    93.154.188.68:34100      68.232.35.121:443      ESTABLISHED
        tcp   0    0    93.154.188.68:42758      74.125.239.143:443     ESTABLISHED
        tcp   0    0    93.154.188.68:42765      74.125.239.143:443     TIME_WAIT
        tcp   0    0    93.154.188.68:39000      173.194.69.95:80       TIME_WAIT
        tcp   0    0    93.154.188.68:50296      173.194.69.189:443     TIME_WAIT
        tcp   0    0    93.154.188.68:53467      173.194.32.198:80      ESTABLISHED
        tcp   0    0    93.154.188.68:42766      74.125.239.143:443     TIME_WAIT
        tcp   0    0    93.154.188.68:45251      190.93.244.58:80       TIME_WAIT
        tcp   0    0    93.154.188.68:45248      190.93.244.58:80       TIME_WAIT
        tcp   0    0    93.154.188.68:45247      190.93.244.58:80       ESTABLISHED
        tcp   0    159  93.154.188.68:50254      173.194.69.189:443     LAST_ACK
        tcp   0    0    93.154.188.68:34483      173.194.69.95:443      ESTABLISHED

    Output of ps:

        USER    PID %CPU %MEM    VSZ    RSS TTY   STAT START TIME COMMAND
        root      1  0.8  0.0   3628   2092 ?     Ss   16:52 0:03 /sbin/init
        root      2  0.0  0.0      0      0 ?     S    16:52 0:00 [kthreadd]
        root      3  0.1  0.0      0      0 ?     S    16:52 0:00 [ksoftirqd/0]
        root      4  0.1  0.0      0      0 ?     S    16:52 0:00 [kworker/0:0]
        root      6  0.0  0.0      0      0 ?     S    16:52 0:00 [migration/0]
        root      7  0.0  0.0      0      0 ?     S    16:52 0:00 [watchdog/0]
        root      8  0.0  0.0      0      0 ?     S    16:52 0:00 [migration/1]
        root     10  0.1  0.0      0      0 ?     S    16:52 0:00 [ksoftirqd/1]
        root     11  0.0  0.0      0      0 ?     S    16:52 0:00 [watchdog/1]
        root     12  0.0  0.0      0      0 ?     S    16:52 0:00 [migration/2]
        root     14  0.1  0.0      0      0 ?     S    16:52 0:00 [ksoftirqd/2]
        root     15  0.0  0.0      0      0 ?     S    16:52 0:00 [watchdog/2]
        root     16  0.0  0.0      0      0 ?     S    16:52 0:00 [migration/3]
        root     17  0.0  0.0      0      0 ?     S    16:52 0:00 [kworker/3:0]
        root     18  0.1  0.0      0      0 ?     S    16:52 0:00 [ksoftirqd/3]
        root     19  0.0  0.0      0      0 ?     S    16:52 0:00 [watchdog/3]
        root     20  0.0  0.0      0      0 ?     S<   16:52 0:00 [cpuset]
        root     21  0.0  0.0      0      0 ?     S<   16:52 0:00 [khelper]
        root     22  0.0  0.0      0      0 ?     S    16:52 0:00 [kdevtmpfs]
        root     23  0.0  0.0      0      0 ?     S<   16:52 0:00 [netns]
        root     24  0.0  0.0      0      0 ?     S    16:52 0:00 [sync_supers]
        root     25  0.0  0.0      0      0 ?     S    16:52 0:00 [bdi-default]
        root     26  0.0  0.0      0      0 ?     S<   16:52 0:00 [kintegrityd]
        root     27  0.0  0.0      0      0 ?     S<   16:52 0:00 [kblockd]
        root     28  0.0  0.0      0      0 ?     S<   16:52 0:00 [ata_sff]
        root     29  0.0  0.0      0      0 ?     S    16:52 0:00 [khubd]
        root     30  0.0  0.0      0      0 ?     S<   16:52 0:00 [md]
        root     42  0.0  0.0      0      0 ?     S    16:52 0:00 [khungtaskd]
        root     43  0.0  0.0      0      0 ?     S    16:52 0:00 [kswapd0]
        root     44  0.0  0.0      0      0 ?     SN   16:52 0:00 [ksmd]
        root     45  0.0  0.0      0      0 ?     SN   16:52 0:00 [khugepaged]
        root     46  0.0  0.0      0      0 ?     S    16:52 0:00 [fsnotify_mark]
        root     47  0.0  0.0      0      0 ?     S    16:52 0:00 [ecryptfs-kthrea]
        root     48  0.0  0.0      0      0 ?     S<   16:52 0:00 [crypto]
        root     59  0.0  0.0      0      0 ?     S<   16:52 0:00 [kthrotld]
        root     70  0.1  0.0      0      0 ?     S    16:52 0:00 [kworker/2:1]
        root     71  0.0  0.0      0      0 ?     S    16:52 0:00 [scsi_eh_0]
        root     72  0.0  0.0      0      0 ?     S    16:52 0:00 [scsi_eh_1]
        root     73  0.0  0.0      0      0 ?     S    16:52 0:00 [scsi_eh_2]
        root     74  0.0  0.0      0      0 ?     S    16:52 0:00 [scsi_eh_3]
        root     75  0.0  0.0      0      0 ?     S    16:52 0:00 [kworker/u:2]
        root     76  0.0  0.0      0      0 ?     S    16:52 0:00 [kworker/u:3]
        root     79  0.0  0.0      0      0 ?     S    16:52 0:00 [kworker/1:1]
        root     99  0.0  0.0      0      0 ?     S<   16:52 0:00 [deferwq]
        root    100  0.0  0.0      0      0 ?     S<   16:52 0:00 [charger_manager]
        root    101  0.0  0.0      0      0 ?     S<   16:52 0:00 [devfreq_wq]
        root    102  0.1  0.0      0      0 ?     S    16:52 0:00 [kworker/2:2]
        root    106  0.0  0.0      0      0 ?     S    16:52 0:00 [scsi_eh_4]
        root    107  0.0  0.0      0      0 ?     S    16:52 0:00 [usb-storage]
        root    108  0.0  0.0      0      0 ?     S    16:52 0:00 [scsi_eh_5]
        root    109  0.0  0.0      0      0 ?     S    16:52 0:00 [usb-storage]
        root    271  0.1  0.0      0      0 ?     S    16:52 0:00 [kworker/1:2]
        root    316  0.0  0.0      0      0 ?     S    16:52 0:00 [jbd2/sda1-8]
        root    317  0.0  0.0      0      0 ?     S<   16:52 0:00 [ext4-dio-unwrit]
        root    440  0.1  0.0   2820    608 ?     S    16:52 0:00 upstart-udev-bridge --daemon
        root    478  0.0  0.0   3460   1648 ?     Ss   16:52 0:00 /sbin/udevd --daemon
        root    632  0.0  0.0   3348   1336 ?     S    16:52 0:00 /sbin/udevd --daemon
        root    633  0.0  0.0   3348   1204 ?     S    16:52 0:00 /sbin/udevd --daemon
        root    782  0.0  0.0   2816    596 ?     S    16:52 0:00 upstart-socket-bridge --daemon
        root    822  0.0  0.0   6684   2400 ?     Ss   16:52 0:00 /usr/sbin/sshd -D
        102     834  0.2  0.0   4064   1864 ?     Ss   16:52 0:01 dbus-daemon --system --fork
        root    857  0.0  0.1   7420   3380 ?     Ss   16:52 0:00 /usr/sbin/modem-manager
        root    858  0.0  0.0   4784   1636 ?     Ss   16:52 0:00 /usr/sbin/bluetoothd
        syslog  860  0.0  0.0  31068   1496 ?     Sl   16:52 0:00 rsyslogd -c5
        root    869  0.1  0.1  24280   5564 ?     Ssl  16:52 0:00 NetworkManager
        avahi   883  0.0  0.0   3448   1488 ?     S    16:52 0:00 avahi-daemon: running [pony.local]
        avahi   884  0.0  0.0   3448    436 ?     S    16:52 0:00 avahi-daemon: chroot helper
        root    885  0.0  0.0      0      0 ?     S<   16:52 0:00 [kpsmoused]
        root    892  0.0  0.1  25696   4140 ?     Sl   16:52 0:00 /usr/lib/policykit-1/polkitd --no-debug
        root    923  0.0  0.0      0      0 ?     S    16:52 0:00 [scsi_eh_6]
        root    959  0.0  0.0      0      0 ?     S<   16:52 0:00 [krfcommd]
        root    970  0.0  0.1   7536   3120 ?     Ss   16:52 0:00 /usr/sbin/cupsd -F
        colord  976  0.1  0.3  55080  10396 ?     Sl   16:52 0:00 /usr/lib/i386-linux-gnu/colord/colord
        root    979  0.0  0.0   4632    872 tty4  Ss+  16:52 0:00 /sbin/getty -8 38400 tty4
        root    987  0.0  0.0   4632    884 tty5  Ss+  16:52 0:00 /sbin/getty -8 38400 tty5
        root    994  0.0  0.0   4632    884 tty2  Ss+  16:52 0:00 /sbin/getty -8 38400 tty2
        root    995  0.0  0.0   4632    868 tty3  Ss+  16:52 0:00 /sbin/getty -8 38400 tty3
        root    998  0.0  0.0   4632    876 tty6  Ss+  16:52 0:00 /sbin/getty -8 38400 tty6
        root   1022  0.0  0.0   2176    680 ?     Ss   16:52 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
        root   1029  0.0  0.0   3632    664 ?     Ss   16:52 0:00 /usr/sbin/irqbalance
        daemon 1030  0.0  0.0   2476    120 ?     Ss   16:52 0:00 atd
        root   1031  0.0  0.0   2620    880 ?     Ss   16:52 0:00 cron
        root   1061  0.1  0.0      0      0 ?     S    16:52 0:00 [kworker/3:2]
        root   1064  0.0  1.0  34116  31072 ?     SLsl 16:52 0:00 lightdm
        root   1076 13.4  1.2 118688  37920 tty7  Ssl+ 16:52 0:55 /usr/bin/X :0 -core -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswit
        root   1085  0.0  0.0      0      0 ?     S    16:52 0:00 [rts_pstor]
        root   1087  0.0  0.0      0      0 ?     S    16:52 0:00 [rtsx-polling]
        root   1095  0.0  0.0      0      0 ?     S<   16:52 0:00 [cfg80211]
        root   1127  0.0  0.0      0      0 ?     S    16:52 0:00 [flush-8:0]
        root   1130  0.0  0.0   6136   1824 ?     Ss   16:52 0:00 /sbin/wpa_supplicant -B -P /run/sendsigs.omit.d/wpasupplicant.pid -u -s -O /va
        root   1137  0.0  0.1  24604   3164 ?     Sl   16:52 0:00 /usr/lib/accountsservice/accounts-daemon
        root   1140  0.0  0.0      0      0 ?     S<   16:52 0:00 [hd-audio0]
        root   1188  0.0  0.1  34308   3420 ?     Sl   16:52 0:00 /usr/sbin/console-kit-daemon --no-daemon
        root   1425  0.0  0.0   4632    872 tty1  Ss+  16:52 0:00 /sbin/getty -8 38400 tty1
        root   1443  0.1  0.1  29460   4664 ?     Sl   16:52 0:00 /usr/lib/upower/upowerd
        root   1579  0.0  0.1  16540   3272 ?     Sl   16:53 0:00 lightdm --session-child 12 19
        bw     1623  0.0  0.0   2232    644 ?     Ss   16:53 0:00 /bin/sh /usr/bin/startkde
        bw     1672  0.0  0.0   4092    204 ?     Ss   16:53 0:00 /usr/bin/ssh-agent /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/
        bw     1673  0.0  0.0   5492    384 ?     Ss   16:53 0:00 /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/.gnupg/gpg-agent-in
        bw     1676  0.0  0.0   3848    792 ?     S    16:53 0:00 /usr/bin/dbus-launch --exit-with-session /usr/bin/startkde
        bw     1677  0.5  0.0   5384   2180 ?     Ss   16:53 0:02 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
        root   1704  0.3  0.1  25348   3600 ?     Sl   16:53 0:01 /usr/lib/udisks/udisks-daemon
        root   1705  0.0  0.0   6620    728 ?     S    16:53 0:00 udisks-daemon: not polling any devices
        bw     1736  0.0  0.0   2008     64 ?     S    16:53 0:00 /usr/lib/kde4/libexec/start_kdeinit +kcminit_startup
        bw     1737  0.0  0.5 115200  15588 ?     Ss   16:53 0:00 kdeinit4: kdeinit4 Running...
        bw     1738  0.1  0.2 116756   8728 ?     S    16:53 0:00 kdeinit4: klauncher [kdeinit] --fd=9
        bw     1740  0.6  1.0 340524  31264 ?     Sl   16:53 0:02 kdeinit4: kded4 [kdeinit]
        bw     1742  0.0  0.0   8944   2144 ?     S    16:53 0:00 /usr/lib/i386-linux-gnu/gconf/gconfd-2
        bw     1746  0.2  0.4  92028  14688 ?     S    16:53 0:00 /usr/bin/kglobalaccel
        bw     1748  0.0  0.4  90804  13500 ?     S    16:53 0:00 /usr/bin/kwalletd
        bw     1752  0.1  0.5 103764  15152 ?     S    16:53 0:00 /usr/bin/kactivitymanagerd
        bw     1758  0.0  0.0   2144    280 ?     S    16:53 0:00 kwrapper4 ksmserver
        bw     1759  0.1  0.5 150016  16088 ?     Sl   16:53 0:00 kdeinit4: ksmserver [kdeinit]
        bw     1763  2.2  1.0 178492  32100 ?     Sl   16:53 0:08 kwin
        bw     1772  0.2  0.5 106292  16340 ?     Sl   16:53 0:00 /usr/bin/knotify4
        bw     1777  0.9  1.1 246120  32912 ?     Sl   16:53 0:03 /usr/bin/krunner
        bw     1778  6.3  2.7 389884  80216 ?     Sl   16:53 0:23 /usr/bin/plasma-desktop
        bw     1785  0.0  0.0   2844   1208 ?     S    16:53 0:00 ksysguardd
        bw     1789  0.1  0.4  82036  14176 ?     S    16:53 0:00 /usr/bin/kuiserver
        bw     1805  0.3  0.1  61560   5612 ?     Sl   16:53 0:01 /usr/bin/akonadi_control
        root   1806  0.0  0.0      0      0 ?     S    16:53 0:00 [kworker/0:2]
        bw     1808  0.1  0.2 211852   8460 ?     Sl   16:53 0:00 akonadiserver
        bw     1810  0.4  0.8 244116  25360 ?     Sl   16:53 0:01 /usr/sbin/mysqld --defaults-file=/home/bw/.local/share/akonadi/mysql.conf --da
        bw     1874  0.0  0.0  35284   2956 ?     Sl   16:53 0:00 /usr/bin/xsettings-kde
        bw     1876  0.0  0.3  68776   9488 ?     Sl   16:53 0:00 /usr/bin/nepomukserver
        bw     1884  0.4  0.9 173876  29240 ?     SNl  16:53 0:01 /usr/bin/nepomukservicestub nepomukstorage
        bw     1902  6.1  2.1 451512  63924 ?     Sl   16:53 0:21 /home/bw/.dropbox-dist/dropbox
        bw     1906  3.8  1.0 142368  32376 ?     Rl   16:53 0:13 /usr/bin/yakuake
        bw     1933  0.0  0.1  54636   4680 ?     Sl   16:53 0:00 /usr/bin/zeitgeist-datahub
        bw     1943  0.5  1.5 164836  46836 ?     Sl   16:53 0:01 python /usr/bin/printer-applet
        bw     1945  0.1  0.1  99636   5048 ?     S<l  16:53 0:00 /usr/bin/pulseaudio --start --log-target=syslog
        rtkit  1947  0.0  0.0  21336   1248 ?     SNl  16:53 0:00 /usr/lib/rtkit/rtkit-daemon
        bw     1958  0.0  0.1  44204   3792 ?     Sl   16:53 0:00 /usr/bin/zeitgeist-daemon
        bw     1972  0.0  0.0  27008   2684 ?     Sl   16:53 0:00 /usr/lib/gvfs/gvfsd
        bw     1974  0.1  0.5  90480  16660 ?     Sl   16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res
        bw     1984  0.1  0.5  90472  16636 ?     Sl   16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res
        bw     1985  0.3  0.9 148800  28304 ?     S    16:53 0:01 /usr/bin/akonadi_archivemail_agent --identifier akonadi_archivemail_agent
        bw     1992  0.1  0.5  90020  16148 ?     Sl   16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res
        bw     1993  0.1  0.5  90132  16452 ?     Sl   16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res
        bw     1994  0.1  0.5  90564  16332 ?     Sl   16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_0
        bw     1995  0.1  0.5  90676  16732 ?     Sl   16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_1
        bw     1996  0.1  0.5  90468  16800 ?     Sl   16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_maildir_resource akonadi_maildir_resou
        bw     1999  0.2  0.6  99324  19276 ?     S    16:53 0:00 /usr/bin/akonadi_maildispatcher_agent --identifier akonadi_maildispatcher_agen
        bw     2006  0.3  0.9 148808  28332 ?     S    16:53 0:01 /usr/bin/akonadi_mailfilter_agent --identifier akonadi_mailfilter_agent
        bw     2017  0.0  0.1  50256   4716 ?     Sl   16:53 0:00 /usr/lib/zeitgeist/zeitgeist-fts
        bw     2024  0.2  0.6 103632  18376 ?     Sl   16:53 0:00 /usr/bin/akonadi_nepomuk_feeder --identifier akonadi_nepomuk_feeder
        bw     2043  0.0  0.0   4484    280 ?     S    16:53 0:00 /bin/cat
        bw     2101  0.2  0.7 113600  22396 ?     Sl   16:53 0:00 /usr/lib/kde4/libexec/polkit-kde-authentication-agent-1
        bw     2105  0.2  0.7 114196  22072 ?     Sl   16:53 0:00 /usr/bin/nepomukcontroller
        bw     2156  0.3  1.0 333188  31244 ?     Sl   16:54 0:01 /usr/bin/kmix
        bw     2167  0.0  0.0   6548   2724 pts/2 Ss   16:54 0:00 /bin/bash
        bw     2177  0.2  0.7 113496  22960 ?     Sl   16:54 0:00 /usr/bin/klipper
        bw     2394  3.5  1.2  52932  35596 ?     SNl  16:54 0:11 /usr/bin/virtuoso-t +foreground +configfile /tmp/virtuoso_hX1884.ini +wait
        root   2460  0.0  0.0   6184   1876 pts/2 S    16:54 0:00 sudo -s
        root   2500  0.0  0.0   6528   2700 pts/2 S    16:54 0:00 /bin/bash
        root   2599  0.0  0.0   5444   1280 pts/2 S+   16:54 0:00 /bin/bash bin/aero
        root   2606  0.1  0.0   9836   2500 pts/2 S+   16:54 0:00 wvdial aero2
        root   2619  0.0  0.0   3504   1280 pts/2 S    16:54 0:00 /usr/sbin/pppd 57600 modem crtscts defaultroute usehostname -detach user aero
        bw     2653  0.0  0.0   6600   2880 pts/3 Ss   16:54 0:00 /bin/bash
        bw     2676  0.4  0.8 130296  24016 ?     SNl  16:54 0:01 /usr/bin/nepomukservicestub nepomukfilewatch
        bw     2679  0.1  0.7 101636  22252 ?     SNl  16:54 0:00 /usr/bin/nepomukservicestub nepomukqueryservice
        bw     2681  0.2  0.8 109836  24280 ?     SNl  16:54 0:00 /usr/bin/nepomukservicestub nepomukbackupsync
        bw     3833 46.0  9.7 829272 288012 ?     Rl   16:55 1:46 /usr/lib/firefox/firefox
        bw     3903  0.0  0.0  35128   2804 ?     Sl   16:55 0:00 /usr/lib/at-spi2-core/at-spi-bus-launcher
        bw     4708  0.1  0.0   6564   2736 pts/4 Ss   16:56 0:00 /bin/bash
        root   5210  0.0  0.0      0      0 ?     S    16:57 0:00 [kworker/u:0]
        root   6140  0.2  0.0      0      0 ?     S    16:58 0:00 [kworker/0:1]
        root   6371  0.5  0.0   6184   1868 pts/4 S+   16:59 0:00 sudo nethogs ppp0
        root   6411 17.7  0.2   8616   6144 pts/4 S+   16:59 0:05 nethogs ppp0
        bw     6787  0.0  0.0   5464   1220 pts/3 R+   16:59 0:00 ps auxw
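    One way to recover the owning PIDs that NetHogs and netstat leave as "?" or blank is to ask netstat for them explicitly while running as root (the PID/program column is only filled in for sockets you own unless you are root). A minimal sketch of the usual commands; the grep filter is just an example:

        # TCP connections with the owning PID/program name (numeric, no DNS lookups)
        sudo netstat -tnp

        # The same via lsof; -n and -P skip DNS and port-name lookups so it returns promptly
        sudo lsof -nP -iTCP

        # Watch which process opens new port-80 connections as the storm starts
        watch -n 1 'sudo netstat -tnp | grep :80'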

    Read the article

  • Cocos2d: Changing b2Body x val every frame causes jitter

    - by Joey Green
    So, I have a jumping mechanic similar to what you would see in Doodle Jump, where the character jumps and you use the accelerometer to make the character change direction left or right. I have a player object with a position and a Box2D b2Body with a position. I'm changing the player's X position via the accelerometer and the Y position according to Box2D. Pseudocode for this is like so:

        ----- accelerometer acceleration -----
        player.position = new X

        ----- world update -----
        physicsWorld->Step()
        // this will get me the new Y according to the physics simulation,
        // so we keep the body's Y value but change X to the new X from the accelerometer data
        playerPhysicsBody.position = new pos(player.position.x, keepYval)
        player.position = playerPhysicsBody.position

    Now, this is a simplification of my code, but I'm doing the position conversion back and forth by multiplying or dividing by PTM_. Well, I'm getting a weird jitter effect after I get a big jump in acceleration data. So, my questions are:

    1) Is this the right approach, having the accelerometer control the x position and Box2D control the y position, and just syncing everything up every frame?
    2) Is there some issue with updating a b2Body's x position every frame?
    3) Any idea what might be creating this jitter effect?

    I've collected some data while running the game. "Pre-body" is before I set the x value on the b2Body in my update method after world->Step(); "post-body" is afterwards. As you can see, there is definitely a pattern.

        2012-06-19 08:14:13.118 Game[1073:707] pre-body pos 5.518720~24.362963
        2012-06-19 08:14:13.120 Game[1073:707] post-body pos 5.060156~24.362963
        2012-06-19 08:14:13.131 Game[1073:707] player velocity x: -31.833529
        2012-06-19 08:14:13.133 Game[1073:707] delta 0.016669
        2012-06-19 08:14:13.135 Game[1073:707] pre-body pos 5.060156~24.689455
        2012-06-19 08:14:13.137 Game[1073:707] post-body pos 5.502138~24.689455
        2012-06-19 08:14:13.148 Game[1073:707] player velocity x: -31.833529
        2012-06-19 08:14:13.150 Game[1073:707] delta 0.016667
        2012-06-19 08:14:13.151 Game[1073:707] pre-body pos 5.502138~25.006948
        2012-06-19 08:14:13.153 Game[1073:707] post-body pos 5.043575~25.006948
        2012-06-19 08:14:13.165 Game[1073:707] player velocity x: -31.833529
        2012-06-19 08:14:13.167 Game[1073:707] delta 0.016644
        2012-06-19 08:14:13.169 Game[1073:707] pre-body pos 5.043575~25.315441
        2012-06-19 08:14:13.170 Game[1073:707] post-body pos 5.485580~25.315441
        2012-06-19 08:14:13.180 Game[1073:707] player velocity x: -31.833529
        2012-06-19 08:14:13.182 Game[1073:707] delta 0.016895
        2012-06-19 08:14:13.185 Game[1073:707] pre-body pos 5.485580~25.614935
        2012-06-19 08:14:13.188 Game[1073:707] post-body pos 5.026768~25.614935
        2012-06-19 08:14:13.198 Game[1073:707] player velocity x: -31.833529
        2012-06-19 08:14:13.199 Game[1073:707] delta 0.016454
        2012-06-19 08:14:13.207 Game[1073:707] pre-body pos 5.026768~25.905428
        2012-06-19 08:14:13.211 Game[1073:707] post-body pos 5.469213~25.905428
        2012-06-19 08:14:13.217 Game[1073:707] acceleration x -0.137421
        2012-06-19 08:14:13.223 Game[1073:707] player velocity x: -65.022644
        2012-06-19 08:14:13.229 Game[1073:707] delta 0.016603
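    For what it's worth, a common suggestion for this kind of setup is to stop overwriting the body's transform and instead drive the x axis through velocity, letting the solver integrate both axes. A minimal sketch using the standard Box2D 2.x C++ API; accelX, SPEED_SCALE and PTM_RATIO are placeholder names (the post only mentions "PTM_"):

        // Each frame, before stepping the world: steer x via velocity instead of
        // teleporting the position (a per-frame teleport fights the solver's own
        // integration, which can show up as the alternating pre/post jitter above).
        b2Vec2 v = playerBody->GetLinearVelocity();
        v.x = accelX * SPEED_SCALE;        // accelerometer input scaled to m/s
        playerBody->SetLinearVelocity(v);  // leave Box2D's y (jump/gravity) alone

        world->Step(dt, velocityIterations, positionIterations);

        // Render from the body, converting meters back to points:
        player.position = ccp(playerBody->GetPosition().x * PTM_RATIO,
                              playerBody->GetPosition().y * PTM_RATIO);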

    Read the article

  • JRockit Virtual Edition Debug Key

    - by changjae.lee
    There are a few keys that can help with debugging the JRVE environment in the console. You can type each key in the JRVE console to see what's happening under the hood:

        key '1' : System information
        key '5' : Enable shutdown
        key '7' : Start JRockit Management Server (port 7091)
        key '8' : Statistics Counters
        key '9' : Full Thread Dump
        key '0' : Status of Debug-keys

    Below is sample output from each key.

        Debug-key '1' pressed
        ============ JRockitVE System Information ============
        JRockitVE version  : 11.1.1.3.0-67-131044
        Kernel version     : 6.1.0.0-97-131024
        JVM version        : R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32
        Hypervisor version : Xen 3.4.0
        Boot state         : 0x007effff
        Uptime             : 0 days 02:04:31
        CPU                : uniprocessor @2327 Mhz
        CPU usage          : 0% ctx/s: 285 preempt/s: 0 migrations/s: 0
        Physical pages     : 82379/261121 (321/1020 MB)
        Network info       : 10.179.97.64 (10.179.97.64/255.255.254.0)
        GateWay            : 10.179.96.1
        MAC address        : 00:16:3e:7e:dc:78
        Boot options :
          vfsCwd        : /application/user_projects/domains/wlsve_domain
          mainArgs      : java -javaagent:/jrockitve/services/sshd/sshd.jar -cp /jrockitve/jrockit/lib/tools.jar:/jrockitve/lib/common.jar:/application/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/application/wlserver_10.3/server/lib/weblogic.jar -Dweblogic.Name=WlsveAdmin -Dweblogic.Domain=wlsve_domain -Dweblogic.management.username=weblogic -Dweblogic.management.password=welcome1 -Dweblogic.management.GenerateDefaultConfig=true weblogic.Server
          consLog       : /jrockitve/log/jrockitve.log
          mounts        : ext2 / dev0;
          posixLocale   : en_US
          posixTimezone : Asia/Seoul
          posixEncoding : ISO-8859-1
        Local disk : Size: 1024M, Used: 728M, Free: 295M
        ========================================================

        Debug-key '5' pressed
        Shutdown enabled.

        Debug-key '7' pressed
        [JRockit] Management server already started. Ignoring request.

        Debug-key '8' pressed
        Starting stat recording

        Debug-key '8' pressed
        ========= Statistics Counters for the last second =========
        dev.eth0_rx.cnt                : 22 packets
        dev.eth0_rx_bytes.cnt          : 2704 bytes
        dev.net_interrupts.cnt         : 22 interrupts
        evt.timer_ticks.cnt            : 123 ticks
        hyper.priv_entries.cnt         : 144 entries
        schedule.context_switches.cnt  : 271 switches
        schedule.idle_cpu_time.cnt     : 997318849 nanoseconds
        schedule.idle_cpu_time_0.cnt   : 997318849 nanoseconds
        schedule.total_cpu_time.cnt    : 1000031757 nanoseconds
        time.system_time.cnt           : 1000 ns
        time.timer_updates.cnt         : 123 updates
        time.wallclock_time.cnt        : 1000 ns
        =======================================

        Debug-key '9' pressed
        ===== FULL THREAD DUMP ===============
        Fri Jun 4 08:22:12 2010
        BEA JRockit(R) R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32

        "Main Thread" id=1 idx=0x4 tid=1 prio=5 alive, in native, waiting
        -- Waiting for notification on: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
        at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method)
        at java/lang/Object.wait(J)V(Native Method)
        at java/lang/Object.wait(Object.java:485)
        at weblogic/t3/srvr/T3Srvr.waitForDeath(T3Srvr.java:919)
        ^-- Lock released while waiting: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
        at weblogic/t3/srvr/T3Srvr.run(T3Srvr.java:479)
        at weblogic/Server.main(Server.java:67)
        at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
        -- end of trace

        "(Signal Handler)" id=2 idx=0x8 tid=2 prio=5 alive, in native, daemon

        Open lock chains
        ================
        Chain 1:
        "ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" id=23 idx=0x50 tid=20
          waiting for java/lang/String@0x630c588
          held by:
        "ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" id=24 idx=0x54 tid=21 (active)
        ===== END OF THREAD DUMP ===============

        Debug-key '0' pressed
        Debug-keys enabled

    Happy Cloud Walking :)

    Read the article

  • How to Disable the New Geolocation Feature in Google Chrome

    - by Asian Angel
    The latest release of Google Chrome has geolocation enabled by default, and if you are worried about privacy or just don’t want websites to prompt you for your location, we’ve got the quick details on how to turn it off. Readers should note that the new geolocation feature doesn’t give out your details by default, so don’t panic. It’s also only active, at the time of this writing, in the Dev channel builds of Chrome, so if you are using the regular stable build this feature won’t arrive for a while anyway.

    Note: If you’re a Firefox user, be sure to check out our guide to disabling geolocation in Firefox 3.x.

    What’s this Geolocation Feature About?

    Geolocation is a way for your browser to tell a website about your physical location, so you can get results tailored to where you actually are. For example, if you visit Google Maps, it can ask you for your location to give you an accurate picture of where you are. To use this feature in Google Maps, you would click on the small white icon to activate it. As soon as you have clicked the icon, a thin green toolbar appears at the top of the webpage, asking you to Allow or Deny.

    How to Turn Chrome’s Geolocation Off

    If you want to turn geolocation off, open the “Chrome Options” window, navigate to the third tab, and click the “Content settings…” button. When the “Content Settings” window opens, go to the “Location” tab and select “Do not allow any site to track my physical location”. Once that is done, close the “Content Settings” and “Chrome Options” windows. When you go back to Google Maps and try the small white icon again, the message at the top of the page now tells you the site is not allowed to track your location. Now that is much better! If you are unhappy with geolocation being enabled by default in the latest Dev channel release, this will get the problem sorted out nicely.
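    For context, the prompt this setting governs is triggered when a page calls the W3C Geolocation API from JavaScript, roughly like this (a minimal sketch):

        // A site asking the browser for your physical location; with the
        // "Do not allow" setting above, the error callback fires instead.
        navigator.geolocation.getCurrentPosition(
            function (pos) { console.log(pos.coords.latitude, pos.coords.longitude); },
            function (err) { console.log("Location request denied or failed:", err.code); }
        );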

    Read the article

  • Too Few Women in IT!

    - by Yolande
    Last year, only 1% of attendees at Devoxx were women. This year, Devoxx addressed the issue in a panel entitled "Why We Should Target Women." On the panel were Kim Ross, Régina ten Bruggencate, Trisha Gee, Antonio Goncalves and Claude Falguiére; the moderator was Martijn Verburg. The discussion focused on how to attract women to programming and how to get current women programmers to be more active in the community. The panelists agreed that the IT field should not just attract more women but also men of different ethnic backgrounds.

    The lack of women in programming is in part a cultural issue that differs from region to region. In developed countries, very few women work as programmers, whereas in Brazil and India a lot of women pursue careers in IT. Women in developed countries perceive the field as isolating, and very few young women graduate in computer science. This perception of isolation was based in reality decades ago, but that is no longer the case today.

    Main ideas discussed by the panel:

    - Parents should encourage their daughters to play with Lego and learn programming.
    - More organizations should target girls in high school and young women in university to expose them to programming. The Duchess organization is planning on being more involved with events for young girls and with mentoring.
    - Women tend to be more self-critical about their skills and are intimidated by high skill requirements in job advertisements. Companies should change job advertisements to get more women to interviews.
    - The panelists don't recommend affirmative action, because women then feel favored and lose credibility. They want to be judged on their skills.
    - The panelists recommend acting the same way when dealing with female and male co-workers and managers.
    - Women need mentors (men or women) to learn to become speakers at conferences and to promote themselves better.
    - Men should be sensitive to the fact that a woman at work is often on her own when responding to teasing from men; the balance of power at work is different from a social setting.
    - Men also experience discrimination on the job. It is more difficult for men to take time off when their children are sick, for example. Equal valuing of parental obligations could result in equal pay for women.

    See also:
    Trisha Gee's blog - http://mechanitis.blogspot.com/
    Duchess organization - http://www.jduchess.org/

    Read the article

  • Language Club

    - by Ben Griswold
    We started a language club at work this week. Thus far, we have a collective interest in a number of languages: Python, Ruby, F#, Erlang, Objective-C, Scala, Clojure, Haskell and Go. There are more, but these 9 received the most votes. During the first few meetings we are going to determine which language we should tackle first. To help make our selection, each member will provide a quick overview of their favored language by answering the following set of questions:

    - Why are you interested in learning "your" language? (There's lots of work, I'm an MS shill, it's hip and fun, etc.)
    - What type of language is it? (OO, dynamic, functional, procedural, declarative, etc.)
    - What types of problems is your language best suited to solve? (Algorithms over big data, rapid application development, modeling, merely academic, etc.)
    - Can you provide examples of where/how it is being used? If it isn't being used, why not? (Erlang was invented at Ericsson to provide an extremely fault-tolerant, concurrent system.)
    - Quick history: who created/sponsored the language? When was it created? Is it currently active?
    - Does the language have hardware support (an attempt was made at one point to create processor instruction sets specific to Prolog), or can it run as an interpreted language inside another runtime (like Ruby on the JVM)?
    - Are there facilities for programs written in this language to communicate with other languages? How does this affect its utility?
    - Does the language have IDE tool support? (Think Eclipse or Visual Studio.)
    - How well is the language supported in terms of books, community and documentation?
    - What's the number one thing which differentiates the language from others? (i.e., why is it cool?)
    - How applicable is the language to us as consultants? What would the impact be of using the language in terms of cost, maintainability, personnel costs, etc.?

    This should provide a decent introduction to nearly a dozen languages and give us enough context to decide which single language deserves our undivided attention for the weeks to come. Stay tuned for the winner…

    Read the article

  • Access Control Service: Protocol and Token Transition

    - by Your DisplayName here!
    ACS v2 supports a number of protocols (WS-Federation, WS-Trust, OpenID, OAuth 2 / WRAP) and a number of token types (SWT, SAML 1.1/2.0) – see Vittorio's infographic here. Some protocols are designed for active clients (WS-Trust, OAuth / WRAP) and some are designed for passive clients (WS-Federation, OpenID).

    One of the most obvious advantages of ACS is that it allows you to transition between the various protocols and token types. One example would be using WS-Federation/SAML between your application and ACS to sign in with a Google account. Google uses OpenID and non-SAML tokens, but ACS transitions into WS-Federation and sends back a SAML token. This way your application only needs to understand a single protocol, whereas ACS acts as a protocol bridge (see my ACS2 sample here).

    Another example would be the transformation of a SAML token to a SWT. This is achieved by using the WRAP endpoint – you send a SAML token (from a registered identity provider) to ACS, and ACS turns it into a SWT token for the requested relying party, e.g. (using the WrapClient from Thinktecture.IdentityModel):

        [TestMethod]
        public void GetClaimsSamlToSwt()
        {
            // get SAML token from IdP
            var samlToken = Helper.GetSamlIdentityTokenForAcs();

            // send to ACS for SWT conversion
            var swtToken = Helper.GetSimpleWebToken(samlToken);

            var client = new HttpClient(Constants.BaseUri);
            client.SetAccessToken(swtToken, WebClientTokenSchemes.OAuth);

            // call REST service with SWT
            var response = client.Get("wcf/client");
            Assert.AreEqual<HttpStatusCode>(HttpStatusCode.OK, response.StatusCode);
        }

    There are more protocol transitions possible – but they are not so obvious. A popular example would be how to call a REST/SOAP service using, e.g., a LiveID login. In the next post I will show you how to approach that scenario.

    Read the article

  • Newbie worried about CASE tool.

    - by Jason Evans
    Hi there. I'm looking for some guidance on CASE tools and whether my concerns are valid.

    Recently I was in a meeting between my employer and an external software company which has a CASE tool currently in beta. They demonstrated this tool to us, showing how you build a UML model in Enterprise Architect (or something like it) and then, through their tool, that UML model is transformed into a Visual Studio project, with C# files, stored procedures for SQL Server, code for the data layer, WCF stuff, logging code and all sorts. Now, admittedly, I don't see the point in this, as in I'm not convinced it will save that much time (plus it feels like overkill). The tool authors said that a trial of the tool at another company had saved a team there 5 weeks of development time (from 6 weeks down to about 1 week). I find the accuracy of that estimate hard to believe.

    My main concern is whether using this tool is going to slow down my productivity. For example, say I have a UML model from which I built a VS solution. Now I want to rename a class method to something else; will this mean having to update the UML model first and then rebuilding the code? Is this how CASE tools normally work?

    Something I will need to check with the authors is the structure of the generated VS solution. I like the Domain Driven Design way of structuring a project – Infrastructure, Services, Model, etc. I doubt very much this tool will do that. Also, I've been playing around with Entity Framework Code First and think it's a great way to build the data model. I have nice repositories, unit-of-work classes and other design patterns that work well with EF, and I have data annotations and stuff like that working great. Since the CASE tool doesn't use EF (it generates its own data layer code), I'm concerned that this tool's data layer code might not be as nice to integrate into the UoW pattern, repositories, etc. This I will need to verify when I get a closer look at the generated code.

    What are other people's experiences with CASE tools? Am I being paranoid about nothing? Am I being unfair – are my negativities unfounded?

    EDIT: I like to use TDD/BDD for building my code, and using a CASE tool looks like it will make this difficult. Again, any feedback on this would be great.

    Cheers.
    Jas.
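    For reference, the Entity Framework Code First style described in the question looks roughly like this (a minimal sketch; the Customer and ShopContext names are invented for illustration):

        using System.ComponentModel.DataAnnotations;
        using System.Data.Entity;

        // Model class with data annotations driving validation and schema generation
        public class Customer
        {
            public int Id { get; set; }

            [Required, StringLength(100)]
            public string Name { get; set; }
        }

        // Context exposing the model; straightforward to wrap in
        // repository and unit-of-work classes for testability
        public class ShopContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
        }

    A generated data layer that bypasses this kind of context is what makes the repository/UoW integration concern above a fair one to raise with the tool's authors.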

    Read the article

  • How do you keep code with continuations/callbacks readable?

    - by Heinzi
    Summary: Are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks?

    I'm using a JavaScript library that does a lot of stuff asynchronously and relies heavily on callbacks. It seems that writing a simple "load A, load B, …" method becomes quite complicated and hard to follow using this pattern. Let me give a (contrived) example. Let's say I want to load a bunch of images (asynchronously) from a remote web server. In C#/async, I'd write something like this:

        disableStartButton();
        foreach (var myData in myRepository)
        {
            var result = await LoadImageAsync("http://my/server/GetImage?" + myData.Id);
            if (result.Success)
            {
                myData.Image = result.Data;
            }
            else
            {
                write("error loading Image " + myData.Id);
                return;
            }
        }
        write("success");
        enableStartButton();

    The code layout follows the "flow of events": first, the start button is disabled, then the images are loaded (await ensures that the UI stays responsive), and then the start button is enabled again. In JavaScript, using callbacks, I came up with this:

        disableStartButton();

        var count = myRepository.length;

        function loadImage(i) {
            if (i >= count) {
                write("success");
                enableStartButton();
                return;
            }

            var myData = myRepository[i];
            LoadImageAsync("http://my/server/GetImage?" + myData.Id,
                function (success, data) {
                    if (success) {
                        myData.Image = data;
                    } else {
                        write("error loading image " + myData.Id);
                        return;
                    }
                    loadImage(i + 1);
                }
            );
        }

        loadImage(0);

    I think the drawbacks are obvious: I had to rework the loop into a recursive call, the code that's supposed to be executed at the end is somewhere in the middle of the function, the code starting the download (loadImage(0)) is at the very bottom, and it's generally much harder to read and follow. It's ugly and I don't like it. I'm sure that I'm not the first one to encounter this problem, so my question is: are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks?
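    One well-established mitigation (besides promise/deferred libraries) is to pull the iteration into a small reusable sequencing helper, so the business logic reads top to bottom again. A minimal sketch in the same ES5 style as the question; forEachSeries is defined here, not taken from any library:

        // Run an async worker over items one at a time, then call done(err).
        function forEachSeries(items, worker, done) {
            var i = 0;
            (function next(err) {
                if (err || i >= items.length) { done(err); return; }
                worker(items[i++], next);
            })();
        }

        disableStartButton();
        forEachSeries(myRepository, function (myData, next) {
            LoadImageAsync("http://my/server/GetImage?" + myData.Id, function (success, data) {
                if (!success) { next("error loading image " + myData.Id); return; }
                myData.Image = data;
                next();
            });
        }, function (err) {
            if (err) { write(err); return; }
            write("success");
            enableStartButton();
        });

    The loop, the error handling and the "what happens at the end" code now appear in the order they execute, which is most of what the C# version buys you.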

    Read the article

  • ArchBeat Facebook Friday: Top 10 Posts - August 8-14, 2014

    - by Bob Rhubart-Oracle
    5,307 people pay attention to the OTN ArchBeat Facebook Page. Here are the Top 10 posts from that page for the last seven days, August 8-14, 2014.

    - Podcast: ODTUG Kscope 2014: Anatomy of a User Conference - Part 3. There is more to a great user conference than a shared interest in Oracle products. In the final segment of this 3-part OTN ArchBeat Podcast, panelists Danny Bryant, Chet "ORACLENERD" Justice, Cameron Lackpour, Debra Lilley, and Mike Riley discuss the nature and importance of community.
    - Oracle SOA Suite 12c: The LDAP Adapter quick and easy | Maarten Smeets. Maarten Smeets' how-to post describes the installation and configuration of an LDAP server and browser (ApacheDS and Apache Directory Studio).
    - Process level Exception Handling in BPM12c | Abhishek Mittal. When an exception occurs while running a process flow you have two choices: 1) retry running the flow object that caused the exception, or 2) move the process instance to the next flow object in the main process flow. Abhishek Mittal shows you how to do both.
    - Building a Responsive WebCenter Portal Application | JayJay Zheng. Oracle ACE JayJay Zheng's article addresses the essentials of responsive web design, shows you how to design and develop a responsive WebCenter Portal application, and reviews key development considerations.
    - Cloud Control authorization with Active Directory | Jeroen Gouma. Jeroen Gouma takes you step-by-step through the user authorization process in Oracle Enterprise Manager Cloud Control 12c.
    - Video: CIOs Guide to Oracle Products and Solutions | Jessica Keyes. The CIO's Guide to Oracle Products and Solutions author Jessica Keyes talks about why input from users and developers is essential to CIOs who want to avoid being escorted out of the building by security guards. Read A CIO's Guide to Oracle Cloud Computing, a sample chapter from the book.
    - Twitter Tuesday - Top 10 @ArchBeat Tweets - August 5-11, 2014. @OTNArchBeat followers from across the galaxy have spoken! Here are the Top 10 tweets for the past seven days. Topics include: Hyperion, OBIEE, ODI, Oracle MAF, and SOA Suite.
    - Recap: Fusion Middleware Summer Camps - Lisbon 2014 | Simon Haslam. Oracle ACE Director Simon Haslam's recap of his experience at the Oracle Fusion Middleware Summer Camp in Lisbon, Portugal will make you wish you had been there.
    - WebLogic Data Source Connection Labeling | Steve Felts. The connection labeling feature was added in WLS release 10.3.6 and enhanced in release WLS 12.1.3. This post by Steve Felts describes two new connection properties that can be configured on the data source descriptor.
    - Why Mobile Apps <3 REST/JSON | Martin Jarvis. Martin Jarvis explores the preference for REST and JSON over SOAP and XML for mobile web services.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – Stage 4: Automated Deployment

    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advance warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two, basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server.

    But this is only part of the story. Great – we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested and it being easily releasable.

    Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. That piece also gives a nice description of the benefits of continuous delivery, which have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users.”

    There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically about the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage!

    The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and, hey presto, I’m ready? If only it were that simple. Below I list some of the areas where it’s worth spending a little time, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we’ll look at the following areas for your checklist:

    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team

    It’s a cliché in the DevOps community that “it’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group.”

    Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage and making that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing and understands the benefits of automated deployment, so that they are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay, isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process, and so on. Or, of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? It's worth talking to them to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager, and possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:

    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do; or, if you’re the DBA yourself, get the dev team up to speed with your plans.
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “what we want, to do continuous delivery properly” and “what we’re currently stuck with”. Some of these differences are:

    - What we want: each developer with their own dedicated database environment. What we’ve got: a single shared “development” environment, used by everyone at once.
    - What we want: an integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we’ve got: in fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: a separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    - What we want: a proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: hopefully a pre-production box of some sort. But does it match production closely?!
    - What we want: a production environment reproducible from source control. What we’ve got: a production box which has drifted significantly from anything in source control.

    The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!) and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases,
    - An integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production – the environment you use to test the production release process,
    - Production.

    * A note on the use of the word “automatic”: when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:

    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all”, with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”

    1. Check that pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes this is done by taking a backup from production and restoring it into pre-prod,
    2. Again using a schema compare tool, find the differences between production and the latest version of the database ready to go live (i.e. what the team have been developing). This generates an upgrade script,
    3. A user (generally the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all is working, run the script on production.*

    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something into the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com if you’re interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme is pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!) before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?”
NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.
Actions:
- Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?”
- Repeat for earlier environments (QA and so on).
Rollback and Recovery
If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move into a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for.
NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.
There are various options, which we’ll explore in subsequent articles, things like:
- Immediately restore from backup,
- Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
- Have fallback environments – for example, using a blue-green deployment pattern.
Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.
Actions:
- Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.
Development Practices
This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace
As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). A minimal SQL sketch of this stepwise approach is shown at the end of this article.
There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515
But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.
Actions:
- For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
- Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.
Summary
The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to.
This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration & deployment.
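To make “Branch by abstraction” a little more concrete, here is a minimal SQL sketch of splitting out a table in backward-compatible steps. The table and column names (Customer, CustomerAddress, AddressLine) are hypothetical, and the trigger-based synchronisation is just one way of keeping the old and new structures in step – it is not taken from the article or the slide deck above. Each step ships as its own small deployment (and its own batch), so that if any increment fails, the previous structure is still intact and the release can be rolled back without losing data.

    -- Step 1 (release N): create the new structure alongside the old column
    CREATE TABLE dbo.CustomerAddress (
        CustomerId  INT NOT NULL PRIMARY KEY
            REFERENCES dbo.Customer (CustomerId),
        AddressLine NVARCHAR(200) NOT NULL
    );

    -- Step 2 (release N): backfill the new table from the existing column
    INSERT INTO dbo.CustomerAddress (CustomerId, AddressLine)
    SELECT CustomerId, AddressLine
    FROM dbo.Customer
    WHERE AddressLine IS NOT NULL;

    -- Step 3 (release N): keep both structures in sync while old and new code paths coexist,
    -- so this release can still be rolled back without losing new data
    CREATE TRIGGER dbo.trg_Customer_AddressSync
    ON dbo.Customer
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        MERGE dbo.CustomerAddress AS tgt
        USING (SELECT CustomerId, AddressLine
               FROM inserted
               WHERE AddressLine IS NOT NULL) AS src
            ON tgt.CustomerId = src.CustomerId
        WHEN MATCHED THEN
            UPDATE SET AddressLine = src.AddressLine
        WHEN NOT MATCHED THEN
            INSERT (CustomerId, AddressLine)
            VALUES (src.CustomerId, src.AddressLine);
    END;

    -- Step 4 (a later release, once nothing reads the old column any more):
    -- drop the sync trigger and the old column
    DROP TRIGGER dbo.trg_Customer_AddressSync;
    ALTER TABLE dbo.Customer DROP COLUMN AddressLine;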

    Read the article

  • java+netbeans+mysql+ubuntu 64 bits LTS VS C#+MS SQL for fast develop trading system

    - by crunchor
    I am going to build a trading system and use it with the broker "Interactive Broker" API. The API supports C++ Socket, Java Socket, DDE and ActiveX APIs on Windows, and Java Socket and POSIX C++ Socket APIs on Unix-like systems such as Ubuntu. My trading system has some long real-time calculations to do and a lot of maths for backtesting. I am currently using a retail trading program, Amibroker, which is written in C++, and I run it on Windows XP 32-bit; it takes me days to do one serious backtest with my G620 Sandy Bridge CPU and 3 GB of RAM. So for my trading system, I need 1. speed, 2. stability, 3. fast development. I have done some research: C++ is fastest, but I am not good at it and it takes much longer to develop in. Other than C++, Java on Linux has the best speed. I also did some research on databases, and it looks like MySQL has the best speed. MySQL and PostgreSQL are both very popular open source SQL databases – which one should I choose? I see MySQL now has Workbench, which looks similar to MS SQL Management Studio, so that looks like a good start. NetBeans seems to be the most popular Java IDE now, and it seems its GUI design can be as easy as Visual Studio. I am not sure whether a GUI built with NetBeans would affect the speed, and whether its GUI designer is really that good and easy to use. Ubuntu 64-bit LTS has good long-term support, good community support, and is stable. I will buy a new computer if I can create a good trading system for live trading and backtesting – very likely an i7 or i5, depending on whether the i7 can really give better speed in my case. Actually I mainly deal with C# in my job, and I know a little Java but am not good at it. What would you guys recommend? Any better solution? This will be a big project and very likely a lifelong project for me, so I am seriously doing research, including asking you guys, before I start and focus on what I should. Thanks!

    Read the article

  • Fixing the #mvvmlight code snippets in Visual Studio 11

    - by Laurent Bugnion
    If you installed the latest MVVM Light version for Windows 8, you may encounter an issue where code snippets are not displayed correctly in the IntelliSense popup. I am working on a fix, but for now here is how you can solve the issue manually.
    The code snippets
    MVVM Light, when installed correctly, will install a set of code snippets that are very useful to allow you to type less code. As I often say, code is where bugs are, so you want to type as little of that as possible ;) With code snippets, you can easily auto-insert segments of code and easily replace the keywords where needed. For instance, every coder who uses MVVM as his favorite UI pattern for XAML-based development is used to the INotifyPropertyChanged implementation, and how boring it can be to type these “observable properties”. Obviously a good fix would be something like an “Observable” attribute, but that is not supported in the language or the framework for the moment. Another fix involves “IL weaving”, which is a post-build operation modifying the generated IL code and inserting the “RaisePropertyChanged” instruction. I admire the invention of those who developed that, but it feels a bit too much like magic to me. I prefer more “down to earth” solutions, and thus I use the code snippets.
    Fixing the issue
    Normally, you should see the code snippets in IntelliSense when you position your cursor in a C# file and type mvvm. All MVVM Light snippets start with these 4 letters. (Screenshot: normal MVVM Light code snippets.) However, in Windows 8 CP, there is an issue that prevents them from appearing correctly, so you won’t see them in the IntelliSense window. To restore them, follow these steps:
    1. In Visual Studio 11, open the menu Tools, Code Snippets Manager.
    2. In the combobox, select Visual C#.
    3. Press Add…
    4. Navigate to C:\Program Files (x86)\Laurent Bugnion (GalaSoft)\Mvvm Light Toolkit\SnippetsWin8 and select the CSharp folder.
    5. Press Select Folder.
    6. Press OK to close the Code Snippets Manager.
    Now if you type mvvm in a C# file, you should see the snippets in your IntelliSense window. Cheers, Laurent – Laurent Bugnion (GalaSoft)

    Read the article

  • Bluetooth Dial-Up Networking using Blueman

    - by leemes
    I want to configure a dial up network connection via bluetooth to my phone in order to access the internet. I use Lubuntu 12.04 (Ubuntu with LXDE) which has the Network Manager Applet and Blueman applet installed. I guess these are the same tools than on an Ubuntu installation, hence I ask my question on this site. My phone is a Sony Ericsson W810i, my laptop is a Lenovo S10-2, my mobile phone provider is o2 Germany. I scanned for my mobile phone using the Blueman applet. I connected the dial-up network via the context menu - Serial Ports - Dial-up Networking. A notification bubble says that the connection is available on the interface named ppp0. ipconfig is telling something different: There is no ppp0 or something similar. I only see my eth0 (wired ethernet), eth1 (wifi) and lo interfaces. Of course, I can't ping google.com as the interface really seems to be not present at all. When the dial-up network is being connected, my mobile phone says that it connects to the internet. Afterwards, I see the active connection on the phone's screen. When successfully connecting with the phone using another computer, it behaves exactly the same, so I guess that the phone isn't the problem. I don't know if I configured the Dial-Up correctly. I use the phone number *99# which is very common on most mobile ISPs. I use the APN which my ISP is telling me to use. (I can't find the number on their support page, so I just use the default value *99#.) My mobile ISP is o2 Germany. There are How-Tos out there which use the Network Manager Applet to setup a bluetooth dial-up connection, but I can't see any bluetooth devices in the context menu as on the screenshots in those How-Tos. Do you have any suggestions what might be wrong / what I should try? EDIT: When choosing "Network Access Point" in the device's context menu instead of Serial Ports - Dial-Up Networking, an interface bnep0 appears. However, neither an IPv4 address is assigned for that interface (but IPv6), nor the phone connects to the internet. Am I missing something? Can I connect to the internet after setting up this network connection?

    Read the article

  • The Red Gate Guide to SQL Server Team based Development Free e-book

    - by Mladen Prajdic
    After about 6 months of work, the new book I’ve coauthored with Grant Fritchey (Blog|Twitter), Phil Factor (Blog|Twitter) and Alex Kuznetsov (Blog|Twitter) is out. They’re all smart folks I talk to online and this book is packed with good ideas backed by years of experience. The book contains a good deal of information about things you need to think of when doing any kind of multi-person database development. Although it’s meant for SQL Server, the principles can be applied to any database platform out there. In the book you will find information on: writing readable code, documenting code, source control and change management, deploying code between environments, unit testing, reusing code, searching and refactoring your code base. I’ve written chapter 5 about Database testing and chapter 11 about SQL Refactoring. In the database testing chapter (chapter 5) I cover why you should test your database, why it is a good idea to have a database access interface composed of stored procedures, views and user-defined functions, and what and how to test. I talk about the many testing methods, like black and white box testing, unit and integration testing, and error and stress testing, and why and how you should do all of those. Sometimes you have to convince management to go for testing in the development lifecycle, so I give some pointers and tips on how to do that. Testing databases is a bit different from testing object-oriented code, in that to have independent unit tests you need to roll back your changes after each test. The chapter shows you ways to do this and also how to avoid it. At the end I show how to test various database objects and how to test access to them. In the SQL Refactoring chapter (chapter 11) I cover why to refactor and where to even begin refactoring. I also show you a way to achieve a set-based mindset for solving SQL problems, which is crucial to good set-based SQL programming, and a few commonly seen problems to refactor. These problems include: using functions on columns in the WHERE clause (a small example follows below), SELECT * problems, long stored procedures with many input parameters, one subquery per condition in the SELECT statement, the “cursors are good for anything” problem, using too-large data types everywhere, and the “using your data in code for business logic” anti-pattern. You can read more about it and download it here: The Red Gate Guide to SQL Server Team-based Development. Hope you like it, and send me feedback if you wish to.
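    As an illustration of the first refactoring problem mentioned above – using functions on columns in the WHERE clause – here is a minimal before/after sketch. The Orders table and its columns are hypothetical and not taken from the book; the point is simply that moving the function off the column gives the optimizer a sargable predicate that can use an index on OrderDate.

        -- Before: the function on the column defeats an index seek on OrderDate
        SELECT OrderId, CustomerId, TotalDue
        FROM dbo.Orders
        WHERE YEAR(OrderDate) = 2010;

        -- After: an equivalent range predicate that can seek an index on OrderDate
        SELECT OrderId, CustomerId, TotalDue
        FROM dbo.Orders
        WHERE OrderDate >= '20100101'
          AND OrderDate <  '20110101';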

    Read the article

  • Oracle ODP.NET and Windows PowerShell

    - by cjandaus
    Well known in the Microsoft world, but drawing little more than a shrug in the Oracle world – the so-called Scripting Guys. As the name suggests, their Hey, Scripting Guy! Blog is all about scripting – and, naturally, about Windows PowerShell. Yes, the days of the DOS command window and batch files are over. PowerShell is a powerful scripting environment on Windows that even Unix/Linux administrators should find appealing. We demonstrated years ago, in an Oracle workshop series, that it can also be used very nicely to access Oracle databases. Back then Klaus Rohe from Microsoft joined me, and together we also gave a talk at the DOAG conference. Our common goal, then as now, was to convince Oracle users of the excellent integration between Oracle, Windows and .NET. What better way than to have both vendors confirm it together? The eternal doubters in particular welcomed that. Since then PowerShell has been off my radar, and Oracle users haven't raised the topic either. Perhaps because it is too new or too little known? Rather unlikely... More likely it is because people simply assume that PowerShell is only really usable with Microsoft products, or they are told that integration is only possible with Microsoft's own database, SQL Server. And that, of course, is not true – as usual (I'm thinking, among other things, of Microsoft Active Directory – but more on that another time). So I am all the more pleased to read a brand-new blog post on exactly this topic, which Alex Keh (product manager for Windows and .NET at Oracle headquarters in San Francisco) pointed out to me. What makes it even better is that this post comes from the Microsoft world, and so confirms between the lines that the Oracle database and our .NET integration via the Oracle Data Provider for .NET (ODP.NET) play an important role there too. In that spirit: two thumbs up for the Scripting Guys! The post is called Use Oracle ODP.NET and PowerShell to Simplify Data Access, and despite a few minor slips, the article is well worth reading as an introduction to the topic. Let me know where you stand on this integration, whether PowerShell is or could become important to you in practice, and whether there are features you miss that Oracle should implement in future. Thank you!

    Read the article

  • Transfer websites and domains to new server

    - by Albert
    We have currently around 40 websites and 80+ domains/sub-domains in a shared 1&1 hosting package, and we just acquired a managed dedicated server with 1&1 as well. Now it's time to start transferring everything over to the new server. Transferring just the websites and databases wouldn't be a problem, it would take time but it's pretty straight forward. The problem comes when transferring the domains, let me explain why. Many of the websites we have are accessible via sub-domains of a parent domain. Ideally, we would like to transfer the sites one by one, in order to check for each one that everything works fine in the new server. However, since we also need to transfer the domain so it's managed in the new server, once we do that means that all the websites using that domain need to be already in the new server before transferring that domain, thus not allowing the "one by one" philosophy. Another issue is the downtime when transferring the domain, from the moment it stops working in the hosting package and becomes active in the new server. I believe there's nothing we can do here. So my question is if there's any way we can do the "one by one" transferring of the websites (and their corresponding sub-domains) in the circumstances described above. One idea I had would be: 1. Let's say we have website A, which is accessible using subdomain.mydomain.com (and there are many other websites accessible via other sub-domains of mydomain.com) 2. Transfer the files of website A to the new server 3. Point a test domain in the new server to the website A's folder (the new server comes with a "test" domain) 4. Test if website A works with that "test" domain 5. In the old hosting, somehow point the real sub-domain (subdomain.mydomain.com) to the new location of website A, in a way that user always see the same URL as always 6. Repeat 2-5 for every website belonging to the same domain 7. Once all are working in the new server, do the actual transfer of the domain to the new server, and then re-create all the sub-domains and point them to their corresponding website That way, users wouldn't notice that there's been a change (except for a small down time of the websites when doing the domain transfer). The part I'm not sure about is point 5 of the above. Is there any way to do that? I mean do it in a way that users see the original domain all the time in their browser, even for internal pages (so not only for the "home page", which would be sub-domain.mydomain.com, but also for example for the contact page, which would be sub-domain.mydomain.com/contact.php). Is there any way to do this? Or are we SOL and we're going to have to transfer all at the same time?

    Read the article

  • What Is Nuclear Meltdown?

    - by Gopinath
    Japan was first hit by a massive earthquake, then a ruthless tsunami washed away thousands of homes, and now they fear the worst – meltdown of nuclear power stations in the quake-hit area. Nuclear meltdowns are horrifying – remember the Chernobyl incident in the Soviet Union? The Chernobyl reactor meltdown released 400 times more radioactive material than the atomic bombing of Hiroshima. The effects of nuclear meltdowns are beyond the imagination of the common man: thousands of people lose their lives and hundreds of thousands more suffer from radiation-related diseases for many years. Nuclear meltdowns are dangerous, but how do they happen? What causes a nuclear meltdown? In simple terms, a nuclear meltdown is an accident that happens due to severe overheating of a nuclear reactor and results in the release of nuclear radiation into the environment. How a Nuclear Meltdown Happens: according to Wikipedia, "A meltdown occurs when a severe failure of a nuclear power plant system prevents proper cooling of the reactor core, to the extent that the nuclear fuel assemblies overheat and melt. A meltdown is considered very serious because of the potential that radioactive materials could be released into the environment. The fuel assemblies in a reactor core can melt if heat is not removed. A nuclear reactor does not have to remain critical for a core damage incident to occur, because decay heat continues to heat the reactor fuel assemblies after the reactor has shut down, though this heat decreases with time. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss of pressure control accident, a loss of coolant accident (LOCA), an uncontrolled power excursion or, in some types, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely." Video – What Causes a Nuclear Meltdown: Al Jazeera news has a good analysis of the feared nuclear meltdown at Japan's nuclear plants and also an animation of what causes a nuclear meltdown. cc image credit: flickr/jtjdt. This article, titled What Is Nuclear Meltdown?, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Free eBook with SQL Server performance tips and nuggets

    - by Claire Brooking
    I’ve often found that the kind of tips that turn out to be helpful are the ones that encourage me to make a small step outside of a routine. No dramatic changes – just a quick suggestion that changes an approach. As a languages student at university, one of the best I spotted came from outside the lecture halls and ended up saving me time (and lots of huffing and puffing) – the use of a rainbow of sticky notes for well-used pages and letter categories in my dictionary. Simple, but armed with a heavy dictionary that could double up as a step stool, those markers were surprisingly handy. When the Simple-Talk editors told me about a book they were planning that would give a series of tips for developers on how to improve database performance, we all agreed it needed to contain a good range of pointers for big-hitter performance topics. But we wanted to include some of the smaller, time-saving nuggets too. We hope we’ve struck a good balance. The 45 Database Performance Tips eBook covers different tips to help you avoid code that saps performance, whether that’s the ‘gotchas’ to be aware of when using Object to Relational Mapping (ORM) tools, or what to be aware of for indexes, database design, and T-SQL. The eBook is also available to download with SQL Prompt from Red Gate. We often hear that it’s the productivity-boosting side of SQL Prompt that makes it useful for everyday coding. So when a member of the SQL Prompt team mentioned an idea to make the most of tab history, a new feature in SQL Prompt 6 for SQL Server Management Studio, we were intrigued. Now SQL Prompt can save tabs we have been working on in SSMS as a way to maintain an active template for queries we often recycle. When we need to reuse the same code again, we search for our saved tab (and we can also customize its name to speed up the search) to get started. We hope you find the eBook helpful, and as always on Simple-Talk, we’d love to hear from you too. If you have a performance tip for SQL Server you’d like to share, email Melanie on the Simple-Talk team ([email protected]) and we’ll publish a collection in a follow-up post.

    Read the article

  • ETPM Environment Health Monitoring Tools

    - by Paula Speranza-Hadley
    This post is to provide some useful information about the tools typically used by Oracle ETPM implementations for performance tuning and analysis. This includes tools to monitor and gather performance information and statistics on the database, application server, and client (browser).
    Enterprise Monitoring Tools
    - Oracle Enterprise Manager – OEM Grid Control comes with a comprehensive set of performance and health metrics that allow monitoring of key components in your environment, such as applications, application servers and databases, as well as the back-end components on which they rely, such as hosts, operating systems and storage.
    Tools for the Database
    Oracle Diagnostics Pack:
    - Automatic Workload Repository (AWR) – this tool gets statistics from memory about the Time Model or DB Time, Wait Events, Active Session History and high-load SQL queries (a minimal example of taking AWR snapshots follows after this list).
    - Automatic Database Diagnostic Monitor (ADDM) – this self-diagnostic software is built into the database. It examines and analyzes data captured in AWR to determine possible performance issues. It locates the root cause of the issue, provides recommendations for correcting the issues and quantifies the expected benefit.
    Oracle Database Tuning Pack:
    - SQL Tuning Advisor – this enables you to submit one or more SQL statements as input and receive output in the form of specific advice or recommendations on how to tune the statements. The recommendations relate to collection of statistics on objects, creation of new indexes and restructuring of SQL statements.
    - SQL Access Advisor – this enables you to optimize the data access paths of SQL queries by recommending a proper set of materialized views, indexes and partitions for a given SQL workload.
    Tools for the Application Server
    - WebLogic Console – a web-based user interface used to configure and control a set of WebLogic servers or clusters (i.e. a "domain"). In any logical group of WebLogic servers there must exist one admin server, which hosts the WebLogic Admin Console application and manages the associated configuration files. WebLogic administrators will use the Administration Console for a number of tasks, including: starting and stopping WebLogic servers or entire clusters; configuring server parameters, security, database connections and deployed applications; viewing server status, health and metrics.
    - YourKit for profiling – helps analyze synchronization issues, including which threads were calling wait(), and for how long, and which threads were blocked on an attempt to acquire a monitor held by another thread (synchronized methods/blocks), and for how long.
    Tools for the Client
    - Fiddler – allows you to inspect traffic logs, debug and set breakpoints.
    - Firebug – allows you to inspect and edit HTML, monitor network activity and debug JavaScript.
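    As a small illustration of the AWR workflow mentioned above, the following SQL*Plus sketch takes snapshots around a workload and then generates a report. It assumes you are licensed for the Oracle Diagnostics Pack; the awrrpt.sql script prompts for the begin/end snapshot IDs and the report format.

        -- Take a snapshot before and after the workload you want to analyse
        EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
        -- ... run the workload of interest here ...
        EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

        -- Generate the AWR report between the two snapshots
        @?/rdbms/admin/awrrpt.sql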

    Read the article

  • links for 2011-02-16

    - by Bob Rhubart
    On the Software Architect Trail Software architect is the #1 job, according to a 2010 CNN-Money poll. In this article in Oracle Magazine, several members of the OTN architect community talk about the career paths that led them to this lucrative role.  (tags: oracle oraclemagazine softwarearchitect) Oracle Technology Network Architect Day: Denver Registration opens soon for this event to be held in Denver on March 23, 2011.  (tags: oracle otn entarch) How the Internet Gets Inside Us : The New Yorker "It isn’t just that we’ve lived one technological revolution among many; it’s that our technological revolution is the big social revolution that we live with." - Adam Gopnik (tags: internet progress technology innovation) The Insider Threat: Understand and Mitigate Your Risks: CSO Webcast February 23, 2011 at 1:00 PM EST/ 10:00 AM PST .  Speakers: Randy Trzeciak, lead for the CERT Insider Threat research team, and  Roxana Bradescu, Director of Database Security at Oracle. (tags: oracle CERT security) The Tom Kyte Blog: An Interesting Read... Tom looks at "an internet security firm brought down by not following the most *basic* of security principals." (tags: security oracle) Jason Williamson: Oracle as a Service in the Cloud "It is not trivial to migrate large amounts of pre-relational or 'devolved' relational data. To do this, we again must revert back to a tight roadmap to migration and leverage the growing tools and services that we have." - Jason Williamson (tags: oracle cloud soa) Edwin Biemond: Java / Oracle SOA blog: Building an asynchronous web service with JAX-WS "Building an asynchronous web service can be complex especially when you are used to synchronous Web services where you can wait for the response in your favorite tool." - Oracle ACE Edwin Biemond (tags: oracle oracleace java soa) Shared Database Servers (The SaaS Report) "Outside the virtualization world, there are capabilities of Oracle Database which can be used to prevent resource contention and guarantee SLA." - Shivanshu Upadhyay (tags: oracle database cloud SaaS) White Paper: Experiencing the New Social Enterprise "Increasingly organizations recognize the mandate to create a modern user experience that transforms existing business processes and increases business efficiency and agility." (tags: e20 enterprise2.0 socialcomputing oracle) Clusterware 11gR2 - Setting up an Active/Passive failover configuration Gilles Haro illustrates the steps necessary to achieve "a fully operational 11gR2 database protected by automatic failover capabilities." (tags: oracle clusterware) Oracle ERP: How to overcome local hurdles in a global implementation "The corporate world becomes a global village as many companies expand their business and offices around different countries and even continents. And this number keeps increasing. This globalization raises interesting questions..." - Jan Verhallen (tags: oracle capgemini entarch erp) Webcast: Successful Strategies for Optimizing Your Data Warehouse. March 3. 10 a.m. PT/1 p.m. ET Thursday, March 3, 2011. 10 a.m. PT/1 p.m. ET. Speakers: Mala Narasimharajan (Senior Product Marketing Manager, Oracle Data Integration) and Denis Gray (Principal Product Manager, Oracle Data Integration) (tags: oracle dataintegration datawarehousing)

    Read the article

  • grub problem after grub-repair

    - by alireza
    I had two OS,win 7 and Ubuntu, I was working with win and went out of the room and when came back I saw Grub Rescue!!! I booted live and did grub_repaire and it still says multiple active partitions.operating system not found I feel that it can not find the ubuntu hard drive at all! ! ubuntu@ubuntu:/dev$ sudo fdisk -l Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xde771741 Device Boot Start End Blocks Id System /dev/sda1 * 61 61 0 0 Empty /dev/sda2 2048 30146559 15072256 27 Hidden NTFS WinRE /dev/sda3 * 30146560 30351359 102400 7 HPFS/NTFS/exFAT /dev/sda4 30351360 334483299 152065970 7 HPFS/NTFS/exFAT ubuntu@ubuntu:/dev$ ls autofs dvdrw loop4 ppp ram5 shm tty15 tty29 tty42 tty56 ttyS10 ttyS24 uinput vcsa1 block ecryptfs loop5 psaux ram6 snapshot tty16 tty3 tty43 tty57 ttyS11 ttyS25 urandom vcsa2 bsg fb0 loop6 ptmx ram7 snd tty17 tty30 tty44 tty58 ttyS12 ttyS26 usbmon0 vcsa3 btrfs-control fd loop7 pts ram8 sr0 tty18 tty31 tty45 tty59 ttyS13 ttyS27 usbmon1 vcsa4 bus full loop-control ram0 ram9 stderr tty19 tty32 tty46 tty6 ttyS14 ttyS28 usbmon2 vcsa5 cdrom fuse mapper ram1 random stdin tty2 tty33 tty47 tty60 ttyS15 ttyS29 v4l vcsa6 cdrw fw0 mcelog ram10 rfkill stdout tty20 tty34 tty48 tty61 ttyS16 ttyS3 vcs vcsa7 char hpet mei ram11 rtc tty tty21 tty35 tty49 tty62 ttyS17 ttyS30 vcs1 vga_arbiter console input mem ram12 rtc0 tty0 tty22 tty36 tty5 tty63 ttyS18 ttyS31 vcs2 video0 core kmsg net ram13 sda tty1 tty23 tty37 tty50 tty7 ttyS19 ttyS4 vcs3 zero cpu log network_latency ram14 sda2 tty10 tty24 tty38 tty51 tty8 ttyS2 ttyS5 vcs4 cpu_dma_latency loop0 network_throughput ram15 sda3 tty11 tty25 tty39 tty52 tty9 ttyS20 ttyS6 vcs5 disk loop1 null ram2 sda4 tty12 tty26 tty4 tty53 ttyprintk ttyS21 ttyS7 vcs6 dri loop2 oldmem ram3 sg0 tty13 tty27 tty40 tty54 ttyS0 ttyS22 ttyS8 vcs7 dvd loop3 port ram4 sg1 tty14 tty28 tty41 tty55 ttyS1 ttyS23 ttyS9 vcsa ubuntu@ubuntu:/dev$ someone said { Remove the boot flag from sda1, by typing in a terminal: sudo fdisk /dev/sda1 Then press the 'a' key, and 'Enter'. Then press the '1' key, and 'Enter'. Then press the 'w' key, and 'Enter'. } but after grub-repair I got this log: http://paste.ubuntu.com/1371297/ then I try sudo fdisk /dev/sda1 and But It says unable to open /dev/sda1! which is right as sda1 is not in dev folder (look at the picture!) Can anybody help me? I need to work on my windows for the rest of the semester and don't care of my ubuntu. thank you

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today’s web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization – minutes of downtime will result in significant loss of revenue and reputation. There is not a “one size fits all” approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And then technology is only one element in delivering HA – “People and Processes” are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on, but included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA. At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris: - DRBD for MySQL - Oracle Solaris Clustering for MySQL DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver high availability database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker, provide an integrated stack of mature and proven open source technologies. DRBD Stack: Providing Synchronous Replication for the MySQL Database with InnoDB Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL offering a selection of configuration options in the various Oracle Solaris Cluster topologies. Putting it All Together When you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This is subject recently presented at Oracle Open World, and the slides are available here. Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering which will help you scope out your scaling and HA requirements.

    Read the article

  • Oracle Developer Day, Poland, 2012

    - by Geertjan
    Oracle Developer Day took place in Poland today. Oracle's Gregor Rayman did the keynote, where NetBeans was positioned, yet again, as Oracle's IDE for the Java Platform, via the JavaFX Roadmap: Well, it's not so clear from my pic above, but NetBeans is closely tied to the JavaFX Roadmap, as well as the JDK Roadmap too. Then the tracks started, one of which was the Java Track (the other two tracks were on ADF/WebLogic and SOA/BPM/BAM), where among other things I demonstrated the Java EE 6 Platform via tools in NetBeans IDE at some length. The room could hardly have been fuller, chairs had to be brought in and people were standing along the walls. The above pic shows the session being set up, with the room full of developers ready to hear about Java EE 6. I also did a session on pluggable Java desktop development (i.e., NetBeans Platform) and on "What's New in NetBeans IDE 7.1?", while Martin Grebac had a session on Java EE Web Services. Some of the many questions asked during the day that I thought were interesting: Is there localization support for the @Pattern annotation? I.e., what if I want to display the error message in Polish, what do I do? Is there filtering/sorting support for the DataTable component in JSF? Why is there no visual editor for ejb-jar.xml, in the same way that there is for web.xml? "Would be handy if there were to be a JSR for IDE Keyboard Shortcuts." (Two different people asked this question, separately, without knowing about each other. The second didn't know about the Eclipse and IntelliJ keyboard shortcut support in NetBeans IDE and was happy when I told them about it.) Wouldn't it be cool if, on start up, or during installation, there'd be a question: "Are you migrating from Eclipse/IntelliJ?" Then, if "yes", reset the keyboard shortcuts to match the IDE they're coming from.Is there a way in NetBeans to find subclasses of a class? "Would be cool if HTML or JSF files could be visualized in the same way as JavaFX and Swing classes." I.e., Visual Debugger for web developers. I had a great day and am looking at the Oracle Developer Day that will be held in Cluj, Romania, on Friday.

    Read the article

  • Bad Data is Really the Monster

    - by Dain C. Hansen
    Bad Data is Really the Monster is an article written by Bikram Sinha, from whom I borrowed the title and the inspiration for this blog post. Sinha writes: “Bad or missing data makes application systems fail when they process order-level data. One of the key items in the supply-chain industry is the product (aka SKU). Therefore, it becomes the most important data element to tie up multiple merchandising processes including purchase order allocation, stock movement, shipping notifications, and inventory details… Bad data can cause huge operational failures and cost millions of dollars in terms of time, resources, and money to clean up and validate data across multiple participating systems.” Yes, bad data really is the monster, so what do we do about it? Close our eyes and hope it stays in the closet? We’ve tackled this problem for some years now at Oracle, and our latest introduction of Oracle Enterprise Data Quality, along with our integrated Oracle Master Data Management products, provides a complete, best-in-class answer to the bad data monster. What’s unique about it? Oracle Enterprise Data Quality combines powerful data profiling, cleansing, matching, and monitoring capabilities while offering unparalleled ease of use. What makes it unique is that it has dedicated capabilities to address the distinct challenges of both customer and product data quality – [different monsters have different needs of course!]. And the ability to profile data is just as important, to identify and measure poor-quality data and identify new rules and requirements (a simple illustration of this kind of profiling check follows below). Included are semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. Finally, all of the data quality components are integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Want to learn more? On Tuesday, Nov 15th, I invite you to listen to our webcast, Reduce ERP Consolidation Risks with Oracle Master Data Management. I’ll be joined by our partner iGate Patni, and we’ll be talking about one specific way to deal with the bad data monster, specifically around ERP consolidation. Look forward to seeing you there!
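    To give a flavour of the kind of check a data profiling step performs, here is a rough SQL sketch (SQL Server-flavoured LIKE pattern) over a hypothetical Product table, assuming a 'SKU-nnnnnn' format for the SKU column. This is an illustration only – it is not how Oracle Enterprise Data Quality itself works, and the table, column and format are assumptions.

        -- Rough profile of a product table: missing, malformed and duplicated SKUs
        SELECT
            COUNT(*) AS total_rows,
            SUM(CASE WHEN sku IS NULL OR sku = '' THEN 1 ELSE 0 END) AS missing_sku,
            SUM(CASE WHEN sku IS NOT NULL
                      AND sku NOT LIKE 'SKU-[0-9][0-9][0-9][0-9][0-9][0-9]'
                     THEN 1 ELSE 0 END) AS malformed_sku,
            COUNT(sku) - COUNT(DISTINCT sku) AS duplicate_sku_rows
        FROM dbo.Product;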

    Read the article

< Previous Page | 370 371 372 373 374 375 376 377 378 379 380 381  | Next Page >