Search Results

Search found 14772 results on 591 pages for 'full trust'.


  • Frequent disconnects on Ubuntu Desktop 12.10 x64 with Intel 82579V (e1000e)

    - by user112055
    I'm having frequent disconnects with my new install of Ubuntu 12.10. I tried updating the kernel driver to the latest intel release to no avail. My expertise is spent. It happens anywhere between 1 min and 10 min. Any ideas? syslog: Dec 1 13:51:39 andromeda kernel: [ 972.188809] audit_printk_skb: 6 callbacks suppressed Dec 1 13:51:39 andromeda kernel: [ 972.188813] type=1701 audit(1354398699.418:199): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000 Dec 1 13:51:39 andromeda kernel: [ 972.188817] type=1701 audit(1354398699.418:200): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000 Dec 1 13:51:39 andromeda kernel: [ 972.188820] type=1701 audit(1354398699.418:201): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000 Dec 1 13:51:39 andromeda kernel: [ 972.188823] type=1701 audit(1354398699.418:202): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000 Dec 1 13:51:39 andromeda kernel: [ 972.188825] type=1701 audit(1354398699.418:203): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000 Dec 1 13:51:39 andromeda kernel: [ 972.331419] type=1701 audit(1354398699.558:204): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=2 compat=0 ip=0x7f26777d96b0 code=0x50000 Dec 1 13:53:12 andromeda NetworkManager[1115]: <info> (eth0): carrier now OFF (device state 100, deferring action for 4 seconds) Dec 1 13:53:12 andromeda kernel: [ 1064.894387] e1000e: e1000e: eth0 NIC Link is Down Dec 1 13:53:16 andromeda NetworkManager[1115]: <info> (eth0): device state change: activated -> unavailable (reason 'carrier-changed') [100 20 40] Dec 1 13:53:16 andromeda NetworkManager[1115]: <info> (eth0): deactivating device (reason 'carrier-changed') [40] Dec 1 13:53:16 andromeda NetworkManager[1115]: <info> (eth0): canceled DHCP transaction, DHCP client pid 5946 Dec 1 13:53:16 andromeda avahi-daemon[890]: Withdrawing address record for fe80::ea40:f2ff:fee2:4d86 on eth0. Dec 1 13:53:16 andromeda avahi-daemon[890]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::ea40:f2ff:fee2:4d86. Dec 1 13:53:16 andromeda avahi-daemon[890]: Interface eth0.IPv6 no longer relevant for mDNS. Dec 1 13:53:16 andromeda kernel: [ 1069.025288] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Dec 1 13:53:16 andromeda avahi-daemon[890]: Withdrawing address record for 192.168.11.17 on eth0. Dec 1 13:53:16 andromeda avahi-daemon[890]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.11.17. Dec 1 13:53:16 andromeda avahi-daemon[890]: Interface eth0.IPv4 no longer relevant for mDNS. 
Dec 1 13:53:16 andromeda NetworkManager[1115]: <warn> DNS: plugin dnsmasq update failed Dec 1 13:53:16 andromeda NetworkManager[1115]: <info> ((null)): removing resolv.conf from /sbin/resolvconf Dec 1 13:53:16 andromeda dnsmasq[1907]: setting upstream servers from DBus Dec 1 13:53:16 andromeda dbus[800]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Dec 1 13:53:16 andromeda dbus[800]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): carrier now ON (device state 20) Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): device state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40] Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Auto-activating connection '82579V'. Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) starting connection '82579V' Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0] Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled... Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started... Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled... Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete. Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting... Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0] Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful. Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled. Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete. Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started... Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0] Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds) Dec 1 13:53:32 andromeda kernel: [ 1084.938042] e1000e: e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx Dec 1 13:53:32 andromeda kernel: [ 1084.938049] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO Dec 1 13:53:32 andromeda kernel: [ 1084.938815] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> dhclient started with pid 6080 Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete. Dec 1 13:53:32 andromeda dhclient: Internet Systems Consortium DHCP Client 4.2.4 Dec 1 13:53:32 andromeda dhclient: Copyright 2004-2012 Internet Systems Consortium. Dec 1 13:53:32 andromeda dhclient: All rights reserved. 
Dec 1 13:53:32 andromeda dhclient: For info, please visit https://www.isc.org/software/dhcp/ Dec 1 13:53:32 andromeda dhclient: Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): DHCPv4 state changed nbi -> preinit Dec 1 13:53:32 andromeda dhclient: Listening on LPF/eth0/e8:40:f2:e2:4d:86 Dec 1 13:53:32 andromeda dhclient: Sending on LPF/eth0/e8:40:f2:e2:4d:86 Dec 1 13:53:32 andromeda dhclient: Sending on Socket/fallback Dec 1 13:53:32 andromeda dhclient: DHCPREQUEST of 192.168.11.17 on eth0 to 255.255.255.255 port 67 Dec 1 13:53:32 andromeda dhclient: DHCPACK of 192.168.11.17 from 192.168.11.1 Dec 1 13:53:32 andromeda dhclient: bound to 192.168.11.17 -- renewal in 33576 seconds. Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): DHCPv4 state changed preinit -> reboot Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> address 192.168.11.17 Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> prefix 24 (255.255.255.0) Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> gateway 192.168.11.1 Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> hostname 'andromeda' Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> nameserver '192.168.11.1' Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> domain name 'hsd1.ca.comcast.net' Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled... Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started... Dec 1 13:53:32 andromeda avahi-daemon[890]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.11.17. Dec 1 13:53:32 andromeda avahi-daemon[890]: New relevant interface eth0.IPv4 for mDNS. Dec 1 13:53:32 andromeda avahi-daemon[890]: Registering new address record for 192.168.11.17 on eth0.IPv4. Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> (eth0): device state change: ip-config -> activated (reason 'none') [70 100 0] Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> ((null)): writing resolv.conf to /sbin/resolvconf Dec 1 13:53:33 andromeda dnsmasq[1907]: setting upstream servers from DBus Dec 1 13:53:33 andromeda dnsmasq[1907]: using nameserver 192.168.11.1#53 Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> Policy set '82579V' (eth0) as default for IPv4 routing and DNS. Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> Activation (eth0) successful, device activated. Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. Dec 1 13:53:33 andromeda dbus[800]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Dec 1 13:53:33 andromeda dbus[800]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 1 13:53:33 andromeda avahi-daemon[890]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::ea40:f2ff:fee2:4d86. Dec 1 13:53:33 andromeda avahi-daemon[890]: New relevant interface eth0.IPv6 for mDNS. Dec 1 13:53:33 andromeda avahi-daemon[890]: Registering new address record for fe80::ea40:f2ff:fee2:4d86 on eth0.*. 
Dec 1 13:53:41 andromeda ntpdate[6154]: adjust time server 91.189.94.4 offset 0.000928 sec Dec 1 13:53:50 andromeda NetworkManager[1115]: <info> (eth0): carrier now OFF (device state 100, deferring action for 4 seconds) Dec 1 13:53:50 andromeda kernel: [ 1102.980003] e1000e: e1000e: eth0 NIC Link is Down Dec 1 13:53:54 andromeda NetworkManager[1115]: <info> (eth0): device state change: activated -> unavailable (reason 'carrier-changed') [100 20 40] Dec 1 13:53:54 andromeda NetworkManager[1115]: <info> (eth0): deactivating device (reason 'carrier-changed') [40] Dec 1 13:53:54 andromeda NetworkManager[1115]: <info> (eth0): canceled DHCP transaction, DHCP client pid 6080 Dec 1 13:53:54 andromeda avahi-daemon[890]: Withdrawing address record for fe80::ea40:f2ff:fee2:4d86 on eth0. Dec 1 13:53:54 andromeda avahi-daemon[890]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::ea40:f2ff:fee2:4d86. Dec 1 13:53:54 andromeda avahi-daemon[890]: Interface eth0.IPv6 no longer relevant for mDNS. Dec 1 13:53:54 andromeda avahi-daemon[890]: Withdrawing address record for 192.168.11.17 on eth0. Dec 1 13:53:54 andromeda avahi-daemon[890]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.11.17. Dec 1 13:53:54 andromeda kernel: [ 1107.025959] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Dec 1 13:53:54 andromeda NetworkManager[1115]: <warn> DNS: plugin dnsmasq update failed Dec 1 13:53:54 andromeda NetworkManager[1115]: <info> ((null)): removing resolv.conf from /sbin/resolvconf Dec 1 13:53:54 andromeda avahi-daemon[890]: Interface eth0.IPv4 no longer relevant for mDNS. Dec 1 13:53:54 andromeda dnsmasq[1907]: setting upstream servers from DBus Dec 1 13:53:54 andromeda dbus[800]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Dec 1 13:53:54 andromeda dbus[800]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): carrier now ON (device state 20) Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): device state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40] Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Auto-activating connection '82579V'. Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) starting connection '82579V' Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0] Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled... Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started... Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled... Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete. Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting... Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0] Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful. 
Dec 1 13:54:10 andromeda kernel: [ 1123.167668] e1000e: e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx Dec 1 13:54:10 andromeda kernel: [ 1123.167675] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO Dec 1 13:54:10 andromeda kernel: [ 1123.168430] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled. Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete. Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started... Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0] Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds) Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> dhclient started with pid 6212 Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete. Dec 1 13:54:10 andromeda dhclient: Internet Systems Consortium DHCP Client 4.2.4 Dec 1 13:54:10 andromeda dhclient: Copyright 2004-2012 Internet Systems Consortium. Dec 1 13:54:10 andromeda dhclient: All rights reserved. Dec 1 13:54:10 andromeda dhclient: For info, please visit https://www.isc.org/software/dhcp/ Dec 1 13:54:10 andromeda dhclient: Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): DHCPv4 state changed nbi -> preinit Dec 1 13:54:10 andromeda dhclient: Listening on LPF/eth0/e8:40:f2:e2:4d:86 Dec 1 13:54:10 andromeda dhclient: Sending on LPF/eth0/e8:40:f2:e2:4d:86 Dec 1 13:54:10 andromeda dhclient: Sending on Socket/fallback Dec 1 13:54:10 andromeda dhclient: DHCPREQUEST of 192.168.11.17 on eth0 to 255.255.255.255 port 67 Dec 1 13:54:10 andromeda dhclient: DHCPACK of 192.168.11.17 from 192.168.11.1 Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): DHCPv4 state changed preinit -> reboot Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> address 192.168.11.17 Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> prefix 24 (255.255.255.0) Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> gateway 192.168.11.1 Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> hostname 'andromeda' Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> nameserver '192.168.11.1' Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> domain name 'hsd1.ca.comcast.net' Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled... Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started... Dec 1 13:54:10 andromeda avahi-daemon[890]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.11.17. Dec 1 13:54:10 andromeda dhclient: bound to 192.168.11.17 -- renewal in 35416 seconds. Dec 1 13:54:10 andromeda avahi-daemon[890]: New relevant interface eth0.IPv4 for mDNS. Dec 1 13:54:10 andromeda avahi-daemon[890]: Registering new address record for 192.168.11.17 on eth0.IPv4. 
Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> (eth0): device state change: ip-config -> activated (reason 'none') [70 100 0] Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> ((null)): writing resolv.conf to /sbin/resolvconf Dec 1 13:54:11 andromeda dnsmasq[1907]: setting upstream servers from DBus Dec 1 13:54:11 andromeda dnsmasq[1907]: using nameserver 192.168.11.1#53 Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> Policy set '82579V' (eth0) as default for IPv4 routing and DNS. Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> Activation (eth0) successful, device activated. Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. Dec 1 13:54:11 andromeda dbus[800]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Dec 1 13:54:11 andromeda dbus[800]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 1 13:54:12 andromeda avahi-daemon[890]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::ea40:f2ff:fee2:4d86. Dec 1 13:54:12 andromeda avahi-daemon[890]: New relevant interface eth0.IPv6 for mDNS. Dec 1 13:54:12 andromeda avahi-daemon[890]: Registering new address record for fe80::ea40:f2ff:fee2:4d86 on eth0.*. Dec 1 13:54:19 andromeda ntpdate[6286]: adjust time server 91.189.94.4 offset 0.001142 sec $ lspci -v 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04) Subsystem: Intel Corporation Device 2031 Flags: bus master, fast devsel, latency 0, IRQ 45 Memory at f7f00000 (32-bit, non-prefetchable) [size=128K] Memory at f7f39000 (32-bit, non-prefetchable) [size=4K] I/O ports at f040 [size=32] Capabilities: [c8] Power Management version 2 Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [e0] PCI Advanced Features Kernel driver in use: e1000e Kernel modules: e1000e $ modinfo e1000e filename: /lib/modules/3.5.0-19-generic/kernel/drivers/net/e1000e/e1000e.ko version: 2.1.4-NAPI license: GPL description: Intel(R) PRO/1000 Network Driver author: Intel Corporation, <[email protected]> srcversion: 0809529BE0BBC44883956AF alias: pci:v00008086d0000153Bsv*sd*bc*sc*i* alias: pci:v00008086d0000153Asv*sd*bc*sc*i* alias: pci:v00008086d00001503sv*sd*bc*sc*i* alias: pci:v00008086d00001502sv*sd*bc*sc*i* alias: pci:v00008086d000010F0sv*sd*bc*sc*i* alias: pci:v00008086d000010EFsv*sd*bc*sc*i* alias: pci:v00008086d000010EBsv*sd*bc*sc*i* alias: pci:v00008086d000010EAsv*sd*bc*sc*i* alias: pci:v00008086d00001525sv*sd*bc*sc*i* alias: pci:v00008086d000010DFsv*sd*bc*sc*i* alias: pci:v00008086d000010DEsv*sd*bc*sc*i* alias: pci:v00008086d000010CEsv*sd*bc*sc*i* alias: pci:v00008086d000010CDsv*sd*bc*sc*i* alias: pci:v00008086d000010CCsv*sd*bc*sc*i* alias: pci:v00008086d000010CBsv*sd*bc*sc*i* alias: pci:v00008086d000010F5sv*sd*bc*sc*i* alias: pci:v00008086d000010BFsv*sd*bc*sc*i* alias: pci:v00008086d000010E5sv*sd*bc*sc*i* alias: pci:v00008086d0000294Csv*sd*bc*sc*i* alias: pci:v00008086d000010BDsv*sd*bc*sc*i* alias: pci:v00008086d000010C3sv*sd*bc*sc*i* alias: pci:v00008086d000010C2sv*sd*bc*sc*i* alias: pci:v00008086d000010C0sv*sd*bc*sc*i* alias: pci:v00008086d00001501sv*sd*bc*sc*i* alias: pci:v00008086d00001049sv*sd*bc*sc*i* alias: pci:v00008086d0000104Dsv*sd*bc*sc*i* alias: pci:v00008086d0000104Bsv*sd*bc*sc*i* alias: pci:v00008086d0000104Asv*sd*bc*sc*i* alias: pci:v00008086d000010C4sv*sd*bc*sc*i* alias: pci:v00008086d000010C5sv*sd*bc*sc*i* alias: pci:v00008086d0000104Csv*sd*bc*sc*i* alias: 
pci:v00008086d000010BBsv*sd*bc*sc*i* alias: pci:v00008086d00001098sv*sd*bc*sc*i* alias: pci:v00008086d000010BAsv*sd*bc*sc*i* alias: pci:v00008086d00001096sv*sd*bc*sc*i* alias: pci:v00008086d0000150Csv*sd*bc*sc*i* alias: pci:v00008086d000010F6sv*sd*bc*sc*i* alias: pci:v00008086d000010D3sv*sd*bc*sc*i* alias: pci:v00008086d0000109Asv*sd*bc*sc*i* alias: pci:v00008086d0000108Csv*sd*bc*sc*i* alias: pci:v00008086d0000108Bsv*sd*bc*sc*i* alias: pci:v00008086d0000107Fsv*sd*bc*sc*i* alias: pci:v00008086d0000107Esv*sd*bc*sc*i* alias: pci:v00008086d0000107Dsv*sd*bc*sc*i* alias: pci:v00008086d000010B9sv*sd*bc*sc*i* alias: pci:v00008086d000010D5sv*sd*bc*sc*i* alias: pci:v00008086d000010DAsv*sd*bc*sc*i* alias: pci:v00008086d000010D9sv*sd*bc*sc*i* alias: pci:v00008086d00001060sv*sd*bc*sc*i* alias: pci:v00008086d000010A5sv*sd*bc*sc*i* alias: pci:v00008086d000010BCsv*sd*bc*sc*i* alias: pci:v00008086d000010A4sv*sd*bc*sc*i* alias: pci:v00008086d0000105Fsv*sd*bc*sc*i* alias: pci:v00008086d0000105Esv*sd*bc*sc*i* depends: vermagic: 3.5.0-19-generic SMP mod_unload modversions parm: copybreak:Maximum size of packet that is copied to a new buffer on receive (uint) parm: TxIntDelay:Transmit Interrupt Delay (array of int) parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int) parm: RxIntDelay:Receive Interrupt Delay (array of int) parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int) parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int) parm: IntMode:Interrupt Mode (array of int) parm: SmartPowerDownEnable:Enable PHY smart power down (array of int) parm: KumeranLockLoss:Enable Kumeran lock loss workaround (array of int) parm: CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int) parm: EEE:Enable/disable on parts that support the feature (array of int) parm: Node:[ROUTING] Node to allocate memory on, default -1 (array of int) parm: debug:Debug level (0=none,...,16=all) (int)


  • Using a WCF Message Inspector to extend AppFabric Monitoring

    - by Shawn Cicoria
    I read through Ron Jacobs' post on monitoring WCF Data Services with AppFabric: http://blogs.msdn.com/b/endpoint/archive/2010/06/09/tracking-wcf-data-services-with-windows-server-appfabric.aspx
    Two things are immediately striking: it is very easy to get monitoring data into a viewer (the AppFabric Dashboard) with very little work, and there is no reason the same thing couldn't be done with a WCF message inspector on the dispatch side.
    So I took the base class WCFUserEventProvider that is located in the WCF/WF samples [1] under \WF_WCF_Samples\WCF\Basic\Management\AnalyticTraceExtensibility\CS\WCFAnalyticTracingExtensibility\ and created a few classes that inject the inspector as an IEndpointBehavior.
    There are just three classes needed to drive injection of the inspector at runtime via config:
    - an IDispatchMessageInspector implementation
    - a BehaviorExtensionElement implementation
    - an IEndpointBehavior implementation
    The full source code is below, with a link to the solution file here: [Solution File]

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.ServiceModel;
      using System.ServiceModel.Channels;
      using System.ServiceModel.Configuration;
      using System.ServiceModel.Description;
      using System.ServiceModel.Dispatcher;
      using Microsoft.Samples.WCFAnalyticTracingExtensibility;

      namespace Fabrikam.Services
      {
          public class AppFabricE2EInspector : IDispatchMessageInspector
          {
              static WCFUserEventProvider evntProvider = null;

              static AppFabricE2EInspector()
              {
                  evntProvider = new WCFUserEventProvider();
              }

              public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
              {
                  OperationContext ctx = OperationContext.Current;
                  var opName = ctx.IncomingMessageHeaders.Action;
                  evntProvider.WriteInformationEvent("start",
                      string.Format("operation: {0} at address {1}", opName, ctx.EndpointDispatcher.EndpointAddress));
                  return null;
              }

              public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
              {
                  OperationContext ctx = OperationContext.Current;
                  var opName = ctx.IncomingMessageHeaders.Action;
                  evntProvider.WriteInformationEvent("end",
                      string.Format("operation: {0} at address {1}", opName, ctx.EndpointDispatcher.EndpointAddress));
              }
          }

          public class AppFabricE2EBehaviorElement : BehaviorExtensionElement
          {
              #region BehaviorExtensionElement

              /// <summary>
              /// Gets the type of behavior.
              /// </summary>
              /// <returns>The type that implements the endpoint behavior <see cref="T:System.Type"/>.</returns>
              public override Type BehaviorType
              {
                  get { return typeof(AppFabricE2EEndpointBehavior); }
              }

              /// <summary>
              /// Creates a behavior extension based on the current configuration settings.
              /// </summary>
              /// <returns>The behavior extension.</returns>
              protected override object CreateBehavior()
              {
                  return new AppFabricE2EEndpointBehavior();
              }

              #endregion BehaviorExtensionElement
          }

          public class AppFabricE2EEndpointBehavior : IEndpointBehavior //, IServiceBehavior
          {
              #region IEndpointBehavior

              public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }

              public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
              {
                  throw new NotImplementedException();
              }

              public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
              {
                  endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new AppFabricE2EInspector());
              }

              public void Validate(ServiceEndpoint endpoint) { ; }

              #endregion IEndpointBehavior
          }
      }

    [1] http://www.microsoft.com/downloads/details.aspx?FamilyID=35ec8682-d5fd-4bc3-a51a-d8ad115a8792&displaylang=en
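    The sample above injects the inspector at runtime via configuration, by registering AppFabricE2EBehaviorElement as a behavior extension. As an aside, the same endpoint behavior can also be attached in code when you control the host. The sketch below is my own illustration, not part of the sample above or the linked solution file; the service contract, addresses, and names are placeholders, and the WCFUserEventProvider plumbing from the WCF/WF samples still has to be present for the events to reach the AppFabric dashboard.

      using System;
      using System.ServiceModel;
      using System.ServiceModel.Description;
      using Fabrikam.Services;

      // Placeholder contract and service, only here to make the sketch self-contained.
      [ServiceContract]
      public interface IPingService
      {
          [OperationContract]
          string Ping(string text);
      }

      public class PingService : IPingService
      {
          public string Ping(string text)
          {
              return "pong: " + text;
          }
      }

      public class Program
      {
          public static void Main()
          {
              using (var host = new ServiceHost(typeof(PingService),
                  new Uri("http://localhost:8080/ping")))
              {
                  // Add an endpoint and attach the behavior directly, instead of
                  // registering AppFabricE2EBehaviorElement in app.config/web.config.
                  ServiceEndpoint endpoint = host.AddServiceEndpoint(
                      typeof(IPingService), new BasicHttpBinding(), string.Empty);
                  endpoint.Behaviors.Add(new AppFabricE2EEndpointBehavior());

                  host.Open();
                  Console.WriteLine("Service running; press Enter to stop.");
                  Console.ReadLine();
              }
          }
      }

    For an IIS-hosted service you would normally stay with the config route, since you do not own the host instantiation there; the programmatic form is mainly useful for self-hosted services and tests.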


  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?”
    There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
    - Bugs are found faster
    - Developers are forced to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive patterns spread faster
    - Programmers as specialists "evolve" faster
    - It's fun
    “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” – Wikipedia
    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the approaches independently.
    Code Review Before Check In or Code Review After Check In?
    Approach 1 – Code Review before Check in
    The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.
    [Image 1 – code review before check in]
    Pros
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.
    Cons
    - Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards. Inconsistent!
    - Cumbersome to track the actual code review process. Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.
    Approach 2 – Code Review after Check in
    The developer checks in, and random code reviews are performed on the checked-in code.
    [Image 2 – Code review after check in]
    Pros
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure ZERO FxCop, StyleCop, and static code analysis issues before check-in, and the code is cleaner and smell free even before the code review.
    - No offline development; developers can continue to develop against source control.
    Cons
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.
    Approach 3 – Hybrid Approach
    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.
    [Image 3 – Hybrid Approach]
    1. Code review high impact check-ins.
    It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build yet.
    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop, and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for individuals who keep making it onto this list of shame.
    3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier, because cherry picking the merges is a possibility and the size of the code being reviewed is still limited.
    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put until the review is complete.
    What do the experts say?
    So I asked a few experts what they thought of a “Code review quality gate before checking in code?”
    Terje Sandstrom | Microsoft ALM MVP
    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin’s. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time, would be great to be on CI’s – but…., so it’s done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don’t disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today’s enterprises.
    David V. Corbin | Visual Studio ALM Ranger
    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics).
    - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests.
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!


  • Silverlight Cream for March 06, 2010 -- #808

    - by Dave Campbell
    In this Issue: András Velvárt, felix corke, Colin Eberhardt, Christopher Bennage, Gergely Orosz, Entity Spaces Team Blog, Mike Taulty(-2-), Jit Ghosh, and Jesse Liberty.
    Shoutouts:
    - Jeremy Likness expands on the Silverlight Team's post Vancouver Olympics - How'd We Do That?
    - Gavin Wignall has a post up Creating a 360 photograph of an object with Silverlight Photosynth
    From SilverlightCream.com:
    - Transforming an Ugly Duckling into a Graceful Swan With Expression Blend and Silverlight - Part 2 Intro Animation: András Velvárt has part 2 of his Transformation series up at SilverlightShow... he's taking the intro animation to a new length, allowing playback even... cool video tutorial!
    - Free Silverlight 4 beta skin!: felix corke has a Silverlight 4 theme up for us all to use. If you like a dark theme like Blend, you'll like this... I like it!
    - Linq to Visual Tree: Colin Eberhardt has a great tutorial up for using LINQ to query the WPF or Silverlight Visual Tree while retaining the tree structure. He also has links out to other techniques. (A generic sketch of the underlying idea appears below, after this list.)
    - XAML Attributes on Separate Lines: Christopher Bennage has a post up showing how to easily get all your XAML attributes on separate lines using a VS menu option... I didn't know that!
    - Using built-in, embedded and streamed fonts in Silverlight: Gergely Orosz has a post up at ScottLogic going over fonts in Silverlight -- built-in, embedded, or streamed -- with examples and code.
    - EntitySpaces 2010 Two Part Series on Silverlight and WCF: the Entity Spaces Team Blog has a pair of videos up on Entity Spaces 2010, WCF, and Silverlight. Part 1 is the intro and explanation; part 2 is a full-up app demonstrating it.
    - MEF, Silverlight and the DeploymentCatalog: in an attempt to respond fully to a query, Mike Taulty literally pushed the record button and took off on what became a tutorial video on building a real Silverlight app utilizing MEF.
    - Silverlight 4, Experiment with Pluggable Navigation and a WCF Data Service: Mike Taulty has an experiment detailed on his blog about pluggable navigation and Silverlight 4. He walks through the history of how we got to this point and then takes off into an example... good external links too.
    - Enhancing Silverlight Video Experiences with Contextual Data: this is a post on the MSDN Magazine site where Jit Ghosh has a great long post about not only Smooth Streaming with Silverlight, but also adding contextual data to your video.
    - When Is It OK To Hack?: read what all Jesse Liberty gets involved in when he's trying to get something out the door and has to work around a problem. Just about as interesting are the comments... check it out and leave your own!
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream | Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, MIX10
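    The "Linq to Visual Tree" entry above is about querying the WPF/Silverlight visual tree with LINQ. Colin's library preserves the tree structure and adds axis-style operators; as a much simpler illustration of the underlying idea only (my own sketch, not his API), the extension method below flattens the visual tree with VisualTreeHelper so that ordinary LINQ operators can be applied to it. It is written against WPF; the same VisualTreeHelper calls exist in Silverlight.

      using System.Collections.Generic;
      using System.Windows;
      using System.Windows.Media;

      public static class VisualTreeExtensions
      {
          // Enumerate every visual descendant of an element, depth-first.
          public static IEnumerable<DependencyObject> Descendants(this DependencyObject root)
          {
              int count = VisualTreeHelper.GetChildrenCount(root);
              for (int i = 0; i < count; i++)
              {
                  DependencyObject child = VisualTreeHelper.GetChild(root, i);
                  yield return child;
                  foreach (DependencyObject grandChild in child.Descendants())
                  {
                      yield return grandChild;
                  }
              }
          }
      }

    With System.Linq and System.Windows.Controls also imported, a query such as layoutRoot.Descendants().OfType<TextBox>().Where(t => t.Text.Length == 0) (where layoutRoot is any element already in the visual tree) finds all empty TextBoxes under that element. The hierarchy is lost in the flattening, which is exactly the limitation Colin's Linq to Visual Tree approach addresses.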


  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me, is my obsession to make my code as efficient as possible.  Many people might not realize how much of a task or undertaking that this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine what is the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products, in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading the program and installing it you fill out a form for your usual contact information.  Sure enough within 2 hours, I received an email from a sales representative at Red Gate asking if she could help me to achieve the most out of my trial time so it wouldn’t go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone that setup a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gates friendly and helpful representatives were a breath of fresh air, and something I was thankful for. ANTS CLR Profiler The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the profiler for the CLR and Memory, but also visual studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under ANTS menu) Starting the CLR Performance Profiler from the start menu yields this window If you follow the instructions after launching the program from the start menu (Click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don’t scale well to different dpi scales. 
    The profiler options give you the ability to profile:
    - .NET executable
    - ASP.NET web application (hosted in IIS)
    - ASP.NET web application (hosted in IIS Express)
    - ASP.NET web application (hosted in the Cassini Web Development Server)
    - SharePoint web application (hosted in IIS)
    - Silverlight 4+ application
    - Windows Service
    - COM+ server
    - XBAP (local XAML browser application)
    - Attach to an already running .NET 4 process
    Choosing each option provides a varying set of other variables/options that one can set, including application arguments, operating path, whether to record I/O performance, which performance counters to record (43 counters in all!), etc. All in all, they give you the ability to profile many different .NET project types, and make it simple to do so. In most cases I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well.
    On the flip side, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/GUI to perform profiling after launching, they would provide a Visual Studio panel with the information and integrate more of the profiling project information into Visual Studio. So, now that we have an idea of what options the profiler gives us, it's time to test its abilities and features.
    Horrendous Example Code – Prime Number Generator
    One of my interests besides development is physics and math – what I went to college for. I have always been especially interested in prime numbers, as they are something of a mystery... So, to test the abilities of the profiler, I decided to write a small program, website, and library to generate prime numbers in the quantity that you ask for. I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool.
    First off, the IPrimes interface (all code is downloadable at the end of the post):

      interface IPrimes
      {
          IEnumerable<int> GetPrimes(int retrieve);
      }

    Simple enough, right? Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument. Next, I am going to implement this interface in the most basic way:

      public class DumbPrimes : IPrimes
      {
          public IEnumerable<int> GetPrimes(int retrieve)
          {
              //store a list of primes already found
              var _foundPrimes = new List<int>() { 2, 3 };

              //if i ask for 1 or two primes, return what asked for
              if (retrieve <= _foundPrimes.Count())
                  return _foundPrimes.Take(retrieve);

              //the next number to look at
              int _analyzing = 4;

              //since I already determined I don't have enough
              //execute at least once, and until quantity is sufficed
              do
              {
                  //assume prime until otherwise determined
                  bool isPrime = true;

                  //start dividing at 2
                  //divide until number is reached, or determined not prime
                  for (int i = 2; i < _analyzing && isPrime; i++)
                  {
                      //if (i) goes into _analyzing without a remainder,
                      //_analyzing is NOT prime
                      if (_analyzing % i == 0)
                          isPrime = false;
                  }

                  //if it is prime, add to found list
                  if (isPrime)
                      _foundPrimes.Add(_analyzing);

                  //increment number to analyze next
                  _analyzing++;
              } while (_foundPrimes.Count() < retrieve);

              return _foundPrimes;
          }
      }

    This is the simplest way to get primes, in my opinion.
Checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself. I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textbox’s, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and DumbPrimes IPrimes class, so now I want to check my codes efficiency by profiling it.  All I have to do is build the solution (surprised initiating a profiling session doesn’t do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note, is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade off.  It may be WAY more inefficient, however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking ‘Stop Profiling’, the process running my application stopped, and the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here, I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen, is the real time performance bar.  As your application is running, this will constantly update with the currently selected performance counters status.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after reloaded at a later time.  You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph below the time ticks, but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.  
Click the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected: Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework type code), however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling methods time; in other words, working on making them better is not where your efforts should be focused. – Smart! Source Panel – Detail Panel The source panel is where you can see line level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyways! As you can notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit, however I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer, is the menu option under tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of use who aren’t great on sight: Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, ran it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).  
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – prime numbers greater than 3 fall into the two buckets 6k+/-1, and a number is prime if it is not divisible by any of the primes before it:

public class SmartPrimes : IPrimes
{
    public IEnumerable<int> GetPrimes(int retrieve)
    {
        //store a list of primes already found
        var _foundPrimes = new List<int>() { 2, 3 };

        //if I ask for one or two primes, return what was asked for
        if (retrieve <= _foundPrimes.Count())
            return _foundPrimes.Take(retrieve);

        //the next number to look at
        int _k = 1;

        //since I already determined I don't have enough,
        //execute at least once, and until the quantity is satisfied
        do
        {
            //assume prime until otherwise determined
            bool isPrime = true;
            int potentialPrime;

            //analyze 6k-1
            //assign the value to potential
            potentialPrime = 6 * _k - 1;

            //if there are any primes that divide this, it is NOT a prime number
            //using PLINQ for a quick boost
            isPrime = !_foundPrimes.AsParallel()
                .Any(prime => potentialPrime % prime == 0);

            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(potentialPrime);

            if (_foundPrimes.Count() == retrieve)
                break;

            //analyze 6k+1
            //assign the value to potential
            potentialPrime = 6 * _k + 1;

            //if there are any primes that divide this, it is NOT a prime number
            //using PLINQ for a quick boost
            isPrime = !_foundPrimes.AsParallel()
                .Any(prime => potentialPrime % prime == 0);

            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(potentialPrime);

            //increment k to analyze next
            _k++;
        } while (_foundPrimes.Count() < retrieve);

        return _foundPrimes;
    }
}

Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! (One such tweak is sketched in the note after this review.) Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out though was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than one core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments.  Odd.

Profiling Web Applications
The last thing that I wanted to cover, that means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler when profiling web applications.  In my solution, I have a simple MVC4 application set up with one page, a single input form, that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, however I did receive a UAC prompt for a Red Gate helper app to integrate with the web server without notification. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That’s right – ANTS will actually group method calls by get/post operations, so it is easier to find out what action/page is giving the largest problems…  Pretty cool in my mind!

Overview
Overall, I think that Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long way to go.
3 Biggest Pros:
- Ability to easily drill down from the time graph, to method calls, to source code
- Wide variety of counters to choose from when profiling your application
- Excellent integration/grouping of methods being called from web applications by request – BRILLIANT!

3 Biggest Cons:
- Issue regarding line details in the source view
- Nitpick – Processor time vs. Core time
- Nitpick – Lack of full integration with Visual Studio

Ratings
Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that that entails, and the lack of better integration with Visual Studio.
Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus!
Features (9/10) – Besides the real-time performance monitoring, and the drill-downs that I’ve shown here, ANTS also has great integration with ADO.Net, with the ability to show database queries run by your application in the profiler.  This, with the line-level details, the web request grouping, Reflector integration, and various options to customize your profiling session, I think creates a great set of features!
Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy!
UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat!
Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend trying the application and seeing if it would fit into your process, BUT remember there are still some kinks in it to hopefully be worked out.

My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product.

Code
Feel free to download the code I used above – download via DropBox
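
A footnote on the SmartPrimes discussion above: one of the "more things" that could make it faster is stopping the trial division once a tested prime exceeds the square root of the candidate, since any composite number must have a prime factor no larger than its square root. The sketch below is mine, not the article's code, and it is written in Java rather than the article's C# purely for illustration; names such as SqrtBoundPrimes are made up.

import java.util.ArrayList;
import java.util.List;

public class SqrtBoundPrimes {
    // Returns the first `count` primes, testing 6k-1 and 6k+1 candidates
    // against previously found primes no larger than sqrt(candidate).
    public static List<Integer> firstPrimes(int count) {
        List<Integer> primes = new ArrayList<>(List.of(2, 3));
        if (count <= 2) return primes.subList(0, count);
        for (int k = 1; primes.size() < count; k++) {
            for (int candidate : new int[] {6 * k - 1, 6 * k + 1}) {
                if (primes.size() == count) break;
                boolean isPrime = true;
                for (int p : primes) {
                    if ((long) p * p > candidate) break; // no divisor found up to sqrt
                    if (candidate % p == 0) { isPrime = false; break; }
                }
                if (isPrime) primes.add(candidate);
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        List<Integer> first15 = firstPrimes(15);
        System.out.println(first15.size() + " primes: " + first15);
    }
}

On a 15000-prime run this square-root bound cuts the number of divisions dramatically, which is exactly the kind of hotspot a profiler like ANTS makes visible.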

    Read the article

  • My thoughts on the future of the web with respect to flash, plugins, etc…

    - by joelvarty
    More than 10 years ago I was coding Java applets.  They were great at the time because I could reasonably expect them to run the same way in Netscape and Internet Explorer.  I could also reliably do asynchronous networking back to the server.  But then, Microsoft pulled their native Java runtime from Windows and Internet Explorer.  It got a lot harder to get applets running in people’s browsers. So I started writing ActiveX controls for IE and Java applets for Netscape. Then I switched to Flash, not for too long, but it was enough for me to see that it was a capable and curious implementation of animation, multimedia and script. I even wrote a few Silverlight controls, but then I stopped. I stepped back from all of the “richness” and “interactivity” and I thought about things like accessibility and SEO.  I wondered how my apps and sites might appear to the greater world.  I wondered how the developers I am working with, or who might be inheriting my code down the road, might interact with it. And I thought to myself, What the hell was I thinking? Those embedded controls are not what the web is about, and they run contrary to nearly all of the things that makes the web exciting and fosters innovation within and around.   Those plugins or controls, or whatever you want to refer to them as, are only stop-gaps that fill a hole in the basic HTML/Script/CSS specifications, and that’s all they should ever be used for.  Full stop.  Period.  For instance, I still make use of a nifty little flash control called SWFUpload because it lets me check file size before an upload starts.  I can do the same thing from a Silverlight control.  But rest assured, if I could do this from native javascript, I would in a second.  In fact, the only reason I chose SWFUpload over a ton of other alternatives is that it has a great javascript API so I can do (nearly) all of the UI in regular HTML.  And I ALWAYS provide a non-flash alternative for uploading, and for the rest of any website where the designer has insisted on some piece of creativity that requires flash (usually because the designer is also the flash developer, but that’s an aside…). The web is about openness, and about exposing that openness in such a way that it can be taken advantage of as a small part of a greater whole.  Sure we need security and authentication and ssl and all that stuff, but for me, its something more profound.  For me, the majority of what the web is, is about exposing something that delivers meaning.  What meaning can we derive from an <object> tag?   more later - joel

    Read the article

  • CodePlex Daily Summary for Saturday, April 07, 2012

    CodePlex Daily Summary for Saturday, April 07, 2012Popular ReleasesHarness - Internet Explorer Automation: Harness 2.0.3: support the operation fo frameset, frame and iframe Add commands SwitchFrame GetUrl GoBack GoForward Refresh SetTimeout GetTimeout Rename commands GetActiveWindow to GetActiveBrowser SetActiveWindow to SetActiveBrowser FindWindowAll to FindBrowser NewWindow to NewBrowser GetMajorVersion to GetVersionBetter Explorer: Better Explorer 2.0.0.861 Alpha: - fixed new folder button operation not work well in some situations - removed some unnecessary code like subclassing that is not needed anymore - Added option to make Better Exlorer default (at least for WIN+E operations) - Added option to enable file operation replacements (like Terracopy) to work with Better Explorer - Added some basic usability to "Share" button - Other fixesLightFarsiDictionary - ??????? ??? ?????/???????: LightFarsiDictionary - v1: LightFarsiDictionary - v1WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.3: Version: 2.5.0.3 (Milestone 3): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete [O] WAF: Mark the StringBuilderExtensions class as obsolete because the AppendInNewLine method can be replaced with string.Jo...RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.30: changes NEW: Added Support for "DirectUpload.net" links NEW: Added Support for "PixRoute.com" links NEW: Added Support for "ImagePicasa.com" links FIXED: "PixHub.eu" linksCommunity TFS Build Extensions: April 2012: Release notes to follow...ClosedXML - The easy way to OpenXML: ClosedXML 0.65.2: Aside from many bug fixes we now have Conditional Formatting The conditional formatting was sponsored by http://www.bewing.nl (big thanks) New on v0.65.1 Fixed issue when loading conditional formatting with default values for icon sets New on v0.65.2 Fixed issue loading conditional formatting Improved inserts performanceLiberty: v3.2.0.0 Release 4th April 2012: Change Log-Added -Halo 3 support (invincibility, ammo editing) -Halo 3: ODST support (invincibility, ammo editing) -The file transfer page now shows its progress in the Windows 7 taskbar -"About this build" settings page -Reach Change what an object is carrying -Reach Change which node a carried object is attached to -Reach Object node viewer and exporter -Reach Change which weapons you are carrying from the object editor -Reach Edit the weapon controller of vehicles and turrets -An error dia...MSBuild Extension Pack: April 2012: Release Blog Post The MSBuild Extension Pack April 2012 release provides a collection of over 435 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUID’...DotNetNuke® Community Edition CMS: 06.01.05: Major Highlights Fixed issue that stopped users from creating vocabularies when the portal ID was not zero Fixed issue that caused modules configured to be displayed on all pages to be added to the wrong container in new pages Fixed page quota restriction issue in the Ribbon Bar Removed restriction that would not allow users to use a dash in page names. Now users can create pages with names like "site-map" Fixed issue that was causing the wrong container to be loaded in modules wh...51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.3.1: One Click Install from NuGet Changes to Version 2.1.3.11. [assembly: AllowPartiallyTrustedCallers] has been added back into the AssemblyInfo.cs file to prevent failures with other assemblies in Medium trust environments. 2. The Lite data embedded into the assembly has been updated to include devices from December 2011. The 42 new RingMark properties will return Unknown if RingMark data is not available. Changes to Version 2.1.2.11Code Changes 1. The project is now licenced under the Mozilla...MVC Controls Toolkit: Mvc Controls Toolkit 2.0.0: Added Support for Mvc4 beta and WebApi The SafeqQuery and HttpSafeQuery IQueryable implementations that works as wrappers aroung any IQueryable to protect it from unwished queries. "Client Side" pager specialized in paging javascript data coming either from a remote data source, or from local data. LinQ like fluent javascript api to build queries either against remote data sources, or against local javascript data, with exactly the same interface. There are 3 different query objects exp...ExtAspNet: ExtAspNet v3.1.2: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-04-04 v3.1.2 -??IE?????????????BUG(??"about:blank"?...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.50: Highlight features & improvements: • Significant performance optimization. • Allow store owners to create several shipments per order. Added a new shipping status: “Partially shipped”. • Pre-order support added. Enables your customers to place a Pre-Order and pay for the item in advance. Displays “Pre-order” button instead of “Buy Now” on the appropriate pages. Makes it possible for customer to buy available goods and Pre-Order items during one session. It can be managed on a product variant ...WiX Toolset: WiX v3.6 RC0: WiX v3.6 RC0 (3.6.2803.0) provides support for VS11 and a more stable Burn engine. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/4/3/WiX-v3.6-Release-Candidate-Zero-availableSageFrame: SageFrame 2.0: Sageframe is an open source ASP.NET web development framework developed using ASP.NET 3.5 with service pack 1 (sp1) technology. 
It is designed specifically to help developers build dynamic website by providing core functionality common to most web applications.iTuner - The iTunes Companion: iTuner 1.5.4475: Fix to parse empty playlists in iTunes LibraryDocument.Editor: 2012.2: Whats New for Document.Editor 2012.2: New Save Copy support New Page Setup support Minor Bug Fix's, improvements and speed upsVidCoder: 1.3.2: Added option for the minimum title length to scan. Added support to enable or disable LibDVDNav. Added option to prompt to delete source files after clearing successful completed items. Added option to disable remembering recent files and folders. Tweaked number box to only select all on a quick click.MJP's DirectX 11 Samples: Light Indexed Deferred Rendering: Implements light indexed deferred using per-tile light lists calculated in a compute shader, as well as a traditional deferred renderer that uses a compute shader for per-tile light culling and per-pixel shading.New ProjectsAdvertising Management: Ph?n m?m qu?n lý qu?ng cáoAgile Compact Database: It is database for all. AssemblyTransformer: AssemblyTransformer is a tool for modifying .NET assemblies using Mono Cecil. It handles the entire transformation process including strong name signing and offers a simple command-line interface and a basic framework for creating and configuring specific transformations.Cafe For You: Ph?n m?m gi?i thi?u và qu?n lý quán cafeClient-side Templated Script Control: Allows a developer to add a repeater-style templated list control to a web page that will be data bound client-side, and may respond to client events. The control may be data bound by a web service call on initialization, and may also have it's data source set via client code.CRM Project - Beginner Sample: Sample to help beginners to start in C# development. Ejemplo para ayudar a quienes inician con el desarrollo en C#.Deployment Made Easy: The goal of this project is to make deployments to windows servers easy using the web deployment toolEasyCMS: EasyCMSExcel to SQL Server Database Bulk Transfer: Quick and simple WPF tool to allow users export data from an Excel spreadsheet to a SQL Server database table. Provided as is. But if you need any help let me know. HTML Client demo for WCF RIA Services: Demo application with HTML client (upshot.js + knockout.js) on WCF RIA ServicesKOI: Kinect Open Interface: Kinect Open Interface, KOI, provides a way to detect and have the user confirm 11 gestures for your UI. Please read my blog for info: http://www.kinecthelp.com/2012/04/koi-kinect-open-interface.htmlLazyWinAdmin: LazyWinAdmin is a Powershell script to manage local or remote machine ressources.LCDSmartie dll to display Audio spectrum on Windows 7: An LCDSmartie plugin that displays anything being played as an audio spectrum.LiveHelpChatApp: With Live chat help you can provide online / Offline help to your client it has facebook style chat for online and offline users Download and EnjoyMailSender: Small tool for sending mail messages contains multiple attachements with sum size bigger than allowed size. You can drag'n'drop attachments and click send - application split all attachments to parts and sent it separately. There is not address book yet. 
Mauricio: Mauricio Lima PageMiddleware and Enterprise services foundation: Define a model of deployment and management for Middleware and enterprise applicationsMyFirstPro: This is a test projOld Games Launcher: Old Games Launcher is a combined DosBox frontend & a Direct Draw game/application starter.Pharmakos Studio: Pharmakos Studio is an extensible IDE. It was originally written specifically as an UnrealScript editor for the UDK, however it is being written so that any language can be supported via plugins.Proyecto Eclipse-Android: Proyectos con Eclipse-AdroidProyectos II: Proyecto para Farmaciapullsource: pull source directsource filterQuizzer: Awesome program for quizzes and tests.Solution Settings for Visual Studio: Solution Settings for Visual Studio allows a file containing settings such as formatting, fonts and colors to be included with a project. When the solution is opened, these settings are automatically applied, and when it is closed, the changes are reverted.sundance: test test testWebcomic Reader: A little Idea for an on-, and offline usable, touch-friendly Windows 7 Webcomic Reader.WinRT PathTextBlock: WinRT PathTextBlock is a control that overcomes some of the limitations in the built in WinRT TextBlock, such as not being able to outline the text, and not being able to distort the text, for example to draw it along a circle. Previously, you could use a tool like Expression Design to create the text and export it as a Path, but this wouldn't work for text that needed to be specified at run time. This control allows you to specify the Text property and it will generate the proper Path obj...Yaplex open source projects: Yaplex open source projects????API SDK-VB6(oauth2): ????API SDK-VB6(oauth2)????????API SDK VB6: ??????????API SDK vb?

    Read the article

  • A toolset for self improvement and learning [closed]

    - by Sebastian
    Possible Duplicate: I’m having trouble learning I've been working as an IT consultant for 1½ years and I am very passionate about programming. Before that I studied MSc Software Engineering and had both a part time job as a developer for a big telecom company. During that time I also took extra courses and earned a SCJP certificate. I have been continuously reading a lot of books during the last 3½ years. Now to my problem. I want to continue learning and become a really, really good developer. Apart from my daytime job as a full time java developer I have taken university courses in, for me, new languages and paradigms. Most recently, android game development and then functional programming with Scala. I've read books, went to conferences and had a couple of presentations for internal training purposes in our local office. I want to have some advice from other people who have previously been in my situation or currently are. What are you guys doing to keep improving yourselves? Here is some things that I have found are working for me: Reading books I've mostly read books about best practices for programming, OO-design, refactoring, design patterns, tdd. Software craftmanship if you like. I keep a reading list and my current book is Apprenticeship patterns. Taking courses In my country we have a really good system for taking online distance courses. I have also taken one course at coursera.org and a highly recommend that platform. Ive looked at courses at oreilly.com, industriallogic, javaspecialists.eu and they seem to be okay. If someone gives these type of courses a really good review, I can probably convince my boss. Workshops that span over a couple of days would probably be harder, but Ive seen that uncle Bob will have one about refactoring and tdd in 6months not far from here.. :) Are their possibly some online learning platforms that I dont know about? Educational videos I've bought uncle bobs videos from cleancoders.com and I highly recommend them. The only thing I dont like is that they are quite expensive and that he talks about astronomy for ~10 minutes in every episode. Getting certified I had a lot of fun and learned a lot when I studied for the SCJP. I have also done some preparation for the microsoft equivalent but never went for it. I think it is a good when selling yourself as a newly graduated student and also will boost your knowledge if your are interested in it. Now I would like others to start sharing their experiences and possibly give me some advice! BR Sebastian

    Read the article

  • Getting Started With Sinatra

    - by Liam McLennan
    Sinatra is a Ruby DSL for building web applications. It is distinguished from its peers by its minimalism. Here is hello world in Sinatra:

    require 'rubygems'
    require 'sinatra'

    get '/hi' do
      "Hello World!"
    end

    A haml view is rendered by:

    get '/' do
      haml :name_of_your_view
    end

    Haml is also new to me. It is a Ruby-based view engine that uses significant white space to avoid having to close tags. A hello world web page in haml might look like:

    %html
      %head
        %title Hello World
      %body
        %div Hello World

    You see how the structure is communicated using indentation instead of opening and closing tags. It makes views more concise and easier to read. Based on my syntax highlighter for Gherkin I have started to build a Sinatra web application that publishes syntax highlighted Gherkin feature files. I have found that there is a need to have features online so that customers can access them, and so that they can be linked to project management tools like Jira, Mingle, trac etc. The first thing I want my application to be able to do is display a list of the features that it knows about. This will happen when a user requests the root of the application. Here is my Sinatra handler:

    get '/' do
      feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
      @features = feature_service.features(settings.feature_path, settings.feature_extensions)
      haml :index
    end

    The handler and the view are in the same scope so the @features variable will be available in the view. This is the same way that Rails passes data between actions and views. The view to render the result is:

    %h2 Features
    %ul
      - @features.each do |feature|
        %li
          %a{:href => "/feature/#{feature.name}"}= feature.name

    Clearly this is not a complete web page. I am using a layout to provide the basic html page structure. This view renders an <li> for each feature, with a link to /feature/#{feature.name}. Here is what the page looks like: When the user clicks on one of the links I want to display the contents of that feature file. The required handler is:

    get '/feature/:feature' do
      @feature_name = params[:feature]
      feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
      # TODO replace with feature_service.feature(name)
      @feature = feature_service.features(settings.feature_path, settings.feature_extensions).find do |feature|
        feature.name == @feature_name
      end
      haml :feature
    end

    and the view:

    %h2= @feature.name
    %pre{:class => "brush: gherkin"}= @feature.description
    %div= partial :_back_to_index
    %script{:type => "text/javascript", :src => "/scripts/shCore.js"}
    %script{:type => "text/javascript", :src => "/scripts/shBrushGherkin.js"}
    %script{:type => "text/javascript" } SyntaxHighlighter.all();

    Now when I click on the Search link I get a nicely formatted feature file: If you would like to see the full source it is available on bitbucket.

    Read the article

  • Oracle AIM, Oracle ABF, and Siebel Results Roadmap Officially Retired as of January 31, 2011

    - by tom.spitz
    It seems somehow appropriate that the first entry of the Oracle® Unified Method (OUM) blog is about the retirement of several of our legacy methods, most notably AIM Foundation. If you're reading this, you're probably aware that Oracle has been developing OUM to support the entire Enterprise IT lifecycle, including support for the successful implementation of every Oracle product. As Oracle has continued to acquire new companies and technologies, it has become essential that we also create a single, unified language and approach for implementation - across the Oracle ecosystem. With the release of OUM 5.1 in 2009, OUM provided full support for all enterprise application implementation projects including Oracle E-Business Suite R12, Siebel CRM, PeopleSoft Enterprise, and JD Edwards EnterpriseOne projects. In 2010, we released OUM training that supports the use of OUM on these types of projects. That support represented a major milestone in the evolution of OUM and enabled implementers to transition to OUM. Consequently, we announced a staggered retirement schedule for Oracle's legacy methods. On January 31, 2011 we announced the retirement of: Oracle Application Implementation Method (AIM), Oracle AIM for Business Flows (ABF), and Siebel Results Roadmap. Later this year, we will announce the retirement of Compass - the legacy PeopleSoft method - and Data Warehouse Method Fast Track. OUM is available free of charge to Oracle Gold, Platinum, and Diamond partners through the Oracle Partner Network (OPN) [OUM on OPN]. The OUM Customer Program allows customers to obtain copies of the method for their internal use by contracting with Oracle for an engagement of two weeks or longer meeting some additional minimum criteria. There will be more retirement announcements in the coming months. For now it's "Adios AIM." Thanks for the memories...

    Read the article

  • The 2012 Gartner-FEI CFO Technology Survey -- Reviewed by Jeff Henley, Oracle Chairman

    - by Di Seghposs
    Jeff Henley and Oracle Business Analytics VP Rich Clayton break down the findings of the 2012 Gartner-FEI CFO Technology Survey.  The survey produced by Gartner gathers CFOs perceptions about technology, trends and planned improvements to operations.  Financial executives and IT professionals can use these findings to align spending and organizational priorities and understand how technology should support corporate performance.    Listen to the webcast with Jeff Henley and Rich Clayton - Watch Now » Download the full report for all the details -   Read the Report »        Key Findings ·        Despite slow economic growth, CFOs expect conservative, steady IT spending. ·        The CFOs role in IT investment has increased again in 2012. ·        The 45% of IT leaders that report to the CFO are more than report to any other executive, and represent an increase of 3%. ·        Business analytics needs technology improvement. ·        CFOs are focused on business analytics and business applications more than on technology. ·        Information, social, cloud and mobile technology trends are on CFOs' radar. ·        Focusing on corporate performance management (CPM) projects, 63% of CFOs plan to upgrade business intelligence (BI), analytics and performance management in 2012. ·        Despite advancements in strategy management technologies, CFOs still focus on lagging key performance indicators (KPIs) only. ·        A pace-layered strategy for applications is needed (92% of CFOs believe IT doesn't provide transformation/differentiation). ·        New applications in financial governance rank high on improving compliance and efficiency.

    Read the article

  • Skechers Leverages Oracle Applications, Business Intelligence and On Demand Offerings to Drive Long-Term Growth

    - by user801960
    This month Oracle Retail in the USA announced that Skechers - a world leading lifestyle footwear retailer - would be adopting several Oracle Retail products as part of their global growth strategy and to maximise business efficiency.  While based primarily in the USA, Skechers is a respected retailer across the world and has been an Oracle customer since 1997.  The key information about the announcement is below.  To find out more about Skechers visit their website: http://www.skechers.com/  Skechers U.S.A. Inc., an award-winning global leader in the lifestyle footwear industry, has upgraded and expanded its Oracle® Applications investment, implemented Oracle Database and moved to Oracle On Demand, Oracle’s premier cloud service to support rapid growth across its retail and wholesale channels. The new business information systems are part of a larger initiative for the billion-dollar-plus footwear company to fuel growth, reduce total cost of ownership and enable the business to respond faster to market opportunities. With more than 3,000 styles of shoes to design, develop and market, Skechers upgraded to Oracle’s PeopleSoft Enterprise Financial Management and PeopleSoft Supply Chain Management to increase operational efficiencies and improve controls by establishing an integrated, industry-specific platform. An Oracle customer since 1997, Skechers implemented PeopleSoft Enterprise Real Estate Management to meet the rapid growth of its retail stores worldwide. The company is the first customer to go live on the Real Estate Management module and worked closely with Oracle to provide development insight. Skechers also implemented Oracle Fusion Governance, Risk, and Compliance applications. This deployment enabled the company to leverage its existing corporate governance and compliance efforts throughout the global enterprise and more effectively manage the audit processes across multiple business units, processes and systems while reducing audit costs. Next, Skechers leveraged Oracle Financial Analytics, a pre-built Oracle Business Intelligence Application and PeopleSoft Enterprise Project Costing and PeopleSoft Enterprise Contracts to develop a custom Royalty Management dashboard, providing managers with better financial visibility to the company’s licensing contracts. The company switched to Oracle Database and moved database hosting and management to Oracle On Demand to reduce maintenance, implementation and system administration costs. As a result, Skechers is also achieving a better response time and is delivering a higher level of 24x7 support. OSI Consulting, a Platinum partner in Oracle PartnerNetwork (OPN), provided implementation and integration services to Skechers.   To view the full announcement please click here

    Read the article

  • System crashes/lockups + compiz/cairo/gnome-panel crashing due to cached ram, please help?

    - by Kristian Thomson
    Can someone help me to troubleshoot system crashes and lockups which result in compiz/cairo dock and gnome-panel crashing? I also get no window borders after the crash and a lot of kernel memory errors. Logs are telling me that apps were killed due to not enough memory, but the system is caching like 14GB of my ram so I'm a bit stuck on what/how to stop it. I'm running Ubuntu 12.10 on a 2011 Mac Mini with 16 GB ram. Here's some of the logs that look like they could be causing trouble. I woke up this morning to find chrome/skype/cairo dock and a few others had been killed and here is what the log said. Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.959890] Out of memory: Kill process 12247 (chromium-browse) score 101 or sacrifice child Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.959893] Killed process 12247 (chromium-browse) total-vm:238948kB, anon-rss:17064kB, file-rss:20008kB Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.972283] Out of memory: Kill process 10976 (dropbox) score 3 or sacrifice child Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.972288] Killed process 10976 (dropbox) total-vm:316392kB, anon-rss:115484kB, file-rss:16504kB Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.975890] Out of memory: Kill process 10887 (rhythmbox) score 3 or sacrifice child Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.975895] Killed process 11515 (tray_icon_worke) total-vm:63336kB, anon-rss:15960kB, file-rss:11436kB Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.281535] Out of memory: Kill process 10887 (rhythmbox) score 3 or sacrifice child Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.281539] Killed process 10887 (rhythmbox) total-vm:528980kB, anon-rss:92272kB, file-rss:36520kB Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.283110] Out of memory: Kill process 10889 (skype) score 3 or sacrifice child Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.283113] Killed process 10889 (skype) total-vm:415056kB, anon-rss:84880kB, file-rss:22160kB I went to look deeper into things and saw that the whole time I'm having these kernel errors with out of memory and something mentioning radeon. I have a Radeon HD 6600M graphics card using the open source driver, not the proprietary one. I was wondering if perhaps using the proprietary one would solve the problem. Also, while writing this in Chrome rhythmbox and chrome just got killed while typing this, due to out of memory errors or so it reports, though I have 7 GB of free RAM at the time with 7 GB cached as well. Here is a full copy of my logs that happened in kern.log simply from when I began typing this question. http://pastebin.com/cdxxDktG Thanks in advance, Kris

    Read the article

  • Exceptional DBA Awards 2011

    - by Rebecca Amos
    From today, we’re accepting nominations for the 2011 Exceptional DBA Awards. DBAs make a vital contribution to the running of the companies they work for, and the Exceptional DBA Awards aim to acknowledge this and make this contribution more widely known. Check out our new website for all the info: www.exceptionaldba.com  Being an exceptional DBA doesn’t mean you have to sleep at the office, or know everything there is to know about SQL Server; who ever could? It means that you make an effort to make your servers secure and reliable, and to make your users’ lives easier. Maybe you’ve helped a junior colleague learn something new about server backups? Or cancelled your coffee break to get a database back online? Or contributed to a forum post on performance monitoring? All of these actions show that you might be an exceptional DBA. So have a think about the tasks you do every day that already make you exceptional – and then get started on your entry! You just need to answer a few questions on our website about your experience as a DBA, some of your biggest achievements, and any other activities you participate in within the SQL Server community. Anyone who is currently working as a SQL Server database administrator can enter, or be nominated by someone else. We’ve got four fantastic judges for the Awards, who you’ll be familiar with already: Brent Ozar, Brad McGehee, Rodney Landrum and Steve Jones. They’ll pick five finalists, and then we’ll ask the SQL Server community to vote for their winner. Not only could you win the respect and recognition of peers and colleagues, but the prizes also include full conference registration for the 2011 PASS Summit in Seattle (where the awards ceremony will take place), four nights' hotel accommodation, and $300 towards travel expenses. The winner will get a copy of Red Gate’s SQL DBA Bundle – and they’ll also be featured here, on Simple-Talk. So what are you waiting for? Chances are you’ve already made a small effort for someone today that means you might be an exceptional DBA. Visit the website now, and start writing your entry – or nominate your favourite DBA to enter: www.exceptionaldba.com

    Read the article

  • SQL SERVER – Working with FileTables in SQL Server 2012 – Part 1 – Setting Up Environment

    - by pinaldave
    Filestream is a very interesting feature, and an enhancement of FileTable with Filestream is equally exciting. Today in this post, we will learn how to set up the FileTable environment in SQL Server. The major advantage of FileTable is that it has Windows API compatibility for file data stored within an SQL Server database. In simpler words, FileTables remove a barrier so that SQL Server can be used for the storage and management of unstructured data that are currently residing as files on file servers. Another advantage is Windows application compatibility: existing Windows applications can see these data as files in the file system. This way, you can use SQL Server to access the data using T-SQL enhancements, and Windows can access the file using its applications. So for the first step, you will need to enable the Filestream feature at the database level in order to use the FileTable.

    -- Enable Filestream
    EXEC sp_configure filestream_access_level, 2
    RECONFIGURE
    GO

    -- Create Database
    CREATE DATABASE FileTableDB
    ON PRIMARY
    (Name = FileTableDB, FILENAME = 'D:\FileTable\FTDB.mdf'),
    FILEGROUP FTFG CONTAINS FILESTREAM
    (NAME = FileTableFS, FILENAME = 'D:\FileTable\FS')
    LOG ON
    (Name = FileTableDBLog, FILENAME = 'D:\FileTable\FTDBLog.ldf')
    WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'FileTableDB');
    GO

    Now, you can run the following code and figure out if Filestream options are enabled at the database level.

    -- Check the Filestream Options
    SELECT DB_NAME(database_id), non_transacted_access, non_transacted_access_desc
    FROM sys.database_filestream_options;
    GO

    You can see the resultset of the above query in the following image. As you can see, the file level access is set to 2 (filestream enabled). Now let us create the FileTable in the newly created database.

    -- Create FileTable Table
    USE FileTableDB
    GO

    CREATE TABLE FileTableTb AS FileTable
    WITH (FileTable_Directory = 'FileTableTb_Dir');
    GO

    Now you can select data using a regular SELECT statement.

    SELECT *
    FROM FileTableTb
    GO

    It will return all the important columns which are related to the file. It will provide details like file size, archived, file type, etc. You can also see the FileTable in SQL Server Management Studio. Go to Databases >> Newly Created Database (FileTableDB) >> Expand Tables. Here, you will see a new folder which says “FileTables”. When expanded, it gives the name of the newly created FileTableTb. You can right click on the newly created table and click on “Explore FileTable Directory”. This will open up the folder where the FileTable data will be stored. When you click on the option, it will open up the following folder on my local machine where the FileTable data will be stored: \\127.0.0.1\mssqlserver\FileTableDB\FileTableTb_Dir

    In tomorrow’s blog post as Part 2, we will go over two methods of inserting the data into this FileTable.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Filestream
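
    A side note from the editor rather than from the post above: since FileTable rows are visible to any client that can run the SELECT shown earlier, the same file metadata can also be read from application code. The snippet below is a minimal sketch in Java using a SQL Server JDBC driver; the connection URL is a placeholder for your own instance, and it assumes the FileTableDB database, the FileTableTb table created above, and the standard FileTable metadata columns (name, file_type, cached_file_size).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FileTableBrowser {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; adjust for your own instance and authentication.
            String url = "jdbc:sqlserver://localhost;databaseName=FileTableDB;integratedSecurity=true";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 // Same table as above, narrowed to a few of the file metadata columns.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT name, file_type, cached_file_size FROM FileTableTb")) {
                while (rs.next()) {
                    System.out.printf("%s (%s) - %d bytes%n",
                        rs.getString("name"), rs.getString("file_type"),
                        rs.getLong("cached_file_size"));
                }
            }
        }
    }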

    Read the article

  • COLLABORATE 12: Oracle WebCenter Featured at Largest Oracle User Conference

    - by kellsey.ruppel
    With more than 70 out of about 800 individual sessions, Oracle WebCenter will be a major focus of COLLABORATE 12, this year's Independent Oracle User Group (IOUG) conference, taking place April 22–26 in Las Vegas, Nevada. "COLLABORATE 12 provides a unique chance to share experiences with Oracle customers, product managers, and partners, so you can deepen your knowledge about Oracle WebCenter upgrades, user provisioning, workflow, integration, and much more," says Roel Stalman, vice president of product management for Oracle WebCenter. "In fact, COLLABORATE can form a key part of your training plans for 2012." Full-Day Oracle WebCenter Deep Dive On Sunday, April 22, from 9 a.m. to 3 p.m., registered conference attendees can attend a special deep dive into Oracle WebCenter. During the program, experts from Oracle product management and development teams will delve into all four pillars of Oracle WebCenter—and explore how all four are integrated together. Attendees can also expect A preview of Oracle WebCenter 12c Detailed product demos Prize giveaways throughout the day Going Mobile Oracle WebCenter and mobile technology will be a major theme at this year's conference, with a number of sessions devoted to maximizing the availability of content while also ensuring security. Sessions include Are You Making These Mistakes in Your Oracle Site Studio Implementations? Monday, April 23 at 11 a.m. Case Study: How Medtronic Brought Oracle WebCenter Content to the iPad Tuesday, April 24 at 10:45 a.m. Exposing Oracle WebCenter Data on Mobile and Desktop Devices Through the REST API Tuesday, April 24 at 10:45 a.m. Mobile First: Delivering a Compelling Mobile Experience with Oracle WebCenter Tuesday, April 24 at 4:30 p.m. Optimizing Your Oracle WebCenter Portal Solution for Mobile Devices Wednesday, April 25 at 8:15 a.m. Build an iPhone App Using Oracle WebCenter Portal REST APIs Wednesday, April 25 at 9:30 a.m. Other Don't-Miss Sessions Conference organizers have indicated that the following sessions in particular should be of wide interest to attendees. Oracle WebCenter: Vision, Strategy, and Overview Monday, April 23 at 9:45 a.m. This session explores Oracle's integrated approach to portals and composite applications, Web experience management, enterprise content management, and enterprise social collaboration. It also provides insight into Oracle's strategic direction for Oracle WebCenter. Oracle Webcenter Content, Oracle WebCenter Spaces, Oracle WebCenter Sites: Which Is Right for Me? Monday, April 23 at 1:15 p.m. This session helps attendees determine the best Oracle WebCenter solution to meet their needs for an intranet, corporate Website, or partner portal. Learn more and register to attend COLLABORATE 12.

    Read the article

  • BOX2D and AS3: Mouse Event not working

    - by Gabriel Meono
    Background: Trying to make a simple "drop the ball" game. The code is located inside the first frame of the timeline. Nothing more is on the stage.

    Issue: Using QuickBox2D I made a simple if statement that drops an object according to the mouse-x position:

    if (MouseEvent.CLICK) {
        sim.addCircle({x:mouseX, y:1, radius:0.25, density:5});

    I imported the MouseEvent library:

    import flash.events.MouseEvent;

    Nothing happens if I click, no output errors either. See it in action:
    http://gabrielmeono.com/download/Lucky_Hit_Alpha.swf
    http://gabrielmeono.com/download/Lucky_Hit_Alpha.fla

    Full Code:

    [SWF(width = 350, height = 600, frameRate = 60)]
    import com.actionsnippet.qbox.*;
    import flash.events.MouseEvent;

    var sim:QuickBox2D = new QuickBox2D(this);
    sim.createStageWalls();

    //var ball:sim.addCircle({x:mouseX, y:1, radius:0.25, density:5});
    //
    // make a heavy circle
    sim.addCircle({x:3, y:1, radius:0.25, density:5});
    sim.addCircle({x:2, y:1, radius:0.25, density:5});
    sim.addCircle({x:4, y:1, radius:0.25, density:5});
    sim.addCircle({x:5, y:1, radius:0.25, density:5});
    sim.addCircle({x:6, y:1, radius:0.25, density:5});

    // create a few platforms
    sim.addBox({x:3, y:2, width:4, height:0.2, density:0, angle:0.1});

    // make 26 dominoes
    for (var i:int = 0; i<7; i++){
        //End
        sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0});
        sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0});

        //Mid end
        sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:10, radius:0.1, density:0});

        //Middle Start
        sim.addCircle({x:0 + i * 1.5, y:09, radius:0.1, density:0});
        sim.addCircle({x:1 + i * 1.5, y:08, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 1.5, y:07, radius:0.1, density:0});
        sim.addCircle({x:1 + i * 1.5, y:06, radius:0.1, density:0});
    }

    if (MouseEvent.CLICK) {
        sim.addCircle({x:mouseX, y:1, radius:0.25, density:5});
        sim.start();
        /*sim.mouseDrag();*/
    }

    Read the article

  • Understanding Oracle: Demystifying OpenWorld

    - by mseika
    Seminar: Wednesday 24th October 2012: Avnet, Bracknell Oracle OpenWorld is the world's largest event dedicated to helping enterprises harness the power of technology, during a full week in October. Oracle Corporation always uses Oracle OpenWorld to make its most important product announcements, and this year is no exception. We realise that not all our partners can attend this prestigious event in San Francisco, primarily due to time and cost pressures. Oracle OpenWorld is the only conference that goes this deep and wide with Oracle technology, providing thousands of sessions and hundreds of demonstrations geared toward helping partners and customers get better results with the technology it has —and plan strategically for the technology it will need to keep ahead of the competition in the years to come. With the sheer number of announcements planned, it is sometimes difficult to find your way through the fog and identify the opportunities relevant to your business to take advantage of, this coming year. So why not engage with the Oracle's UK team via Avnet and get the announcements shared with you face-to-face, in the UK? As a key Value Added Distributor of Oracle Applications, Technology and Hardware solutions, Avnet has been attending Oracle OpenWorld for a number of years and invites our partners to attend a half day summary event which will share the keynote announcements. We will also help prioritise for you the announcements of greatest interest and business opportunity for the UK channel. Agenda Time Module 12:00-13:15 Registration and lunch 13:15-14:00 Introductions and Key Hardware announcements Discover how Oracle's complete and integrated application-aware virtualization solutions, including virtualization for SPARC and x86 architectures, can help you gain better efficiencies across your business. Get updates on how Oracle storage products and solutions can accelerate database performance, improve application responsiveness, and meet your data protection needs. 14:00-14:15 Q&A and Break 14:15-15:00 Key Technology announcements Technology products, encompassing Oracle's Database 12c and Middleware, are revolutionizing the industry with record-breaking performance, helping customers consolidate onto private clouds and achieve high returns on investment. 15:00-15:15 Q&A and Break 15:15-16:00 Key Applications announcements Presentations focused on Oracle's strategy and vision for its applications business, including Oracle E-Business Suite; Oracle's PeopleSoft, JD Edwards, Siebel, Hyperion, and Agile products; and the newly available Oracle Fusion Applications. 16:00-16:30 Oracle-on-Oracle announcements & business opportunities with Avnet Learn about Oracle's cloud computing and Oracle-on-Oracle strategies and find out more about Oracle's engineered systems for the broad market 16:30 Close * Please note agenda may be subject to change What do you need to do now Register now or for more information email our Oracle events team at [email protected]. N.B. Places are limited, so please register early to avoid disappointment.

    Read the article

  • Is SugarCRM really adequate for custom development?

    - by dukeofgaming
    I used SugarCRM for a project about two years ago, I ran into errors from the very installation, having to hack the actual installation file to deploy the software in the server... and other erros that I can't recall now. Two years after, I'm picking it up for a project once again... oh dear, I'm feeling like I should have developed the whole thing from scratch myself. Some examples: I couldn't install it in the server (again)... I had to install it locally, then copy the files and database over to the server and manually edit the config file. Constantly getting deployment errors from the module builder... one reason is SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist, I keep deleting such record and it keeps coming back corrupt. I get other deployment errors, but have not figured them out... then I have to rollback all files and database to try again. I deleted a custom module with relationships, the relationships stayed in the other modules and cannot be deleted anymore, PHP warnings all over the place. Quick create for custom modules does not appear, hack needed. Its whole cache directory is a joke, permanent data/files are stored there. The module builder interface disappears required fields. Edit the wrong thing, module builder won't deploy again... then pray Quick Repair and/or Rebuild Relationships do the trick. My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low quality piece of software. This even scared me more: http://amplicate.com/hate/sugarcrm; a quote: I wis this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess! I figured that perhaps some of you have had similar experiences, and have either sticked with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were -or your current situations are- to make up my own mind, since the project I'm working on is long term and I'm feeling SugarCRM will be more an obstacle than an aid. Thanks in advance.

    Read the article

  • Silverlight Cream for February 23, 2011 -- #1051

    - by Dave Campbell
    In this Issue: Ian T. Lackey, Kevin Hoffman, Kunal Chowdhury, Jesse Liberty(-2-), Page Brooks, Deborah Kurata(-2-), and Paul Sheriff. Above the Fold: Silverlight: "Building a Radar Control in Silverlight–Part 2" Page Brooks WP7: "Reactive Drag and Drop Part 2" Jesse Liberty Expression Blend: "Simple RadioButtonList and / or CheckBoxList in Silverlight Using a Behavior" Ian T. Lackey Shoutouts: Kunal Chowdhury delivered a full day session on Silverlight at the Microsoft Imagine Cup Championship event in Mumbai... you can Download Microsoft Imagine Cup Session PPT on Silverlight Dennis Doomen has appeared in my blog any number of times... he's looking for some assistance: Get me on stage on the Developer Days 2011 Steve Wortham posted An Interview with Jeff Wilcox From SilverlightCream.com: Simple RadioButtonList and / or CheckBoxList in Silverlight Using a Behavior Ian T. Lackey bemoans the lack of a RadioButtonList or CheckBoxList, and jumps into Blend to show us how to make one using a behavior... and the code is available too! WP7 for iPhone and Android Developers - Introduction to XAML and Silverlight Continuing his series at SilvelightShow for iPhone and Android devs, Kevin Hoffman has part 2 up getting into the UI with an intro to XAML and Silverlight. Day 1: Working with Telerik Silverlight RadControls Kunal Chowdhury kicked my tires that I had missed his Telerik control series... He's detailing his experience getting up to speed with the Silverlight RadControls. Day 1 is intro, what there is, installing, stuff like that. Part 2 continues: Day 2: Working with BusyIndicator of Telerik Silverlight RadControls, followed (so far) by part 3: Day 3: Working with Masked TextBox of Telerik Silverlight RadControls Reactive Drag and Drop Part 2 Jesse Liberty has his 7th part about Rx up ... and the 2nd part of Reactive Drag and Drop, and oh yeah... it's for WP7 as well! Yet Another Podcast #25–Glenn Block / WCF Next Jesse Liberty has Glenn Block on stage for his Yet Another Podcast number 25... talking WCF with Glenn. Building a Radar Control in Silverlight–Part 2 Page Brooks has part 2 of his 'radar' control for Silverlight up... I don't know where I'd use this, but it's darned cool... and the live demo is amazing. Silverlight Charting: Setting Colors Deborah Kurata is looking at the charting controls now, and how to set colors. She begins with a previous post on charts and adds color definitions to that post. Silverlight Charting: Setting the Tooltip Deborah Kurata next gets into formatting the tooltip you can get when the user hovers over a chart to make it make more sense to your user 'Content' is NOT 'Text' in XAML Paul Sheriff discusses the Content property of XAML controls and how it can be pretty much any other XAML you want it to be, then goes on to show some nice examples. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • 2D Barcode Addendum

    - by Tim Dexter
    Having finally got my external drive back (long story) today from Oklahoma (thank you so much Sammy) I'm back with a full complement of Oracle and blogging tools at my disposal. I have missed JDeveloper this past week, which, I have found, I immensely prefer over Eclipse (let the flaming commence :0) I use Zoundry Raven for writing articles and it's not installed locally but on my external drive, so I have been soldiering on with the blog server's pain-in-the-backside UI for writing. Now that I have my favorite editor back and things are calming down workwise, I will start to get the Excel template posts out. Today though, a note about 2D barcode support, or more specifically any barcode that needs some data manipulation before the barcode font is applied. I wrote about these fonts a long time back and laid out the java class you would need to write if you had an algorithm from the font manufacturer to use. I missed out a valuable point and James at Luminex fell into the trap. He wanted to use the datamatrix font from IDAutomation and had built the java class to be called from the RTF template, but it was not encoding, or at least did not appear to be. New debugging feature to the rescue. Kan over at the bipconsultng blog documented the feature a while back. Just adding <?xdo-debug-level:'STATEMENT'?> to my test template generated all the debug files in my c:\temp directory. No messing with files, just a simple command ... at last! Kan has documented the feature here. With the log in hand I spotted a java error stack referencing a missing code128a method, huh? Looking at James' class he had the following snippet:

    ENCODERS.put("code128a", mUtility.getClass().getMethod("code128a", clazz));
    ENCODERS.put("code128b", mUtility.getClass().getMethod("code128b", clazz));
    ENCODERS.put("code128c", mUtility.getClass().getMethod("code128c", clazz));
    ENCODERS.put("pdf417", mUtility.getClass().getMethod("pdf417", clazz));
    ENCODERS.put("datamatrix", mUtility.getClass().getMethod("datamatrix", clazz));

    His class did not include the other code128 and pdf417 methods and BIP was expecting them. An easy fix: just comment them out, rebuild and deploy, and the encoding started working. If you are hitting similar problems, check that class and ensure all of the referenced methods are available; if not, delete or get commenting. James now has purdy labels popping out that his hardware can read, sweet!
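
    A small aside from the editor rather than from Tim: the failure mode above (a map of encoders built with getClass().getMethod() that blows up when one entry's method is missing) can be softened by registering reflectively looked-up methods defensively. The sketch below is a hypothetical helper in plain Java, not BI Publisher's actual API; the class and method names are made up.

    import java.lang.reflect.Method;
    import java.util.HashMap;
    import java.util.Map;

    public class EncoderRegistry {
        private final Map<String, Method> encoders = new HashMap<>();

        // Register an encoder only if the named method actually exists on the
        // utility object; otherwise log and skip it instead of failing later.
        public void registerIfPresent(Object utility, String name, Class<?>... paramTypes) {
            try {
                encoders.put(name, utility.getClass().getMethod(name, paramTypes));
            } catch (NoSuchMethodException e) {
                System.err.println("Skipping encoder '" + name + "': method not found");
            }
        }

        public boolean has(String name) {
            return encoders.containsKey(name);
        }
    }

    With something like this in place, a template that asks for an encoder that was never implemented fails with a clear log line instead of an opaque java error stack buried in the debug files.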

    Read the article

  • Mini Theater at OTN Lounge During JavaOne

    - by Tori Wieldt
    This year, the Oracle Technology Network Lounge at JavaOne will be in the Hilton Ballroom, right in the center of the JavaOne DEMOgrounds. We'll have Java experts, community members and OTN staff to answer your questions. We've also created a "Mini Theater" for casual demos from community members and Oracle staff. We are keeping the slots short; there will be no tests afterwards. It's your chance to talk to the experts 1 on 1. See how easy it is to turn on a lightbulb with Java and a violin. Here is the full schedule: Monday, October 1 9:40-9:50am  Learn about the Oracle Social Network Developer Challenge 11:20-11:30  Update from the Oracle Academy 11:40-11:50  Caroline Kvitka, @OracleJavaMag, Editor-in-Chief of Java Magazine 12:00-12:20pm  SouJava demonstrates Duke's Choice Award Winner JHome 12:20-12:30pm  Geertjan Wielenga (@geertjanw) Shows What's new in NetBeans 12:40-12:50pm  Learn about the OSN Developer Challenge  2:00-2:10pm  Java.net Robotics  2:30-2:40pm  Geertjan Wielenga (@geertjanw) Java EE and NetBeans Tuesday, October 2 9:40-9:50am  Greenfoot/Kinect demo by Michael Kolling 11:20-11:30  Caroline Kvitka, @OracleJavaMag, Editor-in-Chief of Java Magazine 11:40-11:50  Stephen Chin and Jim Weaver, Top Ten JavaFX Features 12:00-12:10pm  Nokia Student Developer 12:20-12:30pm  Arun Gupta, HTML 5 and Java EE 7 1:00-1:10pm  Update on the Java Community Process (JCP) 1:20-1:30pm  Update from the Oracle Academy  2:00-2:10pm  Java.net Robotics  2:30-2:40pm  Geertjan Wielenga (@geertjanw) NetBeans Java Editor Wednesday, October 3 9:40-9:50am  Greenfoot/Kinect demo by Michael Kolling 11:00-11:10  Caroline Kvitka, @OracleJavaMag, Editor-in-Chief of Java Magazine 11:20-11:30  Angela Caicedo and Jim Weaver, Leveraging JavaFX and HTML5 12:00-12:10pm  Nokia Student Developer 12:10-12:30pm  SouJava demonstrates Duke's Choice Award Winner JHome  2:00-2:10pm  Stephen Chin and Jim Weaver, JavaFX Deployment with Self-Contained Apps  2:30-2:40pm  Geertjan Wielenga (@geertjanw) NetBeans Platform  2:50-3:00pm  Petr Jiricka, Project Easel Changes to this schedule will be announced on @JavaOneConf.

    Read the article

  • StackWrap4J Java wrapper

    - by Bill the Lizard
    The StackWrap4J 1.0.1 jar is now available! (See the changelog) Sample Code / Screen Shot The following code snippet was used to test the wrapper in the Android emulator: TextView text = (TextView)findViewById(R.id.output); StackWrapper stackWrap = new StackOverflow(); String displayText = null; try { Stats stats = stackWrap.getStats(); displayText = "Stack Overflow Statistics"; displayText += "\nTotal Questions: " + stats.getTotalQuestions(); displayText += "\nTotal Unanswered: " + stats.getTotalUnanswered(); displayText += "\nTotal Answers: " + stats.getTotalAnswers(); displayText += "\nTotal Comments: " + stats.getTotalComments(); displayText += "\nTotal Votes: " + stats.getTotalVotes(); displayText += "\nTotal Users: " + stats.getTotalUsers(); } catch(Exception e){ displayText = e.getMessage(); } text.setText(displayText); About StackWrap4J is a Java wrapper for the Stack Exchange API. It is designed to be easy to use, and intuitive to learn while providing the full functionality of the API. License StackWrap4J is available under the MIT license. Download StackWrap4J Platform StackWrap4J was built using Java 1.5 and tested on Sun's JVM. It should run on any implementation of the JVM (1.5 or later). It's also been tested on the Android emulator. It also runs under the Google App Engine. Code You can download the code from our SVN repository hosted on SourceForge. Documentation for the code is also available on the SourceForge site. Authors Bill Cruise Justin Nelson Contact Please feel free to leave feedback here in the Answers section or on the StackWrap4J project discussion forum. Alternatively: Bill is available at: lizard.bill (at) gmail.com Justin can be reached at: jjnguy13 (at) gmail.com Future Currently we are focusing on adding more tests and fixing bugs. We are also working on adding serialization so that our objects can be easily persisted, and throttling so that users of our library don't have to worry about breaking the terms of use of the API. Notes The latest build was tested against version 1.0 of the API on July 28th.
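
    The Android snippet above translates directly to a plain console program; the following is a minimal sketch using only the types and calls already shown (StackOverflow, StackWrapper, Stats and its getters). The package names in the imports are an assumption, not confirmed against the jar, so check the javadoc on the SourceForge site for the actual locations.

        // Console smoke test for StackWrap4J using only the calls shown above.
        // NOTE: these import package names are assumptions -- verify them against
        // the library's own documentation before compiling.
        import net.sf.stackwrap4j.StackOverflow;
        import net.sf.stackwrap4j.StackWrapper;
        import net.sf.stackwrap4j.entities.Stats;

        public class StackStatsDemo {
            public static void main(String[] args) {
                StackWrapper stackWrap = new StackOverflow();
                try {
                    Stats stats = stackWrap.getStats();
                    System.out.println("Stack Overflow Statistics");
                    System.out.println("Total Questions:  " + stats.getTotalQuestions());
                    System.out.println("Total Unanswered: " + stats.getTotalUnanswered());
                    System.out.println("Total Answers:    " + stats.getTotalAnswers());
                    System.out.println("Total Comments:   " + stats.getTotalComments());
                    System.out.println("Total Votes:      " + stats.getTotalVotes());
                    System.out.println("Total Users:      " + stats.getTotalUsers());
                } catch (Exception e) {
                    // The wrapper surfaces API/network failures as exceptions,
                    // mirroring the catch block in the Android snippet.
                    System.err.println("Request failed: " + e.getMessage());
                }
            }
        }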

    Read the article

  • FREE Windows Azure evening in London on April 15th including FREE access to Windows Azure

    - by Eric Nelson
    [Did I overdo the use of FREE in the title? :-)] April 12th to 16th is Microsoft Tech Days – 5 days of sessions on Visual Studio 2010 through to Windows Phone 7 Series. Many of these days are now full (Tip - Thursday still has room if rich client applications are your thing) but the good news is the development community in the UK has pulled together an awesome series of “fringe events” during April in London and elsewhere in the UK. There are sessions on Silverlight, SQL Server 2008 R2, SharePoint 2010 and … the Windows Azure Platform. The UK AzureNET user group is planning to put on a great evening, and AzureNET will be giving away hundreds of free subscriptions to the Windows Azure Platform during the evening. The subscription includes up to 20 Windows Azure Compute nodes and 3 SQL Azure databases for you to play with over the 2 weeks following the event. This is a great opportunity to really explore the Windows Azure Platform in detail – without a credit card! Register now! (and you might also want to join the UK Fans of Azure Community while I have your attention) FYI: the Thursday daytime event includes an introduction to Windows Azure session delivered by my colleague David – which would be an ideal session to attend if you are new to Azure and want to get the most out of the evening session. 7:00pm: See the difference: How Windows Azure helped build a new way of giving, Simon Evans and James Broome (@broomej). They will cover the business context for Azure and then go into patterns used and lessons learnt from the project... as well as showing off the app of course! 8:00pm: UK AzureNET update 8:15pm: NoSQL databases or: How I learned to love the hash table, Mark Rendle (@markrendle). In this session Mark will look at how Azure Table Service works and how to use it. We’ll look briefly at the high-level Data Services SDK, talk about its limitations, and then quickly move on to the REST API and how to use it to improve performance and reduce costs. We’ll make up some pretend real-world problems and solve them in new and interesting ways. We’ll denormalise data (for fun and profit). We’ll talk about how certain social networking sites can deal with huge volumes of data so quickly, and why it sometimes goes wrong. Check out the complete list of fringe events which covers the UK fairly well:

    Read the article
