Search Results

Search found 14131 results on 566 pages for 'of note'.


  • What is auto-mounting my media volume?

    - by user285277
Something is repeatedly mounting my "media" share, doing something with it, then quietly unmounting it with no notification at the user level. From the little I can glean from the console messages below, I thought I'd managed to stop it, if not understand it, last night when I followed instructions for deleting all traces of the Google Update Daemon. I've not been using any Google apps whatsoever, so I was surprised to see that in Console. What's more surprising, and perhaps a little distressing, is that the same thing occurred this evening, when the Google Daemon is long gone. I don't have that log because I can't recall precisely what time it occurred. I'll do a search and provide it if you wish, though. In the meantime, any help with this would be extremely appreciated. I've asked over at Apple Discussions but I think it might be a little deeper than those manning the boards this weekend are comfortable with. It's certainly beyond my meager skills. With apologies in advance if this is more lines than you need. Please note that I've transformed the Google links a little because the forum here requires more reputation points before one can post more than two links.

    12/27/13 10:47:31.000 PM kernel[0]: memorystatus_thread: idle exiting pid 53629 [distnoted]
    12/27/13 10:48:10.433 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 0; not stale; error: The file doesn’t exist.
    12/27/13 10:48:10.434 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 1; not stale; error: The file doesn’t exist.
    12/27/13 10:48:10.950 PM com.apple.SecurityServer[17]: Session 103257 created
    12/27/13 10:48:34.328 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 2; not stale; error: The file doesn’t exist.
    12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount: /Volumes/Media Archive-1, pid 53641
    12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount : succeeded on volume 0xffffff80d6355008 /Volumes/Media Archive-1 (error = 0, retval = 0)
    12/27/13 10:49:32.000 PM kernel[0]: wlEvent: en0 en0 Link DOWN virtIf = 0
    12/27/13 10:49:32.000 PM kernel[0]: AirPort: Link Down on en0. Reason 8 (Disassociated because station leaving).
    12/27/13 10:49:32.000 PM kernel[0]: en0::IO80211Interface::postMessage bssid changed
    12/27/13 10:49:33.681 PM configd[16]: network changed: v4(en0-:10.0.1.12) DNS- Proxy- SMB
    12/27/13 10:49:33.697 PM configd[16]: network changed: DNS* Proxy
    12/27/13 10:49:35.475 PM KernelEventAgent[57]: tid 00000000 received event(s) VQ_NOTRESP (1)
    12/27/13 10:49:35.000 PM kernel[0]: ASP_TCP Disconnect: triggering reconnect by bumping reconnTrigger from curr value 0 on so 0xffffff802176b4a0
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect started /Volumes/Media Archive-1 prevTrigger 0 currTrigger 1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: doing reconnect on /Volumes/Media Archive-1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: posting to KEA EINPROGRESS for /Volumes/Media Archive-1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: Max reconnect time: 600 secs, Connect timeout: 15 secs for /Volumes/Media Archive-1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 1 seconds and then try again
    12/27/13 10:49:35.479 PM KernelEventAgent[57]: tid 00000000 type 'afpfs', mounted on '/Volumes/Media Archive-1', from '//Me@Capsule._afpovertcp._tcp.local/Media%20Archive', not responding
    12/27/13 10:49:35.487 PM KernelEventAgent[57]: tid 00000000 found 1 filesystem(s) with problem(s)
    12/27/13 10:49:36.692 PM com.bourgeoisbits.cloak.agent[14503]: NetworkProfile: (null), (null), (null) (Connected: NO, Airport: NO, Open: NO) [trusted]
    12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 2 seconds and then try again
    12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 4 seconds and then try again
    12/27/13 10:49:41.000 PM kernel[0]: CODE SIGNING: cs_invalid_page(0x1000): p=53662[GoogleSoftwareUp] clearing CS_VALID
    12/27/13 10:49:42.102 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 2 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s)
    12/27/13 10:49:42.113 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateProductID:] KSUpdateEngine updating product ID: "com.google.Keystone"
    12/27/13 10:49:42.116 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 1 ticket(s).
    12/27/13 10:49:42.121 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x531870 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x5302d0 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x534340 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } >
    12/27/13 10:49:42.130 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x1a31a90 server=<KSOmahaServer:0x534340> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=1 activeTickets=1 rollCallTickets=1 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> >
    12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009]
    12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009])
    12/27/13 10:49:42.292 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )}
    12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch.
    12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply.
    12/27/13 10:49:42.296 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply.
    12/27/13 10:49:42.299 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete.
    12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 8 seconds and then try again
    12/27/13 10:49:43.152 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateAllProducts] KSUpdateEngine updating all installed products.
    12/27/13 10:49:43.153 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 2 ticket(s).
    12/27/13 10:49:43.158 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x18367a0 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x1837e10 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 >, <KSTicket:0x1834750 productID=com.google.talkplugin version=4.7.0.15362 xc=<KSPathExistenceChecker:0x1833890 path=/Library/Application Support/Google/GoogleTalkPlugin.app> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x52e930 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } >
    12/27/13 10:49:43.159 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x53a210 server=<KSOmahaServer:0x52e930> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=2 activeTickets=1 rollCallTickets=2 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> <o:app appid="com.google.talkplugin" version="4.7.0.15362" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> >
    12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009]
    12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone, ... (2)) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009])
    12/27/13 10:49:43.244 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )}
    12/27/13 10:49:43.247 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch.
    12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply.
    12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply.
    12/27/13 10:49:43.250 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete.
    12/27/13 10:49:45.570 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 1 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s)
    12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1
    12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65.
    12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 10 seconds and then try again
    12/27/13 10:49:53.828 PM KernelEventAgent[57]: tid 00000000 unmounting 1 filesystems
    12/27/13 10:49:53.000 PM kernel[0]: AFP_VFS afpfs_unmount: /Volumes/Media Archive-1, flags 524288, pid 57
    12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: get the reconnect token
    12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: GetReconnectToken failed 32 /Volumes/Media Archive-1
    12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_unmount : afpfs_DoReconnect sent signal for unmount to proceed
    12/27/13 10:50:12.104 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon main] GoogleSoftwareUpdateDaemon inactive, shutdown.
    12/27/13 10:50:29.396 PM Dock[93157]: no information back from LS about running process
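    Editor's note (not from the original post): one way to catch the culprit in the act is to watch filesystem activity while the mount appears. A minimal sketch, assuming the stock OS X 10.8 command-line tools and the volume name taken from the log above:

        # Watch filesystem calls live and note which process touches the share:
        sudo fs_usage -w -f filesys | grep -i "media archive"

        # While the volume is mounted, list the processes holding it open:
        sudo lsof +D "/Volumes/Media Archive-1"

        # Look for third-party launchd jobs that could be scheduling the mount:
        launchctl list | grep -v com.apple
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons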

    Read the article

  • Cannot open root device xvda1 or unknown-block(0,0)

    - by svoop
I'm putting together a Dom0 and three DomUs (all Gentoo) with kernel 3.5.7 and Xen 4.1.1. Each Dom has its own md (md0 for Dom0, md1 for Dom1, etc.). Dom0 works fine so far; however, I'm stuck trying to create DomUs. It appears the xvda1 device on the DomU is not created or accessible:

    Parsing config file dom1
    domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3", features="(null)"
    domainbuilder: detail: xc_dom_kernel_mem: called
    domainbuilder: detail: xc_dom_boot_xen_init: ver 4.1, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
    domainbuilder: detail: xc_dom_parse_image: called
    domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
    domainbuilder: detail: loader probe failed
    domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
    domainbuilder: detail: xc_dom_malloc : 10530 kB
    domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x2f7a4f -> 0xa48888
    domainbuilder: detail: loader probe OK
    xc: detail: elf_parse_binary: phdr: paddr=0x1000000 memsz=0x558000
    xc: detail: elf_parse_binary: phdr: paddr=0x1558000 memsz=0x690e8
    xc: detail: elf_parse_binary: phdr: paddr=0x15c2000 memsz=0x127c0
    xc: detail: elf_parse_binary: phdr: paddr=0x15d5000 memsz=0x533000
    xc: detail: elf_parse_binary: memory: 0x1000000 -> 0x1b08000
    xc: detail: elf_xen_parse_note: GUEST_OS = "linux"
    xc: detail: elf_xen_parse_note: GUEST_VERSION = "2.6"
    xc: detail: elf_xen_parse_note: XEN_VERSION = "xen-3.0"
    xc: detail: elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
    xc: detail: elf_xen_parse_note: ENTRY = 0xffffffff815d5210
    xc: detail: elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
    xc: detail: elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
    xc: detail: elf_xen_parse_note: PAE_MODE = "yes"
    xc: detail: elf_xen_parse_note: LOADER = "generic"
    xc: detail: elf_xen_parse_note: unknown xen elf note (0xd)
    xc: detail: elf_xen_parse_note: SUSPEND_CANCEL = 0x1
    xc: detail: elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
    xc: detail: elf_xen_parse_note: PADDR_OFFSET = 0x0
    xc: detail: elf_xen_addr_calc_check: addresses:
    xc: detail: virt_base = 0xffffffff80000000
    xc: detail: elf_paddr_offset = 0x0
    xc: detail: virt_offset = 0xffffffff80000000
    xc: detail: virt_kstart = 0xffffffff81000000
    xc: detail: virt_kend = 0xffffffff81b08000
    xc: detail: virt_entry = 0xffffffff815d5210
    xc: detail: p2m_base = 0xffffffffffffffff
    domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff81000000 -> 0xffffffff81b08000
    domainbuilder: detail: xc_dom_mem_init: mem 5000 MB, pages 0x138800 pages, 4k each
    domainbuilder: detail: xc_dom_mem_init: 0x138800 pages
    domainbuilder: detail: xc_dom_boot_mem_init: called
    domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64
    domainbuilder: detail: xc_dom_malloc : 10000 kB
    domainbuilder: detail: xc_dom_build_image: called
    domainbuilder: detail: xc_dom_alloc_segment: kernel : 0xffffffff81000000 -> 0xffffffff81b08000 (pfn 0x1000 + 0xb08 pages)
    domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1000+0xb08 at 0x7fdec9b85000
    xc: detail: elf_load_binary: phdr 0 at 0x0x7fdec9b85000 -> 0x0x7fdeca0dd000
    xc: detail: elf_load_binary: phdr 1 at 0x0x7fdeca0dd000 -> 0x0x7fdeca1460e8
    xc: detail: elf_load_binary: phdr 2 at 0x0x7fdeca147000 -> 0x0x7fdeca1597c0
    xc: detail: elf_load_binary: phdr 3 at 0x0x7fdeca15a000 -> 0x0x7fdeca1cd000
    domainbuilder: detail: xc_dom_alloc_segment: phys2mach : 0xffffffff81b08000 -> 0xffffffff824cc000 (pfn 0x1b08 + 0x9c4 pages)
    domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x1b08+0x9c4 at 0x7fdec91c1000
    domainbuilder: detail: xc_dom_alloc_page : start info : 0xffffffff824cc000 (pfn 0x24cc)
    domainbuilder: detail: xc_dom_alloc_page : xenstore : 0xffffffff824cd000 (pfn 0x24cd)
    domainbuilder: detail: xc_dom_alloc_page : console : 0xffffffff824ce000 (pfn 0x24ce)
    domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
    domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
    domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s)
    domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff827fffff, 20 table(s)
    domainbuilder: detail: xc_dom_alloc_segment: page tables : 0xffffffff824cf000 -> 0xffffffff824e6000 (pfn 0x24cf + 0x17 pages)
    domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cf+0x17 at 0x7fdece676000
    domainbuilder: detail: xc_dom_alloc_page : boot stack : 0xffffffff824e6000 (pfn 0x24e6)
    domainbuilder: detail: xc_dom_build_image : virt_alloc_end : 0xffffffff824e7000
    domainbuilder: detail: xc_dom_build_image : virt_pgtab_end : 0xffffffff82800000
    domainbuilder: detail: xc_dom_boot_image: called
    domainbuilder: detail: arch_setup_bootearly: doing nothing
    domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <= matches
    domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
    domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32
    domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
    domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
    domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x138800
    domainbuilder: detail: clear_page: pfn 0x24ce, mfn 0x37ddee
    domainbuilder: detail: clear_page: pfn 0x24cd, mfn 0x37ddef
    domainbuilder: detail: xc_dom_pfn_to_ptr: domU mapping: pfn 0x24cc+0x1 at 0x7fdece675000
    domainbuilder: detail: start_info_x86_64: called
    domainbuilder: detail: setup_hypercall_page: vaddr=0xffffffff81001000 pfn=0x1001
    domainbuilder: detail: domain builder memory footprint
    domainbuilder: detail: allocated
    domainbuilder: detail: malloc : 20658 kB
    domainbuilder: detail: anon mmap : 0 bytes
    domainbuilder: detail: mapped
    domainbuilder: detail: file mmap : 0 bytes
    domainbuilder: detail: domU mmap : 21392 kB
    domainbuilder: detail: arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xbaa6f
    domainbuilder: detail: shared_info_x86_64: called
    domainbuilder: detail: vcpu_x86_64: called
    domainbuilder: detail: vcpu_x86_64: cr3: pfn 0x24cf mfn 0x37dded
    domainbuilder: detail: launch_vm: called, ctxt=0x7fff224e4ea0
    domainbuilder: detail: xc_dom_release: called
    Daemon running with PID 4639
    [ 0.000000] Initializing cgroup subsys cpuset
    [ 0.000000] Initializing cgroup subsys cpu
    [ 0.000000] Linux version 3.5.7-gentoo (root@majordomo) (gcc version 4.5.4 (Gentoo 4.5.4 p1.0, pie-0.4.7) ) #1 SMP Tue Nov 20 10:49:51 CET 2012
    [ 0.000000] Command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3
    [ 0.000000] ACPI in unprivileged domain disabled
    [ 0.000000] e820: BIOS-provided physical RAM map:
    [ 0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable
    [ 0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
    [ 0.000000] Xen: [mem 0x0000000000100000-0x0000000138ffffff] usable
    [ 0.000000] NX (Execute Disable) protection: active
    [ 0.000000] MPS support code is not built-in.
    [ 0.000000] Using acpi=off or acpi=noirq or pci=noacpi may have problem
    [ 0.000000] DMI not present or invalid.
    [ 0.000000] No AGP bridge found
    [ 0.000000] e820: last_pfn = 0x139000 max_arch_pfn = 0x400000000
    [ 0.000000] e820: last_pfn = 0x100000 max_arch_pfn = 0x400000000
    [ 0.000000] init_memory_mapping: [mem 0x00000000-0xffffffff]
    [ 0.000000] init_memory_mapping: [mem 0x100000000-0x138ffffff]
    [ 0.000000] NUMA turned off
    [ 0.000000] Faking a node at [mem 0x0000000000000000-0x0000000138ffffff]
    [ 0.000000] Initmem setup node 0 [mem 0x00000000-0x138ffffff]
    [ 0.000000] NODE_DATA [mem 0x1387fc000-0x1387fffff]
    [ 0.000000] Zone ranges:
    [ 0.000000] DMA [mem 0x00010000-0x00ffffff]
    [ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
    [ 0.000000] Normal [mem 0x100000000-0x138ffffff]
    [ 0.000000] Movable zone start for each node
    [ 0.000000] Early memory node ranges
    [ 0.000000] node 0: [mem 0x00010000-0x0009ffff]
    [ 0.000000] node 0: [mem 0x00100000-0x138ffffff]
    [ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs
    [ 0.000000] No local APIC present
    [ 0.000000] APIC: disable apic facility
    [ 0.000000] APIC: switched to apic NOOP
    [ 0.000000] e820: cannot find a gap in the 32bit address range
    [ 0.000000] e820: PCI devices with unassigned 32bit BARs may break!
    [ 0.000000] e820: [mem 0x139100000-0x1394fffff] available for PCI devices
    [ 0.000000] Booting paravirtualized kernel on Xen
    [ 0.000000] Xen version: 4.1.1 (preserve-AD)
    [ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1
    [ 0.000000] PERCPU: Embedded 26 pages/cpu @ffff880138400000 s75712 r8192 d22592 u2097152
    [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1259871
    [ 0.000000] Policy zone: Normal
    [ 0.000000] Kernel command line: root=/dev/xvda1 console=hvc0 root=/dev/xvda1 ro 3
    [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
    [ 0.000000] __ex_table already sorted, skipping sort
    [ 0.000000] Checking aperture...
    [ 0.000000] No AGP bridge found
    [ 0.000000] Memory: 4943980k/5128192k available (3937k kernel code, 448k absent, 183764k reserved, 1951k data, 524k init)
    [ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
    [ 0.000000] Hierarchical RCU implementation.
    [ 0.000000] NR_IRQS:4352 nr_irqs:256 16
    [ 0.000000] Console: colour dummy device 80x25
    [ 0.000000] console [tty0] enabled
    [ 0.000000] console [hvc0] enabled
    [ 0.000000] installing Xen timer for CPU 0
    [ 0.000000] Detected 3411.602 MHz processor.
    [ 0.000999] Calibrating delay loop (skipped), value calculated using timer frequency.. 6823.20 BogoMIPS (lpj=3411602)
    [ 0.000999] pid_max: default: 32768 minimum: 301
    [ 0.000999] Security Framework initialized
    [ 0.001355] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
    [ 0.002974] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
    [ 0.003441] Mount-cache hash table entries: 256
    [ 0.003595] Initializing cgroup subsys cpuacct
    [ 0.003599] Initializing cgroup subsys freezer
    [ 0.003637] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
    [ 0.003637] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
    [ 0.003643] CPU: Physical Processor ID: 0
    [ 0.003645] CPU: Processor Core ID: 0
    [ 0.003702] SMP alternatives: switching to UP code
    [ 0.011791] Freeing SMP alternatives: 12k freed
    [ 0.011835] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only.
    [ 0.011886] Brought up 1 CPUs
    [ 0.011998] Grant tables using version 2 layout.
    [ 0.012009] Grant table initialized
    [ 0.012034] NET: Registered protocol family 16
    [ 0.012328] PCI: setting up Xen PCI frontend stub
    [ 0.015089] bio: create slab <bio-0> at 0
    [ 0.015158] ACPI: Interpreter disabled.
    [ 0.015180] xen/balloon: Initialising balloon driver.
    [ 0.015180] xen-balloon: Initialising balloon driver.
    [ 0.015180] vgaarb: loaded
    [ 0.016126] SCSI subsystem initialized
    [ 0.016314] PCI: System does not support PCI
    [ 0.016320] PCI: System does not support PCI
    [ 0.016435] NetLabel: Initializing
    [ 0.016438] NetLabel: domain hash size = 128
    [ 0.016440] NetLabel: protocols = UNLABELED CIPSOv4
    [ 0.016447] NetLabel: unlabeled traffic allowed by default
    [ 0.016475] Switching to clocksource xen
    [ 0.017434] pnp: PnP ACPI: disabled
    [ 0.017501] NET: Registered protocol family 2
    [ 0.017864] IP route cache hash table entries: 262144 (order: 9, 2097152 bytes)
    [ 0.019322] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
    [ 0.020376] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
    [ 0.020497] TCP: Hash tables configured (established 524288 bind 65536)
    [ 0.020500] TCP: reno registered
    [ 0.020525] UDP hash table entries: 4096 (order: 5, 131072 bytes)
    [ 0.020564] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes)
    [ 0.020624] NET: Registered protocol family 1
    [ 0.020658] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
    [ 0.020662] software IO TLB [mem 0xfb632000-0xff631fff] (64MB) mapped at [ffff8800fb632000-ffff8800ff631fff]
    [ 0.020750] platform rtc_cmos: registered platform RTC device (no PNP device found)
    [ 0.021378] HugeTLB registered 2 MB page size, pre-allocated 0 pages
    [ 0.023378] msgmni has been set to 9656
    [ 0.023544] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
    [ 0.023549] io scheduler noop registered
    [ 0.023551] io scheduler deadline registered
    [ 0.023580] io scheduler cfq registered (default)
    [ 0.023650] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
    [ 0.023845] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
    [ 0.024082] Non-volatile memory driver v1.3
    [ 0.024085] Linux agpgart interface v0.103
    [ 0.024207] Event-channel device installed.
    [ 0.024265] [drm] Initialized drm 1.1.0 20060810
    [ 0.024268] [drm:i915_init] *ERROR* drm/i915 can't work without intel_agp module!
    [ 0.025145] brd: module loaded
    [ 0.025565] loop: module loaded
    [ 0.045646] Initialising Xen virtual ethernet driver.
    [ 0.198264] i8042: PNP: No PS/2 controller found. Probing ports directly.
    [ 0.199096] i8042: No controller found
    [ 0.199139] mousedev: PS/2 mouse device common for all mice
    [ 0.259303] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
    [ 0.259353] rtc_cmos: probe of rtc_cmos failed with error -38
    [ 0.259440] md: raid1 personality registered for level 1
    [ 0.259542] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
    [ 0.259732] ip_tables: (C) 2000-2006 Netfilter Core Team
    [ 0.259747] TCP: cubic registered
    [ 0.259886] NET: Registered protocol family 10
    [ 0.260031] ip6_tables: (C) 2000-2006 Netfilter Core Team
    [ 0.260070] sit: IPv6 over IPv4 tunneling driver
    [ 0.260194] NET: Registered protocol family 17
    [ 0.260213] Bridge firewalling registered
    [ 5.360075] XENBUS: Waiting for devices to initialise: 25s...20s...15s...10s...5s...0s...235s...230s...225s...220s...215s...210s...205s...200s...195s...190s...185s...180s...175s...170s...165s...160s...155s...150s...145s...140s...135s...130s...125s...120s...115s...110s...105s...100s...95s...90s...85s...80s...75s...70s...65s...60s...55s...50s...45s...40s...35s...30s...25s...20s...15s...10s...5s...0s...
    [ 270.360180] XENBUS: Timeout connecting to device: device/vbd/51713 (local state 3, remote state 1)
    [ 270.360273] md: Waiting for all devices to be available before autodetect
    [ 270.360277] md: If you don't use raid, use raid=noautodetect
    [ 270.360388] md: Autodetecting RAID arrays.
    [ 270.360392] md: Scanned 0 and added 0 devices.
    [ 270.360394] md: autorun ...
    [ 270.360395] md: ... autorun DONE.
    [ 270.360431] VFS: Cannot open root device "xvda1" or unknown-block(0,0): error -6
    [ 270.360435] Please append a correct "root=" boot option; here are the available partitions:
    [ 270.360440] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
    [ 270.360444] Pid: 1, comm: swapper/0 Not tainted 3.5.7-gentoo #1
    [ 270.360446] Call Trace:
    [ 270.360454] [<ffffffff813d2205>] ? panic+0xbe/0x1c5
    [ 270.360459] [<ffffffff813d2358>] ? printk+0x4c/0x51
    [ 270.360464] [<ffffffff815d5fb7>] ? mount_block_root+0x24f/0x26d
    [ 270.360469] [<ffffffff815d62b6>] ? prepare_namespace+0x168/0x192
    [ 270.360474] [<ffffffff815d5ca7>] ? kernel_init+0x1b0/0x1c2
    [ 270.360477] [<ffffffff815d5500>] ? loglevel+0x34/0x34
    [ 270.360482] [<ffffffff813d5a64>] ? kernel_thread_helper+0x4/0x10
    [ 270.360486] [<ffffffff813d4038>] ? retint_restore_args+0x5/0x6
    [ 270.360490] [<ffffffff813d5a60>] ? gs_change+0x13/0x13

The config:

    name = "dom1"
    bootloader = "/usr/bin/pygrub"
    root = "/dev/xvda1 ro"
    extra = "3" # runlevel
    memory = 5000
    disk = [ 'phy:/dev/md1,xvda1,w' ]
    # vif = [ 'ip=..., vifname=veth1' ] # none for now

Here are some details on the Dom0 kernel (grepping for "xen"):

    CONFIG_XEN=y
    CONFIG_XEN_DOM0=y
    CONFIG_XEN_PRIVILEGED_GUEST=y
    CONFIG_XEN_PVHVM=y
    CONFIG_XEN_MAX_DOMAIN_MEMORY=500
    CONFIG_XEN_SAVE_RESTORE=y
    CONFIG_PCI_XEN=y
    CONFIG_XEN_PCIDEV_FRONTEND=y
    # CONFIG_XEN_BLKDEV_FRONTEND is not set
    CONFIG_XEN_BLKDEV_BACKEND=y
    # CONFIG_XEN_NETDEV_FRONTEND is not set
    CONFIG_XEN_NETDEV_BACKEND=y
    CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
    CONFIG_HVC_XEN=y
    CONFIG_HVC_XEN_FRONTEND=y
    # CONFIG_XEN_WDT is not set
    # CONFIG_XEN_FBDEV_FRONTEND is not set
    # Xen driver support
    CONFIG_XEN_BALLOON=y
    # CONFIG_XEN_SELFBALLOONING is not set
    CONFIG_XEN_SCRUB_PAGES=y
    CONFIG_XEN_DEV_EVTCHN=y
    CONFIG_XEN_BACKEND=y
    CONFIG_XENFS=y
    CONFIG_XEN_COMPAT_XENFS=y
    CONFIG_XEN_SYS_HYPERVISOR=y
    CONFIG_XEN_XENBUS_FRONTEND=y
    CONFIG_XEN_GNTDEV=m
    CONFIG_XEN_GRANT_DEV_ALLOC=m
    CONFIG_SWIOTLB_XEN=y
    CONFIG_XEN_TMEM=y
    CONFIG_XEN_PCIDEV_BACKEND=m
    CONFIG_XEN_PRIVCMD=y
    CONFIG_XEN_ACPI_PROCESSOR=m

And the DomU kernel (grepping for "xen"):

    CONFIG_XEN=y
    CONFIG_XEN_DOM0=y
    CONFIG_XEN_PRIVILEGED_GUEST=y
    CONFIG_XEN_PVHVM=y
    CONFIG_XEN_MAX_DOMAIN_MEMORY=500
    CONFIG_XEN_SAVE_RESTORE=y
    CONFIG_PCI_XEN=y
    CONFIG_XEN_PCIDEV_FRONTEND=y
    CONFIG_XEN_BLKDEV_FRONTEND=y
    CONFIG_XEN_NETDEV_FRONTEND=y
    CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
    CONFIG_HVC_XEN=y
    CONFIG_HVC_XEN_FRONTEND=y
    # CONFIG_XEN_WDT is not set
    # CONFIG_XEN_FBDEV_FRONTEND is not set
    # Xen driver support
    CONFIG_XEN_BALLOON=y
    # CONFIG_XEN_SELFBALLOONING is not set
    CONFIG_XEN_SCRUB_PAGES=y
    CONFIG_XEN_DEV_EVTCHN=y
    # CONFIG_XEN_BACKEND is not set
    CONFIG_XENFS=y
    CONFIG_XEN_COMPAT_XENFS=y
    CONFIG_XEN_SYS_HYPERVISOR=y
    CONFIG_XEN_XENBUS_FRONTEND=y
    CONFIG_XEN_GNTDEV=m
    CONFIG_XEN_GRANT_DEV_ALLOC=m
    CONFIG_SWIOTLB_XEN=y
    CONFIG_XEN_TMEM=y
    CONFIG_XEN_PRIVCMD=y
    CONFIG_XEN_ACPI_PROCESSOR=m

Any ideas what I'm doing wrong here? Thanks a lot!
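    Editor's note (not from the original question): the "XENBUS: Timeout connecting to device: device/vbd/51713" line means the DomU's block frontend came up but Dom0 never connected a backend for that device (51713 is the xvda1 device number), so the backend side in Dom0 is the place to look. A hedged sketch, assuming Xen 4.1's xm/xenstore tools and the log path used by a stock xend install:

        # In Dom0, while the DomU is stuck waiting on XENBUS:
        xm list                                  # note dom1's numeric domain ID
        xm block-list dom1                       # backend state: 4 = connected, lower values = stuck
        xenstore-ls /local/domain/0/backend/vbd  # backend nodes, including any "error" entries
        cat /proc/mdstat                         # confirm md1 is assembled and active in Dom0
        tail -50 /var/log/xen/xen-hotplug.log    # hotplug script failures while attaching phy:/dev/md1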

    Read the article

  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
I'm setting up a couple of dedicated servers, and having problems setting up my nameservers properly. One of these is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5 with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM. So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end. Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve.

Things I have tried:
* Going through standard procedures to set up DNS in WHM
* Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through WHM procedures for setting up DNS
* Clearing all DNS information, and setting up BIND via shell (completely outside of cPanel) by using my config and zone files from the LEMP server as a template

named runs just fine, but nothing is resolving. When I "dig any example.com" I get a SERVFAIL message. Nslookups return no information. Here are my config and zone files.

named.conf:

    controls {
        inet 127.0.0.1 allow { localhost; } keys { coretext-key; };
    };
    options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        // Those options should be used carefully because they disable port
        // randomization
        // query-source port 53;
        // query-source-v6 port 53;
        allow-query { any; };
        allow-query-cache { any; };
    };
    logging {
        channel default_debug {
            file "data/named.run";
            severity dynamic;
        };
    };
    view "localhost_resolver" {
        match-clients { 127.0.0.0/24; };
        match-destinations { localhost; };
        recursion yes;
        //zone "." IN {
        //    type hint;
        //    file "/var/named/named.ca";
        //};
        include "/etc/named.rfc1912.zones";
    };
    view "internal" {
        /* This view will contain zones you want to serve only to "internal" clients
           that connect via your directly attached LAN interfaces - "localnets". */
        match-clients { localnets; };
        match-destinations { localnets; };
        recursion yes;
        zone "." IN {
            type hint;
            file "/var/named/named.ca";
        };
        // include "/var/named/named.rfc1912.zones";
        // you should not serve your rfc1912 names to non-localhost clients.
        // These are your "authoritative" internal zones, and would probably
        // also be included in the "localhost_resolver" view above:
        zone "example.com" {
            type master;
            file "data/db.example.com";
        };
        zone "3.2.1.in-addr.arpa" {
            type master;
            file "data/db.1.2.3";
        };
    };
    view "external" {
        /* This view will contain zones you want to serve only to "external" clients
         * that have addresses that are not on your directly attached LAN interface subnets: */
        match-clients { any; };
        match-destinations { any; };
        recursion no;
        // you'd probably want to deny recursion to external clients, so you don't
        // end up providing free DNS service to all takers
        allow-query-cache { none; };
        // Disable lookups for any cached data and root hints
        // all views must contain the root hints zone:
        //include "/etc/named.rfc1912.zones";
        zone "." IN {
            type hint;
            file "/var/named/named.ca";
        };
        zone "example.com" {
            type master;
            file "data/db.example.com";
        };
        zone "3.2.1.in-addr.arpa" {
            type master;
            file "data/db.1.2.3";
        };
    };
    include "/etc/rndc.key";

db.example.com:

    $TTL 1D
    ;
    ; Zone file for example.com
    ;
    ; Mandatory minimum for a working domain
    ;
    @ IN SOA ns1.example.com. contact.example.com. (
        2011042905 ; serial
        8H ; refresh
        2H ; retry
        4W ; expire
        1D ; default_ttl
    )
        NS ns1.example.com.
        NS ns2.example.com.
    ns1 A 1.2.3.4
    ns2 A 1.2.3.5
    example.com. A 1.2.3.4
    localhost A 127.0.0.1
    www CNAME example.com.
    mail CNAME example.com.
    ;

db.1.2.3:

    $TTL 1D
    $ORIGIN 3.2.1.in-addr.arpa.
    @ IN SOA ns1.example.com contact.example.com. (
        2011042908 ;
        8H ;
        2H ;
        4W ;
        1D ;
    )
        NS ns1.example.com.
        NS ns2.example.com.
    4 PTR hostname.example.com.
    5 PTR hostname.example.com.
    ;

Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong, then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spends his whole shift asking all the same questions the last guy asked. So, in summary:

* Nameservers, with IPs, are correctly registered with the domain registrar
* named is configured and running
* ...and must not be configured correctly, because nothing resolves.

Any help would be great. I changed domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks!

UPDATE: I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with my two public IPs that I have named listening on.

resolv.conf:

    search www.example.com example.com
    nameserver 127.0.0.1
    nameserver 7.8.9.10 ;Was in here by default, authoritative nameserver of hosting company
    nameserver 1.2.3.4 ;Public IP #1
    nameserver 1.2.3.5 ;Public IP #2

Now when I dig example.com from the host, it resolves. If I try to dig from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.
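    Editor's note (not part of the original question): BIND ships with checkers that catch most configuration and zone-file mistakes, and querying the public IP directly with recursion disabled shows exactly what an external resolver sees. A minimal sketch, substituting the real domain and IPs for the generics used above:

        # Validate the configuration and the zone file syntax:
        named-checkconf /etc/named.conf
        named-checkzone example.com /var/named/data/db.example.com

        # Ask the server directly, the way an outside resolver would:
        dig @1.2.3.4 example.com A +norecurse

        # Confirm named is listening on the public interfaces, not just localhost:
        netstat -lnp | grep :53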

    Read the article

  • How to improve Varnish performance?

    - by Darkseal
We're experiencing a strange problem with our current Varnish configuration.

- 4x Web Servers (IIS 6.5 on Windows 2003 Server, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz quad core, 4GB RAM)
- 3x Varnish Servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, Kernel Linux 3.2.0-29-generic, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz quad core, 4GB RAM)

The Varnish servers' performance is awfully bad in general, to the point that if we shut down one of them, the other two are unable to fulfill all the requests and start to skip beats, resulting in pending requests, timeouts, 404s, etc. What can we do to improve our Varnish performance? Considering that we're getting fewer than 5k requests per second during our peak, we should be able to serve our pages even with a single one of them without any problem.

We use a standard, vanilla CFG, as shown by this varnishadm param.show output:

    acceptor_sleep_decay 0.900000 []
    acceptor_sleep_incr 0.001000 [s]
    acceptor_sleep_max 0.050000 [s]
    auto_restart on [bool]
    ban_dups on [bool]
    ban_lurker_sleep 0.010000 [s]
    between_bytes_timeout 60.000000 [s]
    cc_command "exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared -Wl,-x -o %o %s"
    cli_buffer 8192 [bytes]
    cli_timeout 20 [seconds]
    clock_skew 10 [s]
    connect_timeout 0.700000 [s]
    critbit_cooloff 180.000000 [s]
    default_grace 10.000000 [seconds]
    default_keep 0.000000 [seconds]
    default_ttl 120.000000 [seconds]
    diag_bitmap 0x0 [bitmap]
    esi_syntax 0 [bitmap]
    expiry_sleep 1.000000 [seconds]
    fetch_chunksize 128 [kilobytes]
    fetch_maxchunksize 262144 [kilobytes]
    first_byte_timeout 60.000000 [s]
    group varnish (113)
    gzip_level 6 []
    gzip_memlevel 8 []
    gzip_stack_buffer 32768 [Bytes]
    gzip_tmp_space 0 []
    gzip_window 15 []
    http_gzip_support off [bool]
    http_max_hdr 64 [header lines]
    http_range_support on [bool]
    http_req_hdr_len 8192 [bytes]
    http_req_size 32768 [bytes]
    http_resp_hdr_len 8192 [bytes]
    http_resp_size 32768 [bytes]
    idle_send_timeout 60 [seconds]
    listen_address :80
    listen_depth 1024 [connections]
    log_hashstring on [bool]
    log_local_address off [bool]
    lru_interval 2 [seconds]
    max_esi_depth 5 [levels]
    max_restarts 4 [restarts]
    nuke_limit 50 [allocations]
    pcre_match_limit 10000 []
    pcre_match_limit_recursion 10000 []
    ping_interval 3 [seconds]
    pipe_timeout 60 [seconds]
    prefer_ipv6 off [bool]
    queue_max 100 [%]
    rush_exponent 3 [requests per request]
    saintmode_threshold 10 [objects]
    send_timeout 600 [seconds]
    sess_timeout 5 [seconds]
    sess_workspace 16384 [bytes]
    session_linger 50 [ms]
    session_max 100000 [sessions]
    shm_reclen 255 [bytes]
    shm_workspace 8192 [bytes]
    shortlived 10.000000 [s]
    syslog_cli_traffic on [bool]
    thread_pool_add_delay 2 [milliseconds]
    thread_pool_add_threshold 2 [requests]
    thread_pool_fail_delay 200 [milliseconds]
    thread_pool_max 2000 [threads]
    thread_pool_min 5 [threads]
    thread_pool_purge_delay 1000 [milliseconds]
    thread_pool_stack unlimited [bytes]
    thread_pool_timeout 300 [seconds]
    thread_pool_workspace 65536 [bytes]
    thread_pools 2 [pools]
    thread_stats_rate 10 [requests]
    user varnish (106)
    vcc_err_unref on [bool]
    vcl_dir /etc/varnish
    vcl_trace off [bool]
    vmod_dir /usr/lib/varnish/vmods
    waiter default (epoll, poll)

This is our default.vcl file: LINK

    sub vcl_recv {
        # BASIC recv COMMANDS:
        #
        # lookup -> search the item in the cache
        # pass   -> always serve a fresh item (no-caching)
        # pipe   -> like pass but ensures a direct connection with the backend (no-cache AND no-proxy)

        # Allow the backend to serve up stale content if it is responding slow.
        # This defines when Varnish should use a stale object if it has one in the cache.
        set req.grace = 30s;

        if (client.ip == "127.0.0.1") {
            # request from NGINX - do not alter X-Forwarded-For
            set req.http.HTTPS = "on";
        } else {
            # Add an X-Forwarded-For to keep track of the original request
            unset req.http.HTTPS;
            unset req.http.X-Forwarded-For;
            set req.http.X-Forwarded-For = client.ip;
        }

        set req.backend = www_director;

        # Strip all cookies to force an anonymous request when the back-end servers are down.
        if (!req.backend.healthy) {
            unset req.http.Cookie;
        }

        ## HTTP Accept-Encoding
        if (req.http.Accept-Encoding) {
            if (req.http.Accept-Encoding ~ "gzip") {
                set req.http.Accept-Encoding = "gzip";
            } else if (req.http.Accept-Encoding ~ "deflate") {
                set req.http.Accept-Encoding = "deflate";
            } else {
                unset req.http.Accept-Encoding;
            }
        }

        if (req.request != "GET" && req.request != "HEAD" &&
            req.request != "PUT" && req.request != "POST" &&
            req.request != "TRACE" && req.request != "OPTIONS" &&
            req.request != "DELETE") {
            /* non-RFC2616 or CONNECT */
            return (pipe);
        }
        if (req.request != "GET" && req.request != "HEAD") {
            /* only deal with GET and HEAD by default */
            return (pass);
        }
        if (req.http.Authorization) {
            return (pass);
        }
        if (req.http.HTTPS ~ "on") {
            return (pass);
        }

        ######################################################
        # COOKIE HANDLING
        ######################################################
        # METHOD 1: do not remove cookies, but pass the page if they contain TB_NC
        if (!(req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$")) {
            if (req.http.Cookie && req.http.Cookie ~ "TB_NC") {
                return (pass);
            }
        }

        return (lookup);
    }

    # Code determining what to do when serving items from the IIS Server
    sub vcl_fetch {
        unset beresp.http.Server;
        set beresp.http.Server = "Server-1";

        # Allow items to be stale if needed. This is the maximum time Varnish should keep an object.
        set beresp.grace = 1h;

        if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$") {
            unset beresp.http.set-cookie;
        }

        # Default Varnish VCL logic
        if (!beresp.cacheable || beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
            set beresp.ttl = 120 s;
            return(hit_for_pass);
        }

        # Not cacheable if it has the specific TB_NC no-caching cookie
        if (req.http.Cookie && req.http.Cookie ~ "TB_NC") {
            set beresp.http.X-Cacheable = "NO:Got Cookie";
            set beresp.ttl = 120 s;
            return(hit_for_pass);
        }
        # Not cacheable if it has Cache-Control private
        else if (beresp.http.Cache-Control ~ "private") {
            set beresp.http.X-Cacheable = "NO:Cache-Control=private";
            set beresp.ttl = 120 s;
            return(hit_for_pass);
        }
        # Not cacheable if it has Cache-Control no-cache or Pragma no-cache
        else if (beresp.http.Cache-Control ~ "no-cache" || beresp.http.Pragma ~ "no-cache") {
            set beresp.http.X-Cacheable = "NO:Cache-Control=no-cache (or pragma no-cache)";
            set beresp.ttl = 120 s;
            return(hit_for_pass);
        }
        # If we reach this point, the object is cacheable.
        # Cacheable but with not enough ttl: we need to extend the lifetime of the object artificially.
        # NOTE: the Varnish default TTL is set in /etc/sysconfig/varnish
        # and can be checked using the following command:
        # varnishadm param.show default_ttl
        else if (beresp.ttl < 1s) {
            set beresp.ttl = 5s;
            set beresp.grace = 5s;
            set beresp.http.X-Cacheable = "YES:FORCED";
        }
        # Cacheable and with valid TTL.
        else {
            set beresp.http.X-Cacheable = "YES";
        }

        # DEBUG INFO (Cookies)
        # set beresp.http.X-Cookie-Debug = "Request cookie: " + req.http.Cookie;

        return(deliver);
    }

    sub vcl_error {
        set obj.http.Content-Type = "text/html; charset=utf-8";
        if (obj.status == 404) {
            synthetic {" <!-- Markup for the 404 page goes here --> "};
        } else if (obj.status == 500) {
            synthetic {" <!-- Markup for the 500 page goes here --> "};
        } else if (obj.status == 503) {
            if (req.restarts < 4) {
                return(restart);
            } else {
                synthetic {" <!-- Markup for the 503 page goes here --> "};
            }
        } else {
            synthetic {" <!-- Markup for a generic error page goes here --> "};
        }
    }

    sub vcl_deliver {
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
        } else {
            set resp.http.X-Cache = "MISS";
        }
    }

Thanks in advance,
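    Editor's note (not from the original post): with the stock parameters shown above, each of the two thread pools starts with only thread_pool_min = 5 threads, and under a traffic spike Varnish 3 can queue or drop sessions while it slowly spawns more. Checking the worker counters and raising the thread floor is a cheap first experiment; the numbers below are illustrative, not tuned values:

        # Look for queued/dropped work, a classic sign of thread starvation:
        varnishstat -1 | egrep 'n_wrk|thread'    # watch n_wrk_queued and n_wrk_drop in particular

        # Raise the thread floor at runtime (make it permanent in the startup options later):
        varnishadm param.set thread_pool_min 200
        varnishadm param.set thread_pool_add_delay 1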

    Read the article

  • Windows using the DNS suffix search list on all lookups, even valid FQDNs. How to stop this?

    - by RealityGone
When doing DNS lookups (specifically using nslookup; for some reason most things are not affected), Windows XP Pro SP3 is using the DNS suffix search list for every single one, even for fully qualified domain names. For example, I look up "www.microsoft.com" but Windows actually asks for "www.microsoft.com.eondream.com" (eondream.com is my primary domain). Now I can fix the issue by removing the primary DNS suffix, but it seems to me that the DNS suffix search list should only be used for short, unqualified names (where dots=0 or something). I'm sure I have a misconfiguration somewhere in Windows but I don't know where. I've changed every option I can think of or find. Below is the output of ipconfig /all and nslookup (with debug & db2 enabled). This is using a static IP & (internal) DNS server.

    C:\>ipconfig /all

    Windows IP Configuration
        Host Name . . . . . . . . . . . . : frayedlogic
        Primary Dns Suffix  . . . . . . . : eondream.com
        Node Type . . . . . . . . . . . . : Unknown
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : eondream.com

    Ethernet adapter Wireless Network Connection:
        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Dell Wireless 1390 WLAN Mini-Card
        Physical Address. . . . . . . . . : 00-1B-FC-29-EB-6B
        Dhcp Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : 192.168.13.32
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.13.13
        DNS Servers . . . . . . . . . . . : 192.168.19.19

    C:\>nslookup
    Default Server: shardik.eondream.com
    Address: 192.168.19.19

    > set debug
    > set db2
    > www.microsoft.com
    Server: shardik.eondream.com
    Address: 192.168.19.19

    ------------
    Got answer:
        HEADER:
            opcode = QUERY, id = 2, rcode = NOERROR
            header flags: response, want recursion, recursion avail.
            questions = 1, answers = 1, authority records = 0, additional = 0
        QUESTIONS:
            www.microsoft.com.eondream.com, type = A, class = IN
        ANSWERS:
        -> www.microsoft.com.eondream.com
            internet address = 208.69.36.132
            ttl = 0 (0 secs)
    ------------
    Non-authoritative answer:
    Name: www.microsoft.com.eondream.com
    Address: 208.69.36.132

(Note: it resolves to that IP because I use the OpenDNS service and that is their suggestion page or whatever you want to call it.) If I am reading the nslookup output correctly, then it is not a problem with my DNS server, because Windows is actually asking for the incorrect domain.
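    Editor's note (not from the original question): appending the suffix search list is nslookup's documented behavior for names it does not consider terminated; a trailing dot marks a name as fully qualified, and nslookup's own suffix handling can be switched off with set nosearch. A quick check to separate nslookup's behavior from the system resolver's:

        C:\>nslookup www.microsoft.com.
        (trailing dot = fully qualified; no suffix is appended)

        C:\>nslookup
        > set nosearch
        > www.microsoft.com
        (with search disabled, the bare name is sent as-is)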

    Read the article

  • SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

    - by mseika
Dear SOA partner community member,

To provide our community members the best of our knowledge, we want your feedback on our SOA Partner Community; thus we are organizing the SOA Partner Community Survey 2012. We ask you to participate in the survey and give your valuable feedback on various areas of marketing, sales and education. To continue our successful BPM Suite, Oracle is launching the Process Accelerators initiative together with you. It's your opportunity to co-develop and market predefined processes. Oracle Fusion Applications Design Patterns are a great tool for developing your SOA or BPM solution or process accelerators. To promote your SOA & BPM Specialization we continue to offer several benefits. This month we would like to highlight our Specialization Plaques - make sure you request one for your office! Our Fusion Middleware Summer Camps are booked out; if you could not get a seat, you can attend the SOA & BPM track @ Virtual Developer Day: Oracle Fusion Development. Oracle demo systems offer two new demos: Business Driven Development based on BPM Suite & SOA Lifecycle Management.

Jürgen Kress
Oracle SOA & BPM Partner Adoption EMEA

NEW CONTENT
- Community Survey
- Process Accelerators Kit
- Plaques SOA & BPM Specialized
- SOA & BPM at Virtual Developer Day
- News from our Partners & Community
- Overview of SOA Diagnostics in 11.1.1.6
- Business driven development (BDD) demo now available!
- SOA Lifecycle Management
- Oracle Fusion applications design patterns
- Updated material by Oracle
- Connect and Network: SOA Blogs | SOA on Facebook | SOA on LinkedIn | SOA on Twitter | Mix SOA Forum

COMMUNITY SURVEY

Like every year, we would like to get your feedback in our SOA Partner Community Survey 2012. Make sure that you take part, to further develop our community and support our planning! It is key for us to get your feedback to prepare for the next fiscal year.

PROCESS ACCELERATORS KIT

Oracle is very interested to co-develop and market with you, our partners, pre-defined processes for BPM Suite. I am very happy to announce a new program called "Oracle BPM Partner Solution Catalog". This program will provide a one-stop shop for our customers looking for Oracle BPM partner solutions available in the market today. The Oracle BPM Solution Catalog will be hosted on our very popular Oracle Technology Network (OTN). To give you an idea of the scale of customer visibility, OTN today receives over 1 million hits per day from our business and developer community. We would like to invite you to list your Oracle BPM 11g solutions available today. In order to participate in this program, you need to do the following:

- Fill in the attached slide templates (#3 and #4) for each Oracle BPM 11g solution you would like to list on OTN. Please add links to whitepapers, videos and references to the specific solution in the template slide. We recommend that you create a landing page on your website for these linked artifacts and just point to the same from within the PowerPoint template. This will give you the flexibility to update the information as frequently as needed. If you have the particular solution in production or a reference available, please list them as well.
- Send the PowerPoint template slides (1 set of slides for each Oracle BPM solution) to [email protected].

In addition to having the opportunity to list your solutions on OTN for Oracle customers, you will have the chance to advertise your new wins/implementations/solutions in an Oracle-sponsored PM webinar held every quarter.
This program is targeted to go live by the end of summer 2012. At this point we are targeting a soft launch at the end of July 2012, so send in your BPM solution information as soon as possible. We would love to have your solution(s) listed in the "Oracle BPM Partner Solution Catalog" at the time of the launch. This will be a live repository, so you can keep adding more solutions as they become available. If you have any questions, please feel free to contact us: [email protected], Product Strategy Director, Oracle BPM, phone +1 650.506.5486. Thank you, and we look forward to hearing from you. Oracle BPM team

- Process Accelerators Overview.pdf
- ProcessAcceleratorsDataSheet.pdf
- Demos draUPK.zip & trmUPK.zip
- BPM Solution repository slides.ppt

Additional BPM material:
- BPM Process Development Lifecycle - a document that describes the recommended approach to collaborative process modeling across business and IT tools
- ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) by Andrejus Baranovskis
- BPMN process editor problems in 11.1.1.6 by Mark Nelson
- BPM - Disable DBMS job to refresh B2B Materialized View by Mark Nelson

For the complete kit please visit the BPM folder at our SOA Community Workspace (SOA Community membership required). For the complete presentation please visit our SOA Community Workspace (SOA Community membership required). Information is Oracle and Partner confidential!

PLAQUES SOA & BPM SPECIALIZED

We continue to offer you a nice SOA & BPM Specialization plaque with your logo to prove your success. If you are a SOA or BPM Specialized partner and would like to request the plaque, please send Brigitte an e-mail with the following information:

- Partner name
- Partner logo (preferably an eps file)
- Partner status: gold or platinum
- Your shipping address
- Your Specialization: SOA or BPM

We recommend mounting the plaque at your office reception; in addition, you can use the SOA Specialization logos on your website. Download the logos: Gold & Platinum, or the BPM logos Gold & Platinum.

SOA & BPM AT VIRTUAL DEVELOPER DAY

Register now for this FREE hands-on online workshop. Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development, including:

- Is Oracle ADF development faster and simpler than Forms, Apex or .Net?
- Mobile Application Development with ADF Mobile
- Oracle ADF development with Eclipse
- Oracle WebCenter Portal and ADF Development
- Application Lifecycle Management with ADF
- Building Process Centric Applications with ADF and BPM
- Oracle Business Intelligence and ADF Integration
- Live Q&A chats with Oracle technical staff

Developer lead, manager or architect - this event has something for everyone. Don't miss this opportunity.

Tuesday, July 10, 2012
9:00 a.m. PT - 1:00 p.m. PT
11:00 a.m. CT - 3:00 p.m. CT
12:00 p.m. ET - 4:00 p.m. ET
1:00 p.m. BRT - 5:00 p.m. BRT

Register online now! for this FREE event.
Agenda:
09:00 am - Opening
09:30 am - Keynote: Oracle Fusion Development
 Track 1: Introduction to Fusion Development | Track 2: What's New in Fusion Development | Track 3: Fusion Development in the Enterprise
10:00 am - Is Oracle ADF Development Faster and Simpler than Oracle Forms, APEX or .Net? | Mobile Application Development with ADF Mobile | Oracle WebCenter Portal and ADF Development
11:00 am - Rich Web UI made simple - an ADF Faces Overview | Oracle Enterprise Pack for Eclipse - ADF Development | Building Process Centric Applications with ADF and BPM
12:00 noon - Next Generation Controller for JSF | Application Lifecycle Management for ADF | Oracle Business Intelligence and ADF Integration
*Hands On Lab - WebCenter and ADF Lab w/ JDeveloper - lab materials will be provided ahead of the event to give you ample time to work through the lab and increase the productivity of the live chat sessions the day of the event.

Session abstracts. Register online now! for this FREE event. Read more on Community Events and post your comment here.

NEWS FROM OUR PARTNERS AND COMMUNITY

Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

JDeveloper & ADF: Troubleshooting BPMN process editor problems in 11.1.1.6 http://dlvr.it/1p0FfS
SOA Community: SOA & BPM @ Virtual Developer Day: Oracle Fusion Development - July 10th 2012 https://soacommunity.wordpress.com/2012/07/02/soa-bpm-virtual-developer-day-oracle-fusion-developmentjuly-10th-2012/ #soacommunity #soa #bom #education
orclateamsoa: A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove... http://ow.ly/1kYqES
SOA Community: SOA Community Newsletter June 2012 http://wp.me/p10C8u-qw
SOA Community: BPMN process editor problems in 11.1.1.6 by Mark Nelson http://redstack.wordpress.com/2012/06/27/bpmn-process-editor-problems-in-11-1-1-6 #soacommunity #bpm
OTNArchBeat: SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G
Andrejus Baranovskis: ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) http://fb.me/1coX4r1X1
SOA Community: SOA Learning Library provides a comprehensive curriculum for the SOA and BPM product suites https://soacommunity.wordpress.com/2012/06/27/soa-learning-library #soacommunity #soa #bpm
OTNArchBeat: A Universal JMX Client for Weblogic - Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser http://pub.vitrue.com/mQVZ
OTNArchBeat: BPM - Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0
Oracle SOA: Learn how Choice Hotels Implements Innovative Google Maps Solution with #OracleSOA http://bit.ly/MTwIJ3
SOA Community: top Tweets SOA Partner Community - June 2012. Send your tweets @soacommunity #soacommunity https://soacommunity.wordpress.com/2012/06/25/top-tweets-soa-partner-community-june-2012
Torsten Winterberg: #OPITZ is pushing Oracle commitment to the next level: New Specializations done: ADF, BPM, WLS, Exadata http://bit.ly/KX1WVS
ServiceTechSymposium: Only 8 more days left until Super Early Bird Registration Discount expires! http://www.servicetechsymposium.com
OracleBlogs: SOA Management in 3 minutes - Video explainer http://ow.ly/1kN5pn
SOA Community: SOA, Cloud & Service Technology Symposium 2012 London - Enter Promo Code: Djmxz370 https://soacommunity.wordpress.com/2012/06/22/soa-cloud-service-technology-symposium-2012-london #soasymposium #soacommunity #soa
SOA Community: product management ADF for BPM training 5 seats left https://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity
OTNArchBeat: Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin http://pub.vitrue.com/UEiF
OTNArchBeat: SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount http://pub.vitrue.com/8E0J
SOA Community: Become a facebook fan of soacommunity http://www.facebook.com/soacommunity #soacommunity
SOA Community: SOA Suite HealthCare Integration Architecture https://blogs.oracle.com/SOAForHealthcare/entry/soa_suite_healthcare_integration_architecture #soacommunity #soa
Andrejus Baranovskis: Running Pre-built Virtual Machine for SOA Suite and BPM Suite 11g PS5 on Mac OS X Snow Leopard (10.6) http://fb.me/vB8nO0Vg
OracleBlogs: Principles of Service-Oriented Architecture by Douwe P. van den Bos http://ow.ly/1kIcOP
OTNArchBeat: Oracle Public Cloud Architecture | @TylerJewell http://ow.ly/bHAcL
The SOA Network: Business Process Management, Service-Oriented Architecture, and Web 2.0: Business Transformation or... http://bit.ly/LBgREL #ITNews #SOA
OracleBlogs: Oracle SOA Foundation Practitioner Certification http://ow.ly/1kGYYg
Frank Nimphius: Learn Advanced ADF. ORACLE Fusion Middleware Summer Camps in Lisbon - July 9th - 13th http://bit.ly/KGCl3i
SOA Community: Transform Your Application Integration with Best Practices from Oracle Customers https://blogs.oracle.com/SOA/entry/transform_your_application_integration_with #soacommunity #soa #bpm
Simone Geib: What you always wanted to know about #oraclesoa diagnostics: Shawn Bailey, Overview of SOA Diagnostics in 11.1.1.6, http://ow.ly/bxK0M
Oracle SOA: Save the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to #oraclesoa http://bit.ly/LsNDGl
OTNArchBeat: New VirtualBox images for Oracle SOA Suite & Oracle BPM Suite 11.1.1.6.0 http://ow.ly/bwDAl
OracleBlogs: Process development lifecycle in Oracle BPM 11g http://ow.ly/1ktesY
Daniel Amadei: New post: Oracle BPEL 11g Message Delivery & Recovery. http://amadei.com.br/blog/index.php/oracle-bpel-11g-message-delivery
SOA Community: Sending out the June edition of the #soacommunity newsletter - read it or become a member http://www.oracle.com/goto/emea/soa! #soa #bpm
Arun Pareek: For the past six months Ahmed Aboulnaga and me have been working on Oracle SOA Suite 11g Administrator's Handbook. http://lnkd.in/CAvpUQ
SOA Community: Sun shine all day no clouds - solar eclipse is over... #sunshine #cloud http://www.infoq.com/presentations/Swarm-Computing
Michel Schildmeijer: Watch my blog Oracle Service Bus 11g: listing projects and services with WLST - part 1 http://lnkd.in/B7f3GQ @TITAN_GS @wlscommunity
OTNArchBeat: Book Review: Oracle Application Integration Architecture (AIA) Foundation Pack 11gR1: Essentials | Rajesh Raheja http://ow.ly/bn2cc
OTNArchBeat: Driving from Business Architecture to Business Process Services | @vghariharan http://ow.ly/bn5UB
OTNArchBeat: SOA Analysis within the Department of Defense Architecture Framework (DoDAF) 2.0 - Part II | Dawit Lessanu http://ow.ly/bn6sX
Simone Geib: Contact me directly for ideas how to improve http://bit.ly/advancedsoasuite and additional posts, presentations, white papers, ... #soasuite
Simone Geib: #soasuite advanced OTN page has become too cluttered. Broke it into separate pages to start with. http://bit.ly/advancedsoasuite
OracleBlogs: June Webcast: SOA Gateway Implementation and Troubleshooting (2 sessions) http://ow.ly/1kbRFA
ServiceTechSymposium: New session just posted to calendar: "NoSQL for Data Services, Data Virtualization & Big Data" by Guido Schmutz, Trivadis AG http://ow.ly/bjjOe
Debra Lilley: looks good - real proof people are using the apps! RT @fteter: Very cool Fusion Applications Help site: http://bit.ly/L3nvOR #FusionApps
demed: "rapid proliferation of cloud computing will drive convergence of SOA and cloud paradigms" http://ovum.com/2012/05/18/soa-paves-the-way-for-cloud/
SOA Community: Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! https://soacommunity.wordpress.com/2012/05/31/middleware-oracle-excellence-awards-2012happy-new-year/ #soacommunity #opn #opnaward #specialization #oracle
SOA Community: Happy New Year #soacommunity thanks for the business! Time for a drink http://pic.twitter.com/zkK08KWB
OTNArchBeat: Who should 'own' the Enterprise Architecture? | Michael Glas http://bit.ly/K0ge0Q
SOA Community: top Tweets SOA Partner Community - May 2012 http://wp.me/p10C8u-pP
ServiceTechSymposium: New session just posted to Symposium calendar: "Elastic SOA in the Cloud" by Steve Millidge, C2B2 Consulting http://www.servicetechsymposium.com/agenda2012.php#elastic_soa_in_the_cloud
orclateamsoa: A-Team Blog #ateam: How to Set JVM Parameters in Oracle SOA 11G http://ow.ly/1k2cnl
ServiceTechSymposium: New session just posted to Symposium calendar: "SOA Governance at EDP: A Global Energy Company" by Manuel Rosa, Link http://www.servicetechsymposium.com/agenda2012.php#soa_governance_at_edp
SOA Community: VirtualBox image SOA Suite & BPM Suite 11.1.1.6.0 - Your feedback? http://wp.me/p10C8u-qh
Oracle Middleware: Save the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to #oraclesoa http://bit.ly/LU1y5N
OTNArchBeat: Goodbye, Silos. Hello SOA. | @stephanieoverby http://pub.vitrue.com/NJJO
SOA Community: BPM Standard Edition - to start your BPM project http://wp.me/p10C8u-qj

Please feel free to send us your news! And add your blog to our SOA blog wiki.

Back to top

OVERVIEW OF SOA DIAGNOSTICS IN 11.1.1.6

What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g, but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing:
RDA: Remote Diagnostic Agent
DFW: Diagnostic Framework
Selective Tracing
DMS: Dynamic Monitoring Service
ODL: Oracle Diagnostic Logging
ADR: Automatic Diagnostics Repository
ADRCI: Automatic Diagnostics Repository Command Interpreter
WLDF: WebLogic Diagnostic Framework
This overview is not meant to be a comprehensive guide on using all of these tools; however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable to Fusion Middleware as a whole, but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool, or 'WLST', interface.
WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post, but you can find general information here. There are more specific resources in the sections below. In this post, when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. Read the full blog post here. Read more on Oracle and post your comment here.

Back to top

BUSINESS DRIVEN DEVELOPMENT (BDD) DEMO NOW AVAILABLE!

For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. DSS is pleased to announce the availability of the demo "Business Driven Development". This innovative demonstration uses a case-study approach to show business users how they can easily streamline their business processes - delivering greater efficiency, agility, visibility and collaboration with Oracle BPM and WebCenter. The BDD demonstration highlights a business problem at a fictional company, Avitek Corporation, and uses Oracle BPM and Oracle WebCenter to solve it. This holistic approach has specifically been chosen to appeal to a non-technical business analyst user. This demo is NOT focused on product features, but aims to guide users through a complete BPM lifecycle. The scenario is based on improving a simple order process (scenario details are in the demo script). Avitek Corporation is suffering from a manual, email-driven ordering process. Sales reps don't know where the customer orders are stuck (no visibility) and finance users are unable to manually approve every order (no automation). There are several areas where this process can be improved with Business Process Management technology. This demo shows how improving the following areas will significantly help resolve the business problems Avitek Corporation is facing:
Utilizing BPM for process management, rather than an unregulated, email-based process.
Utilizing automated services, rather than requiring a human to key into a system. For example, Finance checking the customer's credit rating is something that could be automated.
Centralizing business rules that can be integrated into a business process, rather than requiring a human to process them. For example, Finance must determine when orders can be automatically approved.
Providing insight and visibility into the process. For example, a sales rep needs to know the status of their customers' orders.
The BDD demo uses the following products:
Oracle BPM Suite 11g PS4FP
Oracle WebCenter 11g PS4FP (for Process Spaces)
Oracle Business Activity Monitoring 11g
Oracle Database 11g

Back to top

SOA LIFECYCLE MANAGEMENT

For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. We are pleased to announce the availability of the SOA Management demo that showcases some of the key provisioning and lifecycle management capabilities of SOA Management Pack Enterprise Edition (EE). This demo specifically focuses on some of the lifecycle management solutions for Oracle SOA Suite and Oracle Service Bus (OSB).
Demo highlights - the demo showcases the following capabilities:
Provisioning of SOA composites
Provisioning of OSB projects
Provisioning SOA and OSB artifacts in a future maintenance window

Back to top

ORACLE FUSION APPLICATIONS DESIGN PATTERNS

The Oracle Fusion Applications user experience design patterns are published!
These new, reusable usability solutions and best practices, which join the Oracle dashboard patterns and guidelines already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too. These Oracle Fusion Applications UX design patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when:
tailoring an Oracle Fusion application,
creating coexistence solutions that existing users will be delighted with, thus enabling graceful user transitions to Oracle Fusion Applications down the road, or
designing exciting, new, highly usable applications in the cloud or on-premise.
Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business. What's the best way to get started? We've made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you're on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let me know in the comments! Read more on Fusion Apps and post your comment here.

Back to top

UPDATED ORACLE MATERIAL

Integrated SOA Gateway Documentation - Implementation Guide | Developer's Guide
Webcast Series: Oracle's SOA and Oracle Business Process Management Solutions (Choice Hotels, Eaton, Farmers Insurance)
BAM design pointers by Kavitha Srinivasan
Seeking Oracle Fusion Middleware Go Live Stories: Oracle Fusion Middleware product management is looking for recent go-live stories to share with the Oracle sales team, sales consulting, product management and other internal groups. Customer contact details may remain anonymous. Your successful implementation will be featured in a quarterly report, and the chance to present on an internal webcast is also available. Contact Maria Forney ([email protected]) if you have a noteworthy implementation success story. This is a good opportunity for partners interested in showcasing Oracle Fusion Middleware implementations and gaining more exposure within Oracle.
Performance tuning resources - all in one place (docs, blogs, white papers, presentations): http://bit.ly/soa_resources

Back to top

HAVE YOU MISSED OUR LAST SOA PARTNER COMMUNITY WEBCASTS?

UPK Webcast & Business Driven Application Management & BPM 11g & Application Grid & GoldenGate & Fusion Middleware Pricing & OC4J to WebLogic & Next Generation SOA & Fusion Middleware in Utility & Fusion Middleware in Communications & Fusion Middleware in Public Services & Fusion Middleware in Financial Services

Please check your local OPN trainings calendar for additional training dates and locations.
Back to top

SOA PARTNER COMMUNITY CALENDAR

On-Demand Trainings:
Event Name | Language | Type
SOA Virtual Developers Day | English | Tech

In-Class Trainings:
Date | Event name | Location / Country | Contact person | Type
09-13.07.2012 | BPM Suite 11g advanced training by David Read | Lisbon, Portugal | Jürgen Kress | Tech
09-13.07.2012 | ADF 11g advanced training by Grant Ronald and Frank Nimphius | Lisbon, Portugal | Jürgen Kress | Tech
09-13.07.2012 | WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata | Lisbon, Portugal | Jürgen Kress | Tech
10.07.2012 | Fusion Middleware Virtual Developer Day | Online | OTN | Tech
10-12.07.2012 | WebLogic 12c training by Cosmin Tudor | Lisbon, Portugal | Jürgen Kress | Tech
16-18.07.2012 | SOA Suite 11g advanced training by Niall Commiskey | Munich, Germany | Jürgen Kress | Tech
16-18.07.2012 | ADF for BPM Suite 11g advanced training by David Read | Munich, Germany | Jürgen Kress | Tech
16-18.07.2012 | WebCenter Sites 11g advanced training by Product Management | Munich, Germany | Jürgen Kress | Tech
17-20.07.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
23-26.07.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
29-31.08.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
02-05.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
15-18.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
28-30.11.2012 | Oracle AIA 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
11-14.12.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
20-22.2.2013 | Oracle AIA 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
14-17.1.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
15-18.3.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech

Please check your local OPN Training Calendar for additional training and locations here.

Back to top

SOASCHOOL.COM - SOA CERTIFIED PROFESSIONAL (SOACP) PROGRAM

The SOASchool.com - SOA Certified Professional (SOACP) program is dedicated to excellence in the field of SOA and service-oriented computing. Through a series of seasoned course modules and exams, IT professionals have the opportunity to obtain a number of different certifications that recognize their accomplishment of gaining "project ready" SOA proficiency. This comprehensive and strictly vendor-neutral program was developed in cooperation with best-selling SOA author Thomas Erl and several major SOA organizations and academic institutions. Through the involvement of the SOA Education Committee, course contents and certification requirements are constantly reviewed and revised to stay current with developments in the service-oriented computing industry. The program is currently comprised of 12 course modules and 5 certifications and is expanding to 18 course modules and 8 certifications. For more information, visit www.soaschool.com and www.soacp.com.

Blog Twitter LinkedIn Mix Forum Wiki

Back to top

YOUR CONTENT ON THE NEWSLETTER AND ON THE SOA COMMUNITY PORTAL

Publishing your stories: we would like to invite our partners to publish information in the newsletter or on our SOA Community portal. In particular, we are looking for your real-life experience with our SOA technology. Please send your documents to Jürgen Kress. We look forward to your suggestions!
Back to top

SOA DISCUSSION FORUM BECOMES INTERACTIVE AT THE SOA COMMUNITY!

Do you want to chat to experts, including partners and Oracle SOA Product Development? Do you want to get the latest information about our SOA solutions and events? Attend our private online SOA Discussion Forum at OTN. Please send your OTN forums user name to Brigitte Felisaz. You must be a registered user to access the SOA Discussion Forum.

Back to top

INVITE YOUR COLLEAGUES TO JOIN THE SOA COMMUNITY

Please feel free to invite your colleagues to join the SOA Community and to participate in the SOA Assessment tests. For registration, please log in to the Oracle PartnerNetwork and go to: www.oracle.com/goto/emea/soa

For any questions on the above or concerning SOA and Oracle in general, please contact the Oracle EMEA Alliances & Channels SOA Team.

Best regards,
Oracle EMEA SOA Team
Jürgen Kress
SOA Partner Adoption EMEA
Tel. +49 89 1430 1479
E-Mail: [email protected]

    Read the article

  • Why do I disconnect every few seconds when using a USB wireless adapter?

    - by Rev3rse
    I know this is for Ubuntu questions, but Mint and Ubuntu are very similar, and I had the same problem with Ubuntu too, so I think this is the right place for my question. Anyway, I don't have experience with drivers and other things. After installing Linux on my machine (I did dist-upgrade, btw), everything seemed to be great because I didn't have to install any driver. After a while I realized that my connection stops after a few minutes (actually it shows that I'm connected, but it's not), so I have to reconnect, and after a few minutes it disconnects again. I'm using an Alfa USB wireless adapter AWS036H, and my Linux version is 11. I think the driver I'm using is Realtek. I searched on the Internet and found nothing. These are some outputs of a few things people usually ask for. Note: I'm NOT using a laptop.
    dmesg: [19445.604448] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.174.220.77 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=104 ID=10466 DF PROTO=TCP SPT=55150 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19448.164050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=41982 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=7566 DF PROTO=TCP INCOMPLETE [8 bytes] ] [19465.079565] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.128.216.31 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=5100 DF PROTO=TCP SPT=50169 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19486.270328] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.130.13.122 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=22207 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19497.480522] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3) [19497.593276] cfg80211: All devices are disconnected, going to restore regulatory settings [19497.593282] cfg80211: Restoring regulatory settings [19497.593346] cfg80211: Calling CRDA to update world regulatory domain [19497.638740] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule: [19497.638745] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638749] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule: [19497.638753] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638756] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule: [19497.638760] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638763] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule: [19497.638766] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638770] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule: [19497.638773] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638776] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule: [19497.638780] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638783] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule: [19497.638787] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638790] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with
regulatory rule: [19497.638794] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638797] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule: [19497.638801] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638804] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule: [19497.638807] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638811] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule: [19497.638814] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638817] cfg80211: Updating information on frequency 2467 MHz for a 20 MHz width channel with regulatory rule: [19497.638821] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638824] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule: [19497.638828] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638831] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule: [19497.638835] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638838] cfg80211: World regulatory domain updated: [19497.638841] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [19497.638845] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638848] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638852] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638855] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638859] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19513.145150] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1) [19513.146910] wlan0: authenticated [19513.252775] wlan0: associate with 00:24:c8:4b:46:e0 (try 1) [19513.255149] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2) [19513.255154] wlan0: associated [19515.675091] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=42720 DF PROTO=TCP SPT=1945 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19525.684312] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=49890 DF PROTO=TCP SPT=53401 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19551.856766] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=85.228.39.93 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=103 ID=1162 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19564.623005] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.202.21.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=17881 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19584.855364] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.49.151.87 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=117 ID=31716 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19604.688647] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.225.124.155 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=6656 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19626.362529] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.184.50.41 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=23241 DF 
PROTO=TCP SPT=1416 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19645.040906] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=92.250.245.244 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=TCP SPT=50061 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19665.212659] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.183.3.18 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=111 ID=1689 DF PROTO=TCP SPT=62817 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19685.036415] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=50638 DF PROTO=TCP SPT=49624 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19705.487915] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.122.17.82 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=112 ID=19070 DF PROTO=TCP SPT=54795 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19726.779185] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.88.116.239 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=32168 DF PROTO=TCP SPT=57330 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19744.755673] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.124.5.43 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=2288 DF PROTO=TCP SPT=6475 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19764.449183] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.216.35.19 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4281 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19784.456189] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.82.25.149 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=1866 DF PROTO=TCP SPT=59507 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19804.836687] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.56.199.3 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=14749 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19824.812685] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=186.28.7.159 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=44686 PROTO=UDP SPT=23418 DPT=6881 LEN=28 [19847.683314] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=63046 DF PROTO=TCP SPT=52192 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19884.711455] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=27914 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19884.983589] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.107.130.61 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=7742 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19905.681078] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=95.21.11.121 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=31775 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19926.035707] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.76.132.55 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=28140 DF PROTO=TCP SPT=51905 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19945.668326] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=188.92.0.197 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=7865 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19967.200339] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.102.172 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=105 ID=28408 DF PROTO=TCP SPT=63505 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19999.752732] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.166.171.200 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=36405 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20007.928719] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.235.59.16 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=46415 DF PROTO=TCP SPT=4537 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [20026.181726] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.182.169.36 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=106 ID=25126 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20048.845358] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.66.118.104 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=111 ID=18068 DF PROTO=TCP SPT=49928 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20064.341857] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=77.2.63.153 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=7242 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20090.093490] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=93.16.17.210 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=894 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20104.443995] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=89.83.235.99 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=17295 DF PROTO=TCP SPT=58979 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20128.625374] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.62.91.79 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=21793 DF PROTO=TCP SPT=51446 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20151.055506] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.135.217.213 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=112 ID=32452 DF PROTO=TCP SPT=55136 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20164.618874] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=47784 DF PROTO=TCP SPT=2422 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20184.337745] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.212.71 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=14544 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20205.007512] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.62.158.247 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=21562 DF PROTO=TCP SPT=3933 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20225.204018] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=113 ID=15045 DF PROTO=TCP SPT=49630 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20244.842290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=82.82.190.168 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=23741 DF PROTO=TCP SPT=50766 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20266.701649] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=88.153.108.124 DST=192.168.1.6 LEN=48 TOS=0x02 PREC=0x00 TTL=111 ID=206 DF PROTO=TCP SPT=2451 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20286.305414] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.240.86.73 
DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=107 ID=325 DF PROTO=TCP SPT=65184 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20294.293989] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43133 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56899 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20294.297015] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43134 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12080 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20294.297242] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43135 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25195 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20295.478338] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3) [20295.552735] cfg80211: All devices are disconnected, going to restore regulatory settings [20295.552742] cfg80211: Restoring regulatory settings [20295.552748] cfg80211: Calling CRDA to update world regulatory domain [20295.680635] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule: [20295.680641] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680644] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule: [20295.680648] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680652] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule: [20295.680655] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680658] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule: [20295.680662] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680665] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule: [20295.680669] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680672] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule: [20295.680676] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680679] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule: [20295.680683] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680687] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with regulatory rule: [20295.680690] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680693] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule: [20295.680697] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680700] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule: [20295.680704] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680708] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule: [20295.680711] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680715] cfg80211: Updating information on frequency 2467 
MHz for a 20 MHz width channel with regulatory rule: [20295.680718] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680722] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule: [20295.680725] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680728] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule: [20295.680732] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680736] cfg80211: World regulatory domain updated: [20295.680738] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [20295.680742] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20295.680745] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [20295.680749] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [20295.680752] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20295.680756] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20306.009341] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1) [20306.011225] wlan0: authenticated [20306.118095] wlan0: associate with 00:24:c8:4b:46:e0 (try 1) [20306.120963] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2) [20306.120967] wlan0: associated [20307.364427] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.91.101.130 DST=192.168.1.6 LEN=64 TOS=0x00 PREC=0x00 TTL=49 ID=36839 DF PROTO=TCP SPT=62492 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20310.914290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43180 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56900 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20310.936634] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43181 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12081 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20310.939017] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43182 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25196 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20325.941050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.118.78.99 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4407 PROTO=UDP SPT=2970 DPT=6881 LEN=28 [20328.801724] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43196 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56901 DF PROTO=TCP INCOMPLETE [8 bytes] ] ... inxi -N Network: Card-1 Realtek RTL8101E/RTL8102E PCI Express Fast Ethernet controller driver r8169 Card-2 Realtek RTL-8139/8139C/8139C+ driver 8139too /usr/lib/linuxmint/mintWifi/mintWifi.py ------------------------- * I. scanning WIFI PCI devices... ------------------------- * II. querying ndiswrapper... ------------------------- * III. querying iwconfig... lo no wireless extensions. eth0 no wireless extensions. eth1 no wireless extensions. 
wlan0 IEEE 802.11bg ESSID:"Home" Mode:Managed Frequency:2.437 GHz Access Point: 00:24:C8:4B:46:E0 Bit Rate=54 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=68/70 Signal level=-42 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:1132 Missed beacon:0 ------------------------- * IV. querying ifconfig... eth0 Link encap:Ethernet HWaddr 00:1f:d0:c9:b8:8e UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:43 Base address:0x4000 eth1 Link encap:Ethernet HWaddr 00:0e:2e:77:88:16 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:19 Base address:0xd000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:10696 errors:0 dropped:0 overruns:0 frame:0 TX packets:10696 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3823011 (3.8 MB) TX bytes:3823011 (3.8 MB) wlan0 Link encap:Ethernet HWaddr 00:c0:ca:44:62:d1 inet addr:192.168.1.6 Bcast:255.255.255.255 Mask:255.255.255.0 inet6 addr: fe80::2c0:caff:fe44:62d1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:90424 errors:0 dropped:0 overruns:0 frame:0 TX packets:65201 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:98024465 (98.0 MB) TX bytes:10345450 (10.3 MB) ------------------------- * V. querying DHCP... lspci 00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10) 00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10) 00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01) 00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01) 00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01) 00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01) 00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) 00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) 00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) 00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) 00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01) 00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01) 01:00.0 VGA compatible controller: nVidia Corporation G96 [GeForce 9400 GT] (rev a1) 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02) 04:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
RTL-8139/8139C/8139C+ (rev 10) lsmod Module Size Used by ipt_REJECT 12512 1 ipt_LOG 12784 5 xt_limit 12541 7 xt_tcpudp 12531 8 ipt_addrtype 12535 4 xt_state 12514 7 ip6table_filter 12711 1 ip6_tables 22545 1 ip6table_filter nf_nat_irc 12542 0 nf_conntrack_irc 13138 1 nf_nat_irc nf_nat_ftp 12548 0 nf_nat 24827 2 nf_nat_irc,nf_nat_ftp nf_conntrack_ipv4 19024 9 nf_nat nf_defrag_ipv4 12649 1 nf_conntrack_ipv4 nf_conntrack_ftp 13106 1 nf_nat_ftp nf_conntrack 69744 7 xt_state,nf_nat_irc,nf_conntrack_irc,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp iptable_filter 12706 1 ip_tables 18125 1 iptable_filter x_tables 21907 10 ipt_REJECT,ipt_LOG,xt_limit,xt_tcpudp,ipt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables nls_utf8 12493 10 udf 83795 1 crc_itu_t 12627 1 udf usb_storage 43946 1 uas 17676 0 snd_seq_dummy 12686 0 cryptd 19801 0 aes_i586 16956 1 aes_generic 38023 1 aes_i586 binfmt_misc 13213 1 dm_crypt 22463 0 vesafb 13449 1 nvidia 9766978 44 arc4 12473 2 rtl8187 56206 0 mac80211 257001 1 rtl8187 cfg80211 156212 2 rtl8187,mac80211 ppdev 12849 0 snd_hda_codec_realtek 255882 1 parport_pc 32111 1 psmouse 73312 0 eeprom_93cx6 12653 1 rtl8187 snd_hda_intel 24113 5 snd_hda_codec 90901 2 snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13274 1 snd_hda_codec snd_pcm 80042 3 snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 snd_rawmidi 25269 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51291 3 snd_seq_dummy,snd_seq_midi,snd_seq_midi_event snd_timer 28659 2 snd_pcm,snd_seq snd_seq_device 14110 4 snd_seq_dummy,snd_seq_midi,snd_rawmidi,snd_seq joydev 17322 0 snd 55295 18 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device serio_raw 12990 0 soundcore 12600 1 snd snd_page_alloc 14073 2 snd_hda_intel,snd_pcm lp 13349 0 parport 36746 3 ppdev,parport_pc,lp usbhid 41704 0 hid 77084 1 usbhid dm_raid45 88410 0 xor 21860 1 dm_raid45 btrfs 527388 0 zlib_deflate 26594 1 btrfs libcrc32c 12543 1 btrfs 8139too 23208 0 8139cp 22497 0 r8169 42534 0 floppy 60032 0

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes

    Summary for lazy readers:
    Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads
    Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch
    Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0
    The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application)
    New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system
    Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc.

    Top links:
    Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes
    Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen
    Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog
    Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    Okay, for those of you who are still with me, let's dig in a bit.

    Quick web dev notes on downloading and installing Visual Studio 2013

    I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer Tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.

    Visual Studio 2013 development settings and color theme

    When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.

    Color theme

    Sigh.
    Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the Blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for Quick Launch, then type Theme and hit enter.

    Signing in

    During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.

    Overview of shiny new things in ASP.NET land

    There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.

    One ASP.NET

    You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.

    Authentication

    In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication.
    All of that authentication stuff was built into each template, so it varied between the stacks, and you couldn't reuse it. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts: this is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.

    Identity

    There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET Identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right identity system. Some of my favorite features:
    There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works.
    It supports standard username / password as well as external authentication (OAuth, etc.).
    It's easy to customize without having to re-implement an entire provider.
    It's built using pluggable pieces, rather than one large monolithic system.
    It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc.
    You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted (see the sketch below).
    Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever).
    It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable.
    You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).
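    To make the profile data point concrete, here's a minimal sketch, assuming the ApplicationUser class generated by the default Visual Studio 2013 template; the two extra properties are hypothetical examples, not part of the template:

    using System;
    using Microsoft.AspNet.Identity.EntityFramework;

    public class ApplicationUser : IdentityUser
    {
        // Hypothetical profile properties - with the default Entity Framework
        // Code First store, these simply become columns on the users table.
        public string TwitterHandle { get; set; }
        public DateTime? Birthday { get; set; }
    }

    No provider configuration or schema scripts are needed; the Code First store picks the new properties up automatically.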
    New Bootstrap based project templates

    The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits:
    It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu.
    The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables, etc.) look nice and modern.
    Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from.
    Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc.
    Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display. When I shrink the page down (this is all based on page width, not user-agent sniffing), the menu turns into a nice mobile-friendly dropdown. For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme.

    Scaffolding

    The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but it wasn't fully baked for RTM. Look for it in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.

    Project Readme page

    This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs in the Visual Studio browser.

    ASP.NET MVC 5

    The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.

    Attribute routing

    ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
Here's a controller that includes an action whose method name is Hiding, but I've used attribute routing to configure it at /spaghetti/with-nesting/where-is-waldo:

public class SampleController : Controller
{
    [Route("spaghetti/with-nesting/where-is-waldo")]
    public string Hiding()
    {
        return "You found me!";
    }
}

I enable that in my RouteConfig.cs, and I can use it in conjunction with my other MVC routes like this:

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapMvcAttributeRoutes();

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}

You can read more about Attribute Routing in ASP.NET MVC 5 here.

Filter enhancements

There are two new additions to filters: Authentication Filters and Filter Overrides.

Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests.

Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers.

ASP.NET Web API 2

ASP.NET Web API 2 includes a lot of new features.

Attribute Routing

ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article.

OAuth 2.0

ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications.

OData Improvements

ASP.NET Web API now has full OData support. That meant adding some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.

Lots more

There's a huge list of other features, including CORS (cross-origin resource sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.

OWIN and Katana

I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interface for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host.

Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation.

Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.
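To give a feel for what an OWIN pipeline looks like under Katana, here's a minimal sketch. It assumes the Microsoft.Owin.Host.SystemWeb and Owin packages that the VS 2013 templates pull in; the MyApp namespace, the Startup class name and the inline middleware are illustrative only.

using System.Threading.Tasks;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Inline middleware: runs for every request, then hands
            // control to the next component in the OWIN pipeline.
            app.Use(async (context, next) =>
            {
                context.Response.Headers.Append("X-Pipeline", "Katana");
                await next();
            });

            // Terminal middleware: handles the request itself.
            app.Run(async context =>
            {
                context.Response.ContentType = "text/plain";
                await context.Response.WriteAsync("Hello from OWIN!");
            });
        }
    }
}

Because the pipeline is just composed middleware, the same Startup class runs under IIS via the SystemWeb host or self-hosted via HttpListener, which is the portability point being made above.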
Visual Studio Web Tools

Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time.

Browser Link

Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser, which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit.

New HTML editor

The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and IDs for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen.

Lots more Visual Studio web dev features

That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.

Lots, lots more

Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn't have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a page – and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text and then writing it back out. I've actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won't work outside of the Page class environment and it's not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments]

Enter Response.Filter

However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although those are rarely hit for a Response.Filter. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

A common Example: Dynamic GZip Encoding

A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: you apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect the GZip capability of the client and compress response output with a single line of code:

WebUtils.GZipEncodePage();

which is handled with a few lines of reusable code and a couple of static helper methods:
/// <summary>
/// Sets up the current page or handler to use GZip through a Response.Filter
/// IMPORTANT: You have to call this method before any output is generated!
/// </summary>
public static void GZipEncodePage()
{
    HttpResponse Response = HttpContext.Current.Response;

    if (IsGZipSupported())
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];

        if (AcceptEncoding.Contains("deflate"))
        {
            Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                      System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "deflate");
        }
        else
        {
            Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                      System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "gzip");
        }
    }

    // Allow proxy servers to cache encoded and unencoded versions separately.
    // Note: the response varies by the request's Accept-Encoding header
    Response.AppendHeader("Vary", "Accept-Encoding");
}

/// <summary>
/// Determines if GZip is supported
/// </summary>
/// <returns></returns>
public static bool IsGZipSupported()
{
    string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
    if (!string.IsNullOrEmpty(AcceptEncoding) &&
        (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
        return true;
    return false;
}

GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response.
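For usage context, here's a quick sketch of calling the helper from a page, assuming the WebUtils helpers shown above; any page or handler works, as long as the call runs before output is generated.

protected void Page_Load(object sender, EventArgs e)
{
    // Must run before any output is written to the Response
    WebUtils.GZipEncodePage();

    // ... normal page processing continues here
}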
Response.Filter content is chunked

So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple – you capture the output in Write(), transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than in a single Write() statement into the stream, which makes perfect sense for ASP.NET: streaming data back to IIS in smaller chunks minimizes memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don't directly get access to all of the response content. That's problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for.

So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then makes the stream available only when it's complete and ready to be flushed.

As I was thinking about the implementation I also started thinking about the few instances when I've used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations.

Creating a semi-generic Response Filter Stream Class

What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content. The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it's easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events:

CaptureStream, CaptureString
Capture the output only and provide either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream.

TransformStream, TransformString
Allow you to transform the complete response output with events that receive a MemoryStream or String respectively; you can modify the output and then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response.

TransformWrite, TransformWriteString
Allow you to transform the Response data as it is written, in its original chunk size, in the Stream's Write() method. Unlike TransformStream/TransformString, which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there's no caching involved, but it can cause problems when the content you're searching for splits over multiple chunks.

Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformStream += filter_TransformStream;
Response.Filter = filter;

and the event handler to do the transformation:

MemoryStream filter_TransformStream(MemoryStream ms)
{
    Encoding encoding = HttpContext.Current.Response.ContentEncoding;
    string output = encoding.GetString(ms.ToArray());

    output = FixPaths(output);

    ms = new MemoryStream(output.Length);
    byte[] buffer = encoding.GetBytes(output);
    ms.Write(buffer, 0, buffer.Length);

    return ms;
}

private string FixPaths(string output)
{
    string path = HttpContext.Current.Request.ApplicationPath;

    // override root path wonkiness
    if (path == "/")
        path = "";

    output = output.Replace("\"~/", "\"" + path + "/")
                   .Replace("'~/", "'" + path + "/");

    return output;
}

The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that's been modified or a brand new one – which is then sent back as the final response.

The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

and the event handler to do the transformation, calling the same FixPaths method shown above:

string filter_TransformString(string output)
{
    return FixPaths(output);
}

The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don't have to create a new stream class or subclass an existing Stream-based class.
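As a quick illustration of the capture-only side, here's a sketch of hooking CaptureString to log the final rendered output without modifying it. It assumes the ResponseFilterStream class shown below; the LogOutput helper is hypothetical and stands in for whatever logging you use.

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);

// Capture-only: the response is observed but passes through unmodified
filter.CaptureString += output =>
{
    // LogOutput is a hypothetical logging helper
    LogOutput("Rendered page was " + output.Length + " characters");
};

Response.Filter = filter;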
By the way, the example used here is kind of a cool trick: it transforms "~/" expressions inside of the final generated HTML output – even in plain HTML markup, not just in server controls – into the appropriate application relative path, in the same way that ResolveUrl would do. So you can write plain old HTML like this:

<a href="~/default.aspx">Home</a>

and have it turned into:

<a href="/myVirtual/default.aspx">Home</a>

without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use:

<img src="<%= ResolveUrl("~/images/home.gif") %>" />

in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose).

I can't take credit for this idea. While discussing the Response.Filter issues on Twitter I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there's definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then re-writes it later. Make sure to test for performance anytime you use a Response.Filter hookup and make sure it doesn't end up killing perf on you. You've been warned :-}.

How does ResponseFilterStream work?

The big win of this implementation IMHO is that it's a reusable component – so for implementation there's no new class, no subclassing – you simply attach to an event and implement an event handler method with a straightforward signature to retrieve the stream or string you're interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What's different than other implementations I've seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response.

For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn't pass on the Write immediately to the filter's internal stream. The data is cached, and only when the Flush() method is called to finalize the Stream's output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk.

Here's the implementation of ResponseFilterStream:
/// <summary>
/// A semi-generic Stream implementation for Response.Filter with
/// an event interface for handling Content transformations via
/// Stream or String.
/// <remarks>
/// Use with care for large output as this implementation copies
/// the output into a memory stream and so increases memory usage.
/// </remarks>
/// </summary>
public class ResponseFilterStream : Stream
{
    /// <summary>
    /// The original stream
    /// </summary>
    Stream _stream;

    /// <summary>
    /// Current position in the original stream
    /// </summary>
    long _position;

    /// <summary>
    /// Stream that original content is read into
    /// and then passed to TransformStream function
    /// </summary>
    MemoryStream _cacheStream = new MemoryStream(5000);

    /// <summary>
    /// Internal pointer that keeps track of the size
    /// of the cacheStream
    /// </summary>
    int _cachePointer = 0;

    public ResponseFilterStream(Stream responseStream)
    {
        _stream = responseStream;
    }

    /// <summary>
    /// Determines whether the stream is captured
    /// </summary>
    private bool IsCaptured
    {
        get
        {
            if (CaptureStream != null || CaptureString != null ||
                TransformStream != null || TransformString != null)
                return true;
            return false;
        }
    }

    /// <summary>
    /// Determines whether the Write method is outputting data immediately
    /// or delaying output until Flush() is fired.
    /// </summary>
    private bool IsOutputDelayed
    {
        get
        {
            if (TransformStream != null || TransformString != null)
                return true;
            return false;
        }
    }

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a MemoryStream instance. Output is captured but won't
    /// affect Response output.
    /// </summary>
    public event Action<MemoryStream> CaptureStream;

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a string. Output is captured but won't affect Response output.
    /// </summary>
    public event Action<string> CaptureString;

    /// <summary>
    /// Event that allows you to transform the stream as each chunk of
    /// the output is written in the Write() operation of the stream.
    /// This means that it's possible/likely that the input
    /// buffer will not contain the full response output but only
    /// one of potentially many chunks.
    ///
    /// This event is called as part of the filter stream's Write()
    /// operation.
    /// </summary>
    public event Func<byte[], byte[]> TransformWrite;

    /// <summary>
    /// Event that allows you to transform the response stream as
    /// each chunk of byte[] output is written during the stream's write
    /// operation. This means it's possible/likely that the string
    /// passed to the handler only contains a portion of the full
    /// output. Typical buffer chunks are around 16k a piece.
    ///
    /// This event is called as part of the stream's Write operation.
    /// </summary>
    public event Func<string, string> TransformWriteString;

    /// <summary>
    /// This event allows capturing and transformation of the entire
    /// output stream by caching all write operations and delaying final
    /// response output until Flush() is called on the stream.
    /// </summary>
    public event Func<MemoryStream, MemoryStream> TransformStream;

    /// <summary>
    /// Event that can be hooked up to handle Response.Filter
    /// Transformation. Passed a string that you can modify and
    /// return back as a return value. The modified content
    /// will become the final output.
    /// </summary>
    public event Func<string, string> TransformString;

    protected virtual void OnCaptureStream(MemoryStream ms)
    {
        if (CaptureStream != null)
            CaptureStream(ms);
    }

    private void OnCaptureStringInternal(MemoryStream ms)
    {
        if (CaptureString != null)
        {
            string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
            OnCaptureString(content);
        }
    }

    protected virtual void OnCaptureString(string output)
    {
        if (CaptureString != null)
            CaptureString(output);
    }

    protected virtual byte[] OnTransformWrite(byte[] buffer)
    {
        if (TransformWrite != null)
            return TransformWrite(buffer);
        return buffer;
    }

    private byte[] OnTransformWriteStringInternal(byte[] buffer)
    {
        Encoding encoding = HttpContext.Current.Response.ContentEncoding;
        string output = OnTransformWriteString(encoding.GetString(buffer));
        return encoding.GetBytes(output);
    }

    private string OnTransformWriteString(string value)
    {
        if (TransformWriteString != null)
            return TransformWriteString(value);
        return value;
    }

    protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
    {
        if (TransformStream != null)
            return TransformStream(ms);
        return ms;
    }

    /// <summary>
    /// Allows transforming of strings.
    ///
    /// Note this handler is internal and not meant to be overridden,
    /// as the TransformString event has to be hooked up in order
    /// for this handler to even fire, to avoid the overhead of string
    /// conversion on every pass through.
    /// </summary>
    /// <param name="responseText"></param>
    /// <returns></returns>
    private string OnTransformCompleteString(string responseText)
    {
        // apply the transformation and keep the result
        if (TransformString != null)
            responseText = TransformString(responseText);
        return responseText;
    }

    /// <summary>
    /// Wrapper method for OnTransformString that handles
    /// stream to string and vice versa conversions
    /// </summary>
    /// <param name="ms"></param>
    /// <returns></returns>
    internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
    {
        if (TransformString == null)
            return ms;

        string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
        content = TransformString(content);
        byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);

        ms = new MemoryStream();
        ms.Write(buffer, 0, buffer.Length);

        return ms;
    }

    public override bool CanRead
    {
        get { return true; }
    }

    public override bool CanSeek
    {
        get { return true; }
    }

    public override bool CanWrite
    {
        get { return true; }
    }

    public override long Length
    {
        get { return 0; }
    }

    public override long Position
    {
        get { return _position; }
        set { _position = value; }
    }

    public override long Seek(long offset, System.IO.SeekOrigin direction)
    {
        return _stream.Seek(offset, direction);
    }

    public override void SetLength(long length)
    {
        _stream.SetLength(length);
    }

    public override void Close()
    {
        _stream.Close();
    }

    /// <summary>
    /// Override flush by writing out the cached stream data
    /// </summary>
    public override void Flush()
    {
        if (IsCaptured && _cacheStream.Length > 0)
        {
            // Check for transform implementations
            _cacheStream = OnTransformCompleteStream(_cacheStream);
            _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

            OnCaptureStream(_cacheStream);
            OnCaptureStringInternal(_cacheStream);

            // write the stream back out if output was delayed
            if (IsOutputDelayed)
                _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

            // Clear the cache once we've written it out
            _cacheStream.SetLength(0);
        }

        // default flush behavior
        _stream.Flush();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        return _stream.Read(buffer, offset, count);
    }

    /// <summary>
    /// Overridden to capture output written by ASP.NET and cache it
    /// into a holding stream that is written out later when Flush()
    /// is called.
    /// </summary>
    /// <param name="buffer"></param>
    /// <param name="offset"></param>
    /// <param name="count"></param>
    public override void Write(byte[] buffer, int offset, int count)
    {
        if (IsCaptured)
        {
            // copy to holding buffer only - we'll write out later
            _cacheStream.Write(buffer, 0, count);
            _cachePointer += count;
        }

        // just transform this buffer
        if (TransformWrite != null)
            buffer = OnTransformWrite(buffer);
        if (TransformWriteString != null)
            buffer = OnTransformWriteStringInternal(buffer);

        if (!IsOutputDelayed)
            _stream.Write(buffer, offset, buffer.Length);
    }
}

The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain-jane passthrough stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet.

A few Things to consider

Performance

Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream, which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance.

Response.Filter works everywhere

A few questions came up in comments and discussion as to capturing ALL output hitting the site, and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for – like the page extension or, maybe more effectively, the Response.ContentType.
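To make the module scenario concrete, here's a hedged sketch of wiring up the filter from an IHttpModule, filtering on Response.ContentType so static content passes through untouched. The FilterModule name and the no-op transform are my own; hooking PostReleaseRequestState is one reasonable point where the ContentType has been set but the response hasn't been flushed, though other events can work too.

using System;
using System.Web;

// Hypothetical module that attaches ResponseFilterStream to HTML output only
public class FilterModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.PostReleaseRequestState += OnPostReleaseRequestState;
    }

    private void OnPostReleaseRequestState(object sender, EventArgs e)
    {
        HttpResponse response = ((HttpApplication)sender).Response;

        // Skip images, CSS and other static content that IIS 7
        // routes through the ASP.NET pipeline
        if (response.ContentType != "text/html")
            return;

        ResponseFilterStream filter = new ResponseFilterStream(response.Filter);
        filter.TransformString += output => output; // no-op placeholder transform
        response.Filter = filter;
    }

    public void Dispose() { }
}

For IIS 7 integrated mode you'd also register the module under system.webServer/modules in web.config so it actually runs.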
Response.Filter Chaining

Originally I thought that filter chaining doesn't work at all due to a bug in the stream implementation code. But it's quite possible to assign multiple filters to the Response.Filter property. So the following actually works to both compress the output and apply the transformed content:

WebUtils.GZipEncodePage();

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

However, the following does not work, resulting in invalid content encoding errors:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

WebUtils.GZipEncodePage();

In other words, multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained, or in which order they can be chained. In this case the GZip/Deflate stream filters apparently rely on the original content length of the output and choke when the content is modified. But if you attach the compression first it works fine, as unintuitive as that may seem.

Resources

Download example code
Capture Output from ASP.NET Pages

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET

    Read the article

  • Custom ASP.NET Routing to an HttpHandler

    - by Rick Strahl
As of version 4.0 ASP.NET natively supports routing via the now built-in System.Web.Routing namespace. Routing features are automatically integrated into the HttpRuntime via a few custom interfaces.

New Web Forms Routing Support

In ASP.NET 4.0 there are a host of improvements, including routing support baked into Web Forms via a RouteData property available on the Page class and a RouteCollection.MapPageRoute() route handler that makes it easy to route to Web Forms. Mapping ASP.NET Page routes is as simple as setting up the routes with MapPageRoute:

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

void RegisterRoutes(RouteCollection routes)
{
    routes.MapPageRoute("StockQuote", "StockQuote/{symbol}", "StockQuote.aspx");
    routes.MapPageRoute("StockQuotes", "StockQuotes/{symbolList}", "StockQuotes.aspx");
}

In the page you can then use the new Page class RouteData property to retrieve the dynamic route data information:

public partial class StockQuote1 : System.Web.UI.Page
{
    protected StockQuote Quote = null;

    protected void Page_Load(object sender, EventArgs e)
    {
        string symbol = RouteData.Values["symbol"] as string;

        StockServer server = new StockServer();
        Quote = server.GetStockQuote(symbol);

        // display stock data in Page View
    }
}

Simple, quick and doesn't require much explanation. If you're using Web Forms, most of your routing needs should be served just fine by this simple mechanism. Kudos to the ASP.NET team for putting this in the box and making it easy!

How Routing Works

Handling routing in ASP.NET involves these steps:

- Registering routes
- Creating a custom RouteHandler to retrieve an HttpHandler
- Attaching RouteData to your HttpHandler
- Picking up route information in your request code

Registering routes makes ASP.NET aware of the routes you want to handle via the static RouteTable.Routes collection. You basically add routes to this collection to let ASP.NET know which URL patterns it should watch for. You typically hook up routes in a RegisterRoutes method that fires in Application_Start, as I did in the example above, to ensure routes are added only once when the application first starts up.

When you create a route, you pass in a RouteHandler instance which ASP.NET caches and reuses as routes are matched. Once registered, ASP.NET monitors the routes and, if a match is found just prior to HttpHandler instantiation, uses the RouteHandler registered for the route and calls GetHttpHandler() on it to retrieve an HttpHandler instance. The RouteHandler.GetHttpHandler() method is responsible for creating an instance of an HttpHandler that is to handle the request and – if necessary – for assigning any additional custom data to the handler. At minimum you probably want to pass the RouteData to the handler so the handler can identify the request based on the route data available. To do this you typically add a RouteData property to your handler and then assign the property from the RouteHandler's request context. This is essentially how Page.RouteData comes into being, and this approach should work well for any custom handler implementation that requires RouteData.

It's a shame that ASP.NET doesn't have a top level intrinsic object that's accessible off the HttpContext object to provide route data more generically, but since RouteData is directly tied to HttpHandlers and not all handlers support it, it might cause some confusion as to when it's actually available.
Bottom line is that if you want to hold on to RouteData you have to assign it to a custom property of the handler, or else pass it to the handler via the Context.Items[] collection, where it can be retrieved on an as-needed basis.

It's important to understand that routing is hooked up via RouteHandlers that are responsible for loading HttpHandler instances. RouteHandlers are invoked for every request that matches a route, and through this RouteHandler instance the handler gains access to the current RouteData. Because of this logic it's important to understand that routing is really tied to HttpHandlers and not available prior to handler instantiation, which is pretty late in the HttpRuntime's request pipeline. IOW, routing works with handlers, but not earlier in the pipeline within modules. Specifically, ASP.NET calls RouteHandler.GetHttpHandler() from the PostResolveRequestCache HttpRuntime pipeline event; the call stack at the beginning of the GetHttpHandler() call shows it firing just before handler resolution.

Non-Page Routing – You need to build custom RouteHandlers

If you need to route to a custom HttpHandler or other non-Page (and non-MVC) endpoint in the HttpRuntime, there is no generic mapping support available. You need to create a custom RouteHandler that can manage creating an instance of an HttpHandler that is fired in response to a routed request. Depending on what you are doing, this process can be simple or fairly involved, as your code is responsible for deciding – based on the route data provided – which handler to instantiate, and more importantly how to pass the route data on to the handler.

Luckily creating a RouteHandler is easy by implementing the IRouteHandler interface, which has only a single GetHttpHandler(RequestContext context) method. In this method you can pick up the requestContext.RouteData, instantiate the HttpHandler of choice, and assign the RouteData to it. Then pass back the handler and you're done.

Here's a simple example of a GetHttpHandler() method that dynamically creates a handler based on a passed-in handler type:

/// <summary>
/// Retrieves an Http Handler based on the type specified in the constructor
/// </summary>
/// <param name="requestContext"></param>
/// <returns></returns>
IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
{
    IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

    // If we're dealing with a Callback Handler
    // pass the RouteData for this route to the Handler
    if (handler is CallbackHandler)
        ((CallbackHandler)handler).RouteData = requestContext.RouteData;

    return handler;
}

Note that this code checks for a specific type of handler and, if it matches, assigns the RouteData to this handler. This is optional, but it's quite a common scenario if you want to work with RouteData.
If the handler you need to instantiate isn't under your control but you still need to pass RouteData to handler code, an alternative is to pass the RouteData via the HttpContext.Items collection:

IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
{
    IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;
    requestContext.HttpContext.Items["RouteData"] = requestContext.RouteData;
    return handler;
}

The code in the handler implementation can then pick up the RouteData from the context collection as needed:

RouteData routeData = HttpContext.Current.Items["RouteData"] as RouteData;

This isn't as clean as having an explicit RouteData property, but it does have the advantage that the route data is visible anywhere in the handler's code chain. It's definitely preferable to create a custom property on your handler, but the Context work-around works in a pinch when you don't own the handler code and have dynamic code executing as part of the handler execution.

An Example of a Custom RouteHandler: Attribute Based Route Implementation

In this post I'm going to discuss a custom routing implementation I built for my CallbackHandler class in the West Wind Web & Ajax Toolkit. CallbackHandler can be very easily used for creating AJAX, REST and POX requests following RPC-style method mapping. You can pass parameters via URL query string, POST data or raw data structures, and you can retrieve results as JSON, XML or raw string/binary data. It's a quick and easy way to build service interfaces with no fuss.

As a quick review, here's how CallbackHandler works: you create an Http Handler that derives from CallbackHandler, you implement methods that have a [CallbackMethod] attribute, and that's it. Here's an example of a CallbackHandler implementation in an ashx.cs based handler:

// RestService.ashx.cs
public class RestService : CallbackHandler
{
    [CallbackMethod]
    public StockQuote GetStockQuote(string symbol)
    {
        StockServer server = new StockServer();
        return server.GetStockQuote(symbol);
    }

    [CallbackMethod]
    public StockQuote[] GetStockQuotes(string symbolList)
    {
        StockServer server = new StockServer();
        string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
        return server.GetStockQuotes(symbols);
    }
}

CallbackHandler makes it super easy to create a method on the server, pass data to it via POST, QueryString or raw JSON/XML data, and then retrieve the results easily back in various formats. This works wonderfully and I've used these tools in many projects for myself and with clients. But one thing missing has been the ability to create clean URLs. Typical URLs looked like this:

http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuote&symbol=msft
http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuotes&symbolList=msft,intc,gld,slw,mwe&format=xml

which works and is clear enough, but is also clearly very ugly. It would be much nicer if URLs could look like this:

http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuote/msft
http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuotes/msft,intc,gld,slw?format=xml

(the virtual root in this sample is WestwindWebToolkit/Samples and StockQuote/{symbol} is the route) (If you use FireFox, try using the JSONView plug-in to make it easier to view JSON content.)

So, taking a clue from the WCF REST tools that use RouteUrls, I set out to create a way to specify RouteUrls for each of the endpoints.
The change made basically allows changing the above to:

[CallbackMethod(RouteUrl="RestService/StockQuote/{symbol}")]
public StockQuote GetStockQuote(string symbol)
{
    StockServer server = new StockServer();
    return server.GetStockQuote(symbol);
}

[CallbackMethod(RouteUrl = "RestService/StockQuotes/{symbolList}")]
public StockQuote[] GetStockQuotes(string symbolList)
{
    StockServer server = new StockServer();
    string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
    return server.GetStockQuotes(symbols);
}

where a RouteUrl is specified as part of the Callback attribute. And with the changes made with RouteUrls I can now get URLs like the second set shown earlier. So how does that work? Let's find out…

How to Create Custom Routes

As mentioned earlier, routing is made up of several steps:

- Creating a custom RouteHandler to create HttpHandler instances
- Mapping the actual routes to the RouteHandler
- Retrieving the RouteData and actually doing something useful with it in the HttpHandler

In the CallbackHandler routing example above this works out to something like this:

- Create a custom RouteHandler that includes a property to track the method to call
- Set up the routes using Reflection against the class, looking for any RouteUrls in the CallbackMethod attribute
- Add a RouteData property to the CallbackHandler so we can access the RouteData in the code of the handler

Creating a Custom Route Handler

To make the above work I created a custom RouteHandler class that includes the actual IRouteHandler implementation as well as a generic static method to automatically register all routes marked with the [CallbackMethod(RouteUrl="…")] attribute. Here's the code:

/// <summary>
/// Route handler that can create instances of CallbackHandler derived
/// callback classes. The route handler tracks the method name and
/// creates an instance of the service in a predictable manner
/// </summary>
/// <typeparam name="TCallbackHandler">CallbackHandler type</typeparam>
public class CallbackHandlerRouteHandler : IRouteHandler
{
    /// <summary>
    /// Method name that is to be called on this route.
    /// Set by the automatically generated RegisterRoutes
    /// invocation.
    /// </summary>
    public string MethodName { get; set; }

    /// <summary>
    /// The type of the handler we're going to instantiate.
    /// Needed so we can semi-generically instantiate the
    /// handler and call the method on it.
    /// </summary>
    public Type CallbackHandlerType { get; set; }

    /// <summary>
    /// Constructor to pass in the two required components we
    /// need to create an instance of our handler.
    /// </summary>
    /// <param name="methodName"></param>
    /// <param name="callbackHandlerType"></param>
    public CallbackHandlerRouteHandler(string methodName, Type callbackHandlerType)
    {
        MethodName = methodName;
        CallbackHandlerType = callbackHandlerType;
    }

    /// <summary>
    /// Retrieves an Http Handler based on the type specified in the constructor
    /// </summary>
    /// <param name="requestContext"></param>
    /// <returns></returns>
    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
    {
        IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

        // If we're dealing with a Callback Handler
        // pass the RouteData for this route to the Handler
        if (handler is CallbackHandler)
            ((CallbackHandler)handler).RouteData = requestContext.RouteData;

        return handler;
    }

    /// <summary>
    /// Generic method to register all routes from a CallbackHandler
    /// that have RouteUrls defined on the [CallbackMethod] attribute
    /// </summary>
    /// <typeparam name="TCallbackHandler">CallbackHandler Type</typeparam>
    /// <param name="routes"></param>
    public static void RegisterRoutes<TCallbackHandler>(RouteCollection routes)
    {
        // find all methods
        var methods = typeof(TCallbackHandler).GetMethods(BindingFlags.Instance | BindingFlags.Public);

        foreach (var method in methods)
        {
            var attrs = method.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
            if (attrs.Length < 1)
                continue;

            CallbackMethodAttribute attr = attrs[0] as CallbackMethodAttribute;
            if (string.IsNullOrEmpty(attr.RouteUrl))
                continue;

            // Add the route
            routes.Add(method.Name,
                       new Route(attr.RouteUrl,
                                 new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));
        }
    }
}

The RouteHandler implements IRouteHandler, and its responsibility via the GetHttpHandler() method is to create an HttpHandler based on the route data. When ASP.NET calls GetHttpHandler() it passes a requestContext parameter which includes a requestContext.RouteData property. This parameter holds the current request's route data as well as an instance of the current RouteHandler.

If you look at GetHttpHandler() you can see that the code creates an instance of the handler we are interested in and then sets the RouteData property on the handler. This is how you can pass the current request's RouteData to the handler. The RouteData object also has a RouteData.RouteHandler property that is available to the handler later, which is useful in order to get additional information about the current route. In our case the RouteHandler includes a MethodName property that identifies the method to execute in the handler, since that value no longer comes from the URL and we need to figure out the method name some other way. The method name is mapped explicitly when the RouteHandler is created: the static method that auto-registers all CallbackMethods with RouteUrls sets the method name when it creates the routes while reflecting over the methods (more on this in a minute). The important point here is that you can attach additional properties to the RouteHandler, and you can then access the RouteHandler and its properties later in the handler to pick up these custom values. This is a crucial feature in that the RouteHandler serves to pass additional context to the handler so it knows what actions to perform.

The automatic route registration is handled by the static RegisterRoutes<TCallbackHandler> method. This method is generic and totally reusable for any CallbackHandler type handler.
To register a CallbackHandler and any RouteUrls it has defined, you simply use code like this in Application_Start (or other application startup code):

protected void Application_Start(object sender, EventArgs e)
{
    // Register Routes for RestService
    CallbackHandlerRouteHandler.RegisterRoutes<RestService>(RouteTable.Routes);
}

If you have multiple CallbackHandler style services you can make multiple calls to RegisterRoutes, one for each of the service types. RegisterRoutes internally uses reflection to run through all the methods of the handler, looking for CallbackMethod attributes and whether a RouteUrl is specified. If it is, a new instance of a CallbackHandlerRouteHandler is created and the name of the method and the type are set:

routes.Add(method.Name,
           new Route(attr.RouteUrl,
                     new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));

While the routing with CallbackHandlerRouteHandler is set up automatically for all methods that use the RouteUrl attribute, you can also use code to hook up those routes manually and skip using the attribute. The code for this is straightforward and just requires that you manually map each individual route to each method you want routed:

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

void RegisterRoutes(RouteCollection routes)
{
    routes.Add("StockQuote Route",
               new Route("StockQuote/{symbol}",
                         new CallbackHandlerRouteHandler("GetStockQuote", typeof(RestService))));

    routes.Add("StockQuotes Route",
               new Route("StockQuotes/{symbolList}",
                         new CallbackHandlerRouteHandler("GetStockQuotes", typeof(RestService))));
}

I think it's clearly easier to have CallbackHandlerRouteHandler.RegisterRoutes() do this automatically for you based on RouteUrl attributes, but some people have a real aversion to attaching logic via attributes. Just realize that the option to manually create your routes is available as well.

Using the RouteData in the Handler

A RouteHandler's responsibility is to create an HttpHandler, and as mentioned earlier, natively IHttpHandler doesn't have any support for RouteData. In order to utilize RouteData in your handler code you have to pass the RouteData to the handler. When my CallbackHandlerRouteHandler creates the HttpHandler instance, it assigns the custom RouteData property on the handler:

IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

if (handler is CallbackHandler)
    ((CallbackHandler)handler).RouteData = requestContext.RouteData;

return handler;

Again this only works if you actually add a RouteData property to your handler explicitly, as I did in my CallbackHandler implementation:

/// <summary>
/// Optionally store RouteData on this handler
/// so we can access it internally
/// </summary>
public RouteData RouteData { get; set; }

and the RouteHandler needs to set it when it creates the handler instance. Once you have the route data in your handler you can access its route keys and values and also the RouteHandler.
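For a handler that doesn't need the method-name plumbing, reading routed values boils down to a couple of lines. Here's a hedged sketch of a plain IHttpHandler carrying the RouteData property described above; the StockQuoteHandler name and the response rendering are illustrative only, not part of the toolkit.

// Hypothetical handler for the route "StockQuote/{symbol}"
public class StockQuoteHandler : IHttpHandler
{
    // Assigned by the custom RouteHandler, as shown above
    public RouteData RouteData { get; set; }

    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Pull the dynamic route value out of the RouteData,
        // falling back to the query string for non-routed hits
        string symbol = RouteData != null
                            ? RouteData.Values["symbol"] as string
                            : context.Request.QueryString["symbol"];

        context.Response.ContentType = "text/plain";
        context.Response.Write("Requested quote for: " + symbol);
    }
}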
Since my RouteHandler has a custom property for the MethodName, I can do something like this to retrieve the MethodName from within the handler (this example is actually not in the handler itself; target is an instance passed to the processor):

// check for Route Data method name
if (target is CallbackHandler)
{
    var routeData = ((CallbackHandler)target).RouteData;
    if (routeData != null)
        methodToCall = ((CallbackHandlerRouteHandler)routeData.RouteHandler).MethodName;
}

When I need to access the dynamic values in the route (symbol in StockQuote/{symbol}) I can retrieve them easily with the Values collection (RouteData.Values["symbol"]). In my CallbackHandler processing logic I'm basically matching parameter names to route parameters:

// look for parameters in the route
if (routeData != null)
{
    string parmString = routeData.Values[parameter.Name] as string;
    adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
}

And with that we've come full circle. We've created a custom RouteHandler that passes the RouteData to the handler it creates. We've registered our routes to use the RouteHandler, and we've utilized the route data in our handler. For completeness' sake, here's the routine that executes a method call based on the parameters passed in; one of the options is to retrieve the inbound parameters off RouteData (as well as from POST data or QueryString parameters):

internal object ExecuteMethod(string method, object target, string[] parameters,
                              CallbackMethodParameterType paramType,
                              ref CallbackMethodAttribute callbackMethodAttribute)
{
    HttpRequest Request = HttpContext.Current.Request;
    object Result = null;

    // Stores parsed parameters (from string JSON or QueryString values)
    object[] adjustedParms = null;

    Type PageType = target.GetType();
    MethodInfo MI = PageType.GetMethod(method, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    if (MI == null)
        throw new InvalidOperationException("Invalid Server Method.");

    object[] methods = MI.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
    if (methods.Length < 1)
        throw new InvalidOperationException("Server method is not accessible due to missing CallbackMethod attribute");

    if (callbackMethodAttribute != null)
        callbackMethodAttribute = methods[0] as CallbackMethodAttribute;

    ParameterInfo[] parms = MI.GetParameters();

    JSONSerializer serializer = new JSONSerializer();

    RouteData routeData = null;
    if (target is CallbackHandler)
        routeData = ((CallbackHandler)target).RouteData;

    int parmCounter = 0;
    adjustedParms = new object[parms.Length];

    foreach (ParameterInfo parameter in parms)
    {
        // Retrieve parameters out of QueryString or POST buffer
        if (parameters == null)
        {
            // look for parameters in the route
            if (routeData != null)
            {
                string parmString = routeData.Values[parameter.Name] as string;
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // GET parameters are parsed as plain string values - no JSON encoding
            else if (HttpContext.Current.Request.HttpMethod == "GET")
            {
                // Look up the parameter by name
                string parmString = Request.QueryString[parameter.Name];
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // POST parameters are treated as methodParameters that are JSON encoded
            else if (paramType == CallbackMethodParameterType.Json)
                adjustedParms[parmCounter] =
                    serializer.Deserialize(Request.Params["parm" + (parmCounter + 1).ToString()],
                                           parameter.ParameterType);
            else
                adjustedParms[parmCounter] =
                    SerializationUtils.DeSerializeObject(Request.Params["parm" + (parmCounter + 1).ToString()],
                                                         parameter.ParameterType);
        }
        else if (paramType == CallbackMethodParameterType.Json)
            adjustedParms[parmCounter] = serializer.Deserialize(parameters[parmCounter], parameter.ParameterType);
        else
            adjustedParms[parmCounter] = SerializationUtils.DeSerializeObject(parameters[parmCounter], parameter.ParameterType);

        parmCounter++;
    }

    Result = MI.Invoke(target, adjustedParms);

    return Result;
}

The code basically uses Reflection to loop through all the parameters available on the method and tries to assign the parameters from RouteData, QueryString or POST variables. The parameters are converted into their appropriate types and then used to eventually make a Reflection based method call. What's sweet is that the RouteData retrieval is just another option for dealing with the inbound data in this scenario, and it adds exactly two lines of code plus the code to retrieve the MethodName I showed previously – a seriously low impact addition that adds a lot of extra value to this endpoint callback processing implementation.

Debugging your Routes

If you create a lot of routes it's easy to run into route conflicts where multiple routes have the same path and overlap with each other. This can be difficult to debug, especially if you are using automatically generated routes like the routes created by CallbackHandlerRouteHandler.RegisterRoutes. Luckily there's a tool that can help you out with this nicely. Phil Haack created a RouteDebugging tool you can download and add to your project. The easiest way to add it to your project is to use NuGet (Add Library Package Reference on your project's References node), which adds a RouteDebug assembly to your project. Once installed you can easily debug your routes with this simple line of code, which needs to be added at application startup:

protected void Application_Start(object sender, EventArgs e)
{
    CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes);

    // Debug your routes
    RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
}

Any routed URL then displays a debug page showing your current route data and all the routes that are mapped, along with a flag that displays which route was actually matched. This is useful – if you have any overlap of routes you will be able to see which routes are triggered – the first one in the sequence wins. This tool has saved my ass on a few occasions – and with NuGet now it's easy to add it to your project in a few seconds and then remove it when you're done.

Routing Around

Custom routing seems slightly complicated at first blush due to its disconnected components of RouteHandler, route registration and mapping of custom handlers. But once you understand the relationship between a RouteHandler, the RouteData and how to pass it to a handler, utilizing routing becomes a lot easier, as you can easily pass context from the registration to the RouteHandler and through to the HttpHandler. The most important thing to understand when building custom routing solutions is to figure out how to map URLs in such a way that the handler can figure out all the pieces it needs to process the request.
This can be via URL routing parameters and, as I did in my example, by passing additional context information as part of the RouteHandler instance that provides the proper execution context. In my case this 'context' was the method name, but it could be an actual static value like an enum identifying an operation or category in an application. Basically, user supplied data comes in through the URL, and static application internal data can be passed via RouteHandler property values.

Routing can make your application URLs easier to read for non-techie types, regardless of whether you're building service type or REST applications, or full on Web interfaces. Routing in ASP.NET 4.0 makes it possible to create just about any extensionless URLs you can dream up with custom RouteHandlers.

References

Sample Project
Includes the sample CallbackHandler service discussed here along with compiled versions of the Westwind.Web and Westwind.Utilities assemblies. (requires .NET 4.0/VS 2010)

West Wind Web Toolkit
Includes the full implementation of CallbackHandler and the Routing Handler

West Wind Web Toolkit Source Code
Contains the full source code to the Westwind.Web and Westwind.Utilities assemblies used in these samples. Includes the source described in the post. (Latest build in the Subversion Repository)

CallbackHandler Source
(Relevant code to this article tree in Westwind.Web assembly)

JSONView FireFox Plugin
A simple FireFox plugin to easily view JSON data natively in FireFox. For IE you can use a registry hack to display JSON as raw text.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  AJAX  HTTP

    Read the article

  • Another "Windows 7 entry missing from Grub2" Question

    - by 4x10
Like many before me, I have the following problem: after installing Ubuntu (with Windows 7 already installed), the GRUB boot loader wouldn't show Windows 7 as a boot option, though I can boot fine if I use the "Choose Boot Device" options on the x220. The difference is that I am trying to use UEFI only, so many answers didn't really fit my problem, though I tried several things:

- after running Boot Repair, it destroyed the Ubuntu boot loader
- a custom entry in /etc/grub.d/40_custom for Windows, which doesn't show up
- many update-grub runs and reboots
- trying the Windows repair/recovery thing; while being there I also did bootrec.exe /FixBoot, then update-grub and rebooted again
- and finally, because it was so much fun, I installed Linux all over again, formatting and deleting everything Linux-related before that.

Now that I think of it, Ubuntu also didn't notice Windows being there during the setup, and it still doesn't, according to the Boot Info from Boot Repair.

Boot Info Script 0.61-git-patched [23 April 2012]
============================= Boot Info Summary: ===============================
=> No boot loader is installed in the MBR of /dev/sda.
sda1: __________________________________________________________________________ File system: vfat Boot sector type: Windows 7: FAT32 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /efi/Boot/bootx64.efi /efi/ubuntu/grubx64.efi
sda2: __________________________________________________________________________ File system: Boot sector type: - Boot sector info: Mounting failed: mount: unknown filesystem type ''
sda3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /Windows/System32/winload.exe
sda4: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu precise (development branch) Boot files: /boot/grub/grub.cfg /etc/fstab
sda5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files:
sda6: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info:
============================ Drive/Partition Info: =============================
Drive: sda _____________________________________________________________________
Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes
Partition Boot Start Sector End Sector # of Sectors Id System
/dev/sda1 1 625,142,447 625,142,447 ee GPT
GUID Partition Table detected.
Partition Start Sector End Sector # of Sectors System /dev/sda1 2,048 206,847 204,800 EFI System partition /dev/sda2 206,848 468,991 262,144 Microsoft Reserved Partition (Windows) /dev/sda3 468,992 170,338,303 169,869,312 Data partition (Windows/Linux) /dev/sda4 170,338,304 330,338,304 160,000,001 Data partition (Windows/Linux) /dev/sda5 330,338,305 617,141,039 286,802,735 Data partition (Windows/Linux) /dev/sda6 617,141,040 625,141,040 8,000,001 Swap partition (Linux) "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 885C-ED1B vfat /dev/sda3 EE06CC0506CBCCB1 ntfs /dev/sda4 604dd3b2-64ca-4200-b8fb-820e8d0ca899 ext4 /dev/sda5 d62515fd-8120-4a74-b17b-0bdf244124a3 ext4 /dev/sda6 7078b649-fb2a-4c59-bd03-fd31ef440d37 swap ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda1 /boot/efi vfat (rw) /dev/sda4 / ext4 (rw,errors=remount-ro) /dev/sda5 /home ext4 (rw) =========================== sda4/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod efi_gop insmod efi_uga insmod video_bochs insmod video_cirrus } insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="$1" if [ "$1" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-20-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 linux /boot/vmlinuz-3.2.0-20-generic 
root=UUID=604dd3b2-64ca-4200-b8fb-820e8d0ca899 ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-20-generic } menuentry 'Ubuntu, with Linux 3.2.0-20-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 echo 'Loading Linux 3.2.0-20-generic ...' linux /boot/vmlinuz-3.2.0-20-generic root=UUID=604dd3b2-64ca-4200-b8fb-820e8d0ca899 ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-20-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_gpt insmod ext2 set root='(hd0,gpt4)' search --no-floppy --fs-uuid --set=root 604dd3b2-64ca-4200-b8fb-820e8d0ca899 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda4/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda4 during installation UUID=604dd3b2-64ca-4200-b8fb-820e8d0ca899 / ext4 errors=remount-ro 0 1 # /boot/efi was on /dev/sda1 during installation UUID=885C-ED1B /boot/efi vfat defaults 0 1 # /home was on /dev/sda5 during installation UUID=d62515fd-8120-4a74-b17b-0bdf244124a3 /home ext4 defaults 0 2 # swap was on /dev/sda6 during installation UUID=7078b649-fb2a-4c59-bd03-fd31ef440d37 none swap sw 0 0 -------------------------------------------------------------------------------- =================== sda4: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 129.422874451 = 138.966753280 boot/grub/grub.cfg 1 83.059570312 = 89.184534528 boot/initrd.img-3.2.0-20-generic 2 101.393131256 = 108.870045696 boot/vmlinuz-3.2.0-20-generic 1 83.059570312 = 89.184534528 initrd.img 2 101.393131256 = 108.870045696 vmlinuz 1 ADDITIONAL INFORMATION : =================== log of boot-repair 2012-04-25__23h40 =================== boot-repair version : 3.18-0ppa3~precise boot-sav version : 3.18-0ppa4~precise glade2script version : 0.3.2.1-0ppa7~precise internet: connected python-software-properties version : 0.82.7 0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 591 not upgraded. 
dpkg-preconfigure: unable to re-open stdin: No such file or directory boot-repair is executed in installed-session (Ubuntu precise (development branch) , precise , Ubuntu , x86_64) WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. =================== OSPROBER: /dev/sda4:The OS now in use - Ubuntu precise (development branch) CurrentSession:linux =================== BLKID: /dev/sda3: UUID="EE06CC0506CBCCB1" TYPE="ntfs" /dev/sda1: UUID="885C-ED1B" TYPE="vfat" /dev/sda4: UUID="604dd3b2-64ca-4200-b8fb-820e8d0ca899" TYPE="ext4" /dev/sda5: UUID="d62515fd-8120-4a74-b17b-0bdf244124a3" TYPE="ext4" /dev/sda6: UUID="7078b649-fb2a-4c59-bd03-fd31ef440d37" TYPE="swap" 1 disks with OS, 1 OS : 1 Linux, 0 MacOS, 0 Windows, 0 unknown type OS. WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util sfdisk doesn't support GPT. Use GNU Parted. =================== /etc/default/grub : # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 #GRUB_HIDDEN_TIMEOUT=0 #GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" EFI_OF_PART[1] (, ) =================== dmesg | grep EFI : [ 0.000000] EFI v2.00 by Lenovo [ 0.000000] Kernel-defined memdesc doesn't match the one from EFI! 
[ 0.000000] EFI: mem00: type=3, attr=0xf, range=[0x0000000000000000-0x0000000000001000) (0MB) [ 0.000000] EFI: mem01: type=7, attr=0xf, range=[0x0000000000001000-0x000000000004e000) (0MB) [ 0.000000] EFI: mem02: type=3, attr=0xf, range=[0x000000000004e000-0x0000000000058000) (0MB) [ 0.000000] EFI: mem03: type=10, attr=0xf, range=[0x0000000000058000-0x0000000000059000) (0MB) [ 0.000000] EFI: mem04: type=7, attr=0xf, range=[0x0000000000059000-0x000000000005e000) (0MB) [ 0.000000] EFI: mem05: type=4, attr=0xf, range=[0x000000000005e000-0x000000000005f000) (0MB) [ 0.000000] EFI: mem06: type=3, attr=0xf, range=[0x000000000005f000-0x00000000000a0000) (0MB) [ 0.000000] EFI: mem07: type=2, attr=0xf, range=[0x0000000000100000-0x00000000005b9000) (4MB) [ 0.000000] EFI: mem08: type=7, attr=0xf, range=[0x00000000005b9000-0x0000000020000000) (506MB) [ 0.000000] EFI: mem09: type=0, attr=0xf, range=[0x0000000020000000-0x0000000020200000) (2MB) [ 0.000000] EFI: mem10: type=7, attr=0xf, range=[0x0000000020200000-0x00000000364e4000) (354MB) [ 0.000000] EFI: mem11: type=2, attr=0xf, range=[0x00000000364e4000-0x000000003726a000) (13MB) [ 0.000000] EFI: mem12: type=7, attr=0xf, range=[0x000000003726a000-0x0000000040000000) (141MB) [ 0.000000] EFI: mem13: type=0, attr=0xf, range=[0x0000000040000000-0x0000000040200000) (2MB) [ 0.000000] EFI: mem14: type=7, attr=0xf, range=[0x0000000040200000-0x000000009df35000) (1501MB) [ 0.000000] EFI: mem15: type=2, attr=0xf, range=[0x000000009df35000-0x00000000d39a0000) (858MB) [ 0.000000] EFI: mem16: type=4, attr=0xf, range=[0x00000000d39a0000-0x00000000d39c0000) (0MB) [ 0.000000] EFI: mem17: type=7, attr=0xf, range=[0x00000000d39c0000-0x00000000d5df5000) (36MB) [ 0.000000] EFI: mem18: type=4, attr=0xf, range=[0x00000000d5df5000-0x00000000d6990000) (11MB) [ 0.000000] EFI: mem19: type=7, attr=0xf, range=[0x00000000d6990000-0x00000000d6b82000) (1MB) [ 0.000000] EFI: mem20: type=1, attr=0xf, range=[0x00000000d6b82000-0x00000000d6b9f000) (0MB) [ 0.000000] EFI: mem21: type=7, attr=0xf, range=[0x00000000d6b9f000-0x00000000d77b0000) (12MB) [ 0.000000] EFI: mem22: type=4, attr=0xf, range=[0x00000000d77b0000-0x00000000d780a000) (0MB) [ 0.000000] EFI: mem23: type=7, attr=0xf, range=[0x00000000d780a000-0x00000000d7826000) (0MB) [ 0.000000] EFI: mem24: type=4, attr=0xf, range=[0x00000000d7826000-0x00000000d7868000) (0MB) [ 0.000000] EFI: mem25: type=7, attr=0xf, range=[0x00000000d7868000-0x00000000d7869000) (0MB) [ 0.000000] EFI: mem26: type=4, attr=0xf, range=[0x00000000d7869000-0x00000000d786a000) (0MB) [ 0.000000] EFI: mem27: type=7, attr=0xf, range=[0x00000000d786a000-0x00000000d786b000) (0MB) [ 0.000000] EFI: mem28: type=4, attr=0xf, range=[0x00000000d786b000-0x00000000d786c000) (0MB) [ 0.000000] EFI: mem29: type=7, attr=0xf, range=[0x00000000d786c000-0x00000000d786d000) (0MB) [ 0.000000] EFI: mem30: type=4, attr=0xf, range=[0x00000000d786d000-0x00000000d825f000) (9MB) [ 0.000000] EFI: mem31: type=7, attr=0xf, range=[0x00000000d825f000-0x00000000d8261000) (0MB) [ 0.000000] EFI: mem32: type=4, attr=0xf, range=[0x00000000d8261000-0x00000000d82f7000) (0MB) [ 0.000000] EFI: mem33: type=7, attr=0xf, range=[0x00000000d82f7000-0x00000000d82f8000) (0MB) [ 0.000000] EFI: mem34: type=4, attr=0xf, range=[0x00000000d82f8000-0x00000000d8705000) (4MB) [ 0.000000] EFI: mem35: type=7, attr=0xf, range=[0x00000000d8705000-0x00000000d8706000) (0MB) [ 0.000000] EFI: mem36: type=4, attr=0xf, range=[0x00000000d8706000-0x00000000d8761000) (0MB) [ 0.000000] EFI: mem37: type=7, attr=0xf, 
range=[0x00000000d8761000-0x00000000d8768000) (0MB) [ 0.000000] EFI: mem38: type=4, attr=0xf, range=[0x00000000d8768000-0x00000000d9b9f000) (20MB) [ 0.000000] EFI: mem39: type=7, attr=0xf, range=[0x00000000d9b9f000-0x00000000d9e4c000) (2MB) [ 0.000000] EFI: mem40: type=2, attr=0xf, range=[0x00000000d9e4c000-0x00000000d9e52000) (0MB) [ 0.000000] EFI: mem41: type=3, attr=0xf, range=[0x00000000d9e52000-0x00000000da59f000) (7MB) [ 0.000000] EFI: mem42: type=5, attr=0x800000000000000f, range=[0x00000000da59f000-0x00000000da6c3000) (1MB) [ 0.000000] EFI: mem43: type=5, attr=0x800000000000000f, range=[0x00000000da6c3000-0x00000000da79f000) (0MB) [ 0.000000] EFI: mem44: type=6, attr=0x800000000000000f, range=[0x00000000da79f000-0x00000000da8b1000) (1MB) [ 0.000000] EFI: mem45: type=6, attr=0x800000000000000f, range=[0x00000000da8b1000-0x00000000da99f000) (0MB) [ 0.000000] EFI: mem46: type=0, attr=0xf, range=[0x00000000da99f000-0x00000000daa22000) (0MB) [ 0.000000] EFI: mem47: type=0, attr=0xf, range=[0x00000000daa22000-0x00000000daa9b000) (0MB) [ 0.000000] EFI: mem48: type=0, attr=0xf, range=[0x00000000daa9b000-0x00000000daa9c000) (0MB) [ 0.000000] EFI: mem49: type=0, attr=0xf, range=[0x00000000daa9c000-0x00000000daa9f000) (0MB) [ 0.000000] EFI: mem50: type=10, attr=0xf, range=[0x00000000daa9f000-0x00000000daadd000) (0MB) [ 0.000000] EFI: mem51: type=10, attr=0xf, range=[0x00000000daadd000-0x00000000dab9f000) (0MB) [ 0.000000] EFI: mem52: type=9, attr=0xf, range=[0x00000000dab9f000-0x00000000dabdc000) (0MB) [ 0.000000] EFI: mem53: type=9, attr=0xf, range=[0x00000000dabdc000-0x00000000dabff000) (0MB) [ 0.000000] EFI: mem54: type=4, attr=0xf, range=[0x00000000dabff000-0x00000000dac00000) (0MB) [ 0.000000] EFI: mem55: type=7, attr=0xf, range=[0x0000000100000000-0x000000021e600000) (4582MB) [ 0.000000] EFI: mem56: type=11, attr=0x8000000000000001, range=[0x00000000f80f8000-0x00000000f80f9000) (0MB) [ 0.000000] EFI: mem57: type=11, attr=0x8000000000000001, range=[0x00000000fed1c000-0x00000000fed20000) (0MB) [ 0.000000] ACPI: UEFI 00000000dabde000 0003E (v01 LENOVO TP-8D 00001280 PTL 00000002) [ 0.000000] ACPI: UEFI 00000000dabdd000 00042 (v01 PTL COMBUF 00000001 PTL 00000001) [ 0.000000] ACPI: UEFI 00000000dabdc000 00292 (v01 LENOVO TP-8D 00001280 PTL 00000002) [ 0.795807] fb0: EFI VGA frame buffer device [ 1.057243] EFI Variables Facility v0.08 2004-May-17 [ 9.122104] fb: conflicting fb hw usage inteldrmfb vs EFI VGA - removing generic driver ReadEFI: /dev/sda , N 128 , 0 , , PRStart 1024 , PRSize 128 WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. =================== PARTITIONS & DISKS: sda4 : sda, not-sepboot, grubenv-ok grub2, grub-efi, update-grub, 64, with-boot, is-os, gpt-but-not-EFI, fstab-has-bad-efi, no-nt, no-winload, no-recov-nor-hid, no-bmgr, no-grldr, no-b-bcd, apt-get, grub-install, . sda3 : sda, maybesepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, gpt-but-not-EFI, part-has-no-fstab, no-nt, haswinload, no-recov-nor-hid, no-bmgr, no-grldr, no-b-bcd, nopakmgr, nogrubinstall, /mnt/boot-sav/sda3. sda1 : sda, maybesepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, is-correct-EFI, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, no-grldr, no-b-bcd, nopakmgr, nogrubinstall, /boot/efi. 
sda5 : sda, maybesepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, gpt-but-not-EFI, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, no-grldr, no-b-bcd, nopakmgr, nogrubinstall, /home. sda : GPT-BIS, GPT, no-BIOS_boot, has-correctEFI, 2048 sectors * 512 bytes =================== PARTED: Model: ATA HITACHI HTS72323 (scsi) Disk /dev/sda: 320GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 106MB 105MB fat32 EFI system partition boot 2 106MB 240MB 134MB Microsoft reserved partition msftres 3 240MB 87.2GB 87.0GB ntfs Basic data partition 4 87.2GB 169GB 81.9GB ext4 5 169GB 316GB 147GB ext4 6 316GB 320GB 4096MB linux-swap(v1) =================== MOUNT: /dev/sda4 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) /dev/sda1 on /boot/efi type vfat (rw) /dev/sda5 on /home type ext4 (rw) gvfs-fuse-daemon on /home/vierlex/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=vierlex) /dev/sda3 on /mnt/boot-sav/sda3 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /sys/block/sda: alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro sda1 sda2 sda3 sda4 sda5 sda6 size slaves stat subsystem trace uevent /dev: agpgart autofs block bsg btrfs-control bus char console core cpu cpu_dma_latency disk dri ecryptfs fb0 fd full fuse hpet input kmsg log mapper mcelog mei mem net network_latency network_throughput null oldmem port ppp psaux ptmx pts random rfkill rtc rtc0 sda sda1 sda2 sda3 sda4 sda5 sda6 sg0 shm snapshot snd stderr stdin stdout tpm0 uinput urandom usbmon0 usbmon1 usbmon2 v4l vga_arbiter video0 watchdog zero /dev/mapper: control /boot/efi: EFI /boot/efi/EFI: Boot Microsoft ubuntu /boot/efi/efi: Boot Microsoft ubuntu /boot/efi/efi/Boot: bootx64.efi /boot/efi/efi/ubuntu: grubx64.efi WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. 
=================== DF:
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda4 ext4 77G 4.1G 69G 6% /
udev devtmpfs 3.9G 12K 3.9G 1% /dev
tmpfs tmpfs 1.6G 864K 1.6G 1% /run
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 3.9G 152K 3.9G 1% /run/shm
/dev/sda1 vfat 96M 18M 79M 19% /boot/efi
/dev/sda5 ext4 137G 2.2G 128G 2% /home
/dev/sda3 fuseblk 81G 30G 52G 37% /mnt/boot-sav/sda3
=================== FDISK:
Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf34fe538
Device Boot Start End Blocks Id System
/dev/sda1 1 625142447 312571223+ ee GPT
=================== Before mainwindow
FSCK no PASTEBIN yes WUBI no WINBOOT yes
recommendedrepair, purge, QTY_OF_PART_FOR_REINSTAL 1 no-kernel-purge
UNHIDEBOOT_ACTION yes (10s), noflag ()
PART_TO_REINSTALL_GRUB sda4, FORCE_GRUB no (sda)
REMOVABLEDISK no USE_SEPARATEBOOTPART no (sda3) grub2 ()
UNCOMMENT_GFXMODE no ATA ADD_KERNEL_OPTION no (acpi=off)
MBR_TO_RESTORE ( )
EFI detected. Please check the options.
=================== Actions
FSCK no PASTEBIN yes WUBI no WINBOOT no
bootinfo, nombraction, QTY_OF_PART_FOR_REINSTAL 1 no-kernel-purge
UNHIDEBOOT_ACTION no (10s), noflag ()
PART_TO_REINSTALL_GRUB sda4, FORCE_GRUB no (sda)
REMOVABLEDISK no USE_SEPARATEBOOTPART no (sda3) grub2 ()
UNCOMMENT_GFXMODE no ATA ADD_KERNEL_OPTION no (acpi=off)
MBR_TO_RESTORE ( )
No change has been performed on your computer. See you soon!
internet: connected

Thanks for your time and attention.

EDIT: additional info request.
"=> No boot loader is installed in the MBR of /dev/sda." But maybe this is how it is supposed to work? Yea, this is OK. Boot stuff seems to be on a separate partition, in my case sda1. I'm very new to this UEFI thing too. Missing files like bootmgr: I don't really have a clue :D but yea, maybe that's how it's supposed to be? Instead, and what's not shown in the log for some reason: there are additional Microsoft boot files on sda1 under /efi/microsoft/ [much stuff]. I remember also doing some kind of hack to make a UEFI Windows 7 USB stick: http://jake.io/b/2011/installing-windows-7-with-uefi-boot-on-an-x220-from-usb/ In short: creating and placing bootx64.efi on the stick so it can be booted in UEFI mode. Boot order: I decide that in my BIOS. I read somewhere that the ThinkPad x220 (essential part of the serial number: 4921, http://www.lenovo.com/shop/americas/content/user_guides/x220_x220i_x220tablet_x220itablet_ug_en.pdf) doesn't really have a UEFI interface or something; still, these 2 options are listed with all the other usual devices you can give a boot priority to. Right now it looks like this:
Boot Priority Order
1. ubuntu
2. Windows Boot Manager
3. USB FDD
4. USB HDD
5. ATA HDD0 HITACHI [random string]
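For reference, a typical UEFI chainload entry for /etc/grub.d/40_custom, sketched here from the partition details above (sda1 is the EFI System Partition with UUID 885C-ED1B, holding the Microsoft boot files under /efi/microsoft/), would look something like the following. This is an illustrative sketch, not a verified fix for this machine:

#!/bin/sh
exec tail -n +3 $0
# Illustrative custom entry; UUID and path taken from the Boot Info output above.
menuentry "Windows 7 (UEFI)" {
    insmod part_gpt
    insmod fat
    insmod search_fs_uuid
    insmod chain
    search --fs-uuid --set=root 885C-ED1B
    chainloader /efi/microsoft/boot/bootmgfw.efi
}

After saving the file and making sure it is executable (chmod +x /etc/grub.d/40_custom), running update-grub should add the entry to grub.cfg.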

    Read the article

  • JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    - by John-Brown.Evans
JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:

JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue

In this example we will create a BPEL process which will write (enqueue) a message to a JMS queue using a JMS adapter. The JMS adapter will enqueue the full XML payload to the queue. This sample will use the following WebLogic Server objects.
The first two, the Connection Factory and JMS Queue, were created as part of the first blog post in this series, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g. If you haven't created those objects yet, please see that post for details on how to do so. The Connection Pool will be created as part of this example.

Object Name            Type                JNDI Name
TestConnectionFactory  Connection Factory  jms/TestConnectionFactory
TestJMSQueue           JMS Queue           jms/TestJMSQueue
eis/wls/TestQueue      Connection Pool     eis/wls/TestQueue

1. Verify Connection Factory and JMS Queue

As mentioned above, this example uses a WLS Connection Factory called TestConnectionFactory and a JMS queue TestJMSQueue. As these are prerequisites for this example, let us verify they exist. Log in to the WebLogic Server Administration Console. Select Services > JMS Modules > TestJMSModule. You should see the following objects: If not, or if the TestJMSModule is missing, please see the abovementioned article and create these objects before continuing.

2. Create a JMS Adapter Connection Pool in WebLogic Server

The BPEL process we are about to create uses a JMS adapter to write to the JMS queue. The JMS adapter is deployed to the WebLogic server and needs to be configured to include a connection pool which references the connection factory associated with the JMS queue. In the WebLogic Server Console, go to Deployments and select (click on) the JmsAdapter. Select Configuration > Outbound Connection Pools and expand oracle.tip.adapter.jms.IJmsConnectionFactory. This will display the list of connections configured for this adapter, for example eis/aqjms/Queue, eis/aqjms/Topic etc. These JNDI names are actually quite confusing. We are expecting to configure a connection pool here, but the names refer to queues and topics. One would expect these to be called *ConnectionPool or *_CF or similar, but to conform to this nomenclature, we will call our entry eis/wls/TestQueue. This JNDI name is also the name we will use later, when creating a BPEL process to access this JMS queue! Select New, check the oracle.tip.adapter.jms.IJmsConnectionFactory check box and Next. Enter JNDI Name: eis/wls/TestQueue for the connection instance, then press Finish. Expand oracle.tip.adapter.jms.IJmsConnectionFactory again and select (click on) eis/wls/TestQueue. The ConnectionFactoryLocation must point to the JNDI name of the connection factory associated with the JMS queue you will be writing to. In our example, this is the connection factory called TestConnectionFactory, with the JNDI name jms/TestConnectionFactory. (As a reminder, this connection factory is contained in the JMS Module called TestJMSModule, under Services > Messaging > JMS Modules > TestJMSModule, which we verified at the beginning of this document.) Enter jms/TestConnectionFactory into the Property Value field for Connection Factory Location. After entering it, you must press Return/Enter then Save for the value to be accepted. If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, then you will need to manually activate the changes in the WebLogic server console. Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective.
This should be confirmed by the message Remember to update your deployment to reflect the new plan when you are finished with your changes, as can be seen in the following screen shot: The next step is to redeploy the JmsAdapter. Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the “Summary of Deployments” link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button. On the Update Application Assistant page, select “Redeploy this application using the following deployment files” and press Finish. After a few seconds you should get the message that the selected deployments were updated. The JMS adapter configuration is complete and it can now be used to access the JMS queue.

To summarize: we have created a JMS adapter connection pool connector with the JNDI name eis/wls/TestQueue, pointing to the connection factory with the JNDI name jms/TestConnectionFactory. eis/wls/TestQueue is the JNDI name to be accessed by a process such as a BPEL process, when using the JMS adapter to access the previously created JMS queue with the JNDI name jms/TestJMSQueue. In the following step, we will set up a BPEL process to use this JMS adapter to write to the JMS queue.

3. Create a BPEL Composite with a JMS Adapter Partner Link

This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will use the connection name jbevans-lx-PS5, as that is the name of the connection pointing to my SOA PS5 installation. When using a JMS adapter from within a BPEL process, there are various configuration options, such as the operation type (consume message, produce message etc.), delivery mode and message type. One of these options is the choice of the format of the JMS message payload. This can be structured around an existing XSD, in which case the full XML element and tags are passed, or it can be opaque, meaning that the payload is sent as-is to the JMS adapter. In the case of an XSD-based message, the payload can simply be copied to the input variable of the JMS adapter. In the case of an opaque message, the JMS adapter’s input variable is of type base64binary, so the payload needs to be converted to base64 binary first. I will go into this in more detail in a later blog entry (a small Java sketch of the conversion follows just before the next section). This sample will pass a simple message to the adapter, based on the following simple XSD file, which consists of a single string element:

stringPayload.xsd

<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns="http://www.example.org"
            targetNamespace="http://www.example.org"
            elementFormDefault="qualified">
  <xsd:element name="exampleElement" type="xsd:string">
  </xsd:element>
</xsd:schema>

The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests and, when prompted for a project name and type, call the project JmsAdapterWriteWithXsd and select SOA as the project technology type. If you already have an application, continue below.
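Before moving on, here is the small Java sketch promised above for the opaque (base64) case. This is my own illustration rather than part of the original steps; it uses java.util.Base64, available from Java 8 onward, where older environments would use an equivalent codec utility:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class OpaquePayloadSketch {
    public static void main(String[] args) {
        String payload = "Test Message";

        // The opaque JMS adapter variable is base64binary, so the raw
        // payload bytes must be base64-encoded before assignment.
        String base64 = Base64.getEncoder()
                              .encodeToString(payload.getBytes(StandardCharsets.UTF_8));

        System.out.println(base64); // prints VGVzdCBNZXNzYWdl
    }
}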
Create a SOA Project

Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterWriteSchema. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL Process, name it JmsAdapterWriteSchema too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings.

Create an XSD File

An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file, containing a string variable, and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and, when the editor opens, select the Source view, then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the xsd item in the navigation tree.

Create a JMS Adapter Partner Link

We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it. From the Component Palette, drag a JMS adapter over onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries:

Service Name: JmsAdapterWrite

Oracle Enterprise Messaging Service (OEMS): Oracle Weblogic JMS

AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the above JMS queue and connection factory were created. You can use the “+” button to create a connection directly from the wizard, if you do not already have one. This example uses a connection called jbevans-lx-PS5.

Adapter Interface > Interface: Define from operation and schema (specified later)

Operation Type: Produce Message

Operation Name: Produce_message

Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created earlier.

JNDI Name: The JNDI name to use for the JMS connection. This is probably the most important step in this exercise and the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server, which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.)

Messages URL: We will use the XSD file we created earlier, stringPayload.xsd, to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string.

Press Next and Finish, which will complete the JMS Adapter configuration.

Wire the BPEL Component to the JMS Adapter

In this step, we link the BPEL process/component to the JMS adapter.
From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter’s in-arrow. This completes the steps at the composite level.

4. Complete the BPEL Process Design

Invoke the JMS Adapter

Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterWriteSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane. If JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane. An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities. Drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes. We only need to define the input variable to use for the JMS adapter. By pressing the green “+” symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable. (For some reason, while I was testing this, the JMS Adapter moved back to the left-hand swim lane again after this step. There is no harm in leaving it there, but I find it easier to follow if it is in the right-hand lane, because I kind-of think of the message coming in on the left and being routed through to the right. But you can follow your personal preference here.)

Assign Variables

Drag an Assign activity between the Receive and Invoke activities. We will copy the input variable to the JMS adapter’s input variable and, so the process has an output to return, to the process’s output variable as well. Double-click the Assign activity and create two Copy rules:

for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement

for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result

This will create two copy rules, similar to the following: Press OK. This completes the BPEL and Composite design.

5. Compile and Deploy the Composite

We won’t go into too much detail on how to compile and deploy. In JDeveloper, compile the process by pressing the Make or Rebuild icons or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier. (Right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish.) You should see the message ---- Deployment finished. ---- in the Deployment frame, if the deployment was successful.

6. Test the Composite

This is the exciting part. Open two tabs in your browser and log in to the WebLogic Administration Console in one tab and the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation in the other. We will use the Console to monitor the messages being written to the queue and the EM to execute the composite. In the Console, go to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. Note the number of messages under Messages Current.
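If you prefer to check the queue programmatically rather than in the Console, in the spirit of the QueueReceive.java sample from Step 3, a minimal browse sketch might look like the following. This is my own illustration; it assumes the JNDI names used throughout this series, a default local server URL of t3://localhost:7001, and the WebLogic client libraries (e.g. wlfullclient.jar) on the classpath:

import java.util.Enumeration;
import java.util.Hashtable;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class QueueCheck {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Assumption: default local WebLogic URL; adjust host/port to your server
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        InitialContext ctx = new InitialContext(env);

        // The JNDI names created earlier in this series
        QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/TestConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/TestJMSQueue");

        QueueConnection qc = qcf.createQueueConnection();
        qc.start();
        QueueSession session = qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

        // Browse (rather than consume) so Messages Current is left unchanged
        QueueBrowser browser = session.createBrowser(queue);
        int count = 0;
        for (Enumeration<?> e = browser.getEnumeration(); e.hasMoreElements(); e.nextElement()) {
            count++;
        }
        System.out.println("Messages current: " + count);
        qc.close();
    }
}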
In the EM, go to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterWriteSchema [1.0], then press the Test button. Under Input Arguments, enter any string into the text input field for the payload, for example Test Message, then press Test Web Service. If the instance is successful you should see the same text in the Response message, “Test Message”. In the Console, refresh the Monitoring screen to confirm a new message has been written to the queue. Check the checkbox and press Show Messages. Click on the newest message and view its contents. They should include the full XML of the entered payload.

7. Troubleshooting

If you get an exception similar to the following at runtime ...

BINDING.JCA-12510 JCA Resource Adapter location error. Unable to locate the JCA Resource Adapter via .jca binding file element The JCA Binding Component is unable to startup the Resource Adapter specified in the element: location='eis/wls/QueueTest'. The reason for this is most likely that either 1) the Resource Adapters RAR file has not been deployed successfully to the WebLogic Application server or 2) the '' element in weblogic-ra.xml has not been set to eis/wls/QueueTest. In the last case you will have to add a new WebLogic JCA connection factory (deploy a RAR). Please correct this and then restart the Application Server
at oracle.integration.platform.blocks.adapter.fw.AdapterBindingException.createJndiLookupException(AdapterBindingException.java:130)
at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.createJCAConnectionFactory(JCAConnectionManager.java:1387)
at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.newPoolObject(JCAConnectionManager.java:1285)
...

then this is very likely due to an incorrect JNDI name entered for the JMS Connection in the JMS Adapter Wizard. Recheck those steps. The error message prints the name of the JNDI name used. In this example, it was incorrectly entered as eis/wls/QueueTest instead of eis/wls/TestQueue.

This concludes this example.

Best regards
John-Brown Evans
Oracle Technology Proactive Support Delivery

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

Test Data

The two tables in this example share a common partition scheme. The partition function uses 41 equal-size partitions:

CREATE PARTITION FUNCTION PFT (integer)
AS RANGE RIGHT
FOR VALUES
(
    125000, 250000, 375000, 500000,
    625000, 750000, 875000, 1000000,
    1125000, 1250000, 1375000, 1500000,
    1625000, 1750000, 1875000, 2000000,
    2125000, 2250000, 2375000, 2500000,
    2625000, 2750000, 2875000, 3000000,
    3125000, 3250000, 3375000, 3500000,
    3625000, 3750000, 3875000, 4000000,
    4125000, 4250000, 4375000, 4500000,
    4625000, 4750000, 4875000, 5000000
);
GO
CREATE PARTITION SCHEME PST
AS PARTITION PFT
ALL TO ([PRIMARY]);

The two tables are:

CREATE TABLE dbo.T1
(
    TID integer NOT NULL IDENTITY(0,1),
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,
    CONSTRAINT PK_T1
        PRIMARY KEY CLUSTERED (TID)
        ON PST (TID)
);

CREATE TABLE dbo.T2
(
    TID integer NOT NULL,
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,
    CONSTRAINT PK_T2
        PRIMARY KEY CLUSTERED (TID, Column1)
        ON PST (TID)
);

The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

INSERT dbo.T1 WITH (TABLOCKX)
    (Column1)
SELECT
    (ABS(CHECKSUM(NEWID())) % 5) + 1
FROM dbo.Numbers AS N
WHERE
    n BETWEEN 1 AND 5000000;

In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows:

CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

WITH
    L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
    L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
    L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
    L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
    L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
    L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
    Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
INSERT dbo.Numbers WITH (TABLOCKX)
SELECT TOP (10000000) n
FROM Nums
ORDER BY n
OPTION (MAXDOP 1);

Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

INSERT dbo.T2 WITH (TABLOCKX)
    (TID, Column1)
SELECT
    T.TID,
    N.n
FROM dbo.T1 AS T
JOIN dbo.Numbers AS N
    ON N.n >= 1
    AND N.n <= T.Column1;

Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

Partition Distribution

The following query shows the number of rows in each partition of table T1:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T1 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T2 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done.
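As a quick aside (my addition, not part of the original walkthrough): because the function is declared RANGE RIGHT, each boundary value belongs to the partition on its right, which you can confirm directly with $PARTITION before running the join tests:

-- Boundary behaviour of the RANGE RIGHT function PFT
SELECT
    $PARTITION.PFT(0)       AS first_tid,      -- partition 1
    $PARTITION.PFT(124999)  AS below_boundary, -- partition 1
    $PARTITION.PFT(125000)  AS on_boundary,    -- partition 2 (boundary goes right)
    $PARTITION.PFT(5000000) AS last_boundary;  -- partition 41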
Test Query and Execution Plan

The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

Execution Plan Analysis

The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread.

Expensive Exchanges

This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
WHERE $PARTITION.PFT(T1.TID) = 1
AND $PARTITION.PFT(T2.TID) = 1
OPTION (MAXDOP 1);

The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
Forcing a Merge Join

Let’s force the optimizer to use a merge join on the test query using a hint:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN);

This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

Parallel Merge Join

We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN, QUERYTRACEON 8649);

The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

Collocated Joins

In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

Costing and Plan Selection

The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

-- Pretend IOs are 50x cost temporarily
DBCC SETIOWEIGHT(50);

-- Co-located hash join
SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (RECOMPILE);

-- Reset IO costing
DBCC SETIOWEIGHT(1);

Collocated Join Plan

The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

CPU and Memory Efficiency Improvements

The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

Collocated Hash Join Performance

The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb:

A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

Collocated Merge Join

We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know:

- Merge join does not require a memory grant; and
- Merge join was the optimizer’s preferred join option for a single-partition join.

Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do a collocated hash join. So where does this leave us?

CROSS APPLY sys.partitions

We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    -- Partition numbers
    SELECT p.partition_number
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1
) AS P
CROSS APPLY
(
    -- Count per collocated join
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

The estimated plan is:

The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:

Using a Temporary Table

Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
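On SQL Server 2012 and later, the showplan XML can record a NonParallelPlanReason attribute on the QueryPlan element in situations like this. A minimal diagnostic sketch, assuming a suitably recent version and a plan still in cache; the query shape here is illustrative, not from the original post:

-- Sketch: surface cached plans that record a NonParallelPlanReason
-- (attribute added in SQL Server 2012; adapt the filter to your query).
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
    reason = qp.query_plan.value(
        '(//QueryPlan/@NonParallelPlanReason)[1]', 'varchar(200)'),
    query_text = st.[text]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qp.query_plan.exist('//QueryPlan/@NonParallelPlanReason') = 1;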
We can work around that by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P
(
    partition_number integer PRIMARY KEY
);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
AND p.index_id = 1;

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to:

In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time):

Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that; we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

Parallel Collocated Merge Join

We can produce the desired parallel plan using trace flag 8649 again:

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

Performance

The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

- Collocated parallel merge join: 1350ms
- Parallel hash join: 2600ms
- Collocated serial merge join: 3500ms
- Serial merge join: 5000ms
- Parallel merge join: 8400ms
- Collocated parallel hash join: 25,300ms (hash spill per partition)

The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
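As a side note (a sketch, not part of the original post), grants like the 569MB figure above can be watched live from a second session using the sys.dm_exec_query_memory_grants DMV, available since SQL Server 2008:

-- Sketch: observe memory grants while the test query runs in another session.
SELECT
    session_id,
    requested_memory_kb,
    granted_memory_kb,
    used_memory_kb,
    max_used_memory_kb
FROM sys.dm_exec_query_memory_grants;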
This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if they bother you.

Parallel Collocated Merge Join with Demand Partitioning

This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

The parallel collocated hash join plan is reproduced below for comparison:

The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

From a thread’s point of view…

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading

Understanding and Using Parallelism in SQL Server
Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi


  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come.

So, if we start at the start, how did you get started in programming?

When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that.

How did you learn to program? Was there someone helping you out?

Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!”

When did you think this might be something that you actually wanted to do as a career?

For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it.

Do you think you learnt much about programming at university?

Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs, which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed, like TCP. Those are really important as examples of how to approach things.

Did you do any internships while you were at university?

Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then.

Was there just not a culture of looking after your code?
There was, but they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn.

I was going to ask how you found working with people who were more experienced than you…

When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product, which I’d written, and said what they didn’t like. By that point, it was way too late and I’d disagree with them.

Do you think the lack of code reviews was a bad thing?

I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much; I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight, and then the right way to do it is continuously and in very, very small chunks.

Have you had any significant projects you’ve worked on outside of a job?

When I was a teenager I wrote all sorts of stuff. I used to write games, and I derived how to do isometric projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week.

Could you explain a little more about NAct?

It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allow you to say this piece of data belongs to this thread of execution, and nobody else can read it.
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors; NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like.

Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases?

It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother, because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about.

When you’re using actors, you presumably still have to write your code in a different way from how you would write single-threaded code.

You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to wait asynchronously for it. But other than that, you can still stick things in classes, so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump.

If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop?

Usually pretty easy. If you can identify even one boundary where things look like messages, and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult.

If we do get 1000-core desktop PCs, do you think actors will scale up?

It’s hard. There are always in the order of twenty to fifty actors in my whole program, because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table, but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism.

Do you think it helps if actors get baked into the language, similarly to Erlang?

Erlang is excellent in that it has thread-local garbage collection.
C# doesn’t, so there’s a limit to how well C# actors can possibly scale, because there’s a single garbage-collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language.

Speaking of language design, do you have a favourite programming language?

I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much.

When you say it doesn’t have some of the niggles?

C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming.

Pattern-matching, in other words.

That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#.

And what about going even further with the type system to remove the need for tests, with something like Haskell? Or is that a step too far?

I’m quite a pragmatist. I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it.

What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile?

I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong.

On a slightly different note, how do you like to debug code?

I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition; I will only identify the experiment that’s going to binary chop most effectively, and repeat, rather than trying to guess anything. I suppose it’s quite top-down.

Is most of the time then spent in the debugger?
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see whether it looks right or wrong.

We’ve covered how you get to grips with bugs. How do you get to grips with an entire codebase?

I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase.

If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from very much a SmartAssembly point of view, because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is.

And what about tests? Do you think they help at all?

I’ve never had the opportunity to learn a codebase which has had tests. I don’t know what it’s like!

What about when you’re actually developing? How useful do you find tests in finding bugs or regressions?

Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work.
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are.

Does that mean that when you write your code the first time, there are no tests?

Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature, because I’m not good enough at writing tests to think of bugs that I would have written into the code.

So not writing regression tests for all of your code hasn’t affected you too badly?

There are different kinds of features. Some of them just always work, and are just not flaky; they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time, I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time.

How do you think your programming style has changed over time?

I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things, like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor; I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible.

If you had to summarise the essence of programming in a pithy sentence, how would you do it?

Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from.

So you think it’s an art rather than a science?

It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art.
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re just an engineering mind. Architects who are just engineering minds are not going to enjoy their job.

I suppose engineering is the foundation on which you build the art.

Exactly.

How do you think programming is going to change over the next ten years?

There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript.

If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write a tiny program, I’m still going to get value out of writing it in C# using ReSharper, because I’m experienced enough with C# and ReSharper to be able to write code five times faster if I have that help.

Any other visions of the future?

Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors.

More Red Gater Coder interviews


  • Toorcon14

    - by danx
Toorcon 2012 Information Security Conference
San Diego, CA, http://www.toorcon.org/
Dan Anderson, October 2012

It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California.

The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and which interested me:

- MalAndroid—the Crux of Android Infections, Aditya K. Sood
- Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
- Privacy at the Handset: New FCC Rules?, Valkyrie
- Hacking Measured Boot and UEFI, Dan Griffin
- You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
- What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
- Accessibility and Security, Anna Shubina
- Stop Patching, for Stronger PCI Compliance, Adam Brand
- McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

MalAndroid—the Crux of Android Infections
Aditya K. Sood, IOActive, Michigan State PhD candidate

Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker.

What security problems does Android have?
- User-centric security, which depends on the user to grant permissions and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility.
- Android had no "proper" encryption before Android 3.0.
- No built-in protection against social engineering and web tricks.
- Alternative Android app markets are unsafe. Simply visiting some markets can infect Android.

Aditya classified Android malware types as:
- Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
- Type K—Kernel. Exploits underlying Linux libraries or the kernel.
- Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes; for example, promoting specific leaders of the Tunisian or Iranian revolutions).

Android malware is frequently "masqueraded", that is, repackaged inside a legit app. To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there are not many around. What they do is hijack the Android init framework—altering system programs and daemons, then deleting themselves. For example, the DKF Bootkit (China).

Android app problems:
- No code signing! All apps are self-signed.
- Native code execution.
- Permission sandbox—all or none.
- Alternate marketplaces.
- No robust Android malware detection at the network level.
- Delayed patch process.

Programming Weird Machines with ELF Metadata
Rebecca "bx" Shapiro, Dartmouth College, NH
https://github.com/bx/elf-bf-tools
@bxsays on twitter

Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

Bx then talked about using ELF metadata as an (unintended) "weird machine".

Some ELF background: A compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions.

Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:
- Instructions: add, move, jump if not 0 (jnz).
- Think of symbol table entries as "registers"; a symbol table entry's value is its "contents".
- Immediate values are constants; direct values are addresses (e.g., 0xdeadbeef).
- A move instruction is a relocation table entry; an add instruction is a relocation table "addend" entry; a jnz instruction takes multiple relocation table entries.

The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and three registers.

Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so it remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

$ whoami
bx
$ ping localhost -t backdoor.sh   # executes backdoor
$ whoami
root
$

The modified code increased from 285948 bytes to 290209 bytes.
A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

Privacy at the Handset: New FCC Rules?
"Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries!

Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" data with other databases to identify customers indirectly.

The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not the FCC's). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result, iPhones may be more regulated.

Who are the consumer advocates? Everyone knows EFF, but EPIC (Electronic Privacy Info Center), although more obscure, is more relevant.

What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit.

Hacking Measured Boot and UEFI
Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

Dan talked about hacking measured UEFI boot. First some terms:
- UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits.
- TPM: a hardware security device that stores hashes and hardware-protected keys.
- "Secure boot" can control at the firmware level what boot images can boot.
- "Measured boot" is an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers).
- "Remote attestation" allows remote validation and control based on policy on a remote attestation server.

Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source UEFI implementation. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

UEFI weaknesses.
UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
- It assumes the user is an ally.
- It trusts the TPM implicitly, and trusts that the TPM is attached to the computer.
- The hibernate file is unprotected (disk encryption protects against this).
- Protection is migrating from hardware to firmware.
- There are delays in patching and whitelist updates.
- Will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support)?

You Can't Buy Security: Building the Open Source InfoSec Program
Boris Sverdlik, ISDPodcast.com co-host

Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—you need soft skills.

Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team); meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV * EF = SLE). Learn the different business units and their legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn what information is sensitive (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering).

Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost).
- Access controls, with authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.
- Application security. Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.
- Network security. VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."
- Operational controls. Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (as they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.
- Security awareness. The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
Pen test by crowdsourcing—test with logging. COSSP, http://www.cossp.org/ - the Comprehensive Open Source Security Project.

What Journalists Want: The Investigative Reporters' Perspective on Hacking
Dave Maas, San Diego CityBeat
Jason Leopold, Truthout.org

The difference between hackers and investigative journalists: For hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

J-School in 60 Seconds: Generic formula: a person or issue of public interest, new info, or an angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

Media awareness of hackers and trends: journalists are becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

Info gathering by investigative journalists includes:
- Public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague.
- Dumps and leaks (a la Wikileaks).

Journalists want: leads, protecting themselves and their sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
Anna Shubina, Dartmouth College

Anna talked about how accessibility and security are related. This is about accessibility of digital content (not real-world accessibility); for our purposes it mostly refers to blind users and screenreaders. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—and is more dangerous than PDF or HTML. Accessibility is often the first, and maybe only, sanity check with parsing. They have no choice, because someone may want to read what you write. Google, for example, is very particular about the web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content.

PDF is a security nightmare. An executable format: embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse, and the problem cannot be solved in general due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help though, except for "Robust", but here are some gems:
* All information should be parsable (paraphrasing).
* If it's not parsable, it cannot be converted to alternate formats.
* Maximize compatibility in new document formats.

Executable web pages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input.

Solutions:
* Accessibility and security problems come from the same source.
* Expose the "better user experience" myth.
* Keep your corner of the Internet parsable.
* Remember the "Halting Problem"—recognize false solutions (checking and verifying tools).

Stop Patching, for Stronger PCI Compliance
Adam Brand, protiviti, @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

Patching myths:
* Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
* Myth 2: the vendor decides what's critical (also PCI §6.1; but §6.2 requires user ranking of vulnerabilities instead).
* Myth 3: scan and rescan until it passes (but PCI §11.2.1b says this applies only to high-risk vulnerabilities).

Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40:
* Proactive: Use a proactive vulnerability management process: use change control and configuration management, and monitor file integrity.
* Monitor: start with the NVD and other vulnerability alerts, not scanner results.
* Evaluate: public-facing system? workstation? internal server? (risk rank)
* Decide: on action and timeline.
* Test: pre-test patches (stability, functionality, rollback) for change control.
* Install: notify, change control, tickets.

McAfee Secure & Trustmarks — a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that removal of a trustmark acts as a flag that you're vulnerable. It's easy to view a status change by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.


  • Unable to boot Windows 7 after installing Ubuntu

    - by Devendra
I have Windows 7 on my machine and then installed Ubuntu 12.04 using a live CD. I can see both Windows 7 and Ubuntu in the grub menu, but when I select Windows 7 it shows a black screen for about 2 seconds and then returns to the grub menu. If I select Ubuntu, it works fine. This is the contents of the boot-repair log:

Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info November 20th 2012]

============================= Boot Info Summary: ===============================

=> Grub2 (v2.00) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for (,msdos6)/boot/grub.

sda1: __________________________________________________________________________
File system: ntfs
Boot sector type: Grub2 (v1.99-2.00)
Boot sector info: Grub2 (v2.00) is installed in the boot sector of sda1 and looks at sector 388911128 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for (,msdos6)/boot/grub. No errors found in the Boot Parameter Block.
Operating System: Windows 7
Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe

sda2: __________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7: NTFS
Boot sector info: No errors found in the Boot Parameter Block.
Operating System:
Boot files:

sda3: __________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7: NTFS
Boot sector info: No errors found in the Boot Parameter Block.
Operating System:
Boot files:

sda4: __________________________________________________________________________
File system: Extended Partition
Boot sector type: -
Boot sector info:

sda5: __________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7: NTFS
Boot sector info: According to the info in the boot sector, sda5 starts at sector 2048.
Operating System: Boot files: sda6: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.10 Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/i386-pc/core.img sda7: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 206,848 146,802,687 146,595,840 7 NTFS / exFAT / HPFS /dev/sda2 147,007,488 293,623,807 146,616,320 7 NTFS / exFAT / HPFS /dev/sda3 293,623,808 332,820,613 39,196,806 7 NTFS / exFAT / HPFS /dev/sda4 332,822,526 1,465,145,343 1,132,322,818 f W95 Extended (LBA) /dev/sda5 461,342,720 1,465,145,343 1,003,802,624 7 NTFS / exFAT / HPFS /dev/sda6 332,822,528 453,171,199 120,348,672 83 Linux /dev/sda7 453,173,248 461,338,623 8,165,376 82 Linux swap / Solaris "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 F6AE2C13AE2BCB47 ntfs /dev/sda2 DC2273012272DFC6 ntfs /dev/sda3 1E76E43376E40D79 ntfs New Volume /dev/sda5 5ED60ACDD60AA57D ntfs /dev/sda6 9e70fd16-b48b-4f88-adcf-e443aef83124 ext4 /dev/sda7 52f3dd94-6be7-4a7b-a3ae-f43eb8810483 swap ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda6 / ext4 (rw,errors=remount-ro) =========================== sda6/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=auto load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_IN insmod gettext fi 
terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=10 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9e70fd16-b48b-4f88-adcf-e443aef83124' { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash $vt_handoff initrd /boot/initrd.img-3.5.0-17-generic } submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-9e70fd16-b48b-4f88-adcf-e443aef83124' { menuentry 'Ubuntu, with Linux 3.5.0-17-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-advanced-9e70fd16-b48b-4f88-adcf-e443aef83124' { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi echo 'Loading Linux 3.5.0-17-generic ...' linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.5.0-17-generic } menuentry 'Ubuntu, with Linux 3.5.0-17-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-recovery-9e70fd16-b48b-4f88-adcf-e443aef83124' { recordfail insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi echo 'Loading Linux 3.5.0-17-generic ...' linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.5.0-17-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry 'Windows 7 (loader) (on /dev/sda1)' --class windows --class os $menuentry_id_option 'osprober-chain-F6AE2C13AE2BCB47' { insmod part_msdos insmod ntfs set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 F6AE2C13AE2BCB47 else search --no-floppy --fs-uuid --set=root F6AE2C13AE2BCB47 fi chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/30_uefi-firmware ### ### END /etc/grub.d/30_uefi-firmware ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda6/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sda6 during installation UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda7 during installation UUID=52f3dd94-6be7-4a7b-a3ae-f43eb8810483 none swap sw 0 0 -------------------------------------------------------------------------------- =================== sda6: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 162.831275940 = 174.838751232 boot/grub/grub.cfg 1 163.036647797 = 175.059267584 boot/initrd.img-3.5.0-17-generic 1 206.871749878 = 222.126850048 boot/vmlinuz-3.5.0-17-generic 1 163.036647797 = 175.059267584 initrd.img 1 163.036647797 = 175.059267584 initrd.img.old 1 206.871749878 = 222.126850048 vmlinuz 1 =============================== StdErr Messages: =============================== cat: write error: Broken pipe cat: write error: Broken pipe ADDITIONAL INFORMATION : =================== log of boot-repair 2012-12-11__00h59 =================== boot-repair version : 3.195~ppa28~quantal boot-sav version : 3.195~ppa28~quantal glade2script version : 3.2.2~ppa45~quantal boot-sav-extra version : 3.195~ppa28~quantal boot-repair is executed in installed-session (Ubuntu 12.10, quantal, Ubuntu, x86_64) CPU op-mode(s): 32-bit, 64-bit BOOT_IMAGE=/boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash vt.handoff=7 =================== os-prober: /dev/sda6:The OS now in use - Ubuntu 12.10 CurrentSession:linux /dev/sda1:Windows 7 (loader):Windows:chain =================== blkid: /dev/sda1: UUID="F6AE2C13AE2BCB47" TYPE="ntfs" /dev/sda2: UUID="DC2273012272DFC6" TYPE="ntfs" /dev/sda3: LABEL="New Volume" UUID="1E76E43376E40D79" TYPE="ntfs" /dev/sda5: UUID="5ED60ACDD60AA57D" TYPE="ntfs" /dev/sda6: UUID="9e70fd16-b48b-4f88-adcf-e443aef83124" TYPE="ext4" /dev/sda7: UUID="52f3dd94-6be7-4a7b-a3ae-f43eb8810483" TYPE="swap" 1 disks with OS, 2 OS : 1 Linux, 0 MacOS, 1 Windows, 0 unknown type OS. Warning: extended partition does not start at a cylinder boundary. DOS and Linux will interpret the contents differently. =================== /etc/default/grub : # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 #GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) 
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" =================== /etc/grub.d/ : drwxr-xr-x 2 root root 4096 Oct 17 20:29 grub.d total 72 -rwxr-xr-x 1 root root 7541 Oct 14 23:06 00_header -rwxr-xr-x 1 root root 5488 Oct 4 15:00 05_debian_theme -rwxr-xr-x 1 root root 10891 Oct 14 23:06 10_linux -rwxr-xr-x 1 root root 10258 Oct 14 23:06 20_linux_xen -rwxr-xr-x 1 root root 1688 Oct 11 19:40 20_memtest86+ -rwxr-xr-x 1 root root 10976 Oct 14 23:06 30_os-prober -rwxr-xr-x 1 root root 1426 Oct 14 23:06 30_uefi-firmware -rwxr-xr-x 1 root root 214 Oct 14 23:06 40_custom -rwxr-xr-x 1 root root 216 Oct 14 23:06 41_custom -rw-r--r-- 1 root root 483 Oct 14 23:06 README =================== UEFI/Legacy mode: This installed-session is not in EFI-mode. EFI in dmesg. Please report this message to [email protected] [ 0.000000] ACPI: UEFI 00000000bafe7000 0003E (v01 DELL QA09 00000002 PTL 00000002) [ 0.000000] ACPI: UEFI 00000000bafe6000 00042 (v01 PTL COMBUF 00000001 PTL 00000001) [ 0.000000] ACPI: UEFI 00000000bafe3000 00256 (v01 DELL QA09 00000002 PTL 00000002) SecureBoot disabled. =================== PARTITIONS & DISKS: sda6 : sda, not-sepboot, grubenv-ok grub2, grub-pc , update-grub, 64, with-boot, is-os, not--efi--part, fstab-without-boot, fstab-without-efi, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, apt-get, grub-install, with--usr, fstab-without-usr, not-sep-usr, standard, farbios, . sda1 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, is-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, haswinload, no-recov-nor-hid, bootmgr, is-winboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, not-far, /mnt/boot-sav/sda1. sda2 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda2. sda3 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda3. sda5 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda5. 
sda : not-GPT, BIOSboot-not-needed, has-no-EFIpart, not-usb, has-os, 2048 sectors * 512 bytes =================== parted -l: Model: ATA WDC WD7500BPKT-7 (scsi) Disk /dev/sda: 750GB Sector size (logical/physical): 512B/4096B Partition Table: msdos Number Start End Size Type File system Flags 1 106MB 75.2GB 75.1GB primary ntfs boot 2 75.3GB 150GB 75.1GB primary ntfs 3 150GB 170GB 20.1GB primary ntfs 4 170GB 750GB 580GB extended lba 6 170GB 232GB 61.6GB logical ext4 7 232GB 236GB 4181MB logical linux-swap(v1) 5 236GB 750GB 514GB logical ntfs =================== parted -lm: BYT; /dev/sda:750GB:scsi:512:4096:msdos:ATA WDC WD7500BPKT-7; 1:106MB:75.2GB:75.1GB:ntfs::boot; 2:75.3GB:150GB:75.1GB:ntfs::; 3:150GB:170GB:20.1GB:ntfs::; 4:170GB:750GB:580GB:::lba; 6:170GB:232GB:61.6GB:ext4::; 7:232GB:236GB:4181MB:linux-swap(v1)::; 5:236GB:750GB:514GB:ntfs::; =================== mount: /dev/sda6 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755) gvfsd-fuse on /run/user/dev/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=dev) /dev/sda1 on /mnt/boot-sav/sda1 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda2 on /mnt/boot-sav/sda2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda3 on /mnt/boot-sav/sda3 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda5 on /mnt/boot-sav/sda5 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) =================== ls: /sys/block/sda (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro sda1 sda2 sda3 sda4 sda5 sda6 sda7 size slaves stat subsystem trace uevent /sys/block/sr0 (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro size slaves stat subsystem trace uevent /dev (filtered): alarm ashmem autofs binder block bsg btrfs-control bus cdrom cdrw char console core cpu cpu_dma_latency disk dri dvd dvdrw ecryptfs fb0 fb1 fd full fuse hpet input kmsg kvm log mapper mcelog mei mem net network_latency network_throughput null oldmem port ppp psaux ptmx pts random rfkill rtc rtc0 sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sg0 sg1 shm snapshot snd sr0 stderr stdin stdout uinput urandom v4l vga_arbiter vhost-net video0 zero ls /dev/mapper: control =================== df -Th: Filesystem Type Size Used Avail Use% Mounted on /dev/sda6 ext4 57G 2.7G 51G 6% / udev devtmpfs 1.9G 12K 1.9G 1% /dev tmpfs tmpfs 770M 892K 769M 1% /run none tmpfs 5.0M 0 5.0M 0% /run/lock none tmpfs 1.9G 260K 1.9G 1% /run/shm none tmpfs 100M 44K 100M 1% /run/user /dev/sda1 fuseblk 70G 36G 35G 51% /mnt/boot-sav/sda1 /dev/sda2 fuseblk 70G 66G 4.8G 94% /mnt/boot-sav/sda2 /dev/sda3 fuseblk 19G 87M 19G 1% /mnt/boot-sav/sda3 /dev/sda5 fuseblk 479G 436G 44G 92% /mnt/boot-sav/sda5 =================== fdisk -l: Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 
sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x1dc69d0b Device Boot Start End Blocks Id System /dev/sda1 * 206848 146802687 73297920 7 HPFS/NTFS/exFAT /dev/sda2 147007488 293623807 73308160 7 HPFS/NTFS/exFAT /dev/sda3 293623808 332820613 19598403 7 HPFS/NTFS/exFAT /dev/sda4 332822526 1465145343 566161409 f W95 Ext'd (LBA) Partition 4 does not start on physical sector boundary. /dev/sda5 461342720 1465145343 501901312 7 HPFS/NTFS/exFAT /dev/sda6 332822528 453171199 60174336 83 Linux /dev/sda7 453173248 461338623 4082688 82 Linux swap / Solaris Partition table entries are not in disk order =================== Recommended repair Recommended-Repair This setting will reinstall the grub2 of sda6 into the MBR of sda. Additional repair will be performed: unhide-bootmenu-10s grub-install (GRUB) 2.00-7ubuntu11,grub-install (GRUB) 2. Reinstall the GRUB of sda6 into the MBR of sda Installation finished. No error reported. grub-install /dev/sda: exit code of grub-install /dev/sda:0 update-grub Generating grub.cfg ... Found linux image: /boot/vmlinuz-3.5.0-17-generic Found initrd image: /boot/initrd.img-3.5.0-17-generic Found memtest86+ image: /boot/memtest86+.bin Found Windows 7 (loader) on /dev/sda1 Unhide GRUB boot menu in sda6/boot/grub/grub.cfg Boot successfully repaired. You can now reboot your computer. The boot files of [The OS now in use - Ubuntu 12.10] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, >200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. (https://help.ubuntu.com/community/BootPartition)

    Read the article

  • 12.04 LTS: no network/Internet

    - by dgermann
    Friends-- Cannot connect reliably to ethernet nor at all to Internet: Symptoms: About 2 weeks ago did an upgrade. Have not been able to connect to ethernet nor Internet. Today, for example, boot up this System76 laptop and there was no network connection. Did sudo mount -a and got some internal network connectivity: doug@ubuntu:/sam$ ping earth PING earth (192.168.0.201) 56(84) bytes of data. 64 bytes from earth (192.168.0.201): icmp_req=1 ttl=64 time=0.160 ms 64 bytes from earth (192.168.0.201): icmp_req=2 ttl=64 time=0.177 ms 64 bytes from earth (192.168.0.201): icmp_req=3 ttl=64 time=0.159 ms ^C --- earth ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1998ms rtt min/avg/max/mdev = 0.159/0.165/0.177/0.013 ms doug@ubuntu:/sam$ ping doug2 PING doug (192.168.0.4) 56(84) bytes of data. ^C --- doug ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 1999ms doug@ubuntu:/sam$ ping sharon PING sharon (192.168.0.111) 56(84) bytes of data. 64 bytes from sharon (192.168.0.111): icmp_req=1 ttl=128 time=0.276 ms ^C --- sharon ping statistics --- 6 packets transmitted, 1 received, 83% packet loss, time 5031ms rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms doug@ubuntu:/sam$ ping 192.168.0.1 PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data. ^C --- 192.168.0.1 ping statistics --- 6 packets transmitted, 0 received, 100% packet loss, time 4999ms doug@ubuntu:/sam$ ping earth PING earth (192.168.0.201) 56(84) bytes of data. ^C --- earth ping statistics --- 5 packets transmitted, 0 received, 100% packet loss, time 4032ms doug@ubuntu:/sam$ ping yahoo.com ping: unknown host yahoo.com doug@ubuntu:/sam$ ping ubuntu.com ping: unknown host ubuntu.com doug@ubuntu:/sam$ ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. ^C --- 8.8.8.8 ping statistics --- 14 packets transmitted, 0 received, 100% packet loss, time 13103ms Note that earth is the cifs server, and one time pinging it worked, later failed. 
Clues: doug@ubuntu:/sam$ grep -i eth /var/log/syslog |tail Aug 23 15:32:46 ubuntu kernel: [ 5328.070401] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:32:48 ubuntu kernel: [ 5330.651139] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=19090 PROTO=2 Aug 23 15:34:51 ubuntu kernel: [ 5453.072279] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:34:55 ubuntu kernel: [ 5457.085433] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16137 PROTO=2 Aug 23 15:36:56 ubuntu kernel: [ 5578.074492] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:37:00 ubuntu kernel: [ 5582.359006] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16150 PROTO=2 Aug 23 15:39:01 ubuntu kernel: [ 5703.074410] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:39:03 ubuntu kernel: [ 5705.070122] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16163 PROTO=2 Aug 23 15:41:06 ubuntu kernel: [ 5828.074387] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:41:13 ubuntu kernel: [ 5835.319941] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=23298 PROTO=2 doug@ubuntu:/sam$ ifconfig -a eth0 Link encap:Ethernet HWaddr [BLANKED] inet addr:192.168.0.7 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::21b:fcff:fe29:9dfc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3961 errors:0 dropped:0 overruns:0 frame:0 TX packets:2007 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:991204 (991.2 KB) TX bytes:252908 (252.9 KB) Interrupt:16 Base address:0xec00 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:2190 errors:0 dropped:0 overruns:0 frame:0 TX packets:2190 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:168052 (168.0 KB) TX bytes:168052 (168.0 KB) wlan0 Link encap:Ethernet HWaddr 00:19:d2:72:5a:0c UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) doug@ubuntu:/sam$ iwconfig lo no wireless extensions. wlan0 IEEE 802.11abg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off eth0 no wireless extensions. 
doug@ubuntu:/sam$ lsmod Module Size Used by des_generic 21191 0 md4 12523 0 nls_iso8859_1 12617 1 nls_cp437 12751 1 vfat 17308 1 fat 55605 1 vfat usb_storage 39646 1 dm_crypt 22528 1 joydev 17393 0 snd_hda_codec_analog 75395 1 snd_hda_intel 32719 2 pcmcia 39826 0 snd_hda_codec 109562 2 snd_hda_codec_analog,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec ip6t_LOG 16846 4 xt_hl 12465 6 ip6t_rt 12473 3 snd_pcm 80916 2 snd_hda_intel,snd_hda_codec nf_conntrack_ipv6 13581 7 nf_defrag_ipv6 13175 1 nf_conntrack_ipv6 ipt_REJECT 12512 1 ipt_LOG 12783 5 xt_limit 12541 12 xt_tcpudp 12531 21 xt_addrtype 12596 4 snd_seq_midi 13132 0 xt_state 12514 14 ip6table_filter 12711 1 ip6_tables 22528 3 ip6t_LOG,ip6t_rt,ip6table_filter nf_conntrack_netbios_ns 12585 0 nf_conntrack_broadcast 12541 1 nf_conntrack_netbios_ns nf_nat_ftp 12595 0 nf_nat 24959 1 nf_nat_ftp nf_conntrack_ipv4 19084 9 nf_nat nf_defrag_ipv4 12649 1 nf_conntrack_ipv4 nf_conntrack_ftp 13183 1 nf_nat_ftp nf_conntrack 73847 8 nf_conntrack_ipv6,xt_state,nf_conntrack_netbios_ns,nf_conntrack_broadcast,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp iptable_filter 12706 1 ip_tables 18106 1 iptable_filter snd_rawmidi 25424 1 snd_seq_midi psmouse 86982 0 x_tables 22011 13 ip6t_LOG,xt_hl,ip6t_rt,ipt_REJECT,ipt_LOG,xt_limit,xt_tcpudp,xt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables arc4 12473 2 r592 17808 0 snd_seq_midi_event 14475 1 snd_seq_midi memstick 15857 1 r592 yenta_socket 27465 0 serio_raw 13027 0 pcmcia_rsrc 18367 1 yenta_socket iwl3945 73186 0 pcmcia_core 21511 3 pcmcia,yenta_socket,pcmcia_rsrc iwl_legacy 71334 1 iwl3945 snd_seq 51592 2 snd_seq_midi,snd_seq_midi_event mac80211 436493 2 iwl3945,iwl_legacy snd_timer 28931 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq rfcomm 38139 0 bnep 17830 2 parport_pc 32114 0 bluetooth 158447 10 rfcomm,bnep ppdev 12849 0 cfg80211 178877 3 iwl3945,iwl_legacy,mac80211 asus_laptop 23693 0 sparse_keymap 13658 1 asus_laptop input_polldev 13648 1 asus_laptop nls_utf8 12493 6 cifs 258037 10 snd 62218 13 snd_hda_codec_analog,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 14635 1 snd mac_hid 13077 0 snd_page_alloc 14108 2 snd_hda_intel,snd_pcm lp 17455 0 parport 40930 3 parport_pc,ppdev,lp i915 428418 3 firewire_ohci 40172 0 sdhci_pci 18324 0 sdhci 28241 1 sdhci_pci firewire_core 56940 1 firewire_ohci crc_itu_t 12627 1 firewire_core r8169 56396 0 drm_kms_helper 45466 1 i915 drm 197641 4 i915,drm_kms_helper i2c_algo_bit 13199 1 i915 video 19115 1 i915 doug@ubuntu:/sam$ dmesg |grep eth [ 0.116936] i2c-core: driver [aat2870] using legacy suspend method [ 0.116939] i2c-core: driver [aat2870] using legacy resume method [ 1.453811] r8169 0000:03:07.0: eth0: RTL8169sb/8110sb at 0xf840ec00, [BLANKED], XID 10000000 IRQ 16 [ 1.453815] r8169 0000:03:07.0: eth0: jumbo features [frames: 7152 bytes, tx checksumming: ok] [ 25.681231] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 154.037318] r8169 0000:03:07.0: eth0: link down [ 154.037329] r8169 0000:03:07.0: eth0: link down [ 154.037596] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 155.583162] r8169 0000:03:07.0: eth0: link up [ 155.583366] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 156.637048] r8169 0000:03:07.0: eth0: link down [ 156.637066] r8169 0000:03:07.0: eth0: link down [ 156.637339] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 156.773699] r8169 0000:03:07.0: eth0: link down [ 156.773983] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 158.456181] 
r8169 0000:03:07.0: eth0: link up [ 158.456378] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 159.364468] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 162.384496] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=38877 PROTO=2 [ 166.272457] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 166.422333] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=40695 PROTO=2 [ 168.736049] eth0: no IPv6 routers present [ 183.572472] r8169 0000:03:07.0: eth0: link down [ 183.572490] r8169 0000:03:07.0: eth0: link down [ 183.572934] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 185.204801] r8169 0000:03:07.0: eth0: link up [ 185.205005] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 3620.680451] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3621.068431] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3624.912973] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=9118 PROTO=2 [ 3631.088069] eth0: no IPv6 routers present [ 3703.062980] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3703.465330] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=9210 PROTO=2 [ 3828.062951] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3833.617772] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=9749 PROTO=2 [ 3953.062920] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3955.675129] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=15983 PROTO=2 [ 4078.062922] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4078.386319] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=15997 PROTO=2 [ 4203.062899] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4203.559241] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16011 PROTO=2 [ 4328.062833] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4328.930922] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16027 PROTO=2 [ 4453.062811] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4453.950224] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16039 PROTO=2 [ 4578.062742] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4580.626432] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 
TOS=0x00 PREC=0x00 TTL=1 ID=13738 PROTO=2 [ 4703.062704] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4706.310170] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=15942 PROTO=2 [ 4828.062707] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4832.174324] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16505 PROTO=2 [ 4953.062628] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4961.469282] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16090 PROTO=2 [ 5078.062552] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5080.776462] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=17239 PROTO=2 [ 5203.070394] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5205.358134] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=17665 PROTO=2 [ 5328.070401] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5330.651139] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=19090 PROTO=2 [ 5453.072279] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5457.085433] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16137 PROTO=2 [ 5578.074492] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5582.359006] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16150 PROTO=2 [ 5703.074410] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5705.070122] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16163 PROTO=2 [ 5828.074387] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5835.319941] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=23298 PROTO=2 [ 5953.074429] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5961.925481] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=24261 PROTO=2 doug@ubuntu:/sam$ lspci -nnk |grep -iA2 eth 03:07.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8169 PCI Gigabit Ethernet Controller [10ec:8169] (rev 10) Subsystem: ASUSTeK Computer Inc. 
Device [1043:11e5] Kernel driver in use: r8169 doug@ubuntu:/sam$ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 doug@ubuntu:/sam$ nm-tool NetworkManager Tool State: connected (global) - Device: eth0 [Ifupdown (eth0)] ---------------------------------------------- Type: Wired Driver: r8169 State: connected Default: yes HW Address: [BLANKED] Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on IPv4 Settings: Address: 192.168.0.7 Prefix: 24 (255.255.255.0) Gateway: 192.168.0.1 DNS: 192.168.0.1 - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: iwl3945 State: disconnected Default: no HW Address: 00:19:D2:72:5A:0C Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points ATT592: Infra, 30:60:23:76:FE:60, Freq 2437 MHz, Rate 54 Mb/s, Strength 24 WPA WPA2 doug@ubuntu:/sam$ nslookup ubuntu.com ;; connection timed out; no servers could be reached doug@ubuntu:/sam$ dig ubuntuforums.org ; <<>> DiG 9.8.1-P1 <<>> ubuntuforums.org ;; global options: +cmd ;; connection timed out; no servers could be reached doug@ubuntu:/sam$ sudo ifconfig eth0 up doug@ubuntu:/sam$ dhcpcd eth0 The program 'dhcpcd' can be found in the following packages: * dhcpcd * dhcpcd5 Try: sudo apt-get install <selected package> doug@ubuntu:/sam$ lspci -k 00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03) Subsystem: ASUSTeK Computer Inc. Device 1252 Kernel driver in use: i915 Kernel modules: intelfb, i915 00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03) Subsystem: ASUSTeK Computer Inc. Device 1252 00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 2 (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 02) Subsystem: ASUSTeK Computer Inc. 
Device 1297 Kernel driver in use: ehci_hcd 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2) 00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel modules: leds-ss4200, iTCO_wdt, intel-rng 00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: ata_piix 00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel modules: i2c-i801 02:00.0 Network controller: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection (rev 02) Subsystem: Intel Corporation PRO/Wireless 3945ABG Network Connection Kernel driver in use: iwl3945 Kernel modules: iwl3945 03:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev b3) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: yenta_cardbus Kernel modules: yenta_socket 03:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C552 IEEE 1394 Controller (rev 08) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: firewire_ohci Kernel modules: firewire-ohci 03:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 17) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: sdhci-pci Kernel modules: sdhci-pci 03:01.3 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 08) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: r592 Kernel modules: r592 03:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8169 PCI Gigabit Ethernet Controller (rev 10) Subsystem: ASUSTeK Computer Inc. Device 11e5 Kernel driver in use: r8169 Kernel modules: r8169 doug@ubuntu:/sam$ Things I have tried: sudo start network-manager: no help gksudo gedit /etc/network/interfaces changed line to iface eth0 inet dhcp: no help gksudo gedit /etc/NetworkManager/NetworkManager.conf, I changed managed=false to managed=true. Then sudo service network-manager restart: no help: network is unreachable sudo pkill -9 NetworkManager: no help gksudo gedit /etc/resolve.conf added line nameseriver 8.8.8.8: no help I know very little about networking; to date this has simply worked. Thanks for your help! :- Doug.

    Read the article

  • Web Browser Control – Specifying the IE Version

    - by Rick Strahl
    I use the Internet Explorer Web Browser Control in a lot of my applications to display document type layout. HTML happens to be one of the most common document formats and displaying data in this format – even in desktop applications – is often way easier than using normal desktop technologies. One issue the Web Browser Control has is that it’s perpetually stuck in IE 7 rendering mode by default. Even though IE 8 and now 9 have significantly upgraded the IE rendering engine to be more CSS and HTML compliant, by default the Web Browser control will have none of it. IE 9 in particular – with its much improved CSS support and basic HTML 5 support – is a big improvement, and even though the IE control uses some of IE’s internal rendering technology it’s still stuck in the old IE 7 rendering by default. This applies whether you’re using the Web Browser control in a WPF application, a WinForms app, or a FoxPro or VB classic application using the ActiveX control. Behind the scenes all these UI platforms use the COM interfaces and so you’re stuck with those same rules. Rendering Challenged: To see what I’m talking about, here are two screen shots rendering an HTML 5 doctype page that includes some CSS 3 functionality – rounded corners and border shadows – from an earlier post. One uses IE 9 as a standalone browser, and one uses a simple WPF form that includes the Web Browser control. IE 9 Browser: [screenshot] Web Browser control in a WPF form: [screenshot] The IE 9 page displays this HTML correctly – you see the rounded corners and shadow displayed. Obviously the latter rendering using the Web Browser control in a WPF application is a bit lacking. Not only are the new CSS features missing, but the page also renders in Internet Explorer’s quirks mode, so all the margins, padding etc. behave differently by default, even though there’s a CSS reset applied on this page. If you’re building an application that intends to use the Web Browser control for a live preview of some HTML this is clearly undesirable. Feature Delegation via Registry Hacks: Fortunately, starting with Internet Explorer 8 there’s a fix for this problem via a registry setting. You can set a registry key to specify which rendering mode and version of IE should be used by that application. These are not global, mind you – they have to be enabled for each application individually. There are two different sets of keys for 32 bit and 64 bit applications. 32 bit: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: yourapplication.exe 64 bit: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: yourapplication.exe The values to set this key to (taken from MSDN) are, as decimal values: 9999 (0x270F) Internet Explorer 9. Webpages are displayed in IE9 Standards mode, regardless of the !DOCTYPE directive. 9000 (0x2328) Internet Explorer 9. Webpages containing standards-based !DOCTYPE directives are displayed in IE9 mode. 8888 (0x22B8) Webpages are displayed in IE8 Standards mode, regardless of the !DOCTYPE directive. 8000 (0x1F40) Webpages containing standards-based !DOCTYPE directives are displayed in IE8 mode. 7000 (0x1B58) Webpages containing standards-based !DOCTYPE directives are displayed in IE7 Standards mode.
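As a hedged illustration, here is a minimal C# sketch of writing this key from code rather than with the Registry Editor (the executable name is a placeholder, and writing under HKEY_LOCAL_MACHINE requires an elevated process):

    using Microsoft.Win32;

    class FeatureEmulation
    {
        static void Main()
        {
            // Set yourapplication.exe = 9000 (IE9 mode for standards-based pages)
            // under the 32 bit key listed above; use the Wow6432Node path for 64 bit.
            const string keyPath =
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer" +
                @"\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";
            Registry.SetValue(keyPath, "yourapplication.exe", 9000, RegistryValueKind.DWord);
        }
    }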
The added key looks something like this in the Registry Editor: With this in place my Html Html Help Builder application which has wwhelp.exe as its main executable now works with HTML 5 and CSS 3 documents in the same way that Internet Explorer 9 does. Incidentally I accidentally added an ‘empty’ DWORD value of 0 to my EXE name and that worked as well giving me IE 9 rendering. Although not documented I suspect 0 (or an invalid value) will default to the installed browser. Don’t have a good way to test this but if somebody could try this with IE 8 installed that would be great: What happens when setting 9000 with IE 8 installed? What happens when setting 0 with IE 8 installed? Don’t forget to add Keys for Host Environments If you’re developing your application in Visual Studio and you run the debugger you may find that your application is still not rendering right, but if you run the actual generated EXE from Explorer or the OS command prompt it works. That’s because when you run the debugger in Visual Studio it wraps your application into a debugging host container. For this reason you might want to also add another registry key for yourapp.vshost.exe on your development machine. If you’re developing in Visual FoxPro make sure you add a key for vfp9.exe to see the rendering adjustments in the Visual FoxPro development environment. Cleaner HTML - no more HTML mangling! There are a number of additional benefits to setting up rendering of the Web Browser control to the IE 9 engine (or even the IE 8 engine) beyond the obvious rendering functionality. IE 9 actually returns your HTML in something that resembles the original HTML formatting, as opposed to the IE 7 default format which mangled the original HTML content. If you do the following in the WPF application: private void button2_Click(object sender, RoutedEventArgs e) { dynamic doc = this.webBrowser.Document; MessageBox.Show(doc.body.outerHtml); } you get different output depending on the rendering mode active. With the default IE 7 rendering you get: <BODY><DIV> <H1>Rounded Corners and Shadows - Creating Dialogs in CSS</H1> <DIV class=toolbarcontainer><A class=hoverbutton href="./"><IMG src="../../css/images/home.gif"> Home</A> <A class=hoverbutton href="RoundedCornersAndShadows.htm"><IMG src="../../css/images/refresh.gif"> Refresh</A> </DIV> <DIV class=containercontent> <FIELDSET><LEGEND>Plain Box</LEGEND><!-- Simple Box with rounded corners and shadow --> <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow"> <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox">Simple Rounded Corner Box. </DIV></DIV></FIELDSET> <FIELDSET><LEGEND>Box with Header</LEGEND> <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow"> <DIV class="gridheaderleft roundbox-top">Box with a Header</DIV> <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox-bottom">Simple Rounded Corner Box. 
</DIV></DIV></FIELDSET> <FIELDSET><LEGEND>Dialog Style Window</LEGEND> <DIV style="POSITION: relative; WIDTH: 450px" id=divDialog class="dialog boxshadow" jQuery16107208195684204002="2"> <DIV style="POSITION: relative" class=dialog-header> <DIV class=closebox></DIV>User Sign-in <DIV class=closebox jQuery16107208195684204002="3"></DIV></DIV> <DIV class=descriptionheader>This dialog is draggable and closable</DIV> <DIV class=dialog-content><LABEL>Username:</LABEL> <INPUT name=txtUsername value=" "> <LABEL>Password</LABEL> <INPUT name=txtPassword value=" "> <HR> <INPUT id=btnLogin value=Login type=button> </DIV> <DIV class=dialog-statusbar>Ready</DIV></DIV></FIELDSET> </DIV> <SCRIPT type=text/javascript>     $(document).ready(function () {         $("#divDialog")             .draggable({ handle: ".dialog-header" })             .closable({ handle: ".dialog-header",                 closeHandler: function () {                     alert("Window about to be closed.");                     return true;  // true closes - false leaves open                 }             });     }); </SCRIPT> </DIV></BODY> Now lest you think I’m out of my mind and create complete whacky HTML rooted in the last century, here’s the IE 9 rendering mode output which looks a heck of a lot cleaner and a lot closer to my original HTML of the page I’m accessing: <body> <div>         <h1>Rounded Corners and Shadows - Creating Dialogs in CSS</h1>     <div class="toolbarcontainer">         <a class="hoverbutton" href="./"> <img src="../../css/images/home.gif"> Home</a>         <a class="hoverbutton" href="RoundedCornersAndShadows.htm"> <img src="../../css/images/refresh.gif"> Refresh</a>     </div>         <div class="containercontent">     <fieldset>         <legend>Plain Box</legend>                <!-- Simple Box with rounded corners and shadow -->             <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                              <div style="background: khaki;" class="boxcontenttext roundbox">                     Simple Rounded Corner Box.                 </div>             </div>     </fieldset>     <fieldset>         <legend>Box with Header</legend>         <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                          <div class="gridheaderleft roundbox-top">Box with a Header</div>             <div style="background: khaki;" class="boxcontenttext roundbox-bottom">                 Simple Rounded Corner Box.             
</div>         </div>     </fieldset>       <fieldset>         <legend>Dialog Style Window</legend>         <div style="width: 450px; position: relative;" id="divDialog" class="dialog boxshadow">             <div style="position: relative;" class="dialog-header">                 <div class="closebox"></div>                 User Sign-in             <div class="closebox"></div></div>             <div class="descriptionheader">This dialog is draggable and closable</div>                    <div class="dialog-content">                             <label>Username:</label>                 <input name="txtUsername" value=" " type="text">                 <label>Password</label>                 <input name="txtPassword" value=" " type="text">                                 <hr/>                                 <input id="btnLogin" value="Login" type="button">                        </div>             <div class="dialog-statusbar">Ready</div>         </div>     </fieldset>     </div> <script type="text/javascript">     $(document).ready(function () {         $("#divDialog")             .draggable({ handle: ".dialog-header" })             .closable({ handle: ".dialog-header",                 closeHandler: function () {                     alert("Window about to be closed.");                     return true;  // true closes - false leaves open                 }             });     }); </script>        </div> </body> IOW, in IE9 rendering mode IE9 is much closer (but not identical) to the original HTML from the page on the Web that we’re reading from. As a side note: Unfortunately, the browser feature emulation can't be applied against the Html Help (CHM) Engine in Windows which uses the Web Browser control (or COM interfaces anyway) to render Html Help content. I tried setting up hh.exe which is the help viewer, to use IE 9 rendering but a help file generated with CSS3 features will simply show in IE 7 mode. Bummer - this would have been a nice quick fix to allow help content served from CHM files to look better. HTML Editing leaves HTML formatting intact In the same vane, if you do any inline HTML editing in the control by setting content to be editable, IE 9’s control does a much more reasonable job of creating usable and somewhat valid HTML. It also leaves the original content alone other than the text your are editing or adding. No longer is the HTML output stripped of excess spaces and reformatted in IEs format. So if I do: private void button3_Click(object sender, RoutedEventArgs e) { dynamic doc = this.webBrowser.Document; doc.body.contentEditable = true; } and then make some changes to the document by typing into it using IE 9 mode, the document formatting stays intact and only the affected content is modified. The created HTML is reasonably clean (although it does lack proper XHTML formatting for things like <br/> <hr/>). This is very different from IE 7 mode which mangled the HTML as soon as the page was loaded into the control. Any editing you did stripped out all white space and lost all of your existing XHTML formatting. In IE 9 mode at least *most* of your original formatting stays intact. This is huge! In Html Help Builder I have supported HTML editing for a long time but the HTML mangling by the Web Browser control made it very difficult to edit the HTML later. Previously IE would mangle the HTML by stripping out spaces, upper casing all tags and converting many XHTML safe tags to its HTML 3 tags. 
Now IE leaves most of my document alone while editing, and creates cleaner and more compliant markup (with the exception of self-closing elements like BR/HR). The end result is that I now have HTML editing in place that's much cleaner and actually capable of being manually edited. Caveats, Caveats, Caveats: It wouldn't be Internet Explorer if there weren't some major compatibility issues involved in using these various browser version interactions. The biggest thing I ran into is that there are odd differences in some of the COM interfaces and what they return. I specifically ran into a problem with the document.selection.createRange() function, which with IE 7 compatibility returns an expected text range object. When running in IE 8 or IE 9 mode, however, I could not retrieve a valid text range with this code, where loEdit is the WebBrowser control: loRange = loEdit.document.selection.CreateRange() The loRange object returned (here in FoxPro) had a length property of 0 but none of the other properties of the TextRange or TextRangeCollection objects were available. I figured this was due to some changed security settings, but even after elevating the Intranet Security Zone and mucking with the other browser feature flags pertaining to security I had no luck. In the end I relented and used a JavaScript function in my editor document that returns a selection range object: function getselectionrange() { var range = document.selection.createRange(); return range; } and call that JavaScript function from my host application's code: *** Use a function in the document to get around HTML Editing issues loRange = loEdit.document.parentWindow.getselectionrange(.f.) and that does work correctly. This wasn't a big deal as I'm already loading a support script file into the editor page, so all I had to do was add the function to this existing script file. You can find out more about how to call script code in the Web Browser control from a host application in a previous post of mine. IE 8 and 9 also clamp down the security environment a little more than the default IE 7 control, so there may be other issues you run into. Other than the createRange() problem above I haven't seen anything else breaking in my code so far though, and that's encouraging at least, since it uses a lot of HTML document manipulation for the custom editor I've created (and would love to replace – any PROFESSIONAL alternatives, anybody?). Registry Key Installation for your Application: It's important to remember that this registry setting is made per application, so most likely this is something you want to set up with your installer. Also remember that 32 and 64 bit settings require separate settings in the registry, so if you're creating your installer you most likely will want to set both keys in the registry preemptively for your application. I use Tarma Installer for all of my application installs, and in Tarma I configure registry keys for both and set a flag to only install the latter key group in the 64 bit version. Because this setting is application specific you have to do this for every application you install, unfortunately, but this also means that you can safely configure this setting in the registry because it is after all only applied to your application. Another problem with installer-based setup is version detection. If IE 8 is installed I'd want 8000 for the value; if IE 9 is installed I want 9000. I can do this easily in code but in the installer this is much more difficult.
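For illustration, the code-side detection might look something like this minimal sketch (it assumes the "Version" string Windows keeps under the Internet Explorer key reflects the installed browser; error handling is omitted):

    using System;
    using Microsoft.Win32;

    class IeVersionCheck
    {
        // Returns 9000 when IE9 (or later) is installed, 8000 for IE8, else 7000.
        static int EmulationValue()
        {
            var version = (string)Registry.GetValue(
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer",
                "Version", "7.0");
            int major = int.Parse(version.Split('.')[0]);
            if (major >= 9) return 9000;
            if (major == 8) return 8000;
            return 7000;
        }

        static void Main() => Console.WriteLine(EmulationValue());
    }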
In the installer I don't have a good solution for this at the moment, but given that the app works with IE 7 mode now, IE 9 mode is just a bonus. If IE 9 is not installed and 9000 is used, the default rendering will remain in use.

It sure would be nice if we could specify the IE rendering mode as a property, but I suspect the ActiveX container has to know before it loads what actual version to load up, and once loaded can only load a single version of IE. This would account for this annoying application-level configuration…

Summary

The registry feature emulation has been available for quite some time, but I just found out about it today and started experimenting with it. I'm stoked to see that this is available, as I'd pretty much given up on ever seeing any better rendering in the Web Browser control. Now at least my apps can take advantage of newer HTML features. Now if we could only get better HTML editing support somehow <snicker>… ah, can't have everything.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  FoxPro  Windows

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework.

Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

I'm not suggesting there's no room for improvement in today's O/R mapper frameworks - there always is - but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

Performance tuning basics and rules

Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

The second most important rule you have to understand is based on the old saying "penny wise, pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you optimize that part away completely. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem identified -> no solution.

One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor.

Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate

The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page takes several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

Analyze

Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken, analysis of each participating part is essential.

Start with the code you wrote which starts the task: analyze the code and track the path it follows through your application. Examine what the code does along the way and verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

Reviewing the code you wrote is a good tool for getting a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools necessary: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling

.NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code; they thus reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling

.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and at the software making it possible to consume the database in your application: the O/R mapper. To understand how much the various parts of the O/R mapper and the database contribute to the total time taken for task T, we need different tools.

There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor offers one. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This is important, because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to be transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate both do), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other activity behind the scenes as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret

After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation.

All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e.
don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now first have to look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

Interpreting means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area contributing the most time, action should be taken. If not, you can only accept the situation and move on.

In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

Fix

After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results.

After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they merely moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
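One small habit that helps with the "re-do the analysis" step is wrapping the isolated task in a repeatable timing harness, so every before/after comparison measures the same thing. A minimal sketch - the task delegate passed in (FetchOrdersForCustomer here) is a placeholder for your own isolated task, not part of any framework:

using System;
using System.Diagnostics;

public static class TaskTimer
{
    // Times an isolated task several times and reports the average,
    // so before/after comparisons aren't skewed by a single cold run.
    public static void Measure(string name, Action task, int iterations = 5)
    {
        task(); // warm-up run: JIT compilation, connection pool, caches

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            task();
        sw.Stop();

        Console.WriteLine("{0}: {1} ms avg over {2} runs",
            name, sw.ElapsedMilliseconds / iterations, iterations);
    }
}

// Usage, with a hypothetical task of your own:
// TaskTimer.Measure("Fetch customer orders", () => FetchOrdersForCustomer("CHOPS"));

This obviously doesn't replace a profiler - it only gives you a stable end-to-end number per task to compare across fixes.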
Performance tuning is about dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to move away from the strict OO path and execute queries directly against the RDBMS - these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or compromise: which analysis data and which interpretation led you to that choice. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading with a prefetch path to fetch the customers together with the orders (see the sketch after this list). SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

Prefetch paths using many path nodes, or sorting, or limiting: an eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be that the amount of data fetched in a child node is bigger than you think. Also consider that data in every node is merged into the parent on the client. This is fast, but it can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you enable server-side paging on the datasource control used. In this case, paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the datareader, so it won't fetch all rows even if the query suggests it does.
Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data consumption happens on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in memory to calculate a value.

Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in memory, as the Linq query is defined on an IEnumerable<T>, and not on the IQueryable<T> (a concrete before/after sketch follows at the end of this article). Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and can thus be removed from the cache.

Fetching all entities to update with an expression into memory first: similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.
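Here's the SELECT N+1 sketch promised in the list above. The repository API (GetOrders, GetCustomer, GetOrdersWithCustomers) is hypothetical - stand-ins for whatever lazy loading and prefetch path / eager loading mechanism your O/R mapper offers - but the query counts are exactly what a profiler would show you:

using System;
using System.Collections.Generic;

public class NPlusOneExample
{
    // BAD: 1 query for the orders, plus 1 query per order for its customer.
    // Binding Customer.CompanyName to a grid column triggers exactly this.
    public static void ShowGridLazy(IOrderRepository repo)
    {
        List<Order> orders = repo.GetOrders();                   // 1 query
        foreach (Order order in orders)
        {
            Customer c = repo.GetCustomer(order.CustomerId);     // N queries
            Console.WriteLine("{0} - {1}", order.Id, c.CompanyName);
        }
    }

    // GOOD: one fetch that eagerly loads the related customers
    // (with LLBLGen Pro this would be a prefetch path; other mappers
    // have equivalents such as Include / FetchMany).
    public static void ShowGridEager(IOrderRepository repo)
    {
        List<Order> orders = repo.GetOrdersWithCustomers();      // 2 queries total
        foreach (Order order in orders)
            Console.WriteLine("{0} - {1}", order.Id, order.Customer.CompanyName);
    }
}

// Hypothetical supporting types, just to make the sketch self-contained:
public class Customer { public string CompanyName; }
public class Order { public int Id; public int CustomerId; public Customer Customer; }
public interface IOrderRepository
{
    List<Order> GetOrders();
    Customer GetCustomer(int customerId);
    List<Order> GetOrdersWithCustomers();
}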
Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.
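And, as promised, the before/after sketch for the .ToList() pitfall from the list: the customers parameter below stands in for whatever IQueryable<T> root your O/R mapper exposes (metaData.Customers in the example above); the only point is where the filter runs:

using System.Collections.Generic;
using System.Linq;

public class ToListPitfall
{
    public static void Queries(IQueryable<Customer> customers)
    {
        // BAD: .ToList() materializes ALL customers first; the Where clause
        // then runs in memory against an IEnumerable<T>.
        var inMemory = from c in customers.ToList()
                       where c.Country == "Norway"
                       select c;

        // GOOD: the query stays an IQueryable<T>, so the filter is translated
        // to SQL and only the Norwegian customers cross the wire.
        var onServer = from c in customers
                       where c.Country == "Norway"
                       select c;

        List<Customer> result = onServer.ToList(); // materialize at the end
    }
}

// Minimal entity stub so the sketch compiles on its own:
public class Customer
{
    public string Country { get; set; }
    public string CompanyName { get; set; }
}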

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

SessionState and LocalStorage are easy

The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string based key value store that has getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects and so can be iterated like an array, with a length property, and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");
if (time)
    time = new Date(time * 1);
else
    time = new Date();

sessionStorage stores data that is browser session specific and has the lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache).

Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way.

The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview. Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in session and local storage.

The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters, and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
The code to deal with the sessionStorage (and localStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        var count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}

function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are:

sessionStorage.getItem(key);
sessionStorage.setItem(key, stringVal);

Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough serialize JSON data, so it's quite possible to pass complex objects and store them into a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

to retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab. You can also use this tool to refresh or remove entries from storage.

What we just looked at is a purely client side implementation where a couple of counters are stored. For rich, client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests), or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

Why do I need Client-side Page Caching for Server Rendered HTML?

I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data, either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and…

…you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting your browsing of the list from scratch, at the top. Not cool!

Sound familiar? This is a pretty common scenario with server rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverFlow that sort of works, because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled, the user experience can be really disruptive.

Content Caching

If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page, based on a special flag that tells us to use the cached version? Let's see how we can do this.

A real World Use Case

Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout, as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/

However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base url with your browser width under 800px).

As you can see, this means that the list of classifieds posts now is a plain list, and there's a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis, the full list is never fully loaded; rather there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there, because the list was now rendering as if it was the first page hit. The additional listings, any filters, the selection of an item - all were lost. Major Suckage!
Using Client SessionStorage to cache Server Rendered Content

To work around this problem I decided to cache the rendered page content from the list in sessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter, it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionStorage cache and simply inserts it into the page.

It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side, because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. Returning to the list page from the display page (or a host of other pages) looks like this:

https://classifieds.gorge.net/list?back=True

The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionStorage into separate functions because they are almost always used in several places.

page.saveData = function(id) {
    if (!sessionStorage)
        return;
    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};
page.restoreData = function() {
    if (!sessionStorage)
        return;
    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;
    return JSON.parse(data);
};

The data that is saved is an object which contains an id - the selected element when the user clicks - and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
Finally the html from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured can be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization, that's OK. Since I'm using sessionStorage, the storage space has no immediate limits.

Next is the core logic to handle saving and restoring the page state. At first glance this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);

        var data = page.restoreData();  // restore from sessionState
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }
        $("#SizingContainer").html(data.html);
        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");
        $("#PostItemContainer").scrollTop(data.scroll);
        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null);  // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;
        if (window.noFrames)
            page.saveData(id);
        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;
        return false;
    });
    …

The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If present, the cached data structure is read from sessionStorage. It's important here to check whether data was returned. If the user had back=true on the query string but there is no cached data, they likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always!

If there is data, it's loaded, and the data.html is restored by simply injecting the HTML back into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple, and it's quite quick even with a fully loaded list of additional items, even on a phone. The actual HTML data is stored to the cache on every initial page load and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates the cache with the selected element. For the click handling I use a data-id attribute on the list item (.postitem) and retrieve the id from that. That id is then used to navigate to the actual entry, as well as being stored in the cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
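For completeness, here's roughly what the server side of the back flag could look like if you handled it in the controller rather than in the view, as the Razor example above does. All names here (the List action, LoadListings, the Listings property) are hypothetical stand-ins, since the post doesn't show the actual controller - the point is only that the expensive list query can be skipped when the client will restore the HTML from its cache:

using System.Collections.Generic;
using System.Web.Mvc;

// Hypothetical view model - the real one isn't shown in the post
public class MessageListViewModel
{
    public bool IsBack { get; set; }
    public List<object> Listings { get; set; }
    public bool RequireLoadEntry { get; set; }
}

public class ListController : Controller
{
    // GET /list?back=True
    public ActionResult List(bool? back)
    {
        var model = new MessageListViewModel { IsBack = back ?? false };

        // Skip the expensive list query when the client will restore
        // the rendered HTML from sessionStorage instead
        if (!model.IsBack)
            model.Listings = LoadListings(50); // hypothetical data access call

        return View(model);
    }

    private List<object> LoadListings(int maxItems)
    {
        // placeholder for the real repository/database query
        return new List<object>();
    }
}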
The overall save/restore process is pretty straightforward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

Some things to watch out for

As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle, and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

- JavaScript event handling logic
- Timing of manipulating DOM elements
- Inline script code
- Bookmarking of the cache URL when no cache exists

Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you think it would normally run in the DOM rendering cycle.

JavaScript Event Hookups

The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

$("#btnSearch").click( function() {…});

that works fine when the page loads with server rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:

$("#SelectionContainer").on("click", "#btnSearch", function() {…});

which monitors a parent element for the events and checks the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document, bindings still work. For any cached content, use deferred events.

Timing of manipulating DOM Elements

Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

Inline Script Code

Here's another small problem I ran into: I use a DateTime picker widget I built a while back that relies on the jQuery date time picker, and I created a helper function that allows keyboard date navigation into it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last bullet, it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead. That's a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

Bookmarking Cached Urls

When you cache HTML content you have to decide whether you cache on the client and then not render that same content on the server.
In the Classifieds app I didn't render the server side content, so if the user comes to the page with back=True and there is no cached content, I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists, and if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

Don't use <body> to replace Content

Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements, and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap it just around the actual content you want to replace. In the app above, #SizingContainer is that container.

Other Approaches

The approach I've used for this application is kind of specific to the existing server rendered application we're running, so it's just one approach you can take with caching. That said, for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

Using Partial HTML Rendering via AJAX
Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a partial view and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. no checking for a back=true switch). All the logic related to caching is handled on the client in this case.

Using JSON Data and Client Rendering
The hardcore client option is to do the whole UI SPA style and pull data from the server, then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the partial rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization, since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.

State of the Session

SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data.
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources

- Overview of localStorage (also applies to sessionStorage)
- Web Storage Compatibility
- Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • Use Extension method to write cleaner code

    - by Fredrik N
This blog post will show you, step by step, how to refactor some code to be more readable (at least that's what I think). Patrik Löwnedahl gave me some of the ideas when we were talking about making code much cleaner. The following is a simple application that holds a list of movies (Normal and Transfer). The task of the application is to calculate the total sum for each movie type and also display the price of each movie.

class Program
{
    enum MovieType
    {
        Normal,
        Transfer
    }

    static void Main(string[] args)
    {
        var movies = GetMovies();

        int totalPriceOfNormalMovie = 0;
        int totalPriceOfTransferMovie = 0;

        foreach (var movie in movies)
        {
            if (movie == MovieType.Normal)
            {
                totalPriceOfNormalMovie += 2;
                Console.WriteLine("$2");
            }
            else if (movie == MovieType.Transfer)
            {
                totalPriceOfTransferMovie += 3;
                Console.WriteLine("$3");
            }
        }
    }

    private static IEnumerable<MovieType> GetMovies()
    {
        return new List<MovieType>() { MovieType.Normal, MovieType.Transfer, MovieType.Normal };
    }
}

In the code above I'm using an enum, a good way to add types (isn't it ;)). I also use one foreach loop to calculate the prices; the loop has a condition statement to check what kind of movie was added to the list of movies. I reuse the single foreach only to increase performance, and let it do two things (isn't that smart of me?! ;)).

First of all, I must admit I'm not a big fan of enums. Enums often result in ugly condition statements and can be hard to maintain (if a new type is added, we need to check all the code in our app to see if we use the enum somewhere else). I also don't care much about premature optimization when it comes to writing code (though of course I keep performance in mind). I'd rather use two foreach loops and let each do one thing instead of two. So based on what I don't like, and on Martin Fowler's Refactoring catalog, I'm going to refactor this code into what I'd call more elegant and cleaner code.

First of all I'm going to use Split Loop to make sure each foreach does one thing, not two. This will result in two foreach loops. (Don't worry about performance here; if this turns out to hurt performance you can refactor later, but computers are so fast today that iterating through a list twice is rarely time consuming.) Note: the foreach actually does four things; I'll come back to that later.
var movies = GetMovies();

int totalPriceOfNormalMovie = 0;
int totalPriceOfTransferMovie = 0;

foreach (var movie in movies)
{
    if (movie == MovieType.Normal)
    {
        totalPriceOfNormalMovie += 2;
        Console.WriteLine("$2");
    }
}

foreach (var movie in movies)
{
    if (movie == MovieType.Transfer)
    {
        totalPriceOfTransferMovie += 3;
        Console.WriteLine("$3");
    }
}

To remove the condition statements we can use the Where extension method, which is added to IEnumerable<T> and located in the System.Linq namespace:

foreach (var movie in movies.Where(m => m == MovieType.Normal))
{
    totalPriceOfNormalMovie += 2;
    Console.WriteLine("$2");
}

foreach (var movie in movies.Where(m => m == MovieType.Transfer))
{
    totalPriceOfTransferMovie += 3;
    Console.WriteLine("$3");
}

The above code still does two things per loop: calculate the total price and display the price of the movie. I will not take care of that at the moment; instead I will focus on the enum and try to remove it. One way to remove an enum is to use Replace Conditional with Polymorphism. So I will create two classes: one base class called Movie, and one called MovieTransfer.
The Movie class has a property called Price, so the Movie will now hold its own price:

public class Movie
{
    public virtual int Price { get { return 2; } }
}

public class MovieTransfer : Movie
{
    public override int Price { get { return 3; } }
}

The following code has no enum and uses the new Movie classes instead. (Note: because MovieTransfer inherits from Movie, a plain 'm is Movie' check would match both types, so the filter for normal movies compares the exact runtime type instead.)

class Program
{
    static void Main(string[] args)
    {
        var movies = GetMovies();

        int totalPriceOfNormalMovie = 0;
        int totalPriceOfTransferMovie = 0;

        foreach (var movie in movies.Where(m => m.GetType() == typeof(Movie)))
        {
            totalPriceOfNormalMovie += movie.Price;
            Console.WriteLine(movie.Price);
        }

        foreach (var movie in movies.Where(m => m is MovieTransfer))
        {
            totalPriceOfTransferMovie += movie.Price;
            Console.WriteLine(movie.Price);
        }
    }

    private static IEnumerable<Movie> GetMovies()
    {
        return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() };
    }
}

If you take a look at the foreach loops now, you can see each still actually does two things: calculate the price and display the price.
We can do some more refactoring here by using the Sum extension method to calculate the total price of the movies:

static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = movies.Where(m => m is Movie) .Sum(m => m.Price); int totalPriceOfTransferMovie = movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); foreach (var movie in movies.Where( m => m is Movie)) Console.WriteLine(movie.Price); foreach (var movie in movies.Where( m => m is MovieTransfer)) Console.WriteLine(movie.Price); }

Now that the Movie object holds the price, there is no need to use two separate foreach loops to display the prices of the movies in the list, so we can use only one instead:

foreach (var movie in movies) Console.WriteLine(movie.Price);

If we want to increase the Maintainability Index we can use Extract Method to move the Sum of the prices into two separate methods.
The name of each method will explain what we are doing:

static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); foreach (var movie in movies) Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m is Movie) .Sum(m => m.Price); }

Now to the last thing: I love the ForEach method of List<T>, but IEnumerable<T> doesn't have it, so I created my own ForEach extension. Here is the code of the ForEach extension method:

public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } }

I will now replace the foreach loop with this ForEach method:

static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(m => Console.WriteLine(m.Price)); }

The ForEach on the movies will now display the price of each movie, but maybe we also want to display the name of the movie, etc., so we can use Extract Method to move the lambda expression into a method instead, and let the method explain what we are displaying:

movies.ForEach(DisplayMovieInfo); private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); }
Now the refactoring is done! Here is the complete code:

class Program { static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(DisplayMovieInfo); } private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m is Movie) .Sum(m => m.Price); } private static IEnumerable<Movie> GetMovies() { return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() }; } } public class Movie { public virtual int Price { get { return 2; } } } public class MovieTransfer : Movie { public override int Price { get { return 3; } } } public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } }

I think the new code is much cleaner than the first version, and I love the ForEach extension on IEnumerable<T>; I can use it for different kinds of things, for example:

movies.Where(m => m is Movie) .ForEach(DoSomething);

By using the Where and ForEach extension methods, some if statements can be removed, which makes the code much cleaner. But beauty is in the eye of the beholder.
What would you have done differently? What do you think would make the first example in the blog post look cleaner than my result? Comments are welcome! If you want to know when I publish a new blog post, you can follow me on twitter: http://www.twitter.com/fredrikn

    Read the article

  • Grandparent – Parent – Child Reports in SQL Developer

    - by thatjeffsmith
You’ll never see one of these family stickers on my car, but I promise not to judge…much. Parent – Child reports are pretty straightforward in Oracle SQL Developer. You have a ‘parent’ report, and then one or more ‘child’ reports which are based off of a value in a selected row or a value from the parent. If you need a quick tutorial to get up to speed on the subject, go ahead and take 5 minutes to work through one first. Shortly before I left for vacation 2 weeks ago, I got an interesting question from one of my Twitter followers: @thatjeffsmith any luck with the #Oracle awr reports in #SQLDeveloper? This is easy with multi generation parent>child. Done in #dbvisualizer — Ronald Rood (@Ik_zelf) August 26, 2012 Now that I’m back from vacation, I can tell Ronald and everyone else that the answer is ‘Yes!’ And here’s how. Time to Get Out Your XML Editor Don’t have one? That’s OK, SQL Developer can edit XML files. While the Reporting interface doesn’t surface the ability to create multi-generational reports, the underlying code definitely supports it. We just need to hack away at the XML that powers a report. For this example I’m going to start simple: a query that brings back DEPARTMENTs, then EMPLOYEEs, then JOBs. We can build the first two parts of the report using the report editor. A Parent-Child report in Oracle SQL Developer (Departments – Employees) Save the Report to XML Once you’ve generated the XML file, open it with your favorite XML editor. For this example I’ll be using the built-in XML editor in SQL Developer. SQL Developer Reports in their raw XML glory! Right after the PDF element in the XML document, we can start a new ‘child’ report by inserting a DISPLAY element. I just copied and pasted the existing ‘display’ down so I wouldn’t have to worry about screwing anything up. Note I also needed to change the ‘master’ name so it wouldn’t confuse SQL Developer when I try to import/open a report that has the same name. I also needed to update the binds tags to reflect the names from the child versus the original parent report. This is pretty easy to figure out on your own actually – I mean I’m no real developer and I got it pretty quickly.
<?xml version="1.0" encoding="UTF-8" ?> <displays> <display id="92857fce-0139-1000-8006-7f0000015340" type="" style="Table" enable="true"> <name><![CDATA[Grandparent]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.departments]]></sql> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Parent]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.employees where department_id = :DEPARTMENT_ID]]></sql> <binds> <bind id="DEPARTMENT_ID"> <prompt><![CDATA[DEPARTMENT_ID]]></prompt> <tooltip><![CDATA[DEPARTMENT_ID]]></tooltip> <value><![CDATA[NULL_VALUE]]></value> </bind> </binds> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Child]]></name>
<description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.jobs where job_id = :JOB_ID]]></sql> <binds> <bind id="JOB_ID"> <prompt><![CDATA[JOB_ID]]></prompt> <tooltip><![CDATA[JOB_ID]]></tooltip> <value><![CDATA[NULL_VALUE]]></value> </bind> </binds> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> </display> </display> </display> </displays> Save the file and ‘Open Report…’ You’ll see your new report name in the tree. You just need to double-click it to open it. Here’s what it looks like running: A 3 generation family. Now Let’s Build an AWR Text Report Ronald wanted to have the ability to query AWR snapshots and generate the AWR reports. That requires a few inputs, including a START and STOP snapshot ID. That basically tells AWR what time period to use for generating the report. And here’s where it gets tricky. We’ll need to use aliases for the SNAP_ID column. Since we’re using the same column name from 2 different queries, we need to use different bind variables. Fortunately for us, SQL Developer’s clever enough to use the column alias as the BIND. Here’s what I mean: Grandparent Query SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc Parent Query SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc And here’s where it gets even trickier – you can’t reference a bind from outside the parent query. My grandchild report can’t reference a value from the grandparent report. So I just carry the selected value down to the parent. In my parent query SELECT you see the ‘:START1’ at the end? That’s making that value available to me when I use it in my grandchild query. To complicate things a bit further, I can’t have a column name with a ‘:’ in it, or SQL Developer will get confused when I try to reference the value of the variable with the ‘:’ – and ‘::Name’ doesn’t work. But that’s OK, just alias it. Grandchild Query Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1,:CARRY, :STOP1)); Ok, and the last trick – I hard-coded my report to use my database’s DB_ID and INST_ID in the AWR package call.
Now a smart person could figure out a way to make that work on any database, but I got lazy and ran out of time. But this should be far enough for you to take it from here. Here’s what my report looks like now: Caution: don’t run this if you haven’t licensed Enterprise Edition with Diagnostic Pack. The Raw XML for this AWR Report <?xml version="1.0" encoding="UTF-8" ?> <displays> <display id="927ba96c-0139-1000-8001-7f0000015340" type="" style="Table" enable="true"> <name><![CDATA[AWR Start Stop Report Final]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc]]></sql> </query> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Stop SNAP_ID]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc]]></sql> </query> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[AWR Report]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1,:CARRY, :STOP1 ))]]></sql> </query> </display> </display> </display> </displays> Should We Build Support for Multiple Levels of Reports into the User Interface? Let us know! A comment here or a suggestion on our SQL Developer Exchange might help your case!

    Read the article

  • Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal

    - by Murali Veligeti
We need to understand the main difference between portlet workflow and servlet workflow. The main difference is that a request to a portlet can have two distinct phases: 1) the Action phase and 2) the Render phase. The Action phase is executed only once and is where any 'backend' changes or actions occur, such as making changes in a database. The Render phase then produces what is displayed to the user each time the display is refreshed. The critical point here is that for a single overall request, the action phase is executed only once, but the render phase may be executed multiple times. This provides a clean separation between the activities that modify the persistent state of your system and the activities that generate what is displayed to the user. The dual phases of portlet requests are one of the real strengths of the JSR-168 specification. For example, dynamic search results can be updated routinely on the display without the user explicitly re-running the search. Most other portlet MVC frameworks attempt to completely hide the two phases from the developer and make it look as much like traditional servlet development as possible - we think this approach removes one of the main benefits of using portlets. So, the separation of the two phases is preserved throughout the Spring Portlet MVC framework. The primary manifestation of this approach is that where the servlet version of the MVC classes will have one method that deals with the request, the portlet version of the MVC classes will have two methods that deal with the request: one for the action phase and one for the render phase. For example, where the servlet version of AbstractController has the handleRequestInternal(..) method, the portlet version of AbstractController has handleActionRequestInternal(..) and handleRenderRequestInternal(..) methods. The Spring Portlet Framework is designed around a DispatcherPortlet that dispatches requests to handlers, with configurable handler mappings and view resolution, just as the DispatcherServlet in the Spring Web Framework does.  Developing portlet.xml Let's start the sample development by creating the portlet.xml file in the /WebContent/WEB-INF/ folder as shown below: <?xml version="1.0" encoding="UTF-8"?> <portlet-app version="2.0" xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <portlet> <portlet-name>SpringPortletName</portlet-name> <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class> <supports> <mime-type>text/html</mime-type> <portlet-mode>view</portlet-mode> </supports> <portlet-info> <title>SpringPortlet</title> </portlet-info> </portlet> </portlet-app> DispatcherPortlet is responsible for handling every client request. When it receives a request, it finds out which Controller class should be used for handling this request, and then it calls its handleActionRequest() or handleRenderRequest() method based on the request processing phase. The Controller class executes business logic and returns a View name that should be used for rendering markup to the user. The DispatcherPortlet then forwards control to that View for actual markup generation. As you can see, DispatcherPortlet is the central dispatcher for use within Spring Portlet MVC Framework. Note that your portlet application can define more than one DispatcherPortlet.
If it does so, then each of these portlets operates its own namespace, loading its application context and handler mapping. The DispatcherPortlet is also responsible for loading application context (Spring configuration file) for this portlet. First, it tries to check the value of the configLocation portlet initialization parameter. If that parameter is not specified, it takes the portlet name (that is, the value of the <portlet-name> element), appends "-portlet.xml" to it, and tries to load that file from the /WEB-INF folder. In the portlet.xml file, we did not specify the configLocation initialization parameter, so let's create SpringPortletName-portlet.xml file in the next section. Developing SpringPortletName-portlet.xml Create the SpringPortletName-portlet.xml file in the /WebContent/WEB-INF folder of your application as shown below: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd"> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/jsp/"/> <property name="suffix" value=".jsp"/> </bean> <bean id="pointManager" class="com.wlp.spring.bo.internal.PointManagerImpl"> <property name="users"> <list> <ref bean="point1"/> <ref bean="point2"/> <ref bean="point3"/> <ref bean="point4"/> </list> </property> </bean> <bean id="point1" class="com.wlp.spring.bean.User"> <property name="name" value="Murali"/> <property name="points" value="6"/> </bean> <bean id="point2" class="com.wlp.spring.bean.User"> <property name="name" value="Sai"/> <property name="points" value="13"/> </bean> <bean id="point3" class="com.wlp.spring.bean.User"> <property name="name" value="Rama"/> <property name="points" value="43"/> </bean> <bean id="point4" class="com.wlp.spring.bean.User"> <property name="name" value="Krishna"/> <property name="points" value="23"/> </bean> <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource"> <property name="basename" value="messages"/> </bean> <bean name="/users.htm" id="userController" class="com.wlp.spring.controller.UserController"> <property name="pointManager" ref="pointManager"/> </bean> <bean name="/pointincrease.htm" id="pointIncreaseController" class="com.wlp.spring.controller.IncreasePointsFormController"> <property name="sessionForm" value="true"/> <property name="pointManager" ref="pointManager"/> <property name="commandName" value="pointIncrease"/> <property name="commandClass" value="com.wlp.spring.bean.PointIncrease"/> <property name="formView" value="pointincrease"/> <property name="successView" value="users"/> </bean> <bean id="parameterMappingInterceptor" class="org.springframework.web.portlet.handler.ParameterMappingInterceptor" /> <bean id="portletModeParameterHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeParameterHandlerMapping"> <property name="order" value="1" /> <property name="interceptors"> <list> <ref bean="parameterMappingInterceptor" /> </list> </property> <property name="portletModeParameterMap"> <map> <entry key="view"> <map> <entry key="pointincrease"> <ref bean="pointIncreaseController" /> </entry> <entry key="users"> <ref bean="userController" /> </entry> </map> </entry> </map> </property> </bean> 
<bean id="portletModeHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeHandlerMapping"> <property name="order" value="2" /> <property name="portletModeMap"> <map> <entry key="view"> <ref bean="userController" /> </entry> </map> </property> </bean> </beans> The SpringPortletName-portlet.xml file is an application context file for your MVC portlet. Two of its bean definitions deserve special attention: userController. Remember that the userController bean definition points to the com.wlp.spring.controller.UserController class. portletModeHandlerMapping. As we discussed in the last section, whenever DispatcherPortlet gets a client request, it tries to find a suitable Controller class for handling that request. That is where PortletModeHandlerMapping comes into the picture. The PortletModeHandlerMapping class is a simple implementation of the HandlerMapping interface and is used by DispatcherPortlet to find a suitable Controller for every request. The PortletModeHandlerMapping class uses the Portlet mode of the current request to find the Controller class to use for handling the request. The portletModeMap property of the portletModeHandlerMapping bean is the place where we map the Portlet mode name to the Controller class. In the sample code, userController is responsible for handling View mode requests. Developing UserController.java In the preceding section, you learned that the userController bean is responsible for handling all the View mode requests. Your next step is to create the UserController.java class as shown below: public class UserController extends AbstractController { private PointManager pointManager; public void handleActionRequest(ActionRequest request, ActionResponse response) throws Exception { } public ModelAndView handleRenderRequest(RenderRequest request, RenderResponse response) throws ServletException, IOException { String now = (new java.util.Date()).toString(); Map<String, Object> myModel = new HashMap<String, Object>(); myModel.put("now", now); myModel.put("users", this.pointManager.getUsers()); return new ModelAndView("users", "model", myModel); } public void setPointManager(PointManager pointManager) { this.pointManager = pointManager; } } Every controller class in Spring Portlet MVC Framework must implement the org.springframework.web.portlet.mvc.Controller interface directly or indirectly. To make things easier, Spring Framework provides the AbstractController class, which is the default implementation of the Controller interface. As a developer, you should always extend your controller from either AbstractController or one of its more specific subclasses. Any implementation of the Controller class should be reusable, thread-safe, and capable of handling multiple requests throughout the lifecycle of the portlet. In the sample code, we create the UserController class by extending AbstractController. Because we don't do any action processing in this portlet, handleActionRequest() is left empty and all of the work happens in handleRenderRequest(). The only thing the portlet should do is render the markup of users.jsp to the user when it receives a render request. To do that, we return a ModelAndView with the view name "users". Developing web.xml According to Portlet Specification 1.0, every portlet application is also a Servlet Specification 2.3-compliant Web application, and it needs a Web application deployment descriptor (that is, web.xml).
Let’s create the web.xml file in the /WEB-INF/ folder as shown below. Follow these steps: Open the existing web.xml file located at /WebContent/WEB-INF/web.xml. Replace the contents of this file with the code as shown below: <servlet> <servlet-name>ViewRendererServlet</servlet-name> <servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>ViewRendererServlet</servlet-name> <url-pattern>/WEB-INF/servlet/view</url-pattern> </servlet-mapping> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> The web.xml file for the sample portlet declares two things: ViewRendererServlet. The ViewRendererServlet is the bridge servlet for portlet support. During the render phase, DispatcherPortlet wraps PortletRequest into ServletRequest and forwards control to ViewRendererServlet for actual rendering. This process allows Spring Portlet MVC Framework to use the same View infrastructure as that of its servlet version, that is, Spring Web MVC Framework. ContextLoaderListener. The ContextLoaderListener class takes care of loading the Web application context at the time of the Web application startup. The Web application context is shared by all the portlets in the portlet application. In case of a duplicate bean definition, the bean definition in the portlet application context takes precedence over the Web application context. The ContextLoader class tries to read the value of the contextConfigLocation Web context parameter to find out the location of the context file. If the contextConfigLocation parameter is not set, then it uses the default value, which is /WEB-INF/applicationContext.xml, to load the context file. The Portlet Controller interface requires two methods that handle the two phases of a portlet request: the action request and the render request. The action phase should be capable of handling an action request and the render phase should be capable of handling a render request and returning an appropriate model and view. While the Controller interface is quite abstract, Spring Portlet MVC offers a lot of controllers that already contain a lot of the functionality you might need – most of these are very similar to controllers from Spring Web MVC. The Controller interface just defines the most common functionality required of every controller - handling an action request, handling a render request, and returning a model and a view. How rendering works As you know, when the user tries to access a page with our portlet on it, or when the user performs some action on any other portlet on that page or tries to refresh that page, a render request is sent to the portlet. In the sample code, because DispatcherPortlet is the main portlet class, Weblogic Portal / Webcenter Portal calls its render() method and then the following sequence of events occurs: The render() method of DispatcherPortlet calls the doDispatch() method, which in turn calls the doRenderService() method. After the doRenderService() method gets control, first it tries to find out the locale of the request by calling the PortletRequest.getLocale() method. This locale is used while making all the locale-related decisions for choices such as which resource bundle should be loaded or which JSP should be displayed to the user based on the locale.
After that, the doRenderService() method starts iterating through all the HandlerMapping classes configured for this portlet, calling their getHandler() method to identify the appropriate Controller for handling this request. In the sample code, we have configured only PortletModeHandlerMapping as a HandlerMapping class. The PortletModeHandlerMapping class reads the value of the current portlet mode, and based on that, it finds the Controller class that should be used to handle this request. In the sample code, UserController is configured to handle the View mode request, so the PortletModeHandlerMapping class returns the object of UserController. After the object of UserController is returned, the doRenderService() method calls its handleRenderRequest() method. The implementation of the handleRenderRequest() method in UserController.java is very simple: it puts the current time and the list of users into the model, and then returns a ModelAndView with the view name "users" to DispatcherPortlet. After control returns to doRenderService(), the next task is to figure out how to render the "users" view. For that, DispatcherPortlet starts iterating through all the ViewResolvers configured in your portlet application, calling their resolveViewName() method. In the sample code we have configured only one ViewResolver, InternalResourceViewResolver. When its resolveViewName() method is called with the view name, it adds the configured /jsp/ prefix and .jsp suffix to the view name and checks whether /jsp/users.jsp exists. If it does exist, it returns the object of JstlView wrapping users.jsp. After control is returned to the doRenderService() method, it creates a PortletRequestDispatcher, which points to /WEB-INF/servlet/view – that is, ViewRendererServlet. Then it sets the object of JstlView in the request and dispatches the request to ViewRendererServlet. After ViewRendererServlet gets control, it reads the JstlView object from the request attribute, creates another RequestDispatcher pointing to the /jsp/users.jsp URL, and passes control to it for actual markup generation. The markup generated by users.jsp is returned to the user. At this point, you may question the need for ViewRendererServlet. Why can't DispatcherPortlet directly forward control to users.jsp? Adding ViewRendererServlet in between allows Spring Portlet MVC Framework to reuse the existing View infrastructure. You may appreciate this more when we discuss how easy it is to integrate the Apache Tiles Framework with your Spring Portlet MVC Framework. The attached project SpringPortlet.zip should be used to import the project into your OEPE workspace. SpringPortlet_Jars.zip contains the jar files required for the application. The project is written against Spring 2.5.  The same JSR 168 portlet should work on Webcenter Portal as well.  Downloads: Download the WeblogicPortal project, which contains the Spring portlet. Download the Spring jars. In addition to the above, you need to download Spring.jar (Spring 2.5)

    Read the article

  • ASP.NET MVC 2 Model Binding for a Collection

    - by nmarun
Yes, yet another post of mine on Model Binding (the previous one is here), but this one uses features introduced in MVC 2. How did I get to writing this post? Well, I’m on a project where we’re doing some MVC work for a shopping cart. Let me show you what I was working with. Below are my model classes:

1: public class Product 2: { 3: public int Id { get; set; } 4: public string Name { get; set; } 5: public int Quantity { get; set; } 6: public decimal UnitPrice { get; set; } 7: } 8:   9: public class Totals 10: { 11: public decimal SubTotal { get; set; } 12: public decimal Tax { get; set; } 13: public decimal Total { get; set; } 14: } 15:   16: public class Basket 17: { 18: public List<Product> Products { get; set; } 19: public Totals Totals { get; set;} 20: }

The view looks as below:

1: <h2>Shopping Cart</h2> 2:   3: <% using(Html.BeginForm()) { %> 4: 5: <h3>Products</h3> 6: <% for (int i = 0; i < Model.Products.Count; i++) 7: { %> 8: <div style="width: 100px;float:left;">Id</div> 9: <div style="width: 100px;float:left;"> 10: <%= Html.TextBox("ID", Model.Products[i].Id) %> 11: </div> 12: <div style="clear:both;"></div> 13: <div style="width: 100px;float:left;">Name</div> 14: <div style="width: 100px;float:left;"> 15: <%= Html.TextBox("Name", Model.Products[i].Name) %> 16: </div> 17: <div style="clear:both;"></div> 18: <div style="width: 100px;float:left;">Quantity</div> 19: <div style="width: 100px;float:left;"> 20: <%= Html.TextBox("Quantity", Model.Products[i].Quantity)%> 21: </div> 22: <div style="clear:both;"></div> 23: <div style="width: 100px;float:left;">Unit Price</div> 24: <div style="width: 100px;float:left;"> 25: <%= Html.TextBox("UnitPrice", Model.Products[i].UnitPrice)%> 26: </div> 27: <div style="clear:both;"><hr /></div> 28: <% } %> 29: 30: <h3>Totals</h3> 31: <div style="width: 100px;float:left;">Sub Total</div> 32: <div style="width: 100px;float:left;"> 33: <%= Html.TextBox("SubTotal", Model.Totals.SubTotal)%> 34: </div> 35: <div style="clear:both;"></div> 36: <div style="width: 100px;float:left;">Tax</div> 37: <div style="width: 100px;float:left;"> 38: <%= Html.TextBox("Tax", Model.Totals.Tax)%> 39: </div> 40: <div style="clear:both;"></div> 41: <div style="width: 100px;float:left;">Total</div> 42: <div style="width: 100px;float:left;"> 43: <%= Html.TextBox("Total", Model.Totals.Total)%> 44: </div> 45: <div style="clear:both;"></div> 46: <p /> 47: <input type="submit" name="Submit" value="Submit" /> 48: <% } %>
Nothing fancy, just a bunch of div’s containing textboxes and a submit button. Just make note that the textboxes have the same name as the property they are going to display. Yea, yea, I know. I’m displaying unit price as a textbox instead of a label, but that’s beside the point (and trust me, this will not be how it’ll look on the production site!!). The way my controller works is that initially two dummy products are added to the basket object and the Totals are calculated based on what products were added in what quantities and their respective unit prices. So the page loads in edit mode, where the user can change the quantity and hit the submit button. In the ‘post’ version of the action method, the Totals get recalculated and the new total is displayed on the screen. Here’s the code:

1: public ActionResult Index() 2: { 3: Product product1 = new Product 4: { 5: Id = 1, 6: Name = "Product 1", 7: Quantity = 2, 8: UnitPrice = 200m 9: }; 10:   11: Product product2 = new Product 12: { 13: Id = 2, 14: Name = "Product 2", 15: Quantity = 1, 16: UnitPrice = 150m 17: }; 18:   19: List<Product> products = new List<Product> { product1, product2 }; 20:   21: Basket basket = new Basket 22: { 23: Products = products, 24: Totals = ComputeTotals(products) 25: }; 26: return View(basket); 27: } 28:   29: [HttpPost] 30: public ActionResult Index(Basket basket) 31: { 32: basket.Totals = ComputeTotals(basket.Products); 33: return View(basket); 34: }

That’s that. Now I run the app, I see two products with the totals section below them. I look at the view source and I see that the input controls have the right ID, the right name and the right value as well.

1: <input id="ID" name="ID" type="text" value="1" /> 2: <input id="Name" name="Name" type="text" value="Product 1" /> 3: ... 4: <input id="ID" name="ID" type="text" value="2" /> 5: <input id="Name" name="Name" type="text" value="Product 2" />

So just as a regular user would do, I change the quantity value of one of the products and hit the submit button.
The ‘post’ version of the Index method gets called, and I had put a break-point on line 32 in the above snippet. When I hovered my mouse on the ‘basket’ object, happily assuming that the object would be all bound and ready for use, I was surprised to see both basket.Products and basket.Totals were null. Huh? A little research and I found out that the reason the DefaultModelBinder could not do its job is a naming mismatch on the input controls. What I mean is that when you have to bind to a custom .NET type, you need more than just the property name: you need to pass a qualified name to the name attribute of the input control. I modified my view and the emitted code looked as below:

1: <input id="Product_Name" name="Product.Name" type="text" value="Product 1" /> 2: ... 3: <input id="Product_Name" name="Product.Name" type="text" value="Product 2" /> 4: ... 5: <input id="Totals_SubTotal" name="Totals.SubTotal" type="text" value="550" />

Now, I update the quantity and hit the submit button, and I see that the Totals object is populated, but the Products list is still null. Once again I went: ‘Hmm.. time for more research’. I found out that the way to do this is to provide the name as:

1: <%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %> 2: <!-- this will be rendered as --> 3: <input id="Products_0__ID" name="Products[0].ID" type="text" value="1" />

It was only now that I was able to see both the products and the totals being properly bound in the ‘post’ action method. Somehow, I feel this is a kinda ‘clunky’ way of doing things. Seems like people at MS felt in a similar way and offered us a much cleaner way to solve this issue. The simple solution is that instead of using a TextBox, we can use either a TextBoxFor or an EditorFor helper method. These directly spit out the name of the input property as ‘Products[0].ID’ and so on. Cool, right? I totally fell for this and changed my UI to use the EditorFor helper method. At this point, I ran the application, changed the quantity field and pressed the submit button. Of course my basket object parameter in my action method was correctly bound after these changes. I let the app complete the rest of the lines in the action method.
When the page finally rendered, I did see that the quantity was changed to what I entered before the post. But, wait a minute, the totals section did not reflect the changes and showed the old values. My status: COMPLETELY PUZZLED! Just to recap, this is what my ‘post’ Index method looked like:

1: [HttpPost] 2: public ActionResult Index(Basket basket) 3: { 4: basket.Totals = ComputeTotals(basket.Products); 5: return View(basket); 6: }

A careful debug confirmed that basket.Products[0].Quantity showed the updated value and that the ComputeTotals() method also returned the correct totals. But still, when I passed this basket object, it ended up showing the old totals values only. I began playing a bit with the code, and my first guess was that the input controls got their values from the ModelState object. For those who don’t know, the ModelState is a temporary storage area that ASP.NET MVC uses to retain incoming attempted values plus binding and validation errors. Another clue is that input controls populate their values using data taken from, in order: Previously attempted values recorded in ModelState["name"].Value.AttemptedValue Explicitly provided value (<%= Html.TextBox("name", "Some value") %>) ViewData, by calling ViewData.Eval("name") FYI: the ViewData dictionary takes precedence over ViewData's Model properties – read more here. These two indicators led to my guess. It took me quite some time, but finally I hit this post where Brad brilliantly explains why this is the preferred behavior. My guess was right, and I accordingly modified my code as follows:

1: [HttpPost] 2: public ActionResult Index(Basket basket) 3: { 4: // read the following posts to see why the ModelState 5: // needs to be cleared before passing it the view 6: // http://forums.asp.net/t/1535846.aspx 7: // http://forums.asp.net/p/1527149/3687407.aspx 8: if (ModelState.IsValid) 9: { 10: ModelState.Clear(); 11: } 12:   13: basket.Totals = ComputeTotals(basket.Products); 14: return View(basket); 15: }

What this does is that in the case where your ModelState IS valid, it clears the dictionary. This enables the values to be read from the model directly and not from the ModelState.
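If clearing the whole dictionary feels heavy-handed, an alternative (a sketch of mine, not from the original post; ModelState is a dictionary keyed by the emitted input names) is to drop only the stale entries before returning the view:

// remove just the totals entries so the recomputed values render,
// while the user's other attempted values are preserved
ModelState.Remove("Totals.SubTotal");
ModelState.Remove("Totals.Tax");
ModelState.Remove("Totals.Total");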
So the verdict is this: if you need to pass other parameters (like html attributes and the like) to your input control, use

1: <%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %>

since, with EditorFor, there is no direct and simple way of passing this information to the input control. If you don’t have to pass any such ‘extra’ piece of information to the control, then go the EditorFor way. The code used in the post can be found here.
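As a final aside, here is a hedged sketch (mine, not from the original post) of what the strongly-typed helpers look like for one field, assuming the view is typed as ViewPage<Basket> and i is the loop index from the view above; these emit the "Products[0].Id"-style names automatically:

<%-- strongly-typed MVC 2 helpers generate the indexed name for us --%>
<%= Html.TextBoxFor(m => m.Products[i].Id) %>
<%= Html.EditorFor(m => m.Products[i].Quantity) %>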

    Read the article

  • iPhone SDK vs Windows Phone 7 Series SDK Challenge, Part 1: Hello World!

In this series, I will be taking sample applications from the iPhone SDK and implementing them on Windows Phone 7 Series.  My goal is to do as much of an apples-to-apples comparison as I can.  This series will not only compare and contrast how easy or difficult it is to complete tasks on either platform, how many lines of code it takes, etc., but I'd also like it to be a way for iPhone developers to either get started on Windows Phone 7 Series development, or for developers in general to learn the platform. Here's my methodology: Run the iPhone SDK app in the iPhone Simulator to get a feel for what it does and how it works, without looking at the implementation. Implement the equivalent functionality on Windows Phone 7 Series using Silverlight. Compare the two implementations based on complexity, functionality, lines of code, number of files, etc. Add some functionality to the Windows Phone 7 Series app that shows off a way to make the scenario more interesting, or leverages an aspect of the platform, or uses a better design pattern to implement the functionality. You can download Microsoft Visual Studio 2010 Express for Windows Phone CTP here, and the Expression Blend 4 Beta here. Hello World! Of course no first post would be allowed if it didn't focus on the hello world scenario.  The iPhone SDK follows that tradition with the Your First iPhone Application walkthrough.  I will say that the developer documentation for iPhone is pretty good.  There are plenty of walkthroughs, they break things down into nicely sized steps, and they do a good job of bringing the user along.  As expected, this application is quite simple.  It comprises a text box, a label, and a button.  When you push the button, the label changes to Hello plus the word you typed into the text box.  Makes perfect sense for a starter application.  There's not much to this, but it covers a few basic elements: Laying out basic UI Handling user input Hooking up events Formatting text     So, let's get started building a similar app for Windows Phone 7 Series! Implementing the UI: UI in Silverlight (and therefore Windows Phone 7) is defined in XAML, which is a declarative XML language also used by WPF on the desktop.  For anyone that's familiar with similar types of markup, it's relatively straightforward to learn, but it has a lot of power in it once you get it figured out.  We'll talk more about that. This UI is very simple.  When I look at this, I note a couple of things: elements are arranged vertically, and they are all centered. So, let's create our application and then start with the UI.  Once you have the VS 2010 Express for Windows Phone tool running, create a new Windows Phone Project, and call it Hello World: Once created, you'll see the designer on one side and your XAML on the other: Now, we can create our UI in one of three ways: Use the designer in Visual Studio to drag and drop the components Use the designer in Expression Blend 4 to drag and drop the components Enter the XAML by hand in either of the above We'll start with (1), then kind of move to (3) just for instructional value. To develop this UI in the designer, first delete all of the markup inside the Grid element (LayoutRoot).
You should be left with just this XAML for your MainPage.xaml (I shortened all the xmlns declarations below for brevity):

1: <phoneNavigation:PhoneApplicationPage 2: x:Class="HelloWorld.MainPage" 3: xmlns="...[snip]" 4: FontFamily="{StaticResource PhoneFontFamilyNormal}" 5: FontSize="{StaticResource PhoneFontSizeNormal}" 6: Foreground="{StaticResource PhoneForegroundBrush}"> 7:   8: <Grid x:Name="LayoutRoot" Background="{StaticResource PhoneBackgroundBrush}"> 9:   10: </Grid> 11:   12: </phoneNavigation:PhoneApplicationPage>

We'll be adding XAML at line 9, so that's the important part. Now: Click on the center area of the phone surface Open the Toolbox and double click StackPanel Double click TextBox Double click TextBlock Double click Button That will create the necessary UI elements, but they won't be arranged quite right.  We'll fix that in a second.
Here's the XAML that we end up with:

<StackPanel Height="100" HorizontalAlignment="Left" Margin="10,10,0,0" Name="stackPanel1" VerticalAlignment="Top" Width="200">
    <TextBox Height="32" Name="textBox1" Text="TextBox" Width="100" />
    <TextBlock Height="23" Name="textBlock1" Text="TextBlock" />
    <Button Content="Button" Height="70" Name="button1" Width="160" />
</StackPanel>

The designer does its best at guessing what we want, but in this case we want things to be a bit simpler, so we'll just clean it up a bit. We want the items to be centered, and we want them to have a little bit of a margin on either side. Here's what we end up with (I've also made it match the values and style from the iPhone app):

1: <StackPanel Margin="10">
2:     <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/>
3:     <TextBlock Name="textBlock1" HorizontalAlignment="Center" Margin="0,100,0,0" Text="Hello You!" />
4:     <Button Name="button1" HorizontalAlignment="Center" Margin="0,150,0,0" Content="Hello"/>
5: </StackPanel>

Now let's take a look at what we've done there.

Line 1: We removed all of the formatting from the StackPanel except for Margin, as that's all we need. Since our parent element is a Grid, by default the StackPanel will be sized to fit in that space. The Margin says that we want to reserve 10 pixels on each side of the StackPanel.

Line 2: We've set the HorizontalAlignment of the TextBox to Stretch, which says that it should fill its parent's size horizontally. We want to do this so the TextBox is always full-width. We also set TextAlignment to Center, to center the text.

Line 3: In contrast to the TextBox above, we don't care how wide the TextBlock is, just so long as it is big enough for its text. That'll happen automatically, so we just set its HorizontalAlignment to Center. We also set a Margin of 100 pixels above the TextBlock to bump it down a bit, per the iPhone UI.

Line 4: We do the same things here as in Line 3.

Here's how the UI looks in the designer. Believe it or not, we're almost done!

Implementing the App Logic

Now, we want the TextBlock to change its text when the Button is clicked. In the designer, double click the Button to be taken to the event handler for the Button's Click event. In that event handler, we take the Text property from the TextBox, format it into a string, and set it into the TextBlock. That's it!
private void button1_Click(object sender, RoutedEventArgs e)
{
    string name = textBox1.Text;

    // if there isn't a name set, just use "World"
    if (String.IsNullOrEmpty(name))
    {
        name = "World";
    }

    // set the value into the TextBlock
    textBlock1.Text = String.Format("Hello {0}!", name);
}

We use the String.Format() method to handle the formatting for us, substituting the name into the {0} placeholder (so an empty TextBox yields "Hello World!"). Now all that's left is to test the app in the Windows Phone Emulator and verify that it does what we think it does. And it does!

Comparing against the iPhone

Looking at the iPhone example, there are basically three things that you have to touch as the developer:

1. The UI in the Nib file
2. The app delegate
3. The view controller

Counting lines is a bit tricky here, but to try to keep this even, I'm going to count only lines of code that I could not have (or would not have) generated with the tooling. Meaning, I'm not counting XAML, and I'm not counting operations that happen in the Nib file with the Xcode designer tool. So in the case of the above, even though I modified the XAML, I could have done all of those operations using the visual designer tool. And normally I would have, but the XAML is more instructive (and fewer steps!). I'm interested in things that I, as the developer, have to figure out in code. I'm also not counting lines that just have a curly brace on them, or lines that are generated for me (e.g. method names that are generated for me when I make a connection, etc.).

So, by that count, here's what I get from the code listing for the iPhone app found here:

HelloWorldAppDelegate.h: 6
HelloWorldAppDelegate.m: 12
MyViewController.h: 8
MyViewController.m: 18

That gives me a grand total of about 44 lines of code on iPhone. I really do recommend looking at the iPhone code for a comparison to the above.

Now, for the Windows Phone 7 Series application, the only code I typed was in the event handler above:

Main.Xaml.cs: 4

So, a total of 4 lines of code on Windows Phone 7. And more importantly, the process is just a lot simpler. For example, I was surprised that the user interface designer in Xcode doesn't automatically create instance variables for me and wire them up to the corresponding elements; I assumed I wouldn't have to write this code myself (and risk getting it wrong!). I don't need to worry about view controllers or anything. I just write my code. This blog post, up to this point, has covered almost every aspect of this app's development in a few pages; the iPhone tutorial has 5 top-level steps with 2-3 subsections each.

Now, it's worth pointing out that the iPhone development model uses the Model View Controller (MVC) pattern, which is a very flexible and powerful pattern that enforces proper separation of concerns. But it's fairly complex and difficult to understand when you first walk up to it.
Here at Microsoft we've dabbled in MVC a bit, with frameworks like MFC on Visual C++ and now with the ASP.NET MVC framework. Both are very powerful frameworks. But one of the reasons we've stayed away from MVC with client UI frameworks is that it's difficult to tool. We haven't seen the type of value that beats "double click, write code!" for the broad set of scenarios.

Another thing to think about is how many of those lines of code were focused on my app's functionality, or, conversely, how many lines were boilerplate plumbing. In both examples, the actual number of functional code lines is similar. I count most of them in MyViewController.m, in the changeGreeting method: it's about 7 lines of code that do the work of taking the value from the text box and putting it into the label, versus 4 on the Windows Phone 7 side. But, unfortunately, on iPhone I still have to write the other 37 lines of code just to get there. 10% of the code, 1 file instead of 4; it's just much simpler.

Making Some Tweaks

It turns out I can actually do this application with ZERO lines of code, if I'm willing to change the spec a bit. The data binding functionality in Silverlight is incredibly powerful, and what I can do is databind the TextBox's value directly to the TextBlock. Take some time looking at the XAML below. You'll see that I have added another nested StackPanel and two more TextBlocks. Why? Because that's how I build the greeting string, and the nested StackPanel lays things out horizontally for me, as specified by the Orientation property.

<StackPanel Margin="10">
    <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/>
    <StackPanel Orientation="Horizontal" HorizontalAlignment="Center" Margin="0,100,0,0">
        <TextBlock Text="Hello " />
        <TextBlock Name="textBlock1" Text="{Binding ElementName=textBox1, Path=Text}" />
        <TextBlock Text="!" />
    </StackPanel>
    <Button Name="button1" HorizontalAlignment="Center" Margin="0,150,0,0" Content="Hello" Click="button1_Click" />
</StackPanel>

Now, the real action is in the TextBlock.Text property:

Text="{Binding ElementName=textBox1, Path=Text}"

That does all the heavy lifting. It sets up a databinding between the TextBox.Text property on textBox1 and the TextBlock.Text property on textBlock1: as I change the text of the TextBox, the label updates automatically.
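By the way, nothing requires the binding to live in XAML. If I wanted to create the same binding from code-behind instead, a minimal sketch would look like the following; it assumes the element names from the XAML above and the partial MainPage class that the project template generates, and the WireUpGreeting method name is just for illustration:

// Sketch: creating the same ElementName binding in code-behind.
// Assumes textBox1 and textBlock1 are the element names from the XAML above.
using System.Windows.Controls;
using System.Windows.Data;

public partial class MainPage
{
    private void WireUpGreeting()
    {
        var binding = new Binding("Text")   // bind to the source's Text property
        {
            ElementName = "textBox1"        // resolve the source element by name
        };

        // Attach the binding to the TextBlock's Text dependency property.
        textBlock1.SetBinding(TextBlock.TextProperty, binding);
    }
}

Calling a method like that from the page's constructor, after InitializeComponent(), has the same effect as the Text="{Binding ...}" attribute in the markup.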
In fact, I don't even need the button any more, so I could get rid of it altogether. And no button means no event handler. No event handler means no C# code at all.
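To put it all together, the zero-code version of the page body is just the XAML above with the Button (and its Click attribute) removed:

<StackPanel Margin="10">
    <TextBox Name="textBox1" HorizontalAlignment="Stretch" Text="You" TextAlignment="Center"/>
    <StackPanel Orientation="Horizontal" HorizontalAlignment="Center" Margin="0,100,0,0">
        <TextBlock Text="Hello " />
        <TextBlock Name="textBlock1" Text="{Binding ElementName=textBox1, Path=Text}" />
        <TextBlock Text="!" />
    </StackPanel>
</StackPanel>

Type into the TextBox and the greeting updates automatically, with no code-behind at all.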
