Search Results

Search found 93838 results on 3754 pages for 'aspire one'.


  • How to make backlight work on Acer 5732z?

    - by Dude Random21
    I want to run 12.04 on my Acer Aspire 5732z. I know from research that these computers have issues with the backlight on Ubuntu, so I tried a couple of solutions. The sudo lightdm restart method: I get no change at all. The sudo setpci -s 00:02.0 F4.B=30 method: this has been the most effective so far. I first tried it in the F1 console and got the screen back right away; the problem is that when I go back to the desktop it turns black again. I then tried it from a terminal window and it works there as well, but as soon as I unplug my external monitor the screen turns black again and doesn't come back. If I plug the monitor back in, the screen stays black and the only thing I see is the mouse pointer. From there I go back into the console (which I am able to see) and reboot. The sudo sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="acpi_osi=Linux"/g' /etc/default/grub method: I got no instant change, and after a reboot still no change. I'm open to pretty much any suggestions you may have.
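    A minimal sketch of the two workarounds mentioned above, with the GRUB edit completed (the acpi_osi value comes from the question; changes to /etc/default/grub only take effect after update-grub and a reboot):

        # Persistent attempt: add acpi_osi=Linux to the kernel command line, then rebuild GRUB's config
        sudo sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="acpi_osi=Linux"/' /etc/default/grub
        sudo update-grub

        # Temporary workaround from the question: poke the backlight register directly (lost on reboot)
        sudo setpci -s 00:02.0 F4.B=30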

    Read the article

  • Boot process enters an infinite loop after installing Ubuntu 11.04 together with Windows 7 and XP

    - by Andreafc
    I have a new computer, an Acer Aspire X3960, which came with Windows 7 preinstalled. I made a new partition where I installed Windows XP, and then installed Ubuntu 11.04 in new logical partitions (a swap, a / and a /home partition). At the end of the process, the computer gets to the very first screen where it says "press DEL to enter BIOS setup, press F12 to enter BOOT options". After a few seconds the screen goes blank, the computer beeps, and then it presents the same screen again. It never even gets to the GRUB menu where I would choose which operating system to start. It just loops there forever. I tried to fix the (possibly damaged) GRUB following these instructions (unfortunately in German) http://wiki.ubuntuusers.de/GRUB_2/Reparatur from a live USB, which I'm also using to post this question. I found here that someone asked for the results of the Boot Info Script. Here are mine (I hope I did right in trying to upload the file): http://paste.ubuntu.com/736032/ Can anybody help? Thank you very much. Andreafc
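    If GRUB itself needs to be reinstalled, a common approach from the live USB is to chroot into the installed system; a sketch (device names are assumptions - replace /dev/sda and /dev/sdaX with the disk and the Ubuntu root partition shown by sudo fdisk -l):

        sudo mount /dev/sdaX /mnt                              # Ubuntu root partition
        for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt grub-install /dev/sda                 # write GRUB to the disk's MBR
        sudo chroot /mnt update-grub                           # regenerate the boot menu
        for d in /sys /proc /dev; do sudo umount /mnt$d; done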

    Read the article

  • Cannot boot Ubuntu 13.10 from my USB; can I change the kernel on my laptop to run it?

    - by Carlos Dunick
    Currently I am running 12.04 and looking to upgrade to 13.10. I first tried a bootable 64-bit USB and failed, with a message saying "This kernel requires an x86-64 CPU, but only detected an i686 CPU. Unable to boot - please use a kernel appropriate for your CPU." Then I tried 32-bit and the same message came up. Is this due to my laptop simply being too slow, or can/should I change the kernel somehow? Acer Aspire 5710z, Intel Pentium dual-core processor, 1.73 GHz, 533 MHz FSB, 1 MB L2 cache, 2 GB DDR2. Output of lspci:
        00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
        00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02)
        00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02)
        00:1c.2 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 3 (rev 02)
        00:1c.3 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 4 (rev 02)
        00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 02)
        00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 02)
        00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 02)
        00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 02)
        00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 02)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
        00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02)
        00:1f.2 SATA controller: Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [AHCI mode] (rev 02)
        00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 02)
        04:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5787M Gigabit Ethernet PCI Express (rev 02)
        05:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)
        06:00.0 FLASH memory: ENE Technology Inc ENE PCI Memory Stick Card Reader Controller
        06:00.1 SD Host controller: ENE Technology Inc ENE PCI SmartMedia / xD Card Reader Controller
        06:00.2 FLASH memory: ENE Technology Inc Memory Stick Card Reader Controller
        06:00.3 FLASH memory: ENE Technology Inc ENE PCI Secure Digital / MMC Card Reader Controller
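    One thing worth checking before swapping kernels is whether this CPU can run a 64-bit kernel at all; a quick sketch (the 'lm' flag in /proc/cpuinfo indicates 64-bit capability, so without it only 32-bit images will boot):

        # Report the CPU's supported operating modes
        lscpu | grep -i 'op-mode'
        # Or test for the raw flag: 'lm' (long mode) means 64-bit capable
        grep -qw lm /proc/cpuinfo && echo '64-bit capable' || echo '32-bit only'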

    Read the article

  • How do I fix my ethernet card losing network connection every few minutes with kernels 3.8.x?

    - by igoryonya
    I'm using Ubuntu 13.04. My laptop is an Acer Aspire One 722-c58rr, and my ethernet card only works for a few seconds at a time with kernels 3.8.x; kernels 3.5.x and below worked fine. On kernels 3.8.x it works after boot for about a minute, then it loses the network connection. When pinging some address it says the network address is unreachable, but it can ping its own address. The address is statically configured. Everything was working fine before. I went on vacation, where I used WiFi and 3G connections, so I didn't notice when the problem first occurred. I came back home and plugged into the ethernet: it worked for a minute, then stopped. Rebooting the switch fixed the problem. I tried connecting to a different switch, same problem. Unplugging and re-plugging the cable fixes the problem for another minute. Disconnecting eth in Network Manager and reconnecting it does the same thing. WiFi has no such problem. I tried a different cable that works fine on another computer, same problem. I tried booting older kernel versions and the problem kept happening until I got down to the 3.5 kernel series. Everything works fine on kernel 3.5.x, but I don't want to miss out on the new kernel's features. Executing commands when booted with the 3.8 kernel series gives the following results:
        lspci | grep -i eth:
        06:00.0 Ethernet controller: Qualcomm Atheros AR8152 v2.0 Fast Ethernet (rev c1)
        dmesg | grep eth1:
        [ 89.548291] atl1c 0000:06:00.0: atl1c: eth1 NIC Link is Up
    How do I fix it while staying on the new kernel version?
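    A diagnostic sketch (not a fix) for when the link drops: reload the atl1c driver and see what the kernel logs about the NIC, which can help pin the regression down for a bug report:

        # Reload the Atheros atl1c module and check whether connectivity returns
        sudo modprobe -r atl1c && sudo modprobe atl1c
        # Look for resets, timeouts or errors reported by the driver
        dmesg | grep -i atl1c | tail -n 20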

    Read the article

  • Old Fglrx Driver - AMD Radeon HD 3200 - ubuntu won't start

    - by Yohannes
    I've been using Ubuntu 12.04 64-bit for about 2 weeks now and I installed the latest fglrx driver (graphics card: AMD HD 3200; PC: Acer Aspire 5336, 4 GB RAM, 500 GB hard drive). The problem is that sometimes videos lag and play out of sync, and sometimes windows take a long time to show up after I've clicked them, etc. After looking around I found a video on YouTube by the Ubuntu help guy in which he recommended using an older driver if you have an older graphics card; his was about 4 years old (same as mine) and he used the 11.10 Catalyst driver, so I decided to try it. I removed the previous installation of the driver and then installed the 11.10 driver. However, when I restarted, instead of going to the GUI it goes to a terminal-like window and asks for my login. Now it's pretty clear I need to remove the old driver and go back to using the latest one. The only problem is I'm not sure where I saved the latest driver, and in order to connect to the Internet I need to change /etc/resolv.conf (I use a static IP). So what should I do? Also, from personal experience, which proprietary driver version works best with my graphics card? Thanks
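    From that text console, a rough sketch of removing the old proprietary driver and falling back to the open-source radeon driver (package names are the usual ones for a packaged fglrx install; if the 11.10 Catalyst came from AMD's .run installer, its own uninstall script such as /usr/share/ati/fglrx-uninstall.sh may exist instead):

        # Remove the packaged fglrx driver (no network connection is needed for removal)
        sudo apt-get remove --purge fglrx fglrx-amdcccle
        # Drop the Catalyst-generated X configuration so X falls back to the open-source driver
        sudo rm -f /etc/X11/xorg.conf
        sudo reboot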

    Read the article

  • Ubuntu live cd : black screen and blinking cursor

    - by IFasel
    I'm trying to install Ubuntu 12.04 on my computer. I can get to the purple screen on the live CD, but then, if I choose "Install Ubuntu", I get a black screen with a blinking cursor (and nothing else happens). My PC: Acer Aspire M3920, CPU i5-2300, 8 GB RAM, NVIDIA GT 405. What I already tried: 12.04 and the 13.04 daily build; a live USB and a live DVD; the boot options nomodeset and acpi=off. I googled a lot and it seems it could be a graphics card problem. Do you know any other boot options I could try? UPDATE: This is not a duplicate: I've tried all the common boot options (nomodeset, noacpi, ...) and they don't change anything. With splash turned off (replacing "quiet splash"), I can see what happens before the forever-blinking cursor:
        [sdg] no caching mode present
        [sdg] assuming drive cache: write through
        ata8.00: exception Emask 0x52 ... frozen
        ata8: SError: { RecovData RecovComm UnrecovData...}
        ata8.00: failed command: IDENTIFY PACKET DEVICE
        ...
        ata8.00: status: { DRDY }
        ata8: hard resetting link
    Does somebody know what this means? N.B. Astonishingly, Puppy Linux boots fine (but Debian, Fedora and Ubuntu do not). SOLUTION: In fact, it was not a graphics card problem. I had to disconnect the DVD drive and connect it to another free SATA connector (I don't really understand why Ubuntu had trouble with that connector and Windows 7 did not). After that, everything worked fine.
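    Those ata8 errors are the libata layer repeatedly failing to talk to a device on that SATA port, which fits the eventual fix. A small sketch of how the same symptom can be inspected from a session that does boot:

        # Review which drive or optical device sits on each ATA link and whether it errors out
        dmesg | grep -iE 'ata[0-9]+'
        # List the block devices the kernel actually detected
        lsblk -o NAME,TYPE,SIZE,MODEL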

    Read the article

  • My sound is not working, so I'm going to reinstall FYI [closed]

    - by fer
    I've had trouble getting sound to work in Ubuntu 12.04. I'm running an Acer Aspire 5739g laptop, on a clean install. This wasn't a problem when Ubuntu was first installed; rather, it was when I ran the updates that it stopped working. I already tried the suggestions on the Ubuntu sites and in other similar questions, and they haven't fixed it. Something in the updates is making my sound not work. Edit: It turns out that this might be a bug (the sound issue in the first paragraph). After a reinstall it happened again (it's not caused by updates at all, or any software, because I have now fixed it without reinstalling). It seems I replicated it as follows: I turned on auto-hide in the Behavior tab of the Appearance settings and set the sensitivity below the recommended setting. Then, instead of restarting, I just logged out and back in. The sound stopped working again. I set the behavior settings back to default, restarted, and now it's back to normal. Not sure if it's due to only logging out (and not restarting) or because I set the sensitivity so low. Not sure if this helps anyone, but I thought I'd mention it.

    Read the article

  • Wired connection not working

    - by YokoBlac
    I am using an Acer Aspire One. Here is the Ubuntu wiki page about my computer. I have a working wireless connection; however, when I plug a Cat5 (Ethernet) cable in, the lights on the computer flicker, but then nothing happens. Output of ifconfig:
        eth0      Link encap:Ethernet  HWaddr 00:1e:68:96:1a:6b
                  inet6 addr: fe80::21e:68ff:fe96:1a6b/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:43 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2622 (2.6 KB)  TX bytes:936 (936.0 B)
                  Interrupt:28 Base address:0xe000
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:480 (480.0 B)  TX bytes:480 (480.0 B)
        wlan0     Link encap:Ethernet  HWaddr 00:22:68:92:7f:36
                  inet addr:192.168.1.6  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::222:68ff:fe92:7f36/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5702 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:5284 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4117327 (4.1 MB)  TX bytes:936709 (936.7 KB)
    Any help understanding this output is greatly appreciated.
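    In that output eth0 has only a link-local IPv6 address and no IPv4 address, which usually means the DHCP exchange over the wired interface never completes. A small diagnostic sketch (interface name taken from the output above):

        # 1 means the cable/link is detected at the hardware level
        cat /sys/class/net/eth0/carrier
        # Request a DHCP lease by hand and watch the exchange for timeouts
        sudo dhclient -v eth0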

    Read the article

  • No boot option after installing Ubuntu 12.04 LTS from CD

    - by utamaku
    I'm facing a problem. After installing Ubuntu 12.04 LTS from the CD, on reboot there is no boot option to choose the OS; it logs me directly into Windows 7. Before this I was having an issue with 'nomodeset', if I'm not mistaken; after ticking nomodeset I could install Ubuntu, but then got stuck again at choosing the partitions. So I made 2 partitions for Ubuntu: one for Ubuntu itself (ext3, mounted at /) and the other one for swap. Then I could proceed until the installation finished. When the system wanted to reboot, it took some time and just got stuck at the screen doing nothing, not rebooting at all. I forced a shutdown, restarted, and it went directly into Windows 7 without a boot option. In Windows 7 the partitions for Ubuntu appear to be gone. I tried the boot-repair thing and it still doesn't help; it just shows a _ (terminal prompt, I guess). I typed boot repair and it's still the same. I'm using an Acer Aspire 4736z. Please, anybody, help me with this issue.
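    Windows cannot display ext3 or swap partitions, so they will always look "gone" from there. Before repairing the bootloader again, a quick sketch (run from the live CD) to confirm the Ubuntu partitions really exist:

        # List every partition the kernel can see, including the ext3 root and swap
        sudo fdisk -l
        # Show the filesystem type detected on each partition
        sudo blkid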

    Read the article

  • Hard drive clicking noise on Acer AO722

    - by Blank
    I'm running Ubuntu 11.10 on an Acer Aspire One 722. Whenever I'm on battery power I get a clicking sound from my hard drive every 5 seconds or so (this does not happen when the laptop is plugged in). I'm dual booting with Windows 7 and I don't get the clicking sound in Windows. The clicking sound stops when I run the command sudo hdparm -B 254 /dev/sda. Also, according to sudo smartctl -H /dev/sda, my hard drive is healthy. Is this clicking sound something I can just ignore? Or is it a serious problem that will eventually damage my computer? If so, how would I fix it? I have tried adding hdparm -B 254 /dev/sda to my /etc/rc.local file, but I still run into the clicking problem if my computer boots while plugged in and is then unplugged. I'm also finding this fix to be unreliable: sometimes it works, sometimes it does not. Is this a good solution, and is there a better way of doing this? Also, would running my laptop with a -B value of 254 have any negative effects? (I read somewhere that a lower level protects the hard drive from bumps.)
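    rc.local runs only once at boot, which is why the setting does not survive a later switch to battery. One alternative sketch is to put the APM level in /etc/hdparm.conf, which the hdparm package re-applies through its boot and power-management hooks (how it behaves on battery varies between releases, so treat this as something to test rather than a guaranteed fix):

        # Append a per-drive APM setting that the hdparm hooks read from /etc/hdparm.conf
        printf '/dev/sda {\n    apm = 254\n}\n' | sudo tee -a /etc/hdparm.conf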

    Read the article

  • trying to upgrade memory

    - by user214876
    I've been using Ubuntu on my laptop for a while now, though I'm not quite used to it yet. I've got an Acer Aspire with the original 4 GB of memory and a 500 GB HDD, presently running 12.04 as a 32-bit system. I have the 13.04 upgrade disc and want to upgrade my memory to 8 GB. Every time I install the 8 GB of memory, the system won't boot to either version. I downloaded the 64-bit version of both Ubuntu releases, but no results yet. Can anyone offer a suggestion here? I'm kind of lost. Additional information: The memory was purchased through Acer/Kingston and is recommended for this computer. I watched the video on installing it, so I doubt it's installed wrong (there's only one way of putting it in). I swapped operating systems from Ubuntu 12.04 to 13.04 to 13.10 and now to Xubuntu 13.10 64-bit, and I'm still not having any luck with this upgrade. Would it be necessary to upgrade the CPU? It's just a thought; I don't know what else could keep me from utilizing the new memory. Additional information: I called Kingston this afternoon and they are sending replacement lower-density memory modules (2/4 GB - 8 GB). Tech support says I need to upgrade the BIOS to utilize the new memory, installing it via DOS since it is no longer a Windows system. I'm not sure how to go about that, but it's a learning process I can live with. Thank you all for your help and support. I realize this isn't an Ubuntu problem, but each new user of this OS seems to hit similar problems, and maybe someone can use this info to their advantage.
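    Since the machine refuses to boot with the new modules under every OS tried, it can help to record, with the old memory fitted, exactly what the firmware reports about the slots and capacity before attempting a BIOS update; a small sketch:

        # Show what the BIOS/DMI tables report about installed and maximum memory
        sudo dmidecode -t memory | grep -E 'Maximum Capacity|Size|Speed|Locator'
        # Show how much memory the running kernel actually sees
        free -m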

    Read the article

  • Problem installing Ubuntu 14.04 on a laptop running Windows 8.1

    - by AlexanderFreud
    I have used Ubuntu on my LG laptop for several years. I recently bought an Acer Aspire V5 laptop which came with Windows 8.1. I don't have any data on it; I would like to just remove it completely (that horrible Windows 8.1) and install Ubuntu. I tried using a USB device with Ubuntu 14.04 (64-bit version) saved on it. I changed the BIOS configuration, putting the USB device first in the boot order and Windows Boot Manager last. When I try to boot from the USB device it doesn't work. Messages like these show up: "System doesn't have any USB boot option. Please select other boot option in Boot Manager Menu." and "Windows failed to start. A recent hardware or software change might be the cause. To fix the problem: 1. insert your windows installation disc and restart your computer 2. choose your language settings, and then click "next" 3. click "repair your computer" If you do not have this disk, contact your system administrator or manufacturer for assistance. File: \ubuntu\winboot\wubildr.mbr Status: 0xc000007b Info: the application or operating system couldn't be load...[?] required file is missing or contains errors." Could someone please write a step-by-step procedure to install Ubuntu 14.04 after removing Windows 8.1? I have already made a second partition on the hard disk, just in case.
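    The \ubuntu\winboot\wubildr.mbr error suggests the stick was prepared with a Wubi-style tool rather than written as a bootable image, so the firmware finds nothing it can boot in UEFI mode. A sketch of writing the ISO directly from an existing Linux system (/dev/sdX is an assumption - replace it with the USB device itself, not a partition; this erases the stick):

        # Write the 64-bit 14.04 desktop image straight to the USB device
        sudo dd if=ubuntu-14.04-desktop-amd64.iso of=/dev/sdX bs=4M
        sync    # flush all buffered data before unplugging the stick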

    Read the article

  • Ubuntu is unresponsive and scrolling is very jerky

    - by peterpipe
    I'm new to Ubuntu and have been using 12.04 LTS for over a week now. For 3 days it was great, but now I'm having a number of problems. I'm using an Acer Aspire One D255 with a 250 GB HDD and 1 GB of memory. My browser is Firefox. Pages freeze occasionally; I wait for over 10 minutes and then have to use the computer's 'off' button. Pages crash. On Facebook some pictures don't load and I have to close down and restart a couple of times. Scrolling can sometimes be very jerky. Downloading software has become a problem and I have a 'no entry' sign in the top taskbar. I've looked at this page for some help, but the words used and the explanations are not helpful to someone like me who has little knowledge of this. Please - 'Ubuntu for dummies'. I'd like to continue using Ubuntu but am close to giving up.
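    The 'no entry' sign in the top panel usually means the package system stopped part-way through an update. A gentle sketch of the usual recovery commands to try in a terminal (they only touch the package system, not personal files):

        sudo apt-get update          # refresh the package lists
        sudo apt-get -f install      # let apt finish or repair any half-configured packages
        sudo apt-get dist-upgrade    # then apply the remaining updates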

    Read the article

  • Wireless access point -> Powerline -> Router -> Internet, should this work?

    - by Anthony
    My network at home used to be a laptop and desktop connected wirelessly to a single Wireless ADSL router, a Cisco 877W. Wireless reception around the house with this setup was quite unreliable, so I've gone about looking to improve it. I purchased some Belkin Gigabit powerline adapters and I've got these working fine. I can hook a computer up to one of the powerline adapters, and with the other one plugged into the ADSL router the computer has internet access. Additionally I can hook a Netgear DG834G Wireless ADSL router into it with the adsl not plugged in, and after turning off DHCP can RJ45 a computer up to the network. Everything works fine. However, if I setup a wireless network on the Netgear then any computer that connects wirelessly to it cannot access the internet. It gets an IP address very slowly via DHCP which is a good one, but it cannot access the internet. It can however communicate with the RJ45'd computer also connected to the Netgear. I wondered whether this could be a problem with the Netgear so I've borrowed a Cisco Aironet 1200 and got this working fine when it's attached directly to the primary ADSL router. I can connect to it wireless and get onto the internet. However, if I then plug it into the Netgear I can communicate with other devices attached to the Netgear, but can't get any further than the Netgear. All the while though the other devices RJ45'd to the Netgear are communicating with the internet just fine. I'm starting to suspect it's one of two things causing the problem: 1) For some reason the belkin powerline adapters don't like carrying wireless-originating signals. Could this be possible? 2) The primary Cisco ADSL router doesn't want to communicate with other devices on my network more than one hop away from it. I'm making an assumption here that within the Netgear box the wireless and wired sides are handled differently. Could this be true? Has anyone successfully setup something similar to what I'm trying, with a wireless device on the otherside of a pair of powerline connectors? Update 06/07/2010 - Response to irrational John 28 June Thanks for the answer John - and for clearing up some of my questions. The model number of the belkin powerline adapters are F5D4076. Security was apparently enabled by default on them, and I didn't change them from their default setting. The network diagram in your answer shows exactly what I'm trying to setup: I've followed that guide and I'm still not able to get things working properly. The thing that perplexes me is that wired network traffic works just fine - it's only the wireless traffic that doesn't. This is with the same laptop, and the same DHCP or static IPs. "1. What IP addresses did you assign to each router? What subnet masks are you using?" - subnet is 255.255.255.0, the router connected to the adsl is 192.168.153.1 and that has the DHCP server. The access point on the other side of the powerline adapters I've tried both a static IP of 192.168.153.110, same subnet, and a DHCP-assigned IP. The other devices are DHCP, although I also tried manually entering IP settings. "2. Have you correctly enabled DHCP on only one of the routers and disabled it on all the others?" Yes I have - only the internet-connected router has DHCP enabled. The IP range for the DHCP is from 192.168.153.11 - 192.168.153.200. The strange thing is that wired connections work fine on the LAN, plugged into any router, work fine - it's only the wireless connections that aren't working when they're plugged into the non-primary AP. 
"Since the routers you are using appear to integrate an ADSL modem I'm assuming there is no WAN port on them." There's no NAT within the LAN, and all wired connections are connected to LAN ports. It's something wrong with the wireless - wired works fine throughout the whole LAN. Update 06/07/2010 - Response to irrational John 29 June The diagram you've drawn in your answer shows pretty much exactly what I'm trying to do. I've spent another evening trying different things and made some progress but I'm still scratching my head. I've borrowed a Netgear access point and been trying with this, and the strange thing is that my PC is working now - this is a Windows 7 PC connected to the access point in the position of where the DG834G is in the diagram. Meanwhile, however, I have an old Powerbook G4 12" I use for music, and while that has a DHCP-assigned IP address, it's not getting any network throughput to either LAN or internet addresses. To make matters more strange, my phone appears to be intermittently working when it's on the wifi. The access point is a Netgear WPN802v1, DHCP, NAT both switched off, running firmware 2.0.9.0. Last night I set it up with exactly the same settings, and similar to tonight I could get a couple of devices to work, and a couple not to. By the morning, however, everything had stopped working - nothing could get a DHCP IP address. I rebooted the 877W earlier this evening and I'm wondering whether this is why a few things are working now. "Could it be possible that the issue could be with the 877W?" I didn't configure this - is it possible that the DHCP server only likes assigning devices that are immediately attached to it? Or similar, could a firewall be stopping too many addresses that are coming through one device? (ie. the Access Point) This could explain why devices are working at the start but then not by the end. In reply to your questions, "1. I looked at the Netgear DG834G support page. There are five versions of this router. Which version do you have? Netgear usually lists this on the label on the bottom of the router. What version of the firmware does it have?" It's a DG834Gv3, and the firmware is the last on the netgear site version 4.01.40. "3. Not knowing which version you have, I glanced at the reference manual for the DG834G v3. In the section for Wireless Settings under the subsection Wireless Access Point there is a check box for a Wireless Isolation setting. If you have this setting it should be off/unchecked. If it is checked then any device connected via wireless would not be able to talk to any other device on the LAN. This sounds like your problem so maybe this is the cause?" I've checked this and it's switched off. I've made a change to the IP of the access point to something outside the DHCP range - it's now 192.158.153.5, with DHCP starting at 11 and going up to 254. Thanks for the tip about this - I only have a few devices so wouldn't anticipate the DHCP server assigning up to 110, but better safe than sorry. Finally one more thing I thought I should add, is with the Powerbook G4 that's not working - it's getting a DHCP IP address and it can communicate with the WPN802 as I can visit the administration page. Anything further than this, however, it can't reach; I can't administrate the 192.168.153.1 (877W router). Strangely, however, when I open Finder on the same powerbook it's detecting my NAS which is attached directly via wire to the 877W. If I try to browse it, it says connection failed. 
RE: "Perhaps the problem with your Powerbook is with DNS?.." The IP settings on the powerbook are identical to that of the PC with the exception of the IP address; the PC is 192.168.153.17 and the powerbook is 192.168.153.12. Subnets are the same, 255.255.255.0 and default gateway is the same, .1, and the DNS servers are the same. I administrate the 877W by going to 192.168.153.1 in the browser. This is what isn't working from the Powerbook, despite the PC working fine when I do the same. Meanwhile, however, I can administrate the AP on 192.168.153.5 from both PC and Powerbook Update 06/07/2010 - FINAL RESOLUTION of sorts: First off, sorry for the length of this question. I need start to practice a more concise writing style, so I'm going to try to keep this bit brief. After much fiddling, and with the hugely-appreciated help of irrational John, I have come to the conclusion that it's something wrong with the powerbook. I believe that this was perhaps the reason I doubted things worked at the very beginning. I now have the original DG834Gv3 running both wirelessly and wired, and both wired devices and wireless devices get internet connectivity. The only anomaly is the powerbook which I've had to keep wired, as no matter what I do it refuses to work wirelessly. I still have suspicions that the 877W isn't quite right; I'm fairly sure that if I RJ45 the powerline adapter into a different LAN port on it then everything will break. I've just about run out of patience to test this further, and I think I need to go into the 877W's config to match the 877w's lan port's settings. I'm accepting irrational John's answer as he's been enormously helpful, way above the call of duty, and for this line he wrote: Beats the heck out of me. which in the midst of great frustration made me chuckle, and for a sentence in one of his comments to the same answer: If it is specific to the Powerbook I would put that issue aside until after you feel you have the rest of your LAN and the additional WAP all working together correctlyt It was this second sentence that made me put the powerbook aside and concentrate on the other devices that ultimately led me to getting things working.

    Read the article

  • NDepend tool – Why every developer working with Visual Studio.NET must try it!

    - by hajan
    In the past two months, I have had a chance to test the capabilities and features of the amazing NDepend tool, designed to help you make your .NET code better and more beautiful and achieve high code quality. In other words, this tool will definitely help you harmonize your code. I mean, you've probably heard about Chaos Theory. Experienced developers and architects know the programming chaos that happens when working with a complex project architecture: a matrix of relationships between objects where, even if you are the one who wrote all that code, you know how hard it is to visualize everything the code does. When the application gets more and more complex, you will start missing a lot of details in your code… NDepend helps you visualize all the details in a clever way that lets you make smart moves to improve your code. The NDepend tool supports many features, such as:
    Code Query Language – helps you write custom rules and query your own code! Imagine you want to find all your methods which have more than 100 lines of code :)! That's simple! I will dig much deeper in one of my next blogs, which I'm going to dedicate to NDepend's CQL (Code Query Language).
    Architecture Visualization – You are an architect and want to visualize your application's architecture? I wonder how many architects will be really surprised by their architectures, since NDepend shows the whole architecture, piece by piece. NDepend will show you how your code is structured. It shows the architecture in graphs, but if you have a very complex architecture, you can see it in the Dependency Matrix, which is better suited to displaying large architectures.
    Code Metrics – Using NDepend's panel, you can see the code base according to code metrics. You can do some additional filtering, like selecting the top code elements ordered by their current code metric value. You can use the CQL language for this purpose too.
    Smart Search – NDepend has great searching ability, which is again based on CQL. You also have options to search using dropdown lists and text boxes, and it will generate the appropriate CQL code on the fly. Moreover, you can modify the CQL code if you want it to fit more advanced searching tasks.
    Compare Builds and Code Difference – NDepend will also help you compare previous versions of your code with the current one, in one of the cleverest ways I've seen so far.
    Create Custom Rules – using CQL you can create custom rules and let NDepend warn you on each build if you break a rule.
    Reporting – NDepend can automatically generate reports with detailed stats, graph representations, dependency matrices, and some additional advanced reporting features that explain everything related to your application's code and architecture and what you've done.
    And that's not all. As far as I've seen, there are many other features that NDepend supports. I will dig into it more in the upcoming days and will blog more about it. The team who built NDepend have also created good documentation, which you can find on the NDepend website. On their website, you can also find some good videos that will help you get started quite fast. It's easy to install and, very importantly, it is fully integrated with Visual Studio. To get you started, you can watch the following Getting Started Online Demo and Tutorial with explanations and screenshots.
If you are interested in knowing more about how to use the features of this tool, either visit their website or wait for my next blogs, where I will show some real examples of using the tool and how it helps make your code better. And the last thing for this blog: I would like to quote one sentence from NDepend's home page, which says: 'Hence the software design becomes concrete, code reviews are effective, large refactoring are easy and evolution is mastered.' Website: www.ndepend.com Getting Started: http://www.ndepend.com/GettingStarted.aspx Features: http://www.ndepend.com/Features.aspx Download: http://www.ndepend.com/NDependDownload.aspx Hope you like it! Please do let me know your feedback by leaving comments on my blog post. Kind Regards, Hajan

    Read the article

  • Month in Geek: December 2010 Edition

    - by Asian Angel
    As 2010 draws to a close, we have gathered together another great batch of article goodness for your reading enjoyment. Here are our ten hottest articles for December. Note: Articles are listed as #10 through #1. The 50 Best How-To Geek Windows Articles of 2010 Even though we cover plenty of other topics, Windows has always been a primary focus around here, and we’ve got one of the largest collections of Windows-related how-to articles anywhere. Here’s the fifty best Windows articles that we wrote in 2010. Read the article Desktop Fun: Happy New Year Wallpaper Collection [Bonus Edition] As this year draws to a close, it is a time to reflect back on what we have done this year and to look forward to the new one. To help commemorate the event we have put together a bonus size edition of Happy New Year wallpapers for your desktops. Read the article LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology With image technology progressing faster than ever, High-Def has become the standard, giving TV buyers more options at cheaper prices. But what’s different in all these confusing TVs, and what should you know before buying one? Read the article HTG Explains: Which Linux File System Should You Choose? File systems are one of the layers beneath your operating system that you don’t think about—unless you’re faced with the plethora of options in Linux. Here’s how to make an educated decision on which file system to use. Read the article Desktop Fun: Merry Christmas Fonts Christmas will soon be here and there are lots of cards, invitations, gift tags, photos, and more to prepare beforehand. To help you get ready we have gathered together a great collection of fun holiday fonts to help turn those ordinary looking holiday items into extraordinary looking ones. Read the article Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now. Microsoft’s Security Essentials has been our favorite anti-malware application for a while—it’s free, unobtrusive, and it doesn’t slow your PC down, but now it’s even better with the new 2.0 release, which adds network filtering, heuristic protection, and more. Read the article 20 OS X Keyboard Shortcuts You Might Not Know Mastering the keyboard will not only increase your navigation speed but it can also help with wrist fatigue. Here are some lesser known OS X shortcuts to help you become a keyboard ninja. Read the article 20 Windows Keyboard Shortcuts You Might Not Know Mastering the keyboard will not only increase your navigation speed but it can also help with wrist fatigue. Here are some lesser known Windows shortcuts to help you become a keyboard ninja. Read the article The 50 Best Registry Hacks that Make Windows Better We’re big fans of hacking the Windows Registry around here, and we’ve got one of the biggest collections of registry hacks you’ll find. Don’t believe us? Here’s a list of the top 50 registry hacks that we’ve covered. Read the article The Complete List of iPad Tips, Tricks, and Tutorials The Apple iPad is an amazing tablet, and to help you get the most out of it, we’ve put together a comprehensive list of every tip, trick, and tutorial for you. Read on for more. 
    Read the article

    Read the article

  • Is your team a high-performing team?

    As a child I can remember looking out of the car window as my father drove along the Interstate in Florida while seeing prisoners wearing bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure. The prisoner’s primary responsibilities are to pick up trash and debris from the roadway. This is a prime example of a work group or working group used by most prison systems in the United States. Work groups or working groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are only established for a short period of time and are dissolved once the desired outcome has been achieved. More often than not group members usually feel as though they are expendable to the group and some even dread that they are even in the group. "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993) So how do you determine that a team is a high-performing team?  This can be determined by three base line criteria that include: consistently high quality output, the promotion of personal growth and well being of all team members, and most importantly the ability to learn and grow as a unit. Initially, a team can successfully create high-performing output without meeting all three criteria, however this will erode over time because team members will feel detached from the group or that they are not growing then the quality of the output will decline. High performing teams are similar to work groups because they both utilize a collection of individuals or entities to accomplish tasks. What distinguish a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics. These characteristics are what separate a group from a team. The five characteristics of a high-performing team include: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice. A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team lifecycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team lifecycle include: Formulating, Storming, Normalizing, Performing, and Adjourning. The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader because team members are normally unclear about specific roles, specific obstacles and goals that my lay ahead of them.  Finally, this stage is where most team members initially meet one another prior to working as a team unless the team members already know each other. The Storming State normally arrives directly after the formulation of a new team because there are still a lot of unknowns amongst the newly formed assembly. 
As a general rule most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of various tasks that need to be performed by the group.  In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader.  This can be exemplified by looking at the interactions between animals when they first meet.  If we look at a scenario where two people are walking directly toward each other with their dogs. The dogs will automatically enter the Storming State because they do not know the other dog. Typically in this situation, they attempt to define which is more dominating via play or fighting depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs then they will either want to play or leave depending on how the dogs interacted and other environmental variables. Once the Storming State has been realized then the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects.  Typically, participants in the team are filled with energy, and comradery, and a strong alliance with team goals and objectives.  A high school football team is a perfect example of the Normalizing State when they start their season.  The player positions have been assigned, the depth chart has been filled and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams. The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate specific behaviors, attitudes, reactions, and challenges are seen as opportunities and not problems. Additionally, each team member knows their role in the team’s success, and the roles of others. This is the most productive state of a group and is where all the time invested working together really pays off. If you look at an Olympic figure skating team skate you can easily see how the time spent working together benefits their performance. They skate as one unit even though it is comprised of two skaters. Each skater has their routine completely memorized as well as their partners. This allows them to anticipate each other’s moves on the ice makes their skating look effortless. The final state of a team is the Adjourning State. This state is where accomplishments by the team and each individual team member are recognized. Additionally, this state also allows for reflection of the interactions between team members, work accomplished and challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit. Currently in the workplace teams are divided into two different types: Co-located and Distributed Teams. Co-located teams defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of a team has dominated business in the past due to inadequate technology, which forced workers to primarily interact with one another via face to face meetings.  Team meetings are primarily lead by the person with the highest status in the company. 
    Having personally participated in meetings of this type, I have seen that usually a select few team members dominate the flow of communication, which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals, group discussions are skewed in favor of the individuals who communicate the most in meetings. In addition, team members might not give their full opinions on a topic of discussion, in part so as not to offend or create controversy amongst the team, and this can pull decisions made in meetings toward the opinions of the dominating team members. Distributed teams are by definition spread across an area or subdivided into separate sections, and that is exactly what distributed teams are when compared to a more traditional team. It is commonplace for distributed teams to have team members across town, in the next state, across the country, and, with the advances in technology over the last 20 years, even across the world. These teams allow for more diversity compared to other types of teams because they allow for more flexibility regarding location. A team could consist of a 30-year-old male Italian project manager from New York, a 50-year-old female Hispanic from California, and a collection of programmers from India, because technology allows them to communicate as if they were standing next to one another. In addition, distributed team members consult with more team members prior to making decisions compared to traditional teams, and take longer to come to decisions due to differences in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues regarding the work that the team is doing. Virtual teams, which are a subset of the distributed team type, are changing organizational strategies because a team can now in essence be working 24 hours a day by utilizing employees in various time zones and locations. A primary example of this is customer service departments: a company can have multiple call centers spread across multiple time zones, allowing it to appear to be open 24 hours a day while all of its employees work from 9 AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee works, because they will be part of a virtual team; all that is needed is the proper technology set up to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, those team members that only occasionally come into the office can share one desk amongst multiple people. This is definitely a cost-cutting plus given the current state of the economy. One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development, provide team members with the direction, structure, and resources needed to accomplish their work, make the right interventions at the right time, and help the team manage boundaries between the team and the various external parties involved in the team's work. A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true of leaders and leadership.
    A team takes on the attitudes and behaviors of its leaders, which can potentially harm the team and the team's output. Leaders must concentrate on self-awareness and on understanding their team's group dynamics to fully understand how to lead them. In addition, continually learning new leadership techniques from other effective leaders is very beneficial. Providing team members with the direction, structure, and resources they need to accomplish their work collectively sounds easy, but it is not. Leaders need to be able to effectively communicate to their team how their work helps the company reach its organizational vision. Conversely, the leader needs to allow the team to work autonomously within specific guidelines to turn the company's vision into a reality. That said, the team must be appropriately staffed according to the size and complexity of the team's tasks. These tasks should be clear, be meaningful to the company's objectives, and allow for feedback to be exchanged between the leader and the team member and between the leader and upper management. If the team is properly staffed and has a clear and full understanding of what is to be done, the company must also supply the workers with the proper tools to achieve the tasks they are asked to do. No one should be asked to dig a hole without being given a shovel. Finally, leaders must reward their team members for the accomplishments they achieve. Rewards could range from a simple congratulatory email, to a party celebrating the completion of a large project, to monetary rewards. Managing boundaries is very important for team leaders because boundary problems can alter the attitudes of team members and add undue stress to the team, which will force them to lose focus on the tasks at hand. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions so as not to create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves, which can in turn lead to further, unnecessary interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form, so that unproductive behaviors are removed from the team and it can retain focus on its agenda. If an intervention is effectively executed, the team will feel energized about the work they are doing, communication and interaction amongst the group will improve, and morale will improve overall. High-performing teams are very important to organizations because they consistently produce high-quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize specific talents, allowing for growth in these areas. In addition, these team members usually take on a sense of ownership of their projects and feel that the other team members are irreplaceable. References: http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article

  • cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my Ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that contains my working 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the Ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the Ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disc" step on the DVDs at the beginning of the installation process to make sure the DVD is valid. When installation completes and the system boots the 3TB drive, it reports "unknown filesystem". After installation on the 250GB drives, the system boots up fine. During every install I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). It is my understanding that 64-bit Ubuntu (and 64-bit Linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I never bothered setting the BIOS to "non-EFI". I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this:
        8GB  /boot  ext4
        8GB  swap
        3TB  /      ext4
    But I've also tried the following, just in case it matters:
        8GB  boot   efi
        8GB  swap
        8GB  /boot  ext4
        3TB  /      ext4
    Note: In the partition dialog I specify the boot device as the same drive I am partitioning and installing Ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my Ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen:
        Loading Operating System ...
        Boot from CD/DVD :
        Boot from CD/DVD :
        error: unknown filesystem
        grub rescue>
    Note: I have two DVD burners in the system, hence the duplicate line above. Note: I install and boot 64-bit Ubuntu v12.04 on both of my 250GB drives in this same system, but still cannot make the 3TB drive boot. Sigh. Any ideas?
    ==========
    motherboard == gigabyte 990FXA-UD7
    CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz
    RAM == 8GB of DDR3 in 2 sticks (matched pair)
    HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04 FAILS)
    HDD == seagate 1TB SATA3 @ 7200 rpm (64-bit v10.04 WORKS for two years)
    HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
    HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
    GPU == nvidia GTX-285
    ??? == no overclocking or other funky business
    USB == external seagate 2TB HDD for making backups
    DVD == one bluray burner (SATA)
    DVD == one DVD burner (SATA)
    64-bit ubuntu v10.04 has booted and run fine on the seagate 1TB drive for 2 years.
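    The "unknown filesystem ... grub rescue>" symptom on a 3TB disk is commonly a partition-table problem: a disk over 2TB needs a GPT label, and when booting in BIOS/legacy mode GRUB additionally needs a small bios_grub partition for its core image. A sketch of preparing the disk that way before re-running the installer (assumes the 3TB disk is /dev/sda and may be erased):

        sudo parted /dev/sda mklabel gpt                  # GPT partition table (required above 2TB)
        sudo parted /dev/sda mkpart biosgrub 1MiB 3MiB    # tiny partition to hold GRUB's core image
        sudo parted /dev/sda set 1 bios_grub on           # flag it so grub-install will use it
        # then create /boot, swap and / in the installer's manual partitioning step as before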

    Read the article

  • SQL SERVER – Attach mdf file without ldf file in Database

    - by pinaldave
    Background Story: One of my friends recently called up and asked me if I had spare time to look at his database and give him performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and sample data. Since his database is at a very early stage and still small, he told me he would like me to have the complete database. My response to him was "Sure! In that case, take a backup of the database and send it to me. I will restore it onto my computer and play with it." He did send me his database; however, his method made me write this quick note here. Instead of taking a full backup of the database and sending it to me, he sent me only the .mdf (primary database file). In fact, I had asked for a complete backup (I wanted to review file groups, files, as well as a few other details). Upon calling my friend, I found that he was not available, which left me with only a .mdf file. As I had some extra time, I decided to check out his database structure and get back to him regarding the full backup whenever I could get in touch with him again. Technical Talk: If the database was shut down gracefully and there was no abrupt shutdown (power outages, pulled plugs, machine crashes or any other reasons), it is possible (though there is no guarantee) to attach only the .mdf file to the server. Please note that there can be many more reasons for a database not getting attached or restored. In my case, the database had a clean shutdown and there were no complex issues. I was able to recreate a transaction log file and attach the received .mdf file. There are multiple ways of doing this, and I am listing all of them here. Before using any of them, please consult the domain expert in your company or industry. Also, never attempt this on a live/production server without the presence of a disaster recovery expert.
        USE [master]
        GO
        -- Method 1: I use this method
        EXEC sp_attach_single_file_db @dbname='TestDb',
            @physname=N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf'
        GO
        -- Method 2:
        CREATE DATABASE TestDb ON
            (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
            FOR ATTACH_REBUILD_LOG
        GO
    Method 2: If one or more log files are missing, they are recreated. There is one more method which I am demonstrating here, but I have not used it myself before. According to Books Online, it will work only if there is one log file that is missing. If there is more than one log file involved, all of them are required to undergo the same procedure.
        -- Method 3:
        CREATE DATABASE TestDb ON
            (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
            FOR ATTACH
        GO
    Please read Books Online in depth and consult DR experts before working on the production server. In my case, the above syntax just worked fine as the database was clean when it was detached. Feel free to write your opinions and experiences, for it will help the IT community learn more from your suggestions and skills. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz) Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function. SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable; There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are the datetime field – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”) I’m going to consider the three main types of user-defined functions in SQL Server: Scalar Inline Table-Valued Multi-statement Table-Valued I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database. 
CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int) RETURNS TABLE AS  RETURN (     SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ) ; GO CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int) RETURNS @results TABLE (     EmployeeLogin nvarchar(512),     OrderDate datetime,     SalesOrderID int     ) AS BEGIN     INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)     SELECT e.LoginID, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ;     RETURN END ; GO You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch! But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure. 
It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, but yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog. Now how about scalar functions... Suppose we wanted a scalar function to return the count of these. CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int) RETURNS int AS BEGIN     RETURN (         SELECT COUNT(*)         FROM Sales.SalesOrderHeader AS o         LEFT JOIN HumanResources.Employee AS e         ON e.EmployeeID = o.SalesPersonID         WHERE o.SalesPersonID = @salespersonid         AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')         AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ); END ; GO Notice the evil words? They’re required. Try to remove them, you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions. Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan: SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years. And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one between twice as expensive as the outer query. And because it’s calling it in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here. Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query. 
It could’ve run something like this: SELECT e.LoginID, y.year, (SELECT COUNT(*)     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = p.SalesPersonID     AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')     AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')     ) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly. To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok. CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int) RETURNS table AS RETURN (SELECT COUNT(*) as NumSales     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ); GO But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator. SELECT e.LoginID, y.year, n.NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n; And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions). So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley
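To make the difference measurable on your own system, here is a small follow-up sketch (an addition, not from the original post). It assumes the AdventureWorks sample database and the dbo.FetchSales_scalar and dbo.FetchSales_inline2 functions created above. OBJECTPROPERTY confirms whether a table-valued function really is inline, and SET STATISTICS TIME/IO makes the per-row cost of the scalar version visible next to the APPLY version.
-- Returns 1 when the function is an inline TVF
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.FetchSales_inline2'), 'IsInlineFunction') AS IsInline;
GO
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
GO
-- Scalar UDF: invoked once per row of the outer query
SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales
FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
CROSS JOIN Sales.SalesPerson AS p
LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID;
GO
-- Inline TVF via APPLY: expanded into the outer plan and simplified by the optimizer
SELECT e.LoginID, y.year, n.NumSales
FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
CROSS JOIN Sales.SalesPerson AS p
LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID
OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n;
GO
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;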

    Read the article

  • Where’s my MD.050?

    - by Dave Burke
    A question that I’m sometimes asked is “where’s my MD.050 in OUM?” For those not familiar with an MD.050, it serves the purpose of being a Functional Design Document (FDD) in one of Oracle’s legacy Methods. Functional Design Documents have existed for many years with their primary purpose being to describe the functional aspects of one or more components of an IT system, typically, a Custom Extension of some sort. So why don’t we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into “Design mode” too early in the process. Whereas OUM encourages more emphasis on gathering, and describing the functional requirements of a system ahead of the formal Analysis and Design process. So that just means more work up front for the Business Analyst or Functional Consultants right? Well no…..the design of a solution, particularly when it involves a complex custom extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite, in that by putting more emphasis on clearly understanding the nuances of functionality requirements early in the process, then the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost effective) to update a document or a diagram, than to change lines of code. So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a “User Story”, or it could be what Cockburn1 describes a “fully dressed Use Case”. It is worth mentioned at this point the highly scalable nature of OUM in the sense that “documents” should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more Agile approach to describing functional requirements; through “User Stories” perhaps. In any event, it is quite common for a Custom Extension to involve the creation of several “components”, i.e. some new screens, an interface, a report etc. Therefore several Use Cases might be required, which in turn can then be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail, and assembled into a Package, you can now create an Analysis Model for the Package. An Analysis Model is conceptual in nature, and depending on the solution being developing, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams etc.) which collectively describe the Data, Behavior and Use Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Orientated approach, then the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, then the various elements of the Analysis Model may be represented within the Analysis Specification. If we now return to the original question of “Where’s my MD.050”. 
The full answer would be: Capture the functional requirements within a Use Case Group related Use Cases into a Package Create an Analysis Model for each Package Consider creating an Analysis Specification (AN.100) as a index to each Analysis Model artifact An alternative answer for a relatively simple Custom Extension would be: Capture the functional requirements within a Use Case Optionally, group related Use Cases into a Package Create an Analysis Specification (AN.100) for each package 1 Cockburn, A, 2000, Writing Effective Use Case, Addison-Wesley Professional; Edition 1

    Read the article

  • Is Social Media The Vital Skill You Aren’t Tracking?

    - by HCM-Oracle
    By Mark Bennett - Originally featured in Talent Management Excellence The ever-increasing presence of the workforce on social media presents opportunities as well as risks for organizations. While on the one hand, we read about social media embarrassments happening to organizations, on the other we see that social media activities by workers and candidates can enhance a company’s brand and provide insight into what individuals are, or can become, influencers in the social media sphere. HR can play a key role in helping organizations make the most value out of the activities and presence of workers and candidates, while at the same time also helping to manage the risks that come with the permanence and viral nature of social media. What is Missing from Understanding Our Workforce? “If only HP knew what HP knows, we would be three-times more productive.”  Lew Platt, Former Chairman, President, CEO, Hewlett-Packard  What Lew Platt recognized was that organizations only have a partial understanding of what their workforce is capable of. This lack of understanding impacts the company in several negative ways: 1. A particular skill that the company needs to access in one part of the organization might exist somewhere else, but there is no record that the skill exists, so the need is unfulfilled. 2. As market conditions change rapidly, the company needs to know strategic options, but some options are missed entirely because the company doesn’t know that sufficient capability already exists to enable those options. 3. Employees may miss out on opportunities to demonstrate how their hidden skills could create new value to the company. Why don’t companies have that more complete picture of their workforce capabilities – that is, not know what they know? One very good explanation is that companies put most of their efforts into rating their workforce according to the jobs and roles they are filling today. This is the essence of two important talent management processes: recruiting and performance appraisals.  In recruiting, a set of requirements is put together for a job, either explicitly or indirectly through a job description. During the recruiting process, much of the attention is paid towards whether the candidate has the qualifications, the skills, the experience and the cultural fit to be successful in the role. This makes a lot of sense.  In the performance appraisal process, an employee is measured on how well they performed the functions of their role and in an effort to help the employee do even better next time, they are also measured on proficiency in the competencies that are deemed to be key in doing that job. Again, the logic is impeccable.  But in both these cases, two adages come to mind: 1. What gets measured is what gets managed. 2. You only see what you are looking for. In other words, the fact that the current roles the workforce are performing are the basis for measuring which capabilities the workforce has, makes them the only capabilities to be measured. What was initially meant to be a positive, i.e. identify what is needed to perform well and measure it, in order that it can be managed, comes with the unintended negative consequence of overshadowing the other capabilities the workforce has. This also comes with an employee engagement price, for the measurements and management of workforce capabilities is to typically focus on where the workforce comes up short. 
Again, it makes sense to do this, since improving a capability that appears to result in improved performance benefits, both the individual through improved performance ratings and the company through improved productivity. But this is based on the assumption that the capabilities identified and their required proficiencies are the only attributes of the individual that matter. Anything else the individual brings that results in high performance, while resulting in a desired performance outcome, often goes unrecognized or underappreciated at best. As social media begins to occupy a more important part in current and future roles in organizations, businesses must incorporate social media savvy and innovation into job descriptions and expectations. These new measures could provide insight into how well someone can use social media tools to influence communities and decision makers; keep abreast of trends in fast-moving industries; present a positive brand image for the organization around thought leadership, customer focus, social responsibility; and coordinate and collaborate with partners. These measures should demonstrate the “social capital” the individual has invested in and developed over time. Without this dimension, “short cut” methods may generate a narrow set of positive metrics that do not have real, long-lasting benefits to the organization. How Workforce Reputation Management Helps HR Harness Social Media With hundreds of petabytes of social media data flowing across Facebook, LinkedIn and Twitter, businesses are tapping technology solutions to effectively leverage social for HR. Workforce reputation management technology helps organizations discover, mobilize and retain talent by providing insight into the social reputation and influence of the workforce while also helping organizations monitor employee social media policy compliance and mitigate social media risk.  There are three major ways that workforce reputation management technology can play a strategic role to support HR: 1. Improve Awareness and Decisions on Talent Many organizations measure the skills and competencies that they know they need today, but are unaware of what other skills and competencies their workforce has that could be essential tomorrow. How about whether your workforce has the reputation and influence to make their skills and competencies more effective? Many organizations don’t have insight into the social media “reach” their workforce has, which is becoming more critical to business performance. These features help organizations, managers, and employees improve many talent processes and decision making, including the following: Hiring and Assignments. People and teams with higher reputations are considered more valuable and effective workers. Someone with high reputation who refers a candidate also can have high credibility as a source for hires.   Training and Development. Reputation trend analysis can impact program decisions regarding training offerings by showing how reputation and influence across the workforce changes in concert with training. Worker reputation impacts development plans and goal choices by helping the individual see which development efforts result in improved reputation and influence.   Finding Hidden Talent. Managers can discover hidden talent and skills amongst employees based on a combination of social profile information and social media reputation. Employees can improve their personal brand and accelerate their career development.  2. 
Talent Search and Discovery The right technology helps organizations find information on people that might otherwise be hidden. By leveraging access to candidate and worker social profiles as well as their social relationships, workforce reputation management provides companies with a more complete picture of what their knowledge, skills, and attributes are and what they can in turn access. This more complete information helps to find the right talent both outside the organization as well as the right, perhaps previously hidden talent, within the organization to fill roles and staff projects, particularly those roles and projects that are required in reaction to fast-changing opportunities and circumstances. 3. Reputation Brings Credibility Workforce reputation management technology provides a clearer picture of how candidates and workers are viewed by their peers and communities across a wide range of social reputation and influence metrics. This information is less subject to individual bias and can impact critical decision-making. Knowing the individual’s reputation and influence enables the organization to predict how well their capabilities and behaviors will have a positive effect on desired business outcomes. Many roles that have the highest impact on overall business performance are dependent on the individual’s influence and reputation. In addition, reputation and influence measures offer a very tangible source of feedback for workers, providing them with insight that helps them develop themselves and their careers and see the effectiveness of those efforts by tracking changes over time in their reputation and influence. The following are some examples of the different reputation and influence measures of the workforce that Workforce Reputation Management could gather and analyze: Generosity – How often the user reposts other’s posts. Influence – How often the user’s material is reposted by others.  Engagement – The ratio of recent posts with references (e.g. links to other posts) to the total number of posts.  Activity – How frequently the user posts. (e.g. number per day)  Impact – The size of the users’ social networks, which indicates their ability to reach unique followers, friends, or users.   Clout – The number of references and citations of the user’s material in others’ posts.  The Vital Ingredient of Workforce Reputation Management: Employee Participation “Nothing about me, without me.” Valerie Billingham, “Through the Patient’s Eyes”, Salzburg Seminar Session 356, 1998 Since data resides primarily in social media, a question arises: what manner is used to collect that data? While much of social media activity is publicly accessible (as many who wished otherwise have learned to their chagrin), the social norms of social media have developed to put some restrictions on what is acceptable behavior and by whom. Disregarding these norms risks a repercussion firestorm. One of the more recognized norms is that while individuals can follow and engage with other individual’s public social activity (e.g. Twitter updates) fairly freely, the more an organization does this unprompted and without getting permission from the individual beforehand, the more likely the organization risks a totally opposite outcome from the one desired. Instead, the organization must look for permission from the individual, which can be met with resistance. 
That resistance comes from not knowing how the information will be used, how it will be shared with others, and not receiving enough benefit in return for granting permission. As the quote above about patient concerns and rights succinctly states, no one likes not feeling in control of the information about themselves, or the uncertainty about where it will be used. This is well understood in consumer social media (i.e. permission-based marketing) and is applicable to workforce reputation management. However, asking permission leaves open the very real possibility that no one, or so few, will grant permission, resulting in a small set of data with little usefulness for the company. Connecting Individual Motivation to Organization Needs So what is it that makes an individual decide to grant an organization access to the data it wants? It is when the individual’s own motivations are in alignment with the organization’s objectives. In the case of workforce reputation management, when the individual is motivated by a desire for increased visibility and career growth opportunities to advertise their skills and level of influence and reputation, they are aligned with the organizations’ objectives; to fill resource needs or strategically build better awareness of what skills are present in the workforce, as well as levels of influence and reputation. Individuals can see the benefit of granting access permission to the company through multiple means. One is through simple social awareness; they begin to discover that peers who are getting more career opportunities are those who are signed up for workforce reputation management. Another is where companies take the message directly to the individual; we think you would benefit from signing up with our workforce reputation management solution. Another, more strategic approach is to make reputation management part of a larger Career Development effort by the company; providing a wide set of tools to help the workforce find ways to plan and take action to achieve their career aspirations in the organization. An effective mechanism, that facilitates connecting the visibility and career growth motivations of the workforce with the larger context of the organization’s business objectives, is to use game mechanics to help individuals transform their career goals into concrete, actionable steps, such as signing up for reputation management. This works in favor of companies looking to use workforce reputation because the workforce is more apt to see how it fits into achieving their overall career goals, as well as seeing how other participation brings additional benefits.  Once an individual has signed up with reputation management, not only have they made themselves more visible within the organization and increased their career growth opportunities, they have also enabled a tool that they can use to better understand how their actions and behaviors impact their influence and reputation. Since they will be able to see their reputation and influence measurements change over time, they will gain better insight into how reputation and influence impacts their effectiveness in a role, as well as how their behaviors and skill levels in turn affect their influence and reputation. This insight can trigger much more directed, and effective, efforts by the individual to improve their ability to perform at a higher level and become more productive. 
The increased sense of autonomy the individual experiences, in linking the insight they gain to the actions and behavior changes they make, greatly enhances their engagement with their role as well as their career prospects within the company. Workforce reputation management takes the wide range of disparate data about the workforce being produced across various social media platforms and transforms it into accessible, relevant, and actionable information that helps the organization achieve its desired business objectives. Social media holds untapped insights about your talent, brand and business, and workforce reputation management can help unlock them. Imagine - if you could find the hidden secrets of your businesses, how much more productive and efficient would your organization be? Mark Bennett is a Director of Product Strategy at Oracle. Mark focuses on setting the strategic vision and direction for tools that help organizations understand, shape, and leverage the capabilities of their workforce to achieve business objectives, as well as help individuals work effectively to achieve their goals and navigate their own growth. His combination of a deep technical background in software design and development, coupled with a broad knowledge of business challenges and thinking in today’s globalized, rapidly changing, technology accelerated economy, has enabled him to identify and incorporate key innovations that are central to Oracle Fusion’s unique value proposition. Mark has over the course of his career been in charge of the design, development, and strategy of Talent Management products and the design and development of cutting edge software that is better equipped to handle the increasingly complex demands of users while also remaining easy to use. Follow him @mpbennett

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February ( but due to NDA restrictions I could not share them with the developer community at large ) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference. First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources for such a relative edge case, but my guess is that they felt it was an important edge case at the time as some of the more vocal .NET evangelists as well as some very high profile start-ups ( ie. Twitter ) had publicly announced their intent to use Rails. Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it.  This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer. In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If its not MVC, it’s crap”. Now, this is a rather ignorant position to take as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline. So over the past few years some horror stories have begun to bubble to the surface of software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC. 
These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision, that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in a less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product which has been under active development for more than 18 months and revert back to WebForms. The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers as the perceived barrier to entry for learning ASP.NET has become higher. As a result many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive ( ie. PHP or Ruby ). At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting Sharepoint as a platform for building internal enterprise web applications. Sharepoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, Sharepoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another direction, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines. With the introduction of ASP.NET MVC, it also made some of the third party control vendors scratch their heads, and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers. 
And even after MVC was introduced these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC. While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume; whether it’s a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and Javascript, or a rich desktop application running on a PC. REST-based services which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust Javascript libraries like JQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC. In response to these new industry trends, Microsoft did what it always does – it immediately poured some resources into developing a solution which will ensure they remain relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios. So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, that all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack” coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on Codeplex a few months back will eventually stick. Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The  “One ASP.NET” strategy will promote the use of all frameworks - WebForms, MVC, and Web API, even within the same web project. 
Basically the message is utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and Sharepoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again. And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward ( MVC4 may in fact be the last significant iteration of this framework ). This may sound alarming to some folks who have recently adopted MVC but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from the higher level to see that it really makes no sense. MVC is a server-side page compositing framework; whereas, Web API promotes client-side page compositing with a heavy focus on web services. In order to make MVC work well with Web API, would require a complete rewrite of MVC and at the end of the day, there would be no upgrade path for existing MVC applications. So it really does not make much business sense. So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a “hybrid” model which would provide compatibility for existing modules while at the same time provide a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Service Framework which is actually built on top of MVC2 ( we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack ). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke ( which is expected to be released in Q4 2012 ) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list that I recommend using for 10G SOA clusters.
Test cases for each component - Oracle Application Server 10G
General Application Server test cases
This section covers very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.
Test Case 1: Check if you can see AS instances in the console
Implementation: Log on to the AS Console and check whether you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
Result: All the instances in the AS cluster should be listed in the EM console. If the instances are not listed, here are the files to check to see if OPMN joined the cluster properly: $ORACLE_HOME\opmn\logs\opmn.log and $ORACLE_HOME\opmn\logs\opmn.dbg. If OPMN did not join the cluster properly, please check the opmn.xml file to make sure the discovery multicast address and port are correct (see the opmn documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall. Log on to the AS console to see if the instance is listed as part of the cluster.
Test Case 2: Check to see if you can start/stop each component
Implementation: Check each OC4J component on each AS instance. Start each and every component through the AS console to see if it will start and stop. Do that for each and every instance.
Result: Each component should start and stop through the AS console. You can also verify that a component started by checking opmnctl status after logging onto each box associated with the cluster.
Test Case 3: Add/modify a datasource entry through the AS console on a remote AS instance (not on the instance where EM is physically running)
Implementation: Pick an OC4J instance. Create a new data-source through the AS console. Modify an existing data-source or connection pool (optional).
Result: Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or the modified) connection details and data-source exist. If they do, then the AS console has successfully updated a remote file and MBeans are communicating correctly.
Test Case 4: Start and stop AS instances using the opmnctl @cluster command
Implementation: Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances.
Result: Use opmnctl @cluster status to check the start and stop statuses.
HTTP server test cases
This section deals with use cases to test HTTP server failover scenarios.
In these examples the HTTP server will be talking to the BPEL console (or any other web application that the client wants), so the URL will be http://hostname:port/BPELConsole
Test Case 1: Shut down one of the HTTP servers while accessing the BPEL console and see the request routed to the second HTTP server in the cluster
Implementation: Access the BPELConsole. Check $ORACLE_HOME\Apache\Apache\logs\access_log for the timestamp and the URL that was accessed by the user; the entry would look like this: 1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15. After you have figured out which HTTP server this is running on, shut down this HTTP server by using opmnctl stopproc (a graceful shutdown). Access the BPELConsole again (note that you should have a load balancer in front of the HTTP servers and have configured the Apache Virtual Host; see the EDG for steps). Check $ORACLE_HOME\Apache\Apache\logs\access_log again for the timestamp and the URL that was accessed by the user; the entry would look like the one above.
Result: Even though you are shutting down the HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console, and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node.
Test Case 2: Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers so that the LBR routes the request to the surviving HTTP node. This simulates a network failure.
Test Case 3: In Test Case 1 we simulated a graceful shutdown; in this case we will simulate an Apache crash
Implementation: Use opmnctl status -l to get the PID of the HTTP server that you would like to forcefully bring down. On Linux, use kill -9 <PID> to kill the HTTP server. Access the BPEL console.
Result: As you shut down the HTTP server, OPMN will restart the HTTP server. The restart may be so quick that the LBR may still route the request to the same server. One way to check whether the HTTP server restarted is to check the new PID and the timestamp in the access log for the BPEL console.
BPEL test cases
This section covers scenarios dealing with BPEL clustering using jGroups, BPEL deployment, and testing related to BPEL failover.
Test Case 1: Verify that jGroups has initialized correctly
There is no real testing in this use case, just a visual verification by looking at log files that jGroups has initialized correctly. Check the opmn log for the BPEL container on all nodes at $ORACLE_HOME/opmn/logs/<group name><container name><group name>~1.log. This logfile will contain jGroups-related information during startup and steady-state operation.
Soon after startup you should find log entries for UDP or TCP.
Example jGroups log entries for UDP:
Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
INFO: sockets will use interface 144.25.142.172
Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
INFO: socket information:
local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
-------------------------------------------------------
GMS: address is 144.25.142.172:1127
-------------------------------------------------------
Example jGroups log entries for TCP:
Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
INFO: server socket created on 144.25.142.172:7900
Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
-------------------------------------------------------
GMS: address is 144.25.142.172:7900
-------------------------------------------------------
In the log below, "socket created on" indicates that the TCP socket is established on the node's own IP address and port, and "created socket to" shows that the second node has connected to the first node, matching the logfile above by IP address and port.
Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
INFO: server socket created on 144.25.142.173:7901
Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
-------------------------------------------------------
GMS: address is 144.25.142.173:7901
-------------------------------------------------------
Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
INFO: created socket to 144.25.142.172:7900
Result: By reviewing the log files, you can confirm whether BPEL clustering at the jGroups level is working and that the jGroups channel is communicating.
Test Case 2: Test connectivity between BPEL nodes
Implementation: Test connections between different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and the number of hops between cluster nodes can affect performance, as they have a tendency to take down connections after some time or simply block them. Also reference Metalink Note 413783.1: "How to Test Whether Multicast is Enabled on the Network."
Result: Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.
Test Case 3: Test deployment of a BPEL suitcase to one BPEL node
Implementation: Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance using ant, JDeveloper, or the BPEL console. Log on to the second BPEL console to check whether the BPEL suitcase has been deployed.
Result: If jGroups has been configured and is communicating correctly, BPEL clustering will allow you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment. The second BPEL instance will go to the DB and pick up the new deployment after receiving notification. The result is that the new deployment will be "deployed" to each node by only deploying to a single BPEL instance in the BPEL cluster.
Test Case 4: Test whether the BPEL server fails over and all asynch processes are picked up by the secondary BPEL instance
Implementation: Deploy two asynch processes: a ParentAsynch process which calls a ChildAsynchProcess with a variable telling it how many times to loop or how many seconds to sleep, and a ChildAsynchProcess that loops, sleeps, or has an onAlarm. Make sure that the processes are deployed to both servers. Shut down one BPEL server. On the active BPEL server, call ParentAsynch a few times (use the load generation page). When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one. Please wait till this BPEL instance shuts down fully before starting up the second one. Log on to the BPEL console and see that the instances were picked up by the second BPEL node and completed.
Result: The BPEL instances will fail over to the secondary node and complete the flow.
ESB test cases
This section covers the use cases involved with testing an ESB cluster. For this section, please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.

    Read the article

< Previous Page | 326 327 328 329 330 331 332 333 334 335 336 337  | Next Page >