
  • C#: System.Lazy&lt;T&gt; and the Singleton Design Pattern

    - by James Michael Hare
    So we've all coded a Singleton at one time or another. It's a really simple pattern and can be a slightly more elegant alternative to global variables. Make no mistake, Singletons can be abused and are often over-used -- but occasionally you find a Singleton is the most elegant solution. For those of you not familiar with a Singleton, the basic Design Pattern is that a Singleton class is one where there is only ever one instance of the class created. This means that constructors must be private to avoid users creating their own instances, and a static property (or method in languages without properties) is defined that returns a single static instance.

        public class Singleton
        {
            // the single instance is defined in a static field
            private static readonly Singleton _instance = new Singleton();

            // constructor is private so users can't instantiate on their own
            private Singleton()
            {
            }

            // read-only property that returns the static field
            public static Singleton Instance
            {
                get
                {
                    return _instance;
                }
            }
        }

    This is the most basic singleton; notice the key features:

    - A static readonly field that contains the one and only instance.
    - A constructor that is private, so it can only be called by the class itself.
    - A static property that returns the single instance.

    Looks like it satisfies, right? There's just one (potential) problem. C# gives you no guarantee of when the static field _instance will be created. This is because the C# standard simply states that classes (which are marked in the IL as BeforeFieldInit) can have their static fields initialized any time before the field is accessed. This means that they may be initialized on first use, or they may be initialized at some other time before; you can't be sure when.

    So what if you want to guarantee your instance is truly lazy? That is, that it is only created on the first call to Instance? Well, there are a few ways to do this. First we'll show the old ways, and then talk about how .NET 4.0's new System.Lazy<T> type can help make the lazy Singleton cleaner. Obviously, we could take on the lazy construction ourselves, but given that our Singleton may be accessed by many different threads, we'd need to lock it down.

        public class LazySingleton1
        {
            // lock for thread-safe laziness
            private static readonly object _mutex = new object();

            // static field to hold the single instance
            private static LazySingleton1 _instance = null;

            // property that locks and creates the instance on first call
            public static LazySingleton1 Instance
            {
                get
                {
                    if (_instance == null)
                    {
                        lock (_mutex)
                        {
                            if (_instance == null)
                            {
                                _instance = new LazySingleton1();
                            }
                        }
                    }

                    return _instance;
                }
            }

            private LazySingleton1()
            {
            }
        }

    This is a standard double-check algorithm so that you don't lock if the instance has already been created. However, because two threads can get through the first if at the same time on first use, you need to check again after the lock is acquired to avoid creating two instances. Pretty straightforward, but ugly as all heck. Well, you could also take advantage of the C# standard's BeforeFieldInit rules and define your class with a static constructor. It need not have a body: just the presence of the static constructor will remove the BeforeFieldInit attribute on the class and guarantee that no fields are initialized until the first static field, property, or method is called.
        public class LazySingleton2
        {
            // because of the static constructor, this won't get created until first use
            private static readonly LazySingleton2 _instance = new LazySingleton2();

            // returns the singleton instance using lazy instantiation
            public static LazySingleton2 Instance
            {
                get { return _instance; }
            }

            // private to prevent direct instantiation
            private LazySingleton2()
            {
            }

            // removes BeforeFieldInit on the class so static fields are not
            // initialized before they are used
            static LazySingleton2()
            {
            }
        }

    Now, while this works perfectly, I hate it. Why? Because it's relying on a non-obvious trick of the IL to guarantee laziness. Just looking at this code, you'd have no idea that it's doing what it's doing. Worse yet, you may decide that the empty static constructor serves no purpose and delete it (which removes your lazy guarantee). Worse still, the rules around BeforeFieldInit may be altered in the future, which could change this behavior.

    So, what do I propose instead? .NET 4.0 adds the System.Lazy<T> type, which guarantees thread-safe lazy construction. Using System.Lazy<T>, we get:

        public class LazySingleton3
        {
            // static holder for the instance; we need a lambda to construct it
            // since the constructor is private
            private static readonly Lazy<LazySingleton3> _instance
                = new Lazy<LazySingleton3>(() => new LazySingleton3());

            // private to prevent direct instantiation
            private LazySingleton3()
            {
            }

            // accessor for the instance
            public static LazySingleton3 Instance
            {
                get
                {
                    return _instance.Value;
                }
            }
        }

    Note, you need your lambda to call the private constructor, as Lazy<T>'s default constructor can only call public constructors of the type passed in (which we can't have, by definition of a Singleton). But because the lambda is defined inside our type, it has access to the private members, so it's perfect. Note how Lazy<T> makes it obvious what you're doing (lazy construction), instead of relying on an IL generation side effect. This way, it's more maintainable. Lazy<T> has many other uses as well, obviously, but I really love how elegant and readable it makes the lazy Singleton.
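    As a quick check of the guarantee, here is a minimal consumption sketch, assuming the LazySingleton3 class above (Parallel.For is available in .NET 4.0, the same release that introduced Lazy<T>): every iteration should print the same hash code, because the factory lambda runs exactly once.

        using System;
        using System.Threading.Tasks;

        public static class Program
        {
            public static void Main()
            {
                // hammer the Instance property from several threads at once;
                // Lazy<T> guarantees the factory lambda runs exactly once
                Parallel.For(0, 10, i =>
                {
                    LazySingleton3 instance = LazySingleton3.Instance;
                    Console.WriteLine("Iteration {0} got instance {1}",
                        i, instance.GetHashCode());
                });
            }
        }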

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects, then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not…

    Update 9th March 2010: Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald's company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered.

    What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer, or 10. This is the situation that most consultancies find themselves in, and thus they need a more sustainable and maintainable option. What I am advocating is that we should have one "Team Project" per customer, and use areas to create "sub projects" within that single "Team Project".

    "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server

    "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP

    "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP

    "@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations." - Ed Blankenship, Visual Studio ALM MVP

    This would mean that SSW would have a single Team Project called "SSW" that contains all of our internal projects, and consequently all of the Areas and Iterations move down one level in the hierarchy to accommodate this. Where we would have had "\SSW\Sprint 1" we now have "\SSW\SqlDeploy\Sprint 1", with "SqlDeploy" being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long-term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But there are implications, as TFS does not provide this model out-of-the-box. These implications stretch across Areas, Iterations, Queries, the Project Portal and Version Control.

    Michael made a good comment: "I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it'll either be easy to convince them or there is a valid reason for having a different template." - Michael Fourie, Visual Studio ALM MVP

    At SSW we have a standard template that we use, and this is applied across the board to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW, then this approach may benefit you as well.

    Implications around Areas

    Areas should be used for topological classification/isolation of work items.
    You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top-level item that represents the Project/Product that we want to chop our Team Project into.

    Figure: Creating a sub area to represent a product/project is easy.

    <teamproject>
    <teamproject>\<Functional Area/module whatever>

    Becomes:

    <teamproject>
    <teamproject>\<ProjectName>\
    <teamproject>\<ProjectName>\<Functional Area/module whatever>

    Implications around Iterations

    Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines, and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects.

    Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration.

    <teamproject>\Sprint 1
    Or
    <teamproject>\Release 1\Sprint 1

    Becomes:

    <teamproject>\<ProjectName>\Sprint 1
    Or
    <teamproject>\<ProjectName>\Release 1\Sprint 1

    Implications around Queries

    Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects, and it would be a pain to have to edit all of the work items every time we changed project; that would also only allow one team to work on one project at a time.

    Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs.

    In order for project contributors to be able to query based on their project, we need a couple of things. The first thing I did was to create an "_Area Template" folder that has a copy of the project layout with all the queries set up to filter based on the "_Area Template" Area and the "_Sprint template" you can see in the Area and Iteration views.

    Figure: The template is currently easy to drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool.

    I then created an "Areas" folder to hold all of the area-specific queries. So, when you go to create a new TFS sub-project you just drag "_Area Template" while holding "Ctrl" and drop it onto "Areas". There is a little setup here. That said, I managed it in around 10 minutes, which is not so bad, and I can imagine it being quite easy to build a tool to create these queries.

    Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well.

    Version Control

    What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products.

    Figure: Creating sub folders in source control is as easy as "Right click | Create new folder".

    <teamproject>\DEV\Main\

    Becomes:

    <teamproject>\<ProjectName>\DEV\Main\

    Conclusion

    I think it is up to each company to make a call on how you want to configure your Team Projects, and it depends completely on how many projects/products you are going to have for each customer, including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help; a rough sketch of one follows.
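    As a rough illustration of what the Areas and Iterations part of such a tool might do, here is a minimal sketch assuming the TFS 2010 client object model; the collection URL, team project and sub-project names are hypothetical placeholders.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Server;

        class SubProjectSetup
        {
            static void Main()
            {
                // hypothetical collection URL and project/sub-project names
                var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://tfs:8080/tfs/DefaultCollection"));
                var css = tpc.GetService<ICommonStructureService>();

                ProjectInfo project = css.GetProjectFromName("SSW");

                // walk the root structure nodes of the team project
                foreach (NodeInfo root in css.ListStructures(project.Uri))
                {
                    if (root.StructureType == "ProjectModelHierarchy")
                    {
                        // Areas: add the new <ProjectName> level
                        css.CreateNode("SqlDeploy", root.Uri);
                    }
                    else if (root.StructureType == "ProjectLifecycle")
                    {
                        // Iterations: mirror the same level, plus a first sprint
                        string projectNode = css.CreateNode("SqlDeploy", root.Uri);
                        css.CreateNode("Sprint 1", projectNode);
                    }
                }
            }
        }

    A real tool would also need to clone the query folders and set permissions, which live behind other TFS services, but the structure work above is the repetitive part.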
    Pros

    - You only have one project to upgrade when a process template changes – After going through an upgrade of over 170 projects prior to the changes in the RC, I can tell you that that many projects is no fun.
    - Standardises your Process Template – You will always have the same process implementation across projects/products, without exception.
    - You get tighter control over the permissions – Yes, you can do this on a standard Team Project, but it gets a lot easier with practice.
    - You can "move" work items from one "product" to another – Have we not always wanted to do that?
    - You can rename your projects – Wahoo: everyone wants to do this, now you can.
    - One set of Reporting Services reports to manage – You set an area and iteration to run reports anyway, so you may as well set both.
    - Simplified Check-In Policies – There is only one set of check-in policies per client. This simplifies administration of policies.
    - Simplified Alerts – As alerts are applied across multiple projects, this simplifies your alert rules per client.

    Cons

    All of these cons could be mitigated by a custom tool that helps automate creation of "sub-projects" within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint sites and queries. It just does not exist yet :)

    - You need to configure the Areas and Iterations.
    - You need to configure the permissions.
    - You may need to configure sub sites for SharePoint (depends on your requirements) – If you have two projects/products in the same Team Project then you will not see the burn down for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem, as you would have to configure your burndown graphs for your current iteration anyway. Note: when you create a sub site to a TFS-linked portal it will inherit the settings of its parent site :) This is fantastic, as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one.
    - Every team wants their own customization (via Ewald Hofman) – Small teams of 2 persons against teams of 30 – or even outsourcing – need their own process; you cannot allow that, because everybody gets the same work item types. Note: luckily at SSW this is not a problem, as our template is standardised across all projects and customers.
    - Large list of builds (via Ewald Hofman) – As the build list in Team Explorer is just a flat list, it can get very cluttered. Note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean the list.

    Feedback

    Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?

    Technorati Tags: Visual Studio ALM, TFS Administration, TFS, Team Foundation Server, Project Planning, TFS Customisation

    Read the article

  • Enterprise SharePoint 2010 Hosting, SharePoint Foundation 2010 Hosting, SharePoint Standard 2010 Hosting

    - by Michael J. Hamilton, Sr.
    Sclera, a Microsoft Hosted Services Provider Partner based in Michigan, is offering key service offerings around the Microsoft SharePoint Server 2010 stack. Specifically – if you're looking for SharePoint Foundation, SharePoint Standard or Enterprise 2010 hosting provisions, check out the Service Offerings from Sclera Hosting (www.sclerahosting.com) and compare with some of the lowest prices available on the web today. I wanted to post this so you could shop around and compare. There are a couple of the larger on-demand hosting agencies (247hosting, and fpweb hosting) that charge outrageous fees – like $350 a month for SharePoint Foundation 2010 hosting. The most incredible part? This is on a shared domain name – not the client's domain. It's hosting on something like http://<yourSiteName>.sharepointsites.com – or something crazy like that. Sclera Hosting provides you on demand – SharePoint Foundation, SharePoint Server Standard/Enterprise – 2010 RTM bits – within minutes of your order – ON YOUR DOMAIN – and that is a major perk for me. You have complete SharePoint Designer 2010 integration; complete support for custom assemblies, web parts, you name it – this hosting provider gives you more bang for the buck than any provider on the Net today. Now – some teasers – I was in a meeting this week and I heard – SharePoint Foundation – 2010 RTM bits – unlimited users, 10 GB content database quota, full SharePoint Designer 2010 integration/support, all on the client's domain – sit down and soak this up – $175.00 per month – no kidding. Now, I do not know about you – but – I have not seen a deal like that EVER on the Net – so – get over to www.sclerahosting.com – or email the Sales Team at Sclera Design, Inc. today for more details. Have a great weekend!

    Read the article

  • The entity type String is not part of the model for the current context error [migrated]

    - by Michael V
    I am getting the following error in my controller after the view submits the collection:

        The entity type String is not part of the model for the current context.

        Description: An unhandled exception occurred during the execution of the current
        web request. Please review the stack trace for more information about the error
        and where it originated in the code.

        Exception Details: System.InvalidOperationException: The entity type String is
        not part of the model for the current context.

        Source Error:

        Line 51: foreach (var survey in mysurveys)
        Line 52: {
        Line 53:     db.Entry(survey).State = EntityState.Modified;
        Line 54:
        Line 55:     // db.Entry(survey).State = EntityState.Modified;

    Here is the code:

        [HttpPost]
        public ActionResult UpdateTest(FormCollection mysurveys)
        {
            System.Diagnostics.Debug.WriteLine("iam in test post" + mysurveys.Count);
            foreach (var survey in mysurveys)
            {
                db.Entry(survey).State = EntityState.Modified;
            }
            db.SaveChanges();
            return View(mysurveys);
        }

    Similar code with one record only (no foreach) works fine.
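    One clue worth noting: enumerating an MVC FormCollection yields its string keys, not model objects, which matches the error above. A hedged sketch of the likely intent, assuming a hypothetical Survey entity type and MVC's default model binding to a typed collection:

        // hypothetical rewrite: bind straight to the entity type instead of FormCollection
        [HttpPost]
        public ActionResult UpdateTest(List<Survey> mysurveys)  // Survey is the (assumed) entity class
        {
            foreach (var survey in mysurveys)
            {
                // each element is now a Survey entity, not a form-field key string
                db.Entry(survey).State = EntityState.Modified;
            }
            db.SaveChanges();
            return View(mysurveys);
        }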

    Read the article

  • Dual Boot issues with Windows 7 and Ubuntu

    - by Michael
    I'm finding myself in a rather unique situation. I've read through just about every resource I can find about this, and while things have helped me understand some background, I haven't yet been able to find a solution. So I'm asking here. I originally had just a Windows 7 64-bit OS installation on my desktop. Learning that I couldn't do anything with Apache, PHP and MySql from within a 64-bit system, I did some research and found out that I could use Ubuntu. I've installed the latest version: 11.04. I created a CD to install Ubuntu from and the install went just fine. I installed it side-by-side with Windows 7. I can boot into Ubuntu just fine through the dual-boot option. When I reboot to load Windows, though, the Grub2 list shows Windows 7 (loader), and when I select this option the Windows System Recovery loads instead of the actual OS. I haven't made it past there because I didn't know what to do. I just shut the computer down and rebooted into Ubuntu. I've been working for the last hour and a half to try to figure out how to boot into the Windows 7 OS and I haven't got a clue. While I'm somewhat proficient with Windows 7, I'm totally new to Ubuntu, so if you do know what needs to happen, please keep it simple enough that I'll be able to understand. Thanks for all your help in advance. Here are the results after running the Boot Info Script:

    Boot Info Script 0.55    dated February 15th, 2010

    ============================= Boot Info Summary: ==============================

    => Grub 2 is installed in the MBR of /dev/sda and looks on the same drive in partition #5 for cbh.
    => Windows is installed in the MBR of /dev/sdb
    => Grub 2 is installed in the MBR of /dev/mapper/pdc_bdadcfbdif and looks on the same drive in partition #5 for cbh.

    sda1: _________________________________________________________________________
        File system:      ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Mounting failed:  fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy

    sda2: _________________________________________________________________________
        File system:      ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Mounting failed:  fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy

    sda3: _________________________________________________________________________
        File system:      ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Mounting failed:  fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy

    sdb1: _________________________________________________________________________
        File system:      ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System:
        Boot files/dirs:  /bootmgr /Boot/BCD

    sdb2: _________________________________________________________________________
        File system:      ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System:
        Boot files/dirs:

    sdb3: _________________________________________________________________________
        File system:      ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System:
        Boot files/dirs:  /bootmgr /boot/BCD

    sdb4: _________________________________________________________________________
        File system:      Extended Partition
        Boot sector type: -
        Boot sector info:

    sdb5: _________________________________________________________________________
        File system:      ext4
        Boot sector type: -
        Boot sector info:
        Operating System: Ubuntu 11.04
        Boot files/dirs:  /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

    sdb6: _________________________________________________________________________
        File system:      swap
        Boot sector type: -
        Boot sector info:

    pdc_bdadcfbdif1: _______________________________________________________________
        File system:      ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System:
        Boot files/dirs:  /bootmgr /Boot/BCD

    pdc_bdadcfbdif2: _______________________________________________________________
        File system:      ntfs
        Boot sector type: Windows Vista/7
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System: Windows 7
        Boot files/dirs:  /bootmgr /Boot/BCD /Windows/System32/winload.exe

    pdc_bdadcfbdif3: _______________________________________________________________
        File system:
        Boot sector type: Unknown
        Boot sector info:
        Mounting failed:  fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          fuse: mount failed: Device or resource busy
                          mount: unknown filesystem type ''

    =========================== Drive/Partition Info: =============================

    Drive: sda _____________________________________________________________________
    Disk /dev/sda: 750.2 GB, 750156374016 bytes
    255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes

    Partition   Boot          Start            End           Size  Id  System
    /dev/sda1   *             2,048        206,847        204,800   7  HPFS/NTFS
    /dev/sda2               206,911  1,440,372,735  1,440,165,825   7  HPFS/NTFS
    /dev/sda3         1,440,372,736  1,464,856,575     24,483,840   7  HPFS/NTFS

    Drive: sdb _____________________________________________________________________
    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes

    Partition   Boot          Start            End           Size  Id  System
    /dev/sdb1   *             2,048        206,847        204,800   7  HPFS/NTFS
    /dev/sdb2               206,911  1,342,554,688  1,342,347,778   7  HPFS/NTFS
    /dev/sdb3         1,930,344,448  1,953,521,663     23,177,216   7  HPFS/NTFS
    /dev/sdb4         1,342,556,158  1,930,344,447    587,788,290   5  Extended
    /dev/sdb5         1,342,556,160  1,896,806,399    554,250,240  83  Linux
    /dev/sdb6         1,896,808,448  1,930,344,447     33,536,000  82  Linux swap / Solaris

    Drive: pdc_bdadcfbdif __________________________________________________________
    Disk /dev/mapper/pdc_bdadcfbdif: 750.0 GB, 749999947776 bytes
    255 heads, 63 sectors/track, 91182 cylinders, total 1464843648 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes

    Partition                     Boot          Start            End           Size  Id  System
    /dev/mapper/pdc_bdadcfbdif1   *             2,048        206,847        204,800   7  HPFS/NTFS
    /dev/mapper/pdc_bdadcfbdif2               206,911  1,440,372,735  1,440,165,825   7  HPFS/NTFS
    /dev/mapper/pdc_bdadcfbdif3         1,440,372,736  1,464,856,575     24,483,840   7  HPFS/NTFS

    /dev/mapper/pdc_bdadcfbdif3 ends after the last sector of /dev/mapper/pdc_bdadcfbdif

    blkid -c /dev/null: ____________________________________________________________

    Device                       UUID                                  TYPE  LABEL
    /dev/mapper/pdc_bdadcfbdif1  888E54CC8E54B482                      ntfs  SYSTEM
    /dev/mapper/pdc_bdadcfbdif2  C2766BF6766BEA1D                      ntfs  OS
    /dev/mapper/pdc_bdadcfbdif: PTTYPE="dos"
    /dev/sda1                    888E54CC8E54B482                      ntfs  SYSTEM
    /dev/sda2                    C2766BF6766BEA1D                      ntfs  OS
    /dev/sda3                    BE6CA31D6CA2CF87                      ntfs  HP_RECOVERY
    /dev/sda                     promise_fasttrack_raid_member
    /dev/sdb1                    20B65685B6565B7C                      ntfs  SYSTEM
    /dev/sdb2                    B4467A314679F508                      ntfs  HP
    /dev/sdb3                    6E10B7A410B77227                      ntfs  FACTORY_IMAGE
    /dev/sdb4: PTTYPE="dos"
    /dev/sdb5                    266f9801-cf4f-4acc-affa-2092be035f0c  ext4
    /dev/sdb6                    1df35749-a887-45ff-a3de-edd52239847d  swap
    /dev/sdb: PTTYPE="dos"

    error: /dev/mapper/pdc_bdadcfbdif3: No such file or directory
    error: /dev/sdc: No medium found
    error: /dev/sdd: No medium found
    error: /dev/sde: No medium found
    error: /dev/sdf: No medium found
    error: /dev/sdg: No medium found

    ============================ "mount | grep ^/dev" output: =====================

    Device      Mount_Point  Type  Options
    /dev/sdb5   /            ext4  (rw,errors=remount-ro,commit=0)

    =========================== sdb5/boot/grub/grub.cfg: ==========================

    #
    # DO NOT EDIT THIS FILE
    #
    # It is automatically generated by grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    #

    ### BEGIN /etc/grub.d/00_header ###
    if [ -s $prefix/grubenv ]; then
      set have_grubenv=true
      load_env
    fi
    set default="0"
    if [ "${prev_saved_entry}" ]; then
      set saved_entry="${prev_saved_entry}"
      save_env saved_entry
      set prev_saved_entry=
      save_env prev_saved_entry
      set boot_once=true
    fi
    function savedefault {
      if [ -z "${boot_once}" ]; then
        saved_entry="${chosen}"
        save_env saved_entry
      fi
    }
    function recordfail {
      set recordfail=1
      if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
    }
    function load_video {
      insmod vbe
      insmod vga
      insmod video_bochs
      insmod video_cirrus
    }
    insmod part_msdos
    insmod ext2
    set root='(/dev/sdb,msdos5)'
    search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
    if loadfont /usr/share/grub/unicode.pf2 ; then
      set gfxmode=auto
      load_video
      insmod gfxterm
    fi
    terminal_output gfxterm
    insmod part_msdos
    insmod ext2
    set root='(/dev/sdb,msdos5)'
    search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
    set locale_dir=($root)/boot/grub/locale
    set lang=en_US
    insmod gettext
    if [ "${recordfail}" = 1 ]; then
      set timeout=-1
    else
      set timeout=10
    fi
    ### END /etc/grub.d/00_header ###

    ### BEGIN /etc/grub.d/05_debian_theme ###
    set menu_color_normal=white/black
    set menu_color_highlight=black/light-gray
    if background_color 44,0,30; then
      clear
    fi
    ### END /etc/grub.d/05_debian_theme ###

    ### BEGIN /etc/grub.d/10_linux ###
    if [ ${recordfail} != 1 ]; then
      if [ -e ${prefix}/gfxblacklist.txt ]; then
        if hwmatch ${prefix}/gfxblacklist.txt 3; then
          if [ ${match} = 0 ]; then
            set linux_gfx_mode=keep
          else
            set linux_gfx_mode=text
          fi
        else
          set linux_gfx_mode=text
        fi
      else
        set linux_gfx_mode=keep
      fi
    else
      set linux_gfx_mode=text
    fi
    export linux_gfx_mode
    if [ "$linux_gfx_mode" != "text" ]; then load_video; fi
    menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os {
      recordfail
      set gfxpayload=$linux_gfx_mode
      insmod part_msdos
      insmod ext2
      set root='(/dev/sdb,msdos5)'
      search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
      linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc-affa-2092be035f0c ro quiet splash vt.handoff=7
      initrd /boot/initrd.img-2.6.38-8-generic-pae
    }
    menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
      recordfail
      set gfxpayload=$linux_gfx_mode
      insmod part_msdos
      insmod ext2
      set root='(/dev/sdb,msdos5)'
      search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
      echo 'Loading Linux 2.6.38-8-generic-pae ...'
      linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc-affa-2092be035f0c ro single
      echo 'Loading initial ramdisk ...'
      initrd /boot/initrd.img-2.6.38-8-generic-pae
    }
    ### END /etc/grub.d/10_linux ###

    ### BEGIN /etc/grub.d/20_linux_xen ###
    ### END /etc/grub.d/20_linux_xen ###

    ### BEGIN /etc/grub.d/20_memtest86+ ###
    menuentry "Memory test (memtest86+)" {
      insmod part_msdos
      insmod ext2
      set root='(/dev/sdb,msdos5)'
      search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
      linux16 /boot/memtest86+.bin
    }
    menuentry "Memory test (memtest86+, serial console 115200)" {
      insmod part_msdos
      insmod ext2
      set root='(/dev/sdb,msdos5)'
      search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
      linux16 /boot/memtest86+.bin console=ttyS0,115200n8
    }
    ### END /etc/grub.d/20_memtest86+ ###

    ### BEGIN /etc/grub.d/30_os-prober ###
    menuentry "Windows 7 (loader) (on /dev/sdb1)" --class windows --class os {
      insmod part_msdos
      insmod ntfs
      set root='(/dev/sdb,msdos1)'
      search --no-floppy --fs-uuid --set=root 20B65685B6565B7C
      chainloader +1
    }
    menuentry "Windows Recovery Environment (loader) (on /dev/sdb3)" --class windows --class os {
      insmod part_msdos
      insmod ntfs
      set root='(/dev/sdb,msdos3)'
      search --no-floppy --fs-uuid --set=root 6E10B7A410B77227
      drivemap -s (hd0) ${root}
      chainloader +1
    }
    ### END /etc/grub.d/30_os-prober ###

    ### BEGIN /etc/grub.d/40_custom ###
    # This file provides an easy way to add custom menu entries. Simply type the
    # menu entries you want to add after this comment. Be careful not to change
    # the 'exec tail' line above.
    ### END /etc/grub.d/40_custom ###

    ### BEGIN /etc/grub.d/41_custom ###
    if [ -f $prefix/custom.cfg ]; then
      source $prefix/custom.cfg;
    fi
    ### END /etc/grub.d/41_custom ###

    =============================== sdb5/etc/fstab: ===============================

    # /etc/fstab: static file system information.
    #
    # Use 'blkid -o value -s UUID' to print the universally unique identifier
    # for a device; this may be used with UUID= as a more robust way to name
    # devices that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/sdb5 during installation
    UUID=266f9801-cf4f-4acc-affa-2092be035f0c / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdb6 during installation
    UUID=1df35749-a887-45ff-a3de-edd52239847d none swap sw 0 0

    =================== sdb5: Location of files loaded by Grub: ===================

    900.1GB: boot/grub/core.img
    825.0GB: boot/grub/grub.cfg
    688.7GB: boot/initrd.img-2.6.38-8-generic-pae
    688.0GB: boot/vmlinuz-2.6.38-8-generic-pae
    688.7GB: initrd.img
    688.0GB: vmlinuz

    =========================== Unknown MBRs/Boot Sectors/etc =====================

    Unknown BootLoader on pdc_bdadcfbdif3

    =======Devices which don't seem to have a corresponding hard drive==============

    sdc sdd sde sdf sdg

    =============================== StdErr Messages: ===============================

    ERROR: dos: partition address past end of RAID device
    hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory
    hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory
    ERROR: dos: partition address past end of RAID device

    Read the article

  • T-SQL Tuesday #005: Creating SSMS Custom Reports

    - by Mike C
    This is my contribution to the T-SQL Tuesday blog party, started by Adam Machanic and hosted this month by Aaron Nelson. Aaron announced this month's topic is "reporting," so I figured I'd throw a blog up on a reporting topic I've been interested in for a while -- namely creating custom reports in SSMS. Creating SSMS custom reports isn't difficult, but like most technical work it's very detailed, with a lot of little steps involved. So this post is a little longer than usual and includes a lot of...(read more)

    Read the article

  • Access Insurance Company Wins 2010 Technology Innovation Award at IASA

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, is blogging from the 2010 IASA Annual Conference and Business Show this week. For the second time in two weeks, an Oracle Insurance customer has earned recognition at an insurance industry event for its innovative use of technology to transform its business. Access Insurance Company received the 2010 Technology Innovation Award during the 2010 IASA Annual Conference and Business Show this week in Grapevine, Texas. The company earned the recognition for its "Instant Access" application, which executes all the business rules and processes needed to provide a quote, bind, and issue a policy. CIO Andy Dunn and Tim Reynolds stopped by the Oracle Insurance booth at IASA to visit with the team, show their award, and share how the platform has provided a strategic advantage to the company and helped it increase revenue by penetrating new markets, increasing market share and improving customer retention. Since implementing Instant Access in 2009 - a platform that leverages both Oracle Insurance Insbridge Rating and Underwriting and Oracle Documaker - the carrier has:

    - Increased policies in force by 22%, from 140,185 to more than 270,000
    - Grown market share by 4.6%
    - Increased 2009 revenue by 26.5%
    - Increased its ratio of policyholders per CSR by 30%
    - Increased its appointed independent producers by 43%

    Now that's true innovation! You can learn more about the company's formula for success by reading Access Insurance Holdings CEO and president Michael McMenamin's interview with Insurance & Technology, Data Mastery Drives Access Insurance's 'Instant Access' Business Technology Platform. Congratulations to Michael, Andy, Tim and the entire team at Access Insurance on this well-deserved honor - and for your role as a technology leader for the industry. Helen Pitts is senior product marketing manager for Oracle Insurance.

    Read the article

  • Replacing “if”s with your own number system

    - by Michael Williamson
    During our second code retreat at Red Gate, the restriction for one of the sessions was disallowing the use of if statements. That includes other constructs that have the same effect, such as switch statements or loops that will only be executed zero or one times. The idea is to encourage use of polymorphism instead, and see just how far it can be used to get rid of "if"s.

    The main place where people struggled to get rid of numbers from their implementation of Conway's Game of Life was the piece of code that decides whether a cell is live or dead in the next generation. For instance, for a cell that's currently live, the code might look something like this:

        if (numberOfNeighbours == 2 || numberOfNeighbours == 3)
        {
            return CellState.LIVE;
        }
        else
        {
            return CellState.DEAD;
        }

    The problem is that we need to change behaviour depending on the number of neighbours each cell has, but polymorphism only allows us to switch behaviour based on the type of a value. It follows that the solution is to make different numbers have different types:

        public interface IConwayNumber
        {
            IConwayNumber Increment();
            CellState LiveCellNextGeneration();
        }

        public class Zero : IConwayNumber
        {
            public IConwayNumber Increment() { return new One(); }
            public CellState LiveCellNextGeneration() { return CellState.DEAD; }
        }

        public class One : IConwayNumber
        {
            public IConwayNumber Increment() { return new Two(); }
            public CellState LiveCellNextGeneration() { return CellState.DEAD; }
        }

        public class Two : IConwayNumber
        {
            public IConwayNumber Increment() { return new Three(); }
            public CellState LiveCellNextGeneration() { return CellState.LIVE; }
        }

        public class Three : IConwayNumber
        {
            public IConwayNumber Increment() { return new FourOrMore(); }
            public CellState LiveCellNextGeneration() { return CellState.LIVE; }
        }

        public class FourOrMore : IConwayNumber
        {
            public IConwayNumber Increment() { return this; }
            public CellState LiveCellNextGeneration() { return CellState.DEAD; }
        }

    In the code that counts the number of neighbours, we use our new number system by starting with Zero and incrementing when we find a neighbour. To choose the next state of the cell, rather than inspecting the number of neighbours, we ask the number of neighbours for the next state directly:

        return numberOfNeighbours.LiveCellNextGeneration();

    And now we have no "if"s! If C# had double-dispatch, or if we used the visitor pattern, we could move the logic for choosing the next cell out of the number classes, which might feel a bit more natural. I suspect that reimplementing the natural numbers is still going to feel about the same amount of crazy though.
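    As a quick illustration of how the counting side fits together, here is a minimal sketch, assuming a Cell type and a pre-filtered sequence of live neighbours from the surrounding Game of Life code:

        using System.Collections.Generic;

        public static class LiveCellRules
        {
            // count the live neighbours with the type-based number system:
            // start at Zero and Increment once per live neighbour, then ask
            // the resulting "number" for the next state -- no "if"s required
            public static CellState NextState(IEnumerable<Cell> liveNeighbours)
            {
                IConwayNumber count = new Zero();
                foreach (var neighbour in liveNeighbours)
                {
                    count = count.Increment();
                }
                return count.LiveCellNextGeneration();
            }
        }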

    Read the article

  • AppFabric Caching Feedback

    - by Michael Stephenson
    We are running a survey to collect feedback around scenarios where people were experimenting with Velocity on Windows Server 2003 in the CTP but are now in a position where the beta requires Windows Server 2008. We would like to understand how important this scenario is perceived to be. If you are in the Connected Systems community and would like to provide feedback, please complete the following survey: http://www.surveymonkey.com/s/N3VKZWN

    Read the article

  • Who IS Brian Solis?

    - by Michael Snow
    Q: Brian, welcome to the WebCenter blog. Can you tell our readers your current role and what career path brought you here?

    A: I'm proudly serving as a principal analyst at Altimeter Group, a research-based advisory firm in Silicon Valley. My career path, well, let's just say it's a long and winding road. As a kid, I was fascinated with technology. I learned programming at an early age and found myself naturally drawn to all things tech. I started my career as a database programmer at a technology marketing agency in Southern California. When I saw the chance to work with tech companies and help them better market their capabilities to businesses and consumers, I switched focus from programming to marketing and advertising. As a technologist, my approach to marketing was different. I didn't believe in hype, fluff or buzzwords. I believed in translating features into benefits, and specifications and capabilities into solutions for real-world problems and opportunities. In the mid '90s I experimented with direct-to-consumer/customer engagement in dedicated technology forums and boards. I quickly realized that the entire approach would need to change. Therefore, I learned and developed new methods for a more social and informed way of engaging people in ways that helped them, marketed the company, and also tied to tangible benefits for the company. This work would lead me to start an agency in 1999 dedicated to interactive marketing. As I continued to experiment with interactive platforms, I developed interesting methods for converting one-to-many forms of media into one-to-one-to-many programs. I ran that company until joining Altimeter Group. Along the way, in the early 2000s, I realized that everything was changing and that there were others like me finding success in what would become a more social form of media. I dedicated a significant amount of my time to sharing everything that I learned in the form of articles, blogs, and eventually books. My mission became to share my experience with anyone who'd listen. It would later become much bigger than marketing; this would lead to a decade of work, which still continues, in business transformation. Then and now, I find myself always assuming the role of a student.

    Q: As an industry analyst and technology change evangelist, what are you primarily focused on these days?

    A: As a digital analyst, I study how disruptive technology impacts business. As an aspiring social scientist, I study how technology affects human behavior. I explore both horizons professionally and personally to better understand the future of popular culture and also the opportunities that exist for organizations to improve relationships and experiences with customers and the people that are important to them.

    Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work?

    A: The line between work and life isn't blurred; it's been overtly crossed and erased. We live in an always-on society. The digital lifestyle keeps us connected to one another all the time. Whether you're sending or checking email, trying to catch up, or simply trying to get ahead, people are spending the equivalent of an extra day at work in the time they spend out of work...working. That's absurd. It's a matter of survival. It's also a matter of unintended, subconscious self-causation. We brought this on ourselves and continue to do so. Think about your day.
    You're in meetings for the better part of each day. You probably spend evenings and weekends catching up on email and actually doing the work you couldn't get to during the day. And your co-workers and executives are doing the same thing. So if you try to slow down, you find yourself at a disadvantage, as you're willfully pulling yourself out of an unfortunate culture of whenever-wherever business dynamics. If you're unresponsive or unreachable, someone within your organization or on your team is accessible. Over time, this could contribute to unfavorable impressions. I choose to steer my life balance in ways that complement one another. But I don't pretend to have this figured out by any means. In fact, I find myself swimming upstream like those around me. It's essentially a competition for relevance, and at some point I'll learn how to earn attention and relevance while redrawing the line between work and life.

    Q: How can people keep up with what you're working on?

    A: The easy answer is that people can keep up with me at briansolis.com. But I also try to reach people where their attention is focused. Whether it's Facebook (facebook.com/briansolis), Twitter (@briansolis), Google+ (+briansolis), YouTube (briansolis.tv) or through books and conferences, people can usually find me in a place of their choosing.

    Q: Recently, you've been working with us here at Oracle on something exciting coming up later this week. What's on the horizon?

    A: I spent some time with the Oracle team reviewing the idea of Digital Darwinism and how technology and society are evolving faster than many organizations can adapt.

    Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology
    Thursday, December 13, 2012, 10 a.m. PT / 1 p.m. ET

    Q: You've been very actively pursued for media interviews and conference and company speaking engagements - anything you'd like to share to give us a sneak peek of what to expect on Thursday's webcast?

    A: We're inviting guests to join us online as we dive into the future of business and how the convergence of technology and connected consumerism will ultimately impact how business is done. It'll be an exciting and revealing conversation that explores just how much everything is changing. We'll also review the importance of adapting to emergent trends and how to compete for the future. It's important to recognize that change is not happening to us; it's happening because of us. We are part of the revolution, and therefore we need to help organizations adapt from the inside out.

    Watch the entire Oracle Social Business Thought Leaders Webcast Series on-demand, and stay tuned for more to come in 2013!

    Read the article

  • SEO question, getting info about websites

    - by michael
    I don't know much about SEO. I have a CSV file with 200,000 links to websites. I want to add another field (or maybe a couple of them) to each link in the CSV file with the page ranking of each link, and maybe other interesting metrics and info about the link. I saw a free API from http://apiwiki.seomoz.org/ that I can maybe use to build a simple script, but it's limited to 3 links per second, which will take roughly 1,100 minutes or 18 hours to run. Any other ideas how to get this kind of simple metrics about each link? Thanks!
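    For what it's worth, the throttled fetch loop itself is straightforward. Here is a minimal sketch; the API endpoint, response format and file names are hypothetical placeholders, and the real SEOmoz API has its own URL format and authentication that would need to be substituted in:

        using System;
        using System.IO;
        using System.Net;
        using System.Threading;

        class MetricsFetcher
        {
            static void Main()
            {
                // hypothetical endpoint; substitute the real API's URL format and auth
                const string apiTemplate = "http://api.example.com/metrics?url={0}";

                using (var reader = new StreamReader("links.csv"))
                using (var writer = new StreamWriter("links-with-metrics.csv"))
                {
                    string link;
                    while ((link = reader.ReadLine()) != null)
                    {
                        using (var client = new WebClient())
                        {
                            // fetch whatever metrics the API returns for this link
                            string metrics = client.DownloadString(
                                string.Format(apiTemplate, Uri.EscapeDataString(link)));
                            writer.WriteLine("{0},{1}", link, metrics);
                        }
                        Thread.Sleep(334); // stay under the 3-requests-per-second limit
                    }
                }
            }
        }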

    Read the article

  • Parsing Extended Events xml_deadlock_report

    - by Michael Zilberstein
    Jonathan Kehayias and Paul Randal posted great articles more than a year ago on how to monitor historical deadlocks using the Extended Events system_health default session. Both tried to fix, on the fly, the bug in the XML output that caused failures in XML validation. Today I found out that their version isn't bulletproof either. So here is the fixed one:

        SELECT CAST(xest.target_data AS XML) xml_data, *
        INTO #ring_buffer_data
        FROM sys.dm_xe_session_targets xest
        INNER...(read more)

    Read the article

  • Cut Caseload Costs, Speed Service Delivery For Social Services

    - by michael.seback
    Lower Caseload Costs, Speedier Service Delivery with New Oracle Social Services Solution Oracle has just introduced a new solution for social services agencies that's designed to help case workers address the challenges of rising workloads and growing demands by citizens for additional services. In the past, IT departments developed custom software in an effort to meet program outcomes. "Because this capability is out of the box with the Oracle solution, there's less complexity for organizations and an overall lower total cost of ownership," says Kimberly Ellison-Taylor, Oracle's executive director of health and human services. "Self service brings costs down to just pennies per interaction and makes it possible for clients to receive government services more quickly," Ellison-Taylor says. read more

    Read the article

  • Using Web Services from an XNA 4.0 WP7 Game

    - by Michael Cummings
    Now that the Windows Phone 7 development tools have been out for a while, let's talk about how you can use them. Windows Phone 7 (WP7) has two application types that you can create, either Silverlight or XNA, and you can't really mix the two together. The development environment for WP7 is a special edition of Visual Studio 2010 called Visual Studio 2010 Express for Windows Phone. This edition will be installed with the WP7 tools, even if you have a full edition of VS2010 already installed. While you can use your full edition of VS2010 to do WP7 development, this astute developer has noticed that there are a few things that you can only do in the Express for Windows Phone edition. So let's start by discussing WP7 networking. On the WP7 platform the only networking available is through web services using WCF or, if you're really masochistic, using the WebClient to do HTTP. In Silverlight, it's fairly easy to wire up a WCF proxy to call a web service and get some data. In the XNA projects, not so much.

    Create the WCF Service

    First, we'll create our service that will return some information that we need in our game. Open Visual Studio 2010 and create a new WCF Web Service project. We'll use the default implementation, as we only need to see how to use a service; we are not interested in creating a really cool service at this point. However, you may want to follow the instructions in the comments of Service1.svc.cs to change the name to something better; I used DataService and IDataService for the interface. You should now be able to run the project, and the WCF Test Client will load and properly enumerate your service. At this point we have a functional service that can be consumed by our XNA game.

    Consume the WCF Service

    Open Visual Studio 2010 Express for Windows Phone and create a new XNA Game Studio 4.0 Windows Phone Game project. Now if you try to add a service reference to the project, you'll notice that the option is not available. However, if you add a Silverlight application to your solution, you'll notice that you can create a service reference there. So using the Silverlight project, we can create the service reference. Unfortunately you can't reference the Silverlight project from the XNA Game project, so using Windows Explorer copy the Service References folder from the Silverlight project directory to the XNA Game project directory, then add the folder to your XNA Game project. You'll need to set the Build Action property to None for all the files, except for Reference.cs, which should be Compile. Truly, we only need Reference.cs, but I find it easier to copy the whole folder. If you try to compile at this point, you'll notice that we are missing a few references: System.Runtime.Serialization, System.Net and System.ServiceModel. Add these to the XNA Game project and you should build successfully. You'll also need to copy the ServiceReferences.ClientConfig file and add it to your project. The WCF infrastructure looks for this file and will complain if it can't find it. You'll need to set its Copy to Output Directory property to Copy if Newer. We now need to add the code to call the service and display the results on the screen. Go ahead and add a SpriteFont resource to the Content project and load it in the Game project. There's nothing here that's changed much from 3.1, other than your Content project is now under the Solution node and not the Project node. While you're at it, add a string field to store the result of the service call, and initialize it to string.Empty.
    Then in the Draw method, write the string out to the screen, but only if it does not equal string.Empty. Now to wrap this up, let's create a new field that's of the type DataServiceClient. In the Initialize method, create a new instance of this type using its default constructor; then in LoadContent we can call the service. Since we can only call the GetData method of our service asynchronously, we need to set up a Completed event handler first. Thankfully, Visual Studio helps out a lot there: just create, using the Tab key, whatever VS says to. In the GetDataCompleted event handler, assign the service result (e.Result) to your string field. If you run your game, you should get something like this. Enjoy!
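    Pulled together, the pattern the article walks through looks roughly like the sketch below. It assumes the DataService created earlier; the GetDataAsync method and GetDataCompleted event follow the usual generated-proxy naming convention, and the "GameFont" asset name is a placeholder for whatever you called your SpriteFont.

        using System;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            SpriteFont font;                      // SpriteFont added to the Content project
            DataServiceClient client;             // the copied WCF proxy
            string serviceResult = string.Empty;  // filled in by the Completed handler

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void Initialize()
            {
                client = new DataServiceClient();
                client.GetDataCompleted += (s, e) => serviceResult = e.Result;
                base.Initialize();
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                font = Content.Load<SpriteFont>("GameFont");
                client.GetDataAsync(42); // async call; the result arrives in the handler
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                if (serviceResult != string.Empty)
                {
                    spriteBatch.Begin();
                    spriteBatch.DrawString(font, serviceResult, Vector2.Zero, Color.White);
                    spriteBatch.End();
                }
                base.Draw(gameTime);
            }
        }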

    Read the article

  • Guest Post: Christian Finn: Is Facebook About to Become a Victim of its Own Success?

    - by Michael Snow
    Since we have a number of new members of the WebCenter Evangelist team, I thought it would be appropriate to close the week with the newest hire and leader of the global WebCenter Evangelists, Christian Finn, who has just joined the Red team after many years with the small technology company up in Redmond, WA. He gave an intro to himself in an earlier post this morning, but his post below is a great example of how customer engagement takes on a life of its own in our global, online, connected and social digital ecosystem.

    Is Facebook About to Become a Victim of its Own Success?

    What if I told you that your brand could advertise so successfully, you wouldn't have to pay for the ads? A recent campaign by Ford Motor Company for the Ford Focus featuring Doug the spokespuppet (I am not making this up) did just that—and it raises some interesting issues for marketers and social media alike in the brave new world of customer engagement that is the Social Web. Allow me to elaborate.

    An article in the Wall Street Journal last week—"Big Brands Like Facebook, But They Don't Like to Pay"—tells the story of Ford's recently concluded online campaign for the 2012 Ford Focus. (Ford, by the way, under the leadership of people such as Scott Monty, has been a pioneer of effective social campaigns.) The centerpiece of the campaign was the aforementioned Doug, who appeared as a character on Facebook in videos and via chat. (If you are not familiar with Doug, you can see him in action here, and read the WSJ story here.) You may be thinking puppet ads are a sign of Internet Bubble 2.0 and want to stop now, but bear with me.

    The Journal reported that Ford spent about $95M on its overall Ford Focus campaign, with TV accounting for over $60M of that spend. The Internet buy for the campaign was just over $10M, which included ad buys to drive traffic to Facebook for people to meet and 'Like' Doug, and some amount on Facebook ads, too, to promote Doug and, by extension, the Ford Focus. So far, a fairly straightforward consumer marketing story in the Internet Era. Yet here's the curious thing: once Doug reached 10,000 fans on Facebook, Ford stopped paying for Facebook ads. Doug had gone viral, with people sharing his videos with one another; once critical mass was reached there was no need to buy more ads on Facebook. Doug went on to be Liked by over 43,000 people, and 61% of his fans said they would be more likely to consider buying a Focus. According to the article, Ford says Focus sales are up this year—and increasing sales is every marketer's goal. And so in effect, Ford found its Facebook campaign so successful that it could stop paying for it, instead letting its target consumers communicate its messages for fun—and for free. Not only did they get a 3X increase in fans beyond their paid campaign, they had thousands of customers sharing their messages in video form for months.
    Since free advertising is the Holy Grail of marketing, both old and new—and it appears social networks have an advantage in generating that buzz—it seems reasonable to ask: what would happen to brands' advertising strategies, and the media they use to engage customers, if this success were repeated at scale?

    It seems logical to conclude that, at least initially, more ad dollars would be spent with social networks like Facebook as brands attempt to replicate Ford's success. Certainly Facebook ad revenues are on the rise—eMarketer expects Facebook's ad revenues to quintuple by 2012 compared with 2009 levels, to nearly $2.9B. That's bad news for TV and the already battered print media, and good news for Facebook. But perhaps not so over the longer run. With TV buys, you have to keep paying to generate impressions. If Doug the spokespuppet is any guide, however, that may not be true for social media campaigns. After an initial outlay, if a social campaign takes off, the audience will generate more impressions on its own. Thus a social medium like Facebook could be the victim of its own success when it comes to ad revenue. It may be there is an inherent limiting factor in the ad spend it can capture, as exemplified by Ford's experience with Doug and the Focus. And brands may spend much less overall on advertising, with as good or better results, than they ever have in the past.

    How will these trends evolve? Can brands create social campaigns that repeat Ford's formula for the Focus with effective results? Can social networks find ways to capture more spend and overcome their potential tendency to make further spend unnecessary? And will consumers become tired of and insulated from social campaigns, much as they have with traditional advertising channels? These are the questions CMOs and Facebook execs alike will be asking themselves in the brave new world of customer engagement. As always, your thoughts and comments are most welcome.

    Read the article

  • The Evolution of an Era: Customer Experience in Retail

    - by Michael Hylton
    Two New Studies Point to the Direction Retailers are Taking in their CX Initiatives. Is it the Right Direction? The sheer velocity of change in retailing and customer behavior is forcing retailers to reinvigorate, expand and sharpen their vital Customer Experience (CX) strategies. Customers are becoming increasingly dynamic as they race to embrace the newest digital channels: shopping in new ways on mobile devices, including smartphones and tablets, on the Web and in the store; sharing experiences socially; and interacting with their preferred brands in new ways. Retailers are stepping up to meet their customers as they and their competitors create new modes of customer interaction. Underpinning these changes are the vast quantities of customer data generated as customers flood digital channels and the social sphere. The informed retailer must now understand what its priorities are and what they should be for the future. To better understand this, Tata Consultancy Services (TCS) and Oracle independently launched CX-focused surveys to uncover what retailing leadership finds important today. By comparing the results of these two studies, we can discover new insights about the industry. Click here to download this informative white paper.

    Read the article

  • Oracle Delivers Oracle Social Services Suite

    - by michael.seback
    Oracle Delivers Oracle Social Services Suite with New Releases of Siebel CRM Public Sector 8.2 and Oracle Policy Automation 10 Continuing its leadership and commitment to provide key innovations specifically created for social services agencies, Oracle today released the new Oracle Social Services Suite that includes updated versions of Oracle's Siebel CRM Public Sector 8.2 and Oracle Policy Automation 10. "Oracle's commitment to our social services customers is indisputable with the introduction of Oracle Social Services Suite and the latest innovations from Oracle's Siebel CRM Public Sector 8.2 and Oracle Policy Automation 10," said Anthony Lye, Senior Vice President of CRM, Oracle. "Social service agencies have not only many of the most complex jobs to perform with limited time and funding, but also some of the most important for our society, especially when children are involved. The technology advances Oracle provides will help these agencies increase their own efficiency and save costs, while helping to improve the outcome for their clients." read more

    Read the article

  • Silverlight Cream for May 19, 2010 -- #865

    - by Dave Campbell
    In this Issue: Michael Washington, Mike Snow(-2-), Justin Angel(-2-), Jeremy Likness, and David Kelley.

Shoutout: Erik Mork and crew have their latest up: Silverlight Week – Silverlight Android?

From SilverlightCream.com:

Simple Silverlight 4 Example Using oData and RX Extensions
Michael Washington has a follow-on tutorial up on ViewModel, Rx, and lashing up to OData... good detailed tutorial with external links for more information.

Silverlight Tip of the Day #21 – Animation Easing
Mike Snow has a couple of new tips up -- this first one is about easing... great diagrams to help visualize and a cool demo application to boot.

Silverlight Tip of the Day #22 – Data Validation
Mike Snow's second tip (#22) is about validation and again he has a great demo app on the post.

Windows Phone 7 - Emulator Automation
Justin Angel has a WP7 post up about automating the emulator... and in the process, loading the emulator from something other than VS2010... lots of good information.

TFS2010 WP7 Continuous Integration
Justin Angel hinted at continuous integration for WP7 in the last post, and he pays off with this one... even without all the bits installed on the build server.

Making the ScrollViewer Talk in Silverlight 4
Jeremy Likness tried to respond to a user query about knowing when a user scrolled to the bottom of a ScrollViewer... Jeremy resolved it by listening to the right property.

MEF (Managed Extensibility Framework) made simple (ish)
David Kelley is discussing MEF and using a real-world example while doing so. Good discussion and code available in his code browser app... check the comments.

Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Red Gate Coder interviews: Robin Hellen

    - by Michael Williamson
    Robin Hellen is a test engineer here at Red Gate, and is also the latest coder I've interviewed. We chatted about debugging code, the roles of software engineers and testers, and why Vala is currently his favourite programming language.

How did you get started with programming?
It started when I was about six. My dad's a professional programmer, and he gave me and my sister one of his old computers and taught us a bit about programming. It was an old Amiga 500 with a variant of BASIC. I don't think I ever successfully completed anything! It was just faffing around. I didn't really get anywhere with it.

But then presumably you did get somewhere with it at some point.
At some point. The PC emerged as the dominant platform, and I learnt a bit of Visual Basic. I didn't really do much, just a couple of quick hacky things. A bit of demo animation. Took me a long time to get anywhere with programming, really.

When did you feel like you did start to get somewhere?
I think it was when I started doing things for someone else, which was my sister's final year of university project. She called up my dad two days before she was due to submit, saying "We need something to display a graph!". Dad says, "I'm too busy, go talk to your brother". So I hacked up this ugly piece of code, sent it off and they won a prize for that project. Apparently, the graph, the bit that I wrote, was the reason they won a prize! That was when I first felt that I'd actually done something that was worthwhile. That was my first real bit of code, and the ugliest code I've ever written. It's basically an array of pre-drawn line elements that I shifted round the screen to draw a very spiky graph.

When did you decide that programming might actually be something that you wanted to do as a career?
It's not really a decision I took, I always wanted to do something with computers. And I had to take a gap year for uni, so I was looking for twelve month internships. I applied to Red Gate, and they gave me a job as a tester. And that's where I really started having to write code well. To a better standard than I had been up to that point.

How did you find coming to Red Gate and working with other coders?
I thought it was really nice. I learnt so much just from other people around. I think one of the things that's really great is that people are just willing to help you learn. Instead of "Don't you know that, you're so stupid", it's "You can just do it this way".

If you could go back to the very start of that internship, is there something that you would tell yourself?
Write shorter code. I have a tendency to write massive, many-thousand line files that I break out of right at the end. And then half-way through a project I'm doing something, I think "Where did I write that bit that does that thing?", and it's almost impossible to find. I wrote some horrendous code when I started. Just that principle, just keep things short. Even if it looks a bit crazy to be jumping around all over the place all of the time, it's actually a lot more understandable.

And how do you hold yourself to that?
Generally, if a function's going off my screen, it's probably too long. That's what I tell myself, and within the team here we have code reviews, so the guys I'm with at the moment are pretty good at pulling me up on, "Doesn't that look like it's getting a bit long?". It's more just the subjective standard of readability than anything.

So you're an advocate of code review?
Yes, definitely. Both to spot errors that you might have made, and to improve your knowledge. 
The person you're reviewing will say "Oh, you could have done it that way". That's how we learn, by talking to others, and also just sharing knowledge of how your project works around the team, or even outside the team. Definitely a very firm advocate of code reviews.

Do you think there's more we could do with them?
I don't know. We're struggling with how to add them as part of the process without it becoming too cumbersome. We've experimented with a few different ways, and we've not found anything that just works.

To get more into the nitty gritty: how do you like to debug code?
The first thing is to do it in my head. I'll actually think what piece of code is likely to have caused that error, and take a quick look at it, just to see if there's anything glaringly obvious there. The next thing I'll probably do is throw in print statements, or throw some exceptions from various points, just to check: is it going through the code path I expect it to? A last resort is to actually debug code using a debugger.

Why is the debugger the last resort?
Probably because of the environments I learnt programming in. VB and early BASIC didn't have much of a debugger, the only way to find out what your program was doing was to add print statements. Also, because a lot of the stuff I tend to work with is non-interactive, if it's something that takes a long time to run, I can throw in the print statements, set a run off, go and do something else, and look at it again later, rather than trying to remember what happened at that point when I was debugging through it. So it also gives me the record of what happens. I hate just sitting there pressing F5, F5, continually. If you're having to find out what your code is doing at each line, you've probably got a very wrong mental model of what your code's doing, and you can find that out just as easily by inspecting a couple of values through the print statements.

If I were on some codebase that you were also working on, what should I do to make it as easy as possible to understand?
I'd say short and well-named methods. The one thing I like to do when I'm looking at code is to find out where a value comes from, and the more layers of indirection there are, particularly DI [dependency injection] frameworks, the harder it is to find out where something's come from. I really hate that. I want to know if the value comes from the user here or is a constant here, and if I can't find that out, that makes code very hard to understand for me.

As a tester, where do you think the split should lie between software engineers and testers?
I think the split is less on areas of the code you write and more what you're designing and creating. The developers put a structure on the code, while my major role is to say which tests we should have, whether we should test that, or it's not worth testing that because it's a tiny function in code that nobody's ever actually going to see. So it's not a split in the code, it's a split in what you're thinking about. Saying what code we should write, but alternatively what code we should take out.

In your experience, do the software engineers tend to do much testing themselves?
They tend to control the lowest layer of tests. And, depending on how the balance of people is in the team, they might write some of the higher levels of test. Or that might go to the testers. 
I'm the only tester on my team with three other developers, so they'll be writing quite a lot of the actual test code, with input from me as to whether we should test that functionality, whereas on other teams, where it's been more equal numbers, the testers have written pretty much all of the high level tests, just because that's the best use of resource.

If you could shuffle resources around however you liked, do you think that the developers should be writing those high-level tests?
I think they should be writing them occasionally. It helps when they have an understanding of how testing code works and possibly what assumptions we've made in tests, and they can say "actually, it doesn't work like that under the hood so you've missed this whole area". It's one of those agile things that everyone on the team should be at least comfortable doing the various jobs. So if the developers can write test code then I think that's a very good thing.

So you think testers should be able to write production code?
Yes, although given most testers' skills at coding, I wouldn't advise it too much! I have written a few things, and I did make a few changes that have actually gone into our production code base. They're not necessarily running every time but they are there. I think having that mix of skill sets is really useful. In some ways we're using our own product to test itself, so being able to make those changes where it's not working saves me a round-trip through the developers. It can be really annoying if the developers have no time to make a change, and I can't touch the code.

If the software engineers are consistently writing tests at all levels, what do you think the role of a tester is?
I think on a team like that, those distinctions aren't quite so useful. There'll be two cases. There's either the case where the developers think they've written good tests, but you still need someone with a test engineer mind-set to go through the tests and validate that it's a useful set, or the correct set for that code. Or they won't actually be pure developers, they'll have that mix of test ability in there.

I think having slightly more distinct roles is useful. When it starts to blur, then you lose that view of the tests as a whole. The tester's job is not to create tests, it's to validate the quality of the product, and you don't do that just by writing tests. There are more things you've got to keep in your mind. And I think when you blur the roles, you start to lose that side of the tester's role.

So because you're working on those features, you lose that holistic view of the whole system?
Yeah, and anyone who's worked on the feature shouldn't be testing it. You always need to have it tested by someone who didn't write it. Otherwise you're a bit too close and you assume "yes, people will only use it that way", but the tester will come along and go "how do people use this? How would our most idiotic user use this?". I might not test that because it might be completely irrelevant. But it's coming in and trying to have a different set of assumptions.

Are you a believer that it should all be automated if possible?
Not entirely. So an automated test is always better than a manual test for the long-term, but there's still nothing that beats a human sitting in front of the application and thinking "What could I do at this point?". Automated tests are very good, but they follow that strict path, and they never check anything off the path. 
The human tester will look at things that they weren't expecting, whereas the automated test can only ever go "Is that value correct?" in many respects, and it won't notice that on the other side of the screen you're showing something completely wrong. And that value might have been checked independently, but you always find a few odd interactions when you're going through something manually, and you always need to go through something manually to start with anyway, otherwise you won't know where the important bits to write your automation are.

When you're doing that manual testing, do you think it's important to do that across the entire product, or just the bits that you've touched recently?
I think it's important to do it mostly on the bits you've touched, but you can't ignore the rest of the product. Unless you're dealing with a very, very self-contained bit, you'll almost always encounter other bits of the product along the way. Most testers I know, even if they are looking at just one path, they'll keep their eyes open and move around a bit anyway, just because they want to find something that's broken. If we find that your path is right, we'll go out and hunt something else.

How do you think this fits into the idea of continuously deploying, so long as the tests pass?
With deploying a website it's a bit different because you can always pull it back. If you're deploying an application to customers, when you've released it, it's out there, you can't pull it back. Someone's going to keep it, no matter how hard you try there will be a few installations that stay around. So I'd always have at least a human element on that path. With websites, you could probably automate straight out, or at least straight out to an internal environment or a single server in a cloud of fifty that will serve some people. But I don't think you should release to everyone just on automated tests passing.

You've already mentioned using BASIC and C# — are there any other languages that you've used?
I've used a few. That's something that has changed more recently, I've become familiar with more languages. Before I started at Red Gate I learnt a bit of C. Then last year, I taught myself Python which I actually really enjoyed using. I've also come across another language called Vala, which is sort of a C#-like language. It's basically a pre-processor for C, but it has very nice syntax. I think that's currently my favourite language.

Any particular reason for trying Vala?
I have a completely Linux environment at home, and I've been looking for a nice language, and C# just doesn't cut it because I won't touch Mono. So, I was looking for something like C# but that was usable in an open source environment, and Vala's what I found. C#'s got a few features that Vala doesn't, and Vala's got a few features where I think "It would be awesome if C# had that".

What are some of the features that it's missing?
Extension methods. And I think that's the only one that really bugs me. I like to use them when I'm writing C# because it makes some things really easy, especially with libraries that you can't touch the internals of. It doesn't have method overloading, which is sometimes annoying.

Where does it win over C#?
Everything is non-nullable by default, you never have to check that something's unexpectedly null. Also, Vala has code contracts. 
This is starting to come in C# 4, but the way it works in Vala is that you specify requirements in short phrases as part of your function signature and they stick to the signature, so that when you inherit it, it has exactly the same code contract as the base one, or when you inherit from an interface, you have to match the signature exactly. Just using those makes you think a bit more about how you're writing your method, it's not an afterthought; when you've got contracts from base classes given to you, you can't change them. Which I think is a lot nicer than the way C# handles it.

When are those actually checked?
They're checked both at compile and run-time. The compile-time checking isn't very strong yet, it's quite a new feature in the compiler, and because it compiles down to C, you can write C code and interface with your methods, so you can bypass that compile-time check anyway. So there's an extra runtime check, and if you violate one of the contracts at runtime, it's game over for your program, there's no exception to catch, it's just goodbye!

One thing I dislike about C# is the exceptions. You write a bit of code and fifty exceptions could come from any point in your ten lines, and you can't mentally model how those exceptions are going to come out, and you can't even predict them based on the functions you're calling, because if you've accidentally got a derived class there instead of a base class, that can throw a completely different set of exceptions. So I've got no way of mentally modelling those, whereas in Vala they're checked like Java, so you know only these exceptions can come out. You know in advance the error conditions. I think Raymond Chen on Old New Thing says "the only thing you know when you throw an exception is that you're in an invalid state somewhere in your program, so just kill it and be done with it!"

You said you've also learnt bits of Python. How did you find that compared to Vala and C#?
Very different because of the dynamic typing. I've been writing a website for my own use. I'm quite into photography, so I take photos off my camera, post-process them, dump them in a file, and I get a webpage with all my thumbnails. So sort of like Picasa, but written by myself because I wanted something to learn Python with. There are some things that are really nice, I just found it really difficult to cope with the fact that I'm not quite sure what this object type that I'm passed is, I might not ever be sure, so it can randomly blow up on me. But once I train myself to ignore that and just say "well, I'm fairly sure it's going to be something that looks like this, so I'll use it like this", then it's quite nice.

Any particular features that you've appreciated?
I don't like any particular feature, it's just very straightforward to work with. It's very quick to write something in, particularly as you don't have to worry that you've changed something that affects a different part of the program. If you have, then that part blows up, but I can get this part working right now.

If you were doing a big project, would you be willing to do it in Python rather than C# or Vala?
I think I might be willing to try something bigger or long term with Python. We're currently doing an ASP.NET MVC project on C#, and I don't like the amount of reflection. There's a lot of magic that pulls values out, and it's all done behind the scenes. 
It's almost managed to put a dynamic type system on top of C#, which in many ways destroys the language to me, whereas if you're already in a dynamic language, having things done dynamically is much more natural. In many ways, you get the worst of both worlds. I think for web projects, I would go with Python again, whereas for anything desktop, command-line or GUI-based, I'd probably go for C# or Vala, depending on what environment I'm in. It's the fact that you can gain from the strong typing in ways that you can't so much on the web app. Or, in a web app, you have to use dynamic typing at some point, or you have to write a hell of a lot of boilerplate, and I'd rather use the dynamic typing than write the boilerplate.

What do you think separates great programmers from everyone else?
Probably design choices. Choosing to write a piece of code one way or another. For any given program you ask me to write, I could probably do it five thousand ways. A programmer who is capable will see four or five of them, and choose one of the better ones. The excellent programmer will see the largest proportion and manage to pick the best one very quickly without having to think too much about it. I think that's probably what separates them: the speed at which they can see the best path to write the program in.

More Red Gater Coder interviews
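As a side note to Robin's remark that code contracts were "starting to come in C# 4": below is a minimal sketch of what they look like with .NET 4's System.Diagnostics.Contracts. The Account type is hypothetical, invented here for illustration, and runtime enforcement additionally requires the Code Contracts binary rewriter (ccrewrite), so treat this as a sketch rather than a drop-in recipe.

using System.Diagnostics.Contracts;

public class Account
{
    private int _balance = 100;

    public void Withdraw(int amount)
    {
        // Preconditions: checked before the method body runs
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= _balance);

        // Postcondition: checked when the method returns
        Contract.Ensures(_balance >= 0);

        _balance -= amount;
    }
}

Unlike Vala, where requires/ensures clauses sit in the method signature and travel with it through inheritance, these calls live in the method body, which is part of the difference Robin is pointing at.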

    Read the article

  • C# tip: do not use an “is” type check if you will need to cast with “as” later

    - by Michael Freidgeim
    I had a debate with one of my colleagues: is it good style to check whether an object is of a particular type with "is", and then cast it with "as"? The perfect answer from Jon Skeet, and the answers in "Cast then check or check then cast?", confirmed my point.

// good
var coke = cola as CocaCola;
if (coke != null)
{
    // some unique coca-cola only code
}

// worse
if (cola is CocaCola)
{
    var coke = cola as CocaCola;
    // some unique coca-cola only code here.
}
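To make the comparison concrete, here is a minimal compilable sketch of the preferred pattern. The Cola/CocaCola types and the AddSecretIngredient method are invented purely for illustration:

public class Cola { }

public class CocaCola : Cola
{
    public void AddSecretIngredient() { }
}

public static class Program
{
    public static void Main()
    {
        Cola cola = new CocaCola();

        // "as" performs a single type test and yields null on failure,
        // so no second cast (and no second type check) is needed.
        var coke = cola as CocaCola;
        if (coke != null)
        {
            coke.AddSecretIngredient(); // CocaCola-only code
        }
    }
}

The "is"-then-cast form asks the runtime the same type question twice; the "as"-then-null-check form asks it once, which is the point Jon Skeet's answer makes.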

    Read the article

  • Ubuntu Software Center does not allow changes to software sources

    - by Michael Goldshteyn
    The checkboxes that appear to be changeable under the Edit / Software Sources dialog box cannot be changed. I click on them and they just turn gray and stay at their current setting.

Update: When I run software-center from a terminal window and try to change one of the checkbox settings, I get:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/softwareproperties/gtk/SoftwarePropertiesGtk.py", line 649, in on_isv_source_toggled
    self.backend.ToggleSourceUse(str(source_entry))
  File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 143, in __call__
    **keywords)
  File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 630, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: com.ubuntu.SoftwareProperties.PermissionDeniedByPolicy: com.ubuntu.softwareproperties.applychanges

These things happen instead of it properly prompting me for a password (for root privileges).

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: with all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each other just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!).

I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads, so it's often not that critical to test load performance. This post is not meant to make a point or even come to a conclusion about which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub

I looked at this data for these technologies:

- ASP.NET Web API
- ASP.NET MVC
- WebForms
- ASP.NET WebPages
- ASMX AJAX Services (couldn't get AJAX/JSON to run on IIS 8)
- WCF REST
- Raw ASP.NET HttpHandlers

It's quite a mixed bag, of course, and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity than anything, and I thought it interesting enough to discuss on the blog :-)

First test: Raw Throughput

The first thing I did is test raw throughput for the various technologies. This is the least practical test of course, since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I created the simplest Hello World request that I could come up with for each tech.

Http Handler

The first is the lowest level approach, which is an HTTP handler.

public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

WebForms

Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simply does this in CodeBehind without any markup in the ASPX page:

public partial class HelloWorld_CodeBehind : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
        Response.End();
    }
}

while the markup page only contains some static output via an expression:

<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %>
Hello World. Time is <%= DateTime.Now %>

ASP.NET WebPages

WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:

Hello World @DateTime.Now

WCF REST

WCF REST was the token REST implementation for ASP.NET before WebAPI, and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class:

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString());
        var ms = new MemoryStream(data);

        // Add your operation implementation here
        return ms;
    }
}

WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client.

ASP.NET AJAX (ASMX Services)

I also wanted to test ASP.NET AJAX services because, prior to WebAPI, this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box).

ASP.NET MVC

Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller:

public class MvcPerformanceController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult()
        {
            Content = "Hello World. Time is: " + DateTime.Now.ToString()
        };
    }
}

ASP.NET WebAPI

Next up is WebAPI, which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response:

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
        };
    }
}

Testing

Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much.

The testing I did is pretty informal, since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this:

ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld

which hits the specified URL 100,000 times with a load factor of 20 concurrent requests, and produces a summary of request counts, requests per second, and timings. It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the Web Server. 
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple.

I ran each test with:

- 100,000 iterations
- Load factor of 20 concurrent connections
- IISReset before starting
- A short warm-up run for API and MVC to make sure startup cost is mitigated

Here is the batch file I used for the test:

IISRESET

REM make sure you add
REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
REM to your path so ab.exe can be found

REM Warm up
ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

I ran each of these tests 3 times with the machine otherwise idle. I did see a bit of variance when running many tests, so the values used here are the medians. Part of this has to do with the fact I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop, which is a Dell XPS with a quad-core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be representative.

Results

Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising).

As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client, which indicates full round-trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling. 
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served.

Another test - JSON Data Service Results

For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-)

In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

public class Person
{
    public Person()
    {
        Id = 10;
        Name = "Rick";
        Entered = DateTime.Now;
    }

    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Entered { get; set; }
}

Here are the updated handler classes that use Person:

Handler

public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var action = context.Request.QueryString["action"];
        if (action == "json")
            JsonRequest(context);
        else
            TextRequest(context);
    }

    public void TextRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
    }

    public void JsonRequest(HttpContext context)
    {
        var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
        context.Response.ContentType = "application/json";
        context.Response.Write(json);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

WCF REST

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString());
        var ms = new MemoryStream(data);

        // Add your operation implementation here
        return ms;
    }

    [OperationContract]
    [WebGet(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest)]
    public Person HelloWorldJson()
    {
        // Add your operation implementation here
        return new Person();
    }
}

For WCF REST all I have to do is add a method with the Person result type.

ASP.NET MVC

public class MvcPerformanceController : Controller
{
    //
    // GET: /MvcPerformance/
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult()
        {
            Content = "Hello World. Time is: " + DateTime.Now.ToString()
        };
    }

    public JsonResult HelloWorldJson()
    {
        return Json(new Person(), JsonRequestBehavior.AllowGet);
    }
}

For MVC all I have to do for a JSON response is return a JSON result. ASP.NET MVC internally uses JavaScriptSerializer.

ASP.NET WebAPI

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
        };
    }

    [HttpGet]
    public Person HelloWorldJson()
    {
        return new Person();
    }

    [HttpGet]
    public HttpResponseMessage HelloWorldJson2()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new ObjectContent<Person>(new Person(),
            GlobalConfiguration.Configuration.Formatters.JsonFormatter);
        return response;
    }
}

Testing and Results

To run these data requests I used the following ab.exe commands:

REM JSON RESPONSES
ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. The performance for each technology drops a little bit, except for WebAPI, which is up quite a bit! From this test it appears that WebAPI actually performs significantly better returning a JSON response rather than a plain string response.

Snag with Apache Benchmark and 'Length Failures'

I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. Performance improved significantly with JSON results, from 5580 to 6530 or so, which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests - a nonzero Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: no it's not! But it took some sleuthing to figure out why it reports these failures.

At first I thought that Web API was failing, and so to make sure, I re-ran the test with Fiddler attached, running the ab.exe test by using the -X switch:

ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson

which showed that indeed all requests were returning proper HTTP 200 results with full content. However, ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example, these two results:

{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"}
{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"}

are different in length because of the fractional seconds, which results in 68 and 69 bytes respectively. The same URL produces results of different lengths, which is what ab.exe reports. I didn't notice it at first, but the same is happening when running the ASHX handler with the JSON.NET result, since it uses the same serializer that varies the milliseconds. Moral: you can typically ignore Length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler (a small repro sketch appears at the end of this post). Note that the other failure values are accurate though.

Another interesting side note: perf drops over time

As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabilize somewhere around 5900 req/sec, occasionally jumping lower. 
For these tests, this is why I did the IISRESET and warm-up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary

As I stated at the outset, these tests were informal, to satiate my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the highest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it's also about the adequateness for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests.

However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great.

To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my Tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code?

This stresses another point: don't take a single test as the final gospel and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediate that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources
Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET  Web Api
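As promised above, here is a small repro sketch of the length variance. It assumes only JSON.NET (Newtonsoft.Json) and re-declares the post's Person shape without the constructor defaults; the exact byte counts will vary, but the point is that the two payloads differ:

using System;
using Newtonsoft.Json;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Entered { get; set; }
}

public static class LengthDemo
{
    public static void Main()
    {
        // Two timestamps that differ only in their sub-millisecond digits.
        // JSON.NET trims trailing zeros from the fractional seconds, so the
        // serialized payloads come out one byte apart - exactly the kind of
        // variation ab.exe reports as a "Length" failure.
        var baseTime = new DateTime(2012, 9, 4, 10, 57, 24);
        var a = new Person { Id = 10, Name = "Rick", Entered = baseTime.AddTicks(8419260) };
        var b = new Person { Id = 10, Name = "Rick", Entered = baseTime.AddTicks(8519262) };

        Console.WriteLine(JsonConvert.SerializeObject(a).Length);
        Console.WriteLine(JsonConvert.SerializeObject(b).Length);
    }
}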

    Read the article

  • Silverlight Cream for November 21, 2011 -- #1171

    - by Dave Campbell
    In this Issue: Colin Eberhardt, Sumit Dutta, Morten Nielsen, Jesse Liberty, Jeff Blankenburg(-2-), Brian Noyes, and Tony Champion.

Above the Fold: Silverlight: "PV Basics : Client-side Collections" Tony Champion WP7: "Pushpin Clustering with the Windows Phone 7 Bing Map control" Colin Eberhardt

Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up

From SilverlightCream.com:

Pushpin Clustering with the Windows Phone 7 Bing Map control
Colin Eberhardt is back discussing Pushpins for a BingMaps app on WP7 and provides a utility class that clusters pushpins, allowing you to render 1000s of pins on an app ... all the explanation and all the code

Part 22 - Windows Phone 7 - Tile Push Notification
Part 22 in Sumit Dutta's WP7 series is about Tile Push Notification... nice tutorial with all the code listed

Correctly displaying your current location
Morten Nielsen demonstrates formatting the information from the GPS on your WP7 into something intelligible and useful

Spiking the Pomodoro Timer
Jesse Liberty put up a quick and dirty version of a Pomodoro timer for WP7.1 to explore the technical challenges of the Full Stack Phase 2 he's cranking up

31 Days of Mango | Day #10: Network
Jeff Blankenburg's Number 10 in his 31 Days quest of WP7.1 is about the NetworkInformation namespace which gives you all sorts of info on the user's device network connection availability, type, etc.

31 Days of Mango | Day #11: Live Tiles
Jeff Blankenburg takes off on Live Tiles for Day 11... big topic for a 1 day post, but he takes off on it... updating and Live Tiles too

Working with Prism 4 Part 2: MVVM Basics and Commands
Brian Noyes has part 2 of his Prism/MVVM series up at SilverlightShow... very nice tutorial on the basics of getting a view and viewmodel up, and setting up an ICommand to launch an Edit View... plus the code to peruse.

PV Basics : Client-side Collections
Tony Champion is starting a series on Silverlight 5 and Pivot Viewer... First up is some basics in dealing with the control in SL5 and talking about Client-side Collections... great informative tutorial and all the code

Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Do NOT Change “Copy Local” project references to false, unless you understand the consequences.

    - by Michael Freidgeim
    To optimize the performance of a Visual Studio build, I've found multiple recommendations to change the CopyLocal property for dependent DLLs to false, e.g.:

From http://stackoverflow.com/questions/690033/best-practices-for-large-solutions-in-visual-studio-2008 - "CopyLocal? For sure turn this off."

From http://stackoverflow.com/questions/280751/what-is-the-best-practice-for-copy-local-and-with-project-references - "Always set the Copy Local property to false and enforce this via a custom msbuild step."

From http://codebetter.com/patricksmacchia/2007/06/20/benefit-from-the-c-and-vb-net-compilers-perf/ - "My advice is to always set 'Copy Local' to false."

Some time ago we tried to change the setting to false, and found that it caused problems for deployment of top-level projects. Recently I followed the suggestion and changed the settings for middle-level projects. It didn't cause immediate issues, but I was warned by Readify consultant Colin Savage about possible errors during deployments. I didn't undo the changes immediately and we found a few issues during testing. There are many scenarios when you need to leave Copy Local set to true. The concerns are highlighted in some Stack Overflow answers, but those answers have a small number of votes.

Top-level projects: set Copy Local = true.

First of all, it doesn't work correctly for top-level projects, i.e. executables or web sites. As pointed out in the answer http://stackoverflow.com/a/6529461/52277, for all the references in the project at the top, set Copy Local = true. Alternatively you have to change the output directory, as described in http://www.simple-talk.com/dotnet/.net-framework/partitioning-your-code-base-through-.net-assemblies-and-visual-studio-projects/: "If you set 'Copy Local = false', VS will, unless you tell it otherwise, place each assembly alone in its own .\bin\Debug directory. Because of this, you will need to configure VS to place assemblies together in the same directory. To do so, for each VS project, go to VS > Project Properties > Build tab > Output path, and set the Output path to ..\bin\Debug for the debug configuration, and ..\bin\Release for the release configuration."

Second-level dependencies: set Copy Local = true.

Another example where Copy Local = false fails at run-time is when the top-level assembly doesn't directly reference one of its indirect dependencies. E.g. top-level assembly A has a reference to assembly B with Copy Local = true, but assembly B has a reference to assembly C with Copy Local = false. Most likely assembly C will be missing at runtime and will cause errors. See http://stackoverflow.com/questions/602765/when-should-copy-local-be-set-to-true-and-when-should-it-not?lq=1: "Copy local is important for deployment scenarios and tools. As a general rule you should use CopyLocal=True." And from the same question: "Unfortunately there are some quirks and CopyLocal won't necessarily work as expected for assembly references in secondary assemblies structured as shown below:

MainApp.exe
    MyLibrary.dll
        ThirdPartyLibrary.dll (if in the GAC, CopyLocal won't copy it to the MainApp bin folder)

This makes xcopy deployments difficult."

Reflection-loaded DLL dependencies: set Copy Local = true.

E.g. the user can see the error "System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information." The fix for the issue is recommended in http://stackoverflow.com/a/6200173/52277: "I solved this issue by setting the Copy Local attribute of my project's references to true."

In general, the problems of investigating deployment issues may outweigh the benefits of reduced build time. Setting Copy Local to false without considering deployment issues is not a good idea.
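Since the post quotes the ReflectionTypeLoadException message, here is a short sketch of how you might retrieve the LoaderExceptions detail that message refers to. The helper class and method names are invented for illustration:

using System;
using System.Reflection;

public static class LoaderExceptionHelper
{
    public static void DumpTypes(Assembly assembly)
    {
        try
        {
            foreach (var type in assembly.GetTypes())
                Console.WriteLine(type.FullName);
        }
        catch (ReflectionTypeLoadException ex)
        {
            // Each loader exception names a type that failed to load - with
            // Copy Local = false, often because its assembly was never copied
            // to the output folder.
            foreach (var loaderException in ex.LoaderExceptions)
                Console.WriteLine(loaderException.Message);
        }
    }
}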

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >