Search Results

Search found 7216 results on 289 pages for 'low cost'.


  • Of C# Iterators and Performance

    - by James Michael Hare
    Some of you reading this will be wondering, "what is an iterator" and think I'm locked in the world of C++.  Nope, I'm talking C# iterators.  No, not enumerators, iterators.   So, for those of you who do not know what iterators are in C#, I will explain it in summary, and for those of you who know what iterators are but are curious of the performance impacts, I will explore that as well.   Iterators have been around for a bit now, and there are still a bunch of people who don't know what they are or what they do.  I don't know how many times at work I've had a code review on my code and have someone ask me, "what's that yield word do?"   Basically, this post came to me as I was writing some extension methods to extend IEnumerable<T> -- I'll post some of the fun ones in a later post.  Since I was filtering the resulting list down, I was using the standard C# iterator concept; but that got me wondering: what are the performance implications of using an iterator versus returning a new enumeration?   So, to begin, let's look at a couple of methods.  This is a new (albeit contrived) method called Every(...).  The goal of this method is to access and enumeration and return every nth item in the enumeration (including the first).  So Every(2) would return items 0, 2, 4, 6, etc.   Now, if you wanted to write this in the traditional way, you may come up with something like this:       public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)     {         List<T> newList = new List<T>();         int count = 0;           foreach (var i in list)         {             if ((count++ % interval) == 0)             {                 newList.Add(i);             }         }           return newList;     }     So basically this method takes any IEnumerable<T> and returns a new IEnumerable<T> that contains every nth item.  Pretty straight forward.   The problem?  Well, Every<T>(...) will construct a list containing every nth item whether or not you care.  What happens if you were searching this result for a certain item and find that item after five tries?  You would have generated the rest of the list for nothing.   Enter iterators.  This C# construct uses the yield keyword to effectively defer evaluation of the next item until it is asked for.  This can be very handy if the evaluation itself is expensive or if there's a fair chance you'll never want to fully evaluate a list.   We see this all the time in Linq, where many expressions are chained together to do complex processing on a list.  This would be very expensive if each of these expressions evaluated their entire possible result set on call.    Let's look at the same example function, this time using an iterator:       public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)     {         int count = 0;         foreach (var i in list)         {             if ((count++ % interval) == 0)             {                 yield return i;             }         }     }   Notice it does not create a new return value explicitly, the only evidence of a return is the "yield return" statement.  What this means is that when an item is requested from the enumeration, it will enter this method and evaluate until it either hits a yield return (in which case that item is returned) or until it exits the method or hits a yield break (in which case the iteration ends.   
    Behind the scenes, this is all done with a state-machine class that the compiler generates to keep track of the state of the iteration, so that every time the next item is asked for, it finds that item and then updates the current position so it knows where to start next time.

    It doesn't seem like a big deal, does it?  But keep in mind the key point here: it only returns items as they are requested.  Thus if there's a good chance you will only process a portion of the return list and/or if the evaluation of each item is expensive, an iterator may be of benefit.  This is especially true if you intend your methods to be chainable similar to the way Linq methods can be chained.

    For example, perhaps you have a List<int> and you want to take every tenth one until you find one greater than 10.  We could write that as:

        List<int> someList = new List<int>();

        // fill list here

        someList.Every(10).TakeWhile(i => i <= 10);

    Now is the difference more apparent?  If we use the first form of Every, the one that makes a copy of the list, it's going to copy the entire list whether we will need those items or not, and that can be costly!  With the iterator version, however, it will only take items from the list until it finds one that is > 10, at which point no further items in the list are evaluated.

    So, sounds neat, eh?  But what's the cost, you're probably wondering.  So I ran some tests using the two forms of Every above on lists varying from 5 to 500,000 integers and tried various things.

    Now, iteration isn't free.  If you are more likely than not to iterate the entire collection every time, the iterator has some very slight overhead:

        Copy vs Iterator on 100% of Collection (10,000 iterations)

        Collection Size   Num Iterated   Type       Total ms
        5                 5              Copy       5
        5                 5              Iterator   5
        50                50             Copy       28
        50                50             Iterator   27
        500               500            Copy       227
        500               500            Iterator   247
        5000              5000           Copy       2266
        5000              5000           Iterator   2444
        50,000            50,000         Copy       24,443
        50,000            50,000         Iterator   24,719
        500,000           500,000        Copy       250,024
        500,000           500,000        Iterator   251,521

    Notice that when iterating over the entire produced list, the times for the iterator are a little better for smaller lists, then get just a slight bit worse for larger lists.  In reality, given the number of items and iterations, the result is near negligible, but it does show that iterators come at a price.  However, it should also be noted that the form of Every that returns a copy will have a left-over collection to garbage collect.

    However, if we evaluate less and less of the list, the savings start to show and make it well worth the overhead.  Let's look at what happens if you stop looking after 80% of the list:

        Copy vs Iterator on 80% of Collection (10,000 iterations)

        Collection Size   Num Iterated   Type       Total ms
        5                 4              Copy       5
        5                 4              Iterator   5
        50                40             Copy       27
        50                40             Iterator   23
        500               400            Copy       215
        500               400            Iterator   200
        5000              4000           Copy       2099
        5000              4000           Iterator   1962
        50,000            40,000         Copy       22,385
        50,000            40,000         Iterator   19,599
        500,000           400,000        Copy       236,427
        500,000           400,000        Iterator   196,010

    Notice that the iterator form is now operating quite a bit faster.

    But the savings really add up if you stop on average at 50% (which most searches would typically do):

        Copy vs Iterator on 50% of Collection (10,000 iterations)

        Collection Size   Num Iterated   Type       Total ms
        5                 2              Copy       5
        5                 2              Iterator   4
        50                25             Copy       25
        50                25             Iterator   16
        500               250            Copy       188
        500               250            Iterator   126
        5000              2500           Copy       1854
        5000              2500           Iterator   1226
        50,000            25,000         Copy       19,839
        50,000            25,000         Iterator   12,233
        500,000           250,000        Copy       208,667
        500,000           250,000        Iterator   122,336

    Now we see that if we only expect to go on average 50% into the results, we tend to shave off around 40% of the time.  And this is only for one level deep.  If we are using this in a chain of query expressions it only adds to the savings.

    So my recommendation?  If you have a reasonable expectation that someone may only want to partially consume your enumerable result, I would always tend to favor an iterator.  The cost if they iterate the whole thing does not add much at all -- and if they consume only partially, you reap some really good performance gains.

    Next time I'll discuss some of my favorite extensions I've created to make development life a little easier and maintainability a little better.
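    To make the deferred evaluation concrete, here is a minimal, self-contained sketch.  The EnumerableExtensions class name, the Program wrapper, and the logging line are added purely for illustration; the body of Every is the iterator form shown above.  Running it prints only a handful of "evaluating" lines even though the source list holds a million items, because TakeWhile stops pulling from the iterator as soon as an element exceeds 10:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class EnumerableExtensions
        {
            // Iterator form of Every: yields every nth item, deferring all work
            // until the caller actually asks for the next element.
            public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
            {
                int count = 0;
                foreach (var i in list)
                {
                    if ((count++ % interval) == 0)
                    {
                        Console.WriteLine("evaluating " + i);   // logging added only to show the laziness
                        yield return i;
                    }
                }
            }
        }

        public static class Program
        {
            public static void Main()
            {
                var numbers = Enumerable.Range(0, 1000000).ToList();

                // Only elements 0, 10, and 20 are ever produced by the iterator; the
                // rest of the million-item list is never touched.
                foreach (var n in numbers.Every(10).TakeWhile(i => i <= 10))
                {
                    Console.WriteLine(n);
                }
            }
        }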

    Read the article

  • External usb 3.0 hard drive is not recognised when plugged into usb 3 port (ubuntu natty 64 bit).

    - by kimangroo
    I have an Iomega Prestige Portable External Hard Drive 1TB USB 3.0. It works fine on windows 7 as a usb 3.0 drive. It isn't detected on ubuntu natty 64bit, 2.6.38-8-generic. fdisk -l cannot see it at all: Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1bed746b Device Boot Start End Blocks Id System /dev/sda1 1 1689 13560832 27 Unknown /dev/sda2 * 1689 1702 102400 7 HPFS/NTFS /dev/sda3 1702 19978 146805760 7 HPFS/NTFS /dev/sda4 19978 60802 327914497 5 Extended /dev/sda5 25555 60802 283120640 7 HPFS/NTFS /dev/sda6 19978 23909 31571968 83 Linux /dev/sda7 23909 25555 13218816 82 Linux swap / Solaris Partition table entries are not in disk order lsusb can see it: Bus 003 Device 003: ID 059b:0070 Iomega Corp. Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 002 Device 004: ID 05fe:0011 Chic Technology Corp. Browser Mouse Bus 002 Device 003: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 005: ID 0489:e00f Foxconn / Hon Hai Bus 001 Device 004: ID 0c45:64b5 Microdia Bus 001 Device 003: ID 08ff:168f AuthenTec, Inc. Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub And dmesg | grep -i xhci (I may have unplugged the drive and plugged it back in again after booting): [ 1.659060] pci 0000:04:00.0: xHCI HW did not halt within 2000 usec status = 0x0 [ 11.484971] xhci_hcd 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 [ 11.484997] xhci_hcd 0000:04:00.0: setting latency timer to 64 [ 11.485002] xhci_hcd 0000:04:00.0: xHCI Host Controller [ 11.485064] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 3 [ 11.636149] xhci_hcd 0000:04:00.0: irq 18, io mem 0xc5400000 [ 11.636241] xhci_hcd 0000:04:00.0: irq 43 for MSI/MSI-X [ 11.636246] xhci_hcd 0000:04:00.0: irq 44 for MSI/MSI-X [ 11.636251] xhci_hcd 0000:04:00.0: irq 45 for MSI/MSI-X [ 11.636256] xhci_hcd 0000:04:00.0: irq 46 for MSI/MSI-X [ 11.636261] xhci_hcd 0000:04:00.0: irq 47 for MSI/MSI-X [ 11.639654] xHCI xhci_add_endpoint called for root hub [ 11.639655] xHCI xhci_check_bandwidth called for root hub [ 11.956366] usb 3-1: new SuperSpeed USB device using xhci_hcd and address 2 [ 12.001073] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.007059] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.012932] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.018922] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.049139] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.056754] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.131607] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 12.179717] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.686876] xhci_hcd 0000:04:00.0: WARN: babble error on endpoint [ 12.687058] xhci_hcd 0000:04:00.0: WARN Set TR Deq Ptr cmd invalid because of stream ID configuration [ 12.687152] xhci_hcd 0000:04:00.0: ERROR Transfer event for disabled endpoint or incorrect stream ring [ 43.330737] usb 3-1: reset SuperSpeed USB device using xhci_hcd and address 2 [ 43.422579] xhci_hcd 0000:04:00.0: WARN: short transfer on 
control ep [ 43.422658] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af00 [ 43.422665] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af40 [ 43.422671] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af80 [ 43.422677] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669afc0 [ 43.531159] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 125.160248] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 903.766466] usb 3-1: new SuperSpeed USB device using xhci_hcd and address 3 [ 903.807789] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.813530] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.819400] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.825104] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.855067] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.862314] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.862597] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 903.913211] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 904.424416] xhci_hcd 0000:04:00.0: WARN: babble error on endpoint [ 904.424599] xhci_hcd 0000:04:00.0: WARN Set TR Deq Ptr cmd invalid because of stream ID configuration [ 904.424700] xhci_hcd 0000:04:00.0: ERROR Transfer event for disabled endpoint or incorrect stream ring [ 935.139021] usb 3-1: reset SuperSpeed USB device using xhci_hcd and address 3 [ 935.226075] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 935.226140] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b00 [ 935.226148] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b40 [ 935.226153] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b80 [ 935.226159] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186bc0 [ 935.343339] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst I thought it might be that the firmware wasn't compatible with linux or something, but when booting a live image of partedmagic, (2.6.38.4-pmagic), the drive was detected fine, I could mount it and got usb 3.0 speeds (at least they double the speeds I got from plugging same drive in usb 2 ports). 
dmesg in partedmagic did say something about no SuperSpeed endpoint which was an error I saw in a previous dmesg of ubuntu: Jun 27 15:49:02 (none) user.info kernel: [ 2.978743] xhci_hcd 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 Jun 27 15:49:02 (none) user.debug kernel: [ 2.978771] xhci_hcd 0000:04:00.0: setting latency timer to 64 Jun 27 15:49:02 (none) user.info kernel: [ 2.978781] xhci_hcd 0000:04:00.0: xHCI Host Controller Jun 27 15:49:02 (none) user.info kernel: [ 2.978856] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 3 Jun 27 15:49:02 (none) user.info kernel: [ 3.089458] xhci_hcd 0000:04:00.0: irq 18, io mem 0xc5400000 Jun 27 15:49:02 (none) user.debug kernel: [ 3.089541] xhci_hcd 0000:04:00.0: irq 42 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089544] xhci_hcd 0000:04:00.0: irq 43 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089546] xhci_hcd 0000:04:00.0: irq 44 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089548] xhci_hcd 0000:04:00.0: irq 45 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089550] xhci_hcd 0000:04:00.0: irq 46 for MSI/MSI-X Jun 27 15:49:02 (none) user.warn kernel: [ 3.092857] usb usb3: No SuperSpeed endpoint companion for config 1 interface 0 altsetting 0 ep 129: using minimum values Jun 27 15:49:02 (none) user.info kernel: [ 3.092864] usb usb3: New USB device found, idVendor=1d6b, idProduct=0003 Jun 27 15:49:02 (none) user.info kernel: [ 3.092866] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Jun 27 15:49:02 (none) user.info kernel: [ 3.092867] usb usb3: Product: xHCI Host Controller Jun 27 15:49:02 (none) user.info kernel: [ 3.092869] usb usb3: Manufacturer: Linux 2.6.38.4-pmagic xhci_hcd Jun 27 15:49:02 (none) user.info kernel: [ 3.092870] usb usb3: SerialNumber: 0000:04:00.0 Jun 27 15:49:02 (none) user.debug kernel: [ 3.092961] xHCI xhci_add_endpoint called for root hub Jun 27 15:49:02 (none) user.debug kernel: [ 3.092963] xHCI xhci_check_bandwidth called for root hub Well I have no idea what's going wrong, and I haven't had much luck from google and the forums so far. A number of unanswered threads with people with similar error messages and problems only. Hopefully someone here can help or point me in the right direction?!

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them.  They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much.

    I can't deny that they are useful and very handy.  One of the major components of the Shared Component library where I work is a set of useful extension methods.  But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.

    So what constitutes a good extension method?  Obviously, you can write an extension method for nearly anything whether it is a good idea or not.  Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit.

    So what's the litmus test?  To me, an extension method should be like in the movies when a person runs into their twin, separated at birth.  You just know you're related.  Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them.  A good extension method should:

    - Apply to any possible instance of the type it extends.
    - Simplify logic and improve readability/maintainability.
    - Apply to the most specific type or interface applicable.
    - Be isolated in a namespace so that it does not pollute IntelliSense.

    So let's look at a few examples in relation to these rules.

    The first rule, to me, is the most important of all.  Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends.  It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree.

    Take this nifty little int extension.  I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on.  So let's look:

        public static class IntExtensions
        {
            public static int Seconds(this int num)
            {
                return num * 1000;
            }

            public static int Minutes(this int num)
            {
                return num * 60000;
            }
        }

    This is so you could do things like:

        ...
        Thread.Sleep(5.Seconds());
        ...
        proxy.Timeout = 1.Minutes();
        ...

    Awww, you say, that's cute!  Well, that's the problem, it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)).  It's syntactical candy that looks cool, but tends to rot and pollute the code.  It would allow things like:

        total += numberOfTodaysOrders.Seconds();

    which makes no sense and should never be allowed.  The problem is you're applying an extension method to a logical domain, not a type domain.  That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds.

    Do you see what I mean?  The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse.

    Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.   Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts, I like to put in a static utility class and then have extension methods for syntactical candy. For instance, I think it imaginable that any object could be converted to XML:       namespace Shared.Core     {         // A collection of XML Utility classes         public static class XmlUtility         {             ...             // Serialize an object into an xml string             public static string ToXml(object input)             {                 var xs = new XmlSerializer(input.GetType());                   // use new UTF8Encoding here, not Encoding.UTF8. The later includes                 // the BOM which screws up subsequent reads, the former does not.                 using (var memoryStream = new MemoryStream())                 using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))                 {                     xs.Serialize(xmlTextWriter, input);                     return Encoding.UTF8.GetString(memoryStream.ToArray());                 }             }             ...         }     }   I also wanted to be able to call this from an object like:       value.ToXml();     But here's the problem, if i made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense, then in my extensions namespace, I add the syntactical candy:       namespace Shared.Core.Extensions     {         public static class XmlExtensions         {             public static string ToXml(this object value)             {                 return XmlUtility.ToXml(value);             }         }     }   So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle. The XmlUtility is responsible for converting objects to XML, and the XmlExtensions is responsible for extending object's interface for ToXml().
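    As a quick usage sketch of the two consumption styles described above (this assumes the Shared.Core / Shared.Core.Extensions code from the post is compiled into a referenced library; the Order type and Program wrapper are hypothetical placeholders added for illustration):

        using System;
        using Shared.Core;              // brings in XmlUtility only -- no extension methods appear
        using Shared.Core.Extensions;   // opt-in: now ToXml() shows up on every object

        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

        public static class Program
        {
            public static void Main()
            {
                var order = new Order { Id = 42, Total = 99.95m };

                // Style 1: plain utility call, no IntelliSense pollution required.
                string xml1 = XmlUtility.ToXml(order);

                // Style 2: syntactical candy via the extensions namespace.
                string xml2 = order.ToXml();

                Console.WriteLine(xml1 == xml2);    // both routes run the same serializer logic
            }
        }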

    Read the article

  • Share and Deliver BI Publisher Reports in Multiple Languages

    - by kanichiro.nishida
    When you share your reports with someone who speak and read in different languages you want your reports to be shown in their language, right ? Well, translating reports with BI Publisher is not only easy but also reduces the maintenance cost a lot. Many of us in the BI Publisher product development team used to work in Globalization and Multi Lingual support, which enables Oracle products and applications to be used in many different languages and countries and territories.  And we have a lot of experience in this area. In fact, being a strategic reporting platform for Oracle EBS, PeopleSoft, JD Edwards, Siebel, and many other Oracle application products, our customers from all over the world are generating thousands of thousands of reports, including out-of-the-box pre-developed reports from Oracle and customer created or customized reports, in their own local language everyday as they operate and manage their business. Today, I’m going to talk about this very topic, how to translate my reports with BI Publisher 11G. Translation Grows, not the Numbers of the Reports Most of the reporting tools, regardless if it’s traditional or new, always take this translation on the back burner. They require their users to copy an original report and translate the whole thing. So when you want to support additional10 languages you will need to have 10 copies of the original. Imagine when you have 50 reports then you will end up having 500 reports (50 x 10) ! Now you need to maintain these 500 reports, whenever you need to make a change in a report you need to apply the same change to the other 10 reports. And as you imagine this is not only a nightmare for IT managements but not acceptable especially for the applications like Oracle EBS that supports over 30 languages. So first thing we did was, very simple, we separated the translation out of the report and marry it to the report only at the report generation. This means, regardless of how many languages you need to support you need to have only one report and translation files for the 10 languages, which would contain the translated letters and words. So let’s say you have 50 reports and need to support 10 languages for those reports you still have only 50 reports and each report now has 10 language translation files. Yes, translation is the one should grow as you add more languages to support, not the report itself! And second, we provide the translation files in XLIFF format, which is an international standard XML based format to exchange and maintain translation strings. So once you generate the XLIFF files for your reports with BI Publisher then you can work with any translation vendors in the world to make a mass translation or you can translate the XML files by yourself by manually updating the translatable strings presented in this text file. Lastly, we made it easier to manage the translation process starting from generating the XLIFF files to uploading the translated XLIFF files back to the BI Publisher server. You can generate, download, upload the XLIFF files from the BI Publisher’s Web interface with your browser and you can see the translated reports right away without needing to shutdown or restart your server. While the translated reports are displayed based on your language preference setting you can also specify a different language when you schedule or deliver the reports so that they can be generated in your customer’s preferred language. What Can I Translate? When it comes to translation there are three things. 
First, report content translation. When you receive a report you like to see the content like report title, section title, comments, annotation, table column header, and anything that are static and embedded in the report. in your preferred language. We call this Reports Content translation. Second, when you open a report online you might want to see not only the report content being translated but also the report UI, such as report name, parameter name, layout name, and anything that would help you to navigate around the reports, to be translated in your language. We call this Reports UI translation. And this separation of the Reports Content and Reports UI translation makes it very useful especially when you want to navigate through the reports in your preferred language UI but want to generate the reports in your customer’s preferred language. Imagine you are English native speaker and need to generate and send a report to your customers in China. You like to see the report name, parameter name in English so that you can comfortably navigate to the report and generate the report output, but like to see the report generated in Chinese so that the your customers in China can understand the report when they receive it. And lastly, you might want to see even the data presented in the report to be translated. For example, you might want to see product names in an Order Status report to be translated based on the report viewer’s language preference. We call this Reporting Data translation. Since this Reporting Data translation is maintained at the data source level such as Database tables along with the main data, you need to prepare the translation at the data source level first. Then, you want to make sure that your query is switched accordingly based on the language preference setting so that the translated data will be retrieved. How to Translate BI Publisher Reports? Now when it comes to ‘how to translate BI Publisher reports?’ the main focus here is about the translation for the Report Content and Report UI. And I just created this video to show you how to create and manage the translation with BI Publisher 11G. Please take a look at the clip below.   In today’s business world, customers and suppliers are from all over the world regardless of the size of the company or organization. Supporting multiple languages for your reports is no longer something ‘nice to have’, it’s mandatory. BI Publisher is designed to support multi lingual reports from the beginning without any extra hidden cost of license or configuration like other reporting tools such as Crystal Reports. You can support additional languages translation at any time with the very simple steps shown in the video above. Happy translation! Please share your translation experience with us! 

    Read the article

  • How to free up space on /boot? [closed]

    - by Phrogz
    Possible Duplicate: Free up more space on /boot I logged onto my server today to find the message: => /boot is using 98.9% of 91MB When I look at /boot I see that it is indeed very low on space, and has old-kernel files in it: phrogz@planar:~$ df -h /boot Filesystem Size Used Avail Use% Mounted on /dev/sda1 92M 54M 33M 63% /boot phrogz@planar:~$ la /boot total 81880 drwxr-xr-x 4 root root 3072 2011-12-02 06:26 ./ drwxr-xr-x 22 root root 4096 2011-09-29 06:37 ../ -rw-r--r-- 1 root root 646419 2011-03-01 19:02 abi-2.6.32-30-server -rw-r--r-- 1 root root 646419 2011-04-08 17:07 abi-2.6.32-31-server -rw-r--r-- 1 root root 646454 2011-04-20 16:53 abi-2.6.32-32-server -rw-r--r-- 1 root root 646454 2011-07-29 16:07 abi-2.6.32-33-server -rw-r--r-- 1 root root 646710 2011-09-13 18:00 abi-2.6.32-34-server -rw-r--r-- 1 root root 646820 2011-10-11 11:10 abi-2.6.32-35-server -rw-r--r-- 1 root root 110687 2011-03-01 19:02 config-2.6.32-30-server -rw-r--r-- 1 root root 110676 2011-04-08 17:07 config-2.6.32-31-server -rw-r--r-- 1 root root 110687 2011-04-20 16:53 config-2.6.32-32-server -rw-r--r-- 1 root root 110687 2011-07-29 16:07 config-2.6.32-33-server -rw-r--r-- 1 root root 110687 2011-09-13 18:00 config-2.6.32-34-server -rw-r--r-- 1 root root 110687 2011-10-11 11:10 config-2.6.32-35-server drwxr-xr-x 3 root root 6144 2011-12-02 06:26 grub/ -rw-r--r-- 1 root root 8258196 2011-05-18 11:58 initrd.img-2.6.32-30-server -rw-r--r-- 1 root root 8259568 2011-05-23 20:24 initrd.img-2.6.32-31-server -rw-r--r-- 1 root root 8257374 2011-05-30 07:47 initrd.img-2.6.32-32-server -rw-r--r-- 1 root root 8287489 2011-08-10 06:37 initrd.img-2.6.32-33-server -rw-r--r-- 1 root root 8288075 2011-09-29 06:37 initrd.img-2.6.32-34-server drwx------ 2 root root 12288 2011-05-18 11:46 lost+found/ -rw-r--r-- 1 root root 160280 2010-03-23 03:40 memtest86+.bin -rw-r--r-- 1 root root 2179117 2011-03-01 19:02 System.map-2.6.32-30-server -rw-r--r-- 1 root root 2179628 2011-04-08 17:07 System.map-2.6.32-31-server -rw-r--r-- 1 root root 2178240 2011-04-20 16:53 System.map-2.6.32-32-server -rw-r--r-- 1 root root 2178382 2011-07-29 16:07 System.map-2.6.32-33-server -rw-r--r-- 1 root root 2178952 2011-09-13 18:00 System.map-2.6.32-34-server -rw-r--r-- 1 root root 2179333 2011-10-11 11:10 System.map-2.6.32-35-server -rw-r--r-- 1 root root 1336 2011-03-01 19:08 vmcoreinfo-2.6.32-30-server -rw-r--r-- 1 root root 1336 2011-04-08 17:13 vmcoreinfo-2.6.32-31-server -rw-r--r-- 1 root root 1336 2011-04-20 16:54 vmcoreinfo-2.6.32-32-server -rw-r--r-- 1 root root 1336 2011-07-29 16:08 vmcoreinfo-2.6.32-33-server -rw-r--r-- 1 root root 1336 2011-09-13 18:03 vmcoreinfo-2.6.32-34-server -rw-r--r-- 1 root root 1336 2011-10-11 11:11 vmcoreinfo-2.6.32-35-server -rw-r--r-- 1 root root 4111552 2011-03-01 19:02 vmlinuz-2.6.32-30-server -rw-r--r-- 1 root root 4113344 2011-04-08 17:07 vmlinuz-2.6.32-31-server -rw-r--r-- 1 root root 4106528 2011-04-20 16:53 vmlinuz-2.6.32-32-server -rw-r--r-- 1 root root 4107648 2011-07-29 16:07 vmlinuz-2.6.32-33-server -rw-r--r-- 1 root root 4108960 2011-09-13 18:00 vmlinuz-2.6.32-34-server -rw-r--r-- 1 root root 4111040 2011-10-11 11:10 vmlinuz-2.6.32-35-server I was able to find the old kernel packages like so: phrogz@planar:/boot$ dpkg -l | grep linux-image ii linux-image-2.6.32-30-server 2.6.32-30.59 Linux kernel image for version 2.6.32 on x86 ii linux-image-2.6.32-31-server 2.6.32-31.61 Linux kernel image for version 2.6.32 on x86 ii linux-image-2.6.32-32-server 2.6.32-32.62 Linux kernel image for version 2.6.32 on 
x86 ii linux-image-2.6.32-33-server 2.6.32-33.72 Linux kernel image for version 2.6.32 on x86 ii linux-image-2.6.32-34-server 2.6.32-34.77 Linux kernel image for version 2.6.32 on x86 iF linux-image-2.6.32-35-server 2.6.32-35.78 Linux kernel image for version 2.6.32 on x86 iU linux-image-server 2.6.32.36.42 Linux kernel image on Server Equipment. …and I can see that many of them are older than my current image: phrogz@planar:/boot$ uname -a Linux planar 2.6.32-34-server #77-Ubuntu SMP Tue Sep 13 20:54:38 UTC 2011 x86_64 GNU/Linux However, I can't actually remove them due to an unmet dependency: phrogz@planar:/boot$ sudo apt-get --purge remove linux-image-2.6.32-30-server Reading package lists... Done Building dependency tree Reading state information... Done You might want to run `apt-get -f install' to correct these: The following packages have unmet dependencies: linux-image-server: Depends: linux-image-2.6.32-36-server but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). But I can't fix the dependency (presumably due to low disk space): phrogz@planar:/boot$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: liblcms1 linux-headers-2.6.32-32-server libnspr4-0d linux-headers-2.6.32-33-server linux-headers-2.6.32-32 linux-headers-2.6.32-33 linux-headers-2.6.32-34 libcups2 tzdata-java libjpeg62 linux-headers-2.6.32-34-server libavahi-client3 ca-certificates-java libnss3-1d Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-image-2.6.32-36-server Suggested packages: fdutils linux-doc-2.6.32 linux-source-2.6.32 linux-tools The following NEW packages will be installed: linux-image-2.6.32-36-server 0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded. 3 not fully installed or removed. Need to get 0B/31.8MB of archives. After this operation, 128MB of additional disk space will be used. Do you want to continue [Y/n]? (Reading database ... 145200 files and directories currently installed.) Unpacking linux-image-2.6.32-36-server (from .../linux-image-2.6.32-36-server_2.6.32-36.79_amd64.deb) ... Done. dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-36-server_2.6.32-36.79_amd64.deb (--unpack): failed in buffer_write(fd) (10, ret=-1): backend dpkg-deb during `./boot/vmlinuz-2.6.32-36-server': No space left on device dpkg-deb: subprocess paste killed by signal (Broken pipe) Running postrm hook script /usr/sbin/update-grub. Generating grub.cfg ... Found linux image: /boot/vmlinuz-2.6.32-35-server Found linux image: /boot/vmlinuz-2.6.32-34-server Found initrd image: /boot/initrd.img-2.6.32-34-server Found linux image: /boot/vmlinuz-2.6.32-33-server Found initrd image: /boot/initrd.img-2.6.32-33-server Found linux image: /boot/vmlinuz-2.6.32-32-server Found initrd image: /boot/initrd.img-2.6.32-32-server Found linux image: /boot/vmlinuz-2.6.32-31-server Found initrd image: /boot/initrd.img-2.6.32-31-server Found linux image: /boot/vmlinuz-2.6.32-30-server Found initrd image: /boot/initrd.img-2.6.32-30-server Found memtest86+ image: /memtest86+.bin done Errors were encountered while processing: /var/cache/apt/archives/linux-image-2.6.32-36-server_2.6.32-36.79_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) How do I free up space on /boot so that I can fix my dependencies? 
Should I just delete the files manually? And then, should I resize my /boot to be larger, so this doesn't happen again? If so, how? If not, what maintenance should I be running regularly to prevent the accumulation of this cruft?

    Read the article

  • AndEngine; Box2D - high speed body overlapping, prismatic joints

    - by Visher
    I'm trying to make good suspension for my car game, but I'm getting nervous of some problems with it. At the beginning, I've tried to make it out of one prismatic joint/revolute joint per one wheel only, but surprisingly prismatic joint that should only move in Y asix moves also in X axis, if car travels very fast, or even on low speeds if there's setContinuousPhysics = true. This causes wheels to "shift back", moving them away from axle. Now I've tried to add some bodies that will keep it in place: Suspension helper collides with spring only, wheel doesn't collide with spring&helper&vehicle body This is how I create those elements: rect = new Rectangle(1100, 1350, 200, 50, getVertexBufferObjectManager()); rect.setColor(Color.RED); scene.attachChild(rect); //rect.setRotation(90); Rectangle miniRect1 = new Rectangle(1102, 1355, 30, 50, getVertexBufferObjectManager()); miniRect1.setColor(0, 0, 1, 0.5f); miniRect1.setVisible(true); scene.attachChild(miniRect1); Rectangle miniRect2 = new Rectangle(1268, 1355, 30, 50, getVertexBufferObjectManager()); miniRect2.setColor(0, 0, 1, 0.5f); miniRect1.setVisible(true); scene.attachChild(miniRect2); rectBody = PhysicsFactory.createBoxBody( physicsWorld, rect, BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 10.0f)); rectBody.setUserData("car"); Body miniRect1Body = PhysicsFactory.createBoxBody( physicsWorld, miniRect1, BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 10.0f)); miniRect1Body.setUserData("suspension"); Body miniRect2Body = PhysicsFactory.createBoxBody( physicsWorld, miniRect2, BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 10.0f)); miniRect2Body.setUserData("suspension"); physicsWorld.registerPhysicsConnector(new PhysicsConnector(rect, rectBody, true, true)); physicsWorld.registerPhysicsConnector(new PhysicsConnector(miniRect1, miniRect1Body, true, true)); physicsWorld.registerPhysicsConnector(new PhysicsConnector(miniRect2, miniRect2Body, true, true)); PrismaticJointDef miniRect1JointDef = new PrismaticJointDef(); miniRect1JointDef.initialize(rectBody, miniRect1Body, miniRect1Body.getWorldCenter(), new Vector2(0.0f, 0.3f)); miniRect1JointDef.collideConnected = false; miniRect1JointDef.enableMotor= true; miniRect1JointDef.maxMotorForce = 15; miniRect1JointDef.motorSpeed = 5; miniRect1JointDef.enableLimit = true; physicsWorld.createJoint(miniRect1JointDef); PrismaticJointDef miniRect2JointDef = new PrismaticJointDef(); miniRect2JointDef.initialize(rectBody, miniRect2Body, miniRect2Body.getWorldCenter(), new Vector2(0.0f, 0.3f)); miniRect2JointDef.collideConnected = false; miniRect2JointDef.enableMotor= true; miniRect2JointDef.maxMotorForce = 15; miniRect2JointDef.motorSpeed = 5; miniRect2JointDef.enableLimit = true; physicsWorld.createJoint(miniRect2JointDef); scene.attachChild(karoseriaSprite); Rectangle r1 = new Rectangle(1050, 1300, 52, 150, getVertexBufferObjectManager()); r1.setColor(0, 1, 0, 0.5f); r1.setVisible(true); scene.attachChild(r1); Body r1body = PhysicsFactory.createBoxBody(physicsWorld, r1, BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.001f, 0.01f)); r1body.setUserData("suspensionHelper"); physicsWorld.registerPhysicsConnector(new PhysicsConnector(r1, r1body, true, true)); WeldJointDef r1jointDef = new WeldJointDef(); r1jointDef.initialize(r1body, rectBody, r1body.getWorldCenter()); physicsWorld.createJoint(r1jointDef); Rectangle r2 = new Rectangle(1132, 1300, 136, 150, getVertexBufferObjectManager()); 
r2.setColor(0, 1, 0, 0.5f); r2.setVisible(true); scene.attachChild(r2); Body r2body = PhysicsFactory.createBoxBody(physicsWorld, r2, BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.001f, 0.01f)); r2body.setUserData("suspensionHelper"); physicsWorld.registerPhysicsConnector(new PhysicsConnector(r2, r2body, true, true)); WeldJointDef r2jointDef = new WeldJointDef(); r2jointDef.initialize(r2body, rectBody, r2body.getWorldCenter()); physicsWorld.createJoint(r2jointDef); Rectangle r3 = new Rectangle(1298, 1300, 50, 150, getVertexBufferObjectManager()); r3.setColor(0, 1, 0, 0.5f); r3.setVisible(true); scene.attachChild(r3); Body r3body = PhysicsFactory.createBoxBody(physicsWorld, r3, BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(1f, 0.01f, 0.01f)); r3body.setUserData("suspensionHelper"); physicsWorld.registerPhysicsConnector(new PhysicsConnector(r3, r3body, true, true)); WeldJointDef r3jointDef = new WeldJointDef(); r3jointDef.initialize(r3body, rectBody, r3body.getWorldCenter()); physicsWorld.createJoint(r3jointDef); MouseJointDef md = new MouseJointDef(); Sprite wheel1 = new Sprite( miniRect1.getX()+miniRect1.getWidth()/2-wheelTexture.getWidth()/2, miniRect1.getY()+miniRect1.getHeight()-wheelTexture.getHeight()/2, wheelTexture, engine.getVertexBufferObjectManager()); scene.attachChild(wheel1); Body wheel1body = PhysicsFactory.createCircleBody( physicsWorld, wheel1, BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 5.0f)); wheel1body.setUserData("wheel"); Shape wheel1shape = wheel1body.getFixtureList().get(0).getShape(); wheel1shape.setRadius(wheel1shape.getRadius()*(3.0f/4.0f)); physicsWorld.registerPhysicsConnector(new PhysicsConnector(wheel1, wheel1body, true, true)); Sprite wheel2 = new Sprite( miniRect2.getX()+miniRect2.getWidth()/2-wheelTexture.getWidth()/2, miniRect2.getY()+miniRect2.getHeight()-wheelTexture.getHeight()/2, wheelTexture, engine.getVertexBufferObjectManager()); scene.attachChild(wheel2); Body wheel2body = PhysicsFactory.createCircleBody( physicsWorld, wheel2, BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 5.0f)); wheel2body.setUserData("wheel"); Shape wheel2shape = wheel2body.getFixtureList().get(0).getShape(); wheel2shape.setRadius(wheel2shape.getRadius()*(3.0f/4.0f)); physicsWorld.registerPhysicsConnector(new PhysicsConnector(wheel2, wheel2body, true, true)); RevoluteJointDef frontWheelRevoluteJointDef = new RevoluteJointDef(); frontWheelRevoluteJointDef.initialize(wheel1body, miniRect1Body, wheel1body.getWorldCenter()); frontWheelRevoluteJointDef.collideConnected = false; RevoluteJointDef rearWheelRevoluteJointDef = new RevoluteJointDef(); rearWheelRevoluteJointDef.initialize(wheel2body, miniRect2Body, wheel2body.getWorldCenter()); rearWheelRevoluteJointDef.collideConnected = false; rearWheelRevoluteJointDef.motorSpeed = 2050; rearWheelRevoluteJointDef.maxMotorTorque= 3580; physicsWorld.createJoint(frontWheelRevoluteJointDef); Joint j = physicsWorld.createJoint(rearWheelRevoluteJointDef); rearWheelRevoluteJoint = (RevoluteJoint)j; r1body.setBullet(true); r2body.setBullet(true); r3body.setBullet(true); miniRect1Body.setBullet(true); miniRect2Body.setBullet(true); rectBody.setBullet(true); at low speeds, it's OK, but on high speed vehicle can even flip around on flat ground.. Is there a way to make this work better?

    Read the article

  • SortedDictionary and SortedList

    - by Simon Cooper
    Apart from Dictionary<TKey, TValue>, there are two other dictionaries in the BCL - SortedDictionary<TKey, TValue> and SortedList<TKey, TValue>.  On the face of it, these two classes do the same thing - provide an IDictionary<TKey, TValue> interface where the iterator returns the items sorted by the key.  So what's the difference between them, and when should you use one rather than the other?  (as in my previous post, I'll assume you have some basic algorithm & data structure knowledge)

    SortedDictionary

    We'll first cover SortedDictionary.  This is implemented as a special sort of binary tree called a red-black tree.  Essentially, it's a binary tree that uses various constraints on how the nodes of the tree can be arranged to ensure the tree is always roughly balanced (for more gory algorithmic details, see the wikipedia link above).  What I'm concerned about in this post is how the .NET SortedDictionary is actually implemented.  In .NET 4, behind the scenes, the actual implementation of the tree is delegated to a SortedSet<KeyValuePair<TKey, TValue>>.

    Each node in the tree is stored as a separate SortedSet<T>.Node object (remember, in a SortedDictionary, T is instantiated to KeyValuePair<TKey, TValue>):

        class Node
        {
            public bool IsRed;
            public T Item;
            public SortedSet<T>.Node Left;
            public SortedSet<T>.Node Right;
        }

    The SortedSet only stores a reference to the root node; all the data in the tree is accessed by traversing the Left and Right node references until you reach the node you're looking for.  Each individual node can be physically stored anywhere in memory; what's important is the relationship between the nodes.  This is also why there is no constructor to SortedDictionary or SortedSet that takes an integer representing the capacity; there are no internal arrays that need to be created and resized.  This may seem trivial, but it's an important distinction between SortedDictionary and SortedList that I'll cover later on.

    And that's pretty much it; it's a standard red-black tree.  Plenty of webpages and data structure books cover the algorithms behind the tree itself far better than I could.  What's interesting is the comparison between SortedDictionary and SortedList, which I'll cover at the end.

    As a side point, SortedDictionary has existed in the BCL ever since .NET 2.  That means that, all through .NET 2, 3, and 3.5, there has been a bona-fide sorted set class in the BCL (called TreeSet).  However, it was internal, so it couldn't be used outside System.dll.  Only in .NET 4 was this class exposed as SortedSet.

    SortedList

    Whereas SortedDictionary didn't use any backing arrays, SortedList does.  It is implemented just as the name suggests: two arrays, one containing the keys, and one the values (I've just used random letters for the values).  The items in the keys array are always guaranteed to be stored in sorted order, and the value corresponding to each key is stored in the same index as the key in the values array.  In this example, the value for key item 5 is 'z', and for key item 8 is 'm'.

    Whenever an item is inserted or removed from the SortedList, a binary search is run on the keys array to find the correct index, then all the items in the arrays are shifted to accommodate the new or removed item.  For example, if the key 3 was removed, a binary search would be run to find the array index the item was at, then everything above that index would be moved down by one.  And then if the key/value pair {7, 'f'} was added, a binary search would be run on the keys to find the index to insert the new item, and everything above that index would be moved up to accommodate the new item.  If another item was then added, both arrays would be resized (to a length of 10) before the new item was added to the arrays.

    As you can see, any insertions or removals in the middle of the list require a proportion of the array contents to be moved; an O(n) operation.  However, if the insertion or removal is at the end of the array (ie the largest key), then it's only O(log n); the cost of the binary search to determine it does actually need to be added to the end (excluding the occasional O(n) cost of resizing the arrays to fit more items).

    As a side effect of using backing arrays, SortedList offers IList Keys and Values views that simply use the backing keys or values arrays, as well as various methods utilising the array index of stored items, which SortedDictionary does not (and cannot) offer.

    The Comparison

    So, when should you use one and not the other?  Well, here are the important differences:

    Memory usage

    SortedDictionary and SortedList have got very different memory profiles.

    SortedDictionary...
    - has a memory overhead of one object instance, a bool, and two references per item.  On 64-bit systems, this adds up to ~40 bytes, not including the stored item and the reference to it from the Node object.
    - stores the items in separate objects that can be spread all over the heap.  This helps to keep memory fragmentation low, as the individual node objects can be allocated wherever there's a spare 60 bytes.

    In contrast, SortedList...
    - has no additional overhead per item (only the reference to it in the array entries), however the backing arrays can be significantly larger than you need; every time the arrays are resized they double in size.  That means that if you add 513 items to a SortedList, the backing arrays will each have a length of 1024.  To counteract this, the TrimExcess method resizes the arrays back down to the actual size needed, or you can simply assign list.Capacity = list.Count.
    - stores its items in a contiguous block in memory.  If the list stores thousands of items, this can cause significant problems with Large Object Heap memory fragmentation as the array resizes, which SortedDictionary doesn't have.

    Performance

    Operations on a SortedDictionary always have O(log n) performance, regardless of where in the collection you're adding or removing items.  In contrast, SortedList has O(n) performance when you're altering the middle of the collection.  If you're adding or removing from the end (ie the largest item), then performance is O(log n), same as SortedDictionary (in practice, it will likely be slightly faster, due to the array items all being in the same area in memory, also called locality of reference).

    So, when should you use one and not the other?  As always with these sort of things, there are no hard-and-fast rules.  But generally, if you:

    - need to access items using their index within the collection
    - are populating the dictionary all at once from sorted data
    - aren't adding or removing keys once it's populated

    then use a SortedList.  But if you:

    - don't know how many items are going to be in the dictionary
    - are populating the dictionary from random, unsorted data
    - are adding & removing items randomly

    then use a SortedDictionary.  The default (again, there are no definite rules on these sorts of things!) should be to use SortedDictionary, unless there's a good reason to use SortedList, due to the bad performance of SortedList when altering the middle of the collection.
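    As a minimal usage sketch of that guidance (added here for illustration; the key/value pairs simply echo the earlier example, and the Program wrapper is just scaffolding):

        using System;
        using System.Collections.Generic;

        public static class Program
        {
            public static void Main()
            {
                // Bulk-loading data that arrives already sorted, with no further mutation
                // afterwards: the array-backed SortedList is the compact choice.
                var list = new SortedList<int, char> { { 3, 'a' }, { 5, 'z' }, { 8, 'm' } };
                Console.WriteLine(list.Keys[1]);     // 5 -- index-based views exist only on SortedList
                Console.WriteLine(list.Values[2]);   // 'm'
                list.TrimExcess();                   // shrink the backing arrays down to Count

                // Random inserts and removals: the tree-backed SortedDictionary keeps every
                // operation at O(log n) and never has to shift array contents.
                var dict = new SortedDictionary<int, char> { { 5, 'z' }, { 8, 'm' }, { 3, 'a' } };
                dict[7] = 'f';      // insert into the "middle" without moving anything
                dict.Remove(3);
                foreach (var pair in dict)           // iteration still comes back sorted by key
                    Console.WriteLine(pair.Key + " -> " + pair.Value);
            }
        }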

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about.  After analyzing the differences of time between a Dictionary with locking versus the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations.  Then, I made the classic blunder of thinking that because the original Dictionary with locking was faster for those write-heavy uses, it was the best choice for those types of tasks.  In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer is coding very early for a perceived (whether rightly-or-wrongly) performance gain and sacrificing good design and maintainability in the process.  At best, the performance gains are usually negligible and at worst, can either negatively impact performance, or can degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above.  I'm not talking about valid performance decisions.  There are decisions one should make when designing and writing an application that are valid performance decisions.  Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performance algorithms (linear search vs. binary search).  But these in my mind are macro optimizations.  The error is not in deciding to use a better data structure or algorithm, the anti-pattern as stated above is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking.  This would have been a mistake as I would be trading maintainability (ConcurrentDictionary requires no locking which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified and you don't risk the developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.  I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer.  I began college in in the 90s when C and C++ was king and hardware speed and memory were still relatively priceless commodities and not to be squandered.  In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance.  This is why in many cases such early code-bases were very hard to maintain.  I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead.  Now back then, that may have been a valid concern, but with today's modern hardware even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway) the results of removing method calls to speed up performance are negligible for the great majority of applications.  Now, obviously, there are those coding applications where speed is absolutely king (for example drivers, computer games, operating systems) where such sacrifices may be made.  
But I would strongly advise against such optimization because of its cost.  Many folks performing an optimization think it's always a win-win: they're simply adding speed to the application, so what could possibly be wrong with that?  What they don't realize is the cost of their choice.  For every piece of straight-forward code that you obfuscate with performance enhancements, you risk introducing bugs and adding to the long-term technical debt of the application.  The code can become so fragile over time that maintenance becomes a nightmare.  I've seen such applications in places I have worked.  I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system to try to squeeze out every last ounce of it.  Unfortunately, application stability often suffers as a result, and the code is very difficult for anyone other than the original designer to maintain. I've even seen this recently, when I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the C++0x standard implementation).  To me this was almost a joke.  Twice as slow sounds bad, but it's almost never as bad as you think -- especially if you're gaining safety.  The only time twice as slow really matters is when once was already too slow to begin with.  Think about it: two minutes is a slow response time because one minute is already slow.  But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, the difference is trivial!  The only way you'd ever really notice it would be when iterating a collection just for the sake of iterating (i.e. with no other operations).  To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can.  I know it can be a hard habit to break, especially if you started your career early on or in a language such as C, where developers tend to be very performance conscious.  But in reality, these types of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization.  I'm not sure where I first heard these, but they are so true: For beginners: Do not optimize. For experts: Do not optimize yet. If you're a beginner, resist the urge to optimize at all costs.  And if you are an expert, delay that decision.  As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient.  Chances are it will be network, database, or disk hits that are your slow-down, not your code.  As they say, 98% of your code's execution time is spent in 2% of your code, so premature optimization may add maintenance and safety debt that has no measurable impact.  Instead, code for maintainability and safety first; then, and only then, when you find a true bottleneck, go back and optimize.
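To make the trade-off concrete, here is a minimal C# sketch (mine, not from the original post) contrasting the two approaches discussed above -- a plain Dictionary guarded by a lock versus a ConcurrentDictionary.  The class names and the counter scenario are illustrative assumptions only:

    using System.Collections.Generic;
    using System.Collections.Concurrent;

    // Hand-rolled locking: every reader and writer must remember to take the
    // lock, and iterating safely means copying the contents under the lock.
    public class LockedCounterStore
    {
        private readonly Dictionary<string, int> _counts = new Dictionary<string, int>();
        private readonly object _sync = new object();

        public void Increment(string key)
        {
            lock (_sync)
            {
                int current;
                _counts.TryGetValue(key, out current);
                _counts[key] = current + 1;
            }
        }

        public int Get(string key)
        {
            lock (_sync)
            {
                int value;
                return _counts.TryGetValue(key, out value) ? value : 0;
            }
        }
    }

    // ConcurrentDictionary: no explicit lock to forget or misuse, and it is
    // safe to enumerate even while other threads are writing to it.
    public class ConcurrentCounterStore
    {
        private readonly ConcurrentDictionary<string, int> _counts =
            new ConcurrentDictionary<string, int>();

        public void Increment(string key)
        {
            _counts.AddOrUpdate(key, 1, (k, current) => current + 1);
        }

        public int Get(string key)
        {
            int value;
            return _counts.TryGetValue(key, out value) ? value : 0;
        }
    }

Both classes do the same job, but the second has far less locking code to get wrong -- which is exactly the maintainability and safety argument made above, and usually worth far more than a few microseconds either way.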

    Read the article

  • More Fun with C# Iterators and Generators

    - by James Michael Hare
    In my last post, I talked quite a bit about iterators and how they can be really powerful tools for filtering a list of items down to a subset of items.  This had both pros and cons over returning a full collection, which, in summary, were:   Pros: If traversal is only partial, does not have to visit rest of collection. If evaluation method is costly, only incurs that cost on elements visited. Adds little to no garbage collection pressure.    Cons: Very slight performance impact if you know caller will always consume all items in collection. And as we saw in the last post, that con for the cost was very, very small and only really became evident on very tight loops consuming very large lists completely.    One of the key items to note, though, is the garbage!  In the traditional (return a new collection) method, if you have a 1,000,000 element collection, and wish to transform or filter it in some way, you have to allocate space for that copy of the collection.  That is, say you have a collection of 1,000,000 items and you want to double every item in the collection.  Well, that means you have to allocate a collection to hold those 1,000,000 items to return, which is a lot especially if you are just going to use it once and toss it.   Iterators, though, don't have this problem.  Each time you visit the node, it would return the doubled value of the node (in this example) and not allocate a second collection of 1,000,000 doubled items.  Do you see the distinction?  In both cases, we're consuming 1,000,000 items.  But in one case we pass back each doubled item which is just an int (for example's sake) on the stack and in the other case, we allocate a list containing 1,000,000 items which then must be garbage collected.   So iterators in C# are pretty cool, eh?  Well, here's one more thing a C# iterator can do that a traditional "return a new collection" transformation can't!   It can return **unbounded** collections!   I know, I know, that smells a lot like an infinite loop, eh?  Yes and no.  Basically, you're relying on the caller to put the bounds on the list, and as long as the caller doesn't you keep going.  Consider this example:   public static class Fibonacci {     // returns the infinite fibonacci sequence     public static IEnumerable<int> Sequence()     {         int iteration = 0;         int first = 1;         int second = 1;         int current = 0;         while (true)         {             if (iteration++ < 2)             {                 current = 1;             }             else             {                 current = first + second;                 second = first;                 first = current;             }             yield return current;         }     } }   Whoa, you say!  Yes, that's an infinite loop!  What the heck is going on there?  Yes, that was intentional.  Would it be better to have a fibonacci sequence that returns only a specific number of items?  Perhaps, but that wouldn't give you the power to defer the execution to the caller.   The beauty of this function is it is as infinite as the sequence itself!  The fibonacci sequence is unbounded, and so is this method.  It will continue to return fibonacci numbers for as long as you ask for them.  Now that's not something you can do with a traditional method that would return a collection of ints representing each number.  In that case you would eventually run out of memory as you got to higher and higher numbers.  This method, though, never runs out of memory.   
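To put some code behind the garbage argument above, here is a minimal sketch (mine, not from the original post) of that "double every item" example written both ways; the method names are illustrative:

    using System.Collections.Generic;

    public static class DoublingExtensions
    {
        // Eager version: allocates an entire second list up front, even if
        // the caller only ever looks at the first few items.
        public static List<int> DoubleAllEager(this IEnumerable<int> source)
        {
            var results = new List<int>();
            foreach (var item in source)
            {
                results.Add(item * 2);
            }
            return results;
        }

        // Iterator version: yields each doubled value on demand, so no
        // second collection is ever allocated.
        public static IEnumerable<int> DoubleAllLazy(this IEnumerable<int> source)
        {
            foreach (var item in source)
            {
                yield return item * 2;
            }
        }
    }

Both forms hand the caller the same 1,000,000 doubled values; the difference is that the eager form allocates a second 1,000,000-element list that must later be garbage collected, while the iterator form only ever has the current int in flight.  The Fibonacci.Sequence() method above takes the same deferred approach -- it just never runs out of values.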
Now, that said, you do have to know when you use it that it is an infinite collection and bound it appropriately.  Fortunately, LINQ provides a lot of these extension methods for you!   Let's say you only want the first 10 Fibonacci numbers:       foreach(var fib in Fibonacci.Sequence().Take(10))     {         Console.WriteLine(fib);     }   Or let's say you only want the Fibonacci numbers that are less than 100:       foreach(var fib in Fibonacci.Sequence().TakeWhile(f => f < 100))     {         Console.WriteLine(fib);     }   So, you see, one of the nice things about iterators is their power to work with collections of virtually any size (even infinite ones) without adding the garbage collection overhead of making new collections.    You can also do fun things like this to make a more "fluent" alternative to classic for loops:   // A set of integer generator extension methods public static class IntExtensions {     // Begins counting to infinity; use To() to range this.     public static IEnumerable<int> Every(this int start)     {         // deliberately no loop condition; keeps going         // to infinity for as long as values are pulled.         for (var i = start; ; ++i)         {             yield return i;         }     }     // Begins counting to infinity by the given step value; use To() to range this.     public static IEnumerable<int> Every(this int start, int byEvery)     {         // deliberately no loop condition; keeps going         // to infinity for as long as values are pulled.         for (var i = start; ; i += byEvery)         {             yield return i;         }     }     // Counts from start up to end, inclusive.     public static IEnumerable<int> To(this int start, int end)     {         for (var i = start; i <= end; ++i)         {             yield return i;         }     }     // Ranges an existing count by specifying its upper bound, inclusive.     public static IEnumerable<int> To(this IEnumerable<int> collection, int end)     {         return collection.TakeWhile(item => item <= end);     } }   Note that To() comes in two versions -- one that extends an int and one that extends an IEnumerable<int> -- and Every() has an overload that takes a step value.  This gives you more power in chaining, whether you start from an existing collection or from a plain int.  This lets you do things like:   // count from 1 to 30 foreach(var i in 1.To(30)) {     Console.WriteLine(i); }     // count from 0 to 10 by 2s foreach(var i in 0.Every(2).To(10)) {     Console.WriteLine(i); }     // or, if you want an infinite sequence counting by 5s until something inside breaks you out... foreach(var i in 0.Every(5)) {     if (someCondition)     {         break;     }     ... }     Yes, those are kinda play functions and not particularly useful, but they show some of the power of generators and extension methods to form a fluent interface.   So what do you think?  What are some of your favorite generators and iterators?

    Read the article

  • The Application Architecture Domain

    - by Michael Glas
    I have been spending a lot of time thinking about Application Architecture in the context of EA. More specifically, as an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture Domain? There are several definitions of Application Architecture. TOGAF says “The objective here [in Application Architecture] is to define the major kinds of application system necessary to process the data and support the business”. FEA says the Application Architecture “Defines the applications needed to manage the data and support the business functions”. I agree with these definitions. They reflect what the Application Architecture domain does. However, they need to be decomposed to be practical. I find it useful to define a set of views into the Application Architecture domain. These views reflect what an EA needs to consider when working with/in the Applications Architecture domain. These viewpoints are, at a high level: Capability View: This view reflects how applications align with business capabilities. It is a superset of the following views when viewed in aggregate. By looking at the Application Architecture domain in terms of the business capabilities it supports, you get a good perspective on how those applications are directly supporting the business. Technology View: The technology view reflects the underlying technology that makes up the applications. Based on the number of rationalization activities I have seen (more specifically application rationalization), the phrase “complexity equals cost” drives the importance of the technology view, especially when attempting to reduce that complexity through standardization-type activities. Some of the technology components to be considered are: Software: The application itself as well as the software the application relies on to function (web servers, application servers). Infrastructure: The underlying hardware and network components required by the application and supporting application software. Development: How the application is created and maintained. This encompasses development components that are part of the application itself (i.e. customizable functions), as well as bolt-on development through web services, APIs, etc. The maintenance process itself also falls under this view. Integration: The interfaces that the application provides for integration as well as the integrations to other applications and data sources the application requires to function. Type: Reflects the kind of application (mash-up, 3-tiered, etc.). (Note: functional types [CRM, HCM, etc.] are reflected under the capability view). Organization View: Organizations are made up of people, and those people use applications to do their jobs. Trying to define the application architecture domain without taking the organization that will use/fund/change it into consideration is like trying to design a car without thinking about who will drive it (i.e. you may end up building a Formula 1 car for a family of 5 that is really looking for a minivan). This view reflects the people aspect of the application. It includes: Ownership: Who ‘owns’ the application? This will usually reflect primary funding and utilization, but not always. Funding: Who funds both the acquisition/creation as well as the on-going maintenance (funding to create/change/operate)? Change: Who can/does request changes to the application, and what process do they follow? Utilization: Who uses the application, how often do they use it, and how do they use it? 
Support: Which organization is responsible for the on-going support of the application? Information View: Whether or not you subscribe to the view that “information drives the enterprise”, it is a fact that information is critical. The management, creation, and organization of that information are primary functions of enterprise applications. This view reflects how the applications are tied to information (or at a higher level – how the Application Architecture domain relates to the Information Architecture domain). It includes: Access: The application is the mechanism by which end users access information. This could be through a primary application (i.e. CRM application), or through an information access type application (a BI application as an example). Creation: Applications create data in order to provide information to end-users. (I.e. an application creates an order to be used by an end-user as part of the fulfillment process). Consumption: Describes the data required by applications to function (i.e. a product id is required by a purchasing application to create an order. Application Service View: Organizations today are striving to be more agile. As an EA, I need to provide an architecture that supports this agility. One of the primary ways to achieve the required agility in the application architecture domain is through the use of ‘services’ (think SOA, web services, etc.). Whether it is through building applications from the ground up utilizing services, service enabling an existing application, or buying applications that are already ‘service enabled’, compartmentalizing application functions for re-use helps enable flexibility in the use of those applications in support of the required business agility. The applications service view consists of: Services: Here, I refer to the generic definition of a service “a set of related software functionalities that can be reused for different purposes, together with the policies that should control its usage”. Functions: The activities within an application that are not available / applicable for re-use. This view is helpful when identifying duplication functions between applications that are not service enabled. Delivery Model View: It is hard to talk about EA today without hearing the terms ‘cloud’ or shared services.  Organizations are looking at the ways their applications are delivered for several reasons, to reduce cost (both CAPEX and OPEX), to improve agility (time to market as an example), etc.  From an EA perspective, where/how an application is deployed has impacts on the overall enterprise architecture. From integration concerns to SLA requirements to security and compliance issues, the Enterprise Architect needs to factor in how applications are delivered when designing the Enterprise Architecture. This view reflects how applications are delivered to end-users. The delivery model view consists of different types of delivery mechanisms/deployment options for applications: Traditional: Reflects non-cloud type delivery options. The most prevalent consists of an application running on dedicated hardware (usually specific to an environment) for a single consumer. Private Cloud: The application runs on infrastructure provisioned for exclusive use by a single organization comprising multiple consumers. Public Cloud: The application runs on infrastructure provisioned for open use by the general public. 
Hybrid: The application is deployed on two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability. While by no means comprehensive, I find that applying these views to the application domain gives a good understanding of what an EA needs to consider when effecting changes to the Application Architecture domain.Finally, the application architecture domain is one of several architecture domains that an EA must consider when developing an overall Enterprise Architecture. The Oracle Enterprise Architecture Framework defines four Primary domains: Business Architecture, Application Architecture, Information Architecture, and Technology Architecture. Each domain links to the others either directly or indirectly at some point. Oracle links them at a high level as follows:Business Capabilities and/or Business Processes (Business Architecture), links to the Applications that enable the capability/process (Applications Architecture – COTS, Custom), links to the Information Assets managed/maintained by the Applications (Information Architecture), links to the technology infrastructure upon which all this runs (Technology Architecture - integration, security, BI/DW, DB infrastructure, deployment model). There are however, times when the EA needs to narrow focus to a particular domain for some period of time. These views help me to do just that.
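Purely as an illustration (not something from the original post), here is one way an EA team might capture this view taxonomy as a lightweight application catalog in C#; the type and property names are my own assumptions:

    using System.Collections.Generic;

    // Hypothetical catalog entry an EA team might keep per application,
    // organized along the views described above.
    public enum DeliveryModel { Traditional, PrivateCloud, PublicCloud, Hybrid }

    public class ApplicationProfile
    {
        public string Name { get; set; }

        // Capability view: business capabilities the application supports.
        public List<string> BusinessCapabilities { get; set; }

        // Technology view: software, infrastructure, integrations, etc.
        public List<string> TechnologyComponents { get; set; }

        // Organization view: ownership, funding, change and utilization.
        public string OwningOrganization { get; set; }
        public string FundingOrganization { get; set; }

        // Information view: data the application creates and consumes.
        public List<string> InformationCreated { get; set; }
        public List<string> InformationConsumed { get; set; }

        // Application service view: reusable services vs. internal functions.
        public List<string> ExposedServices { get; set; }
        public List<string> InternalFunctions { get; set; }

        // Delivery model view: how the application reaches its end users.
        public DeliveryModel Delivery { get; set; }
    }

Even a simple inventory shaped like this makes it easier to spot duplicated functions across the capability view, or concentration risk in the delivery model view, before effecting changes to the Application Architecture domain.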

    Read the article

  • The Business of Winning Innovation: An Exclusive Blog Series

    - by Kerrie Foy
    "The Business of Winning Innovation” is a series of articles authored by Oracle Agile PLM experts on what it takes to make innovation a successful and lucrative competitive advantage. Our customers have proven Agile PLM applications to be enormously flexible and comprehensive, so we’ve launched this article series to showcase some of the most fascinating, value-packed use cases. In this article by Keith Colonna, we kick-off the series by taking a look at the science side of innovation within the Consumer Products industry and how PLM can help companies innovate faster, cheaper, smarter. This article will review how innovation has become the lifeline for growth within consumer products companies and how certain companies are “winning” by creating a competitive advantage for themselves by taking a more enterprise-wide,systematic approach to “innovation”.   Managing the Science of Innovation within the Consumer Products Industry By: Keith Colonna, Value Chain Solution Manager, Oracle The consumer products (CP) industry is very mature and competitive. Most companies within this industry have saturated North America (NA) with their products thus maximizing their NA growth potential. Future growth is expected to come from either expansion outside of North America and/or by way of new ideas and products. Innovation plays an integral role in both of these strategies, whether you’re innovating business processes or the products themselves, and may cause several challenges for the typical CP company, Becoming more innovative is both an art and a science. Most CP companies are very good at the art of coming up with new innovative ideas, but many struggle with perfecting the science aspect that involves the best practice processes that help companies quickly turn ideas into sellable products and services. Symptoms and Causes of Business Pain Struggles associated with the science of innovation show up in a variety of ways, like: · Establishing and storing innovative product ideas and data · Funneling these ideas to the chosen few · Time to market cycle time and on-time launch rates · Success rates, or how often the best idea gets chosen · Imperfect decision making (i.e. the ability to kill projects that are not projected to be winners) · Achieving financial goals · Return on R&D investment · Communicating internally and externally as more outsource partners are added globally · Knowing your new product pipeline and project status These challenges (and others) can be consolidated into three root causes: A lack of visibility Poor data with limited access The inability to truly collaborate enterprise-wide throughout your extended value chain Choose the Right Remedy Product Lifecycle Management (PLM) solutions are uniquely designed to help companies solve these types challenges and their root causes. However, PLM solutions can vary widely in terms of configurability, functionality, time-to-value, etc. Business leaders should evaluate PLM solution in terms of their own business drivers and long-term vision to determine the right fit. Many of these solutions are point solutions that can help you cure only one or two business pains in the short term. Others have been designed to serve other industries with different needs. Then there are those solutions that demo well but are owned by companies that are either unable or unwilling to continuously improve their solution to stay abreast of the ever changing needs of the CP industry to grow through innovation. 
What the Right PLM Solution Should Do for You Based on more than twenty years working in the CP industry, I recommend investing in a single solution that can help you solve all of the issues associated with the science of innovation in a totally integrated fashion. By integration I mean the (1) integration of the all of the processes associated with the development, maintenance and delivery of your product data, and (2) the integration, or harmonization of this product data with other downstream sources, like ERP, product catalogues and the GS1 Global Data Synchronization Network (or GDSN, which is now a CP industry requirement for doing business with most retailers). The right PLM solution should help you: Increase Revenue. A best practice PLM solution should help a company grow its revenues by consolidating product development cycle-time and helping companies get new and improved products to market sooner. PLM should also eliminate many of the root causes for a product being returned, refused and/or reclaimed (which takes away from top-line growth) by creating an enterprise-wide, collaborative, workflow-driven environment. Reduce Costs. A strong PLM solution should help shave many unnecessary costs that companies typically take for granted. Rationalizing SKU’s, components (ingredients and packaging) and suppliers is a major opportunity at most companies that PLM should help address. A natural outcome of this rationalization is lower direct material spend and a reduction of inventory. Another cost cutting opportunity comes with PLM when it helps companies avoid certain costs associated with process inefficiencies that lead to scrap, rework, excess and obsolete inventory, poor end of life administration, higher cost of quality and regulatory and increased expediting. Mitigate Risk. Risks are the hardest to quantify but can be the most costly to a company. Food safety, recalls, line shutdowns, customer dissatisfaction and, worst of all, the potential tarnishing of your brands are a few of the debilitating risks that CP companies deal with on a daily basis. These risks are so uniquely severe that they require an enterprise PLM solution specifically designed for the CP industry that safeguards product information and processes while still allowing the art of innovation to flourish. Many CP companies have already created a winning advantage by leveraging a single, best practice PLM solution to establish an enterprise-wide, systematic approach to innovation. Oracle’s Answer for the Consumer Products Industry Oracle is dedicated to solving the growth and innovation challenges facing the CP industry. Oracle’s Agile Product Lifecycle Management for Process solution was originally developed with and for CP companies and is driven by a specialized development staff solely focused on maintaining and continuously improving the solution per the latest industry requirements. Agile PLM for Process helps CP companies handle all of the processes associated with managing the science of the innovation process, including: specification management, new product development/project and portfolio management, formulation optimization, supplier management, and quality and regulatory compliance to name a few. And as I mentioned earlier, integration is absolutely critical. Many Oracle CP customers, both with Oracle ERP systems and non-Oracle ERP systems, report benefits from Oracle’s Agile PLM for Process. 
In future articles we will explain in greater detail how both existing Oracle customers (like Gallo, Smuckers, Land-O-Lakes and Starbucks) and new Oracle customers (like ConAgra, Tyson, McDonalds and Heinz) have all realized the benefits of Agile PLM for Process and its integration with their ERP systems. More to Come: Stay tuned for more articles in our blog series “The Business of Winning Innovation.” While we will also feature articles focused on other industries, look forward to more on how Agile PLM for Process addresses the innovation challenges facing the CP industry. Additional topics include: Innovation Data Management (IDM), New Product Development (NPD), Product Quality Management (PQM), Menu Management, Private Label Management, and more! Watch this video for more info about Agile PLM for Process.

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspxThis week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites we need to handle. One of the biggest ones is that although the support team all have MSDN accounts the local desktops they work on are not ideal for running most of the labs as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area: MSDN software can now be used on Azure VM You don't pay for Azure VM's when they are no longer used  Since the support team only have limited experience of Windows Azure and the organisation also have an Enterprise Agreement we decided it would be best value for money to spin up a training lab in a subscription on the EA and then we can turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed. In order to achieve this I wanted to create a powershell script which would setup my training lab. The aim was to create 18 VM's which would be based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below The Start & Variables The below text will setup the powershell environment and some variables which I will use elsewhere in the script. It will also import the Azure Powershell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure & Powershell so already know how to do this. Set-ExecutionPolicy Unrestrictedcls $startTime = get-dateImport-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"# Azure Publisher Settings $azurePublisherSettings = '<Your settings file>.publishsettings'  # Subscription Details $subscriptionName = "<Your subscription name>" $defaultStorageAccount = "<Your default storage account>"  # Affinity Group Details $affinityGroup = '<Your affinity group>' $dataCenter = 'West Europe' # From Get-AzureLocation  # VM Details $baseVMName = 'TRN' $adminUserName = '<Your admin username>' $password = '<Your admin password>' $size = 'Medium' $vmTemplate = '<The name of your VM template image>' $rdpFilePath = '<File path to save RDP files to>' $machineSettingsPath = '<File path to save machine info to>'    Functions In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM. 
This will do the following actions: If the VM already exists it will be deleted Create the cloud service Create the VM from the template I have created Add an endpoint so we can RDP to them all over the same port Download the RDP file so there is a short cut the trainees can easily access the machine via Write settings for the machine to a log file  function CreateVM($machineNo) { # Specify a name for the new VM $machineName = "$baseVMName-$machineNo" Write-Host "Creating VM: $machineName"       # Get the Azure VM Image      $myImage = Get-AzureVMImage $vmTemplate   #If the VM already exists delete and re-create it $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName if($existingVm -ne $null) { Write-Host "VM already exists so deleting it" Remove-AzureVM -Name $machineName -ServiceName $serviceName }   "Creating Service" $serviceName = "bupa-azure-train-$machineName" Remove-AzureService -Force -ServiceName $serviceName New-AzureService -Location $dataCenter -ServiceName $serviceName   Write-Host "Creating VM: $machineName" New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password  Write-Host "Updating the RDP endpoint for $machineName" Get-AzureVM -name $machineName -ServiceName $serviceName ` | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 ` | Update-AzureVM    Write-Host "Get the RDP File for machine $machineName" $machineRDPFilePath = "$rdpFilePath\$machineName.rdp" Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"   WriteMachineSettings "$machineName" "$serviceName" }    The delete machine settings function is used to delete the log file before we start re-running the process.  function DeleteMachineSettings() { Write-Host "Deleting the machine settings output file" [System.IO.File]::Delete("$machineSettingsPath"); }    The write machine settings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VM's to our infrastructure team to be able to configure access to all of the VM's    function WriteMachineSettings([string]$vmName, [string]$vmServiceName) { Write-Host "Writing to the machine settings output file"   $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP   $sb = new-object System.Text.StringBuilder $sb.Append("Service Name: "); $sb.Append($vm.ServiceName); $sb.Append(", "); $sb.Append("VM: "); $sb.Append($vm.Name); $sb.Append(", "); $sb.Append("RDP Public Port: "); $sb.Append($vmEndpoint.Port); $sb.Append(", "); $sb.Append("Public DNS: "); $sb.Append($vmEndpoint.Vip); $sb.AppendLine(""); [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());  } # end functions    Rest of Script In the rest of the script it is really just the bit that orchestrates the actions we want to happen. 
It will load the publisher settings, select the Azure subscription and then loop around the CreateVM function and create 16 VM's  Import-AzurePublishSettingsFile $azurePublisherSettings Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount Select-AzureSubscription -SubscriptionName $subscriptionName  DeleteMachineSettings    "Starting creating Bupa International Azure Training Lab" $numberOfVMs = 16  for ($index=1; $index -le $numberOfVMs; $index++) { $vmNo = "$index" CreateVM($vmNo); }    "Finished creating Bupa International Azure Training Lab" # Give it a Minute Start-Sleep -s 60  $endTime = get-date "Script run time " + ($endTime - $startTime)    Conclusion As you can see there is nothing too fancy about this script but in our case of creating a small isolated training lab which is not connected to our corporate network then we can easily use this to provision the lab. Im sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note are that there are some soft limits in Azure about the number of cores and services your subscription can use. You may need to contact the Azure support team to be able to increase this limit. In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time consuming for our ops team to do. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the costing of this test lab to be very small, especially considering we have EA pricing. As a ball park I think my 18 lab VM training environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of the creation of a single VM on premise.

    Read the article

  • The Faces in the Crowdsourcing

    - by Applications User Experience
    By Jeff Sauro, Principal Usability Engineer, Oracle Imagine having access to a global workforce of hundreds of thousands of people who can perform tasks or provide feedback on a design quickly and almost immediately. Distributing simple tasks not easily done by computers to the masses is called "crowdsourcing" and until recently was an interesting concept, but due to practical constraints wasn't used often. Enter Amazon.com. For five years, Amazon has hosted a service called Mechanical Turk, which provides an easy interface to the crowds. The service has almost half a million registered, global users performing a quarter of a million human intelligence tasks (HITs). HITs are submitted by individuals and companies in the U.S. and pay from $.01 for simple tasks (such as determining if a picture is offensive) to several dollars (for tasks like transcribing audio). What do we know about the people who toil away in this digital crowd? Can we rely on the work done in this anonymous marketplace? A rendering of the actual Mechanical Turk (from Wikipedia) Knowing who is behind Amazon's Mechanical Turk is fitting, considering the history of the actual Mechanical Turk. In the late 1800's, a mechanical chess-playing machine awed crowds as it beat master chess players in what was thought to be a mechanical miracle. It turned out that the creator, Wolfgang von Kempelen, had a small person (also a chess master) hiding inside the machine operating the arms to provide the illusion of automation. The field of human computer interaction (HCI) is quite familiar with gathering user input and incorporating it into all stages of the design process. It makes sense then that Mechanical Turk was a popular discussion topic at the recent Computer Human Interaction usability conference sponsored by the Association for Computing Machinery in Atlanta. It is already being used as a source for input on Web sites (for example, Feedbackarmy.com) and behavioral research studies. Two papers shed some light on the faces in this crowd. One paper tells us about the shifting demographics from mostly stay-at-home moms to young men in India. The second paper discusses the reliability and quality of work from the workers. Just who exactly would spend time doing tasks for pennies? In "Who are the crowdworkers?" University of California researchers Ross, Silberman, Zaldivar and Tomlinson conducted a survey of Mechanical Turk worker demographics and compared it to a similar survey done two years before. The initial survey reported workers consisting largely of young, well-educated women living in the U.S. with annual household incomes above $40,000. The more recent survey reveals a shift in demographics largely driven by an influx of workers from India. Indian workers went from 5% to over 30% of the crowd, and this block is largely male (two-thirds) with a higher average education than U.S. workers, and 64% report an annual income of less than $10,000 (keeping in mind $1 has a lot more purchasing power in India). This shifting demographic certainly has implications as language and culture can play critical roles in the outcome of HITs. Of course, the demographic data came from paying Turkers $.10 to fill out a survey, so there is some question about both a self-selection bias (characteristics which cause Turks to take this survey may be unrepresentative of the larger population), not to mention whether we can really trust the data we get from the crowd. 
Crowds can perform tasks or provide feedback on a design quickly and almost immediately for usability testing. (Photo attributed to victoriapeckham Flikr While having immediate access to a global workforce is nice, one major problem with Mechanical Turk is the incentive structure. Individuals and companies that deploy HITs want quality responses for a low price. Workers, on the other hand, want to complete the task and get paid as quickly as possible, so that they can get on to the next task. Since many HITs on Mechanical Turk are surveys, how valid and reliable are these results? How do we know whether workers are just rushing through the multiple-choice responses haphazardly answering? In "Are your participants gaming the system?" researchers at Carnegie Mellon (Downs, Holbrook, Sheng and Cranor) set up an experiment to find out what percentage of their workers were just in it for the money. The authors set up a 30-minute HIT (one of the more lengthy ones for Mechanical Turk) and offered a very high $4 to those who qualified and $.20 to those who did not. As part of the HIT, workers were asked to read an email and respond to two questions that determined whether workers were likely rushing through the HIT and not answering conscientiously. One question was simple and took little effort, while the second question required a bit more work to find the answer. Workers were led to believe other factors than these two questions were the qualifying aspect of the HIT. Of the 2000 participants, roughly 1200 (or 61%) answered both questions correctly. Eighty-eight percent answered the easy question correctly, and 64% answered the difficult question correctly. In other words, about 12% of the crowd were gaming the system, not paying enough attention to the question or making careless errors. Up to about 40% won't put in more than a modest effort to get paid for a HIT. Young men and those that considered themselves in the financial industry tended to be the most likely to try to game the system. There wasn't a breakdown by country, but given the demographic information from the first article, we could infer that many of these young men come from India, which makes language and other cultural differences a factor. These articles raise questions about the role of crowdsourcing as a means for getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raises some interesting questions. How complex of a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products? Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human computer interaction and enterprise computing. References: Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system?: screening mechanical turk workers. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010). CHI '10. ACM, New York, NY, 2399-2402. 
Link: http://doi.acm.org/10.1145/1753326.1753688 Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and Tomlinson, B. 2010. Who are the crowdworkers?: shifting demographics in mechanical turk. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010). CHI EA '10. ACM, New York, NY, 2863-2872. Link: http://doi.acm.org/10.1145/1753846.1753873

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
    By Christine Mellon Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be “figured out,” accommodated and engaged – at a safe distance, of course.  At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential.   The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve.  From our frustrations with empty careers that did not fulfill us, from our opposition to “the man,” from our sharp memories of our parents’ toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better. One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible.  They are the “Experiential Employee”, with a passion for growing in diverse ways and expanding personal and professional horizons.  Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world.  How much richer is the organization that nurtures and leverages this inclination?  Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems, if we are to optimize what future generations have to offer.   Accelerating Professional Development In spite of our Boomer grumblings about Millennials’ “unrealistic” expectations, the truth is that we have a well-matched set of circumstances.  We have executives-in-waiting who want to learn quickly and a concurrent, urgent need to ramp up their development time, based on anticipated high levels of retirement in the next 10+ years.  Since we need to rapidly skill up these heirs to the corporate kingdom, isn’t it a fortunate coincidence that they are hungry to learn, develop and move fluidly throughout our organizations??  So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development.   We have already evolved from classroom-based models to diverse instructional methods.  The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way.   Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes.  This might take the form of 2 or more employees owning aspects of what once fell under a single role.  While one might argue this would mean duplication of resources, it could be a short term cost while employees come up to speed.  And the potential benefits would be numerous:  leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson R., 1989, 1999).  
This well-researched teaching strategy, where students support each other in the absorption and application of new information, has been shown to deliver faster, more efficient learning, and greater retention. Alternately, perhaps short term contracts with exiting retirees, or former retirees, to help facilitate the development of following generations may have merit.  Again, a short term cost, certainly.  However, the gains realized in shortening the learning curve, and strengthening engagement are substantial and lasting. Ultimately, there needs to be creative thinking applied for each organization on how to accelerate the capabilities of our future leaders in unique ways that mesh with current culture. The manner in which performance is evaluated must finally shift as well.  Employees will need to be assessed on how well they have developed key skills and capabilities vs. end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable in placing greater and greater weight on competencies vs. tasks, we will realize increased organizational agility via this new generation of workers, which will be further enhanced by their natural flexibility and appetite for change. Revisiting Succession  For many years, organizations have failed to deliver desired succession planning outcomes.  According to CEB’s 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential, Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes.   We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well.  Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities, without consideration of a singular career destination.  Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping.  And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with the individual employee’s level of learning agility. While this may seem daunting, it is necessary and long overdue.  We have talked about the criticality of competency-based succession, however, we have not lived up to our own rhetoric.  Many Boomers have experienced the same frustration in our careers; knowing we are capable of shining in a particular role, but being denied the opportunity due to how our career history lined up, on paper, with documented job requirements.  These requirements usually emphasized past jobs/titles and specific tasks, versus capabilities, drive and willingness (let alone determination) to learn new things.  How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified? Realizing Diversity Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity.  Millennial employees assume a diverse workforce, and are startled by anything less.  Their social tolerance, nurtured by wide and diverse networks, is unprecedented.  College graduates expect a similar landscape in the “real world” to what they experienced throughout their lives.  
They appreciate and seek out divergent points of view and experiences without needing any persuasion.  The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion.  This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations. Future Perfect The Experiential Employee is operating more as a free agent than a long term player, and their commitment will essentially last as long as meaningful organizational culture and personal/professional opportunities keep their interest.  As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that.  Generations to come will challenge organizations to excel in how they identify, manage and nurture talent. Let’s support and revel in the future that we’ve helped invent, rather than lament what we think has been lost.  After all, the future is always connected to the past.  And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist and politico:  “Nothing is Lost, Nothing is Created, and Everything is Transformed.” Christine has over 25 years of diverse HR experience.  She has held HR consulting and corporate roles, including CHRO positions for Echostar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm, successfully acquired by Intel. Christine is a resource to Oracle clients, to assist in Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle. 

    Read the article

  • When is a Seek not a Seek?

    - by Paul White
    The following script creates a single-column clustered table containing the integers from 1 to 1,000 inclusive. IF OBJECT_ID(N'tempdb..#Test', N'U') IS NOT NULL DROP TABLE #Test ; GO CREATE TABLE #Test ( id INTEGER PRIMARY KEY CLUSTERED ); ; INSERT #Test (id) SELECT V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 1000 ; Let’s say we need to find the rows with values from 100 to 170, excluding any values that divide exactly by 10.  One way to write that query would be: SELECT T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; That query produces a pretty efficient-looking query plan: Knowing that the source column is defined as an INTEGER, we could also express the query this way: SELECT T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; We get a similar-looking plan: If you look closely, you might notice that the line connecting the two icons is a little thinner than before.  The first query is estimated to produce 61.9167 rows – very close to the 63 rows we know the query will return.  The second query presents a tougher challenge for SQL Server because it doesn’t know how to predict the selectivity of the modulo expression (T.id % 10 > 0).  Without that last line, the second query is estimated to produce 68.1667 rows – a slight overestimate.  Adding the opaque modulo expression results in SQL Server guessing at the selectivity.  As you may know, the selectivity guess for a greater-than operation is 30%, so the final estimate is 30% of 68.1667, which comes to 20.45 rows. The second difference is that the Clustered Index Seek is costed at 99% of the estimated total for the statement.  For some reason, the final SELECT operator is assigned a small cost of 0.0000484 units; I have absolutely no idea why this is so, or what it models.  Nevertheless, we can compare the total cost for both queries: the first one comes in at 0.0033501 units, and the second at 0.0034054.  The important point is that the second query is costed very slightly higher than the first, even though it is expected to produce many fewer rows (20.45 versus 61.9167). If you run the two queries, they produce exactly the same results, and both complete so quickly that it is impossible to measure CPU usage for a single execution.  We can, however, compare the I/O statistics for a single run by running the queries with STATISTICS IO ON: Table '#Test'. Scan count 63, logical reads 126, physical reads 0. Table '#Test'. Scan count 01, logical reads 002, physical reads 0. The query with the IN list uses 126 logical reads (and has a ‘scan count’ of 63), while the second query form completes with just 2 logical reads (and a ‘scan count’ of 1).  It is no coincidence that 126 = 63 * 2, by the way.  It is almost as if the first query is doing 63 seeks, compared to one for the second query. In fact, that is exactly what it is doing.  There is no indication of this in the graphical plan, or the tool-tip that appears when you hover your mouse over the Clustered Index Seek icon.  
To see the 63 seek operations, you have click on the Seek icon and look in the Properties window (press F4, or right-click and choose from the menu): The Seek Predicates list shows a total of 63 seek operations – one for each of the values from the IN list contained in the first query.  I have expanded the first seek node to show the details; it is seeking down the clustered index to find the entry with the value 101.  Each of the other 62 nodes expands similarly, and the same information is contained (even more verbosely) in the XML form of the plan. Each of the 63 seek operations starts at the root of the clustered index B-tree and navigates down to the leaf page that contains the sought key value.  Our table is just large enough to need a separate root page, so each seek incurs 2 logical reads (one for the root, and one for the leaf).  We can see the index depth using the INDEXPROPERTY function, or by using the a DMV: SELECT S.index_type_desc, S.index_depth FROM sys.dm_db_index_physical_stats ( DB_ID(N'tempdb'), OBJECT_ID(N'tempdb..#Test', N'U'), 1, 1, DEFAULT ) AS S ; Let’s look now at the Properties window when the Clustered Index Seek from the second query is selected: There is just one seek operation, which starts at the root of the index and navigates the B-tree looking for the first key that matches the Start range condition (id >= 101).  It then continues to read records at the leaf level of the index (following links between leaf-level pages if necessary) until it finds a row that does not meet the End range condition (id <= 169).  Every row that meets the seek range condition is also tested against the Residual Predicate highlighted above (id % 10 > 0), and is only returned if it matches that as well. You will not be surprised that the single seek (with a range scan and residual predicate) is much more efficient than 63 singleton seeks.  It is not 63 times more efficient (as the logical reads comparison would suggest), but it is around three times faster.  Let’s run both query forms 10,000 times and measure the elapsed time: DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON; SET STATISTICS XML OFF; ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id IN ( 101,102,103,104,105,106,107,108,109, 111,112,113,114,115,116,117,118,119, 121,122,123,124,125,126,127,128,129, 131,132,133,134,135,136,137,138,139, 141,142,143,144,145,146,147,148,149, 151,152,153,154,155,156,157,158,159, 161,162,163,164,165,166,167,168,169 ) ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; GO DECLARE @i INTEGER, @n INTEGER = 10000, @s DATETIME = GETDATE() ; SET NOCOUNT ON ; WHILE @n > 0 BEGIN SELECT @i = T.id FROM #Test AS T WHERE T.id >= 101 AND T.id <= 169 AND T.id % 10 > 0 ; SET @n -= 1; END ; PRINT DATEDIFF(MILLISECOND, @s, GETDATE()) ; On my laptop, running SQL Server 2008 build 4272 (SP2 CU2), the IN form of the query takes around 830ms and the range query about 300ms.  The main point of this post is not performance, however – it is meant as an introduction to the next few parts in this mini-series that will continue to explore scans and seeks in detail. When is a seek not a seek?  When it is 63 seeks © Paul White 2011 email: [email protected] twitter: @SQL_kiwi

    Read the article

  • first install for windows eight.....da beta

    - by raysmithequip
    The W8 preview is now installed and I am enjoying it.  I remember the learning curve of my first unix machine back in the eighties, this ain't that.It is normal for me to do the first os install with a keyboard and low end monitor...you never know what you'll encounter out in the field.  The OS took like a fish to water.  I used a low end INTEL motherboard dp55w I gathered on the cheap, an 1157 i5 from the used bin a pair of 6 gig ddr3 sticks, a rosewell 550 watt power supply a cheap used twenty buck sub 200g wd sata drive, a half working dvd burner and an asus fanless nvidia vid card, not a great one but Sub 50.00 on newey eggey...I did have to hunt the ms forums for a key and of course to activate the thing, if dos would of needed this outmoded ritual, we would still be on cpm and osborne would be a household name, of course little do people know that this ritual was common as far back as the seventies on att unix installs....not, but it was possible, I used to joke about when I ran a bbs, what hell would of been wrought had dos 3.2 machines been required to dial into my bbs to send fido mail to ms and wait for an acknowledgement.  All in all the thing was pushing a seven on the ms richter scale, not including the vid card, sadly it came in at just a tad over three....I wanted to evaluate it for a possible replacement on critical machines that in the past went down due to a vid card fan failure....you have no idea what a customer thinks when you show them a failed vid card fan..."you mean that little plastic piece of junk caused all this!!??!!!"...yea man.  Some production machines don't need any sort of vid, I will at least keep it on the maybe list for those, MTBF is a very important factor, some big box stores should put percentage of failure rate within 24 month estimates on the outside of the carton for sure.  And a warning that the power supplies are already at their limit.  Let's face it, today even 550w can be iffy.A few neat eye candy improvements over the earlier windows is nice, the metro screen is nice, anyone who has used a newer phone recently will intuitively drag their fingers across the screen....lot of good that was with no mouse or touch screen though.  Lucky me, I have been using windows since day one, I still have a copy of win 2.0 (and every other version) for no good reason.  Still the old ix collection of disks is much larger, recompiling any kernal is another silly ritual, same machine, different day, same recompile...argh. Rh is my all time fav, mandrake was always missing something, like it rewrote the init file or something, novell is ok as long as you stay on the beaten path and of course ubuntu normally recompiles with the same errors consistantly....makes life easy that way....no errors on windows eight, just a screen that did not match the installed hardware, natuarally I alt tabbed right out of it, then hit the flag key to find the start menu....no start button. I miss the start button already. Keyboard cowboy funnin and I was browsing the harddrive, nothing stunning there, I like that, means I can find stuff. Only I can't find what I want, the start button....the start menu is that first screen for touch tablets. No biggie for useruser, that is where they will want to be, I can see that. Admins won't want to be there, it is easy enough to get the control panel a bazzilion other ways though, just not the start button. (see a pattern here?). 
Personally, from the keyboard I find it fun to hit the carets along the location bar at the top of the explorer screen with tabs and arrows and choose SHOW ALL CONTROL PANEL ITEMS, or thereabouts. Bottom line, I love seven and I'll love eight even more!...very happy I did not have to follow the normal rule of thumb (a customer watching me build a system and asking questions said "oh I get it, so every piece you put in there is basically a hundred bucks, right?)...ok, sure, pretty much, more or less, well, ya dude.  It will be WAY past october till I get a real touch screen but I did pick up a pair of cheap tatungs so I can try the NEW main start screen, I parse a lot of folders and have a vision of how a pair of touch screens will be easier than landing a rover on mars.  Ok.  fine, they are way smallish, and I don't expect multitouch to work but we are talking a few percent of a new 21 inch viewsonic touch screen.  Will this OS be a game changer?  I don't know.  Bottom line with all the pads and droids in the world, it is more of a catch up move at first glance.  Not something ms is used to.  An app store?  I can see ms's motivation, the others have it.  I gather there will not be gadgets there, go ahead and see what ms did  to the once populated gadget page...go ahead, google gadgets and take a gander, used to hundreds of gadgets, they are already gone.  They replaced gadgets?  sort of, I'll drop that, it's a bit of a sore point for me.  More of interest was what happened when I downloaded stuff off codeplex and some other normal programs that I like, like orbitron, top o' my list!!...cardware it is...anyways, click on the exe, get a screen, normal for windows, this one indicated that I was not running a normal windows program and had a button for  exit the install, naw, I hit details, a hidden run program anyways came into view....great, my path to the normal windows has detected a program tha.....yea ok, acl is on, fine, moving along I got orbitron installed in record time and was tracking the iss on the newest Microsoft OS, beta of course, felt like the first time I setup bsd all those year ago...FUN!!...I suppose I gotta start to think about budgeting for the real os when it comes out in october, by then I should have a rasberry pi and be done with fedora remixed.  Of course that sounds like fun too!!  I would use this OS on a tablet or phone.  I don't like the idea of being hearded to an app store, don't like that on anything, we are americans and want real choices not marketed hype, lest you are younger with opm (other peoples money).   This os would be neat on a zune, but I suspect the zune is a gonner, I am rooting for microsoft, after all their default password is not admin anymore, nor alpine,  it's blank. Others force a password, my first fawn password was so long I could not even log into it with the password in front of me, who the heck uses %$# anyways, and if I was writing a brute force attack what the heck kinda impasse is that anyways at .00001 microseconds of a code execution cycle (just a non qualified number, not a real clock speed)....AI is where it will be before too long, MS is on that path, perhaps soon someone will sit down and write an app for the kinect that watches your eyes while you scan the new main start screen, clicking on the big E icon when you blink.....boy is that going to be fun!!!! sure. Blink,dammit,blink,dammit...... OPM no doubt.I like windows eight, we are moving forwards, better keep a close eye on ubuntu.  
The real clincher comes when open source becomes paid source... don't blink, I already see plenty of very expensive 'ix apps, some even in app stores already.  More to come...

    Read the article

  • A star algorithm implementation problems

    - by bryan226
    I’m having some trouble implementing the A* algorithm in a 2D tile based game. The problem is basically that the algorithm gets stuck when something gets in its direct way (e.g. walls) Note that it only allows Horizontal and Vertical movement. Here's a picture as it works fine across the map without something in its direct way: (Green tile = destination, Blue = In closed list, Green = in open list) This is what happens if I try to walk 'around' a wall: I calculate costs with the F = G + H formula: G = 1 Cost per Step H = 10 Cost per Step //Count how many tiles are between current-tile & destination-tile The functions: short c_astar::GuessH(short Startx,short Starty,short Destinationx,short Destinationy) { hgeVector Start, Destination; Start.x = Startx; Start.y = Starty; Destination.x = Destinationx; Destination.y = Destinationy; short a = 0; short b = 0; if(Start.x > Destination.x) a = Start.x - Destination.x; else a = Destination.x - Start.x; if(Start.y > Destination.y) b = Start.y - Destination.y; else b = Destination.y - Start.y; return (a+b)*10; } short c_astar::GuessG(short Startx,short Starty,short Currentx,short Currenty) { hgeVector Start, Destination; Start.x = Startx; Start.y = Starty; Destination.x = Currentx; Destination.y = Currenty; short a = 0; short b = 0; if(Start.x > Destination.x) a = Start.x - Destination.x; else a = Destination.x - Start.x; if(Start.y > Destination.y) b = Start.y - Destination.y; else b = Destination.y - Start.y; return (a+b); } At the end of the loop I check which tile is the cheapest to go according to its F value: Then some quick checks are done for each tile (UP,DOWN,LEFT,RIGHT): //...CX are holding the F value of the TILE specified // Info: C0 = Center (Current) // C1 = UP // C2 = DOWN // C3 = LEFT // C4 = RIGHT //Quick checks if(((C1 < C2) && (C1 < C3) && (C1 < C4))) { Current.y -= 1; bSimilar = false; if(DEBUG) hge->System_Log("C1 < ALL"); } //.. same for C2,C3 & C4 If there are multiple tiles with the same F value: It’s actually a switch for DOWNLEFT,UPRIGHT.. etc. Here’s one of it: case UPRIGHT: { //UP Temporary = Current; Temporary.y -= 1; bTileStatus[0] = IsTileWalkable(Temporary.x,Temporary.y); if(bTileStatus[0]) { //Proceed normal we are OK & walkable Tilex.Tile = map.at(Temporary.y).at(Temporary.x); //Search in lists if(SearchInClosedList(Tilex.Tile.ID,C0)) bFoundInClosedList[0] = true; if(SearchInOpenList(Tilex.Tile.ID,C0)) bFoundInOpenList[0] = true; //RIGHT Temporary = Current; Temporary.x += 1; bTileStatus[1] = IsTileWalkable(Temporary.x,Temporary.y); if(bTileStatus[1]) { //Proceed normal we are OK & walkable Tilex.Tile = map.at(Temporary.y).at(Temporary.x); //Search in lists if(SearchInClosedList(Tilex.Tile.ID,C0)) bFoundInClosedList[1] = true; if(SearchInOpenList(Tilex.Tile.ID,C0)) bFoundInOpenList[1] = true; //************************************************* // Purpose: ClosedList behavior //************************************************* if(bFoundInClosedList[0] && !bFoundInClosedList[1]) { //UP found in ClosedList. Go RIGHT return RIGHT; } if(!bFoundInClosedList[0] && bFoundInClosedList[1]) { //RIGHT found in ClosedList. Go UP return UP; } if(bFoundInClosedList[0] && bFoundInClosedList[1]) { //Both found in ClosedList. 
Random value switch(hge->Random_Int(8,9)) { case 8: return UP; break; case 9: return RIGHT; break; } } //************************************************* // Purpose: OpenList behavior //************************************************* if(bFoundInOpenList[0] && !bFoundInOpenList[1]) { //UP found in OpenList. Go RIGHT return RIGHT; } if(!bFoundInOpenList[0] && bFoundInOpenList[1]) { //RIGHT found in OpenList. Go UP return UP; } if(bFoundInOpenList[0] && bFoundInOpenList[1]) { //Both found in OpenList. Random value switch(hge->Random_Int(8,9)) { case 8: return UP; break; case 9: return RIGHT; break; } } } else if(!bTileStatus[1]) { //RIGHT is not walkable OR out of range //Choose UP return UP; } } else if(!bTileStatus[0]) { //UP is not walkable OR out of range //Fast check RIGHT Temporary = Current; Temporary.x += 1; bTileStatus[1] = IsTileWalkable(Temporary.x,Temporary.y); if(bTileStatus[1]) { return RIGHT; } else return FAILED; //Failed, no valid path found! } } break; A log for the second picture: (Cut down to ten passes, because it’s just repeating itself) ----------------------------------------------------- PASS: 1 | C1: 211 | C2: 191 | C3: 211 | C4: 191 DOWN + RIGHT SIMILAR Going DOWN ----------------------------------------------------- PASS: 2 | C1: 200 | C2: 182 | C3: 202 | C4: 182 DOWN + RIGHT SIMILAR Going DOWN ----------------------------------------------------- PASS: 3 | C1: 191 | C2: 193 | C3: 193 | C4: 173 C4 < ALL Tile(12.000000,6.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 4 | C1: 182 | C2: 184 | C3: 182 | C4: 999 UP + LEFT SIMILAR Going UP Tile(12.000000,5.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 5 | C1: 191 | C2: 173 | C3: 191 | C4: 999 C2 < ALL Tile(12.000000,6.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 6 | C1: 182 | C2: 184 | C3: 182 | C4: 999 UP + LEFT SIMILAR Going UP Tile(12.000000,5.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 7 | C1: 191 | C2: 173 | C3: 191 | C4: 999 C2 < ALL Tile(12.000000,6.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 8 | C1: 182 | C2: 184 | C3: 182 | C4: 999 UP + LEFT SIMILAR Going LEFT ----------------------------------------------------- PASS: 9 | C1: 191 | C2: 193 | C3: 193 | C4: 173 C4 < ALL Tile(12.000000,6.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 10 | C1: 182 | C2: 184 | C3: 182 | C4: 999 UP + LEFT SIMILAR Going LEFT ----------------------------------------------------- Its always going after the cheapest F value, which seems to be wrong. If someone could point me to the right direction I'd be thankful. Regards, bryan226
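For comparison, a minimal textbook A* for a 4-connected grid with the Manhattan heuristic looks roughly like the sketch below. It is written in C# to match the other examples on this page rather than the poster's C++, and the walkable array, coordinates, and tile types are stand-ins for the poster's map data, not their code. The key differences from the code above are that G is the accumulated path cost from the start (not a distance guess), and that the open and closed lists drive the search globally instead of a per-step comparison of the four neighbours – which is what lets the search back out of dead ends and go around walls.

    // Minimal reference A* on a 4-connected grid (Manhattan heuristic).
    // The 'walkable' grid and start/goal coordinates are hypothetical stand-ins.
    using System;
    using System.Collections.Generic;

    static class AStar
    {
        public static List<(int x, int y)> FindPath(bool[,] walkable, (int x, int y) start, (int x, int y) goal)
        {
            int w = walkable.GetLength(0), h = walkable.GetLength(1);
            var open = new SortedSet<(int f, int x, int y)>();          // ordered by F = G + H
            var g = new Dictionary<(int, int), int> { [start] = 0 };    // accumulated cost from the start
            var parent = new Dictionary<(int, int), (int, int)>();
            var closed = new HashSet<(int, int)>();

            int H((int x, int y) p) => Math.Abs(p.x - goal.x) + Math.Abs(p.y - goal.y);
            open.Add((H(start), start.x, start.y));

            var moves = new (int dx, int dy)[] { (0, 1), (0, -1), (1, 0), (-1, 0) };
            while (open.Count > 0)
            {
                var (_, cx, cy) = open.Min;                             // cheapest F in the open list
                open.Remove(open.Min);
                var current = (cx, cy);
                if (current == goal)
                {
                    var path = new List<(int, int)> { current };        // walk parents back to the start
                    while (parent.TryGetValue(current, out var p)) { current = p; path.Add(p); }
                    path.Reverse();
                    return path;
                }
                if (!closed.Add(current)) continue;                     // already expanded

                foreach (var (dx, dy) in moves)
                {
                    var next = (x: cx + dx, y: cy + dy);
                    if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h) continue;
                    if (!walkable[next.x, next.y] || closed.Contains(next)) continue;

                    int tentative = g[current] + 1;                     // each step costs 1
                    if (!g.TryGetValue(next, out int known) || tentative < known)
                    {
                        g[next] = tentative;
                        parent[next] = current;
                        open.Add((tentative + H(next), next.x, next.y));
                    }
                }
            }
            return null;                                                // no path exists
        }
    }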

    Read the article

  • Oracle SOA Suite - Highlighted Travel and Transportation Customer References

    - by Bruce Tierney
Next in this series on industry-specific highlights of Oracle SOA Suite customers is the Travel and Transportation industry.  If you are in the travel or transportation industry, take a look at how these Oracle SOA Suite integration customers have addressed common business requirements to enable better customer service, lower costs, and deliver new business services. For example, All Nippon Airways (ANA) has significantly lowered management costs associated with their hybrid on-premise/cloud ticketing system deployments for domestic and international flights. Their lead-time for changes or new applications has been greatly reduced compared to their old mainframe-based systems, enabling ANA to rapidly develop new services in response to changing market needs. Another example is Schneider National, a leading provider of truckload logistics, and how they have integrated Oracle E-Business Suite, Siebel CRM, Oracle Transportation Management and customers applications using Oracle SOA Suite. Schneider National has 400 BPEL processes that generate over 60 million composite instances over five SOA clusters.  Take a deeper look into any of these case studies, videos, and Oracle Magazine articles that closely align with your industry:  Customers fly and airline succeeds with an IT transformation. Company:  All Nippon Airways  Customer Oracle or Profit Magazine Article   |   Travel and Transportation   |   Published on January 06, 2014 Any successful business must ensure ongoing customer satisfaction, respond to increased competition, and minimize costs. Running a successful airline in today’s economic climate requires all of those things, as well a... Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform New Company:  Openmatics s.r.o.  Customer Snapshot   |   Automotive   |   Published on May 20, 2014 Openmatics uses Oracle WebCenter Portal and Oracle Application Development Framework as a foundation for Openmatics, a vehicle telematics service for next-generation fleet management. It integrated its own app shop wi... Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate Company:  SFpark  Customer Oracle or Profit Magazine Article   |   Professional Services   |   Published on August 01, 2012 Oracle Fusion Middleware is at the heart of a recently completed and very ambitious project to change how people handle the challenge of finding a parking space in San Francisco, California. “Parking is a universal is... Globalia Corporación Empresarial Accelerates Hotel Bookings, Boosts Sales by 40% with In-Memory Data Grid Solution Company:  Globalia Corporación Empresarial S.A.  Customer Snapshot   |   Travel and Transportation   |   Published on April 29, 2013 Globalia Corporación Empresarial S.A.
deployed Oracle Coherence to reengineer the group’s core system for hotel bookings, now serving booking requests involving 80 hotels within an average response time of 100 millise... Choice Hotels Uses Oracle SOA Suite and Oracle BPM Suite to Modernize Global IT Architecture Company:  Choice Hotels  Press Release   |   Travel and Transportation   |   Published on August 07, 2012 Choice Hotels International, one of the largest and most successful hotel franchises in the world, has implemented Oracle SOA Suite and Oracle BPM Suite. Sascar Consolidates Fleet Management Infrastructure and Accelerates Customers’ Data Access Company:  Sascar  Customer Case Study   |   Travel and Transportation   |   Published on February 07, 2014 Description – Sascar used Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud and Oracle WebLogic Suite 11g to consolidate fleet management and perform real-time vehicle tracking 4x faster. Directorate General of Civil Aviation Streamlines Key Aviation Applications Access, Improves Productivity and Reduces Maintenance Costs Company:  Directorate General of Civil Aviation (DGAC)  Customer Snapshot   |   Travel and Transportation   |   Published on May 24, 2013 With Oracle Fusion Middleware, the Directorate General of Civil Aviation (DGAC) provided its 12,500 employees a virtual office environment that integrates team workspaces, business applications, and e-mails within a n... Schneider National Implements Next-Generation IT Infrastructure to Continue Leadership in Transportation and Logistics Industry Company:  Schneider National, Inc.  Customer Snapshot   |   Travel and Transportation   |   Published on February 26, 2013 Schneider National, Inc. deployed Oracle applications, Oracle Fusion Middleware, and Oracle development tools as the foundation for its next-generation IT environment, which is driving new levels of efficiency, profit... DGAC Cuts Subscription Costs with Oracle Company:  DGAC  Video   |   Travel and Transportation   |   Published on October 31, 2012 Using Oracle WebCenter Portal, Oracle SOA Suite, and Oracle Exalogic, DGAC reduces the cost of subscriptions to newsletters and provide to its 12,500 employees a collaborative workspace portal. Asiana Airlines Builds PIP System with Oracle Solutions Company:  Asiana Airlines  Video   |   Travel and Transportation   |   Published on July 26, 2012 With Oracle Exalogic and the Oracle SOA Suite, Asiana Airlines builds a passenger service integrated platform providing various services such as integration between its interface and internal systems and a data wareho... Choice Hotels Reduces Time to Market with Oracle WebCenter Company:  Choice Hotels  Video   |   Travel and Transportation   |   Published on April 11, 2014 Using Oracle WebCenter and Oracle SOA standardization, Choice Hotels consolidated multiple platforms, reduced IT dependency and realized tremendous benefits in total cost of ownership and faster time to market support... An Interview with Schneider National's Judy Lemke Company:  Schneider National  Video   |   Travel and Transportation   |   Published on December 17, 2013 Judy Lemke talks with Mark Sunday about the challenges Schneider National faced and how they overcame them through a companywide transformational change. For more details on these case studies, you can use this pre-filtered search on “Travel and Transportation” / “Middleware” / “Service Oriented Architecture” or browse on your own at www.oracle.com/customers

    Read the article

  • Master-slave vs. peer-to-peer archictecture: benefits and problems

    - by Ashok_Ora
Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don’t “trample” on each other’s toes, resulting in data corruption. In a nutshell, here’s the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and when everyone follows the rules, there are no accidents at intersections. As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there’s clearly no danger of a collision, you know what I mean. The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held, by detecting whether there were one or more transactions that needed conflicting access; if there were none, you could get by without holding any lock at all. In many “well-behaved” workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the “normal” behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness. So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded, or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time, it will be the case that some copies have received the update, and others haven’t. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they’re all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all’s good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies. So what’s the problem with this? There are two major issues: First, IT’S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies. 
That’s pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the out-dated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency). Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might land up with different versions of the data item in different copies. In this case, the application program also has to resolve conflicts and then propagate the correct value to the copies that are out-dated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don’t handle this correctly, and worse, don’t even know it. Imagine having 100s of millions of records in your database – how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was to read a data item. Wouldn’t it be simpler to avoid this problem in the first place? Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and client drivers are also aware of the “currency” of a replica. In other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many). So, back to my original point. A well designed and well architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient and high performance system. If you’ve ever programmed an Oracle NoSQL Database application, you’ll know the difference! 
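The read-routing behaviour described above is easy to picture in code. The following C# sketch is a hypothetical illustration only – it is not the Oracle NoSQL Database driver API – showing how a master-aware client could choose a copy to read from: go to the master when the application needs the current value, otherwise use any replica whose known staleness is within the tolerance.

    // Hypothetical illustration of master-aware read routing; the types and the
    // staleness bookkeeping are assumptions for the sketch, not a real driver API.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class ReplicaInfo
    {
        public string Node { get; set; }
        public bool IsMaster { get; set; }
        public TimeSpan Staleness { get; set; }   // how far this copy is known to lag the master
    }

    class MasterAwareRouter
    {
        private readonly List<ReplicaInfo> copies;
        public MasterAwareRouter(List<ReplicaInfo> copies) => this.copies = copies;

        // Returns the node a read should go to, given how much staleness the
        // application will tolerate (TimeSpan.Zero means "absolutely current").
        public string RouteRead(TimeSpan tolerance)
        {
            if (tolerance == TimeSpan.Zero)
                return copies.Single(c => c.IsMaster).Node;       // authoritative copy

            // Any sufficiently fresh copy will do; fall back to the master otherwise.
            var candidate = copies
                .Where(c => c.Staleness <= tolerance)
                .OrderBy(c => c.Staleness)
                .FirstOrDefault();
            return (candidate ?? copies.Single(c => c.IsMaster)).Node;
        }
    }

Either way a single read touches exactly one copy, which is the contrast the post draws with the multi-copy reads and reconciliation work of the peer-to-peer approach.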

    Read the article

  • Partner Blog Series: PwC Perspectives Part 2 - Jumpstarting your IAM program with R2

    - by Tanu Sood
    Identity and access management (IAM) isn’t a new concept. Over the past decade, companies have begun to address identity management through a variety of solutions that have primarily focused on provisioning. . The new age workforce is converging at a rapid pace with ever increasing demand to use diverse portfolio of applications and systems to interact and interface with their peers in the industry and customers alike. Oracle has taken a significant leap with their release of Identity and Access Management 11gR2 towards enabling this global workforce to conduct their business in a secure, efficient and effective manner. As companies deal with IAM business drivers, it becomes immediately apparent that holistic, rather than piecemeal, approaches better address their needs. When planning an enterprise-wide IAM solution, the first step is to create a common framework that serves as the foundation on which to build the cost, compliance and business process efficiencies. As a leading industry practice, IAM should be established on a foundation of accurate data for identity management, making this data available in a uniform manner to downstream applications and processes. Mature organizations are looking beyond IAM’s basic benefits to harness more advanced capabilities in user lifecycle management. For any organization looking to embark on an IAM initiative, consider the following use cases in managing and administering user access. Expanding the Enterprise Provisioning Footprint Almost all organizations have some helpdesk resources tied up in handling access requests from users, a distraction from their core job of handling problem tickets. This dependency has mushroomed from the traditional acceptance of provisioning solutions integrating and addressing only a portion of applications in the heterogeneous landscape Oracle Identity Manager (OIM) 11gR2 solves this problem by offering integration with third party ticketing systems as “disconnected applications”. It allows for the existing business processes to be seamlessly integrated into the system and tracked throughout its lifecycle. With minimal effort and analysis, an organization can begin integrating OIM with groups or applications that are involved with manually intensive access provisioning and de-provisioning activities. This aspect of OIM allows organizations to on-board applications and associated business processes quickly using out of box templates and frameworks. This is especially important for organizations looking to fold in users and resources from mergers and acquisitions. Simplifying Access Requests Organizations looking to implement access request solutions often find it challenging to get their users to accept and adopt the new processes.. So, how do we improve the user experience, make it intuitive and personalized and yet simplify the user access process? With R2, OIM helps organizations alleviate the challenge by placing the most used functionality front and centre in the new user request interface. Roles, application accounts, and entitlements can all be found in the same interface as catalog items, giving business users a single location to go to whenever they need to initiate, approve or track a request. Furthermore, if a particular item is not relevant to a user’s job function or area inside the organization, it can be hidden so as to not overwhelm or confuse the user with superfluous options. 
The ability to customize the user interface to suit your needs helps in exercising the business rules effectively and avoiding access proliferation within the organization. Saving Time with Templates A typical use case that is most beneficial to business users is flexibility to place, edit, and withdraw requests based on changing circumstances and business needs. With OIM R2, multiple catalog items can now be added and removed from the shopping cart, an ecommerce paradigm that many users are already familiar with. This feature can be especially useful when setting up a large number of new employees or granting existing department or group access to a newly integrated application. Additionally, users can create their own shopping cart templates in order to complete subsequent requests more quickly. This feature saves the user from having to search for and select items all over again if a request is similar to a previous one. Advanced Delegated Administration A key feature of any provisioning solution should be to empower each business unit in managing their own access requests. By bringing administration closer to the user, you improve user productivity, enable efficiency and alleviate the administration overhead. To do so requires a federated services model so that the business units capable of shouldering the onus of user life cycle management of their business users can be enabled to do so. OIM 11gR2 offers advanced administrative options for creating, managing and controlling business logic and workflows through easy to use administrative interface and tools that can be exposed to delegated business administrators. For example, these business administrators can establish or modify how certain requests and operations should be handled within their business unit based on a number of attributes ranging from the type of request or the risk level of the individual items requested. Closed-Loop Remediation Security continues to be a major concern for most organizations. Identity management solutions bolster security by ensuring only the right users have the right access to the right resources. To prevent unauthorized access and where it already exists, the ability to detect and remediate it, are key requirements of an enterprise-grade proven solution. But the challenge with most solutions today is that some of this information still exists in silos. And when changes are made to systems directly, not all information is captured. With R2, oracle is offering a comprehensive Identity Governance solution that our customer organizations are leveraging for closed loop remediation that allows for an automated way for administrators to revoke unauthorized access. The change is automatically captured and the action noted for continued management. Conclusion While implementing provisioning solutions, it is important to keep the near term and the long term goals in mind. The provisioning solution should always be a part of a larger security and identity management program but with the ability to seamlessly integrate not only with the company’s infrastructure but also have the ability to leverage the information, business models compiled and used by the other identity management solutions. This allows organizations to reduce the cost of ownership, close security gaps and leverage the existing infrastructure. And having done so a multiple clients’ sites, this is the approach we recommend. 
In our next post, we will take a journey through our experiences of advising clients looking to upgrade to R2 from a previous version or migrating from a different solution. Meet the Writers:   Praveen Krishna is a Manager in the Advisory Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data where he brings a holistic point of view to problem solving. Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.   Dharma has 14 years of experience in delivering IT solutions out of which he has been implementing Identity Management solutions for the past 8 years. Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.   Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Engineering Language (BPEL). Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC.  She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions out of which she has been implementing Identity Management solutions for the past one and a half years.

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-as-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step? Augmenting Social Data with other Public Data for More Advanced Analytics When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPI) to achieve and optimize business value. And in some cases, to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze this is very nuanced: It can vary across structured, semi-structured, and unstructured data It can span across content, profile, and communities of profiles data It is increasingly public, curated and user generated The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data enriched with insights to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next generation data and insights service or (DaaS): Some quick points on these requirements: Public Data, which in this context is about Common Business Entities, such as - Customers, Suppliers, Partners, Competitors (all are organizations) Contacts, Consumers, Employees (all are people) Products, Brands This data can be broadly categorized incrementally as - Base Utility data (address, industry classification) Public Master Reference data (trade style, hierarchy) Social/Web data (News, Feeds, Graph) Transactional Data generated by enterprise process, workflows etc. This Data has traits of high-volume, variety, velocity etc., and the technology needed to efficiently integrate this data for your needs includes - Change management of Public Reference Data across all categories Applied Big Data to extract statics as well as real-time insights Knowledge Diagnostics and Data Mining As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications. 
In addition, they are requesting a service that is: Agile and Easy to Use: Applications integrated with the service can obtain data on-demand, quickly and simply Cost-effective: Pre-integrated into applications so customers don’t have to Has High Data Quality: Single point access to reference data for data quality and linkages to transactional, curated and social data Supports Data Governance: Becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place Data-as-a-Service (DaaS) Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT from their infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the rate of commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights. 
The reference data allows for easy sorting, export, and integration into a set of CRM use cases for analytics, sales, and marketing CRM. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to build broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first time you have heard about it. Let us know if you have questions, or perspectives. In the meantime, we will continue to share insights as we can. Photo: Erik Araujo, stock.xchng
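As a rough illustration of the phase-1 example above, a master reference relationship can be pictured as a small data structure tying a company to its brands and their social insights. The C# types below are invented for this sketch and are not part of any Oracle product.

    // Purely illustrative shape for the "company -> brands -> social insights"
    // master reference relationship described above; all names are assumptions.
    using System.Collections.Generic;

    class SocialInsight
    {
        public string Source { get; set; }        // e.g. a news feed or social profile
        public string Sentiment { get; set; }
        public double Score { get; set; }
    }

    class BrandReference
    {
        public string Brand { get; set; }
        public List<SocialInsight> Insights { get; } = new List<SocialInsight>();
    }

    class MasterReference
    {
        public string Company { get; set; }       // e.g. "General Motors" in the phase-1 example
        public List<BrandReference> Brands { get; } = new List<BrandReference>();
    }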

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far far away... Okay, actually it's not that bad but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past but it never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendee's survey on the preferred option. All the pre-work was nicely executed. First, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like children should come, too, and I was sold to this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great! LUGM - user group meeting on the 15.06.2013 in L'Escalier Quick overview of Linux & the LUGM With a little bit of delay the LUGM meeting officially started with a quick overview and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity over the island some years ago but unfortunately it had been quiet during recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have there back-end infrastructure running on Red Hat Linux systems but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction of the LUGM Ajay joined into the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendee's opinion and activities. Especially taking into consideration that I'm already using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune. OpenELEC Mediacenter Next, Selven gave us decent introduction on OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR) as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was great piece of hardware and software; self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-s cards, an old-fashioned Voodoo graphics card, WiFi card, Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With EP Guide, scheduled recordings and general multimedia centre it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future. 
LUGM - our next generation of linux users (15.06.2013) Project Evil Genius (PEG) Don't be scared of the paragraph header. Ish gave us a cool explanation why he named it PEG - Project Evil Genius; it's because of the time of the day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSuSE but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experience he had due to other Linux users from all over the world. During the next couple of days Ish promised to put his script to GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in. Highly recommended to read. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu). Exploring the beach of L'Escalier af the meeting 'After-party' at the beach of L'Escalier Puh, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch it was time for a little break-out. Selven suggested that we all should head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauriitus, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad and it concurs with Yasir's recent article on the same topic. Resumé & outlook It was a great event. I met with new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well-chosen, enough space for each and everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and preparation of lunch. I'm kind of sure that this was an exceptional meeting of LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully, soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • Testing Workflows &ndash; Test-First

    - by Timothy Klenke
Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx This is the second of two posts on some common strategies for approaching the job of writing tests.  The previous post covered test-after workflows whereas this one will focus on test-first.  Each workflow presented is a method of attack for adding tests to a project.  The more tools in your tool belt the better.  So here is a partial list of some test-first methodologies. Ping Pong Ping Pong is a methodology commonly used in pair programming.  One developer will write a new failing test.  Then they hand the keyboard to their partner.  The partner writes the production code to get the test passing.  The partner then writes the next test before passing the keyboard back to the original developer. The reasoning behind this testing methodology is to facilitate pair programming.  That is to say that this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. low bus number). Test Blazer Test Blazing, in some respects, is also a pairing strategy.  The developers don’t work side by side on the same task at the same time.  Instead, one developer is dedicated to writing tests at their own desk.  They write failing test after failing test, never touching the production code.  With these tests they are defining the specification for the system.  The developer most familiar with the specifications would be assigned this task. The next day or later in the same day another developer fetches the latest test suite.  Their job is to write the production code to get those tests passing.  Once all the tests pass they fetch from source control the latest version of the test project to get the newer tests. This methodology has some of the benefits of pair programming, namely lowering the bus number.  This can be a good way of adding an extra developer to a project without slowing it down too much.  The production coder isn’t slowed down writing tests.  The tests are in a separate project from the production code, so there shouldn’t be any merge conflicts despite two developers working on the same solution. This methodology is also a good test for the tests.  Can another developer figure out what the system should do just by reading the tests?  This question will be answered as the production coder works their way through the test blazer’s tests. Test Driven Development (TDD) TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes.  There are strict rules for when you should be writing tests or production code.  You start by writing a failing (red) test, then write the simplest production code possible to get the code working (green), then you clean up the code (refactor).  This is known as the red-green-refactor cycle. The goal of TDD isn’t the creation of a suite of tests, however that is an advantageous side effect.  The real goal of TDD is to follow a practice that yields a better design.  The practice is meant to push the design toward small, decoupled, modularized components.  This is generally considered a better design than a large, highly coupled ball of mud. TDD accomplishes this through the refactoring cycle.  Refactoring is only possible to do safely when tests are in place.  In order to use TDD developers must be trained in how to look for and repair code smells in the system.  Through repairing these sections of smelly code (i.e. 
a refactoring) the design of the system emerges. For further information on TDD, I highly recommend the series “Is TDD Dead?”.  It discusses its pros and cons and when it is best used. Acceptance Test Driven Development (ATDD) Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, Acceptance Tests focuses on the larger integrated environment.  Acceptance Tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer. ATDD generally uses the same tools as TDD.  However, ATDD uses fewer mocks and test doubles than TDD. ATDD often complements TDD; they aren’t competing methods.  A full test suite will usually consist of a large number of unit (created via TDD) tests and a smaller number of acceptance tests. Behaviour Driven Development (BDD) BDD is more about audience than workflow.  BDD pushes the testing realm out towards the client.  Developers, managers and the client all work together to define the tests. Typically different tooling is used for BDD than acceptance and unit testing.  This is done because the audience is not just developers.  Tools using the Gherkin family of languages allow for test scenarios to be described in an English format.  Other tools such as MSpec or FitNesse also strive for highly readable behaviour driven test suites. Because these tests are public facing (viewable by people outside the development team), the terminology usually changes.  You can’t get away with the same technobabble you can with unit tests written in a programming language that only developers understand.  For starters, they usually aren’t called tests.  Usually they’re called “examples”, “behaviours”, “scenarios”, or “specifications”. This may seem like a very subtle difference, but I’ve seen this small terminology change have a huge impact on the acceptance of the process.  Many people have a bias that testing is something that comes at the end of a project.  When you say we need to define the tests at the start of the project many people will immediately give that a lower priority on the project schedule.  But if you say we need to define the specification or behaviour of the system before we can start, you’ll get more cooperation.   Keep these test-first and test-after workflows in your tool belt.  With them you’ll be able to find new opportunities to apply them.
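As a concrete illustration of one red-green pass in the cycle described above, here is a minimal C# example using xUnit-style attributes; the Calculator type is invented for the sketch. The test is written first and fails (red) because Add does not exist yet, then the simplest production code that makes it pass is added (green); cleaning up the design (refactor) would follow as the third step.

    // One hypothetical red-green pass: the test comes first, then the simplest
    // production code that makes it pass; refactoring follows once it is green.
    using Xunit;

    public class CalculatorTests
    {
        [Fact]
        public void Add_ReturnsSumOfTwoNumbers()
        {
            var calc = new Calculator();
            Assert.Equal(5, calc.Add(2, 3));   // written before Calculator.Add exists
        }
    }

    public class Calculator
    {
        // Simplest thing that makes the failing test pass.
        public int Add(int a, int b) => a + b;
    }

In the Ping Pong workflow described earlier, the test half and the production half of a cycle like this would come from different developers.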

    Read the article

  • HTG Explains: Should You Buy Extended Warranties?

    - by Chris Hoffman
    Buy something at an electronics store and you'll be confronted by a pushy salesperson who insists you need an extended warranty. You'll also see extended warranties pushed hard when shopping online. But are they worth it?

    There's a reason stores push extended warranties so hard. They're almost always pure profit for the store involved. An electronics store may live on razor-thin product margins and make big profits on extended warranties and overpriced HDMI cables.

    You're Already Getting Multiple Warranties

    First, back up. The product you're buying already includes a warranty. In fact, you're probably getting several different types of warranties.

    Store Return and Exchange: Most electronics stores allow you to return a malfunctioning product within the first 15 or 30 days and they'll provide you with a new one. The exact period of time will vary from store to store. If you walk out of the store with a defective product and have to swap it for a new one within the first few weeks, this should be easy.

    Manufacturer Warranty: A device's manufacturer — whether the device is a laptop, a television, or a graphics card — offers their own warranty period. The manufacturer warranty covers you after the store refuses to take the product back and exchange it. The length of this warranty depends on the type of product. For example, a cheap laptop may only offer a one-year manufacturer warranty, while a more expensive laptop may offer a two-year warranty.

    Credit Card Warranty Extension: Many credit cards offer free extended warranties on products you buy with that credit card. Credit card companies will often give you an additional year of warranty. For example, if you buy a laptop with a two-year warranty and it fails in the third year, you could then contact your credit card company and they'd cover the cost of fixing or replacing it. Check your credit card's benefits and fine print for more information.

    Why Extended Warranties Are Bad

    You're already getting a fairly long warranty period, especially if you have a credit card that offers you a free extended warranty — these are fairly common. If the product you get is a "lemon" and has a manufacturing error, it will likely fail pretty soon — well within your warranty period.

    The extended warranty matters after all your other warranties are exhausted. In the case of a laptop with a two-year warranty that you purchase with a credit card giving you a one-year warranty extension, your extended warranty will kick in three years after you purchase the laptop. In that many years, your current laptop will likely feel pretty old and laptops that are as good — or better — will likely be pretty cheap. If it's a television, better television displays will be available at a lower price point. You'll either want to upgrade to a newer model or you'll be able to buy a new, just-as-good product for very cheap.

    You'll only have to pay out-of-pocket if your device fails after the normal warranty period — in over two or three years for typical laptops purchased with a decent credit card. Save the money you would have spent on the warranty and put it towards a future upgrade.

    How Much Do Extended Warranties Cost?

    Let's look at an example from a typical pushy retail outlet, Best Buy. We went to Best Buy's website and found a pretty standard $600 Samsung laptop. This laptop comes with a one-year warranty period. If purchased with a fairly common credit card, you can easily get a two-year warranty period on this laptop without spending an additional penny. (Yes, such credit cards are available with no yearly fees.)

    During the check-out process, Best Buy tries to sell you a Geek Squad "Accidental Protection Plan." To get an additional year of Best Buy's extended warranty, you'd have to pay $324.98 for a "3-Year Accidental Protection Plan". You'd basically be paying more than half the price of your laptop for an additional year of warranty — remember, the standard warranties would cover you anyway for the first two years.

    If this laptop did break sometime between two and three years from now, we wouldn't be surprised if you could purchase a comparable laptop for about $325 anyway. And, if you don't need to replace it, you've saved that money.

    Best Buy would object that this isn't a standard extended warranty. It's a supercharged warranty plan that will also provide coverage if you spill something on your laptop or drop it and break it. You just have to ask yourself a question. What are the odds that you'll drop your laptop or spill something on it? They're probably pretty low if you're a typical human being. Is it worth spending more than half the price of the laptop just in case you'll make an uncommon mistake? Probably not.

    There may be occasional exceptions to this — some Apple users swear by Apple's AppleCare, for example — but you should generally avoid buying these things. There's a reason stores are so pushy about extended warranties, and it's not because they want to help protect you. It's because they're making lots of profit from these plans, and they're making so much profit because they're not a good deal for customers.

    Image Credit: Philip Taylor on Flickr


  • Different Not Automatically Implies Better

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/11/05/154556.aspx

    Recently I was digging deeper into why a WCF hosted workflow application consumed quite a lot of memory although it basically only loaded a xaml workflow. The first tool of choice is Process Explorer, or even better Process Hacker (it has more options and, best of all, copy & paste works). The three most important numbers of a process with regard to memory are Working Set, Private Working Set and Private Bytes:

    - Working Set is the currently consumed physical memory (parts of it can be shared between processes, e.g. loaded dlls which are read only).
    - Private Working Set is the physical memory needed by this process which is not shareable.
    - Private Bytes is the non-shareable memory which is only visible in the current process (e.g. all new, malloc and VirtualAlloc calls create private bytes).
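    These numbers can also be read from code, which is handy if you want to log how a workflow host grows over time. Below is a minimal sketch of mine (not from the original post) using System.Diagnostics: the Process class exposes the working set and private bytes directly, while the private working set has no Process property and is read here via the "Working Set - Private" performance counter, assuming that counter is available on the machine.

        using System;
        using System.Diagnostics;

        class MemorySnapshot
        {
            static void Main()
            {
                Process p = Process.GetCurrentProcess();

                // Working Set: physical memory currently used by the process (including shareable parts).
                Console.WriteLine("Working Set:         {0} MB", p.WorkingSet64 / (1024 * 1024));

                // Private Bytes: committed memory that is visible only to this process.
                Console.WriteLine("Private Bytes:       {0} MB", p.PrivateMemorySize64 / (1024 * 1024));

                // Private Working Set: not exposed on Process, so use the performance counter instead.
                using (var privateWs = new PerformanceCounter("Process", "Working Set - Private", p.ProcessName))
                {
                    Console.WriteLine("Private Working Set: {0} MB", privateWs.RawValue / (1024 * 1024));
                }
            }
        }

    Pointing the same code at the workflow host process (e.g. via Process.GetProcessesByName) gives you the numbers Process Explorer shows, without keeping the tool open.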
    When you have a bigger workflow it can easily consume 500 MB under 64 bit for a 1-2 MB xaml file. This does not look very scalable. Under 64 bit the issue is excessive private bytes consumption and not the managed heap. The picture is quite different for 32 bit, which looks a bit strange, but it seems that the hosted VB compiler is a lot less memory hungry under 32 bit.

    I did try to repro the issue with a medium sized xaml file (400 KB) which contains 1000 variables and 1000 if clauses, which can be represented by C# code like this:

        string Var1;
        string Var2;
        ...
        string Var1000;

        if (!String.IsNullOrEmpty(Var1))
        {
            Console.WriteLine("Var1");
        }
        if (!String.IsNullOrEmpty(Var2))
        {
            Console.WriteLine("Var2");
        }
        ...

    Since WF is based on VB.NET expressions you are bound to the hosted VB.NET compiler, which results in (x64) 140 MB of private bytes. That is ca. 140 KB for each if clause, which is quite a lot if you think about the actually present functionality. But there is hope: .NET 4.5 now allows C# expressions for WF, which is a major step forward for all C# lovers. I did create a simple patcher to "cross compile" my xaml to C# expressions. Let's look at the result (the original post shows screenshots comparing C# Expressions x86 with VB Expressions x86).

    On my home machine I have only 32 bit, which gives you quite exactly half of the memory consumption of 64 bit. C# expressions are 10 times more memory hungry than VB.NET expressions! I wanted to do more with less memory, but instead it consumed an order of magnitude more memory. That is surprising, to say the least. The workflow initializes in about the same time under x64 and x86, where the VB code does it in 2 s whereas the C# version needs 18 s. Also nearly ten times slower. That is too high a price to pay for any bigger sized xaml workflow to convert from VB.NET to C# expressions. If I reduce the number of expressions to 500, it needs 400 MB, which is about half of the memory. It seems that the cost per if rises linearly with the number of total expressions in a xaml workflow.

        Expression Language    Cost per IF    Startup Time
        C# 1000 Ifs x64        1.5 MB         18 s
        C# 500 Ifs x64         750 KB         9 s
        VB 1000 Ifs x64        140 KB         2 s
        VB 500 Ifs x64         70 KB          1 s

    Now we can directly compare two MS implementations. It is clear that the VB.NET compiler uses the same underlying structure, but it has a much higher fixed offset compared to the highly inefficient C# expression compiler. I have filed a connect bug here with a harsher wording about recent advances in memory consumption.

    The funniest thing is that one MS employee gave an Azure AppFabric demo around early 2011 which was so slow that he needed to investigate with xperf. He was after startup time, and the call stacks with regard to VB.NET expression compilation were remarkably similar. In fact I only found this post by googling for parts of my call stacks. ... "C# expressions will be coming soon to WF, and that will have different performance characteristics than VB" ... What did he know in Jan 2011 that I did not know until today? ;-) He knew that C# expressions would come, but also that they would not automatically have a better footprint. It is about time to fix that. In its current state C# expressions are not usable for bigger workflows. That also explains the headline for today. You can cheat startup time by prestarting workflows so that the demo looks nice and snappy, but it hurts scalability a lot since you need much more memory than necessary.

    I did find the stacks by enabling virtual allocation tracking within xperf, which is still the best tool out there. But first you need to look at your process to check where the memory is hiding. For the C# expression compiler you do not need xperf: you can directly dump the managed heap and check with a profiler of your choice. But if the allocations are happening on the Private Data (VirtualAlloc) you can find them with xperf. There is a nice video on Channel 9 explaining VirtualAlloc tracking in greater detail. If your data allocations are on the Heap it means that the C/C++ runtime created a heap for you from which all malloc and new calls allocate. You can enable heap tracing with xperf, with full call stack support as well, which is also shown on Channel 9. Or you can use WPRUI directly.

    To make "Heap Usage" work you need to set the tracing flags for your executable (before you start it). For example, for devenv.exe:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe
            TracingFlags (REG_DWORD) = 1

    Do not forget to disable it after you have completed profiling the process or it will impact the startup time quite a lot (a small C# sketch for toggling this value follows at the end of this post). With xperf you can also attach directly to a running process and collect heap allocation information from a process that has gone wild. Very handy if you need to find out what a process was doing once it has arrived in a funny state. "VirtualAlloc usage" works without explicitly enabling anything for a specific process and is always on machine wide.

    I had issues on my Windows 7 machines with the call stack collection and the latest Windows 8.1 Performance Toolkit. I was told that WPA from Windows 8.0 should work fine, but I do not want to downgrade.
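    Since flipping that registry value by hand gets old quickly, here is the promised sketch: a hedged example of setting TracingFlags from C# via the Microsoft.Win32 registry API. The devenv.exe default is just the example executable from above; writing under HKLM requires administrative rights, and a 32 bit process on a 64 bit OS may be redirected to the WOW6432Node view of the key.

        using Microsoft.Win32;

        class HeapTracingSwitch
        {
            const string IfeoKey =
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options";

            static void Main(string[] args)
            {
                // The executable to trace; devenv.exe is the example used in the post.
                string exe = args.Length > 0 ? args[0] : "devenv.exe";

                // TracingFlags = 1 enables heap tracing the next time that executable starts.
                Registry.SetValue(IfeoKey + "\\" + exe, "TracingFlags", 1, RegistryValueKind.DWord);

                // After profiling, set it back to 0 so startup time is not affected:
                // Registry.SetValue(IfeoKey + "\\" + exe, "TracingFlags", 0, RegistryValueKind.DWord);
            }
        }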

