Search Results

Search found 2691 results on 108 pages for 'michael bishop'.


  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address the problem.

    Scenario: Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to quick-running or pre-generated reports which could be accessed in real time, on demand. One of the key aims of the web service was to provide a simple, generic interface that let applications get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service worked well for a period of time.

    The Problem: The problem we encountered was that reports were now also required by batch processes. The original design was optimised for low latency so that users would enjoy a positive experience, but when the batch processes started to request 250+ concurrent reports over an extended period of time you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused were:

    - Users were affected: the latency of on-demand reports became significantly worse.
    - The Cognos infrastructure was not scaled sufficiently to cope with these long peaks of load.
    - From a cost perspective it just wasn't feasible to scale the Cognos infrastructure to handle load that only occurs in a window of a couple of hours each night.

    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load, but we could make some changes to the way it accessed the reports.

    The Approach: My idea was to introduce a throttling mechanism between the web service façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would queue these requests, process them behind the scenes, and make a call back to the batch application to deliver each report once it had been produced. In terms of technology we had some limitations: we were not able to use WCF or IIS7, where MSMQ-activated WCF services could have helped, but we did have MSMQ as an option, and I thought NServiceBus could do the job for us here.
    The flow of how this would work was as follows:

    - The batch application sends a request for a report to the web service.
    - The web service uses NServiceBus to send the message to a queue.
    - The NServiceBus generic host runs as a Windows service with a message handler which subscribes to these messages.
    - The message handler picks up the message and accesses the report from Cognos.
    - The message handler calls back to the original batch application; this is decoupled because the calling application provides a callback URL.
    - The report arrives at the batch application and is processed as normal.

    The key point is that an application wanting to take advantage of the batch-driven reports needs to do the following (see the sketch after this section):

    - Implement our callback contract.
    - Make a call to the service, providing a callback URL.
    - Provide a correlation ID so it knows how to tie each response back to its request.

    What does NServiceBus offer in this solution? This is not the typical messaging service-bus type of solution people implement with NServiceBus, but it did offer the following:

    - Simplified interaction with MSMQ.
    - The ability to configure the number of processes working through the queue, so we could find a balance between load on Cognos and the application's end-to-end processing time.
    - Retries and a way to manage failed messages.
    - A high-availability setup.

    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level plumbing that would have taken ages to write to any kind of robust level.

    Conclusion: With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.
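    As a rough illustration, the handler side of this pattern might look something like the sketch below. This is my own minimal reconstruction, not the author's code: the ReportRequest message, ICognosReportClient, and ICallbackClient abstractions are hypothetical, and the handler uses the classic NServiceBus IHandleMessages<T> style of the era.

        using NServiceBus;

        // Hypothetical message contract sent by the web service facade.
        public class ReportRequest : IMessage
        {
            public string ReportId { get; set; }
            public string CorrelationId { get; set; }
            public string CallbackUrl { get; set; }
        }

        // Assumed abstractions over the Cognos SDK and the callback contract.
        public interface ICognosReportClient { byte[] RunReport(string reportId); }
        public interface ICallbackClient { void SendReport(string url, string correlationId, byte[] report); }

        // Handler hosted by the NServiceBus generic host (Windows service).
        public class ReportRequestHandler : IHandleMessages<ReportRequest>
        {
            private readonly ICognosReportClient cognos;
            private readonly ICallbackClient callback;

            public ReportRequestHandler(ICognosReportClient cognos, ICallbackClient callback)
            {
                this.cognos = cognos;
                this.callback = callback;
            }

            public void Handle(ReportRequest message)
            {
                // Run the (potentially slow) report; NServiceBus throttles this by
                // limiting the number of workers draining the queue.
                byte[] report = cognos.RunReport(message.ReportId);

                // Deliver the result to the caller, echoing the correlation id so the
                // batch application can tie the response back to its request.
                callback.SendReport(message.CallbackUrl, message.CorrelationId, report);
            }
        }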


  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past Little Wonders posts can be found here.

    In the .NET 3.5 Framework (C# 3.0), Microsoft introduced anonymous types, which provide a way to create quick, compiler-generated types at the point of instantiation. These may seem trivial, but they are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope.

    Creating an Anonymous Type

    In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties, based on their names, number, types, and order as given at initialization. In addition to holding these properties, it is also given appropriate overridden implementations of Equals() and GetHashCode() that take all of the properties into account to correctly perform property comparisons and hashing. Also overridden is ToString(), which makes it easy to display the contents of an anonymous type instance in a fairly concise manner.

    To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type. So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this:

        var point = new { X = 13, Y = 7 };

    Note the similarity between anonymous type initialization and regular initialization. The main difference is that the compiler generates the type name and the properties (as read-only) based on the names and order provided, inferring their types from the expressions they are assigned to.

    It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type. This is important, because while these two instances share the same anonymous type:

        // same names, types, and order
        var point1 = new { X = 13, Y = 7 };
        var point2 = new { X = 5, Y = 0 };

    These similar ones do not:

        var point3 = new { Y = 3, X = 5 };        // different order
        var point4 = new { X = 3, Y = 5.0 };      // different type for Y
        var point5 = new { MyX = 3, MyY = 5 };    // different names
        var point6 = new { X = 1, Y = 2, Z = 3 }; // different count

    Limitations on Property Initialization Expressions

    The expression for a property in an anonymous type initialization cannot be null (though it can evaluate to null) or an anonymous function. For example, the following are illegal:

        // Null can't be used directly. Null reference of what type?
        var cantUseNull = new { Value = null };

        // Anonymous methods cannot be used.
        var cantUseAnonymousFxn = new { Value = () => Console.WriteLine("Can't.") };

    Note that the restriction on null is just that you can't use it directly as the expression, because otherwise how would the compiler determine the type? You can, however, use it indirectly, by assigning a typed variable with the value null, or by casting null to a specific type:

        string str = null;
        var fineIndirectly = new { Value = str };
        var fineCast = new { Value = (string)null };

    All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable.
    In these cases, when a field, property, or variable is used alone and you don't specify a property name, the new property takes the same name. For example:

        int variable = 42;

        // creates two properties named variable and Now
        var implicitProperties = new { variable, DateTime.Now };

    is the same type as:

        var explicitProperties = new { variable = variable, Now = DateTime.Now };

    But this only works if you are using an existing field, variable, or property directly as the expression. If you use a more complex expression, then the name cannot be inferred:

        // can't infer the name from variable * 2, must name explicitly
        var wontWork = new { variable * 2, DateTime.Now };

    In the example above, since we typed variable * 2, it is no longer just a variable, and thus we would have to assign the property a name explicitly.

    ToString() on Anonymous Types

    One of the more trivial overrides that an anonymous type provides is a ToString() method that prints the value of the instance in much the same format as it was initialized (with actual values instead of expressions, of course). For example, if you had:

        var point = new { X = 13, Y = 42 };

    And then print it out:

        Console.WriteLine(point.ToString());

    You will get:

        { X = 13, Y = 42 }

    While this isn't the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy-to-read format.

    Comparing Anonymous Type Instances

    Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes. For example, if we had the following three points:

        var point1 = new { X = 1, Y = 2 };
        var point2 = new { X = 1, Y = 2 };
        var point3 = new { Y = 2, X = 1 };

    If we compare point1 and point2 we'll see that Equals() returns true, because the overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same. In addition, because all equal objects should have the same hash code, we'll see that the hash codes evaluate the same as well:

        // true, same type, same values
        Console.WriteLine(point1.Equals(point2));

        // true, equal anonymous type instances always have same hash code
        Console.WriteLine(point1.GetHashCode() == point2.GetHashCode());

    However, if we compare point2 and point3 we get false. Even though the names, types, and values of the properties are the same, the order is not; thus they are two different types and cannot be equal. And, since they are not equal objects (even though they have the same values), there is a good chance their hash codes are different as well (though not guaranteed):

        // false, different types
        Console.WriteLine(point2.Equals(point3));

        // quite possibly false (was false on my machine)
        Console.WriteLine(point2.GetHashCode() == point3.GetHashCode());

    Using Anonymous Types

    Now that we've created instances of anonymous types, let's actually use them. The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type.
    The main thing, once again, to keep in mind is that the properties are read-only, so you cannot assign them a new value (note: this does not mean that instances referred to by a property are immutable – for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance:

        var point = new { X = 13, Y = 42 };

    We can get the properties as you'd expect:

        Console.WriteLine("The point is: ({0},{1})", point.X, point.Y);

    But we cannot alter the property values:

        // compiler error, properties are readonly
        point.X = 99;

    Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope. The only real choices are to pass them as object or dynamic. But really that is not the intention of anonymous types. If you find yourself needing to pass one outside of a given scope, you should really consider making a POCO (Plain Old CLR Object – i.e. a class that contains just properties to hold data, with little or no business logic) instead.

    Given that, why use them at all? Couldn't you always just create a POCO to represent every anonymous type you needed? Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to using anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type – but when that result is longer-lived and used outside of the current scope, consider a POCO instead.

    So what do we mean by intermediate results in a local context? A classic example would be filtering down results from a LINQ expression. For example, let's say we had a List<Transaction>, where Transaction is defined something like:

        public class Transaction
        {
            public string UserId { get; set; }
            public DateTime At { get; set; }
            public decimal Amount { get; set; }
            // ...
        }

    And let's say we had this data in our List<Transaction>:

        var transactions = new List<Transaction>
        {
            new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m },
            new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m },
            new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m },
            new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m },
            new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m },
            new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m },
            new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m },
            new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m },
            new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m },
        };

    So let's say we wanted to get the transactions for each day for each user. That is, for each day we'd want to see the transactions each user performed. We could do this very simply with a nice LINQ expression, without the need to create any POCOs:

        // group the transactions based on an anonymous type with properties UserId and Date:
        var byUserAndDay = transactions
            .GroupBy(tx => new { tx.UserId, tx.At.Date })
            .OrderBy(grp => grp.Key.Date)
            .ThenBy(grp => grp.Key.UserId);

    Now, those of you who have attempted to use custom classes as a grouping type before (with GroupBy(), Distinct(), etc.)
    may have discovered the hard way that LINQ gets a lot of its speed by relying not only on Equals(), but also on GetHashCode() for the type you are grouping by. Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations, or you won't get the results you were expecting (the default implementations of Equals() and GetHashCode() are based on reference equality and reference identity, respectively).

    As we said before, anonymous types already do these critical overrides for you, which makes them even more convenient to use! Instead of creating a small POCO to handle this grouping, and then having to implement custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with implementations that take into account the values of all of the properties. Now we can look at our results:

        foreach (var group in byUserAndDay)
        {
            // the group's Key is an instance of our anonymous type
            Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date);

            // each grouping contains a sequence of the items.
            foreach (var tx in group)
            {
                Console.WriteLine("\t{0}", tx.Amount);
            }
        }

    And see:

        Jaime on 06/18/2012 did:
            -100.00
            300.00

        John on 06/19/2012 did:
            300.00

        Jim on 06/20/2012 did:
            900.00

        Jane on 06/21/2012 did:
            200.00
            -50.00

        Jim on 06/21/2012 did:
            2200.00
            -1100.00

        John on 06/21/2012 did:
            -10.00

    Again, sure, we could have just built a POCO to do this and given it appropriate Equals() and GetHashCode() methods, but that would have bloated our code with extra lines and been more difficult to maintain if the properties change.

    Summary

    Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result. While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and use intermediary values for further expressions and analysis.

    Anonymous types are defined by the compiler based on the number, type, names, and order of the properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()), which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc.

    Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ
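    For readers who want to run the grouping example end to end, here is a minimal self-contained console program assembled from the snippets above. The Main wrapper, using directives, and trimmed sample data are mine; the technique follows the article's example.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Transaction
        {
            public string UserId { get; set; }
            public DateTime At { get; set; }
            public decimal Amount { get; set; }
        }

        public static class Program
        {
            public static void Main()
            {
                var transactions = new List<Transaction>
                {
                    new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m },
                    new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m },
                    new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m },
                };

                // The anonymous type supplies value-based Equals()/GetHashCode() for free,
                // which is exactly what GroupBy() needs to bucket the keys correctly.
                var byUserAndDay = transactions
                    .GroupBy(tx => new { tx.UserId, tx.At.Date })
                    .OrderBy(grp => grp.Key.Date)
                    .ThenBy(grp => grp.Key.UserId);

                foreach (var group in byUserAndDay)
                {
                    Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date);
                    foreach (var tx in group)
                    {
                        Console.WriteLine("\t{0}", tx.Amount);
                    }
                }
            }
        }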


  • Ubuntu Software Center does not allow changes to software sources

    - by Michael Goldshteyn
    The checkboxes that appear to be changeable under the Edit / Software Sources dialog box cannot be changed. I click on them and they just turn gray and stay at their current setting.

    Update: When I run software-center from a terminal window and try to change one of the checkbox settings, I get:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/softwareproperties/gtk/SoftwarePropertiesGtk.py", line 649, in on_isv_source_toggled
            self.backend.ToggleSourceUse(str(source_entry))
          File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 143, in __call__
            **keywords)
          File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 630, in call_blocking
            message, timeout)
        dbus.exceptions.DBusException: com.ubuntu.SoftwareProperties.PermissionDeniedByPolicy: com.ubuntu.softwareproperties.applychanges

    This happens instead of the program properly prompting me for a password (for root privileges).


  • C#/.NET Little Pitfalls: The Dangers of Casting Boxed Values

    - by James Michael Hare
    Starting a new series to parallel the Little Wonders series. In this series, I will examine some of the small pitfalls that can occasionally trip up developers.

    Introduction: Of Casts and Conversions

    What happens when we try to assign between an int and a double, and vice versa?

        double pi = 3.14;
        int theAnswer = 42;

        // implicit widening conversion, compiles!
        double doubleAnswer = theAnswer;

        // implicit narrowing conversion, compiler error!
        int intPi = pi;

    As you can see from the comments above, a conversion from a value type where there is no potential data loss can be done with an implicit conversion. However, when converting from one value type to another may result in a loss of data, you must make the conversion explicit so the compiler knows you accept the risk. That is why the conversion from double to int will not compile implicitly; we can make the conversion explicit by adding a cast:

        // explicit narrowing conversion using a cast: compiles,
        // but the result may lose data:
        int intPi = (int)pi;

    So for value types, both implicit and explicit conversions convert the original value to a new value of the given type. With reference types, however, this is not the case. First of all, when you perform a widening or narrowing conversion, you don't really convert the instance of the object; you just convert the reference itself to the wider or narrower reference type, and both the original and new references refer back to the same object.

    Secondly, widening and narrowing for reference types refer to going up and down the class hierarchy rather than to precision as in value types. That is, a narrowing conversion for a reference type means you are going down the class hierarchy (for example, from Shape to Square), whereas a widening conversion means you are going up the class hierarchy (from Square to Shape).

        var square = new Square();

        // implicitly converts because all squares are shapes
        // (that is, all subclasses can be referenced by a superclass reference)
        Shape myShape = square;

        // implicit conversion not possible, not all shapes are squares!
        // (that is, not all superclasses can be referenced by a subclass reference)
        Square mySquare = (Square)myShape;

    We had to cast the Shape back to Square because at that point the compiler has no way of knowing until runtime whether the Shape in question is truly a Square. But because the compiler knows that it's possible for a Shape to be a Square, it will compile. However, if the object referenced by myShape is not truly a Square at runtime, you will get an invalid cast exception.

    Of course, there are other forms of conversion as well, such as user-specified conversions and helper-class conversions, which are beyond the scope of this post. The main thing we want to focus on is this seemingly innocuous casting mechanism of widening and narrowing conversions that we depend on every day and that, in some cases, can bite us if we don't fully understand what is going on!

    The Pitfall: Conversions on Boxed Value Types Can Fail

    What if you saw the following code and – knowing nothing else – you were asked if it was legal or not. What would you think?

        // assuming x is defined above this and this
        // assignment is syntactically legal.
        x = 3.14;

        // convert 3.14 to int.
        int truncated = (int)x;

    You may think that x is obviously a double (it can't be a float, since 3.14 is a double literal), but that assumption is exactly where the danger lies. Our x could be dynamic, there could be user-defined conversions in play, or – an even simpler option that can often bite us – x could be object:

        object x;

        x = 3.14;

        int truncated = (int)x;

    On the surface, this seems fine. We have a double, and we place it into an object, which can be done implicitly through boxing (no cast) because all types inherit from object. Then we cast it to int. This should theoretically be possible, because we know we can explicitly convert a double to an int through a conversion that involves truncation.

    But here's the pitfall: when casting an object to another type, we are casting a reference type, not a value type! This means the runtime checks whether the value boxed and referred to by x is of type int or derived from type int. Since it obviously isn't (it's a double, after all), we get an invalid cast exception!

    Now, you may say this looks awfully contrived, but in truth we can run into this a lot if we're not careful. Consider using an IDataReader to read from a database, and then attempting to read a column of a particular type from a result row:

        using (var connection = new SqlConnection("some connection string"))
        using (var command = new SqlCommand("select * from employee", connection))
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // if the salary is not an int32 in the SQL database, this is an error!
                // doesn't matter if short, long, double, float -- reader[] returns object!
                total += (int)reader["annual_salary"];
            }
        }

    Notice that since the reader's indexer returns object, if we attempt to convert using a cast, we have to make darn sure we cast to the true, actual type, or this will fail! If the SQL database column is a double, float, short, etc., this will fail at runtime with an invalid cast exception, because it attempts to convert the object reference!

    So how do you get around this? There are two ways: you can first cast the object to its actual type (double) and then perform a narrowing cast on the value to int, or you can use a helper class like Convert, which analyzes the actual run-time type and will perform a conversion as long as the type implements IConvertible.

        object x;

        x = 3.14;

        // if you want to cast, you must first cast out of object to double,
        // then convert with a narrowing cast.
        int truncated = (int)(double)x;

        // or you can call a helper class like Convert, which examines the
        // runtime type of the value being converted
        int anotherTruncated = Convert.ToInt32(x);

    Summary

    You should always be careful, when performing a conversion cast from a value boxed in an object, that you are actually casting to the true type (or a sub-type). Since casting from object is a narrowing of the reference, be careful that you either know the exact, explicit type you expect to be held in the object, or avoid the cast and use a helper class to perform a safe conversion to the type you desire.

    Technorati Tags: C#,.NET,Pitfalls,Little Pitfalls,BlackRabbitCoder
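    On newer C# compilers there is a third option worth knowing about: a type pattern tests the runtime type and unboxes in one step. This is my addition rather than part of the original article, and it assumes C# 7.0 or later:

        using System;

        static class BoxingDemo
        {
            static void Main()
            {
                object x = 3.14;

                // 'is' with a type pattern checks the boxed type and unboxes into a typed local.
                if (x is double d)
                {
                    Console.WriteLine((int)d);   // prints 3: d is a genuine double, the cast truncates
                }
                else if (x is int i)
                {
                    Console.WriteLine(i);        // covers the case where an int was boxed instead
                }
            }
        }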


  • Silverlight Cream for November 21, 2011 -- #1171

    - by Dave Campbell
    In this Issue: Colin Eberhardt, Sumit Dutta, Morten Nielsen, Jesse Liberty, Jeff Blankenburg (2), Brian Noyes, and Tony Champion.

    Above the Fold:
    Silverlight: "PV Basics: Client-side Collections" – Tony Champion
    WP7: "Pushpin Clustering with the Windows Phone 7 Bing Map control" – Colin Eberhardt

    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.

    From SilverlightCream.com:

    Pushpin Clustering with the Windows Phone 7 Bing Map control – Colin Eberhardt is back discussing pushpins for a Bing Maps app on WP7 and provides a utility class that clusters pushpins, allowing you to render thousands of pins in an app... all the explanation and all the code.

    Part 22 - Windows Phone 7 - Tile Push Notification – Part 22 in Sumit Dutta's WP7 series is about Tile Push Notification... nice tutorial with all the code listed.

    Correctly displaying your current location – Morten Nielsen demonstrates formatting the information from the GPS on your WP7 into something intelligible and useful.

    Spiking the Pomodoro Timer – Jesse Liberty put up a quick and dirty version of a Pomodoro timer for WP7.1 to explore the technical challenges of the Full Stack Phase 2 he's cranking up.

    31 Days of Mango | Day #11: Network – Jeff Blankenburg's latest in his 31 Days quest of WP7.1 is about the NetworkInformation namespace, which gives you all sorts of info on the user's device network connection availability, type, etc.

    31 Days of Mango | Day #11: Live Tiles – Jeff Blankenburg takes off on Live Tiles for Day 11... big topic for a one-day post, but he takes off on it... updating and Live Tiles too.

    Working with Prism 4 Part 2: MVVM Basics and Commands – Brian Noyes has part 2 of his Prism/MVVM series up at SilverlightShow... very nice tutorial on the basics of getting a view and viewmodel up, and setting up an ICommand to launch an Edit View... plus the code to peruse.

    PV Basics: Client-side Collections – Tony Champion is starting a series on Silverlight 5 and PivotViewer... First up is some basics in dealing with the control in SL5 and talking about client-side collections... great informative tutorial and all the code.


  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection for any given situation. As hard as that may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post looks at breaking down any confusion between the collections and the situations in which each excels.

    We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.

    The Generic Collections: System.Collections.Generic namespace

    The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will suit 99% of your needs right up front.

    It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons: depending on how you are using the collections, it's completely possible that synchronization is not required, or that it is needed at a higher level than simple method-level synchronization. Furthermore, concurrent read-only access (all writes done at the beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections.

    So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to make it easier to compare and contrast the different collections.

    The Associative Collection Classes

    Associative collections store a value in the collection by providing a key that is used to add/remove/look up the item. Hence, the container associates the value with the key. These collections are most useful when you need to look up or manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order.

    The Dictionary<TKey,TValue> is probably the most used associative container class. It is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals(), or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time – O(1) – which means that no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order.

    The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted.
    The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order; thus insert/delete/lookup times in a sorted dictionary are logarithmic – O(log n). Generally speaking, with logarithmic time you can double the size of the collection and it only has to perform one extra comparison to find an item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key.

    The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear – O(n) – because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than in a SortedDictionary. Once again, I'd use this in situations where you want fast lookups, want to maintain the collection in order by the key, and where insertions and deletions are rare.

    The Non-Associative Containers

    The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored, or some other means (such as an index), to manipulate the collection.

    The List<T> is a basic contiguous-storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in a List<T> by index very quickly. However, inserting and removing at the beginning or middle of a List<T> is very costly, because you must shift all the items up or down as you insert or delete. Adding and removing at the end of a List<T>, however, is an amortized constant operation – O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed.

    The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T>, unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense.

    The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check that the vendor code belongs to the set of vendor codes you handle.
    In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, as with Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction.

    The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adds/removes/lookups are logarithmic – O(log n) – but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T>, or you need to supply an external IComparer<T>.

    Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added to and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T>, on the other hand, is a first-in-first-out (FIFO) container, which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines.

    So those are the basic collections. Let's summarize what we've learned in a quick reference list:

    Dictionary – Ordered: no; Contiguous storage: yes; Direct access: via key; Lookup: O(1); Manipulation: O(1). Best for high-performance lookups.
    SortedDictionary – Ordered: yes; Contiguous storage: no; Direct access: via key; Lookup: O(log n); Manipulation: O(log n). Compromise between Dictionary speed and ordering; uses a binary search tree.
    SortedList – Ordered: yes; Contiguous storage: yes; Direct access: via key; Lookup: O(log n); Manipulation: O(n). Very similar to SortedDictionary, except the tree is implemented in an array, so it has faster lookup on preloaded data but slower loads.
    List – Ordered: no; Contiguous storage: yes; Direct access: via index; Lookup: O(1) by index, O(n) by value; Manipulation: O(n). Best for smaller lists where direct access is required and no ordering is needed.
    LinkedList – Ordered: no; Contiguous storage: no; Direct access: no; Lookup: O(n) by value; Manipulation: O(1). Best for lists where inserting/deleting in the middle is common and no direct access is required.
    HashSet – Ordered: no; Contiguous storage: yes; Direct access: via key; Lookup: O(1); Manipulation: O(1). Unique unordered collection, like a Dictionary except the key and value are the same object.
    SortedSet – Ordered: yes; Contiguous storage: no; Direct access: via key; Lookup: O(log n); Manipulation: O(log n). Unique ordered collection, like SortedDictionary except the key and value are the same object.
    Stack – Ordered: no; Contiguous storage: yes; Direct access: top only; Lookup: O(1) at top; Manipulation: O(1)*. Essentially the same as List<T> except processed only as LIFO.
    Queue – Ordered: no; Contiguous storage: yes; Direct access: front only; Lookup: O(1) at front; Manipulation: O(1). Essentially the same as List<T> except processed only as FIFO.

    The Original Collections: System.Collections namespace

    The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact, they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention each original collection and its generic equivalent:

    ArrayList – A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
    Hashtable – Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
    Queue – First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
    SortedList – Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
    Stack – Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

    In general, the older collections are not type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries.

    The Concurrent Collections: System.Collections.Concurrent namespace

    The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each, with a link to a blog post I did on each of them:

    ConcurrentQueue – Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue.
    ConcurrentStack – Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue.
    ConcurrentBag – Thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection.
    ConcurrentDictionary – Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary.
    BlockingCollection – Wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read; writers can block until space is available to write (if bounded). For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection.

    Summary

    The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower, or even harder to maintain, if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation.

    Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.

    Technorati Tags: C#,.NET,Collections,Generic,Concurrent,Dictionary,List,Stack,Queue,SortedList,SortedDictionary,HashSet,SortedSet
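    To make the main trade-off above concrete, here is a small self-contained sketch (my own illustration, not from the article) contrasting Dictionary<TKey,TValue> and SortedDictionary<TKey,TValue>: the former for raw lookup speed, the latter when ordered enumeration matters.

        using System;
        using System.Collections.Generic;

        static class CollectionChoiceDemo
        {
            static void Main()
            {
                // Dictionary: amortized O(1) lookups, but enumeration order is undefined.
                var byOrderId = new Dictionary<int, string>();
                byOrderId.Add(1042, "Widgets");
                byOrderId.Add(998, "Gadgets");
                Console.WriteLine(byOrderId[1042]);   // fast hash-based lookup

                // SortedDictionary: O(log n) operations, but iteration is ordered by key.
                var sorted = new SortedDictionary<int, string>(byOrderId);
                foreach (KeyValuePair<int, string> pair in sorted)
                {
                    Console.WriteLine("{0}: {1}", pair.Key, pair.Value); // 998 first, then 1042
                }
            }
        }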


  • Do NOT change "Copy Local" project references to false unless you understand the consequences.

    - by Michael Freidgeim
    To optimize the performance of a Visual Studio build, I've found multiple recommendations to change the CopyLocal property for dependent DLLs to false, e.g.:

    From http://stackoverflow.com/questions/690033/best-practices-for-large-solutions-in-visual-studio-2008 – "CopyLocal? For sure turn this off."
    From http://stackoverflow.com/questions/280751/what-is-the-best-practice-for-copy-local-and-with-project-references – "Always set the Copy Local property to false and enforce this via a custom msbuild step."
    From http://codebetter.com/patricksmacchia/2007/06/20/benefit-from-the-c-and-vb-net-compilers-perf/ – "My advice is to always set 'Copy Local' to false."

    Some time ago we tried changing the setting to false and found that it caused problems for the deployment of top-level projects. Recently I followed the suggestion again and changed the settings for middle-level projects. It didn't cause immediate issues, but I was warned by Readify consultant Colin Savage about possible errors during deployments. I hadn't undone the changes immediately, and we found a few issues during testing.

    There are many scenarios in which you need to leave Copy Local set to true. The concerns are highlighted in some Stack Overflow answers, but those answers have a small number of votes.

    Top-level projects: set Copy Local = true. First of all, false doesn't work correctly for top-level projects, i.e. executables or web sites. As pointed out in the answer http://stackoverflow.com/a/6529461/52277, for all the references in the project at the top, set Copy Local = true. Alternatively, you have to change the output directory as described in http://www.simple-talk.com/dotnet/.net-framework/partitioning-your-code-base-through-.net-assemblies-and-visual-studio-projects/: if you set Copy Local = false, VS will, unless you tell it otherwise, place each assembly alone in its own .\bin\Debug directory. Because of this, you will need to configure VS to place assemblies together in the same directory. To do so, for each VS project, go to VS > Project Properties > Build tab > Output path, and set the output path to ..\bin\Debug for the debug configuration and ..\bin\Release for the release configuration.

    Second-level dependencies: set Copy Local = true. Another example where CopyLocal = false fails at run time is when the top-level assembly doesn't directly reference one of its indirect dependencies. E.g. top-level assembly A has a reference to assembly B with CopyLocal = true, but assembly B has a reference to assembly C with CopyLocal = false. Most likely assembly C will be missing at runtime and will cause errors. See http://stackoverflow.com/questions/602765/when-should-copy-local-be-set-to-true-and-when-should-it-not?lq=1: "Copy local is important for deployment scenarios and tools. As a general rule you should use CopyLocal=True." Unfortunately there are some quirks, and CopyLocal won't necessarily work as expected for assembly references in secondary assemblies structured as shown below:

    MainApp.exe
        MyLibrary.dll
            ThirdPartyLibrary.dll (if in the GAC, CopyLocal won't copy it to the MainApp bin folder)

    This makes xcopy deployments difficult.

    Reflection-loaded DLL dependencies: set Copy Local = true. E.g. a user can see the error "System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information." The fix for the issue is recommended in http://stackoverflow.com/a/6200173/52277: "I solved this issue by setting the Copy Local attribute of my project's references to true."

    In general, the problems of investigating deployment issues may outweigh the benefits of reduced build time. Setting Copy Local to false without considering deployment issues is not a good idea.
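    For reference, Copy Local surfaces in the project file as the Private metadata on a reference, which is what a custom MSBuild step would inspect or enforce. A minimal sketch of the relevant fragment (the assembly name and hint path are placeholders of mine, not from the post):

        <ItemGroup>
          <Reference Include="MyLibrary">
            <HintPath>..\libs\MyLibrary.dll</HintPath>
            <!-- Private is the project-file name for Copy Local; True copies the DLL to the output folder. -->
            <Private>True</Private>
          </Reference>
        </ItemGroup>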


  • I want to hit Apex SQL with a big stick

    - by Michael Stephenson
    <Whinge> Thought I'd just have a little whinge about this product, which caused me a load of grief the other day.

    The background was that my development machine had a completely full hard disk which I needed to sort out. Upon investigation I found that the msdb database had managed to get very large. This was caused because a long time ago (and I can't even remember why) I tried out Apex SQL. After a few days I decided to uninstall it and thought nothing more of it. What I didn't realise was that uninstalling it doesn't actually uninstall it (and it doesn't inform you about this); some assemblies were left on my machine. Every time SQL Server was running, it started the Apex SQL connection monitor, which then ran in the background and regularly recorded information in the msdb database. Over time it had recorded enough to fill the disk.

    The article below explains how to remove this fully, so if you're having the problem then try this out:
    http://knowledgebase.apexsql.com/2007/08/how-to-uninstall-apexsqlconnectionmonit_09.htm

    Once this was sorted out, it's interesting to read the above article, because I just don't think the approach used by the vendor of this software is a very good one. So for the Apex team, I just wanted to pass on a thought: if I want to uninstall your product, you should tell me if stuff is left on the machine, especially if a process will be running which is going to fill my machine with useless data. </Whinge>


  • Silverlight Cream for April 25, 2010 -- #847

    - by Dave Campbell
    In this Issue: Michael Washington, David Poll, Andrea Boschin, Kunal Chowdhury (2), Lee, and Chad Campbell.

    Shoutout: Not at all Silverlight, but Kirupa has a great article up about Elastic Collisions.

    From SilverlightCream.com:

    Easily decouple your MVVM ViewModel from your Model using RX Extensions – Michael Washington continues with his Simplified MVVM and uses Rx to allow him to put web service methods in the model and call them from the ViewModel.

    A "refreshing" Authentication/Authorization experience with Silverlight 4 – David Poll expands on his previous post and demonstrates returning the user to the page they were on prior to the login, just the way you'd like it.

    Using a PollingDuplex service to handle long running activities – Andrea Boschin is back discussing PollingDuplex again, this time discussing getting updates from a long-running process.

    Introduction to Silverlight - Silverlight tutorials Chapter 1 – Kunal Chowdhury has an intro-to-Silverlight tutorial that looks good... if you're just getting started, here's a place to look.

    Introduction to Silverlight Application Development - Silverlight tutorial Chapter 2 – In his 2nd introductory tutorial, Kunal Chowdhury gets into code and discusses User Controls.

    Drag and Drop grouping in Datagrid – Lee has a post up where he's taken a DataGrid and reproduced some of the features of the 3rd-party controls, specifically dragging column headers to an area above the grid to sort the grid.

    Silverlight – HTTP request to [Url] was aborted. ...local channel being closed while the request was still in progress – Chad Campbell addresses timeout problems you may hit when connecting to your WCF service.


  • How to run 'apt-get install' to install all dependencies?

    - by michael
    I am running this on an Ubuntu server installation:

        sudo apt-get install git-core gnupg flex bison gperf build-essential \
          zip curl libc6-dev libncurses5-dev:i386 x11proto-core-dev \
          libx11-dev:i386 libreadline6-dev:i386 libgl1-mesa-glx:i386 \
          libgl1-mesa-dev g++-multilib mingw32 openjdk-6-jdk tofrodos \
          python-markdown libxml2-utils xsltproc zlib1g-dev:i386

    but I am getting this:

        Reading package lists...
        Building dependency tree...
        Reading state information...
        curl is already the newest version.
        gnupg is already the newest version.
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         build-essential : Depends: gcc (>= 4:4.4.3) but it is not going to be installed
                           Depends: g++ (>= 4:4.4.3) but it is not going to be installed
         g++-multilib : Depends: cpp (>= 4:4.7.2-1ubuntu2) but it is not going to be installed
                        Depends: gcc-multilib (>= 4:4.7.2-1ubuntu2) but it is not going to be installed
                        Depends: g++ (>= 4:4.7.2-1ubuntu2) but it is not going to be installed
                        Depends: g++-4.7-multilib (>= 4.7.2-1~) but it is not going to be installed

    How can I fix this?


  • Configuration setting of HttpWebRequest.Timeout value

    - by Michael Freidgeim
    I wanted to set HttpWebRequest.Timeout in configuration on the client. I was surprised that MS doesn't provide this as part of .NET configuration. (Answer in the http://forums.silverlight.net/post/77818.aspx thread: "Unfortunately specifying the timeout is not supported in current version. We may support it in the future release.")

    I added it to the appSettings section of app.config and read it in a method of my HttpWebRequestHelper class:

        // The Method property can be set to any of the HTTP 1.1 protocol verbs:
        // GET, HEAD, POST, PUT, DELETE, TRACE, or OPTIONS.
        public static HttpWebRequest PrepareWebRequest(string sUrl, string Method, CookieContainer cntnrCookies)
        {
            HttpWebRequest webRequest = WebRequest.Create(sUrl) as HttpWebRequest;
            webRequest.Method = Method;
            webRequest.ContentType = "application/x-www-form-urlencoded";
            webRequest.CookieContainer = cntnrCookies;

            // default 100 sec -- http://blogs.msdn.com/b/buckh/archive/2005/02/01/365127.aspx
            webRequest.Timeout = ConfigurationExtensions.GetAppSetting("HttpWebRequest.Timeout", 100000);

            /*
                // try to change - from http://www.codeproject.com/csharp/ClientTicket_MSNP9.asp
                webRequest.AllowAutoRedirect = false;
                webRequest.Pipelined = false;
                webRequest.KeepAlive = false;
                webRequest.ProtocolVersion = new Version(1, 0); // protocol 1.0 works better than 1.1 ??
            */

            // MNF 26/5/2005 Some web servers expect UserAgent to be specified,
            // so let's say it's IE6
            webRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 1.1.4322)";

            DebugOutputHelper.PrintHttpWebRequest(webRequest, TraceOutputHelper.LineWithTrace(""));
            return webRequest;
        }

    Related link: http://stackoverflow.com/questions/387247/i-need-help-setting-net-httpwebrequest-timeoutv
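    If you don't have a helper like the author's ConfigurationExtensions class, the same idea can be sketched with the standard ConfigurationManager. The key name "HttpWebRequest.Timeout" follows the post; the TryParse fallback is my addition:

        using System.Configuration; // requires a reference to System.Configuration.dll

        public static class TimeoutSettings
        {
            // Reads an appSettings value like:
            //   <add key="HttpWebRequest.Timeout" value="100000" />
            // falling back to the supplied default when missing or malformed.
            public static int GetTimeout(int defaultMilliseconds)
            {
                string raw = ConfigurationManager.AppSettings["HttpWebRequest.Timeout"];
                int parsed;
                return int.TryParse(raw, out parsed) ? parsed : defaultMilliseconds;
            }
        }

    Usage would then be: webRequest.Timeout = TimeoutSettings.GetTimeout(100000);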


  • UK Connected Systems User Group Recap from July

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/07/29/153557.aspx

    Just a note to recap some of the discussion and activity from the recent UK Connected Systems User Group in July.

    AppFx.ServiceBus: We discussed some of the implementation details of the AppFx.ServiceBus CodePlex project. This brought up some discussion around people's experiences with Windows Azure Service Bus and how this CodePlex project can help. The slides from this presentation are available at the following location: https://appfxservicebus.codeplex.com/downloads/get/711481

    BizTalk Maturity Assessment: The session around the BizTalk Maturity Assessment brought up some interesting discussions. I have created a video about the content from this session, which is available online. To find out more about the BizTalk Maturity Assessment refer to http://www.biztalkmaturity.com/, and to view the video refer to http://www.youtube.com/watch?v=MZ1eC5SCDog

    Other News:

    Hybrid Organisation Event – The next user group session will be a full-day event on the 11th September called The Hybrid Organisation. We have some great sessions lined up, and you can find out more about this event at the following link: http://ukcsug-hybridintegration-sept2013.eventbrite.com/

    Saravana's BizTalk Services Videos – Saravana has recently published some BizTalk Services videos that he wanted to share with everyone: http://blogs.biztalk360.com/windows-azure-biztalk-serviceshello-word-and-hybrid-scenarios-demo-videos/

    Hope to see everyone soon, and let me know if anyone has any questions.

    Regards, Mike


  • New DMV… not yet

    - by Michael Zilberstein
    Downloaded and installed the new toy, and while reading BOL, stumbled upon a new, extremely useful DMV: sys.dm_exec_query_profiles. This DMV enables a DBA to monitor query progress while the query is being executed. Counters in the DMV are per operation per thread, so we'll be able to monitor in real time which thread (even for parallel processing) processes which node in the plan, or find heavy operations "post mortem". We all know the uncomfortable feeling when some heavy query runs and the boss starts asking...
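    For the curious, querying it looks something like the sketch below. This is my own illustration, not from the post: run it from a separate session while the query of interest executes, and note that in this release rows generally appear only for sessions that have profiling enabled (e.g. via SET STATISTICS PROFILE ON or SET STATISTICS XML ON).

        -- Watch per-operator progress of currently executing queries (a sketch).
        SELECT session_id,
               node_id,
               physical_operator_name,
               row_count,            -- rows produced so far by this operator/thread
               estimate_row_count    -- optimizer's estimate, handy for spotting skew
        FROM sys.dm_exec_query_profiles
        ORDER BY session_id, node_id;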


  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
    I was looking through my group's C# coding standards the other day, and there were a couple of legacy items in there that caught my eye. They had been passed down from committee to committee so many times that no one even thought to second-guess or test them for a long time. It's yet another example of how micro-optimizations can often get the best of us and cause us to write code that is not as maintainable as it could be, for the sake of squeezing an extra ounce of performance out of our software.

    The two standards in question were these, in paraphrase:

    - Prefer StringBuilder or string.Format() to string concatenation.
    - Prefer string.Equals() with the case-insensitive option to string.ToUpper().Equals().

    Now some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it's always worth repeating and trying these yourself. So let's dig in.

    The first test was a pretty standard one. When concatenating strings, what is the best choice: StringBuilder, string concatenation, or string.Format()?

    Before we begin, I read in a number of iterations from the console and a length for each string to generate. Then I generate that many random strings of the given length and an array to hold the results. Why am I so keen to keep the results? Because I want to be able to snapshot the memory and don't want garbage collection to collect the strings, hence the array to hold them. I also didn't want the random strings to be part of the allocation measurement, so I pre-allocate them and the array up front, before the snapshot. So in the code snippets below:

    - num – number of iterations.
    - strings – array of randomly generated strings.
    - results – array to hold the results of the concatenation tests.
    - timer – a System.Diagnostics.Stopwatch instance to time code execution.
    - start – beginning memory size.
    - stop – ending memory size.
    - after – memory size after the final GC.

    First, let's look at the concatenation loop:

        // build num strings using concatenation.
        for (int i = 0; i < num; i++)
        {
            results[i] = "This is test #" + i + " with a result of " + strings[i];
        }

    Pretty standard, right? Next, string.Format():

        // build strings using string.Format()
        for (int i = 0; i < num; i++)
        {
            results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]);
        }

    Finally, StringBuilder:

        // build strings using StringBuilder
        for (int i = 0; i < num; i++)
        {
            var builder = new StringBuilder();
            builder.Append("This is test #");
            builder.Append(i);
            builder.Append(" with a result of ");
            builder.Append(strings[i]);
            results[i] = builder.ToString();
        }

    I take each of these loops and time them using a block like this:

        // get the total amount of memory used; true tells it to run GC first.
        start = System.GC.GetTotalMemory(true);

        // restart the timer
        timer.Reset();
        timer.Start();

        // *** code to time and measure goes here. ***

        // get the current amount of memory, stop the timer, then get memory after GC.
        stop = System.GC.GetTotalMemory(false);
        timer.Stop();
        after = System.GC.GetTotalMemory(true);

    So let's look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations:

        Operator +      - Time: 547, Memory: 56104540/55595960 - 500000
        string.Format() - Time: 749, Memory: 57295812/55595960 - 500000
        StringBuilder   - Time: 608, Memory: 55312888/55595960 - 500000

    Egad!
string.Format brings up the rear and + triumphs, well, at least in terms of speed.  The concat burns more memory than StringBuilder but less than string.Format().  This shows two main things: StringBuilder is not always the panacea many think it is. The difference between any of the three is miniscule! The second point is extremely important!  You will often here people who will grasp at results and say, “look, operator + is 10% faster than StringBuilder so always use StringBuilder.”  Statements like this are a disservice and often misleading.  For example, if I had a good guess at what the size of the string would be, I could have preallocated my StringBuffer like so:   1: for (int i = 0; i < num; i++) 2: { 3: // pre-declare StringBuilder to have 100 char buffer. 4: var builder = new StringBuilder(100); 5: builder.Append("This is test #"); 6: builder.Append(i); 7: builder.Append(" with a result of "); 8: builder.Append(strings[i]); 9: results[i] = builder.ToString(); 10: }   Now let’s look at the times: 1: Operator + - Time: 551, Memory: 56104412/55595960 - 500000 2: string.Format() - Time: 753, Memory: 57296484/55595960 - 500000 3: StringBuilder - Time: 525, Memory: 59779156/55595960 - 500000   Whoa!  All of the sudden StringBuilder is back on top again!  But notice, it takes more memory now.  This makes perfect sense if you examine the IL behind the scenes.  Whenever you do a string concat (+) in your code, it examines the lengths of the arguments and creates a StringBuilder behind the scenes of the appropriate size for you. But even IF we know the approximate size of our StringBuilder, look how much less readable it is!  That’s why I feel you should always take into account both readability and performance.  After all, consider all these timings are over 500,000 iterations.   That’s at best  0.0004 ms difference per call which is neglidgable at best.  The key is to pick the best tool for the job.  What do I mean?  Consider these awesome words of wisdom: Concatenate (+) is best at concatenating.  StringBuilder is best when you need to building. Format is best at formatting. Totally Earth-shattering, right!  But if you consider it carefully, it actually has a lot of beauty in it’s simplicity.  Remember, there is no magic bullet.  If one of these always beat the others we’d only have one and not three choices. The fact is, the concattenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of inderterminant length.  Use it in those times when you are looping till you hit a stop condition and building a result and it won’t steer you wrong. String.Format seems to be the looser from the stats, but consider which of these is more readable.  Yes, ignore the fact that you could do this with ToString() on a DateTime.  1: // build a date via concatenation 2: var date1 = (month < 10 ? string.Empty : "0") + month + '/' 3: + (day < 10 ? 
string.Empty : "0") + '/' + year; 4:  5: // build a date via string builder 6: var builder = new StringBuilder(10); 7: if (month < 10) builder.Append('0'); 8: builder.Append(month); 9: builder.Append('/'); 10: if (day < 10) builder.Append('0'); 11: builder.Append(day); 12: builder.Append('/'); 13: builder.Append(year); 14: var date2 = builder.ToString(); 15:  16: // build a date via string.Format 17: var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year); 18:  So the strength in string.Format is that it makes constructing a formatted string easy to read.  Yes, it’s slower, but look at how much more elegant it is to do zero-padding and anything else string.Format does. So my lesson is, don’t look for the silver bullet!  Choose the best tool.  Micro-optimization almost always bites you in the end because you’re sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time. I love the rules of optimization.  They’ve been stated before in many forms, but here’s how I always remember them: For Beginners: Do not optimize. For Experts: Do not optimize yet. It’s so true.  Most of the time on today’s modern hardware, a micro-second optimization at the sake of readability will net you nothing because it won’t be your bottleneck.  Code for readability, choose the best tool for the job which will usually be the most readable and maintainable as well.  Then, and only then, if you need that extra performance boost after profiling your code and exhausting all other options… then you can start to think about optimizing.
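
    As for the second standard, here is a minimal sketch of the two comparison styles (the variables first and second are hypothetical, assumed to be non-null strings):

        // Preferred: compares in place without allocating temporary strings.
        bool same = string.Equals(first, second, StringComparison.OrdinalIgnoreCase);

        // Discouraged: each ToUpper() allocates a new string just to throw away,
        // and culture-sensitive casing (e.g. the Turkish 'I') can surprise you.
        bool sameButSlower = first.ToUpper().Equals(second.ToUpper());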

    Read the article

  • Life Technologies: Making Life Easier to Manage

    - by Michael Snow
    When we’re thinking about customer engagement, we’re acutely aware of all the forces at play competing for our customer’s attention. Solutions that make life easier for our customers draw attention to themselves. We tend to engage more when there is a distinct benefit and we can take a deep breath and accept that there is hope in the world and everything isn’t designed to frustrate us and make our lives miserable. (sigh…) When products are designed to automate processes that were consuming hours of our time with no relief in sight, they deserve to be recognized. One of our recent Oracle Fusion Middleware Innovation Award winners in the WebCenter category, Life Technologies, has recently posted a video promoting their award-winning solution. The Oracle Innovation Awards are part of the overall Oracle Excellence Awards given to customers for innovation with Oracle products. More info here. Their award nomination included this description:

    Life Technologies delivered the My Life Service Portal as part of a larger Digital Hub strategy. This Portal is the first of its kind in the biotechnology service-providing industry. The Portal provides access to Life Technologies’ cloud-based service monitoring system, where all customer-deployed instruments can be remotely monitored and proactively repaired. The Portal delivers alerts from these cloud-based monitoring services directly to the customer and to Life Technologies Field Engineers, and it provides insight into the instruments and services customers purchased for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs. The Portal not only benefits Life Technologies’ internal sales and service teams but gives customers a central place to track all pertinent instrument information, including:

    - instrument service history
    - instrument status and previous activities
    - instrument performance analytics
    - planned service visits
    - warranty/contract information
    - discussion forums
    - social networks for lab management and collaboration
    - alerts and notifications on all of the above
    - team scheduling for instrument usage
    - promotion of optional reagents required to keep instruments performing

    From their website: The Life Technologies Instruments & Services Portal Helps You Save Time and Gain Peace of Mind. Introducing the new, award-winning, free online tool that enables easier management of your instrument use and care, faster response to requests for service or service quotes, and instant sharing of key instrument and service information with your colleagues. Now – this unto itself is obviously beneficial for their customers, who were previously burdened with having to do all of these tasks separately, manually, and inconsistently. Now it’s all in one place and free to their customers – a portal that ties it all together.
    They have now built the platform to give their customers yet another reason to do business with them. Their headline on their product page says it all: “Life is now easier to manage - All your instrument use and care in one place – the no-cost, no-hassle Instruments and Services Portal.” Of course, it’s very convenient that the company name includes “Life,” so they can also promote to their clients and prospects that doing business with them is easy and their sophisticated lab equipment is easy to manage. In an industry full of PhDs, “easy” isn’t usually the first word that comes to mind, but Life Technologies has now tied the word to their brand in a very eloquent way. Between our work lives and family or personal lives, getting even a few minutes of dedicated, single-focus attention has become such a rare occurrence in our current era of multi-tasking that those moments of focus are highly prized. So when something is done really well – so well that it becomes captivating and stirs the impulse to share – I take notice, dig deeper, and most of the time discover other gems not so hidden below the surface. And then I share with those I know would enjoy and understand. In the spirit of full disclosure, I must admit here that the first person I shared the videos below with was my daughter. She’s in her senior year of high school in the midst of her college search. She’s passionate about her academics and has already decided that she wants to study Neuroscience in college and, like her mother, will be in for the long haul to a PhD eventually. In a summer science program at Smith College two summers ago, she sent the now-famous family text to me – “I just dissected a sheep’s brain – wicked cool!” – which was followed by an equally memorable text this past summer during a research mentorship in Neuroscience at UConn – “Just sliced up some rat brain. Reminded me of a deli slicer at the supermarket… sorry I forgot to call last night…” So, needless to say, I knew I had an audience that would enjoy and understand the videos below, which are now being shared among her science classmates and faculty. And evidently Life Technologies knows their audience too! They’ve done a great job making these videos fun and easy to share across their customers’ social networks. They’ve created a neuro-archetypal character, “Ph.Diddy,” and know that their world of clients in academics, research, and other institutions will understand and enjoy the “edutainment” value in this series of videos on their YouTube channel, which pokes fun at the stereotypes while promoting their products at the same time. They use their Facebook page for additional engagement with their clients and as another venue to promote these videos. Enjoy this one as well! More can be found here: http://www.youtube.com/lifetechnologies Stay tuned to this Oracle WebCenter blog channel. Tomorrow we'll be taking a look at another winner of the Innovation Awards, LADWP - helping to keep the citizens of Los Angeles engaged with their Water and Power provider.

    Read the article

  • C#/.NET Little Wonders: The EventHandler and EventHandler<TEventArgs> delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past Little Wonders posts can be found here. In the last two weeks, we examined the Action family of delegates (and delegates in general), and the Func family of delegates and how they can be used to support generic, reusable algorithms and classes. So this week, we are going to look at a handy pair of delegates that can be used to eliminate the need for defining custom delegates when creating events: the EventHandler and EventHandler<TEventArgs> delegates.

    Events and delegates

    Before we begin, let's quickly consider events in .NET. According to the MSDN: "An event in C# is a way for a class to provide notifications to clients of that class when some interesting thing happens to an object." So, basically, you can create an event in a type so that users of that type can subscribe to notifications of things of interest. How is this different from some of the delegate programming that we talked about in the last two weeks? Well, you can think of an event as a special access modifier on a delegate. Some differences between the two are:

    - Events are a special access case of delegates: they behave much like delegate instances inside the type they are declared in, but outside of that type they can only be (un)subscribed to.
    - Events can specify add/remove behavior explicitly: if you want to do additional work when someone subscribes or unsubscribes to an event, you can specify the add and remove actions explicitly.
    - Events have access modifiers, but these only specify the access level of those who can (un)subscribe: a public event, for example, means anyone can (un)subscribe, but it does not mean that anyone can raise (invoke) the event directly.
    - Events can only be raised by the type that contains them (not even a sub-class can raise them!): in contrast, if a delegate is visible, it can be invoked from outside the object.
    - Events tend to be for notifications only, and should be treated as optional: semantically speaking, events typically don't perform work on the class directly, but tend to just notify subscribers when something of note occurs.

    My basic rule of thumb is that if you just want to notify any listeners (who may or may not care) that something has happened, use an event. However, if you want the caller to provide some function that directs how the class should perform its work, make it a delegate.

    Declaring events using custom delegates

    To declare an event in a type, we simply use the event keyword and specify its delegate type. For example, let's say you wanted to create a new TimeOfDayTimer that triggers at a given time of the day (as opposed to on an interval). We could write something like this:

        public delegate void TimeOfDayHandler(object source, ElapsedEventArgs e);

        // A timer that will fire at time of day each day.
        public class TimeOfDayTimer : IDisposable
        {
            // Event that is triggered at time of day.
            public event TimeOfDayHandler Elapsed;

            // ...
        }

    The first thing to note is that the event is a delegate type, which tells us what types of methods may subscribe to it. The second thing to note is the signature of the event handler delegate. According to the MSDN: "The standard signature of an event handler delegate defines a method that does not return a value, whose first parameter is of type Object and refers to the instance that raises the event, and whose second parameter is derived from type EventArgs and holds the event data. If the event does not generate event data, the second parameter is simply an instance of EventArgs. Otherwise, the second parameter is a custom type derived from EventArgs and supplies any fields or properties needed to hold the event data."

    So, in a nutshell, event handler delegates should return void and take two parameters:

    - An object reference to the object that raised the event.
    - An EventArgs (or a subclass of EventArgs) reference to event-specific information.

    Even if your event has no additional information to provide, you are still expected to provide an EventArgs instance. In this case, feel free to pass the EventArgs.Empty singleton instead of creating new instances of EventArgs (to avoid generating unneeded memory garbage).

    The EventHandler delegate

    Because many events have no additional information to pass, and thus do not require custom EventArgs, the signature of the delegates for subscribing to these events is typically:

        // always takes an object and an EventArgs reference
        public delegate void EventHandler(object sender, EventArgs e);

    It would be insane to recreate this delegate for every class that had a basic event with no additional event data, so there already exists a delegate for you called EventHandler that has this very definition! Feel free to use it to define any events which supply no additional event information:

        public class Cache
        {
            // event that is raised whenever the cache performs a cleanup
            public event EventHandler OnCleanup;

            // ...
        }

    This will handle any event with the standard EventArgs (no additional information). But what of events that do need to supply additional information? Does that mean we're out of luck for subclasses of EventArgs? That's where the generic form of EventHandler comes into play…

    The generic EventHandler<TEventArgs> delegate

    Starting with the introduction of generics in .NET 2.0, we have a generic delegate called EventHandler<TEventArgs>. Its signature is as follows:

        public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e)
            where TEventArgs : EventArgs;

    This is similar to EventHandler except it has been made generic to support the more general case. Thus, it will work for any delegate where the first argument is an object (the sender) and the second argument is a class derived from EventArgs (the event data). For example, let's say we wanted to create a message receiver, and we wanted it to have a few events such as OnConnected, which will tell us when a connection is established (probably with no additional information), and OnMessageReceived, which will tell us when a new message arrives (probably with a string for the new message text). So for OnMessageReceived, our MessageReceivedEventArgs might look like this (a constructor is added here so the raising code below compiles):

        public sealed class MessageReceivedEventArgs : EventArgs
        {
            public MessageReceivedEventArgs(string message)
            {
                Message = message;
            }

            public string Message { get; private set; }
        }

    And since OnConnected needs no event argument type defined, our class might look something like this:

        public class MessageReceiver
        {
            // event that is called when the receiver connects with sender
            public event EventHandler OnConnected;

            // event that is called when a new message is received.
            public event EventHandler<MessageReceivedEventArgs> OnMessageReceived;

            // ...
        }

    Notice, nowhere did we have to define a delegate to fit our event definition; the EventHandler and generic EventHandler<TEventArgs> delegates fit almost anything we'd need to do with events.

    Sidebar: Thread-safety and raising an event

    When the time comes to raise an event, we should always check to make sure there are subscribers, and then only raise the event if anyone is subscribed. This is important because if no one is subscribed to the event, the delegate reference will be null and we will get a NullReferenceException if we attempt to raise the event.

        // This protects against NullReferenceException... or does it?
        if (OnMessageReceived != null)
        {
            OnMessageReceived(this, new MessageReceivedEventArgs(aMessage));
        }

    The above code seems to handle the null reference if no one is subscribed, but there's a problem in multi-threaded environments. For example, assume thread A is about to raise the event: it passes the null check, but before it can raise the event, thread B unsubscribes, which sets the delegate back to null. Now, when thread A attempts to raise the event, we get the very NullReferenceException we were hoping to avoid!

    To counter this, the simplest best-practice method is to copy the event (just a multicast delegate) to a temporary local variable just before we raise it. Since we are inside the class where this event is being raised, we can copy it to a local variable like this, and it will protect us from multi-threading since multicast delegates are immutable and assignments are atomic:

        // always make a copy of the event multicast delegate before checking
        // for null, to avoid a race condition between the null-check and the raise.
        var handler = OnMessageReceived;

        if (handler != null)
        {
            handler(this, new MessageReceivedEventArgs(aMessage));
        }

    The very slight trade-off is that it's possible a subscriber may receive an event after it unsubscribes in a multi-threaded environment, but this is a small risk and classes should be prepared for that possibility anyway. For a more detailed discussion, check out this excellent Eric Lippert blog post on Events and Races.

    Summary

    Generic delegates give us a lot of power to make generic algorithms and classes, and the EventHandler delegate family gives us the flexibility to create events easily, without needing to redefine delegates over and over. Use them whenever you need to define events with or without specialized EventArgs.

    Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Delegates, EventHandler
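
    As a quick usage sketch (the wiring below is my own, not from the original post, and assumes the MessageReceiver class above), subscribing to both events is just:

        var receiver = new MessageReceiver();

        // Lambdas match the EventHandler and EventHandler<TEventArgs>
        // signatures, so no explicit delegate types are needed.
        receiver.OnConnected += (sender, e) => Console.WriteLine("Connected!");
        receiver.OnMessageReceived += (sender, e) => Console.WriteLine("Received: " + e.Message);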

    Read the article

  • Three Principles to Fix Your Broken Organization

    - by Michael Snow
    Everyone's organization is broken in some capacity. For some, this is painfully visible both inside and outside their organization. For others, there are cracks noticed only by the keenest trained eyes, used to looking for problems in the midst of perfection. We all know that there is often incredible hope in the despair of chaos, and recognition of your problems is the first step on the road to recovery. Let us help you on your path to recovery. Join our very own Christian Finn this Thursday (11/15) as he guides you through three important principles you can take back to the office to start the mending process. (Above image credits: the BEST site on the web for making fun of our organizations and ourselves: http://www.despair.com/) His three principles are NOT "TeamWork", "Ignorance", and "Tradition" - but before jumping lower on this blog post to click and register for the upcoming webcast, I thought it would be a good opportunity to give you a little taste of what we have to offer beyond the array of our fabulous on-demand webcasts from our Social Business Thought Leaders Webcast Series, featuring Christian as the host. Instead, here's a snippet from our marketing team friends across the pond in Europe, where they recently hosted a Social Business Forum and featured Christian in a segment.

    Simple. Powerful. Proven. Face it, your organization is broken. Customers are not the focus they should be. Processes are running amok. Your intranet is a ghost town. And colleagues wonder why it's easier to get things done on the Web than at work. What's the solution? Join us for this webcast. Christian Finn will talk about three simple, powerful, and proven principles for improving your organization through collaboration. Each principle will be illustrated by real-world examples. Discover:

    - How to dramatically improve workplace collaboration
    - Why improved employee engagement creates better business results
    - What's the value of a fully engaged customer

    Time to Fix What's Broken. Register now for this webcast - the tenth in the Oracle Social Business Thought Leaders Series.

    Thurs., Nov. 15, 2012, 10 a.m. PT / 1 p.m. ET
    Presented by: Christian Finn, Senior Director, Product Management, Oracle

    Read the article

  • hadoop install cluster

    - by chnet
    Is it possible to run Hadoop on multiple nodes when the nodes have Hadoop installed under different paths? Tutorials such as Michael Noll's (http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/) assume Hadoop is installed at the same path on every node. I tried to run Hadoop on a multi-node cluster where each node had Hadoop installed under a different path, but without success. It appears that Hadoop assumes identical installation paths by default.

    Read the article

  • Who should ‘own’ the Enterprise Architecture?

    - by Michael Glas
    I recently had a discussion around who should own an organization's Enterprise Architecture. It was spawned by an article titled "Busting CIO Myths" in CIO magazine [1], in which the author interviewed Jeanne Ross, director of MIT's Center for Information Systems Research and co-author of books on enterprise architecture, governance, and IT value. In the article, Jeanne states that companies need to acknowledge that "architecture says everything about how the company is going to function, operate, and grow; the only person who can own that is the CEO". "If the CEO doesn't accept that role, there really can be no architecture." The first question that came up when talking about ownership was whether we meant a person, a role, or an organization (there are pros and cons to each, but in general, I like to assign accountability to as few people as possible). After much thought and discussion, I came to the conclusion that we were answering the wrong question. Instead of talking about ownership, we were talking about responsibility and accountability, and the answer varies depending on the particular role of the organization's Enterprise Architecture and the activities of the enterprise architect(s). Instead of looking at just who owns the architecture, think about what the person/role/organization should do. This is one possible scenario (thanks to Bob Covington):

    - The CEO should own the Enterprise Strategy, which guides the business architecture.
    - The business units should own the business processes and information, which guide the business, application, and information architectures.
    - The CIO should own the technology, IT governance, and the management of the application and information architectures/implementations.
    - The EA Governance Team owns the EA process. If EA is done well, the governance team consists of both IT and the business.

    While there are many more roles and responsibilities than listed here, this starts to provide a clearer understanding of 'ownership'. Now back to Jeanne's statement that the CEO should own the architecture. If you agree with the statement about what the architecture is (and I do agree), then ultimately the CEO does need to own it. However, what we ended up with was not really ownership, but statements about roles and responsibilities tied to aspects of the enterprise architecture. You can debate the semantics of ownership vs. responsibility and accountability, but in the end the important thing is to come to a clearer understanding, easily communicated (and hopefully measured), around the question "Who owns the Enterprise Architecture?" The next logical step . . . create a RACI matrix that details the findings . . . but that is a step each organization needs to take on its own, as it will vary based on current EA maturity, company culture, and a variety of other factors. Who 'owns' the Enterprise Architecture in your organization?
    [1] CIO Magazine article, "Busting CIO Myths": http://www.cio.com/article/704943/Busting_CIO_Myths

    Read the article

  • Ray Wang: Why engagement matters in an era of customer experience

    - by Michael Snow
    Why engagement matters in an era of customer experience
    R "Ray" Wang, Principal Analyst & CEO, Constellation Research

    Mobile enterprise, social business, cloud computing, advanced analytics, and unified communications are converging. Armed with the art of the possible, innovators are seeking to apply disruptive consumer technologies to enterprise-class uses - call it the consumerization of IT in the enterprise. The likely results include new methods of furthering relationships, crafting longer-term engagement, and creating transformational business models. It's part of a shift from transactional systems to engagement systems. Transactional systems have been around since the 1950s; you know them as ERP, finance and accounting systems, or even payroll. These systems are designed for massive computational scale, and users find them rigid and techie. Meanwhile, we've moved to new engagement systems such as Facebook and Twitter in the consumer world. Their rich usability and intuitive design reflect how users want to work - and now users are coming to expect the same paradigms and designs in their enterprise world.

    ~~~

    Ray is a prolific contributor to his own blog as well as others. For a sneak peek at Ray's thoughts on engagement, take a look at this quick teaser on Avoiding Social Media Fatigue Through Engagement. Or perhaps you might agree with Ray on Dealing With The Real Problem In Social Business Adoption - The People! Check out Ray's post on the Harvard Business Review blog to get his perspective on "How to Engage Your Customers and Employees." For a daily dose of Ray, follow him on Twitter: @rwang0. But MOST IMPORTANTLY... don't miss the opportunity to join leading industry analyst R "Ray" Wang of Constellation Research in the latest webcast of the Oracle Social Business Thought Leaders Series as he explains how to apply the 9 C's of Engagement to both your customers and employees.

    Read the article

  • Hello World - My Name is Christian Finn and I'm a WebCenter Evangelist

    - by Michael Snow
    Good Morning World! I'd like to introduce a new member of the Oracle WebCenter team, Christian Finn. We decided to let him do his own intro today. Look for his guest posts next week; he'll be a frequent contributor to the WebCenter blog and a voice of the community.

    Hello (Oracle) World! Hi everyone, my name is Christian Finn. It's a coder's tradition to have "hello world" be the first output from a new program or in a new language. While I have left my coding days far behind, it still seems fitting to start my new role here at Oracle by saying hello to all of you - our customers, partners, and my colleagues. So by way of introduction, a little background about me. I am the new senior director for evangelism on the WebCenter product management team. Not only am I new to Oracle, but the evangelism team is also brand new. Our mission is to raise the profile of Oracle in all of the markets/conversations in which WebCenter competes - social business, collaboration, portals, Internet sites, and customer/audience engagement. This is all pretty familiar turf for me because, as some of you may know, until recently I was the director of product management at Microsoft for Microsoft SharePoint Server and several other SharePoint products. And prior to that, I held management roles at Microsoft in marketing, channels, learning, and enterprise sales. Before Microsoft, I got my start in the industry as a software trainer and Lotus Notes consultant. I am incredibly excited to be joining Oracle at this time because of the tremendous opportunity that lies ahead to improve how people and businesses work. Of all the vendors offering a vision for social business, Oracle is unique in having best-of-breed strength in market (or coming soon) in all three critical areas: customer experience management; the middleware and back-end applications that run your business; and the social, collaboration, and content technologies that are the connective tissue between them. Everyone else can offer one or two of the above, but not all three unified together. So it is a great time to come aboard, and there's a fantastic team of people hard at work building great products for you. In the coming weeks and months you'll be hearing much more from us. For now, we'll kick things off with some blog posts here on the WebCenter blog. Enjoy the reads and please share your thoughts with me over Twitter on @cfinn.

    Read the article

  • C#/.NET Little Wonders: Of LINQ and Lambdas - A Presentation

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past Little Wonders posts can be found here. Today I'm giving a brief beginner's guide to LINQ and lambdas at the St. Louis .NET User's Group, so I thought I'd post the presentation here as well. I updated the presentation a bit and added some notes on the query syntax. Enjoy! The presentation, "The C#/.NET Fundamentals: Of Lambdas and LINQ," is available here: Of Lambdas and LINQ. Technorati Tags: C#, CSharp, .NET, Little Wonders, LINQ, Lambdas
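
    As a taste of those query-syntax notes, here's a minimal sketch of the two equivalent LINQ styles (my own illustration, not a slide from the deck; it assumes a using System.Linq directive):

        var numbers = new[] { 1, 2, 3, 4, 5, 6 };

        // Query syntax: compiles down to the same method calls as below.
        var evensQuery = from n in numbers where n % 2 == 0 select n;

        // Method syntax: an extension method taking a lambda predicate.
        var evensMethod = numbers.Where(n => n % 2 == 0);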

    Read the article

  • C#/.NET Little Wonders: A Redux

    - by James Michael Hare
    I gave my Little Wonders presentation to the Topeka Dot Net Users' Group today, so I'm re-posting the links to all the previous posts for them.

    The Presentation:
    - C#/.NET Little Wonders: A Presentation

    The Original Trilogy:
    - C#/.NET Five Little Wonders (part 1)
    - C#/.NET Five More Little Wonders (part 2)
    - C#/.NET Five Final Little Wonders (part 3)

    The Subsequent Sequels:
    - C#/.NET Little Wonders: ToDictionary() and ToList()
    - C#/.NET Little Wonders: DateTime is Packed With Goodies
    - C#/.NET Little Wonders: Fun With Enum Methods
    - C#/.NET Little Wonders: Cross-Calling Constructors
    - C#/.NET Little Wonders: Constraining Generics With Where Clause
    - C#/.NET Little Wonders: Comparer<T>.Default
    - C#/.NET Little Wonders: The Useful (But Overlooked) Sets

    The Concurrent Wonders:
    - C#/.NET Little Wonders: The Concurrent Collections (1 of 3) - ConcurrentQueue and ConcurrentStack
    - C#/.NET Little Wonders: The Concurrent Collections (2 of 3) - ConcurrentDictionary

    Technorati Tags: .NET, C#, Little Wonders

    Read the article

  • Scale plugin keeps forgetting hot corner settings on restart

    - by Michael Butler
    I'm using Ubuntu 12.04 with Unity, which I suppose uses Compiz as well. I have Compiz Settings Manager, and make the top left and bottom left corners of my screen activate the "Scale" (like Exposé) function to scale and show all windows. The problem is that when I restart the computer, the hot corners no longer do anything. I have to go back into compiz settings manager, delete the hot corner option, and then set it again. Something seems to be overriding or deleting the compiz hot corner setting on restart. Update: Sometimes, the setting loses its footing even while the computer is running. I haven't figured out yet what triggers it.

    Read the article

  • How to implement a 2d collision detection for Android

    - by Michael Seun Araromi
    I am making a 2D space shooter using OpenGL ES. Can someone please show me how to implement collision detection between the enemy ship and the player ship? The code for the two classes is below.

    Player ship class:

        package com.proandroidgames;

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;
        import javax.microedition.khronos.opengles.GL10;

        public class SSGoodGuy {
            public boolean isDestroyed = false;
            private int damage = 0;
            private FloatBuffer vertexBuffer;
            private FloatBuffer textureBuffer;
            private ByteBuffer indexBuffer;

            private float vertices[] = {
                0.0f, 0.0f, 0.0f,
                1.0f, 0.0f, 0.0f,
                1.0f, 1.0f, 0.0f,
                0.0f, 1.0f, 0.0f,
            };

            private float texture[] = {
                0.0f, 0.0f,
                0.25f, 0.0f,
                0.25f, 0.25f,
                0.0f, 0.25f,
            };

            private byte indices[] = {
                0, 1, 2,
                0, 2, 3,
            };

            public void applyDamage() {
                damage++;
                if (damage == SSEngine.PLAYER_SHIELDS) {
                    isDestroyed = true;
                }
            }

            public SSGoodGuy() {
                ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                vertexBuffer = byteBuf.asFloatBuffer();
                vertexBuffer.put(vertices);
                vertexBuffer.position(0);

                byteBuf = ByteBuffer.allocateDirect(texture.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                textureBuffer = byteBuf.asFloatBuffer();
                textureBuffer.put(texture);
                textureBuffer.position(0);

                indexBuffer = ByteBuffer.allocateDirect(indices.length);
                indexBuffer.put(indices);
                indexBuffer.position(0);
            }

            public void draw(GL10 gl, int[] spriteSheet) {
                gl.glBindTexture(GL10.GL_TEXTURE_2D, spriteSheet[0]);
                gl.glFrontFace(GL10.GL_CCW);
                gl.glEnable(GL10.GL_CULL_FACE);
                gl.glCullFace(GL10.GL_BACK);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
                gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
                gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
                gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
                gl.glDisable(GL10.GL_CULL_FACE);
            }
        }

    Enemy ship class:

        package com.proandroidgames;

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;
        import java.util.Random;
        import javax.microedition.khronos.opengles.GL10;

        public class SSEnemy {
            public float posY = 0f;
            public float posX = 0f;
            public float posT = 0f;
            public float incrementXToTarget = 0f;
            public float incrementYToTarget = 0f;
            public int attackDirection = 0;
            public boolean isDestroyed = false;
            private int damage = 0;
            public int enemyType = 0;
            public boolean isLockedOn = false;
            public float lockOnPosX = 0f;
            public float lockOnPosY = 0f;
            private Random randomPos = new Random();
            private FloatBuffer vertexBuffer;
            private FloatBuffer textureBuffer;
            private ByteBuffer indexBuffer;

            private float vertices[] = {
                0.0f, 0.0f, 0.0f,
                1.0f, 0.0f, 0.0f,
                1.0f, 1.0f, 0.0f,
                0.0f, 1.0f, 0.0f,
            };

            private float texture[] = {
                0.0f, 0.0f,
                0.25f, 0.0f,
                0.25f, 0.25f,
                0.0f, 0.25f,
            };

            private byte indices[] = {
                0, 1, 2,
                0, 2, 3,
            };

            public void applyDamage() {
                damage++;
                switch (enemyType) {
                    case SSEngine.TYPE_INTERCEPTOR:
                        if (damage == SSEngine.INTERCEPTOR_SHIELDS) {
                            isDestroyed = true;
                        }
                        break;
                    case SSEngine.TYPE_SCOUT:
                        if (damage == SSEngine.SCOUT_SHIELDS) {
                            isDestroyed = true;
                        }
                        break;
                    case SSEngine.TYPE_WARSHIP:
                        if (damage == SSEngine.WARSHIP_SHIELDS) {
                            isDestroyed = true;
                        }
                        break;
                }
            }

            public SSEnemy(int type, int direction) {
                enemyType = type;
                attackDirection = direction;
                posY = (randomPos.nextFloat() * 4) + 4;
                switch (attackDirection) {
                    case SSEngine.ATTACK_LEFT:
                        posX = 0;
                        break;
                    case SSEngine.ATTACK_RANDOM:
                        posX = randomPos.nextFloat() * 3;
                        break;
                    case SSEngine.ATTACK_RIGHT:
                        posX = 3;
                        break;
                }
                posT = SSEngine.SCOUT_SPEED;

                ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                vertexBuffer = byteBuf.asFloatBuffer();
                vertexBuffer.put(vertices);
                vertexBuffer.position(0);

                byteBuf = ByteBuffer.allocateDirect(texture.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                textureBuffer = byteBuf.asFloatBuffer();
                textureBuffer.put(texture);
                textureBuffer.position(0);

                indexBuffer = ByteBuffer.allocateDirect(indices.length);
                indexBuffer.put(indices);
                indexBuffer.position(0);
            }

            public float getNextScoutX() {
                if (attackDirection == SSEngine.ATTACK_LEFT) {
                    return (float) ((SSEngine.BEZIER_X_4 * (posT * posT * posT))
                        + (SSEngine.BEZIER_X_3 * 3 * (posT * posT) * (1 - posT))
                        + (SSEngine.BEZIER_X_2 * 3 * posT * ((1 - posT) * (1 - posT)))
                        + (SSEngine.BEZIER_X_1 * ((1 - posT) * (1 - posT) * (1 - posT))));
                } else {
                    return (float) ((SSEngine.BEZIER_X_1 * (posT * posT * posT))
                        + (SSEngine.BEZIER_X_2 * 3 * (posT * posT) * (1 - posT))
                        + (SSEngine.BEZIER_X_3 * 3 * posT * ((1 - posT) * (1 - posT)))
                        + (SSEngine.BEZIER_X_4 * ((1 - posT) * (1 - posT) * (1 - posT))));
                }
            }

            public float getNextScoutY() {
                return (float) ((SSEngine.BEZIER_Y_1 * (posT * posT * posT))
                    + (SSEngine.BEZIER_Y_2 * 3 * (posT * posT) * (1 - posT))
                    + (SSEngine.BEZIER_Y_3 * 3 * posT * ((1 - posT) * (1 - posT)))
                    + (SSEngine.BEZIER_Y_4 * ((1 - posT) * (1 - posT) * (1 - posT))));
            }

            public void draw(GL10 gl, int[] spriteSheet) {
                gl.glBindTexture(GL10.GL_TEXTURE_2D, spriteSheet[0]);
                gl.glFrontFace(GL10.GL_CCW);
                gl.glEnable(GL10.GL_CULL_FACE);
                gl.glCullFace(GL10.GL_BACK);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
                gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
                gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
                gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
                gl.glDisable(GL10.GL_CULL_FACE);
            }
        }

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >