Search Results

Search found 4879 results on 196 pages for 'geeks'.


  • Silverlight Cream for November 27, 2011 -- #1176

    - by Dave Campbell
    In this Issue: Matt Eland, Parag Joshi, Jerrel Blankenship, and Joost van Schaik.

    Above the Fold: WP7: "Safe event detachment base class for Windows Phone 7 behaviors" by Joost van Schaik

    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.

    From SilverlightCream.com:

    31 Days of Mango | Day #22: App Connect. Matt Eland takes the reins of Jeff's blog for Day 22 and is talking about App Connect... App Connect allows apps to be listed on Quick Cards relative to an app's subject matter, and Quick Cards are items that appear in searches to let users find out more info... check out the blog post if you're not familiar with this.

    31 Days of Mango | Day #21: Sockets. Jeff's Day 21 is written by Parag Joshi and is on sockets... building a WP7 app for posting restaurant orders to a Silverlight OOB app running on a host machine... good-sized tutorial and discussion, plus a project to download and play with.

    31 Days of Mango | Day #20: Creating Ringtones. Jerrel Blankenship has Day 20 for Jeff Blankenburg's 31 Days of Mango and is discussing ringtones... how to create and save a custom ringtone for your user.

    Safe event detachment base class for Windows Phone 7 behaviors. Joost van Schaik revisits his Safe Event Detachment pattern for WP7 and built a base class to take care of the initialization involved, to be kind to us, the developers... code included.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10


  • Issues with ILMerge, Lambda Expressions and VS2010 merging?

    - by John Blumenauer
    A little background: For quite some time now, it's been possible to merge multiple .NET assemblies into a single assembly using ILMerge in Visual Studio 2008. This is especially helpful when writing wrapper assemblies for 3rd-party libraries where it's desirable to minimize the number of assemblies for distribution. During the merge process, ILMerge takes a set of assemblies and merges them into a single assembly. The resulting assembly can be either an executable or a DLL and is identified as the primary assembly.

    The issue: During a recent project, I discovered that using ILMerge to merge assemblies containing lambda expressions in Visual Studio 2010 results in invalid primary assemblies. The code below is not where the initial issue was identified; I will merely use it to illustrate the problem at hand. In order to describe the issue, I created a console application and a class library for calculating a few math functions utilizing lambda expressions. The code is available for download at the bottom of this blog entry.

    MathLib.cs

        using System;

        namespace MathLib
        {
            public static class MathHelpers
            {
                public static Func<double, double, double> Hypotenuse =
                    (x, y) => Math.Sqrt(x * x + y * y);

                static readonly Func<int, int, bool> divisibleBy =
                    (int a, int b) => a % b == 0;

                public static bool IsPrimeNumber(int x)
                {
                    for (int i = 2; i <= x / 2; i++)
                        if (divisibleBy(x, i))
                            return false;
                    return true;
                }
            }
        }

    Program.cs

        using System;
        using MathLib;

        namespace ILMergeLambdasConsole
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int n = 19;
                    if (MathHelpers.IsPrimeNumber(n))
                    {
                        Console.WriteLine(n + " is prime");
                    }
                    else
                    {
                        Console.WriteLine(n + " is not prime");
                    }
                    Console.ReadLine();
                }
            }
        }

    Not surprisingly, the preceding code compiles, builds and executes without error prior to running the ILMerge tool.

    ILMerge setup: In order to utilize ILMerge, the following changes were made to the project:

    1. The MathLib.dll assembly was built in release configuration and copied to the MathLib folder. (The original post includes a screenshot of the folder hierarchy used for this example.)

    2. The ILMergeLambdasConsole project file was edited to add the ILMerge post-build step. The following lines were added near the bottom of the project file:

        <Target Name="AfterBuild" Condition="'$(Configuration)' == 'Release'">
          <Exec Command="&quot;..\..\lib\ILMerge\Ilmerge.exe&quot; /ndebug /out:@(MainAssembly) &quot;@(IntermediateAssembly)&quot; @(ReferenceCopyLocalPaths->'&quot;%(FullPath)&quot;', ' ')" />
          <Delete Files="@(ReferenceCopyLocalPaths->'$(OutDir)%(DestinationSubDirectory)%(Filename)%(Extension)')" />
        </Target>

    3. The ILMergeLambdasConsole project was modified to reference the MathLib.dll located in the MathLib folder above.

    4. ILMerge.exe and ILMerge.exe.config were copied into the ILMerge folder shown above. The contents of ILMerge.exe.config are:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <requiredRuntime safemode="true" imageVersion="v4.0.30319" version="v4.0.30319"/>
          </startup>
        </configuration>

    Post-ILMerge: After compiling and building, the MathLib.dll assembly is merged into the ILMergeLambdasConsole executable. Unfortunately, executing ILMergeLambdasConsole.exe now results in a crash. The ILMerge documentation recommends using PEVerify.exe to validate assemblies after merging. Executing PEVerify.exe against the ILMergeLambdasConsole.exe assembly reports a verification error (shown as a screenshot in the original post). Further investigation with Reflector reveals that the divisibleBy member in the MathHelpers class looks a bit questionable after the merge; prior to using ILMerge, the same divisibleBy member disassembled cleanly.

    It's pretty obvious something has gone awry during the merge process. However, this only occurs when building within the Visual Studio 2010 environment; the same code and configuration built within Visual Studio 2008 executes fine. I'm still investigating the issue. If anyone has already experienced this situation and solved it, I would love to hear from you. As of right now, it looks like something has gone terribly wrong when executing ILMerge against assemblies containing lambdas in Visual Studio 2010.

    Solution Files: ILMergeLambdaExpression
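    As an aside (a suggestion on my part, not something covered in the original post): when the merged assemblies target .NET 4, ILMerge can be told explicitly which runtime and framework directory to merge against via its /targetplatform switch. A sketch of such an invocation, with an illustrative path:

        Ilmerge.exe /ndebug /targetplatform:v4,"C:\Windows\Microsoft.NET\Framework\v4.0.30319" /out:Merged.exe ILMergeLambdasConsole.exe MathLib.dll

    Whether this resolves the lambda issue described above I can't say for certain, but it is worth trying before concluding the merge itself is broken.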


  • Kansas City Developer's Conference

    - by Brian Schroer
    I just found out about / registered for Saturday’s Kansas City Developer’s Conference, and am going to make the drive over from the right side of the state (Hey, no offense, KC – I’m just looking at a map, and St. Louis is on the right side, Kansas City’s on the left). (I’m sure the event’s been mentioned on geekswithblogs several times, but I’m on a “staycation” this week, getting cabin fever, and noticed @leebrandt’s tweet today.) I’m looking forward to some of the presentations in the Agile and Patterns tracks. I’m going to have to get up pretty early Saturday morning to descend from St. Louis to Kansas City (Again, no offense – St. Louis is just at a higher elevation*, that’s all), so if you see a tired-looking guy wandering around wearing a St. Louis Day of .NET shirt, please be nice. I’m not sure how much longer registration will be open, but here’s the link: http://kcdc.eventbrite.com/ *Not true – St. Louis is closer to sea level than Kansas City, but I’ll start my drive from the top of the Arch, OK?


  • I’m back, now with Windows Live Writer goodness

    - by Dave Yasko
    I’ve reimaged my home laptop.  I’m trying to populate it with as much free goodness as possible to see if the free way is as good as the old pay way.  Turns out, I’ve got access to Windows Live Writer.  I’m not sure where that came from, maybe with Vista Ultimate.  I don’t know.  Either way, it makes my blog posting a whole lot easier.  So, maybe, just maybe, it will make me more likely to post.  We’ll see. Later.


  • A Generic Boolean Value Converter

    - by codingbloke
    On fairly regular intervals a question like this one appears on Stack Overflow: "Silverlight Bind to inverse of boolean property value". The same answers also regularly appear. They all involve an implementation of IValueConverter and basically include the same boilerplate code. The required output type sometimes varies; other examples that have passed by are Boolean to Brush and Boolean to String conversions. Yet the code remains pretty much the same. There is therefore a good case for creating a generic Boolean to value converter to contain this common code and then just specialise it for use in Xaml.

    Here is the basic converter:

    BoolToValueConverter

        using System;
        using System.Windows.Data;

        namespace SilverlightApplication1
        {
            public class BoolToValueConverter<T> : IValueConverter
            {
                public T FalseValue { get; set; }
                public T TrueValue { get; set; }

                public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
                {
                    if (value == null)
                        return FalseValue;
                    else
                        return (bool)value ? TrueValue : FalseValue;
                }

                public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
                {
                    return value.Equals(TrueValue);
                }
            }
        }

    With this generic converter in place it is easy to create a set of converters for various types. For example, here are all the converters mentioned so far:

    Value Converters

        using System;
        using System.Windows;
        using System.Windows.Media;

        namespace SilverlightApplication1
        {
            public class BoolToStringConverter : BoolToValueConverter<String> { }
            public class BoolToBrushConverter : BoolToValueConverter<Brush> { }
            public class BoolToVisibilityConverter : BoolToValueConverter<Visibility> { }
            public class BoolToObjectConverter : BoolToValueConverter<Object> { }
        }

    With the specialised converters created, they can be specified in a Resources property on a user control like this:

        <local:BoolToBrushConverter x:Key="Highlighter" FalseValue="Transparent" TrueValue="Yellow" />
        <local:BoolToStringConverter x:Key="CYesNo" FalseValue="No" TrueValue="Yes" />
        <local:BoolToVisibilityConverter x:Key="InverseVisibility" TrueValue="Collapsed" FalseValue="Visible" />
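    To round this off, here is a minimal usage sketch of my own (assuming a boolean IsBusy property on the DataContext; not part of the original post):

        <TextBlock Text="{Binding IsBusy, Converter={StaticResource CYesNo}}"
                   Background="{Binding IsBusy, Converter={StaticResource Highlighter}}" />
        <TextBlock Text="Idle"
                   Visibility="{Binding IsBusy, Converter={StaticResource InverseVisibility}}" />

    The first TextBlock shows "Yes"/"No" and is highlighted in yellow while IsBusy is true; the second is visible only while IsBusy is false.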


  • C#/.NET Little Wonders: Comparer<T>.Default

    - by James Michael Hare
    I’ve been working with a wonderful team on a major release where I work, which has had the side-effect of occupying most of my spare time preparing, testing, and monitoring. However, I do have this Little Wonder tidbit to offer today.

    Introduction

    The IComparable<T> interface is great for implementing a natural order for a data type. It’s a very simple interface with a single method:

        public interface IComparable<in T>
        {
            // Compare this instance with another instance of the same type.
            int CompareTo(T other);
        }

    So what do we expect for the integer return value? It’s a pseudo-relative measure of the ordering of the two values (call them x and y, as in x.CompareTo(y)), which returns an integer value in much the same way C++ returns an integer result from the strcmp() c-style string comparison function:

    - If x == y, returns 0.
    - If x > y, returns > 0 (often +1, but not guaranteed).
    - If x < y, returns < 0 (often -1, but not guaranteed).

    Notice that the comparison operator used to evaluate against zero should be the same comparison operator you’d use as the comparison operator between x and y. That is, if you want to see if x > y you’d see if the result > 0.

    The Problem: Comparing With null Can Be Messy

    This gets tricky though when you have null arguments. According to MSDN, a null value should be considered equal to a null value, and a null value should be less than a non-null value. So taking this into account we’d expect this instead:

    - If x == y (or both null), return 0.
    - If x > y (or y only is null), return > 0.
    - If x < y (or x only is null), return < 0.

    But here’s the problem: if x is null, what happens when we attempt to call CompareTo() off of x?

        // what happens if x is null?
        x.CompareTo(y);

    It’s pretty obvious we’ll get a NullReferenceException here. Now, we could guard against this before calling CompareTo():

        int result;

        // first check to see if lhs is null
        if (x == null)
        {
            // if lhs is null, check rhs to decide on the return value
            if (y == null)
            {
                result = 0;
            }
            else
            {
                result = -1;
            }
        }
        else
        {
            // CompareTo() should handle a null y correctly and return > 0 if so
            result = x.CompareTo(y);
        }

    Of course, we could shorten this with the ternary operator (?:), but even then it’s ugly repetitive code:

        int result = (x == null)
            ? ((y == null) ? 0 : -1)
            : x.CompareTo(y);

    Fortunately, the null issues can be cleaned up by drafting in an external comparer.

    The Solution: Comparer<T>.Default

    You can always develop your own instance of IComparer<T> for the job of comparing two items of the same type. The nice thing about an IComparer is that it is independent of the things you are comparing, so this makes it great for comparing in an alternative order to the natural order of items, or when one or both of the items may be null.

        public class NullableIntComparer : IComparer<int?>
        {
            public int Compare(int? x, int? y)
            {
                return (x == null)
                    ? ((y == null) ? 0 : -1)
                    : x.Value.CompareTo(y);
            }
        }

    Now, if you want a custom sort -- especially on large-grained objects with different possible sort fields -- this is the best option you have. But if you just want to take advantage of the natural ordering of the type, there is an easier way. If the type you want to compare already implements IComparable<T>, or if the type is System.Nullable<T> where T implements IComparable, there is a class in the System.Collections.Generic namespace called Comparer<T> which exposes a property called Default that will create a singleton that represents the default comparer for items of that type. For example:

        // compares integers
        var intComparer = Comparer<int>.Default;

        // compares DateTime values
        var dateTimeComparer = Comparer<DateTime>.Default;

        // compares nullable doubles using the null rules!
        var nullableDoubleComparer = Comparer<double?>.Default;

    This helps you avoid having to remember the messy null logic and makes it simple to compare objects where you don’t know if one or more of the values is null. This works especially well when creating, say, an IComparer<T> implementation for a large-grained class that may or may not contain a field. For example, let’s say you want to create a sorting comparer for a stock open price, but if the market the stock is trading in hasn’t opened yet, the open price will be null. We could handle this (assuming a reasonable Quote definition) like:

        public class Quote
        {
            // the opening price of the symbol quoted
            public double? Open { get; set; }

            // ticker symbol
            public string Symbol { get; set; }

            // etc.
        }

        public class OpenPriceQuoteComparer : IComparer<Quote>
        {
            // Compares two quotes by opening price
            public int Compare(Quote x, Quote y)
            {
                return Comparer<double?>.Default.Compare(x.Open, y.Open);
            }
        }

    Summary

    Defining a custom comparer is often needed for non-natural ordering or defining alternative orderings, but when you just want to compare two items that are IComparable<T> and account for null behavior, you can use the Comparer<T>.Default comparer generator and you’ll never have to worry about correct null value sorting again.

    Technorati Tags: C#, .NET, Little Wonders, BlackRabbitCoder, IComparable, Comparer
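    Addendum: a quick usage sketch of the OpenPriceQuoteComparer above, with hypothetical data of my own (not from the original post):

        var quotes = new List<Quote>
        {
            new Quote { Symbol = "ABC", Open = 12.50 },
            new Quote { Symbol = "XYZ", Open = null },   // market not open yet
            new Quote { Symbol = "DEF", Open = 10.25 },
        };

        // the default comparer treats null as less than any value,
        // so an ascending sort yields XYZ, DEF, ABC
        quotes.Sort(new OpenPriceQuoteComparer());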


  • Reporting Services 2008 Hosting :: How to Solve Error - "Maximum request length exceeded."

    - by mbridge
    Problem: you get the error "Maximum request length exceeded" when working with SQL Server Reporting Services.

    How to solve it? Change the web.config file for SSRS, which is located on your Report Server (the original post shows a screenshot of its location). Edit this web.config file, adding or replacing with this line:

        <httpRuntime executionTimeout="9000" maxRequestLength="500000" />

    This will increase the timeout and the length of the data that can be pushed to the report server. Here is a sample of where I added it in my config file:
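    The original screenshot is not reproduced here, so as a stand-in, this is a minimal sketch of the usual placement (my own reconstruction): httpRuntime belongs under system.web, executionTimeout is in seconds, and maxRequestLength is in kilobytes.

        <configuration>
          <system.web>
            <!-- allow larger uploads (KB) and a longer execution timeout (seconds) -->
            <httpRuntime executionTimeout="9000" maxRequestLength="500000" />
          </system.web>
        </configuration>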


  • My Latest Books – Professional C# 2010 and Professional ASP.NET 4

    - by Bill Evjen
    My two latest books are out!

    - Professional ASP.NET 4 in C# and VB
    - Professional C# 4 and .NET 4

    From the back covers:

    Take your web development to the next level using ASP.NET 4

    ASP.NET is about making you as productive as possible when building fast and secure web applications. Each release of ASP.NET gets better and removes a lot of the tedious code that you previously needed to put in place, making common ASP.NET tasks easier. With this book, an unparalleled team of authors walks you through the full breadth of ASP.NET and the new and exciting capabilities of ASP.NET 4. The authors also show you how to maximize the abundance of features that ASP.NET offers to make your development process smoother and more efficient.

    Professional ASP.NET 4:

    - Demonstrates ASP.NET built-in systems such as the membership and role management systems
    - Covers everything you need to know about working with and manipulating data
    - Discusses the plethora of server controls that are at your disposal
    - Explores new ways to build ASP.NET, such as working with ASP.NET MVC and ASP.NET AJAX
    - Examines the full life cycle of ASP.NET, including debugging and error handling, HTTP modules, the provider model, and more
    - Features both printed and downloadable C# and VB code examples

    Start using the new features of C# 4 and .NET 4 right away

    The new C# 4 language version is indispensable for writing code in Visual Studio 2010. This essential guide emphasizes that C# is the language of choice for your .NET 4 applications. The unparalleled author team of experts begins with a refresher of C# basics and quickly moves on to provide detailed coverage of all the recently added language and Framework features so that you can start writing Windows applications and ASP.NET web applications immediately. This book:

    - Reviews the .NET architecture, objects, generics, inheritance, arrays, operators, casts, delegates, events, Lambda expressions, and more
    - Details integration with dynamic objects in C#, named and optional parameters, COM-specific interop features, and type-safe variance
    - Provides coverage of new features of .NET 4, Workflow Foundation 4, ADO.NET Data Services, MEF, the Parallel Task Library, and PLINQ
    - Has deep coverage of great technologies including LINQ, WCF, WPF, flow and fixed documents, and Silverlight
    - Reviews ASP.NET programming and goes into new features such as ASP.NET MVC and ASP.NET Dynamic Data
    - Discusses communication with WCF, MSMQ, peer-to-peer, and syndication


  • How do I run (execute) a .bin file in Linux (Ubuntu)?

    - by Paula DiTallo
    If you are on a desktop version of Ubuntu, you can right-click on the file icon, click the permissions tab and click on "allow execution". If you are on a server copy without the desktop bells and whistles (or you would rather work with a command line in a terminal window), then do the following:

        sudo chmod +x myProgram.bin

    After you enter your password and get the prompt back, type:

        ./myProgram.bin


  • Flow-Design Cheat Sheet – Part II, Translation

    - by Ralf Westphal
    In my previous post I summarized the notation for Flow-Design (FD) diagrams. Now is the time to show you how to translate those diagrams into code. Hopefully you feel how different this is from UML. UML leaves you alone with your sequence diagram or component diagram or activity diagram. It leaves it to you how to translate your elaborate design into code. Or maybe UML thinks it's so easy no further explanations are needed? I don't know. I just know that, as soon as people stop designing with UML and start coding, things end up very different from the design. And that's bad. That degrades graphical designs to just a waste of time on paper (or in some designer). I even believe that's the reason why most programmers view textual source code as the only and single source of truth. Design and code usually do not match. FD is trying to change that. It wants to make true design a first-class method in every developer's toolchest. For that, the first prerequisite is to be able to easily translate any design into code. Mechanically, without thinking. Even a compiler could do it :-) (More on that in some other article.)

    Translating to Methods

    The first translation I want to show you is for small designs. When you start using FD you should translate your diagrams like this: functional units become methods. That's it. An input-pin becomes a method parameter, an output-pin becomes a return value. (The original post illustrates each of these translations with a diagram.) The above is a part. But a board can be translated likewise and calls the nested FUs in order; a small sketch of this follows below. In any case, be sure to keep the board method clear of any and all business logic. It should not contain any control structures like if, switch, or a loop. Boards do just one thing: calling nested functional units in proper sequence.

    What about multiple input-pins? Try to avoid them. Replace them with a join returning a tuple. What about multiple output-pins? Try to avoid them. Or return a tuple. Or use out-parameters. But as I said, this simple translation is for simple designs only. Splits and joins are easily done with method translation. All pretty straightforward, isn't it. But what about wires, named pins, entry points, explicit dependencies? I suggest you don't use this kind of translation when your designs need these features. Translating to methods is for small-scale designs like you might do once you're working on the implementation of a part of a larger design. Or maybe for a code kata you're doing in your local coding dojo. Instead of doing TDD try doing FD and translate your design into methods. You'll see that way it's much easier to work collaboratively on designs, remember them more easily, keep them clean, and lessen the need for refactoring.

    Translating to Events [coming soon]
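    Coming back to the "Translating to Methods" rule above, here is a minimal sketch of my own (hypothetical functional units, not from the original post) showing two parts and the board that wires them:

        // part: one input-pin becomes a parameter, one output-pin becomes the return value
        static string Normalize(string text)
        {
            return text.Trim().ToLowerInvariant();
        }

        // another part
        static int CountWords(string text)
        {
            return text.Split(' ').Length;
        }

        // board: no business logic, no control structures;
        // it only calls the nested functional units in proper sequence
        static int CountWordsInRawText(string rawText)
        {
            var normalized = Normalize(rawText);
            return CountWords(normalized);
        }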


  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
    Back when I was primarily a C++ developer, I loved C++ templates. The power of writing very reusable generic classes brought the art of programming to a brand new level. Unfortunately, when .NET 1.0 came about, it didn't have a template equivalent. With .NET 2.0, however, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET. However, C# generics behave in some ways very differently from their C++ template cousins. There is a handy clause, however, that helps you navigate these waters to make your generics more powerful.

    The Problem: C# Assumes Lowest Common Denominator

    In C++, you can create a template and do nearly anything syntactically possible on the template parameter, and C++ will not check if the methods/fields/operations invoked are valid until you declare a realization of the type. Let me illustrate with a C++ example:

        // compiles fine, C++ makes no assumptions as to T
        template <typename T>
        class ReverseComparer
        {
        public:
            int Compare(const T& lhs, const T& rhs)
            {
                return rhs.CompareTo(lhs);
            }
        };

    Notice that we are invoking a method CompareTo() off of template type T. Because we don't know at this point what type T is, C++ makes no assumptions and there are no errors. C++ tends to take the path of not checking the template type usage until the method is actually invoked with a specific type, which differs from the behavior of C#:

        // this will NOT compile! C# assumes lowest common denominator.
        public class ReverseComparer<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    So why does C# give us a compiler error even when we don't yet know what type T is? This is because C# took a different path in how they made generics. Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn't say T is an object). That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available at the lowest common denominator type: object. Now, while object has the broadest applicability, it also has the fewest specifics. So how do we allow our generic type placeholder to do more than just what object can do?

    Solution: Constrain the Type With the Where Clause

    So how do we get around this in C#? The answer is to constrain the generic type placeholder with the where clause. Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support. You might think that narrowing the scope of a generic means a weaker generic. In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types. In effect, each constraint says that if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders.

    Constraining Generic Type to Interface or Superclass

    One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or be inherited from a certain base class. For example, you can't call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can:

        public class ReverseComparer<T>
            where T : IComparable<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    Now that we've constrained T to an implementation of IComparable<T>, our variables of generic type T may call any members specified in IComparable<T> as well. This means that the call to CompareTo() is now legal. If you constrain your type, you will also get compiler errors if you attempt to use a type that doesn't meet the constraint. This is much better than the syntax error you would get within the C++ template code itself when you used a type not supported by a C++ template.

    Constraining Generic Type to Only Reference Types

    Sometimes you want to assign an instance of a generic type to null, but you can't do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, where null is meaningless. We can fix this by specifying the class constraint in the where clause. By declaring that a generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type:

        public static class ObjectExtensions
        {
            public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor)
                where TOut : class
                where TIn : class
            {
                return (value != null) ? accessor(value) : null;
            }
        }

    In the example above, we want to be able to access a property off of a reference, and if that reference is null, pass the null on down the line. To do this, both the input type and the output type must be reference types (yes, nullable value types could also be considered applicable at a logical level, but there's not a direct constraint for those).

    Constraining Generic Type to Only Value Types

    Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type. To do this you use the struct constraint, which specifies that the generic type must be a value type (primitive, struct, enum, etc.). Consider the following method, which will convert anything that is IConvertible (int, double, string, etc.) to the value type you specify, or null if the instance is null:

        public static T? ConvertToNullable<T>(IConvertible value)
            where T : struct
        {
            T? result = null;

            if (value != null)
            {
                result = (T)Convert.ChangeType(value, typeof(T));
            }

            return result;
        }

    Because T is constrained to be a value type, we can use T? (System.Nullable<T>), which we could not do if T were a reference type.

    Constraining Generic Type to Require Default Constructor

    You can also constrain a type to require the existence of a default constructor. Because by default C# doesn't know what constructors a generic type placeholder does or does not have available, it can't typically allow you to call one. That said, if you give it the new() constraint, it will mean that the type used to realize the generic type must have a default (no argument) constructor. Let's assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo. Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor:

        using System;
        using System.Collections;
        using System.Collections.Generic;

        // Given a set of Action<TFrom, TTo> mappings, maps TFrom to TTo
        public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>>
            where TTo : class, new()
        {
            // The list of translations from TFrom to TTo
            public List<Action<TFrom, TTo>> Translations { get; private set; }

            // Construct with an empty translation set.
            public Adapter()
            {
                // did this instead of auto-properties to allow simple use of initializers
                Translations = new List<Action<TFrom, TTo>>();
            }

            // Add a translator to the collection, useful for initializer list
            public void Add(Action<TFrom, TTo> translation)
            {
                Translations.Add(translation);
            }

            // Add a translator that first checks a predicate to determine if the translation
            // should be performed, then translates if the predicate returns true
            public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation)
            {
                Translations.Add((from, to) =>
                {
                    if (conditional(from))
                    {
                        translation(from, to);
                    }
                });
            }

            // Translates an object forward from a TFrom object to a TTo object.
            public TTo Adapt(TFrom sourceObject)
            {
                var resultObject = new TTo();

                // Process each translation
                Translations.ForEach(t => t(sourceObject, resultObject));

                return resultObject;
            }

            // Returns an enumerator that iterates through the collection.
            public IEnumerator<Action<TFrom, TTo>> GetEnumerator()
            {
                return Translations.GetEnumerator();
            }

            // Returns an enumerator that iterates through a collection.
            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    Notice, however, that you can't specify any other constructor; you can only specify that the type has a default (no argument) constructor.

    Summary

    The where clause is an excellent tool that gives your .NET generics even more power to perform tasks beyond the base "object level" behavior. There are a few things you cannot (currently) specify with constraints, though:

    - Cannot specify the generic type must be an enum.
    - Cannot specify the generic type must have a certain property or method without specifying a base class or interface; that is, you can't say that the generic must have a Start() method.
    - Cannot specify that the generic type allows arithmetic operations.
    - Cannot specify that the generic type requires a specific non-default constructor.

    In addition, you cannot overload a generic type definition with different, opposing constraints. For example, you can't define an Adapter<T> where T : struct and an Adapter<T> where T : class. Hopefully, in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user-friendly and more powerful!

    Technorati Tags: C#, .NET, Little Wonders, BlackRabbitCoder, where, generics
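    Addendum: to show the initializer-friendly Add overloads of the Adapter above in action, a short usage sketch of my own (the Person/PersonDto types are hypothetical, not from the original post):

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        public class PersonDto
        {
            public string DisplayName { get; set; }
            public bool IsAdult { get; set; }
        }

        // collection initializer syntax works because Adapter implements
        // IEnumerable and exposes the two Add overloads
        var adapter = new Adapter<Person, PersonDto>
        {
            (from, to) => to.DisplayName = from.Name,
            { p => p.Age >= 18, (from, to) => to.IsAdult = true }
        };

        PersonDto dto = adapter.Adapt(new Person { Name = "Ada", Age = 36 });
        // dto.DisplayName == "Ada", dto.IsAdult == true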


  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

    Frequent local commits
    This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. Two hours is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified a set of changes works, save them away; otherwise you run the risk of introducing bugs into it when working on the next task.

    The notion of a task
    By task I mean a related set of changes that can be completed in a few hours or less. In the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles and I think you will find what is comfortable for you.

    Partial commits
    Sometimes one task explodes or unknowingly encompasses other tasks. At this point, try to get to a stopping point on part of the work you are doing and commit it so you can get that out of the way to focus on the remainder. This will often entail committing part of the work and continuing on the rest.

    Outstanding changes as a guide
    If you don't commit often it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the currently selected file, and a commit message all in one window that I keep maximized on one monitor at all times.

    Throw away / stash commits
    There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory refactoring. It's much easier if I can just revert all outstanding changes.

    Sync with the central repository daily
    The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chance of merge conflicts, which just decreases productivity. It also prohibits us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.

    Things I discourage:

    Lots of partial commits right at the end of a series of changes
    If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever-growing list of changes.

    Committing single files
    Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a five-minute window.


  • Exit Infragistics, Enter Telerik

    - by Anthony Trudeau
    Today I made the purchase of the Premium Collection of components from Telerik. This follows an evaluation I’ve been doing to replace the Infragistics components we currently use for Windows Forms, ASP.NET MVC, and WPF.

    It was not a formal evaluation. I had already decided to move the company away from Infragistics. That decision was mostly born out of frustration with support over using the Infragistics components in my first production MVC application. One such issue was a simple scenario where you have a model that has a scalar property that can be one value out of a list. The built-in combobox does this, but I was told by Infragistics support that they didn’t support it, and it took them several emails and days of waiting between responses to determine that. I implemented this in Telerik in a minute, not including the several minutes it took me to get a rudimentary understanding of the component and its API.

    Here’s the code using the built-in combobox:

        @Html.DropDownListFor(x => x.VendorId, new SelectList(ViewBag.Vendors, "VendorId", "VendorName", Model.VendorId), "Select Id")

    Here’s the code using the Telerik combobox:

        @(Html.Telerik().ComboBoxFor(model => model.VendorId)
            .AutoFill(true)
            .BindTo(new SelectList(ViewBag.Vendors, "VendorId", "VendorName", Model.VendorId))
        )

    I chose Telerik over other competitors based on the professional appearance of their website and how easy it was to find information. I’d like to say I had time to evaluate other Infragistics competitors. Due to time constraints I had to make an initial decision based on superficial, but still important, things. I picked Telerik with the plan to only look further at other companies if my evaluation didn’t meet my expectations. Luckily they did, because I didn’t relish the thought of carving out more time to evaluate another set of components.

    Overall my experience with Telerik has been superior to Infragistics in every way. The installation was easy using their control panel installer application. Getting up to speed has been easy. And the communication from Telerik has met my expectations. And we’ll continue to be good as long as I don’t start getting email messages from a sales rep saying that they want to talk to me about training and consulting. I’m looking at you, Infragistics.


  • ViewModelLocators

    - by samkea
    Bobby's: http://blog.bmdiaz.com/archive/2010/03/31/kiss-and-tell---mvvm-and-the-viewmodellocator.aspx
    Kelly's: http://blog.kellybrownsberger.com/archive/2010/03/31/81.aspx
    John Papa and Glen Block's: http://johnpapa.net/silverlight/simple-viewmodel-locator-for-mvvm-the-patients-have-left-the-asylum/


  • A lot has happened since last post!!!!!!!!!

    - by Ratman21
    And I mean a lot! First off, I had two interviews (one was selling insurance, the other installing cable). I have more hope for the cable one. I'm getting more emails from my job search engines (and having problems going through them all and applying for those jobs I know I can do). It seems the more I apply to, the more job emails pop up in my inbox. But at the same time I am fighting feelings of worthlessness (18 months and no job). It is putting a strain on my marriage (we had a blowout over a broken drinking glass since I last posted). But, at the same time, I am fighting mad (a figure of speech, really) about not having a job. Look, just because I am over 55 and have gray hair, it does not mean my brain is dead, or that I can no longer troubleshoot a router or circuit or LAN issue, or that I can't do "IT" work at all. And I could prove this if someone would give me a job. Come on, try me for 90 days at minimum wage. I know you will end up keeping me (hopefully at normal pay). Is anyone hearing me… come on, take up the challenge!


  • TDD - Red-Light-Green-Light: A critical view

    - by Renso
    Subject: The concept of red-light-green-light for TDD/BDD style testing has been around since the dawn of time (well, almost). Having written thousands of tests using this approach, I find myself questioning the validity of the principle.

    The issue: False positive, or a valid test strategy that can be trusted?

    A critical view: I agree that the red-green-light concept has some validity, but who has ever written 2000 tests for a system that goes through a ton of changes due to the organic nature of the application and does not have to change, delete or restructure their existing tests? If your answer to the latter question is "Yes, I had a situation(s) where I had to refactor my code and it caused me to have to rewrite/change/delete my existing tests", read on; else press CTRL+ALT+Del :-)

    Once a test has been written, failed (red light), and then you complete your code and get the green light for that last test, the test for that functionality is now in green-light mode. It can never return to red light again as long as the test exists, even if the test itself is not changed and only the code it tests is changed to fail the test. Why, you ask? Because the reason for the initial red light when you created the test is not guaranteed to have been triggered for the same reasons it is now failing after a code change has been made. Furthermore, when the same test is changed to compile correctly in the case of a compile-breaking code change, the green light once again has been invalidated. Why? Because there is no guarantee that the test-code fix is in the same green-light state as it was when it first ran successfully. To make matters worse, if you fix a compile-breaking test without going through the red-light-green-light test process, your test fix is essentially useless and very dangerous, as it now provides you with a false positive at best. Thinking your code has passed all tests and that it works correctly is far worse than not having any tests at all, well, at least for that part of the system the test code represents.

    What to do? My recommendation is to delete the tests affected and re-create them from scratch. I have to agree: hard to do and justify if it has a significant impact on project deadlines. What do you think?


  • Less is more: The making of a 37 signals style pizza

    - by Liam McLennan
    For years now we have been hearing from 37signals that the way to bake a great web app is to build less; well, the same is true of pizza. Our western hedonism has led us to pursue ever cheesier and more stuffed crusts at the expense of the simple flavours. All we are left with is a fatty, salty heart attack in waiting. The Italians know that the secret to great taste is simplicity. With that in mind I decided to base my pizza masterpiece on these simple flavours:

    - tomato
    - sopressa (spicy aged salami)
    - mozzarella
    - garlic
    - basil

    Of course the first thing one needs when making pizza is a base. A freshly made base is extremely important, but unfortunately I was too lazy. Next up is the tomato sauce. My wife made the sauce by reducing some tomatoes and adding herbs and sugar. We had selected some fine ingredients to make our topping: sopressa salami, fresh basil and the best mozzarella we could find. It is, according to Google, important to bake pizza at a high temperature, so I set the oven to 250C (480F). Here are the before and after shots (photos in the original post). Meanwhile, the dog did nothing.


  • Android: debug certificate expired error

    - by Bill Osuch
    I started up Eclipse today, created a new project, and immediately had an error before I had changed a single line:

        Error generating final archive: Debug Certificate expired on 11/12/11

    When installed, the Android SDK generates a "debug" signing certificate for you in a file called "debug.keystore". Eclipse uses this certificate rather than forcing you to create a new one for every project. In older versions of Eclipse, the certificate was only valid for 365 days, but as I understand it the default has been changed to 30 years in newer versions. If for whatever reason you don't want to upgrade Eclipse, you can manually delete the certificate to force Eclipse to generate a new one. You can find the location in Preferences -> Android -> Build -> Default debug keystore (mine was in C:\Users\myUserName\.android\); just delete the "debug.keystore" file, then go back into Eclipse and Clean the project to generate a new file.
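    As an aside (my addition, not from the original post): if you want to confirm the expiry date before deleting anything, the JDK's keytool can print it. The Android debug keystore uses the well-known alias and password shown below; look for the "Valid from ... until ..." line in the output:

        keytool -list -v -keystore "%USERPROFILE%\.android\debug.keystore" -alias androiddebugkey -storepass android -keypass android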


  • Merry Christmas and Happy New Year everyone, and bye bye, we meet again as soon as possible

    - by anirudha
    A few days ago I planned to enjoy these cold days with my family, so I decided to go there and enjoy Christmas. Even though I am Hindu, no problem. So for those days I am stopping everything related to my daily technical life routine. Because I won't see you on Christmas and New Year: Merry Christmas and Happy New Year to all [friends, relatives, and all guys who know me, or even not]. Bye bye, take care, I'll come again as soon as possible.


  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestration instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B got the response the id.A+id.B and this finished the Resync service execution. What is interesting about this, there were submitted several identical id.A messages and only one id.A+id.B message. Because of this, the system B and the Resync must tolerate the duplicate messages. We also told about this requirement for the system B. Now the same requirement is for the Resunc. Let’s assume the system B was very slow in the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as it was in previous case, with one id.A+id.B but with two id.A+id.B messages. First of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where it goes? So, we have to add one more internal requirement. The whole solution must tolerate many identical id.A+id.B messages. It is easy task with the BizTalk. I added the “SinkExtraMessages” subscriber (orchestration with one receive shape), that just get these messages and do nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A almost simultaneously and completely unordered. There are not only the “Create entity” operation but the Update and Delete operations. And these operations relate each other. Say the Update operation after Delete means not the same as Update after Create. In reality there are entities related each other. Say the Order and Order Items. Change on one of it could start the series of the operations on another. Moreover, the system internals are the “black boxes” and we cannot predict the exact content and order of the operation series. It worth to say, I had to spend a time to manage the zombie message problems. The zombies are still here, but this is not a problem now. And this is another story. What is interesting in the last design? One orchestration works to help another to be more reliable. Why two orchestration design is more reliable, isn’t it something strange? The Synch orchestration takes all the message exchange between systems, here is the area where most of the errors could happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?
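    For readers who want something concrete to hold on to, here is a rough C# sketch of the Resync record and service surface described above (my own rendering of the fields and methods listed in the post, not the author's actual schema):

        // the shape of one Resync record, straight from the field list above
        public class ResyncRecord
        {
            public string IdA { get; set; }
            public string IdB { get; set; }
            public string EntityType { get; set; }   // e.g. "Order", "OrderItem"
            public string Operation { get; set; }    // Create, Update, Delete
            public bool IsSyncStarted { get; set; }
            public bool IsSyncFinished { get; set; }
        }

        // the three main methods of the Resync service
        public interface IResyncService
        {
            void Save(string idA, string entityType, string operation);             // step 2
            void Save(string idA, string idB, string entityType, string operation); // step 4
            void Resync(); // scans for unfinished records and restarts their sync
        }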


  • Future Trends and Challenges for Aircraft Cabins

    - by Bill Evjen
    Ingo Wuggetzer

    - The aircraft cabin has changed since the 60s, and for the worse: first class is actually premium, while economy is still moving down in quality
    - The challenge is to deliver both efficiency and comfort
    - The graying population is a challenge: it will be 14% of the world's population soon
    - Obesity is increasingly becoming an all-milieu core societal problem, and will have an impact on seat sizes
    - Female forces: women will increasingly influence business and lifestyle; there are now more women in college than men
    - People want to be green, and this reflects into aircraft; you can now buy carbon offsets when you buy a ticket on some airlines (20% are willing to pay for green products, 13% would like to but are not doing it yet)
    - Seamless connectivity: the Internet is obviously mainstream and influences our daily lives (2 billion users in 2010); one direction is going mobile, another is social computing; we have to explore this to use more with our products
    - Convergence of products: iPad usage on Finnair, Virgin, Jetstar; device share: iPhone 2%, other smartphones 11%, feature phones 87%
    - Plans to invest in technology trends within the next 3 years: connectivity to/from aircraft (21% major investment / 47% R&D or nominal investment); Web 2.0 (22% major investment / 57% R&D or nominal investment)
    - Cabin technical investments: lighting, wireless sensors, displays
    - People want to use technologies on the plane that they can use on the ground
    - Planes moved to digital in the last decade; now they are moving to wireless
    - Data volumes are going through the roof (Moore's Law)


  • Microsoft Codename Dallas

    - by kaleidoscope
    Dallas is Microsoft’s Information Service offering, which allows developers and information workers to find, acquire and consume published datasets and web services. Users subscribe to datasets and web services of interest and can integrate the information into their own applications via a standardized set of APIs. Data can also be analyzed online using the Dallas Service Explorer, or externally using the Power Pivot Add-In for Excel. We can explore all the datasets and subscribe to the catalog to use the data.

    Dallas Developer Portal: https://www.sqlazureservices.com

    More information can be found at: http://channel9.msdn.com/learn/courses/Azure/Dallas/IntroToDallas/Overview/

    Lokesh, M


  • Windows Azure: Server and Cloud Division

    - by kaleidoscope
    On 8th Dec 2009, Microsoft announced the formation of a new organization within the Server & Tools Business that combines the Windows Server & Solutions group and the Windows Azure group into a single organization called the Server & Cloud Division (SCD). SCD will deliver solutions that help our customers realize even greater benefits from Microsoft’s investments in on-premises and cloud technologies, and the new division will help strengthen an already solid and extensive partner ecosystem. Together, Windows Server, Windows Azure, SQL Server, SQL Azure, Visual Studio and System Center help customers extend existing investments to include a future that will combine both on-premises and cloud solutions, and SCD is now a key player in that effort.

    http://blogs.technet.com/windowsserver/archive/2009/12/08/windows-server-and-windows-azure-come-together-in-a-new-stb-organization-the-server-cloud-division.aspx

    Tinu, O

