Search Results

Search found 2672 results on 107 pages for 'michael joell'.

Page 15 of 107

  • Humorous Word 2010 "feature"?

    - by Michael Stephenson
    I'm just sitting on the train to work and had a funny experience with Word 2010 that I thought I'd share. I'm writing a document and, as usually happens, the train all of a sudden gets a little bit bumpy. Word decides it doesn't like this (maybe it prefers to fly?). Anyway, to show its dissatisfaction with the journey it starts adding new rows to my table in the document all by itself. Five pages of rows later I still can't work out how to stop it, so I have to kill Word. Thank you, autosave.

    Read the article

  • C#/.NET Little Wonders: Use Cast() and OfType() to Change Sequence Type

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    We've seen how the Select() extension method lets you project a sequence from one type to a new type, which is handy for getting just parts of items, or building new items. But what happens when the items in the sequence are already the type you want, but the sequence itself is typed to an interface or super-type instead of the sub-type you need? For example, you may have a sequence of Rectangle stored in an IEnumerable<Shape> and want to consider it an IEnumerable<Rectangle> sequence instead. Today we'll look at two handy extension methods, Cast<TResult>() and OfType<TResult>(), which help you with this task.

    Cast<TResult>() – Attempt to cast all items to type TResult

    So, the first thing we can do would be to attempt to create a sequence of TResult from every item in the source sequence. Typically we'd do this if we had an IEnumerable<T> where we knew that every item was actually a TResult, where TResult inherits/implements T. For example, assume the typical Shape example classes:

        // abstract base class
        public abstract class Shape { }

        // a basic rectangle
        public class Rectangle : Shape
        {
            public int Width { get; set; }
            public int Height { get; set; }
        }

    And let's assume we have a sequence of Shape where every Shape is a Rectangle:

        var shapes = new List<Shape>
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            // ...
        };

    To get the sequence of Shape as a sequence of Rectangle, of course, we could use a Select() clause, such as:

        // select each Shape, cast it to Rectangle
        var rectangles = shapes
            .Select(s => (Rectangle)s)
            .ToList();

    But that's a bit verbose, and fortunately there is already a facility built in and ready to use in the form of the Cast<TResult>() extension method:

        // cast each item to Rectangle and store in a List<Rectangle>
        var rectangles = shapes
            .Cast<Rectangle>()
            .ToList();

    However, we should note that if anything in the list cannot be cast to a Rectangle, you will get an InvalidCastException thrown at runtime. Thus, if our Shape sequence had a Circle in it, the call to Cast<Rectangle>() would have failed. As such, you should only do this when you are reasonably sure of what the sequence actually contains (or are willing to handle an exception if you're wrong).

    Another handy use of Cast<TResult>() is using it to convert an IEnumerable to an IEnumerable<T>. If you look at the signature, you'll see that the Cast<TResult>() extension method actually extends the older, object-based IEnumerable interface instead of the newer, generic IEnumerable<T>. This is your gateway method for being able to use LINQ on older, non-generic sequences. For example, consider the following:

        // the older, non-generic collections are a sequence of object
        var shapes = new ArrayList
        {
            new Rectangle { Width = 3, Height = 13 },
            new Rectangle { Width = 10, Height = 20 },
            // ...
        };

    Since this is an older, object-based collection, we cannot use the LINQ extension methods on it directly.
    For example, if I wanted to query the Shape sequence for only those Rectangles whose Width is > 5, I can't do this:

        // compiler error, Where() operates on IEnumerable<T>, not IEnumerable
        var bigRectangles = shapes.Where(r => r.Width > 5);

    However, I can use Cast<Rectangle>() to treat my ArrayList as an IEnumerable<Rectangle> and then do the query!

        // ah, that's better!
        var bigRectangles = shapes.Cast<Rectangle>().Where(r => r.Width > 5);

    Or, if you prefer, in LINQ query expression syntax:

        var bigRectangles = from s in shapes.Cast<Rectangle>()
                            where s.Width > 5
                            select s;

    One quick warning: Cast<TResult>() only attempts to cast, it won't perform a cast conversion. That is, consider this:

        var intList = new List<int> { 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };

        // casting ints to longs, this should work, right?
        var asLong = intList.Cast<long>().ToList();

    Will the code above work? No, you'll get an InvalidCastException. Remember that Cast<TResult>() is an extension of IEnumerable, thus it works on a sequence of object, which means that it will box every int as an object as it enumerates over it, and there is no cast conversion from object to long, so the cast fails. In other words, a cast from int to long will succeed because there is a conversion from int to long. But a cast from int to object to long will not, because you can only unbox an item by casting it to its exact type. For more information on why cast-converting boxed values doesn't work, see this post on The Dangers of Casting Boxed Values (here).

    OfType<TResult>() – Filter sequence to only items of type TResult

    So, we've seen how we can use Cast<TResult>() to change the type of our sequence when we expect all the items of the sequence to be of a specific type. But what do we do when a sequence contains many different types, and we are only concerned with a subset of a given type? For example, what if a sequence of Shape contains Rectangle and Circle instances, and we just want to select all of the Rectangle instances? Well, let's say we had this sequence of Shape:

        var shapes = new List<Shape>
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            new Circle { Radius = 10 },
            new Square { Side = 13 },
            // ...
        };

    Well, we could filter for the rectangles using Where(), like:

        var onlyRectangles = shapes.Where(s => s is Rectangle).ToList();

    But fortunately, an easier way has already been written for us in the form of the OfType<T>() extension method:

        // returns only a sequence of the shapes that are Rectangles
        var onlyRectangles = shapes.OfType<Rectangle>().ToList();

    Now that we have a sequence of only the Rectangles in the original sequence, we can also use this to chain other queries that depend on Rectangles, such as:

        // select only Rectangles, then filter to only those more than
        // 5 units wide...
        var onlyBigRectangles = shapes.OfType<Rectangle>()
            .Where(r => r.Width > 5)
            .ToList();

    The OfType<Rectangle>() will filter the sequence to only the items that are of type Rectangle (or a subclass of it), and that results in an IEnumerable<Rectangle>; we can then apply the other LINQ extension methods to query that list further. Just as Cast<TResult>() is an extension method on IEnumerable (and not IEnumerable<T>), the same is true for OfType<T>(). This means that you can use OfType<TResult>() on object-based collections as well.
    For example, given an ArrayList containing Shapes, as below:

        // object-based collections are a sequence of object
        var shapes = new ArrayList
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            new Circle { Radius = 10 },
            new Square { Side = 13 },
            // ...
        };

    We can use OfType<Rectangle>() to filter the sequence to only Rectangle items (and subclasses), and then chain other LINQ expressions, since we will then have an IEnumerable<Rectangle>:

        // OfType() converts the sequence of object to a new sequence
        // containing only Rectangle or sub-types of Rectangle.
        var onlyBigRectangles = shapes.OfType<Rectangle>()
            .Where(r => r.Width > 5)
            .ToList();

    Summary

    So now we've seen two different ways to get a sequence of a superclass or interface down to a more specific sequence of a subclass or implementation. The Cast<TResult>() method casts every item in the source sequence to type TResult, and the OfType<TResult>() method selects only those items in the source sequence that are of type TResult. You can use these to downcast sequences, or adapt older types and sequences that only implement IEnumerable (such as DataTable, ArrayList, etc.).

    Technorati Tags: C#,CSharp,.NET,LINQ,Little Wonders,OfType,Cast,IEnumerable<T>
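    As an aside on the int-to-long pitfall above: if you actually need to widen a sequence of int to long, a projection performs a true conversion where Cast<long>() cannot. A minimal sketch (this snippet is mine, not from the original article):

        var intList = new List<int> { 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };

        // Select() converts each int to long directly with no boxing involved,
        // so this succeeds where intList.Cast<long>() throws InvalidCastException.
        var asLong = intList.Select(i => (long)i).ToList();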

    Read the article

  • C#/.NET Little Wonders: The Nullable static class

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Today we're going to look at an interesting Little Wonder that can be used to mitigate what could be considered a Little Pitfall. The Little Wonder we'll be examining is the System.Nullable static class. No, not the System.Nullable<T> class, but a static helper class that has one useful method in particular that we will examine… but first, let's look at the Little Pitfall that makes this wonder so useful.

    Little Pitfall: Comparing nullable value types using <, >, <=, >=

    Examine this piece of code, without examining it too deeply; what's your gut reaction as to the result?

        int? x = null;

        if (x < 100)
        {
            Console.WriteLine("True, {0} is less than 100.",
                x.HasValue ? x.ToString() : "null");
        }
        else
        {
            Console.WriteLine("False, {0} is NOT less than 100.",
                x.HasValue ? x.ToString() : "null");
        }

    Your gut would be to say true, right? It would seem to make sense that a null integer is less than the integer constant 100. But the result is actually false! The null value is not less than 100 according to the less-than operator.

    It looks even more outrageous when you consider that this also evaluates to false:

        int? x = null;

        if (x < int.MaxValue)
        {
            // ...
        }

    So, are we saying that null is less than every valid int value? If that were true, null should be less than int.MinValue, right? Well… no:

        int? x = null;

        // um... hold on here, x is NOT less than min value?
        if (x < int.MinValue)
        {
            // ...
        }

    So what's going on here? If we use greater-than instead of less-than, we see the same little dilemma:

        int? x = null;

        // once again, null is not greater than anything either...
        if (x > int.MinValue)
        {
            // ...
        }

    It turns out that four of the comparison operators (<, <=, >, >=) are designed to return false anytime at least one of the arguments is null when comparing System.Nullable wrapped types that expose the comparison operators (short, int, float, double, DateTime, TimeSpan, etc.). What's even odder is that even though the two equality operators (== and !=) work correctly, >= and <= have the same issue as < and > and return false even if both System.Nullable wrapped operator-comparable types are null!

        DateTime? x = null;
        DateTime? y = null;

        if (x <= y)
        {
            Console.WriteLine("You'd think this is true, since both are null, but it's not.");
        }
        else
        {
            Console.WriteLine("It's false because <=, <, >, >= don't work on null.");
        }

    To make matters even more confusing, take for example your usual check to see if something is less than, greater than, or equal:

        int? x = null;
        int? y = 100;

        if (x < y)
        {
            Console.WriteLine("X is less than Y");
        }
        else if (x > y)
        {
            Console.WriteLine("X is greater than Y");
        }
        else
        {
            // We fall into the "equals" assumption, but clearly null != 100!
            Console.WriteLine("X is equal to Y");
        }

    Yes, this code outputs "X is equal to Y" because both the less-than and greater-than operators return false when a Nullable wrapped operator-comparable type is null. This violates a lot of our assumptions, because we assume that if something is not less than something, and it's not greater than something, it must be equal.
    So keep in mind that the only two comparison operators that work on Nullable wrapped types where at least one is null are the equals (==) and not equals (!=) operators:

        int? x = null;
        int? y = 100;

        if (x == y)
        {
            Console.WriteLine("False, x is null, y is not.");
        }

        if (x != y)
        {
            Console.WriteLine("True, x is null, y is not.");
        }

    Solution: The Nullable static class

    So we've seen that <, <=, >, and >= have some interesting and perhaps unexpected behaviors that can trip up a novice developer who isn't expecting the kinks that System.Nullable<T> types with comparison operators can throw. How can we easily mitigate this?

    Well, obviously, you could do null checks before each check, but that starts to get ugly:

        if (x.HasValue)
        {
            if (y.HasValue)
            {
                if (x < y)
                {
                    Console.WriteLine("x < y");
                }
                else if (x > y)
                {
                    Console.WriteLine("x > y");
                }
                else
                {
                    Console.WriteLine("x == y");
                }
            }
            else
            {
                Console.WriteLine("x > y because y is null and x isn't");
            }
        }
        else if (y.HasValue)
        {
            Console.WriteLine("x < y because x is null and y isn't");
        }
        else
        {
            Console.WriteLine("x == y because both are null");
        }

    Yes, we could probably simplify this logic a bit, but it's still horrendous! So what do we do if we want to consider null less than everything and be able to properly compare Nullable<T> wrapped value types?

    The key is the System.Nullable static class. This class is a companion class to the System.Nullable<T> class and gives you a few helper methods for Nullable<T> wrapped types, including a static Compare<T>() method.

    What's so big about the static Compare<T>() method? It implements an IComparer-compatible comparison on Nullable<T> types. Why do we care? Well, if you look at the MSDN description for how IComparer works, you'll read:

        Comparing null with any type is allowed and does not generate an exception when using IComparable. When sorting, null is considered to be less than any other object.

    This is what we probably want! We want null to be less than everything! So now we can change our logic to use the Nullable.Compare<T>() static method:

        int? x = null;
        int? y = 100;

        if (Nullable.Compare(x, y) < 0)
        {
            // Yes! x is null, y is not, so x is less than y according to Compare().
            Console.WriteLine("x < y");
        }
        else if (Nullable.Compare(x, y) > 0)
        {
            Console.WriteLine("x > y");
        }
        else
        {
            Console.WriteLine("x == y");
        }

    Summary

    So, when doing comparisons between two numeric values where either of them may be a null Nullable<T>, consider using the System.Nullable.Compare<T>() method instead of the comparison operators. It will treat null as less than any value, and will avoid logic consistency problems when relying on < returning false to indicate >= is true and so on.

    Technorati Tags: C#,C-Sharp,.NET,Little Wonders,Little Pitfalls,Nullable
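    If you find yourself making these comparisons often, you can wrap Nullable.Compare<T>() behind more readable helpers. A minimal sketch (my own illustration, not from the article; the method names are invented):

        public static class NullableComparisonExtensions
        {
            // Treats null as less than any value, per Nullable.Compare's semantics.
            public static bool IsLessThan<T>(this T? left, T? right) where T : struct
            {
                return Nullable.Compare(left, right) < 0;
            }

            public static bool IsGreaterThan<T>(this T? left, T? right) where T : struct
            {
                return Nullable.Compare(left, right) > 0;
            }
        }

    With these in scope, the earlier example reads as if (x.IsLessThan(y)) { ... } and keeps the null handling in one place.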

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background

    One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting the problem which don't seem to have been discussed much in the community, so I thought I'd share my findings.

    In the scenario we have BizTalk trying to make calls to a .net web service which was exposed as a WSE 2 endpoint. In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <connectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application.

    The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of being able to correlate a message from BizTalk to the IIS log in the .net application and the custom logs in the application, especially when there was a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time I know, but that's a different story!). Anyway, we were able to identify that this had timed out on the BizTalk side. Based on the normal 2 minute timeout we knew something unexpected was going on.

    From here I decided to do some experimentation, and I wanted to start outside of BizTalk because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.

    Server-side - Sample Web Service

    To begin with I created a sample web service. Nothing special, just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition. The web service is just a hello world style web service as shown in the below picture. The only key feature is that the server side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep.

    In the configuration for this web service there again is nothing special; it's pretty much the most plain, simple web service you could build.

    Client-Side

    To begin looking at what was happening with our example I created a number of different ways to consume the web service.

    SoapHttpClientProtocol Example

    I created a small application which would use a normal proxy generated to call the web service. It would iterate around a loop and make calls using the begin/end methods so I can do this asynchronously. I would do a loop of 20 calls with the connectionManagement configuration section supporting only 5 concurrent connections to the server:

        <system.net>
          <connectionManagement>
            <remove address="*"/>
            <add address="*" maxconnection="12" />
            <add address="http://<ServerName>" maxconnection="5" />
          </connectionManagement>
        </system.net>

    The below picture shows an example of the service calling code; the key points are:

    - I have configured the timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    I would run the client and execute 21 calls to the web service.
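    The pictures referenced above are not reproduced here, so the following is a rough sketch of client code along the lines the post describes (the proxy type HelloService and method HelloWorld are hypothetical names for the generated asmx proxy):

        var proxy = new HelloService();   // hypothetical generated SoapHttpClientProtocol proxy
        proxy.Timeout = 40000;            // 40 second timeout, in milliseconds

        for (int i = 0; i < 21; i++)
        {
            int callNumber = i;
            proxy.BeginHelloWorld(ar =>
            {
                try
                {
                    string result = proxy.EndHelloWorld(ar);
                    Trace.WriteLine("Call " + callNumber + " returned: " + result);
                }
                catch (Exception ex)
                {
                    Trace.WriteLine("Call " + callNumber + " failed: " + ex.Message);
                }
            }, null);
        }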
    The Results

    Below is the client side trace showing what's happening on the client, and below that is the web service side trace showing what's happening on the server. Some observations on the results are:

    - All of the calls were successful from the client's perspective
    - You could see the next call starting on the server as soon as the previous one had completed
    - Calls took significantly longer than 40 seconds from the start of our call to the return. In fact call 20 took 2 minutes and 30 seconds from the perspective of my code to execute even though I had set the timeout to 40 seconds

    WSE 2 Sample

    In the second example I used the exact same code to call the web service again, with a single exception: I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The below picture shows the basic code and the key points are:

    - I have configured the timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    This test would execute 21 calls from the client to the web service.

    The Results

    The first trace below is from the client side; the second is from the server side. Some observations on the trace results for this scenario are:

    - With call 4, if you look at the server side trace it did not start executing on the server for a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times and not on most others, so at this point I'm just putting this down to something unexpected happening on the development machine and we will leave this observation out of scope of this article.
    - You can see that the client side trace statement executed almost immediately in all cases
    - All calls after the initial few calls would time out
    - On the client side, the calls that did time out timed out after a longer duration than the 40 seconds we set as the timeout
    - You can see that as calls were completing on the server the next calls were starting to come through
    - The calls that timed out on the client did actually connect to the server and their server side execution completed successfully

    Elaboration on the findings

    Based on the above observations I have drawn the below sequence diagram to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call. In the diagram below I've put two notes on the Web Service Proxy to show the two different places where the different base classes seem to start their timeout counters.

    From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy, and the WSE one includes the time to get a connection from the pool, whereas the Soap proxy timeout just covers the method execution.

    One interesting observation is that if we rerun the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we will see a similar pattern where everything after the first few calls will time out on the client as soon as it makes a connection to the server, whereas the soap proxy will happily plug away and process all of the calls without a single timeout. I have actually set the sample running overnight and this did happen.

    At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour, and which is right, and why are they different?
    I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to consider that they are different and they could have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests don't get to the situation where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk where you often have high throughput requirements.

    Some of the considerations you should make

    Based on this behaviour you should be aware of the following:

    - In a .net application, if you are making lots of concurrent web service calls from an application in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client will have a default of 2 connections to remote servers, so you should bear this in mind
    - When you are developing a BizTalk solution or a .net solution with the WSE 2 stack you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you
    - For an application using WSE 2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns

    Our Work Around

    In the short term for our specific scenario we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client side wait, and on the odd occasion where we do get a timeout the BizTalk send port retry will handle this. What was causing our original problem was that for that short window we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

    Longer Term Solution

    As a longer term solution this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout.

    Summary

    I've had this blog post on my to do list for ages, but hopefully it will be useful to some people to just understand this behaviour and to possibly help you with some performance issues you may have. I do not believe there is too much in the way of documentation, particularly around WSE 2 and ASMX in this area, so again another bit of ammunition for migrating to WCF. I'll try to do a follow up post with the sample for WCF to show how this changes things.
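    For reference on that last point, WCF bindings expose separate timeout knobs in configuration. An illustrative sketch (the binding name and values are made up, not from the post):

        <bindings>
          <basicHttpBinding>
            <binding name="tunedTimeouts"
                     openTimeout="00:00:10"
                     sendTimeout="00:00:40"
                     receiveTimeout="00:02:00"
                     closeTimeout="00:00:10" />
          </basicHttpBinding>
        </bindings>

    Here the time to establish a connection (openTimeout) is accounted for separately from the time to send a request (sendTimeout), which is exactly the distinction the WSE 2 and SoapHttpClientProtocol proxies blur.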

    Read the article

  • Webcast: Leveraging Mobile And Social Commerce To Deliver A Complete Customer Experience

    - by Michael Hylton
      Mobile and social media are emerging as new channels for customers to interact and transact with brands. Mobile users demand experiences that are relevant and engaging and are designed with the capabilities and constraints of devices in mind. Just having a mobile app or mobile-specific website is not a long-term strategy. Brands must invest in an optimized experience, especially as mobile becomes critical to an overall digital commerce strategy. Debating the merits of using Facebook or not is missing the point when it comes to social media. True innovators are thinking beyond the social channel and are building programs that leverage Facebook data to drive conversions and engagement both on and off Facebook. Learn how to be more strategic about mobile and social commerce in this informative editorial webcast. Attend this webcast and you will learn:

    - How to leverage mobile and social touchpoints in digital commerce
    - Why having a Facebook page or a mobile app is not enough
    - The benefits of a consistent, personalized and relevant customer experience
    - Strategies for integrating mobile and social into an overall digital commerce strategy

    Featured Speakers:

    - Peter Sheldon, Senior Analyst, eBusiness & Channel Strategy Professionals, Forrester Research
    - Brenna Johnson, Product Manager, Oracle Commerce

    Click here to register.

    Read the article

  • Why prefer a wildcard to a type discriminator in a Java API (Re: Effective Java)

    - by Michael Campbell
    In the generics section of Bloch's Effective Java (which handily is the "free" chapter available to all: http://java.sun.com/docs/books/effective/generics.pdf), he says: "If a type parameter appears only once in a method declaration, replace it with a wildcard." (See pages 31-33 of that pdf.) The signature in question is: public static void swap(List<?> list, int i, int j) vs public static <E> void swap(List<E> list, int i, int j) And then he proceeds to use a static private "helper" function with an actual type parameter to perform the work. The helper function signature is EXACTLY that of the second option. Why is the wildcard preferable, since you need to NOT use a wildcard to get the work done anyway? I understand that in this case, since he's modifying the List, you can't add to a collection through an unbounded wildcard, so why use the wildcard at all?

    Read the article

  • BizTalk 360 Alarms, How do you configure yours?

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/18/153157.aspx

    I've recently written a guest post for BizTalk 360 on their blog about how customers may configure BizTalk 360 alarms to optimize getting the right information to the right type of support people. These are my thoughts on how users of BizTalk 360 can get the best value out of its alarms: http://blogs.biztalk360.com/what-are-the-different-types-of-alarms-alerts-you-should-configure-in-biztalk360/

    Read the article

  • What exactly is UV and UVW Mapping?

    - by Michael Stum
    I'm trying to understand some basic 3D concepts; at the moment I'm trying to figure out how textures actually work. I know that UV and UVW mapping are techniques that map 2D textures to 3D objects - Wikipedia told me as much. I googled for explanations but only found tutorials that assumed I already know what it is. From my understanding, each 3D model is made out of points, and several points create a face? Does each point or face have a secondary coordinate that maps to an x/y position in the 2D texture? Or how does unwrapping manipulate the model? Also, what does the W in UVW really do, what does it offer over UV? As I understand it, W maps to the Z coordinate, but in what situation would I have different textures for the same X/Y and different Z? Wouldn't the Z part be invisible? Or am I completely misunderstanding this?
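    On the "secondary coordinate" part of the question, the common arrangement is indeed that each vertex carries its own texture coordinates alongside its position. A toy illustration (mine, not from the question; the field names are arbitrary):

        // Each vertex stores a position and a UV pair; the UVs are
        // interpolated across the face when the texture is sampled.
        struct TexturedVertex
        {
            public float X, Y, Z; // position in model space
            public float U, V;    // texture coordinates, usually in [0, 1]
        }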

    Read the article

  • Chem eStandards 5.1 in Public Review

    - by michael.rowell
    The Open Applications Group has announced the opening of the 45-day public review period for Chem eStandards version 5.1. Interested parties have until 13 July to submit comments. There will be two webinar review sessions, on 23 June and 24 June; the details of the webinars will be available soon. You can download the Chem eStandards review package. If you have any questions, contact Jim Wilson, the OAGi Chemical Council Architect.

    Read the article

  • JavaScript function to Redirects parent of IFrame to specified URL

    - by Michael Freidgeim
    /// <summary>
    /// Redirects parent of IFrame to specified URL.
    /// If current page doesn't have a parent, redirects itself.
    /// </summary>
    /// <param name="page"></param>
    /// <param name="url"></param>
    public static void NavigateParentToUrl(Page page, string url)
    {
        String script = @"
    try {
        var sUrl = '" + url + @"';
        if (self.parent.frames.length != 0)
            self.parent.location = sUrl;
        else
            self.location = sUrl;
    } catch (e) {}
    ";
        page.ClientScript.RegisterStartupScript(TypeForClientScript(), MethodBase.GetCurrentMethod().Name, script, true);
    }
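    A hypothetical usage sketch (not part of the original snippet; the PageHelper class name and the session check are invented for illustration):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (Session["User"] == null) // e.g. session expired inside the IFrame
            {
                PageHelper.NavigateParentToUrl(this, "/Login.aspx");
            }
        }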

    Read the article

  • Playing a Song causing WP7 to crash on phone, but not on emulator

    - by Michael Zehnich
    Hi there, I am trying to implement a song into a game that begins playing and continually loops on Windows Phone 7 via XNA 4.0. On the emulator this works fine, however when deployed to a phone it simply gives a black screen before going back to the home screen. Here is the rogue code in question, and commenting this code out makes the app run fine on the phone:

        // in the class fields
        private Song song;

        // in the LoadContent() method
        song = Content.Load<Song>("song");

        // in the Update() method
        if (MediaPlayer.GameHasControl && MediaPlayer.State != MediaState.Playing)
        {
            MediaPlayer.Play(song);
        }

    The song file itself is 2:53 long, a 2.28 MB .wma file at a 106 kbps bitrate. Again, this works perfectly on the emulator but does not run at all on the phone. Thanks for any help you can provide!

    Read the article

  • University Choices For Programmers

    - by Michael
    I've noticed that the majority of eminent hackers seem to have come from prestigious universities. How true is this, and is it important to have this type of background to become prominent in the programming field? I don't necessarily have the means to attend a top school, but I have the desire to work among the best. Is it possible without coming from a highly-regarded program? Is graduate study at a good school more important than undergraduate in this regard?

    Read the article

  • Rapid Evolution of Society & Technology

    - by Michael Snow
    We caught up with Brian Solis on the phone the other day, and Christie Flanagan had a chance to chat with him and learn a bit more about him and some of the concepts he'll be addressing in our Social Business Thought Leaders Webcast on Thursday 12/13/12. Be sure and register for this week's webcast.

    Guest post by Brian Solis. Reposted (borrowed) from his posting of May 24, 2012: "Dear [insert business name], what's your promise?" - Brian Solis

    You say you want to get closer to customers, but your actions are different than your words. You say you want to "surprise and delight" customers, but your product development teams are too busy building against a roadmap without consideration of the 5th P of marketing… people. Your employees are your number one asset; however, the infrastructure of the organization has turned once optimistic and ambitious intrapreneurs into complacent cogs or, worse, your greatest detractors. You question the adoption of disruptive technology by your internal champions, yet you've not tried to find the value for yourself. You're a change agent and you truly wish to bring about change, but you've not invested time or resources to answer "why" in your endeavors to become a connected or social business.

    If we are to truly change, we must find purpose. We must uncover the essence of our business and the value it delivers to traditional and connected consumers. We must rethink the spirit of today's embrace and clearly articulate how transformation is going to improve customer and employee experiences and relationships now and over time. Without doing so, any attempts at evolution will be thwarted by reality.

    In an era of Digital Darwinism, no business is too big to fail or too small to succeed. These are undisciplined times which require alternative approaches to recognize and pursue new opportunities. But everything begins with acknowledging that the 360 view of the world you see today is actually a filtered view of managed and efficient convenience. Today, many organizations that were once inspired by innovation and engagement have fallen into a process of marketing, operationalizing, managing, and optimizing. That might have worked for the better part of the last century, but for the next 10 years and beyond, new vision, leadership and supporting business models will be written to move businesses from rigid frameworks to adaptive and agile entities.

    I believe that today's executives will undergo a great test; a test of character, vision, intention, and universal leadership. It starts with a simple but essential question… what is your promise? Notice, I didn't ask about your brand promise. Nor did I ask for you to cite your mission and vision statements.
    This is much more than value propositions or manufactured marketing language designed to hook audiences and stakeholders. I asked for your promise to me as your consumer, stakeholder, and partner. This isn't about B2B or B2C, but instead people to people, person to person. It is this promise that will breathe new life into an organization that, on the outside, could be misdiagnosed as catatonic by those who are disrupting your markets.

    A promise, for example, is meant to inspire. It creates alignment. It serves as the foundation for your vision, mission, and all business strategies, and it must come from the top to mean anything. For without it, we cannot genuinely voice what it is we stand for or stand behind. Think for a moment about the definition of community. It's easy to confuse a workplace or a market where everyone simply shares common characteristics. However, a community in this day and age is much more than belonging to something; it's about doing something together that makes belonging matter.

    The next few years will force a divide where companies are separated by intention as measured by actions and words. But becoming a social business is not enough. Becoming more authentic and transparent doesn't serve as a mantra for a renaissance. A promise is the ink that inscribes the spirit of the relationship between you and me. A promise serves as the words that influence change from within and change beyond the halls of our business. It is the foundation for a renewed embrace, one that must then find its way to every aspect of the organization. It's the difference between a social business and an adaptive business. While an adaptive business can also be social, it is the culture of the organization that strives to not just use technology to extend current philosophies or processes into new domains, but instead give rise to a new culture where striving for relevance is among its goals. The tools and networks simply become enablers of a greater mission.

    You are reading this because you believe in something more than what you're doing today. While you fight for change within your organization, remember to aim for a higher purpose. Organizations that strive for innovation, imagination, and relevance will outperform those that do not. Part of your job is to lead a missionary push that unites the groundswell with a top down cascade. Change will only happen because you and other internal champions see what others can't and will do what others won't. It takes resolve. It takes the ability to translate new opportunities into business value. And it takes courage.

    "This is a very noisy world, so we have to be very clear what we want them to know about us" - Steve Jobs

    So -- where do you begin to evaluate the kind of experience you are delivering for your customers, partners, and employees? Take a look at this white paper: Creating a Successful and Meaningful Customer Experience on the Web, and then have a cup of coffee while you listen to the sage advice of Guy Kawasaki in a short video below.

    An interview with Guy Kawasaki on Maximizing Social Media Channels

    Read the article

  • I want to build a Virtual Machine, are there any good references?

    - by Michael Stum
    I'm looking to build a virtual machine as a platform-independent way to run some game code (essentially scripting). The virtual machines that I'm aware of in games are rather old: Infocom's Z-Machine, LucasArts' SCUMM, id Software's Quake 3. As a .net developer, I'm familiar with the CLR and looked into the CIL instructions to get an overview of what you actually implement on a VM level (vs. the language level). I've also dabbled a bit in 6502 assembler during the last year. The thing is, now that I want¹ to implement one, I need to dig a bit deeper. I know that there are stack-based and register-based VMs, but I don't really know which one is better at what, and whether there are more or hybrid approaches. I need to deal with memory management, decide which low level types are part of the VM, and need to understand why stuff like ldstr works the way it does. My only reference book (apart from the Z-Machine stuff) is the CLI Annotated Standard, but I wonder if there is better, more general/fundamental reading on VMs? Basically something like the Dragon Book, but for VMs. I'm aware of Donald Knuth's Art of Computer Programming, which uses a register-based VM, but I'm not sure how applicable that series still is, especially since it's still unfinished. Clarification: The goal is to build a specialized VM. For example, Infocom's Z-Machine contains opcodes for setting the background color or playing a sound. So I need to figure out how much goes into the VM as opcodes vs. the compiler that takes a script (language TBD) and generates the bytecode from it, but for that I need to understand what I'm really doing. ¹ I know, modern technology would allow me to just interpret a high level scripting language on the fly. But where is the fun in that? :) It's also a bit hard to google because "virtual machines" is nowadays often associated with VMWare-type OS virtualization...
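    To make the stack-vs-register distinction concrete, here is a toy sketch (my own, not from the question; the opcode set and names are invented) of the dispatch loop at the heart of a stack-based VM. A register-based VM would instead encode source and destination register indices into each instruction:

        using System;
        using System.Collections.Generic;

        public enum OpCode { Push, Add, Print }

        public class Instr
        {
            public OpCode Code;
            public int Operand;
            public Instr(OpCode code, int operand = 0) { Code = code; Operand = operand; }
        }

        public static class ToyVm
        {
            public static void Run(IEnumerable<Instr> program)
            {
                // operands live on an evaluation stack rather than in named registers
                var stack = new Stack<int>();
                foreach (var i in program)
                {
                    switch (i.Code)
                    {
                        case OpCode.Push: stack.Push(i.Operand); break;
                        case OpCode.Add: stack.Push(stack.Pop() + stack.Pop()); break;
                        case OpCode.Print: Console.WriteLine(stack.Peek()); break;
                    }
                }
            }
        }

        // ToyVm.Run(new[] { new Instr(OpCode.Push, 2), new Instr(OpCode.Push, 3),
        //                   new Instr(OpCode.Add), new Instr(OpCode.Print) }); // prints 5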

    Read the article

  • C#: A "Dumbed-Down" C++?

    - by James Michael Hare
    I was spending a lovely day this last weekend watching my sons play outside in one of the better weekends we've had here in Saint Louis for quite some time, and whilst watching them and making sure no limbs were broken or eyes poked out with sticks and other various potential injuries, I was perusing (in the correct sense of the word) this month's MSDN magazine to get a sense of the latest VS2010 features in both IDE and in languages. When I got to the back pages, I saw a wonderful article by David S. Platt entitled "In Praise of Dumbing Down" (msdn.microsoft.com/en-us/magazine/ee336129.aspx). The title captivated me and I read it and found myself agreeing with it completely, especially as it related to my first post on divorcing C++ as my favorite language.

    Unfortunately, as Mr. Platt mentions, the term dumbing-down has negative connotations, but it is really and truly a good thing. You are, in essence, taking something that is extremely complex and reducing it to something that is much easier to use and far less error prone. Adding safeties to power tools and anti-kick mechanisms to chainsaws are in some sense "dumbing them down" for the common user -- but that also makes them safer and more accessible for the common user. This was exactly my point with C++ and C#. I did not mean to imply that C++ was not a useful or good language, but that in a very high percentage of cases, it is too complex and error prone for the job at hand.

    Choosing the correct programming language for a job is a lot like choosing any other tool for a task. For example: if I want to dig a French drain in my lawn, I can attempt to use a huge tractor-like backhoe and the job would be done far quicker than if I were to dig it by hand. I can't deny that the backhoe has the raw power and speed to perform. But you also cannot deny that my chances of injury, or of severing utility lines or other resources, climb at an exponential rate inverse to the amount of training I may have on that machinery.

    Is C++ a powerful tool? Oh yes, and it's great for those tasks where speed and performance are paramount. But for most of us, it's the wrong tool. And keep in mind, I say this even though I have 17 years of experience in using it and feel myself highly adept in utilizing its features both in the standard libraries, the STL, and in supplemental libraries such as BOOST, which, although they greatly help with adding powerful features quickly, do very little to curb the relative dangers of the language.

    So, you may say, the fault is in the developer; that if the developer had higher skills, or if we only hired C++ experts, this would not be an issue. Now, I will concede there is some truth to this. Obviously, the more highly skilled the C++ developers you hire, the better the chance they will produce highly performant and error-free code. However, what good is that to the average developer who cannot afford a full stable of C++ experts?

    That's my point with C#: it's like a kinder, gentler C++. It gives you nearly the same speed, and in many ways even more power than C++, and it gives you a much softer cushion for novices to fall against if they code less than optimally. A bug is a bug, of course, in any language, but C# does a good job of hiding and taking on the task of handling almost all of the resource issues that make C++ so tricky. For my money, C# is much more maintainable, more feature-rich, second only slightly in performance, faster to market, and -- last but not least -- safer and easier to use.
That's why, where I work, I much prefer to see the developers moving to C#.  The quantity of bugs is much lower, and we don't need to hire "experts" to achieve the same results since the language itself handles those resource pitfalls so prevalent in poorly written C++ code.  C++ will still have its place in the world, and I'm sure I'll still use it now and again where it is truly the correct tool for the job, but for nearly every other project C# is a wonderfully "dumbed-down" version of C++ -- in the very best sense -- and to me, that's the smart choice.

    Read the article

  • C#: Handling Notifications: inheritance, events, or delegates?

    - by James Michael Hare
    Often times as developers we have to design a class where we get notification when certain things happen. In older object-oriented code this would often be implemented by overriding methods; with events, delegates, and interfaces, however, we have far more elegant options. So, when should you use each of these methods, and what are their strengths and weaknesses?

    Now, for the purposes of this article, when I say notification I'm just talking about ways for a class to let a user know that something has occurred. This can be through any programmatic means such as inheritance, events, delegates, etc.

    So let's build some context. I'm sitting here thinking about a provider-neutral messaging layer for the place I work, and I got to the point where I needed to design the message subscriber which will receive messages from the message bus. Basically, what we want is to be able to create a message listener and have it be called whenever a new message arrives. Now, back before the flood we would have done this via inheritance and an abstract class:

        // using inheritance - omitting argument null checks and halt logic
        public abstract class MessageListener
        {
            private ISubscriber _subscriber;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber)
            {
                _subscriber = subscriber;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // user will override this to process their messages
            protected abstract void OnMessageReceived(Message msg);

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        OnMessageReceived(msg);
                    }
                }
            }
            ...
        }

    It seems so odd to write this kind of code now. Does it feel odd to you? Maybe it's just because I've gotten so used to delegation that I really don't like the feel of this. To me it is akin to saying that if I want to drive my car I need to derive a new instance of it just to put myself in the driver's seat. And yet, unquestionably, five years ago I would have probably written the code as you see above.

    To me, inheritance is a flawed approach for notifications due to several reasons:

    - Inheritance is one of the HIGHEST forms of coupling.
    - You can't seal the listener class because it depends on sub-classing to work.
    - Because C# does not allow multiple inheritance, I've spent my one inheritance implementing this class.
    - Every time you need to listen to a bus, you have to derive a class, which leads to lots of trivial sub-classes.
    - The act of consuming a message should be a separate responsibility from the act of listening for a message (SRP).
    - Inheritance is such a strong statement (this IS-A that) that it should only be used in building type hierarchies and not for overriding use-specific behaviors and notifications.

    Chances are, if a class needs to be inherited to be used, it most likely is not designed as well as it could be in today's modern programming languages. So let's look at the other tools available to us for getting notified instead. Here's a few other choices to consider:

    - Have the listener expose a MessageReceived event.
    - Have the listener accept a new IMessageHandler interface instance.
    - Have the listener accept an Action<Message> delegate.
    Really, all of these are different forms of delegation. Now, .NET events are a bit heavier than the other types of delegates in terms of run-time execution, but they are a great way to allow others using your class to subscribe to your events:

        // using event - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber)
            {
                _subscriber = subscriber;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // user will subscribe to this to process their messages
            public event Action<Message> MessageReceived;

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null && MessageReceived != null)
                    {
                        MessageReceived(msg);
                    }
                }
            }
        }

    Note, now we can seal the class to avoid changes and the user just needs to provide a message handling method:

        theListener.MessageReceived += CustomReceiveMethod;

    However, personally I don't think events hold up as well in this case because events are largely optional. To me, what is the point of a listener if you create one with no event listeners? So in my mind, use events when handling the notification is optional.

    So how about delegation via interface? I personally like this method quite a bit. Basically what it does is similar to the inheritance method mentioned first, but better, because it makes it easy to split the part of the class that doesn't change (the base listener behavior) from the part that does change (the user-specified action after receiving a message). So assuming we had an interface like:

        public interface IMessageHandler
        {
            void OnMessageReceived(Message receivedMessage);
        }

    Our listener would look like this:

        // using delegation via interface - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private IMessageHandler _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, IMessageHandler handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler.OnMessageReceived(msg);
                    }
                }
            }
        }

    And they would call it by creating a class that implements IMessageHandler and passing that instance into the constructor of the listener. I like that this alleviates the issues of inheritance and essentially forces you to provide a handler (as opposed to events) on construction. Well, this is good, but personally I think we could go one step further. While I like this better than events or inheritance, it still forces you to implement a specific method name. What if that name collides?
    Furthermore, if you have lots of these you end up either with large classes inheriting multiple interfaces to implement one method, or lots of small classes. Also, if you had one class that wanted to manage messages from two different subscribers differently, it wouldn't be able to, because the interface can't be overloaded.

    This brings me to using delegates directly. In general, every time I think about creating an interface for something, and that interface contains only one method, I start thinking a delegate is a better approach. Now, that said, delegates don't accomplish everything an interface can. Obviously having the interface allows you to refer to the classes that implement the interface, which can be very handy. In this case, though, really all you want is a method to handle the messages. So let's look at a method delegate:

        // using delegation via delegate - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private Action<Message> _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, Action<Message> handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler(msg);
                    }
                }
            }
        }

    Here the MessageListener now takes an Action<Message>. For those of you unfamiliar with the pre-defined delegate types in .NET, that is a method with the signature: void SomeMethodName(Message). The great thing about delegates is they give you a lot of power. You could create an anonymous delegate, a lambda, or specify any other method as long as it satisfies the Action<Message> signature. This way, you don't need to define an arbitrary helper class or name the method a specific thing.

    Incidentally, we could combine both the interface and delegate approaches to allow maximum flexibility. Doing this, the user could either pass in a delegate, or specify a delegate interface:

        // using delegation - give users choice of interface or delegate
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private Action<Message> _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, Action<Message> handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // passes the interface method as a delegate using a method group
            public MessageListener(ISubscriber subscriber, IMessageHandler handler)
                : this(subscriber, handler.OnMessageReceived)
            {
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler(msg);
                    }
                }
            }
        }

    This is the method I tend to prefer because it allows the user of the class to choose which method works best for them.
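    As a quick usage sketch (mine, not from the article; the subscriber construction is elided just as in the samples above), the delegate-based constructor lets the caller wire up a handler inline with a lambda:

        ISubscriber subscriber = ...; // elided, as in the samples above
        var listener = new MessageListener(subscriber, msg => Console.WriteLine("Received: " + msg));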
You may be curious about the actual performance of these different methods. Here is the output from a simple test harness:

1: Enter iterations:
2: 1000000
3: 
4: Inheritance took 4 ms.
5: Events took 7 ms.
6: Interface delegation took 4 ms.
7: Lambda delegate took 5 ms.

Before you get too caught up in the numbers, however, keep in mind that this is the total time over 1,000,000 iterations (a sketch of a comparable timing harness appears after the guidelines below). Since they all come in under 10 ms, each call costs only a fraction of a microsecond, so any of them is a fine choice performance-wise. As such, the choice really boils down to what you're trying to do. Here are my guidelines:

Inheritance should be used only when defining a collection of related types with implementation-specific behaviors; it should not be used as a hook for users to add their own functionality.
Events should be used when subscription is optional or multi-cast is desired.
Interface delegation should be used when you wish to refer to implementing classes by the interface type, or when the type requires several methods to be implemented.
Delegate method delegation should be used when you only need to provide one method and do not need to refer to implementers by the interface name.
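The post doesn't include the harness that produced those numbers. A minimal sketch of how such a timing loop might look, using Stopwatch (the original harness may well differ), is:

using System;
using System.Diagnostics;

public static class DispatchTimer
{
    public static void Main()
    {
        Console.WriteLine("Enter iterations:");
        int iterations = int.Parse(Console.ReadLine());

        // stand-in for the message handler; each strategy being compared
        // would swap in inheritance, an event, an interface call, or a lambda
        Action<int> handler = x => { };

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            handler(i); // one dispatch per iteration
        }
        sw.Stop();

        Console.WriteLine("Lambda delegate took " + sw.ElapsedMilliseconds + " ms.");
    }
}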

    Read the article

  • Howto fix "[Errno 13] Permission denied" in mailman mailing lists

    - by Michael
    After migrating domains from one Plesk server to another, I get several of these mails every day (the target mailbox does not exist, so they come back to me as undeliverable bounces):

    Return-Path: <[email protected]>
    Received: (qmail 26460 invoked by uid 38); 26 May 2012 12:00:02 +0200
    Date: 26 May 2012 12:00:02 +0200
    Message-ID: <20120526100002.xyzxx.qmail@lvpsxxx-xx-xx-xx.dedicated.hosteurope.de>
    From: [email protected] (Cron Daemon)
    To: [email protected]
    Subject: Cron <list@lvpsxxx-xx-xx-xx> [ -x /usr/lib/mailman/cron/senddigests ] && /usr/lib/mailman/cron/senddigests
    Content-Type: text/plain; charset=ANSI_X3.4-1968
    X-Cron-Env: <SHELL=/bin/sh>
    X-Cron-Env: <HOME=/var/list>
    X-Cron-Env: <PATH=/usr/bin:/bin>
    X-Cron-Env: <LOGNAME=list>
    List: xyzxyz: problem processing /var/lib/mailman/lists/xyzxyz/digest.mbox: [Errno 13] Permission denied: '/var/lib/mailman/archives/private/xyzxyz'

    I tried to fix the permissions myself, but the problem still exists.

    Read the article

  • BizTalk Testing Series - The xpath Function

    - by Michael Stephenson
    Background

    While the xpath function in a BizTalk orchestration is a very powerful feature, I have often come across situations where someone has hard-coded an xpath expression in an orchestration. If you have read some of my previous posts about testing, I've tried to get across a general theme of test-driven or test-assisted development approaches, where the underlying principle is that you're building your solution up from small, well-tested units, so that the resulting solution is usually quite robust. You will find more bugs within your unit tests and fewer outside of your team.

    The thing I don't like about the xpath function's usual usage is when you come across an orchestration which has something like the below snippet in an expression or assign shape:

    string result = xpath(myMessage,"string(//Order/OrderItem/ProductName)");

    My main issue with this is that the xpath statement is hard-coded in the orchestration, and you don't really know it works until you are running the orchestration. Some of the problems you end up with are:

    You waste time with lengthy debugging of the orchestration when your statement isn't working
    You might not know the function isn't working quite as expected, because the testable unit around it is big
    You are much more open to regression issues if your schema changes

    Approach to Testing

    The technique I usually follow is to hold the xpath statement as a constant in a helper class, or to format a constant with a helper function to get the actual xpath statement. It is then used by the orchestration like follows:

    string result = xpath(myMessage, MyHelperClass.ProductNameXPathStatement);

    Because the xpath statement is available outside of the orchestration, it now becomes testable in its own right. This means:

    I can test it in its own right
    I'm less likely to waste time tracking down problems caused by an error in the statement
    I can reduce the risk of regression issues

    I'm now able to implement some testing around my xpath statements, which usually goes something like the following:

    The test will use a sample xml file
    The sample will be validated against the schema
    The test will execute the xpath statement and then check the results are as expected

    Walk-through

    BizTalk uses the XPathNavigator internally behind the xpath function to implement the queries, usually via the navigator's Select or Evaluate functions. In the sample (link at bottom) I have a small solution which contains a schema from which I have generated a sample instance. I then use this instance as the basis for my tests. The sample contains the helper class in which I've encapsulated my xpath expressions, along with some helper functions which format the expression in the case of a repeating node, where you would want to inject an index into the xpath query. I have then created a test class with functions that execute queries against my sample xml file; in it there are a couple of helper functions which execute the xpath expressions in a similar way to BizTalk (you could have a proper helper class to do this if you wanted). In the BizTalk expression editor I can then use these functions alongside the xpath function. A sketch of the shape this code might take follows.
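    The screenshots from the original post aren't reproduced here; the following is a minimal sketch of the shape the helper class and a test might take (the names and sample values are illustrative, not taken from the downloadable sample):

    using System;
    using System.Xml.XPath;

    // xpath statements held outside the orchestration so they are testable
    public static class MyHelperClass
    {
        public const string ProductNameXPathStatement =
            "string(//Order/OrderItem/ProductName)";

        // for repeating nodes, inject an index into the query
        public static string GetProductNameXPath(int index)
        {
            return string.Format("string(//Order/OrderItem[{0}]/ProductName)", index);
        }
    }

    public class XPathStatementTests
    {
        // executes the statement the same way BizTalk does internally,
        // via XPathNavigator.Evaluate
        public void ProductNameStatementReturnsExpectedValue()
        {
            var document = new XPathDocument("SampleOrder.xml"); // instance generated from the schema
            XPathNavigator navigator = document.CreateNavigator();

            var result = (string)navigator.Evaluate(MyHelperClass.ProductNameXPathStatement);

            if (result != "Widget")
                throw new Exception("Unexpected product name: " + result);
        }
    }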
Conclusion

I hope you can see that with very little effort you can make your life much easier by testing xpath statements outside of an orchestration, rather than hard-coding them directly into the orchestration. This can also save you a lot of pain longer term, because your build should break if your schema changes unexpectedly, causing these xpath tests to fail; tests around the whole orchestration will be more difficult to troubleshoot when you try to work out the cause of the problem.

Sample Link

The sample is available from the following link: http://code.msdn.microsoft.com/testbtsxpathfunction

Other Tools

On the subject of using the xpath function, if you don't already use it, the below tool is very useful for creating your xpath statements (thanks BizBert): http://www.bizbert.com/bizbert/2007/11/30/XPath+The+Hidden+Language+Of+BizTalk.aspx

    Read the article

  • Anomaly with bash PS1 definition

    - by Michael Wiles
    My root and admin users both have the same .bashrc file. The prompt section of the .bashrc is the following:

    if [ "$color_prompt" = yes ]; then
        PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
    else
        PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
    fi
    unset color_prompt force_color_prompt

    # If this is an xterm set the title to user@host:dir
    case "$TERM" in
    xterm*|rxvt*)
        PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
        ;;
    *)
        ;;
    esac

    But the problem is that the admin user and the root user get different prompts. admin's prompt is admin@hostname:~$ and root's prompt is root@hostname:/home# — so it seems root is using the "xterm" version and admin is not. Why does the same .bashrc produce this difference in prompts, and how do I get the admin user to also use the xterm version? How would I test that condition? If I run echo $TERM as the admin user I get xterm, so as far as I can tell it should be using the xterm version for the admin user.

    Read the article

  • Camera lookAt target changes when rotating parent node

    - by Michael IV
    I have the following issue. I have a camera with a lookAt method which works fine. I have a parent node to which I parent the camera. If I rotate the parent node while keeping the camera's lookAt on the target, the camera's lookAt direction changes too. That is not what I want to achieve. I need it to work like in Adobe AE when you parent a camera to a null object: when the null object is rotated, the camera starts orbiting around the target while still looking at the target. What I currently do is multiply the parent's model matrix with the camera's model matrix, which is calculated from the lookAt() method. I am sure I need to decompose (or recompose) one of the matrices before multiplying them. The parent model or the camera model? Can anyone here show the right way of doing it?

    UPDATE: The parent is just a node. The child is the camera. The parented camera in After Effects works like this: if you rotate the parent node while the camera looks at the target, the camera actually starts orbiting around the target based on the parent rotation. In my case the parent rotation also changes the camera's lookAt direction, which is NOT what I want. Hope it is clear now.
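    The poster's engine and math library aren't specified; as a hedged sketch of the After Effects-style behavior, using System.Numerics as a stand-in, one approach is to rotate the camera's offset from the target by the parent's rotation and then rebuild the view matrix with lookAt, so the view direction is re-derived from the new position rather than inherited from the parent's transform:

    using System.Numerics;

    public static class OrbitCamera
    {
        // AE-style orbit: the parent rotation moves the camera around the
        // target, and lookAt keeps it facing the target
        public static Matrix4x4 View(Vector3 cameraPos, Vector3 target, Quaternion parentRotation)
        {
            // rotate the target->camera offset by the parent's rotation
            Vector3 offset = cameraPos - target;
            Vector3 orbitedPos = target + Vector3.Transform(offset, parentRotation);

            // recompute lookAt from the orbited position; no decomposition
            // of the concatenated matrices is needed this way
            return Matrix4x4.CreateLookAt(orbitedPos, target, Vector3.UnitY);
        }
    }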

    Read the article

  • How can we stop GitHub from emailing too many people too much? [migrated]

    - by Michael Bishop
    I recently joined a research team that uses R and Git/GitHub. The team includes 4 full-time R programmers and 10 social scientists who only run simple analyses. I was told by one of the more experienced programmers on the project that they haven't found a way to use many of GitHub's tools for collaboration (bug reports, to-do lists, code comments, etc.) because they generate emails to everyone who is a contributor to the repo every time. This is incredibly puzzling to me, so I'd love to hear from someone that there are ways to adjust the email settings. I'd expect there would be multiple ways, so that individuals could opt-in or opt-out of certain emails, and also so contributors could explicitly choose whether certain people get certain emails or not. Is it possible to adjust these settings?

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspx

This week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites to handle. One of the biggest is that although the support team all have MSDN accounts, the local desktops they work on are not ideal for running most of the labs, as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area:

MSDN software can now be used on Azure VMs
You don't pay for Azure VMs when they are no longer used

Since the support team only have limited experience of Windows Azure and the organisation also has an Enterprise Agreement, we decided it would be best value for money to spin up a training lab in a subscription on the EA; we can then turn the machines off when we are done, and spin them back up when the users need to do additional lab work once the training course is completed.

In order to achieve this I wanted to create a PowerShell script which would set up my training lab. The aim was to create 18 VMs based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below.

The Start & Variables

The below text will set up the PowerShell environment and some variables which I will use elsewhere in the script. It will also import the Azure PowerShell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure & PowerShell, so already know how to do this.

Set-ExecutionPolicy Unrestricted
cls
$startTime = get-date
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"

# Azure Publisher Settings
$azurePublisherSettings = '<Your settings file>.publishsettings'

# Subscription Details
$subscriptionName = "<Your subscription name>"
$defaultStorageAccount = "<Your default storage account>"

# Affinity Group Details
$affinityGroup = '<Your affinity group>'
$dataCenter = 'West Europe' # From Get-AzureLocation

# VM Details
$baseVMName = 'TRN'
$adminUserName = '<Your admin username>'
$password = '<Your admin password>'
$size = 'Medium'
$vmTemplate = '<The name of your VM template image>'
$rdpFilePath = '<File path to save RDP files to>'
$machineSettingsPath = '<File path to save machine info to>'
Functions

In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM, and it will do the following:

If the VM already exists it will be deleted
Create the cloud service
Create the VM from the template I have created
Add an endpoint so we can RDP to them all over the same port
Download the RDP file so there is a shortcut the trainees can easily access the machine via
Write settings for the machine to a log file

function CreateVM($machineNo)
{
    # Specify a name for the new VM
    $machineName = "$baseVMName-$machineNo"
    Write-Host "Creating VM: $machineName"

    # Get the Azure VM Image
    $myImage = Get-AzureVMImage $vmTemplate

    # The service name must be known before we can check whether the VM exists
    $serviceName = "bupa-azure-train-$machineName"

    # If the VM already exists delete and re-create it
    $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName
    if($existingVm -ne $null)
    {
        Write-Host "VM already exists so deleting it"
        Remove-AzureVM -Name $machineName -ServiceName $serviceName
    }

    "Creating Service"
    Remove-AzureService -Force -ServiceName $serviceName
    New-AzureService -Location $dataCenter -ServiceName $serviceName

    Write-Host "Creating VM: $machineName"
    New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password

    Write-Host "Updating the RDP endpoint for $machineName"
    Get-AzureVM -name $machineName -ServiceName $serviceName `
        | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 `
        | Update-AzureVM

    Write-Host "Get the RDP File for machine $machineName"
    $machineRDPFilePath = "$rdpFilePath\$machineName.rdp"
    Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"

    WriteMachineSettings "$machineName" "$serviceName"
}

The DeleteMachineSettings function is used to delete the log file before we start re-running the process.

function DeleteMachineSettings()
{
    Write-Host "Deleting the machine settings output file"
    [System.IO.File]::Delete("$machineSettingsPath");
}

The WriteMachineSettings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VMs to our infrastructure team so they can configure access to them.

function WriteMachineSettings([string]$vmName, [string]$vmServiceName)
{
    Write-Host "Writing to the machine settings output file"

    $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName
    $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP

    $sb = new-object System.Text.StringBuilder
    $sb.Append("Service Name: ");
    $sb.Append($vm.ServiceName);
    $sb.Append(", ");
    $sb.Append("VM: ");
    $sb.Append($vm.Name);
    $sb.Append(", ");
    $sb.Append("RDP Public Port: ");
    $sb.Append($vmEndpoint.Port);
    $sb.Append(", ");
    $sb.Append("Public DNS: ");
    $sb.Append($vmEndpoint.Vip);
    $sb.AppendLine("");
    [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());
}
# end functions

Rest of Script

The rest of the script simply orchestrates the actions we want to happen.
It will load the publisher settings, select the Azure subscription, and then loop around the CreateVM function to create 16 VMs:

Import-AzurePublishSettingsFile $azurePublisherSettings
Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount
Select-AzureSubscription -SubscriptionName $subscriptionName

DeleteMachineSettings

"Starting creating Bupa International Azure Training Lab"
$numberOfVMs = 16

for ($index=1; $index -le $numberOfVMs; $index++)
{
    $vmNo = "$index"
    CreateVM($vmNo);
}

"Finished creating Bupa International Azure Training Lab"
# Give it a Minute
Start-Sleep -s 60

$endTime = get-date
"Script run time " + ($endTime - $startTime)

Conclusion

As you can see there is nothing too fancy about this script, but in our case of creating a small, isolated training lab which is not connected to our corporate network, we can easily use it to provision the lab. I'm sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note: there are some soft limits in Azure on the number of cores and services your subscription can use, and you may need to contact the Azure support team to increase them.

In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time-consuming for our ops team. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the cost of this test lab to be very small, especially considering we have EA pricing. As a ballpark, I think my 18-VM training lab environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of creating a single VM on premise.
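As a rough sanity check of that ballpark (the actual EA rates aren't given in the post): $80 per day spread across 18 VMs running around the clock works out to about $80 / 18 / 24 ≈ $0.19 per VM-hour, which seems plausible for a Medium instance of that era — and since the VMs are deallocated outside training hours, the real bill should come in lower still.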

    Read the article

  • Do programmers at non-software companies need the same things as at software companies?

    - by Michael
    There is a lot of evidence that things like private offices, multiple screens, administrative rights on your own computer, and being allowed whatever software you want are great for productivity while developing. However, the studies I've seen tend toward companies that sell software, where keeping the programmers productive is paramount to the company's profitability. At companies that produce software simply to support their primary function, programming is merely a support role. Do the same rules apply at a company that only uses the software it produces to support its business, where a lot of a programmer's work is maintenance?

    Read the article

  • C#/.NET Little Wonders – Cross Calling Constructors

    - by James Michael Hare
    Just a small post today; it's the final iteration before our release and things are crazy here! This is another little tidbit that I love using, and it should be fairly common knowledge, yet I've noticed many times that less experienced developers tend to have redundant constructor code when they overload their constructors.

The Problem – repetitive code is less maintainable

Let's say you were designing a messaging system, and you want to create a class to represent the properties for a Receiver, so perhaps you design a ReceiverProperties class to represent this collection of properties. Perhaps you decide to make ReceiverProperties immutable, and so you have several constructors that you can use for alternative construction:

1: // Constructs a set of receiver properties.
2: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
3: {
4: ReceiverType = receiverType;
5: Source = source;
6: IsDurable = isDurable;
7: IsBuffered = isBuffered;
8: }
9: 
10: // Constructs a set of receiver properties with buffering on by default.
11: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
12: {
13: ReceiverType = receiverType;
14: Source = source;
15: IsDurable = isDurable;
16: IsBuffered = true;
17: }
18: 
19: // Constructs a set of receiver properties with buffering on and durability off.
20: public ReceiverProperties(ReceiverType receiverType, string source)
21: {
22: ReceiverType = receiverType;
23: Source = source;
24: IsDurable = false;
25: IsBuffered = true;
26: }

Note: keep in mind this is just a simple example for illustration; in some cases default parameters can also help clean this up, but they have issues of their own.

While strictly speaking there is nothing wrong with this code, logically it suffers from maintainability flaws. Consider what happens if you add a new property to the class: you have to remember to guarantee that it is set appropriately in every constructor. This can cause subtle bugs, and it becomes even uglier when the constructors do more complex logic or error handling, or when there are numerous potential overloads (especially if you can't easily see them all in one screen's height).

The Solution – cross-calling constructors

I'd wager nearly everyone knows how to call their base class's constructor, but you can also cross-call to one of the constructors in the same class by using the this keyword in the same way you use base to call a base constructor.

1: // Constructs a set of receiver properties.
2: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
3: {
4: ReceiverType = receiverType;
5: Source = source;
6: IsDurable = isDurable;
7: IsBuffered = isBuffered;
8: }
9: 
10: // Constructs a set of receiver properties with buffering on by default.
11: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
12: : this(receiverType, source, isDurable, true)
13: {
14: }
15: 
16: // Constructs a set of receiver properties with buffering on and durability off.
17: public ReceiverProperties(ReceiverType receiverType, string source)
18: : this(receiverType, source, false, true)
19: {
20: }

Notice, there is much less code. In addition, the code you have contains no repetitive logic. You can define the main constructor that takes all arguments, and the remaining constructors simply cross-call it, passing in the defaults.
Yes, in some cases default parameters can ease some of this for you, but default parameters only work for compile-time constants (null, string and number literals). For example, if you were creating a TradingDataAdapter that relied on an implementation of ITradingDao — the data access object used to retrieve records from the database — you might want two constructors: one that takes an ITradingDao reference, and a default constructor which constructs a specific ITradingDao for ease of use:

1: public TradingDataAdapter(ITradingDao dao)
2: {
3: _tradingDao = dao;
4: 
5: // other constructor logic
6: }
7: 
8: public TradingDataAdapter()
9: {
10: _tradingDao = new SqlTradingDao();
11: 
12: // same constructor logic as above
13: }

As you can see, this isn't something we can solve with a default parameter, but we can with cross-calling constructors:

1: public TradingDataAdapter(ITradingDao dao)
2: {
3: _tradingDao = dao;
4: 
5: // other constructor logic
6: }
7: 
8: public TradingDataAdapter()
9: : this(new SqlTradingDao())
10: {
11: }

So in cases like this, where you have constructors with defaults that are not compile-time constants, default parameters can't help you, and cross-calling constructors is one of your best options.

Summary

When you have just one constructor doing the job of initializing the class, you can consolidate all your logic and error handling in one place, thus ensuring that your behavior will be consistent across the constructor calls. This makes the code more maintainable and even easier to read. There will be some cases where cross-calling constructors may be sub-optimal or not possible (if, for example, the overloaded constructors take completely different types and are not just "defaulting" behaviors). You can also use default parameters, of course, but default parameter behavior in a class hierarchy can be problematic (default values are not inherited and in fact can differ; see the sketch below), so sometimes multiple constructors are actually preferable. Regardless of why you may need multiple constructors, consider cross-calling where you can to reduce redundant logic and clean up the code.

Technorati Tags: C#,.NET,Little Wonders
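As a footnote to the hierarchy caveat above, here is a small sketch of the gotcha (the types are illustrative, not from the post): a default value is baked into the call site from the compile-time type, so an override's differing default silently wins or loses depending on the reference type used:

using System;

public class BaseWriter
{
    public virtual void Write(string text, bool newline = true)
    {
        Console.Write(text + (newline ? Environment.NewLine : string.Empty));
    }
}

public class QuietWriter : BaseWriter
{
    // the default is NOT inherited - an override may declare a different one
    public override void Write(string text, bool newline = false)
    {
        base.Write(text, newline);
    }
}

public static class DefaultsDemo
{
    public static void Main()
    {
        var writer = new QuietWriter();

        writer.Write("a");               // compile-time type QuietWriter: newline = false
        ((BaseWriter)writer).Write("b"); // same object, but compile-time type
                                         // BaseWriter: newline = true
    }
}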

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >