Search Results

Search found 2692 results on 108 pages for 'michael leach'.

Page 11 of 108

  • Ray Wang: Why engagement matters in an era of customer experience

    - by Michael Snow
    Why engagement matters in an era of customer experience, by R "Ray" Wang, Principal Analyst & CEO, Constellation Research. Mobile enterprise, social business, cloud computing, advanced analytics, and unified communications are converging. Armed with the art of the possible, innovators are seeking to apply disruptive consumer technologies to enterprise class uses — call it the consumerization of IT in the enterprise. The likely results include new methods of furthering relationships, crafting longer term engagement, and creating transformational business models. It's part of a shift from transactional systems to engagement systems. These transactional systems have been around since the 1950s. You know them as ERP, finance and accounting systems, or even payroll. These systems are designed for massive computational scale; users find them rigid and techie. Meanwhile, we've moved to new engagement systems such as Facebook and Twitter in the consumer world. The rich usability and intuitive design reflect how users want to work — and now users are coming to expect the same paradigms and designs in their enterprise world. ~~~ Ray is a prolific contributor to his own blog as well as others. For a sneak peek at Ray's thoughts on engagement, take a look at this quick teaser on Avoiding Social Media Fatigue Through Engagement. Or perhaps you might agree with Ray on Dealing With The Real Problem In Social Business Adoption – The People! Check out Ray's post on the Harvard Business Review Blog to get his perspective on "How to Engage Your Customers and Employees." For a daily dose of Ray, follow him on Twitter: @rwang0. But MOST IMPORTANTLY.... Don't miss the opportunity to join leading industry analyst R "Ray" Wang of Constellation Research in the latest webcast of the Oracle Social Business Thought Leaders Series as he explains how to apply the 9 C's of Engagement for both your customers and employees.

    Read the article

  • Who should ‘own’ the Enterprise Architecture?

    - by Michael Glas
    I recently had a discussion around who should own an organization’s Enterprise Architecture. It was spawned by an article titled “Busting CIO Myths” in CIO magazine1 where the author interviewed Jeanne Ross, director of MIT's Center for Information Systems Research and co-author of books on enterprise architecture, governance and IT value. In the article Jeanne states that companies need to acknowledge that "architecture says everything about how the company is going to function, operate, and grow; the only person who can own that is the CEO". "If the CEO doesn't accept that role, there really can be no architecture." The first question that came up when talking about ownership was whether you are talking about a person, role, or organization (there are pros and cons to each, but in general, I like to assign accountability to as few people as possible). After much thought and discussion, I came to the conclusion that we were answering the wrong question. Instead of talking about ownership we were talking about responsibility and accountability, and the answer varies depending on the particular role of the organization’s Enterprise Architecture and the activities of the enterprise architect(s). Instead of looking at just who owns the architecture, think about what the person/role/organization should do. This is one possible scenario (thanks to Bob Covington): The CEO should own the Enterprise Strategy which guides the business architecture. The Business units should own the business processes and information which guide the business, application and information architectures. The CIO should own the technology, IT Governance and the management of the application and information architectures/implementations. The EA Governance Team owns the EA process. If EA is done well, the governance team consists of both IT and the business. While there are many more roles and responsibilities than listed here, it starts to provide a clearer understanding of ‘ownership’. Now back to Jeanne’s statement that the CEO should own the architecture. If you agree with the statement about what the architecture is (and I do agree), then ultimately the CEO does need to own it. However, what we ended up with was not really ownership, but more statements around roles and responsibilities tied to aspects of the enterprise architecture. You can debate the semantics of ownership vs. responsibility and accountability, but in the end the important thing is to come to a clearer understanding that is easily communicated (and hopefully measured) around the question “Who owns the Enterprise Architecture”. The next logical step . . . create a RACI matrix that details the findings . . . but that is a step that each organization needs to do on its own as it will vary based on current EA maturity, company culture, and a variety of other factors. Who ‘owns’ the Enterprise Architecture in your organization?
    1 CIO Magazine Article (Busting CIO Myths): http://www.cio.com/article/704943/Busting_CIO_Myths

    Read the article

  • C#/.NET Little Wonders: A Redux

    - by James Michael Hare
    I gave my Little Wonders presentation to the Topeka Dot Net Users' Group today, so re-posting the links to all the previous posts for them. The Presentation: C#/.NET Little Wonders: A Presentation The Original Trilogy: C#/.NET Five Little Wonders (part 1) C#/.NET Five More Little Wonders (part 2) C#/.NET Five Final Little Wonders (part 3) The Subsequent Sequels: C#/.NET Little Wonders: ToDictionary() and ToList() C#/.NET Little Wonders: DateTime is Packed With Goodies C#/.NET Little Wonders: Fun With Enum Methods C#/.NET Little Wonders: Cross-Calling Constructors C#/.NET Little Wonders: Constraining Generics With Where Clause C#/.NET Little Wonders: Comparer<T>.Default C#/.NET Little Wonders: The Useful (But Overlooked) Sets The Concurrent Wonders: C#/.NET Little Wonders: The Concurrent Collections (1 of 3) - ConcurrentQueue and ConcurrentStack C#/.NET Little Wonders: The Concurrent Collections (2 of 3) - ConcurrentDictionary Tweet   Technorati Tags: .NET,C#,Little Wonders

    Read the article

  • C#/.NET Little Wonders: Of LINQ and Lambdas - A Presentation

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Today I’m giving a brief beginner’s guide to LINQ and Lambdas at the St. Louis .NET User’s Group, so I thought I’d post the presentation here as well.  I updated the presentation a bit as well as added some notes on the query syntax.  Enjoy! The C#/.NET Fundamentals: Of Lambdas and LINQ Presentation   Technorati Tags: C#, CSharp, .NET, Little Wonders, LINQ, Lambdas
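
    As a small companion to the notes on query syntax mentioned above (not part of the original post), here is a minimal C# sketch showing the same filter written twice: once with lambdas passed to the LINQ extension methods, and once in query syntax, which the compiler translates into the same calls. The names are illustrative only.

        using System;
        using System.Linq;

        public static class LinqLambdaSketch
        {
            public static void Main()
            {
                var words = new[] { "alpha", "beta", "gamma", "delta", "epsilon" };

                // Method syntax: lambdas passed directly to Where/OrderBy/Select.
                var longWordsMethod = words
                    .Where(w => w.Length > 4)
                    .OrderBy(w => w)
                    .Select(w => w.ToUpper());

                // Query syntax: compiled down to the same extension-method calls.
                var longWordsQuery =
                    from w in words
                    where w.Length > 4
                    orderby w
                    select w.ToUpper();

                Console.WriteLine(string.Join(", ", longWordsMethod));
                Console.WriteLine(string.Join(", ", longWordsQuery));
            }
        }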

    Read the article

  • Scale plugin keeps forgetting hot corner settings on restart

    - by Michael Butler
    I'm using Ubuntu 12.04 with Unity, which I suppose uses Compiz as well. I have Compiz Settings Manager, and make the top left and bottom left corners of my screen activate the "Scale" (like Exposé) function to scale and show all windows. The problem is that when I restart the computer, the hot corners no longer do anything. I have to go back into compiz settings manager, delete the hot corner option, and then set it again. Something seems to be overriding or deleting the compiz hot corner setting on restart. Update: Sometimes, the setting loses its footing even while the computer is running. I haven't figured out yet what triggers it.

    Read the article

  • How to implement a 2d collision detection for Android

    - by Michael Seun Araromi
    I am making a 2d space shooter using opengl ES. Can someone please show me how to implement a collision detection between the enemy ship and player ship. The code for the two classes are below: Player Ship Class: package com.proandroidgames; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.FloatBuffer; import javax.microedition.khronos.opengles.GL10; public class SSGoodGuy { public boolean isDestroyed = false; private int damage = 0; private FloatBuffer vertexBuffer; private FloatBuffer textureBuffer; private ByteBuffer indexBuffer; private float vertices[] = { 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, }; private float texture[] = { 0.0f, 0.0f, 0.25f, 0.0f, 0.25f, 0.25f, 0.0f, 0.25f, }; private byte indices[] = { 0, 1, 2, 0, 2, 3, }; public void applyDamage(){ damage++; if (damage == SSEngine.PLAYER_SHIELDS){ isDestroyed = true; } } public SSGoodGuy() { ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuf.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(texture.length * 4); byteBuf.order(ByteOrder.nativeOrder()); textureBuffer = byteBuf.asFloatBuffer(); textureBuffer.put(texture); textureBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); } public void draw(GL10 gl, int[] spriteSheet) { gl.glBindTexture(GL10.GL_TEXTURE_2D, spriteSheet[0]); gl.glFrontFace(GL10.GL_CCW); gl.glEnable(GL10.GL_CULL_FACE); gl.glCullFace(GL10.GL_BACK); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glDisable(GL10.GL_CULL_FACE); } } Enemy Ship Class: package com.proandroidgames; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.FloatBuffer; import java.util.Random; import javax.microedition.khronos.opengles.GL10; public class SSEnemy { public float posY = 0f; public float posX = 0f; public float posT = 0f; public float incrementXToTarget = 0f; public float incrementYToTarget = 0f; public int attackDirection = 0; public boolean isDestroyed = false; private int damage = 0; public int enemyType = 0; public boolean isLockedOn = false; public float lockOnPosX = 0f; public float lockOnPosY = 0f; private Random randomPos = new Random(); private FloatBuffer vertexBuffer; private FloatBuffer textureBuffer; private ByteBuffer indexBuffer; private float vertices[] = { 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, }; private float texture[] = { 0.0f, 0.0f, 0.25f, 0.0f, 0.25f, 0.25f, 0.0f, 0.25f, }; private byte indices[] = { 0, 1, 2, 0, 2, 3, }; public void applyDamage() { damage++; switch (enemyType) { case SSEngine.TYPE_INTERCEPTOR: if (damage == SSEngine.INTERCEPTOR_SHIELDS) { isDestroyed = true; } break; case SSEngine.TYPE_SCOUT: if (damage == SSEngine.SCOUT_SHIELDS) { isDestroyed = true; } break; case SSEngine.TYPE_WARSHIP: if (damage == SSEngine.WARSHIP_SHIELDS) { isDestroyed = true; } break; } } public SSEnemy(int type, int direction) { enemyType = type; attackDirection = direction; posY = (randomPos.nextFloat() * 4) + 4; switch (attackDirection) { case 
SSEngine.ATTACK_LEFT: posX = 0; break; case SSEngine.ATTACK_RANDOM: posX = randomPos.nextFloat() * 3; break; case SSEngine.ATTACK_RIGHT: posX = 3; break; } posT = SSEngine.SCOUT_SPEED; ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuf.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(texture.length * 4); byteBuf.order(ByteOrder.nativeOrder()); textureBuffer = byteBuf.asFloatBuffer(); textureBuffer.put(texture); textureBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); } public float getNextScoutX() { if (attackDirection == SSEngine.ATTACK_LEFT) { return (float) ((SSEngine.BEZIER_X_4 * (posT * posT * posT)) + (SSEngine.BEZIER_X_3 * 3 * (posT * posT) * (1 - posT)) + (SSEngine.BEZIER_X_2 * 3 * posT * ((1 - posT) * (1 - posT))) + (SSEngine.BEZIER_X_1 * ((1 - posT) * (1 - posT) * (1 - posT)))); } else { return (float) ((SSEngine.BEZIER_X_1 * (posT * posT * posT)) + (SSEngine.BEZIER_X_2 * 3 * (posT * posT) * (1 - posT)) + (SSEngine.BEZIER_X_3 * 3 * posT * ((1 - posT) * (1 - posT))) + (SSEngine.BEZIER_X_4 * ((1 - posT) * (1 - posT) * (1 - posT)))); } } public float getNextScoutY() { return (float) ((SSEngine.BEZIER_Y_1 * (posT * posT * posT)) + (SSEngine.BEZIER_Y_2 * 3 * (posT * posT) * (1 - posT)) + (SSEngine.BEZIER_Y_3 * 3 * posT * ((1 - posT) * (1 - posT))) + (SSEngine.BEZIER_Y_4 * ((1 - posT) * (1 - posT) * (1 - posT)))); } public void draw(GL10 gl, int[] spriteSheet) { gl.glBindTexture(GL10.GL_TEXTURE_2D, spriteSheet[0]); gl.glFrontFace(GL10.GL_CCW); gl.glEnable(GL10.GL_CULL_FACE); gl.glCullFace(GL10.GL_BACK); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glDisable(GL10.GL_CULL_FACE); } }
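
    The excerpt above ends before any collision code. Since both ships are drawn as 1 x 1 unit quads positioned by fields such as posX/posY (SSEnemy exposes them; SSGoodGuy in the excerpt does not, so a player position is assumed here), the usual first step is an axis-aligned bounding-box (AABB) overlap test run once per frame. The sketch below is written in C# to match the other examples on this page; the arithmetic ports line for line to the Java classes above.

        // Minimal AABB overlap test -- a sketch, not the asker's engine code.
        public struct Box
        {
            public float X, Y, Width, Height;

            public bool Intersects(Box other)
            {
                // Overlap exists only if the boxes overlap on both the X and Y axes.
                return X < other.X + other.Width &&
                       other.X < X + Width &&
                       Y < other.Y + other.Height &&
                       other.Y < Y + Height;
            }
        }

        // Usage sketch (field names other than SSEnemy.posX/posY are assumptions):
        // var playerBox = new Box { X = playerX,    Y = playerY,    Width = 1f, Height = 1f };
        // var enemyBox  = new Box { X = enemy.posX, Y = enemy.posY, Width = 1f, Height = 1f };
        // if (playerBox.Intersects(enemyBox)) { enemy.applyDamage(); player.applyDamage(); }

    For roughly round sprites, a circle-versus-circle test (compare the squared distance between centres against the squared sum of radii) often feels more accurate than boxes, and spatial partitioning only becomes worth adding once many enemies are active per frame.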

    Read the article

  • Silverlight Cream for June 12, 2010 -- #880

    - by Dave Campbell
    In this Issue: Michael Washington, Diego Poza, Viktor Larsson, Brian Noyes, Charles Petzold, Laurent Bugnion, Anjaiah Keesari, David Anson, and Jeremy Likness. From SilverlightCream.com: My MEF Rant Read Michael Washington's discussion about MEF from someone that's got some experience, but not enough to remember the pain points... how it works, and what he'd like to see. Prism 4: What’s new and what’s next Diego Poza Why Office Hub is important for WP7 Viktor Larsson has another WP7 post up and he's talking about the Office Hub ... good description and maybe the first I've seen on the Office Hub. WCF RIA Services Part 1: Getting Started Brian Noyes has part 1 of a 10-part tutorial series on WCF RIA Services up at SilverlightShow. This first is the intro, but it's a good one. CompositionTarget.Rendering and RenderEventArgs Charles Petzold talks about CompositionTarget.Rendering and using it for calculating time span ... and it works in WPF and WP7 too... cool example from his WPF book, and all the code. Two small issues with Windows Phone 7 ApplicationBar buttons (and workaround) Laurent Bugnion has a post up from earlier this week that I missed describing problems with the WP7 ApplicationBar ... oh, and a workaround for it :) Animation in Silverlight Anjaiah Keesari has a really extensive post up on Silverlight animation, and this is an all-XAML thing... so buckle up we're going old-school :) Two fixes for my Silverlight SplitButton/MenuButton implementation - and true WPF support David Anson revisits and revises his SplitButton code based on a couple problem reports he received. Source for the button and the test project is included. Tips and Tricks for INotifyPropertyChanged Jeremy Likness is discussing INotifyPropertyChanged and describes an extension method. He does bring up a problem associated with this, so check that out. He finishes the post off with a discussion of "Observable Enumerables" Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • C#: Optional Parameters - Pros and Pitfalls

    - by James Michael Hare
    When Microsoft rolled out Visual Studio 2010 with C# 4, I was very excited to learn how I could apply all the new features and enhancements to help make me and my team more productive developers. Default parameters have been around forever in C++, and were intentionally omitted in Java in favor of using overloading to satisfy that need as it was though that having too many default parameters could introduce code safety issues.  To some extent I can understand that move, as I’ve been bitten by default parameter pitfalls before, but at the same time I feel like Java threw out the baby with the bathwater in that move and I’m glad to see C# now has them. This post briefly discusses the pros and pitfalls of using default parameters.  I’m avoiding saying cons, because I really don’t believe using default parameters is a negative thing, I just think there are things you must watch for and guard against to avoid abuses that can cause code safety issues. Pro: Default Parameters Can Simplify Code Let’s start out with positives.  Consider how much cleaner it is to reduce all the overloads in methods or constructors that simply exist to give the semblance of optional parameters.  For example, we could have a Message class defined which allows for all possible initializations of a Message: 1: public class Message 2: { 3: // can either cascade these like this or duplicate the defaults (which can introduce risk) 4: public Message() 5: : this(string.Empty) 6: { 7: } 8:  9: public Message(string text) 10: : this(text, null) 11: { 12: } 13:  14: public Message(string text, IDictionary<string, string> properties) 15: : this(text, properties, -1) 16: { 17: } 18:  19: public Message(string text, IDictionary<string, string> properties, long timeToLive) 20: { 21: // ... 22: } 23: }   Now consider the same code with default parameters: 1: public class Message 2: { 3: // can either cascade these like this or duplicate the defaults (which can introduce risk) 4: public Message(string text = "", IDictionary<string, string> properties = null, long timeToLive = -1) 5: { 6: // ... 7: } 8: }   Much more clean and concise and no repetitive coding!  In addition, in the past if you wanted to be able to cleanly supply timeToLive and accept the default on text and properties above, you would need to either create another overload, or pass in the defaults explicitly.  With named parameters, though, we can do this easily: 1: var msg = new Message(timeToLive: 100);   Pro: Named Parameters can Improve Readability I must say one of my favorite things with the default parameters addition in C# is the named parameters.  It lets code be a lot easier to understand visually with no comments.  Think how many times you’ve run across a TimeSpan declaration with 4 arguments and wondered if they were passing in days/hours/minutes/seconds or hours/minutes/seconds/milliseconds.  A novice running through your code may wonder what it is.  Named arguments can help resolve the visual ambiguity: 1: // is this days/hours/minutes/seconds (no) or hours/minutes/seconds/milliseconds (yes) 2: var ts = new TimeSpan(1, 2, 3, 4); 3:  4: // this however is visually very explicit 5: var ts = new TimeSpan(days: 1, hours: 2, minutes: 3, seconds: 4);   Or think of the times you’ve run across something passing a Boolean literal and wondered what it was: 1: // what is false here? 2: var sub = CreateSubscriber(hostname, port, false); 3:  4: // aha! 
Much more visibly clear 5: var sub = CreateSubscriber(hostname, port, isBuffered: false);   Pitfall: Don't Insert new Default Parameters In Between Existing Defaults Now let’s consider two potential pitfalls.  The first is really an abuse.  It’s not really a fault of the default parameters themselves, but a fault in the use of them.  Let’s consider that Message constructor again with defaults.  Let’s say you want to add a messagePriority to the message and you think this is more important than a timeToLive value, so you decide to put messagePriority before it in the defaults; this gives you: 1: public class Message 2: { 3: public Message(string text = "", IDictionary<string, string> properties = null, int priority = 5, long timeToLive = -1) 4: { 5: // ... 6: } 7: }   Oh boy have we set ourselves up for failure!  Why?  Think of all the code out there that could already be using the library that already specified the timeToLive, such as this possible call: 1: var msg = new Message(“An error occurred”, myProperties, 1000);   Before this specified a message with a TTL of 1000; now it specifies a message with a priority of 1000 and a time to live of -1 (infinite).  All of this with NO compiler errors or warnings. So the rule to take away is if you are adding new default parameters to a method that’s currently in use, make sure you add them to the end of the list or create a brand new method or overload. Pitfall: Beware of Default Parameters in Inheritance and Interface Implementation Now, the second potential pitfall has to do with inheritance and interface implementation.  I’ll illustrate with a puzzle: 1: public interface ITag 2: { 3: void WriteTag(string tagName = "ITag"); 4: } 5:  6: public class BaseTag : ITag 7: { 8: public virtual void WriteTag(string tagName = "BaseTag") { Console.WriteLine(tagName); } 9: } 10:  11: public class SubTag : BaseTag 12: { 13: public override void WriteTag(string tagName = "SubTag") { Console.WriteLine(tagName); } 14: } 15:  16: public static class Program 17: { 18: public static void Main() 19: { 20: SubTag subTag = new SubTag(); 21: BaseTag subByBaseTag = subTag; 22: ITag subByInterfaceTag = subTag; 23:  24: // what happens here? 25: subTag.WriteTag(); 26: subByBaseTag.WriteTag(); 27: subByInterfaceTag.WriteTag(); 28: } 29: }   What happens?  Well, even though the object in each case is SubTag whose tag is “SubTag”, you will get: 1: SubTag 2: BaseTag 3: ITag   Why?  Because default parameters are resolved at compile time, not runtime!  This means that the default does not belong to the object being called, but to the reference type it’s being called through.  Since the SubTag instance is being called through an ITag reference, it will use the default specified in ITag. So the moral of the story here is to be very careful how you specify defaults in interfaces or inheritance hierarchies.  I would suggest avoiding repeating them, and instead concentrating on the layer of classes or interfaces you most likely expect your caller to be calling from. For example, if you have a messaging factory that returns an IMessage which can be either an MsmqMessage or JmsMessage, it only makes sense to put the defaults at the IMessage level since chances are your user will be using the interface only. So let’s sum up.  In general, I really love default and named parameters in C# 4.0.  I think they’re a great tool to help make your code easier to read and maintain when used correctly. 
On the plus side, default parameters: Reduce redundant overloading for the sake of providing optional calling structures. Improve readability by being able to name an ambiguous argument. But remember to make sure you: Do not insert new default parameters in the middle of an existing set of default parameters; this may cause unpredictable behavior that may not necessarily throw a syntax error – add to the end of the list or create a new method. Be extremely careful how you use default parameters in inheritance hierarchies and interfaces – choose the most appropriate level to add the defaults based on expected usage. Technorati Tags: C#,.NET,Software,Default Parameters

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
    Specifically:
    - Maintainability — the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere.
    - Replaceability — the tight coupling also meant that replacing one component wouldn’t be straightforward, especially if it wasn’t on a similar Microsoft stack. We’d like to be able to replace different parts without having to modify the existing codebase extensively.
    - Reusability — we’d like to be able to combine the different pieces of the system in different ways for different sites.
    - Repeatable deployments — rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily whether on production, staging servers, test servers or your own local machine.
    - Testability — if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development.
    In the next part, I’ll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.
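
    As a rough illustration of the nginx delegation described above, a reverse-proxy configuration fragment along these lines routes blog URLs to WordPress and everything else to the legacy IIS applications; the upstream addresses, server name, and /blogs/ path here are assumptions, not Simple-Talk's actual settings.

        # Sketch only: addresses, names, and paths are assumptions.
        http {
            upstream legacy_iis { server 10.0.0.10:80; }   # existing .NET applications on IIS
            upstream wordpress  { server 10.0.0.20:80; }   # WordPress on its own PHP stack

            server {
                listen 80;
                server_name www.simple-talk.com;

                # Blog traffic is delegated to WordPress...
                location /blogs/ {
                    proxy_pass http://wordpress;
                    proxy_set_header Host $host;
                }

                # ...everything else continues to be served by IIS as before.
                location / {
                    proxy_pass http://legacy_iis;
                    proxy_set_header Host $host;
                }
            }
        }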

    Read the article

  • 3 Key Trends For Mobile Commerce – Location, Location, Location

    - by Michael Hylton
    This past weekend I was at a major bookstore chain and looking for a particular book.  Rather than ask the clerk, I went to my smartphone and went online to find the book title, author, and competing price.  I know I’m not alone in this effort and more and more individuals (and businesses) will use the power of mobility to tilt the scale in their favor. Armed with a mobile device – smartphone or tablet – folks will use them to research, compare, and ultimately purchase.  A recent PayPal survey found that 46% of respondents plan to use a mobile device this holiday season to make a purchase.   An astounding 27% of consumers in an e-tailing group survey commissioned by Oracle, use a tablet device daily or several times a week to research products and services. Beyond researching or making purchases, 35% of consumers use their smartphone to receive offers and coupons, and 32% access coupons and redeem them at their local retail store.  And with GPS capabilities in smartphones and tablet (and with user’s approval), retailers will start pushing coupons and offers directly to phone users based on their proximity to their store (or their competitors). Security is one concern that both shoppers, companies and phone manufacturers will have to deal with in the coming years.  In that same Oracle-sponsored e-tailing group consumer survey, 32% of consumers were concerned about giving their credit card information via a smartphone. You can gain further insight into the mind of today’s consumer by reading the e-tailing group white paper, titled “the connected consumer”.

    Read the article

  • Silverlight Cream for April 06, 2010 -- #832

    - by Dave Campbell
    In this Issue: Alex van Beek, Gill Cleeren, SilverlightShow, Michael Sync, Rénald Nollet, Charles Petzold, The-Oliver, and Max Paulousky. Shoutouts: Denislav Savkov of SilverlightShow ported his Slider control to WP7: Windows Phone 7 Series Sample Image Viewer SilverlightShow interview: The Silverlight Tour - what, where and why. Interview with one of the Tour organizers Laurent Duveau From SilverlightCream.com: Silverlight 4: using the VisualStateManager for state animations with MVVM Alex van Beek has an approach to resolving the MVVM issue of Animations without keeping a reference to the ViewModel by way of VisualStateManager Leveraging the ASP.NET Membership in Silverlight Gill Cleeren's post at SilverlightShow talks about using ASP.NET authentication inside your Silverlight making membership not only something you know and understand, but now the transition from your ASP.NET apps to Silverlight is simple as well. Windows Phone 7 Series RSS reader SilverlightShow has a demo RSS Reader for WP7 up... no text, but the code is there. Step by Step Tutorial : Installing Multi-Touch Simulator for Silverlight Phone 7 Michael Sync actually has a multi-touch simulator working for WP7 ... it involves a bunch of moving parts and one of the requirements is Windows 7, but if that works for you, this will too :) Element Property Binding Improvements in Blend 4 Beta and Visual Studio 2010 RC Rénald Nollet demonstrates new Blend and VS2010 features that assists you in Element Property binding with real examples. Projection Transforms Sans Math Charles Petzold is writing about Silverlight and 3D and specifically in this post 3D without math which becomes PlaneProjection... good long tutorial on it and code to back it all up. Daily Demo: Silverlight Install out of browser & Check for Update Behaviors The-Oliver has a post up about OOB and checking for updates using behaviors with only a slight change to your xaml... cool! Wizards. Prototype of sketching Wizard for WPF – 2 Max Paulousky has part 2 of his tutorial on a sketchflow Wizard for WPF ... yes WPF, but check it out... source too. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Pella Increases Online Appointment Scheduling and Rapidly Personalizes and Updates Marketing Initiatives

    - by Michael Snow
    Originally posted on Oracle Customers page.Oracle Customer: Pella CorporationLocation:  Pella, IowaIndustry: Industrial Manufacturing Employees:  7,100 Pella Corporation is an innovative leader in creating a better view for homes and businesses by designing, testing, manufacturing, and installing quality windows and doors for new construction, remodeling, and replacement applications. A family-owned company, Pella has an 88-year history of innovation and, today, is the second-largest manufacturer in the country of windows and doors, including patio, entry, and storm doors. The company has 10 manufacturing facilities in United States and window and door showrooms across the United States and Canada. In-home consultations are an important part of Pella’s sales process. Several years ago, the company launched an online appointment scheduling tool to improve customer convenience. While the functionality worked well, the company wanted to increase online conversion rates and decrease the number of incomplete, online appointment schedules. It also wanted to give its business analysts and other line-of-business personnel the ability to update the scheduling tool and interface quickly, without needing IT team intervention and recoding, to better capitalize on opportunities and personalize the interface for specific markets. Pella also looked to reduce IT complexity by selecting a system that integrated easily with its Oracle E-Business Suite Release 12.1 enterprise applications.Pella, which has a large Oracle footprint, selected Oracle WebCenter Sites as the foundation for its new, real-time appointment scheduling application. It used the solution to re-engineer the scheduling process and the information required to set up an appointment. Just a few months after launch, it is seeing improvement in the number of appointments booked online and experiencing fewer abandoned appointments during the scheduling process. As important, Pella can now quickly and easily make changes to images, video, and content displayed on the scheduling tool interface, delivering greater business agility. Previously, such changes required a developer and weeks of coding and testing. Today, a member of Pella’s business analyst team can complete the changes in hours. This capability enables Pella to personalize the Web experience for customers. For example, it can display different products or images for clients in different regions.The solution is also highly scalable. Pella is using Oracle WebCenter Sites for appointment scheduling now and plans to migrate Pella.com, its configurator tool, and dealer microsites onto the platform. Further, Pella plans to leverage the solution to optimize mobile devices. “Moving ahead, we expect to extensively leverage Oracle WebCenter Sites to gain greater flexibility in updating the Web experience, thanks to the ability to make updates quickly without developer resources. Segmentation and targeting capabilities will allow us to create a more personalized experience across both traditional and mobile platforms,” said Teri Lancaster, IT manager, customer experience applications, Pella Corporation. A word from Pella Corporation "Oracle WebCenter Sites?from the start?delivered important benefits. We’ve redesigned the online scheduling process and are seeing more potential customers completing consultation bookings online. 
More important, the solution opens a world of other possibilities as we plan to migrate Pella.com and our dealer microsites to the platform, and leverage it to optimize the Web experience for our mobile devices.” – Teri Lancaster, IT Manager, Customer Experience Applications, Pella Corporation Oracle Product and Services Oracle WebCenter Sites Why Oracle Pella has a long-standing relationship with Oracle. “We look to Oracle first for a solution. Our Oracle account team came to us with several solutions, and Oracle WebCenter Sites delivered the scalability, ease of use, and flexibility that we required for the appointment scheduling initiative and other Web projects on the horizon, including migrating Pella.com and optimizing our site for mobile platforms,” said Teri Lancaster, IT manager, customer experience applications, Pella Corporation. Implementation Process The Pella implementation team, working with Oracle partner Element Solutions, LLC, integrated the appointment setting application with Pella.com as well as the company’s Oracle E-Business Suite customer relationship management applications. Using Oracle WebCenter Sites’ development tools and subversion capabilities to develop the application, the Element Solutions and Pella teams could work remotely and collaboratively, accelerating deployment. Pella went live with the new scheduling tool in just six months. Partner Oracle Partner: Element Solutions, LLC Element Solutions was instrumental at every major stage of the project, including design creation and approval, development, training, and rollout. “Element Solutions was a vital partner for our Oracle WebCenter Sites initiative. The team provided guidance, and more important, critical knowledge transfer at every stage, which equipped us to get the most out of this powerful and versatile solution. We were definitely collaboration partners,” Lancaster said. Resources Pella Corporation Upgrades Enterprise Applications to Continue to Improve Manufacturing Efficiency Thousands of Customers Successfully and Smoothly Upgrade to Oracle E-Business Suite 12.1 for New Functionality, Lower Operating Costs and Improved Shared Operations Managing the Virtual World

    Read the article

  • Who is Jeremiah Owyang?

    - by Michael Hylton
    Q: What’s your current role and what career path brought you here? J.O.: I'm currently a partner and one of the founding team members at Altimeter Group.  I'm currently the Research Director, as well as wear the hat of Industry Analyst. Prior to joining Altimeter, I was an Industry Analyst at Forrester covering Social Computing, and before that, deployed and managed the social media program at Hitachi Data Systems in Santa Clara.  Around that time, I started a career blog called Web Strategy which focused on how companies were using the web to connect with customers --and never looked back. Q: As an industry analyst, what are you focused on these days? J.O.: There are three trends that I'm focused my research on at this time:  1) The Dynamic Customer Journey:  Individuals (both b2c and b2b) are given so many options in their sources of data, channels to choose from and screens to consume them on that we've found that at each given touchpoint there are 75 potential permutations.  Companies that can map this, then deliver information to individuals when they need it will have a competitive advantage and we want to find out who's doing this.  2) One of the sub themes that supports this trend is Social Performance.  Yesterday's social web was disparate engagement of humans, but the next phase will be data driven, and soon new technologies will emerge to help all those that are consuming, publishing, and engaging on the social web to be more efficient with their time through forms of automation.  As you might expect, this comes with upsides and downsides.  3) The Sentient World is our research theme that looks out the furthest as the world around us (even inanimate objects) become 'self aware' and are able to talk back to us via digital devices and beyond.  Big data, internet of things, mobile devices will all be this next set. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? J.O.: The lines between our work and personal lives are dissolving, and this leads to a greater upside of being always connected and have deeper relationships with those that are not.  It also means a downside of society expectations that we're always around and available for colleagues, customers, and beyond.  In the future, a balance will be sought as we seek to achieve the goals of family, friends, work, and our own personal desires.  All of this is being ironically written at 430 am on a Sunday am.  Q: How can people keep up with what you’re working on? J.O.: A great question, thanks.  There are a few sources of information to find out, I'll lead with the first which is my blog at web-strategist.com.  A few times a week I'll publish my industry insights (hires, trends, forces, funding, M&A, business needs) as well as on twitter where I'll point to all the news that's fit to print @jowyang.  As my research reports go live (we publish them for all to read --called Open Research-- at no cost) they'll emerge on my blog, or checkout the research tab to find out more now.  http://www.web-strategist.com/blog/research/ Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon?  J.O.: Absolutely! This coming Thursday, September 13th, I’m doing a webcast with Oracle on “Managing Social Relationships for the Enterprise”. 
This is going to be a great discussion with Reggie Bradford, Senior Vice President of Product Development at Oracle and Christian Finn, Senior Director of Product Management for Oracle WebCenter. I’m looking forward to a great discussion around all those issues that so many companies are struggling with these days as they realize how much social media is impacting their business. It’s changing the way your customers and employees interact with your brand. Today it’s no longer a matter of when to become a social-enabled enterprise, but how to become a successful one. Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peak of what to expect on Thursday’s webcast?  J.O.: Below is a 15 minute video which encapsulates Altimeter’s themes on the Dynamic Customer Journey and the Sentient World. I’m really proud to have taken an active role in the first ever LeWeb outside of Paris. This one, which was featured in downtown London across the street from Westminster Abbey was sold out. If you’ve not heard of LeWeb, this is a global Internet conference hosted by Loic and Geraldine Le Meur, a power couple that stem from Paris but are also living in Silicon Valley, this is one of my favorite conferences to connect with brands, technology innovators, investors and friends. Altimeter was able to play a minor role in suggesting the theme for the event “Faster Than Real Time” which stems off previous LeWebs that focused on the “Real time web”. In this radical state, companies are able to anticipate the needs of their customers by using data, technology, and devices and deliver meaningful experiences before customers even know they need it. I explore two of three of Altimeter’s research themes, the Dynamic Customer Journey, and the Sentient World in my speech, but due to time, did not focus on Adaptive Organization.

    Read the article

  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, we began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  The last post discussed the ConcurrentDictionary<T>.  Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux. Recap As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace.  Of these, the final concurrent collection we will examine is the ConcurrentBag, along with a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentBag<T> – Thread-safe unordered collection. Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries.  Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order.  This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it's possible new items could be consumed before older items given the right circumstances for a period of time. So why would you ever want a container that can be unfair?  Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit.  Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don't care what item it is or how long it's been waiting. The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread.  This means that adding to or removing from a thread-local list requires very little synchronization.  The problem comes in when a thread goes to consume an item but its local list is empty.  In this case the bag performs "work-stealing", where it will rob an item from another thread that has items in its list.  This requires a higher level of synchronization, which adds a bit of overhead to the take operation. So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less than ideal in situations where some threads are dedicated producers and the other threads are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items. 
Like the other concurrent collections, there are some curiosities to keep in mind: IsEmpty(), Count, ToArray(), and GetEnumerator() lock the collection Each of these needs to take a snapshot of the whole bag to determine if it is empty, thus they tend to be more expensive and cause Add() and Take() operations to block. ToArray() and GetEnumerator() are static snapshots Because they are based on a snapshot, they will not show subsequent updates after the snapshot. Add() is lightweight Since it adds to the thread-local list, there is very little overhead on Add. TryTake() is lightweight if items are in the thread-local list As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack() and ConcurrentQueue(); however, if the local thread list is empty, it must steal work from another thread, which is more expensive. Remember, a bag is not ideal for all situations; it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete.  The main point is that the bag works best when each thread both takes and adds items. For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bear with me while I use this contrived example for simplicity. So let's say we have a work function that will take a Tuple out of a bag; this Tuple will contain two ints.  The first int is the original number, and the second int is the current exponent for that number.  So we could load our bag with the initial values (let's say we want to know the highest power of each of 2, 3, 5, and 7 under 100): 1: var bag = new ConcurrentBag<Tuple<int, int>> 2: { 3: Tuple.Create(2, 1), 4: Tuple.Create(3, 1), 5: Tuple.Create(5, 1), 6: Tuple.Create(7, 1) 7: }; Then we can create a method that, given the bag, will take out an item and try the next power: 1: public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold) 2: { 3: Tuple<int,int> pair; 4:  5: // while there are items to take, this will prefer local first, then steal if no local 6: while (bag.TryTake(out pair)) 7: { 8: // look at next power 9: var result = Math.Pow(pair.Item1, pair.Item2 + 1); 10:  11: if (result < threshold) 12: { 13: // if smaller than threshold bump power by 1 14: bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1)); 15: } 16: else 17: { 18: // otherwise, we're done 19: Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.", 20: pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold); 21: } 22: } 23: } Now that we have this, we can load up this method as an Action into our Tasks and run it: 1: // create array of tasks, start all, wait for all 2: var tasks = new[] 3: { 4: new Task(() => FindHighestPowerUnder(bag, 100)), 5: new Task(() => FindHighestPowerUnder(bag, 100)), 6: }; 7:  8: Array.ForEach(tasks, t => t.Start()); 9:  10: Task.WaitAll(tasks); Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag as the thread-local lists will allow for quick processing.  
However, if you need ordering, or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test, as your mileage will vary depending on your situation! BlockingCollection<T> – A producers & consumers pattern collection The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don't specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don't want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don't remember the classical producers and consumers computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that "bin" where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of "bin", for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let's look at some of the methods of note in BlockingCollection: BoundedCapacity returns the capacity of the "bin" If the bin is unbounded, the capacity is int.MaxValue. Count returns an internally-kept count of items This makes it O(1), but if you modify the underlying collection directly (not recommended) it is unreliable. CompleteAdding() is used to cut off further adds This sets IsAddingCompleted and begins to wind down consumers once empty. IsAddingCompleted is true when producers are "done" Once you are done producing, you should complete the add process to alert consumers. IsCompleted is true when producers are "done" and the "bin" is empty Once you mark the producers done, and all items are removed, this will be true. Add() is a blocking add to the collection If the bin is full, it will wait until space frees up. Take() is a blocking remove from the collection If the bin is empty, it will wait until an item is produced or adding is completed. GetConsumingEnumerable() is used to iterate and consume items Unlike the standard enumerator, this one consumes the items instead of just iterating over them. TryAdd() attempts an add but does not block completely If adding would block, it returns false instead; you can specify a TimeSpan to wait before stopping. 
TryTake() attempts to take but does not block completely Like TryAdd(), if taking would block, returns false instead, can specify TimeSpan to wait. Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added.  This means that any attempts to TryAdd() or Add() after marked completed will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false. So let’s create a simple program to try this out.  Let’s say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded). 1: var bin = new BlockingCollection<int>(5); Now, we create a method to produce items: 1: public static void ProduceItems(BlockingCollection<int> bin, int numToProduce) 2: { 3: for (int i = 0; i < numToProduce; i++) 4: { 5: // try for 10 ms to add an item 6: while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10))) 7: { 8: Console.WriteLine("Bin is full, retrying..."); 9: } 10: } 11:  12: // once done producing, call CompleteAdding() 13: Console.WriteLine("Adding is completed."); 14: bin.CompleteAdding(); 15: } And one to consume them: 1: public static void ConsumeItems(BlockingCollection<int> bin) 2: { 3: // This will only be true if CompleteAdding() was called AND the bin is empty. 4: while (!bin.IsCompleted) 5: { 6: int item; 7:  8: if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10))) 9: { 10: Console.WriteLine("Bin is empty, retrying..."); 11: } 12: else 13: { 14: Console.WriteLine("Consuming item {0}.", item); 15: Thread.Sleep(TimeSpan.FromMilliseconds(20)); 16: } 17: } 18: } Then we can fire them off: 1: // create one producer and two consumers 2: var tasks = new[] 3: { 4: new Task(() => ProduceItems(bin, 20)), 5: new Task(() => ConsumeItems(bin)), 6: new Task(() => ConsumeItems(bin)), 7: }; 8:  9: Array.ForEach(tasks, t => t.Start()); 10:  11: Task.WaitAll(tasks); Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd(). 1: Consuming item 0. 2: Consuming item 1. 3: Bin is full, retrying... 4: Bin is full, retrying... 5: Consuming item 3. 6: Consuming item 2. 7: Bin is full, retrying... 8: Consuming item 4. 9: Consuming item 5. 10: Bin is full, retrying... 11: Consuming item 6. 12: Consuming item 7. 13: Bin is full, retrying... 14: Consuming item 8. 15: Consuming item 9. 16: Bin is full, retrying... 17: Consuming item 10. 18: Consuming item 11. 19: Bin is full, retrying... 20: Consuming item 12. 21: Consuming item 13. 22: Bin is full, retrying... 23: Bin is full, retrying... 24: Consuming item 14. 25: Adding is completed. 26: Consuming item 15. 27: Consuming item 16. 28: Consuming item 17. 29: Consuming item 19. 30: Consuming item 18. Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit. Summary The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and then steal if not.  
However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around all of the concurrent queue, stack, and bag that allows you to add producer and consumer semantics easily including waiting when the bin is full or empty. That’s the end of my dive into the concurrent collections.  I’d also strongly recommend, once again, you read this excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain using these collections judiciously (here). Tweet Technorati Tags: C#,.NET,Concurrent Collections,Little Wonders

    Read the article

  • C++ Little Wonders: The C++11 auto keyword redux

    - by James Michael Hare
    I’ve decided to create a sub-series of my Little Wonders posts to focus on C++.  Just like their C# counterparts, these posts will focus on those features of the C++ language that can help improve code by making it easier to write and maintain.  The index of the C# Little Wonders can be found here. This has been a busy week with a rollout of some new website features here at my work, so I don’t have a big post for this week.  But I wanted to write something up, and since lately I’ve been renewing my C++ skills in a separate project, it seemed like a good opportunity to start a C++ Little Wonders series.  Most of my development work still tends to focus on C#, but it was great to get back into the saddle and renew my C++ knowledge.  Today I’m going to focus on a new feature in C++11 (formerly known as C++0x), which is a major move forward in the C++ language standard.  While this small keyword can seem trivial, I feel it is a big step forward in improving readability in C++ programs. The auto keyword If you’ve worked with C++ for a long time, you probably have some passing familiarity with the old auto keyword as one of those rarely used C++ keywords that was almost never specified because it was the default. That is, in the code below (before C++11): 1: int foo() 2: { 3: // automatic variables (allocated and deallocated on stack) 4: int x; 5: auto int y; 6:  7: // static variables (retain their value across calls) 8: static int z; 9:  10: return 0; 11: } The variable x is assumed to be auto because that is the default, thus it is unnecessary to specify it explicitly as in the declaration of y below it.  Basically, an auto variable is one that is allocated and de-allocated on the stack automatically.  Contrast this to static variables, which are allocated statically and exist across the lifetime of the program. Because auto was so rarely (if ever) used since it is the norm, they decided to remove it for this purpose and give it new meaning in C++11.  The new meaning of auto: implicit typing Now, if your compiler supports C++11 (or at least a good subset of C++11) you can take advantage of type inference in C++.  For those of you from the C# world, this means that the auto keyword in C++ now behaves a lot like the var keyword in C#! For example, many of us have had to declare those massive type declarations for an iterator before.  Let’s say we have a std::map of std::string to int which will map names to ages: 1: std::map<std::string, int> myMap; And then let’s say we want to find the age of a given person: 1: // Egad that's a long type... 2: std::map<std::string, int>::const_iterator pos = myMap.find(targetName); Notice that big ugly type definition to declare variable pos?  Sure, we could shorten this by creating a typedef of our specific map type if we wanted, but now with the auto keyword there’s no need: 1: // much shorter! 2: auto pos = myMap.find(targetName); The auto keyword now tells the compiler to determine what type pos should be based on what it’s being assigned to.  This is not dynamic typing; it still determines the type as if it were explicitly declared, and once declared that type cannot be changed.  That is, this is invalid: 1: // x is type int 2: auto x = 42; 3:  4: // can't assign string to int 5: x = "Hello"; Once the compiler determines x is type int, it is exactly as if we had typed int x = 42; instead, so don’t confuse it with dynamic typing; it’s still very type-safe. 
An interesting feature of the auto keyword is that you can modify the inferred type: 1: // declare method that returns int* 2: int* GetPointer(); 3:  4: // p1 is int*, auto inferred type is int 5: auto *p1 = GetPointer(); 6:  7: // p2 is int*, auto inferred type is int* 8: auto p2 = GetPointer(); Notice in both of these cases, p1 and p2 are determined to be int*, but in each case the inferred type was different.  Because we declared p1 as auto *p1 and GetPointer() returns int*, it inferred that the type int was needed to complete the declaration.  In the second case, however, we declared p2 as auto p2, which means the inferred type was int*.  Ultimately, this makes p1 and p2 the same type, but which type is inferred makes a difference if you are chaining multiple inferred declarations together.  In these cases, the inferred type of each must match the first: 1: // Type inferred is int 2: // p1 is int* 3: // p2 is int 4: // p3 is int& 5: auto *p1 = GetPointer(), p2 = 42, &p3 = p2; Note that this works because the inferred type was int; if the inferred type were int* instead: 1: // syntax error, p1 was inferred to be int* so p2 and p3 don't make sense 2: auto p1 = GetPointer(), p2 = 42, &p3 = p2; You could also use const or static to modify the inferred type: 1: // inferred type is an int, theAnswer is a const int 2: const auto theAnswer = 42; 3:  4: // inferred type is double, Pi is a static double 5: static auto Pi = 3.1415927; Thus in the examples above it inferred the types int and double respectively, which were then modified to const and static. Summary The auto keyword has gotten new life in C++11 to allow you to infer the type of a variable from its initialization.  This simple little keyword can be used to cut down large declarations for complex types into a much more readable form, where appropriate.   Technorati Tags: C++, C++11, Little Wonders, auto

    Read the article

  • Silverlight Cream for April 24, 2010 -- #846

    - by Dave Campbell
    In this Issue: Michael Washington, Timmy Kokke, Pete Brown, Paul Yanez, Emil Stoychev, Jeremy Likness, and Pavan Podila. Shoutouts: If you've got some time to spend, the User Experience Kit is packed with info: User Experience Kit, and just plain fun to navigate ... thanks Scott Barnes for reminding me about it! Jesse Liberty is looking for some help organizing and cataloging posts for a new project he's got going: Help Wanted Emil Stoychev posted Slides and demos from my talk on Silverlight 4 From SilverlightCream.com: Silverlight 4 Drag and Drop File Manager Michael Washington has a post up about a Silverlight Drag and Drop File Manager in MVVM, but a secondary important point about the post is that he and Alan Beasley followed strict Designer/Developer rules on this... you recognized Alan's ListBox didn't you? Changing CSS with jQuery syntax in Silverlight using jLight Timmy Kokke is using jLight as introduced in a prior post to interact with the DOM from Silverlight. Essential Silverlight and WPF Skills: The UI Thread, Dispatchers, Background Workers and Async Network Programming Pete Brown has a great backrounder up for WPF and Silverlight devs on threading and networking, good comments too so far. Fluid layout and Fullscreen in Silverlight Paul Yanez has a quick post and demo up on forcing full-screen with a fluid layout, all code included -- and it doesn't take much Data Binding in Silverlight Emil Stoychev has a great long tutorial up on DataBinding in Silverlight ... he hits all the major points with text, samples, and code... definitely one to read! Yet Another MVVM Locator Pattern Another not-necessarily Silverlight post from Jeremy Likness -- but definitely a good one on MVVM and locator patterns. The SpiderWebControl for Silverlight Pavan Podila has a 'SpiderWebControl' for Silverlight 4 up... this is a great network graph control with any sort of feature I can think of... check out the demo, then grab the code... or the other way around, your choice :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • User Group Meeting Summary - April 2010

    - by Michael Stephenson
    Thanks to everyone who could make it to what turned out to be an excellent SBUG event.  First some thanks to:  Speakers: Anthony Ross and Elton Stoneman Host: The various people at Hitachi who helped to organise and arrange the venue.   Session 1 - Getting up and running with Windows Mobile and the Windows Azure Service Bus In this session Anthony discussed some considerations for using Windows Mobile and the Windows Azure Service Bus from a real-world project which Hitachi have been working on with EasyJet.  Anthony also walked through a simplified demo of the concepts which applied on the project.   In addition to the slides and demo it was also very interesting to talk with the guys involved on this project to hear about their real experiences developing with the Azure Service Bus and some of the limitations they have had to work around in Windows Mobile's ability to interact with the service bus.   On the back of this session we will look to do some further activities around this topic, and the guys offered to share their wish list of features for both Windows Mobile and Windows Azure which we will look to share for user group discussion.   Another interesting point was the cost aspect of using the ISB, which was very low.   Session 2 - The Enterprise Cache In the second session Elton used a few slides based around one of his customer scenarios where they are looking into the concept of an Enterprise Cache within the organisation.  Elton discussed this concept and also a CodePlex project he is putting together which allows you to take advantage of a cache with various providers such as Memcached, AppFabric Caching and NCache.   Following the presentation it was interesting to hear people's thoughts on various aspects such as the enterprise cache versus an out-of-process application cache.  Also there was interesting discussion around how people would like to search the cache in the future.   We will again look to put together some follow-up activity on this.   Meeting Summary Following the meeting all slide decks are saved in the SkyDrive location where we keep content from all meetings: http://cid-40015ea59a1307c8.skydrive.live.com/browse.aspx/.Public/SBUG/SBUG%20Meetings/2010%20April   Remember that the details of all previous events are on the following page: http://uksoabpm.org/Events.aspx   Competition We had three copies of the Windows Identity Foundation Patterns and Practices book that were raffled on the night; it would be great to hear any feedback on the book from those who won it.   Recording The user group meeting was recorded and we will look to make this available online sometime soon.   UG Business The following things were discussed as general UG topics:   We will change the name of the user group to the UK Connected Systems User Group so we are more in line with other user groups who cover similar topics, and we believe this will help us to attract more members.  The content or focus of the user group is not expected to change.   The next meeting is 26th May and can be registered at the following link: http://sbugmay2010.eventbrite.com/

    Read the article

  • Code style Tip: Case insensitive string comparison

    - by Michael Freidgeim
    Good:
    if (String.Compare(myString, ALL_TEXT, StringComparison.OrdinalIgnoreCase) == 0)
    {
        return true;
    }
    OK (not obvious what true means):
    if (String.Compare(myString, ALL_TEXT, true) == 0)
    {
        return true;
    }
    BAD (not null-safe):
    if (myString.ToLower() == ALL_TEXT.ToLower())
    {
        return true;
    }
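    A null-safe variant that avoids the "== 0" comparison altogether is the static String.Equals overload that takes a StringComparison. This is a minimal sketch of that alternative (assuming .NET 2.0 or later, where the overload exists); unlike the ToLower() approach it does not throw when either string is null:
    // Also good: null-safe and arguably more readable than comparing to 0
    if (String.Equals(myString, ALL_TEXT, StringComparison.OrdinalIgnoreCase))
    {
        return true;
    }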

    Read the article

  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again let's dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance. So probably every one of us has used an enumerated type at one time or another in a C# program.  The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values in the case of bit flags). But the power of enum types goes far beyond simple assignment and comparison; there are many methods in the Enum class (that all enum types "inherit" from) that can give you even more power when dealing with them. IsDefined() – check if a given value exists in the enum Are you reading a value for an enum from a data source, but are unsure if it is actually a valid value or not?  Casting won't tell you this, and Parse() isn't guaranteed to balk either if you give it an int or a combination of flags.  So what can we do? Let's assume we have a small enum like this for result codes we want to return back from our business logic layer: 1: public enum ResultCode 2: { 3: Success, 4: Warning, 5: Error 6: } In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two. So what happens if we have code like this, where perhaps we're getting the result code from another data source (could be a database, could be a web service, etc.)? 1: public ResultCode PerformAction() 2: { 3: // set up and call some method that returns an int. 4: int result = ResultCodeFromDataSource(); 5:  6: // this will succeed even if result is < 0 or > 2. 7: return (ResultCode) result; 8: } So what happens if result is –1 or 4?  Well, the cast does not fail, so what we end up with would be an instance of a ResultCode that would have a value that's outside of the bounds of the enum constants we defined. This means if you had a block of code like: 1: switch (result) 2: { 3: case ResultCode.Success: 4: // do success stuff 5: break; 6:  7: case ResultCode.Warning: 8: // do warning stuff 9: break; 10:  11: case ResultCode.Error: 12: // do error stuff 13: break; 14: } you would hit none of these blocks (which is a good argument for always having a default in a switch, by the way). So what can you do?  Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined.  1: public ResultCode PerformAction() 2: { 3: int result = ResultCodeFromDataSource(); 4:  5: if (!Enum.IsDefined(typeof(ResultCode), result)) 6: { 7: throw new InvalidOperationException("Enum out of range."); 8: } 9:  10: return (ResultCode) result; 11: } In fact, this is often recommended after you Parse() or cast a value to an enum, as there are ways for values to get past these methods that may not be defined. If you don't like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead that would allow you to call IsDefined() off any instance of the enum: 1: public static class EnumExtensions 2: { 3: // helper method that tells you if an enum value is defined for its enumeration 4: public static bool IsDefined(this Enum value) 5: { 6: return Enum.IsDefined(value.GetType(), value); 7: } 8: }   HasFlag() – an easier way to see if a bit (or bits) are set Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives.  
As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit flag enum values may be obvious to an advanced developer but cryptic to a novice developer. For example, let's say you have an enum for a messaging platform that contains bit flags: 1: // usually, we pluralize flags enum type names 2: [Flags] 3: public enum MessagingOptions 4: { 5: None = 0, 6: Buffered = 0x01, 7: Persistent = 0x02, 8: Durable = 0x04, 9: Broadcast = 0x08 10: } We can combine these bit flags using the bitwise OR operator (the '|' pipe character): 1: // combine bit flags using 2: var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast); Now, if we wanted to check the flags, we'd have to test them using the bitwise AND operator (the '&' character): 1: if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered) 2: { 3: // do code to set up buffering... 4: // ... 5: } While the '|' for combining flags is easy enough to read for advanced developers, the '&' test tends to be easy for novice developers to get wrong.  First of all you have to AND the flag combination with the value, and then typically you should test against the flag combination itself (and not just for a non-zero result)!  This is because the flag combination you are testing with may combine multiple bits, in which case if only one bit is set, the result will be non-zero but not necessarily all desired bits! Thank goodness, in .NET 4.0 they gave us the HasFlag() method.  This method can be called from an enum instance to test to see if a flag is set, and best of all you can avoid writing the bitwise logic yourself.  Not to mention it will be more readable to a novice developer as well: 1: if (options.HasFlag(MessagingOptions.Buffered)) 2: { 3: // do code to set up buffering... 4: // ... 5: } It is much more concise and unambiguous, thus increasing your maintainability and readability. It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don't allow you to specialize on Enum, which makes it a bit more difficult.  It can be done, but you have to do some conversions to numeric and then back to the enum, which makes it less of a payoff than having the HasFlag() method.  But if you want to create it for symmetry, it would look something like this: 1: public static T SetFlag<T>(this Enum value, T flags) 2: { 3: if (!value.GetType().IsEquivalentTo(typeof(T))) 4: { 5: throw new ArgumentException("Enum value and flags types don't match."); 6: } 7:  8: // yes this is ugly, but unfortunately we need to use an intermediate boxing cast 9: return (T)Enum.ToObject(typeof (T), Convert.ToUInt64(value) | Convert.ToUInt64(flags)); 10: } Note that since the enum types are value types, we need to assign the result to something (much like string.Trim()).  Also, you could chain several SetFlag() operations together or create one that takes a variable arg list if desired. Parse() and ToString() – transitioning from string to enum and back Sometimes, you may want to be able to parse an enum from a string or convert it to a string - Enum has methods built in to let you do this.  Now, many may already know this, but may not appreciate how much power is in these two methods. 
For example, if you want to parse a string as an enum, it’s easy and works just like you’d expect from the numeric types: 1: string optionsString = "Persistent"; 2:  3: // can use Enum.Parse, which throws if finds something it doesn't like... 4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result == MessagingOptions.Persistent) 7: { 8: Console.WriteLine("It worked!"); 9: } Note that Enum.Parse() will throw if it finds a value it doesn’t like.  But the values it likes are fairly flexible!  You can pass in a single value, or a comma separated list of values for flags and it will parse them all and set all bits: 1: // for string values, can have one, or comma separated. 2: string optionsString = "Persistent, Buffered"; 3:  4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked!"); 9: } Or you can parse in a string containing a number that represents a single value or combination of values to set: 1: // 3 is the combination of Buffered (0x01) and Persistent (0x02) 2: var optionsString = "3"; 3:  4: var result = (MessagingOptions) Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked again!"); 9: } And, if you really aren’t sure if the parse will work, and don’t want to handle an exception, you can use TryParse() instead: 1: string optionsString = "Persistent, Buffered"; 2: MessagingOptions result; 3:  4: // try parse returns true if successful, and takes an out parm for the result 5: if (Enum.TryParse(optionsString, out result)) 6: { 7: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 8: { 9: Console.WriteLine("It worked!"); 10: } 11: } So we covered parsing a string to an enum, what about reversing that and converting an enum to a string?  The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictate how they are written as a string?: 1: MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent; 2:  3: // general format, which is the default, 4: Console.WriteLine("Default : " + value); 5: Console.WriteLine("G (default): " + value.ToString("G")); 6:  7: // Flags format, even if type does not have Flags attribute. 8: Console.WriteLine("F (flags) : " + value.ToString("F")); 9:  10: // integer format, value as number. 11: Console.WriteLine("D (num) : " + value.ToString("D")); 12:  13: // hex format, value as hex 14: Console.WriteLine("X (hex) : " + value.ToString("X")); Which displays: 1: Default : Buffered, Persistent 2: G (default): Buffered, Persistent 3: F (flags) : Buffered, Persistent 4: D (num) : 3 5: X (hex) : 00000003 Now, you may not really see a difference here between G and F because I used a [Flags] enum, the difference is that the “F” option treats the enum as if it were flags even if the [Flags] attribute is not present.  Let’s take a non-flags enum like the ResultCode used earlier: 1: // yes, we can do this even if it is not [Flags] enum. 
2: ResultCode value = ResultCode.Warning | ResultCode.Error; And if we run that through the same formats again we get: 1: Default : 3 2: G (default): 3 3: F (flags) : Warning, Error 4: D (num) : 3 5: X (hex) : 00000003 Notice that since we had multiple values combined, but it was not a [Flags] marked enum, the G and default format gave us a number instead of a value name.  This is because the value was not a valid single-value constant of the enum.  However, using the F flags format string, it broke out the value into its component flags even though it wasn’t marked [Flags]. So, if you want to get an enum to display appropriately for whether or not it has the [Flags] attribute, use G which is the default.  If you always want it to attempt to break down the flags, use F.  For numeric output, obviously D or  X are the best choice depending on whether you want decimal or hex. Summary Hopefully, you learned a couple of new tricks with using the Enum class today!  I’ll add more little wonders as I think of them and thanks for all the invaluable input!   Technorati Tags: C#,.NET,Little Wonders,Enum,BlackRabbitCoder

    Read the article

  • No endpoint listening at.........

    - by Michael Stephenson
    I was having some very frustrating behaviour on our build server, and while I found a number of articles online with similar error messages, none of them helped me.  I thought I would just explain this here in case it helps me or anyone else in the future. The error message we were getting is: There was no endpoint listening at http://localhostStubs.ExternalApplication/SampleService.svc that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. Our scenario is as follows: We have a solution where a WCF service application hosting the WCF routing service is listening to the Windows Azure Service Bus Relay.  We have an acceptance test project in the solution which sends a message to the service bus, which is then received by the WCF routing service and routed to SampleService.svc, which is hosted in another IIS application on the same box.  A response is flowed back through to the test.  In the tests there are 5 scenarios simulating a successful message and various error conditions.  On my developer machine it was working absolutely fine every time, and a clean build on my developer machine worked fine.  On the build server, however, one or more of the tests would fail each time with the above error message.  There didn't seem to be any pattern to which test would fail. The solution was building on a Windows 2008 R2 machine with IIS 7 and AppFabric Server installed, with auto-start configured for the IIS application which would be listening to the service bus. After lots of searching online and looking at logs etc. it turned out to be a simple solution: just restart the WAS service (Windows Process Activation Service) and the services it advised you to restart with it.  Hope this helps someone else.
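    For reference, that restart can also be scripted rather than done through the Services console. The sketch below is an illustrative addition (not part of the original fix): it uses the standard System.ServiceProcess.ServiceController class, assumes the default service name "WAS", and assumes the account running it has rights to stop and start services.
    using System;
    using System.Collections.Generic;
    using System.ServiceProcess; // add a reference to System.ServiceProcess.dll

    class RestartWas
    {
        static void Main()
        {
            var was = new ServiceController("WAS");
            var stoppedDependents = new List<ServiceController>();

            // Services that depend on WAS (e.g. W3SVC) must be stopped before WAS itself.
            foreach (var dependent in was.DependentServices)
            {
                if (dependent.Status != ServiceControllerStatus.Stopped)
                {
                    dependent.Stop();
                    dependent.WaitForStatus(ServiceControllerStatus.Stopped);
                    stoppedDependents.Add(dependent);
                }
            }

            was.Stop();
            was.WaitForStatus(ServiceControllerStatus.Stopped);
            was.Start();
            was.WaitForStatus(ServiceControllerStatus.Running);

            // Bring back whatever was stopped alongside WAS (typically W3SVC).
            foreach (var dependent in stoppedDependents)
            {
                dependent.Start();
                dependent.WaitForStatus(ServiceControllerStatus.Running);
            }

            Console.WriteLine("WAS and its dependent services have been restarted.");
        }
    }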

    Read the article

  • How can I promote clean coding at my workplace?

    - by Michael
    I work with a lot of legacy Java and RPG code on an internal company application. As you might expect, a lot of the code is written in many different styles, and is often difficult to read because of poorly named variables, inconsistent formatting, and contradictory comments (if they're there at all). Also, a good amount of code is not robust. Many times code is pushed to production quickly by the more experienced programmers, while code by newer programmers is held back by "code reviews" that IMO are unsatisfactory. (They usually take the form of, "It works, must be ok," rather than a serious critique of the code.) We have a fair number of production issues, which I feel could be lessened by giving more thought to the original design and testing. I have been working for this company for about 4 months, and have been complimented on my coding style a couple of times. My manager is also a fan of cleaner coding than is the norm. Is it my place to try to push for better style and better defensive coding, or should I simply code in the best way I can, and hope that my example will help others see how cleaner, more robust code (as well as aggressive refactoring) will result in less debugging and change time?

    Read the article

  • Behaviour Driven Maturity Model

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/07/02/153326.aspx
    For anyone who is interested, I have written a small paper about the theory behind the BizTalk Maturity Assessment using a generic framework I have called the "Behaviour Driven Maturity Model", and then how it could be applied to the assessment of other subjects. The paper is on the following link:
    http://btsmaturity.blob.core.windows.net/behaviour-driven-model/Behaviour%20Based%20Maturity%20Model%20-%20Introduction.pdf
    If you would like to create a model for a different subject area based on the details of this paper then I would encourage this as much as possible; all I ask is the following:
    1. Let us know you're doing it so we can help tell people about each other's activities
    2. Make it free to the community
    3. Reference back to BizTalkMaturity.com as the source of your model

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >