Search Results

Search found 2672 results on 107 pages for 'michael ulmann'.

Page 10/107

  • Life Technologies: Making Life Easier to Manage

    - by Michael Snow
    When we’re thinking about customer engagement, we’re acutely aware of all the forces at play competing for our customer’s attention. Solutions that make life easier for our customers draw attention to themselves. We tend to engage more when there is a distinct benefit and we can take a deep breath and accept that there is hope in the world and everything isn’t designed to frustrate us and make our lives miserable. (sigh…) When products are designed to automate processes that were consuming hours of our time with no relief in sight, they deserve to be recognized. One of our recent Oracle Fusion Middleware Innovation Award Winners in the WebCenter category, Life Technologies, has recently posted a video promoting their “award winning” solution. The Oracle Innovation Awards are part of the overall Oracle Excellence awards given to customers for innovation with Oracle products. More info here.
    Their award nomination included this description: Life Technologies delivered the My Life Service Portal as part of a larger Digital Hub strategy. This Portal is the first of its kind in the biotechnology service providing industry. The Portal provides access to Life Technologies cloud based service monitoring system where all customer deployed instruments can be remotely monitored and proactively repaired. The portal provides alerts from these cloud based monitoring services directly to the customer and to Life Technologies Field Engineers. The Portal provides insight into the instruments and services customers purchased for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs. This portal not only provides benefits for Life Technologies internal sales and service teams but provides customers a central place to track all pertinent instrument information including:
      - instrument service history
      - instrument status and previous activities
      - instrument performance analytics
      - planned service visits
      - warranty/contract information
      - discussion forums
      - social networks for lab management and collaboration
      - alerts and notifications on all of the above
      - team scheduling for instrument usage
      - promote optional reagents required to keep instruments performing
    From their website: The Life Technologies Instruments & Services Portal Helps You Save Time and Gain Peace of Mind. Introducing the new, award-winning, free online tool that enables easier management of your instrument use and care, faster response to requests for service or service quotes, and instant sharing of key instrument and service information with your colleagues.
    Now – this unto itself is obviously beneficial for their customers who were previously burdened with having to do all of these tasks separately, manually and inconsistently by nature. Now – all in one place and free to their customers – a portal that ties it all together.
They now have built the platform to give their customers yet another reason to do business with them – Their headline on their product page says it all: “Life is now easier to manage - All your instrument use and care in one place – the no-cost, no-hassle Instruments and Services Portal.” Of course – it’s very convenient that the company name includes “Life” and now can also promote to their clients and prospects that doing business with them is easy and their sophisticated lab equipment is easy to manage. In an industry full of PhD’s – “Easy” isn’t usually the first word that comes to mind, but Life Technologies has now tied the word to their brand in a very eloquent way. Between our work lives and family or personal lives, getting any mono-focused minutes of dedicated attention has become such a rare occurrence in our current era of multi-tasking that those moments of focus are highly prized. So – when something is done really well – so well that it becomes captivating and urges sharing impulses – I take notice and dig deeper and most of the time I discover other gems not so hidden below the surface. And then I share with those I know would enjoy and understand. In the spirit of full disclosure, I must admit here that the first person I shared the videos below with was my daughter. She’s in her senior year of high school in the midst of her college search. She’s passionate about her academics and has already decided that she wants to study Neuroscience in college and like her mother will be in for the long haul to a PhD eventually. In a summer science program at Smith College 2 summers ago – she sent the family famous text to me – “I just dissected a sheep’s brain – wicked cool!” – This was followed by an equally memorable text this past summer in a research mentorship in Neuroscience at UConn – “Just sliced up some rat brain. Reminded me of a deli slicer at the supermarket… sorry I forgot to call last night…” So… needless to say – I knew I had an audience that would enjoy and understand these videos below and are now being shared among her science classmates and faculty. And evidently - so does Life Technologies! They’ve done a great job on these making them fun and something that will easily be shared among their customers social networks. They’ve created a neuro-archetypal character, “Ph.Diddy” and know that their world of clients in academics, research, and other institutions would understand and enjoy the “edutainment” value in this series of videos on their YouTube channel that pokes fun at the stereotypes while also promoting their products at the same time. They use their Facebook page for additional engagement with their clients and as another venue to promote these videos. Enjoy this one as well! More to be found here: http://www.youtube.com/lifetechnologies Stay tuned to this Oracle WebCenter blog channel. Tomorrow we'll be taking a look at another winner of the Innovation Awards, LADWP - helping to keep the citizens of Los Angeles engaged with their Water and Power provider.

    Read the article

  • C#/.NET Little Wonders: The EventHandler and EventHandler<TEventArgs> delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the last two weeks, we examined the Action family of delegates (and delegates in general), and the Func family of delegates and how they can be used to support generic, reusable algorithms and classes. So this week, we are going to look at a handy pair of delegates that can be used to eliminate the need for defining custom delegates when creating events: the EventHandler and EventHandler<TEventArgs> delegates. Events and delegates Before we begin, let’s quickly consider events in .NET.  According to the MSDN: An event in C# is a way for a class to provide notifications to clients of that class when some interesting thing happens to an object. So, basically, you can create an event in a type so that users of that type can subscribe to notifications of things of interest.  How is this different than some of the delegate programming that we talked about in the last two weeks?  Well, you can think of an event as a special access modifier on a delegate.  Some differences between the two are: Events are a special access case of delegates They behave much like delegates instances inside the type they are declared in, but outside of that type they can only be (un)subscribed to. Events can specify add/remove behavior explicitly If you want to do additional work when someone subscribes or unsubscribes to an event, you can specify the add and remove actions explicitly. Events have access modifiers, but these only specify the access level of those who can (un)subscribe A public event, for example, means anyone can (un)subscribe, but it does not mean that anyone can raise (invoke) the event directly.  Events can only be raised by the type that contains them In contrast, if a delegate is visible, it can be invoked outside of the object (not even in a sub-class!). Events tend to be for notifications only, and should be treated as optional Semantically speaking, events typically don’t perform work on the the class directly, but tend to just notify subscribers when something of note occurs. My basic rule-of-thumb is that if you are just wanting to notify any listeners (who may or may not care) that something has happened, use an event.  However, if you want the caller to provide some function to perform to direct the class about how it should perform work, make it a delegate. Declaring events using custom delegates To declare an event in a type, we simply use the event keyword and specify its delegate type.  For example, let’s say you wanted to create a new TimeOfDayTimer that triggers at a given time of the day (as opposed to on an interval).  We could write something like this: 1: public delegate void TimeOfDayHandler(object source, ElapsedEventArgs e); 2:  3: // A timer that will fire at time of day each day. 4: public class TimeOfDayTimer : IDisposable 5: { 6: // Event that is triggered at time of day. 7: public event TimeOfDayHandler Elapsed; 8:  9: // ... 10: } The first thing to note is that the event is a delegate type, which tells us what types of methods may subscribe to it.  
The second thing to note is the signature of the event handler delegate, according to the MSDN: The standard signature of an event handler delegate defines a method that does not return a value, whose first parameter is of type Object and refers to the instance that raises the event, and whose second parameter is derived from type EventArgs and holds the event data. If the event does not generate event data, the second parameter is simply an instance of EventArgs. Otherwise, the second parameter is a custom type derived from EventArgs and supplies any fields or properties needed to hold the event data. So, in a nutshell, the event handler delegates should return void and take two parameters: An object reference to the object that raised the event. An EventArgs (or a subclass of EventArgs) reference to event specific information. Even if your event has no additional information to provide, you are still expected to provide an EventArgs instance.  In this case, feel free to pass the EventArgs.Empty singleton instead of creating new instances of EventArgs (to avoid generating unneeded memory garbage). The EventHandler delegate Because many events have no additional information to pass, and thus do not require custom EventArgs, the signature of the delegates for subscribing to these events is typically: 1: // always takes an object and an EventArgs reference 2: public delegate void EventHandler(object sender, EventArgs e) It would be insane to recreate this delegate for every class that had a basic event with no additional event data, so there already exists a delegate for you called EventHandler that has this very definition!  Feel free to use it to define any events which supply no additional event information: 1: public class Cache 2: { 3: // event that is raised whenever the cache performs a cleanup 4: public event EventHandler OnCleanup; 5:  6: // ... 7: } This will handle any event with the standard EventArgs (no additional information).  But what of events that do need to supply additional information?  Does that mean we’re out of luck for subclasses of EventArgs?  That’s where the generic for of EventHandler comes into play… The generic EventHandler<TEventArgs> delegate Starting with the introduction of generics in .NET 2.0, we have a generic delegate called EventHandler<TEventArgs>.  Its signature is as follows: 1: public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e) 2: where TEventArgs : EventArgs This is similar to EventHandler except it has been made generic to support the more general case.  Thus, it will work for any delegate where the first argument is an object (the sender) and the second argument is a class derived from EventArgs (the event data). For example, let’s say we wanted to create a message receiver, and we wanted it to have a few events such as OnConnected that will tell us when a connection is established (probably with no additional information) and OnMessageReceived that will tell us when a new message arrives (probably with a string for the new message text). 
So for OnMessageReceived, our MessageReceivedEventArgs might look like this: 1: public sealed class MessageReceivedEventArgs : EventArgs 2: { 3: public string Message { get; set; } 4: } And since OnConnected needs no event argument type defined, our class might look something like this: 1: public class MessageReceiver 2: { 3: // event that is called when the receiver connects with sender 4: public event EventHandler OnConnected; 5:  6: // event that is called when a new message is received. 7: public event EventHandler<MessageReceivedEventArgs> OnMessageReceived; 8:  9: // ... 10: } Notice, nowhere did we have to define a delegate to fit our event definition, the EventHandler and generic EventHandler<TEventArgs> delegates fit almost anything we’d need to do with events. Sidebar: Thread-safety and raising an event When the time comes to raise an event, we should always check to make sure there are subscribers, and then only raise the event if anyone is subscribed.  This is important because if no one is subscribed to the event, then the instance will be null and we will get a NullReferenceException if we attempt to raise the event. 1: // This protects against NullReferenceException... or does it? 2: if (OnMessageReceived != null) 3: { 4: OnMessageReceived(this, new MessageReceivedEventArgs(aMessage)); 5: } The above code seems to handle the null reference if no one is subscribed, but there’s a problem if this is being used in multi-threaded environments.  For example, assume we have thread A which is about to raise the event, and it checks and clears the null check and is about to raise the event.  However, before it can do that thread B unsubscribes to the event, which sets the delegate to null.  Now, when thread A attempts to raise the event, this causes the NullReferenceException that we were hoping to avoid! To counter this, the simplest best-practice method is to copy the event (just a multicast delegate) to a temporary local variable just before we raise it.  Since we are inside the class where this event is being raised, we can copy it to a local variable like this, and it will protect us from multi-threading since multicast delegates are immutable and assignments are atomic: 1: // always make copy of the event multi-cast delegate before checking 2: // for null to avoid race-condition between the null-check and raising it. 3: var handler = OnMessageReceived; 4: 5: if (handler != null) 6: { 7: handler(this, new MessageReceivedEventArgs(aMessage)); 8: } The very slight trade-off is that it’s possible a class may get an event after it unsubscribes in a multi-threaded environment, but this is a small risk and classes should be prepared for this possibility anyway.  For a more detailed discussion on this, check out this excellent Eric Lippert blog post on Events and Races. Summary Generic delegates give us a lot of power to make generic algorithms and classes, and the EventHandler delegate family gives us the flexibility to create events easily, without needing to redefine delegates over and over.  Use them whenever you need to define events with or without specialized EventArgs.   Tweet Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Delegates, EventHandler
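    To tie the pieces above together, here is a minimal, self-contained sketch that assembles the post's fragments into one runnable program: the MessageReceiver with its two events, the thread-safe local-copy raise from the sidebar, and a subscriber. The Connect and Receive methods and the Main routine are illustrative additions, not part of the original post.

        using System;

        // Event data from the post: carries the text of a received message.
        public sealed class MessageReceivedEventArgs : EventArgs
        {
            public string Message { get; set; }
        }

        public class MessageReceiver
        {
            // No extra event data needed: the non-generic EventHandler fits.
            public event EventHandler OnConnected;

            // Extra event data needed: the generic EventHandler<TEventArgs> fits.
            public event EventHandler<MessageReceivedEventArgs> OnMessageReceived;

            public void Connect()
            {
                // Copy the multicast delegate to a local before the null check to
                // avoid the unsubscribe race described in the sidebar, then raise
                // with EventArgs.Empty since there is no event data.
                var handler = OnConnected;
                if (handler != null)
                {
                    handler(this, EventArgs.Empty);
                }
            }

            public void Receive(string message)
            {
                var handler = OnMessageReceived;
                if (handler != null)
                {
                    handler(this, new MessageReceivedEventArgs { Message = message });
                }
            }
        }

        public static class Program
        {
            public static void Main()
            {
                var receiver = new MessageReceiver();

                // Subscribing requires no custom delegate types anywhere.
                receiver.OnConnected += (sender, e) => Console.WriteLine("Connected.");
                receiver.OnMessageReceived += (sender, e) => Console.WriteLine("Received: " + e.Message);

                receiver.Connect();
                receiver.Receive("Hello");
            }
        }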

    Read the article

  • Three Principles to Fix Your Broken Organization

    - by Michael Snow
    Everyone's organization is broken in some capacity. For some this is painfully visible both inside and outside their organization. For others, there are cracks noticed by only the keenest trained eyes used to looking for problems in the midst of perfection. We all know that there is often incredible hope in the despair of chaos, and recognition of your problems is the first step on the road to recovery. Let us help you in your path to recovery. Join our very own Christian Finn this Thursday (11/15) as he guides you through three important principles you can take back to the office to start the mending process. (Above Image Credits: the BEST site on the web to make fun of our organizations and ourselves: http://www.despair.com/ )
    His three principles are NOT "TeamWork", "Ignorance" and "Tradition", but - before jumping lower on this blog post to click and register for the upcoming webcast - I thought it would be a good opportunity to give you a little taste of what we have to offer beyond the array of our fabulous On-Demand webcasts from our Social Business Thought Leader Webcast Series featuring Christian as the host. Instead, here's a snippet from our marketing team friends across the pond in Europe, where they hosted a Social Business Forum recently and featured Christian in a segment.
    Simple. Powerful. Proven. Face it, your organization is broken. Customers are not the focus they should be. Processes are running amok. Your intranet is a ghost town. And colleagues wonder why it’s easier to get things done on the Web than at work. What’s the solution? Join us for this Webcast. Christian Finn will talk about three simple, powerful, and proven principles for improving your organization through collaboration. Each principle will be illustrated by real-world examples. Discover:
      - How to dramatically improve workplace collaboration
      - Why improved employee engagement creates better business results
      - What’s the value of a fully engaged customer
    Time to Fix What’s Broken: Register now for this Webcast—the tenth in the Oracle Social Business Thought Leaders Series. Thurs., Nov. 15, 2012, 10 a.m. PT / 1 p.m. ET. Presented by: Christian Finn, Senior Director, Product Management, Oracle.

    Read the article

  • hadoop install cluster

    - by chnet
    Is it possible to run Hadoop on a multi-node cluster where the nodes have Hadoop installed under different paths? I noticed that tutorials such as Michael Noll's, http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/, ask for Hadoop to be installed on every node under the same installation path. I tried to run Hadoop on multiple nodes, with each node installing Hadoop under a different path, but without success. It appears that Hadoop assumes by default that the installation paths are the same on every node.

    Read the article

  • Who should ‘own’ the Enterprise Architecture?

    - by Michael Glas
    I recently had a discussion around who should own an organization’s Enterprise Architecture. It was spawned by an article titled “Busting CIO Myths” in CIO magazine1 where the author interviewed Jeanne Ross, director of MIT's Center for Information Systems Research and co-author of books on enterprise architecture, governance and IT value.In the article Jeanne states that companies need to acknowledge that "architecture says everything about how the company is going to function, operate, and grow; the only person who can own that is the CEO". "If the CEO doesn't accept that role, there really can be no architecture."The first question that came up when talking about ownership was whether you are talking about a person, role, or organization (there are pros and cons to each, but in general, I like to assign accountability to as few people as possible). After much thought and discussion, I came to the conclusion that we were answering the wrong question. Instead of talking about ownership we were talking about responsibility and accountability, and the answer varies depending on the particular role of the organization’s Enterprise Architecture and the activities of the enterprise architect(s).Instead of looking at just who owns the architecture, think about what the person/role/organization should do. This is one possible scenario (thanks to Bob Covington): The CEO should own the Enterprise Strategy which guides the business architecture. The Business units should own the business processes and information which guide the business, application and information architectures. The CIO should own the technology, IT Governance and the management of the application and information architectures/implementations. The EA Governance Team owns the EA process.  If EA is done well, the governance team consists of both IT and the business. While there are many more roles and responsibilities than listed here, it starts to provide a clearer understanding of ‘ownership’. Now back to Jeanne’s statement that the CEO should own the architecture. If you agree with the statement about what the architecture is (and I do agree), then ultimately the CEO does need to own it. However, what we ended up with was not really ownership, but more statements around roles and responsibilities tied to aspects of the enterprise architecture. You can debate the semantics of ownership vs. responsibility and accountability, but in the end the important thing is to come to a clearer understanding that is easily communicated (and hopefully measured) around the question “Who owns the Enterprise Architecture”.The next logical step . . . create a RACI matrix that details the findings . . . but that is a step that each organization needs to do on their own as it will vary based on current EA maturity, company culture, and a variety of other factors. Who ‘owns’ the Enterprise Architecture in your organization? 
1 CIO Magazine Article (Busting CIO Myths): http://www.cio.com/article/704943/Busting_CIO_Myths

    Read the article

  • Ray Wang: Why engagement matters in an era of customer experience

    - by Michael Snow
    Why engagement matters in an era of customer experience R "Ray" Wang Principal Analyst & CEO, Constellation Research Mobile enterprise, social business, cloud computing, advanced analytics, and unified communications are converging. Armed with the art of the possible, innovators are seeking to apply disruptive consumer technologies to enterprise class uses — call it the consumerization of IT in the enterprise. The likely results include new methods of furthering relationships, crafting longer term engagement, and creating transformational business models. It's part of a shift from transactional systems to engagement systems. These transactional systems have been around since the 1950s. You know them as ERP, finance and accounting systems, or even payroll. These systems are designed for massive computational scale; users find them rigid and techie. Meanwhile, we've moved to new engagement systems such as Facebook and Twitter in the consumer world. The rich usability and intuitive design reflect how users want to work — and now users are coming to expect the same paradigms and designs in their enterprise world. ~~~ Ray is a prolific contributor to his own blog as well as others. For a sneak peak at Ray's thoughts on engagement, take a look at this quick teaser on Avoiding Social Media Fatigue Through Engagement Or perhaps you might agree with Ray on Dealing With The Real Problem In Social Business Adoption – The People! Check out Ray's post on the Harvard Business Review Blog to get his perspective on "How to Engage Your Customers and Employees." For a daily dose of Ray - follow him on Twitter: @rwang0 But MOST IMPORTANTLY.... Don't miss the opportunity to join leading industry analyst, R "Ray" Wang of Constellation Research in the latest webcast of the Oracle Social Business Thought Leaders Series as he explains how to apply the 9 C's of Engagement for both your customers and employees.

    Read the article

  • Hello World - My Name is Christian Finn and I'm a WebCenter Evangelist

    - by Michael Snow
    Good Morning World! I'd like to introduce a new member of the Oracle WebCenter Team, Christian Finn. We decided to let him do his own intros today. Look for his guest posts next week and he'll be a frequent contributor to the WebCenter blog and a voice of the community.
    Hello (Oracle) World! Hi everyone, my name is Christian Finn. It’s a coder’s tradition to have “hello world” be the first output from a new program or in a new language. While I have left my coding days far behind, it still seems fitting to start my new role here at Oracle by saying hello to all of you—our customers, partners and my colleagues.
    So by way of introduction, a little background about me. I am the new senior director for evangelism on the WebCenter product management team. Not only am I new to Oracle, but the evangelism team is also brand new. Our mission is to raise the profile of Oracle in all of the markets/conversations in which WebCenter competes—social business, collaboration, portals, Internet sites, and customer/audience engagement. This is all pretty familiar turf for me because, as some of you may know, until recently I was the director of product management at Microsoft for Microsoft SharePoint Server and several other SharePoint products. And prior to that, I held management roles at Microsoft in marketing, channels, learning, and enterprise sales. Before Microsoft, I got my start in the industry as a software trainer and Lotus Notes consultant.
    I am incredibly excited to be joining Oracle at this time because of the tremendous opportunity that lies ahead to improve how people and businesses work. Of all the vendors offering a vision for social business, Oracle is unique in having best of breed strength in market (or coming soon) in all three critical areas: customer experience management; the middleware and back-end applications that run your business; and the social, collaboration, and content technologies that are the connective tissue between them. Everyone else can offer one or two of the above, but not all three unified together. So it is a great time to come on board and there’s a fantastic team of people hard at work on building great products for you. In the coming weeks and months you’ll be hearing much more from us. For now, we’ll kick things off with some blog posts here on the WebCenter blog. Enjoy the reads and please share your thoughts with me on Twitter at @cfinn.

    Read the article

  • C#/.NET Little Wonders: Of LINQ and Lambdas - A Presentation

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Today I’m giving a brief beginner’s guide to LINQ and Lambdas at the St. Louis .NET User’s Group so I thought I’d post the presentation here as well. I updated the presentation a bit as well as added some notes on the query syntax. Enjoy! The presentation: The C#/.NET Fundamentals: Of Lambdas and LINQ (view more presentations from BlackRabbitCoder). Technorati Tags: C#, CSharp, .NET, Little Wonders, LINQ, Lambdas

    Read the article

  • C#/.NET Little Wonders: A Redux

    - by James Michael Hare
    I gave my Little Wonders presentation to the Topeka Dot Net Users' Group today, so re-posting the links to all the previous posts for them. The Presentation: C#/.NET Little Wonders: A Presentation The Original Trilogy: C#/.NET Five Little Wonders (part 1) C#/.NET Five More Little Wonders (part 2) C#/.NET Five Final Little Wonders (part 3) The Subsequent Sequels: C#/.NET Little Wonders: ToDictionary() and ToList() C#/.NET Little Wonders: DateTime is Packed With Goodies C#/.NET Little Wonders: Fun With Enum Methods C#/.NET Little Wonders: Cross-Calling Constructors C#/.NET Little Wonders: Constraining Generics With Where Clause C#/.NET Little Wonders: Comparer<T>.Default C#/.NET Little Wonders: The Useful (But Overlooked) Sets The Concurrent Wonders: C#/.NET Little Wonders: The Concurrent Collections (1 of 3) - ConcurrentQueue and ConcurrentStack C#/.NET Little Wonders: The Concurrent Collections (2 of 3) - ConcurrentDictionary Tweet   Technorati Tags: .NET,C#,Little Wonders

    Read the article

  • Scale plugin keeps forgetting hot corner settings on restart

    - by Michael Butler
    I'm using Ubuntu 12.04 with Unity, which I suppose uses Compiz as well. I have Compiz Settings Manager, and make the top left and bottom left corners of my screen activate the "Scale" (like Exposé) function to scale and show all windows. The problem is that when I restart the computer, the hot corners no longer do anything. I have to go back into compiz settings manager, delete the hot corner option, and then set it again. Something seems to be overriding or deleting the compiz hot corner setting on restart. Update: Sometimes, the setting loses its footing even while the computer is running. I haven't figured out yet what triggers it.

    Read the article

  • How to implement a 2d collision detection for Android

    - by Michael Seun Araromi
    I am making a 2d space shooter using opengl ES. Can someone please show me how to implement a collision detection between the enemy ship and player ship. The code for the two classes are below: Player Ship Class: package com.proandroidgames; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.FloatBuffer; import javax.microedition.khronos.opengles.GL10; public class SSGoodGuy { public boolean isDestroyed = false; private int damage = 0; private FloatBuffer vertexBuffer; private FloatBuffer textureBuffer; private ByteBuffer indexBuffer; private float vertices[] = { 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, }; private float texture[] = { 0.0f, 0.0f, 0.25f, 0.0f, 0.25f, 0.25f, 0.0f, 0.25f, }; private byte indices[] = { 0, 1, 2, 0, 2, 3, }; public void applyDamage(){ damage++; if (damage == SSEngine.PLAYER_SHIELDS){ isDestroyed = true; } } public SSGoodGuy() { ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuf.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(texture.length * 4); byteBuf.order(ByteOrder.nativeOrder()); textureBuffer = byteBuf.asFloatBuffer(); textureBuffer.put(texture); textureBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); } public void draw(GL10 gl, int[] spriteSheet) { gl.glBindTexture(GL10.GL_TEXTURE_2D, spriteSheet[0]); gl.glFrontFace(GL10.GL_CCW); gl.glEnable(GL10.GL_CULL_FACE); gl.glCullFace(GL10.GL_BACK); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glDisable(GL10.GL_CULL_FACE); } } Enemy Ship Class: package com.proandroidgames; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.FloatBuffer; import java.util.Random; import javax.microedition.khronos.opengles.GL10; public class SSEnemy { public float posY = 0f; public float posX = 0f; public float posT = 0f; public float incrementXToTarget = 0f; public float incrementYToTarget = 0f; public int attackDirection = 0; public boolean isDestroyed = false; private int damage = 0; public int enemyType = 0; public boolean isLockedOn = false; public float lockOnPosX = 0f; public float lockOnPosY = 0f; private Random randomPos = new Random(); private FloatBuffer vertexBuffer; private FloatBuffer textureBuffer; private ByteBuffer indexBuffer; private float vertices[] = { 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, }; private float texture[] = { 0.0f, 0.0f, 0.25f, 0.0f, 0.25f, 0.25f, 0.0f, 0.25f, }; private byte indices[] = { 0, 1, 2, 0, 2, 3, }; public void applyDamage() { damage++; switch (enemyType) { case SSEngine.TYPE_INTERCEPTOR: if (damage == SSEngine.INTERCEPTOR_SHIELDS) { isDestroyed = true; } break; case SSEngine.TYPE_SCOUT: if (damage == SSEngine.SCOUT_SHIELDS) { isDestroyed = true; } break; case SSEngine.TYPE_WARSHIP: if (damage == SSEngine.WARSHIP_SHIELDS) { isDestroyed = true; } break; } } public SSEnemy(int type, int direction) { enemyType = type; attackDirection = direction; posY = (randomPos.nextFloat() * 4) + 4; switch (attackDirection) { case 
SSEngine.ATTACK_LEFT: posX = 0; break; case SSEngine.ATTACK_RANDOM: posX = randomPos.nextFloat() * 3; break; case SSEngine.ATTACK_RIGHT: posX = 3; break; } posT = SSEngine.SCOUT_SPEED; ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuf.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(texture.length * 4); byteBuf.order(ByteOrder.nativeOrder()); textureBuffer = byteBuf.asFloatBuffer(); textureBuffer.put(texture); textureBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); } public float getNextScoutX() { if (attackDirection == SSEngine.ATTACK_LEFT) { return (float) ((SSEngine.BEZIER_X_4 * (posT * posT * posT)) + (SSEngine.BEZIER_X_3 * 3 * (posT * posT) * (1 - posT)) + (SSEngine.BEZIER_X_2 * 3 * posT * ((1 - posT) * (1 - posT))) + (SSEngine.BEZIER_X_1 * ((1 - posT) * (1 - posT) * (1 - posT)))); } else { return (float) ((SSEngine.BEZIER_X_1 * (posT * posT * posT)) + (SSEngine.BEZIER_X_2 * 3 * (posT * posT) * (1 - posT)) + (SSEngine.BEZIER_X_3 * 3 * posT * ((1 - posT) * (1 - posT))) + (SSEngine.BEZIER_X_4 * ((1 - posT) * (1 - posT) * (1 - posT)))); } } public float getNextScoutY() { return (float) ((SSEngine.BEZIER_Y_1 * (posT * posT * posT)) + (SSEngine.BEZIER_Y_2 * 3 * (posT * posT) * (1 - posT)) + (SSEngine.BEZIER_Y_3 * 3 * posT * ((1 - posT) * (1 - posT))) + (SSEngine.BEZIER_Y_4 * ((1 - posT) * (1 - posT) * (1 - posT)))); } public void draw(GL10 gl, int[] spriteSheet) { gl.glBindTexture(GL10.GL_TEXTURE_2D, spriteSheet[0]); gl.glFrontFace(GL10.GL_CCW); gl.glEnable(GL10.GL_CULL_FACE); gl.glCullFace(GL10.GL_BACK); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glDisable(GL10.GL_CULL_FACE); } }
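    One common way to answer this question (not from the original post) is a simple axis-aligned bounding-box overlap test run each frame between the player and every live enemy. The sketch below is in C# for consistency with the other code on this page; posX/posY correspond to the enemy fields above (and assume the player's position is tracked the same way), and the 1.0 x 1.0 size is an assumption based on the vertex arrays. The same test ports directly back into the Java classes.

        // Minimal axis-aligned bounding-box (AABB) overlap test.
        public struct Box
        {
            public float X, Y, Width, Height;

            public bool Intersects(Box other)
            {
                // The boxes collide only if they overlap on both the X and Y axes.
                return X < other.X + other.Width &&
                       other.X < X + Width &&
                       Y < other.Y + other.Height &&
                       other.Y < Y + Height;
            }
        }

        public static class CollisionCheck
        {
            // playerX/playerY and enemyX/enemyY correspond to the ships' posX/posY;
            // the 1.0f x 1.0f size is assumed from the quads defined in the classes above.
            public static bool ShipsCollide(float playerX, float playerY, float enemyX, float enemyY)
            {
                var player = new Box { X = playerX, Y = playerY, Width = 1.0f, Height = 1.0f };
                var enemy = new Box { X = enemyX, Y = enemyY, Width = 1.0f, Height = 1.0f };
                return player.Intersects(enemy);
            }
        }

    When ShipsCollide returns true, call applyDamage() on both ships, which is exactly what the existing damage logic in the two classes expects.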

    Read the article

  • Silverlight Cream for June 12, 2010 -- #880

    - by Dave Campbell
    In this Issue: Michael Washington, Diego Poza, Viktor Larsson, Brian Noyes, Charles Petzold, Laurent Bugnion, Anjaiah Keesari, David Anson, and Jeremy Likness. From SilverlightCream.com: My MEF Rant Read Michael Washington's discussion about MEF from someone that's got some experience, but not enough to remember the pain points... how it works, and what he'd like to see. Prism 4: What’s new and what’s next Diego Poza Why Office Hub is important for WP7 Viktor Larsson has another WP7 post up and he's talking about the Office Hub ... good description and maybe the first I've seen on the Office Hub. WCF RIA Services Part 1: Getting Started Brian Noyes has part 1 of a 10-part tutorial series on WCF RIA Services up at SilverlightShow. This first is the intro, but it's a good one. CompositionTarget.Rendering and RenderEventArgs Charles Petzold talks about CompositionTarget.Rendering and using it for calculating time span ... and it works in WPF and WP7 too... cool example from his WPF book, and all the code. Two small issues with Windows Phone 7 ApplicationBar buttons (and workaround) Laurent Bugnion has a post up from earlier this week that I missed describing problems with the WP7 ApplicationBar ... oh, and a workaround for it :) Animation in Silverlight Anjaiah Keesari has a really extensive post up on Silverlight animation, and this is an all-XAML thing... so buckle up we're going old-school :) Two fixes for my Silverlight SplitButton/MenuButton implementation - and true WPF support David Anson revisits and revises his SplitButton code based on a couple problem reports he received. Source for the button and the test project is included. Tips and Tricks for INotifyPropertyChanged Jeremy Likness is discussing INotifyPropertyChanged and describes an extension method. He does bring up a problem associated with this, so check that out. He finishes the post off with a discussion of "Observable Enumerables" Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • C#: Optional Parameters - Pros and Pitfalls

    - by James Michael Hare
    When Microsoft rolled out Visual Studio 2010 with C# 4, I was very excited to learn how I could apply all the new features and enhancements to help make me and my team more productive developers. Default parameters have been around forever in C++, and were intentionally omitted in Java in favor of using overloading to satisfy that need as it was though that having too many default parameters could introduce code safety issues.  To some extent I can understand that move, as I’ve been bitten by default parameter pitfalls before, but at the same time I feel like Java threw out the baby with the bathwater in that move and I’m glad to see C# now has them. This post briefly discusses the pros and pitfalls of using default parameters.  I’m avoiding saying cons, because I really don’t believe using default parameters is a negative thing, I just think there are things you must watch for and guard against to avoid abuses that can cause code safety issues. Pro: Default Parameters Can Simplify Code Let’s start out with positives.  Consider how much cleaner it is to reduce all the overloads in methods or constructors that simply exist to give the semblance of optional parameters.  For example, we could have a Message class defined which allows for all possible initializations of a Message: 1: public class Message 2: { 3: // can either cascade these like this or duplicate the defaults (which can introduce risk) 4: public Message() 5: : this(string.Empty) 6: { 7: } 8:  9: public Message(string text) 10: : this(text, null) 11: { 12: } 13:  14: public Message(string text, IDictionary<string, string> properties) 15: : this(text, properties, -1) 16: { 17: } 18:  19: public Message(string text, IDictionary<string, string> properties, long timeToLive) 20: { 21: // ... 22: } 23: }   Now consider the same code with default parameters: 1: public class Message 2: { 3: // can either cascade these like this or duplicate the defaults (which can introduce risk) 4: public Message(string text = "", IDictionary<string, string> properties = null, long timeToLive = -1) 5: { 6: // ... 7: } 8: }   Much more clean and concise and no repetitive coding!  In addition, in the past if you wanted to be able to cleanly supply timeToLive and accept the default on text and properties above, you would need to either create another overload, or pass in the defaults explicitly.  With named parameters, though, we can do this easily: 1: var msg = new Message(timeToLive: 100);   Pro: Named Parameters can Improve Readability I must say one of my favorite things with the default parameters addition in C# is the named parameters.  It lets code be a lot easier to understand visually with no comments.  Think how many times you’ve run across a TimeSpan declaration with 4 arguments and wondered if they were passing in days/hours/minutes/seconds or hours/minutes/seconds/milliseconds.  A novice running through your code may wonder what it is.  Named arguments can help resolve the visual ambiguity: 1: // is this days/hours/minutes/seconds (no) or hours/minutes/seconds/milliseconds (yes) 2: var ts = new TimeSpan(1, 2, 3, 4); 3:  4: // this however is visually very explicit 5: var ts = new TimeSpan(days: 1, hours: 2, minutes: 3, seconds: 4);   Or think of the times you’ve run across something passing a Boolean literal and wondered what it was: 1: // what is false here? 2: var sub = CreateSubscriber(hostname, port, false); 3:  4: // aha! 
Much more visibly clear 5: var sub = CreateSubscriber(hostname, port, isBuffered: false);   Pitfall: Don't Insert new Default Parameters In Between Existing Defaults Now let’s consider a two potential pitfalls.  The first is really an abuse.  It’s not really a fault of the default parameters themselves, but a fault in the use of them.  Let’s consider that Message constructor again with defaults.  Let’s say you want to add a messagePriority to the message and you think this is more important than a timeToLive value, so you decide to put messagePriority before it in the default, this gives you: 1: public class Message 2: { 3: public Message(string text = "", IDictionary<string, string> properties = null, int priority = 5, long timeToLive = -1) 4: { 5: // ... 6: } 7: }   Oh boy have we set ourselves up for failure!  Why?  Think of all the code out there that could already be using the library that already specified the timeToLive, such as this possible call: 1: var msg = new Message(“An error occurred”, myProperties, 1000);   Before this specified a message with a TTL of 1000, now it specifies a message with a priority of 1000 and a time to live of -1 (infinite).  All of this with NO compiler errors or warnings. So the rule to take away is if you are adding new default parameters to a method that’s currently in use, make sure you add them to the end of the list or create a brand new method or overload. Pitfall: Beware of Default Parameters in Inheritance and Interface Implementation Now, the second potential pitfalls has to do with inheritance and interface implementation.  I’ll illustrate with a puzzle: 1: public interface ITag 2: { 3: void WriteTag(string tagName = "ITag"); 4: } 5:  6: public class BaseTag : ITag 7: { 8: public virtual void WriteTag(string tagName = "BaseTag") { Console.WriteLine(tagName); } 9: } 10:  11: public class SubTag : BaseTag 12: { 13: public override void WriteTag(string tagName = "SubTag") { Console.WriteLine(tagName); } 14: } 15:  16: public static class Program 17: { 18: public static void Main() 19: { 20: SubTag subTag = new SubTag(); 21: BaseTag subByBaseTag = subTag; 22: ITag subByInterfaceTag = subTag; 23:  24: // what happens here? 25: subTag.WriteTag(); 26: subByBaseTag.WriteTag(); 27: subByInterfaceTag.WriteTag(); 28: } 29: }   What happens?  Well, even though the object in each case is SubTag whose tag is “SubTag”, you will get: 1: SubTag 2: BaseTag 3: ITag   Why?  Because default parameter are resolved at compile time, not runtime!  This means that the default does not belong to the object being called, but by the reference type it’s being called through.  Since the SubTag instance is being called through an ITag reference, it will use the default specified in ITag. So the moral of the story here is to be very careful how you specify defaults in interfaces or inheritance hierarchies.  I would suggest avoiding repeating them, and instead concentrating on the layer of classes or interfaces you must likely expect your caller to be calling from. For example, if you have a messaging factory that returns an IMessage which can be either an MsmqMessage or JmsMessage, it only makes since to put the defaults at the IMessage level since chances are your user will be using the interface only. So let’s sum up.  In general, I really love default and named parameters in C# 4.0.  I think they’re a great tool to help make your code easier to read and maintain when used correctly. 
On the plus side, default parameters: Reduce redundant overloading for the sake of providing optional calling structures. Improve readability by being able to name an ambiguous argument. But remember to make sure you: Do not insert new default parameters in the middle of an existing set of default parameters, this may cause unpredictable behavior that may not necessarily throw a syntax error – add to end of list or create new method. Be extremely careful how you use default parameters in inheritance hierarchies and interfaces – choose the most appropriate level to add the defaults based on expected usage. Technorati Tags: C#,.NET,Software,Default Parameters
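    As a small follow-on to the first pitfall above, named arguments are also a practical guard against it. This sketch (an illustrative addition, reusing the post's Message example) shows how a positional call silently changes meaning once priority is inserted before timeToLive, while a call that names timeToLive keeps binding as intended:

        using System;
        using System.Collections.Generic;

        public class Message
        {
            // Signature after the ill-advised change: 'priority' inserted before 'timeToLive'.
            public Message(string text = "", IDictionary<string, string> properties = null,
                           int priority = 5, long timeToLive = -1)
            {
                Console.WriteLine("text='{0}', priority={1}, timeToLive={2}", text, priority, timeToLive);
            }
        }

        public static class Program
        {
            public static void Main()
            {
                var props = new Dictionary<string, string>();

                // Positional call from the post: 1000 now binds to 'priority', not 'timeToLive'.
                var broken = new Message("An error occurred", props, 1000);

                // Named argument keeps binding by name, so the original intent survives.
                var safe = new Message("An error occurred", properties: props, timeToLive: 1000);
            }
        }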

    Read the article

  • 3 Key Trends For Mobile Commerce – Location, Location, Location

    - by Michael Hylton
    This past weekend I was at a major bookstore chain and looking for a particular book.  Rather than ask the clerk, I went to my smartphone and went online to find the book title, author, and competing price.  I know I’m not alone in this effort and more and more individuals (and businesses) will use the power of mobility to tilt the scale in their favor. Armed with a mobile device – smartphone or tablet – folks will use them to research, compare, and ultimately purchase.  A recent PayPal survey found that 46% of respondents plan to use a mobile device this holiday season to make a purchase.   An astounding 27% of consumers in an e-tailing group survey commissioned by Oracle, use a tablet device daily or several times a week to research products and services. Beyond researching or making purchases, 35% of consumers use their smartphone to receive offers and coupons, and 32% access coupons and redeem them at their local retail store.  And with GPS capabilities in smartphones and tablet (and with user’s approval), retailers will start pushing coupons and offers directly to phone users based on their proximity to their store (or their competitors). Security is one concern that both shoppers, companies and phone manufacturers will have to deal with in the coming years.  In that same Oracle-sponsored e-tailing group consumer survey, 32% of consumers were concerned about giving their credit card information via a smartphone. You can gain further insight into the mind of today’s consumer by reading the e-tailing group white paper, titled “the connected consumer”.

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
Specifically: Maintainability — the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere Replaceability — the tight coupling also meant that replacing one component wouldn’t be straightforward, especially if it wasn’t on a similar Microsoft stack. We’d like to be able to replace different parts without having to modify the existing codebase extensively Reusability — we’d like to be able to combine the different pieces of the system in different ways for different sites Repeatable deployments — rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily whether on production, staging servers, test servers or your own local machine Testability — if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development In the next part, I’ll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.

    Read the article

  • Silverlight Cream for April 06, 2010 -- #832

    - by Dave Campbell
    In this Issue: Alex van Beek, Gill Cleeren, SilverlightShow, Michael Sync, Rénald Nollet, Charles Petzold, The-Oliver, and Max Paulousky. Shoutouts: Denislav Savkov of SilverlightShow ported his Slider control to WP7: Windows Phone 7 Series Sample Image Viewer SilverlightShow interview: The Silverlight Tour - what, where and why. Interview with one of the Tour organizers Laurent Duveau From SilverlightCream.com: Silverlight 4: using the VisualStateManager for state animations with MVVM Alex van Beek has an approach to resolving the MVVM issue of Animations without keeping a reference to the ViewModel by way of VisualStateManager Leveraging the ASP.NET Membership in Silverlight Gill Cleeren's post at SilverlightShow talks about using ASP.NET authentication inside your Silverlight making membership not only something you know and understand, but now the transition from your ASP.NET apps to Silverlight is simple as well. Windows Phone 7 Series RSS reader SilverlightShow has a demo RSS Reader for WP7 up... no text, but the code is there. Step by Step Tutorial : Installing Multi-Touch Simulator for Silverlight Phone 7 Michael Sync actually has a multi-touch simulator working for WP7 ... it involves a bunch of moving parts and one of the requirements is Windows 7, but if that works for you, this will too :) Element Property Binding Improvements in Blend 4 Beta and Visual Studio 2010 RC Rénald Nollet demonstrates new Blend and VS2010 features that assists you in Element Property binding with real examples. Projection Transforms Sans Math Charles Petzold is writing about Silverlight and 3D and specifically in this post 3D without math which becomes PlaneProjection... good long tutorial on it and code to back it all up. Daily Demo: Silverlight Install out of browser & Check for Update Behaviors The-Oliver has a post up about OOB and checking for updates using behaviors with only a slight change to your xaml... cool! Wizards. Prototype of sketching Wizard for WPF – 2 Max Paulousky has part 2 of his tutorial on a sketchflow Wizard for WPF ... yes WPF, but check it out... source too. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • C++ Little Wonders: The C++11 auto keyword redux

    - by James Michael Hare
    I’ve decided to create a sub-series of my Little Wonders posts to focus on C++.  Just like their C# counterparts, these posts will focus on those features of the C++ language that can help improve code by making it easier to write and maintain.  The index of the C# Little Wonders can be found here. This has been a busy week with a rollout of some new website features here at my work, so I don’t have a big post for this week.  But I wanted to write something up, and since lately I’ve been renewing my C++ skills in a separate project, it seemed like a good opportunity to start a C++ Little Wonders series.  Most of my development work still tends to focus on C#, but it was great to get back into the saddle and renew my C++ knowledge.  Today I’m going to focus on a new feature in C++11 (formerly known as C++0x, which is a major move forward in the C++ language standard).  While this small keyword can seem so trivial, I feel it is a big step forward in improving readability in C++ programs. The auto keyword If you’ve worked on C++ for a long time, you probably have some passing familiarity with the old auto keyword as one of those rarely used C++ keywords that was almost never used because it was the default. That is, in the code below (before C++11): 1: int foo() 2: { 3: // automatic variables (allocated and deallocated on stack) 4: int x; 5: auto int y; 6:  7: // static variables (retain their value across calls) 8: static int z; 9:  10: return 0; 11: } The variable x is assumed to be auto because that is the default, thus it is unnecessary to specify it explicitly as in the declaration of y below that.  Basically, an auto variable is one that is allocated and de-allocated on the stack automatically.  Contrast this to static variables, that are allocated statically and exist across the lifetime of the program. Because auto was so rarely (if ever) used since it is the norm, they decided to remove it for this purpose and give it new meaning in C++11.  The new meaning of auto: implicit typing Now, if your compiler supports C++ 11 (or at least a good subset of C++11 or 0x) you can take advantage of type inference in C++.  For those of you from the C# world, this means that the auto keyword in C++ now behaves a lot like the var keyword in C#! For example, many of us have had to declare those massive type declarations for an iterator before.  Let’s say we have a std::map of std::string to int which will map names to ages: 1: std::map<std::string, int> myMap; And then let’s say we want to find the age of a given person: 1: // Egad that's a long type... 2: std::map<std::string, int>::const_iterator pos = myMap.find(targetName); Notice that big ugly type definition to declare variable pos?  Sure, we could shorten this by creating a typedef of our specific map type if we wanted, but now with the auto keyword there’s no need: 1: // much shorter! 2: auto pos = myMap.find(targetName); The auto now tells the compiler to determine what type pos should be based on what it’s being assigned to.  This is not dynamic typing, it still determines the type as if it were explicitly declared and once declared that type cannot be changed.  That is, this is invalid: 1: // x is type int 2: auto x = 42; 3:  4: // can't assign string to int 5: x = "Hello"; Once the compiler determines x is type int it is exactly as if we typed int x = 42; instead, so don’t' confuse it with dynamic typing, it’s still very type-safe. 
An interesting feature of the auto keyword is that you can modify the inferred type: 1: // declare method that returns int* 2: int* GetPointer(); 3:  4: // p1 is int*, auto inferred type is int 5: auto *p1 = GetPointer(); 6:  7: // p2 is int*, auto inferred type is int* 8: auto p2 = GetPointer(); Notice that in both of these cases p1 and p2 are determined to be int*, but in each case the inferred type was different.  Because we declared p1 as auto *p1 and GetPointer() returns int*, it inferred that the type int was needed to complete the declaration.  In the second case, however, we declared p2 as auto p2, which means the inferred type was int*.  Ultimately, this makes p1 and p2 the same type, but which type is inferred makes a difference if you are chaining multiple inferred declarations together.  In these cases, the inferred type of each must match the first: 1: // Type inferred is int 2: // p1 is int* 3: // p2 is int 4: // p3 is int& 5: auto *p1 = GetPointer(), p2 = 42, &p3 = p2; Note that this works because the inferred type was int; if the inferred type was int* instead: 1: // syntax error, p1 was inferred to be int* so p2 and p3 don't make sense 2: auto p1 = GetPointer(), p2 = 42, &p3 = p2; You could also use const or static to modify the inferred type: 1: // inferred type is an int, theAnswer is a const int 2: const auto theAnswer = 42; 3:  4: // inferred type is double, Pi is a static double 5: static auto Pi = 3.1415927; Thus in the examples above it inferred the types int and double respectively, which were then qualified with const and static. Summary The auto keyword has gotten new life in C++11 to allow you to infer the type of a variable from its initialization.  This simple little keyword can be used to cut down large declarations for complex types into a much more readable form, where appropriate.   Technorati Tags: C++, C++11, Little Wonders, auto
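Since the post explains the new auto by analogy to C#’s var keyword, a minimal C# sketch of that analogy may help readers coming from the C# side (this is an editorial illustration, not part of the original article; the names VarAnalogy, ages, and x are arbitrary):

    using System;
    using System.Collections.Generic;

    class VarAnalogy
    {
        static void Main()
        {
            // var plays the role auto now plays in C++11: the compiler infers the
            // declared type from the initializer, and that type is then fixed.
            var ages = new Dictionary<string, int> { { "Bob", 42 } };

            // Without var this would be the much longer KeyValuePair<string, int>.
            foreach (var pair in ages)
            {
                Console.WriteLine("{0} is {1}", pair.Key, pair.Value);
            }

            var x = 42;     // x is int, exactly as if it were declared int x = 42;
            // x = "Hello"; // compile-time error, mirroring the C++ example above
        }
    }

As in the C++ case, this is compile-time inference, not dynamic typing.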

    Read the article

  • Who is Jeremiah Owyang?

    - by Michael Hylton
    Q: What’s your current role and what career path brought you here? J.O.: I'm currently a partner and one of the founding team members at Altimeter Group.  I'm currently the Research Director, and I also wear the hat of Industry Analyst. Prior to joining Altimeter, I was an Industry Analyst at Forrester covering Social Computing, and before that, deployed and managed the social media program at Hitachi Data Systems in Santa Clara.  Around that time, I started a career blog called Web Strategy which focused on how companies were using the web to connect with customers --and never looked back. Q: As an industry analyst, what are you focused on these days? J.O.: There are three trends that I'm focusing my research on at this time:  1) The Dynamic Customer Journey:  Individuals (both b2c and b2b) are given so many options in their sources of data, channels to choose from and screens to consume them on that we've found that at each given touchpoint there are 75 potential permutations.  Companies that can map this, then deliver information to individuals when they need it, will have a competitive advantage, and we want to find out who's doing this.  2) One of the sub themes that supports this trend is Social Performance.  Yesterday's social web was disparate engagement of humans, but the next phase will be data driven, and soon new technologies will emerge to help all those that are consuming, publishing, and engaging on the social web to be more efficient with their time through forms of automation.  As you might expect, this comes with upsides and downsides.  3) The Sentient World is our research theme that looks out the furthest, as the world around us (even inanimate objects) becomes 'self aware' and is able to talk back to us via digital devices and beyond.  Big data, the internet of things, and mobile devices will all be part of this next set. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? J.O.: The lines between our work and personal lives are dissolving, and this leads to a greater upside of being always connected and having deeper relationships with those that are not.  It also means a downside of societal expectations that we're always around and available for colleagues, customers, and beyond.  In the future, a balance will be sought as we seek to achieve the goals of family, friends, work, and our own personal desires.  All of this is, ironically, being written at 4:30 am on a Sunday.  Q: How can people keep up with what you’re working on? J.O.: A great question, thanks.  There are a few sources of information to find out; I'll lead with the first, which is my blog at web-strategist.com.  A few times a week I'll publish my industry insights (hires, trends, forces, funding, M&A, business needs) as well as on twitter, where I'll point to all the news that's fit to print @jowyang. As my research reports go live (we publish them for all to read --called Open Research-- at no cost) they'll emerge on my blog, or check out the research tab to find out more now.  http://www.web-strategist.com/blog/research/ Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon?  J.O.: Absolutely! This coming Thursday, September 13th, I’m doing a webcast with Oracle on “Managing Social Relationships for the Enterprise”.
This is going to be a great discussion with Reggie Bradford, Senior Vice President of Product Development at Oracle, and Christian Finn, Senior Director of Product Management for Oracle WebCenter. I’m looking forward to a great discussion around all those issues that so many companies are struggling with these days as they realize how much social media is impacting their business. It’s changing the way your customers and employees interact with your brand. Today it’s no longer a matter of when to become a social-enabled enterprise, but how to become a successful one. Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peek of what to expect on Thursday’s webcast?  J.O.: Below is a 15-minute video which encapsulates Altimeter’s themes on the Dynamic Customer Journey and the Sentient World. I’m really proud to have taken an active role in the first ever LeWeb outside of Paris. This one, which was held in downtown London across the street from Westminster Abbey, was sold out. If you’ve not heard of LeWeb, it is a global Internet conference hosted by Loic and Geraldine Le Meur, a power couple who stem from Paris but are also living in Silicon Valley, and it is one of my favorite conferences for connecting with brands, technology innovators, investors and friends. Altimeter was able to play a minor role in suggesting the theme for the event, “Faster Than Real Time”, which stems from previous LeWebs that focused on the “real-time web”. In this radical state, companies are able to anticipate the needs of their customers by using data, technology, and devices and deliver meaningful experiences before customers even know they need it. I explore two of Altimeter’s three research themes, the Dynamic Customer Journey and the Sentient World, in my speech, but due to time did not focus on Adaptive Organization.

    Read the article

  • Pella Increases Online Appointment Scheduling and Rapidly Personalizes and Updates Marketing Initiatives

    - by Michael Snow
    Originally posted on the Oracle Customers page.
    Oracle Customer: Pella Corporation
    Location: Pella, Iowa
    Industry: Industrial Manufacturing
    Employees: 7,100
    Pella Corporation is an innovative leader in creating a better view for homes and businesses by designing, testing, manufacturing, and installing quality windows and doors for new construction, remodeling, and replacement applications. A family-owned company, Pella has an 88-year history of innovation and, today, is the second-largest manufacturer of windows and doors in the country, including patio, entry, and storm doors. The company has 10 manufacturing facilities in the United States and window and door showrooms across the United States and Canada. In-home consultations are an important part of Pella’s sales process. Several years ago, the company launched an online appointment scheduling tool to improve customer convenience. While the functionality worked well, the company wanted to increase online conversion rates and decrease the number of incomplete online appointment schedules. It also wanted to give its business analysts and other line-of-business personnel the ability to update the scheduling tool and interface quickly, without needing IT team intervention and recoding, to better capitalize on opportunities and personalize the interface for specific markets. Pella also looked to reduce IT complexity by selecting a system that integrated easily with its Oracle E-Business Suite Release 12.1 enterprise applications. Pella, which has a large Oracle footprint, selected Oracle WebCenter Sites as the foundation for its new, real-time appointment scheduling application. It used the solution to re-engineer the scheduling process and the information required to set up an appointment. Just a few months after launch, it is seeing improvement in the number of appointments booked online and experiencing fewer abandoned appointments during the scheduling process. As important, Pella can now quickly and easily make changes to images, video, and content displayed on the scheduling tool interface, delivering greater business agility. Previously, such changes required a developer and weeks of coding and testing. Today, a member of Pella’s business analyst team can complete the changes in hours. This capability enables Pella to personalize the Web experience for customers. For example, it can display different products or images for clients in different regions. The solution is also highly scalable. Pella is using Oracle WebCenter Sites for appointment scheduling now and plans to migrate Pella.com, its configurator tool, and dealer microsites onto the platform. Further, Pella plans to leverage the solution to optimize the experience on mobile devices. “Moving ahead, we expect to extensively leverage Oracle WebCenter Sites to gain greater flexibility in updating the Web experience, thanks to the ability to make updates quickly without developer resources. Segmentation and targeting capabilities will allow us to create a more personalized experience across both traditional and mobile platforms,” said Teri Lancaster, IT manager, customer experience applications, Pella Corporation.
    A word from Pella Corporation
    “Oracle WebCenter Sites, from the start, delivered important benefits. We’ve redesigned the online scheduling process and are seeing more potential customers completing consultation bookings online. More important, the solution opens a world of other possibilities as we plan to migrate Pella.com and our dealer microsites to the platform, and leverage it to optimize the Web experience for our mobile devices.” – Teri Lancaster, IT Manager, Customer Experience Applications, Pella Corporation
    Oracle Product and Services
    Oracle WebCenter Sites
    Why Oracle
    Pella has a long-standing relationship with Oracle. “We look to Oracle first for a solution. Our Oracle account team came to us with several solutions, and Oracle WebCenter Sites delivered the scalability, ease of use, and flexibility that we required for the appointment scheduling initiative and other Web projects on the horizon, including migrating Pella.com and optimizing our site for mobile platforms,” said Teri Lancaster, IT manager, customer experience applications, Pella Corporation.
    Implementation Process
    The Pella implementation team, working with Oracle partner Element Solutions, LLC, integrated the appointment-setting application with Pella.com as well as the company’s Oracle E-Business Suite customer relationship management applications. Using Oracle WebCenter Sites’ development tools and Subversion capabilities to develop the application, the Element Solutions and Pella teams could work remotely and collaboratively, accelerating deployment. Pella went live with the new scheduling tool in just six months.
    Partner
    Oracle Partner: Element Solutions, LLC
    Element Solutions was instrumental at every major stage of the project, including design creation and approval, development, training, and rollout. “Element Solutions was a vital partner for our Oracle WebCenter Sites initiative. The team provided guidance and, more important, critical knowledge transfer at every stage, which equipped us to get the most out of this powerful and versatile solution. We were definitely collaboration partners,” Lancaster said.
    Resources
    Pella Corporation Upgrades Enterprise Applications to Continue to Improve Manufacturing Efficiency
    Thousands of Customers Successfully and Smoothly Upgrade to Oracle E-Business Suite 12.1 for New Functionality, Lower Operating Costs and Improved Shared Operations
    Managing the Virtual World

    Read the article

  • User Group Meeting Summary - April 2010

    - by Michael Stephenson
    Thanks to everyone who could make it to what turned out to be an excellent SBUG event. First, some thanks to:
    Speakers: Anthony Ross and Elton Stoneman
    Host: the various people at Hitachi who helped to organise and arrange the venue.
    Session 1 - Getting up and running with Windows Mobile and the Windows Azure Service Bus
    In this session Anthony discussed some considerations for using Windows Mobile and the Windows Azure Service Bus, drawn from a real-world project which Hitachi have been working on with EasyJet. Anthony also walked through a simplified demo of the concepts which applied on the project. In addition to the slides and demo, it was also very interesting to talk with the guys involved in this project and hear about their real experiences developing with the Azure Service Bus and some of the limitations they have had to work around in Windows Mobile's ability to interact with the service bus. On the back of this session we will look to do some further activities around this topic, and the guys offered to share their wish list of features for both Windows Mobile and Windows Azure, which we will look to share for user group discussion. Another interesting point was the cost aspect of using the ISB, which was very low.
    Session 2 - The Enterprise Cache
    In the second session Elton used a few slides based around one of his customer scenarios, where they are looking into the concept of an Enterprise Cache within the organisation. Elton discussed this concept and also a CodePlex project he is putting together which allows you to take advantage of a cache with various providers such as Memcached, AppFabric Caching and NCache. Following the presentation it was interesting to hear people's thoughts on various aspects, such as the enterprise cache versus an out-of-process application cache. There was also interesting discussion around how people would like to search the cache in the future. We will again look to put together some follow-up activity on this.
    Meeting Summary
    Following the meeting, all slide decks are saved in the SkyDrive location where we keep content from all meetings: http://cid-40015ea59a1307c8.skydrive.live.com/browse.aspx/.Public/SBUG/SBUG%20Meetings/2010%20April
    Remember that the details of all previous events are on the following page: http://uksoabpm.org/Events.aspx
    Competition
    We had three copies of the Windows Identity Foundation Patterns and Practices book that were raffled on the night; it would be great to hear any feedback on the book from those who won it.
    Recording
    The user group meeting was recorded and we will look to make this available online sometime soon.
    UG Business
    The following things were discussed as general UG topics: We will change the name of the user group to the UK Connected Systems User Group so that we are more in line with other user groups which cover similar topics; we believe this will help us to attract more members. The content or focus of the user group is not expected to change. The next meeting is 26th May and can be registered for at the following link: http://sbugmay2010.eventbrite.com/

    Read the article

  • Silverlight Cream for April 24, 2010 -- #846

    - by Dave Campbell
    In this Issue: Michael Washington, Timmy Kokke, Pete Brown, Paul Yanez, Emil Stoychev, Jeremy Likness, and Pavan Podila. Shoutouts: If you've got some time to spend, the User Experience Kit is packed with info: User Experience Kit, and just plain fun to navigate ... thanks Scott Barnes for reminding me about it! Jesse Liberty is looking for some help organizing and cataloging posts for a new project he's got going: Help Wanted Emil Stoychev posted Slides and demos from my talk on Silverlight 4 From SilverlightCream.com: Silverlight 4 Drag and Drop File Manager Michael Washington has a post up about a Silverlight Drag and Drop File Manager in MVVM, but a secondary important point about the post is that he and Alan Beasley followed strict Designer/Developer rules on this... you recognized Alan's ListBox didn't you? Changing CSS with jQuery syntax in Silverlight using jLight Timmy Kokke is using jLight as introduced in a prior post to interact with the DOM from Silverlight. Essential Silverlight and WPF Skills: The UI Thread, Dispatchers, Background Workers and Async Network Programming Pete Brown has a great backrounder up for WPF and Silverlight devs on threading and networking, good comments too so far. Fluid layout and Fullscreen in Silverlight Paul Yanez has a quick post and demo up on forcing full-screen with a fluid layout, all code included -- and it doesn't take much Data Binding in Silverlight Emil Stoychev has a great long tutorial up on DataBinding in Silverlight ... he hits all the major points with text, samples, and code... definitely one to read! Yet Another MVVM Locator Pattern Another not-necessarily Silverlight post from Jeremy Likness -- but definitely a good one on MVVM and locator patterns. The SpiderWebControl for Silverlight Pavan Podila has a 'SpiderWebControl' for Silverlight 4 up... this is a great network graph control with any sort of feature I can think of... check out the demo, then grab the code... or the other way around, your choice :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, we began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  The last post discussed the ConcurrentDictionary<T>.  Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux. Recap As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace.  Of these, the final concurrent collection we will examine is the ConcurrentBag and a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentBag<T> – Thread-safe unordered collection. Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries.  Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order.  This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it’s possible new items could be consumed before older items given the right circumstances for a period of time. So why would you ever want a container that can be unfair?  Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit.  Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don’t care what item it is or how long it’s been waiting. The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread.  This means that adding to or removing from a thread-local list requires very low synchronization.  The problem comes in where a thread goes to consume an item but its local list is empty.  In this case the bag performs “work-stealing” where it will rob an item from another thread that has items in its list.  This requires a higher level of synchronization which adds a bit of overhead to the take operation. So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less than ideal in situations where some threads are dedicated producers and the other threads are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items.
Like the other concurrent collections, there are some curiosities to keep in mind: IsEmpty(), Count, ToArray(), and GetEnumerator() lock the collection Each of these needs to take a snapshot of the whole bag to determine if it is empty, thus they tend to be more expensive and cause Add() and Take() operations to block. ToArray() and GetEnumerator() are static snapshots Because each is based on a snapshot, it will not show subsequent updates after the snapshot. Add() is lightweight Since it adds to the thread-local list, there is very little overhead on Add. TryTake() is lightweight if items are in the thread-local list As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack and ConcurrentQueue; however, if the thread-local list is empty, it must steal work from another thread, which is more expensive. Remember, a bag is not ideal for all situations; it is mainly suited to situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete.  The main point is that the bag works best when each thread both takes and adds items. For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bear with me while I use this contrived example for simplicity. So let’s say we have a work function that will take a Tuple out of a bag; this Tuple will contain two ints.  The first int is the original number, and the second int is the current exponent (power) reached for that number.  So we could load our bag with the initial values (let’s say we want to know the largest power of each of 2, 3, 5, and 7 under 100): 1: var bag = new ConcurrentBag<Tuple<int, int>> 2: { 3: Tuple.Create(2, 1), 4: Tuple.Create(3, 1), 5: Tuple.Create(5, 1), 6: Tuple.Create(7, 1) 7: }; Then we can create a method that, given the bag, will take out an item and try the next power: 1: public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold) 2: { 3: Tuple<int,int> pair; 4:  5: // while there are items to take, this will prefer local first, then steal if no local 6: while (bag.TryTake(out pair)) 7: { 8: // look at next power 9: var result = Math.Pow(pair.Item1, pair.Item2 + 1); 10:  11: if (result < threshold) 12: { 13: // if smaller than threshold bump power by 1 14: bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1)); 15: } 16: else 17: { 18: // otherwise, we're done 19: Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.", 20: pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold); 21: } 22: } 23: } Now that we have this, we can load up this method as an Action into our Tasks and run it: 1: // create array of tasks, start all, wait for all 2: var tasks = new[] 3: { 4: new Task(() => FindHighestPowerUnder(bag, 100)), 5: new Task(() => FindHighestPowerUnder(bag, 100)), 6: }; 7:  8: Array.ForEach(tasks, t => t.Start()); 9:  10: Task.WaitAll(tasks); Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag as the thread-local lists will allow for quick processing.
However, if you need ordering or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test as your mileage will vary depending on your situation! BlockingCollection<T> – A producers & consumers pattern collection The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don’t specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don’t want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don’t remember the producers and consumers classical computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that “bin” where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of “bin”, for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let’s look at some of the methods of note in BlockingCollection: BoundedCapacity returns capacity of the “bin” If the bin is unbounded, the capacity is int.MaxValue. Count returns an internally-kept count of items This makes it O(1), but if you modify underlying collection directly (not recommended) it is unreliable. CompleteAdding() is used to cut off further adds. This sets IsAddingCompleted and begins to wind down consumers once empty. IsAddingCompleted is true when producers are “done”. Once you are done producing, should complete the add process to alert consumers. IsCompleted is true when producers are “done” and “bin” is empty. Once you mark the producers done, and all items removed, this will be true. Add() is a blocking add to collection. If bin is full, will wait till space frees up Take() is a blocking remove from collection. If bin is empty, will wait until item is produced or adding is completed. GetConsumingEnumerable() is used to iterate and consume items. Unlike the standard enumerator, this one consumes the items instead of iteration. TryAdd() attempts add but does not block completely If adding would block, returns false instead, can specify TimeSpan to wait before stopping. 
TryTake() attempts to take but does not block completely Like TryAdd(), if taking would block, returns false instead, can specify TimeSpan to wait. Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added.  This means that any attempts to TryAdd() or Add() after marked completed will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false. So let’s create a simple program to try this out.  Let’s say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded). 1: var bin = new BlockingCollection<int>(5); Now, we create a method to produce items: 1: public static void ProduceItems(BlockingCollection<int> bin, int numToProduce) 2: { 3: for (int i = 0; i < numToProduce; i++) 4: { 5: // try for 10 ms to add an item 6: while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10))) 7: { 8: Console.WriteLine("Bin is full, retrying..."); 9: } 10: } 11:  12: // once done producing, call CompleteAdding() 13: Console.WriteLine("Adding is completed."); 14: bin.CompleteAdding(); 15: } And one to consume them: 1: public static void ConsumeItems(BlockingCollection<int> bin) 2: { 3: // This will only be true if CompleteAdding() was called AND the bin is empty. 4: while (!bin.IsCompleted) 5: { 6: int item; 7:  8: if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10))) 9: { 10: Console.WriteLine("Bin is empty, retrying..."); 11: } 12: else 13: { 14: Console.WriteLine("Consuming item {0}.", item); 15: Thread.Sleep(TimeSpan.FromMilliseconds(20)); 16: } 17: } 18: } Then we can fire them off: 1: // create one producer and two consumers 2: var tasks = new[] 3: { 4: new Task(() => ProduceItems(bin, 20)), 5: new Task(() => ConsumeItems(bin)), 6: new Task(() => ConsumeItems(bin)), 7: }; 8:  9: Array.ForEach(tasks, t => t.Start()); 10:  11: Task.WaitAll(tasks); Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd(). 1: Consuming item 0. 2: Consuming item 1. 3: Bin is full, retrying... 4: Bin is full, retrying... 5: Consuming item 3. 6: Consuming item 2. 7: Bin is full, retrying... 8: Consuming item 4. 9: Consuming item 5. 10: Bin is full, retrying... 11: Consuming item 6. 12: Consuming item 7. 13: Bin is full, retrying... 14: Consuming item 8. 15: Consuming item 9. 16: Bin is full, retrying... 17: Consuming item 10. 18: Consuming item 11. 19: Bin is full, retrying... 20: Consuming item 12. 21: Consuming item 13. 22: Bin is full, retrying... 23: Bin is full, retrying... 24: Consuming item 14. 25: Adding is completed. 26: Consuming item 15. 27: Consuming item 16. 28: Consuming item 17. 29: Consuming item 19. 30: Consuming item 18. Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit. Summary The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and then steal if not.  
However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around all of the concurrent queue, stack, and bag that allows you to add producer and consumer semantics easily including waiting when the bin is full or empty. That’s the end of my dive into the concurrent collections.  I’d also strongly recommend, once again, you read this excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain using these collections judiciously (here). Tweet Technorati Tags: C#,.NET,Concurrent Collections,Little Wonders
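The post lists GetConsumingEnumerable() among the notable BlockingCollection members but never shows it in code. Here is a minimal sketch of a consumer written with it (an editorial illustration rather than part of the original article, assuming .NET 4.0 and the same BlockingCollection<int> named bin used in the post's example):

    using System;
    using System.Collections.Concurrent;

    static class ConsumingEnumerableSketch
    {
        // The foreach blocks whenever the bin is empty and ends on its own once
        // CompleteAdding() has been called and the remaining items have been taken,
        // so no explicit IsCompleted / TryTake loop is needed.
        public static void ConsumeItems(BlockingCollection<int> bin)
        {
            foreach (var item in bin.GetConsumingEnumerable())
            {
                Console.WriteLine("Consuming item {0}.", item);
            }
        }
    }

This version could stand in for the ConsumeItems method in the post's example without any change to the producer.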

    Read the article
