Search Results

Search found 787 results on 32 pages for 'augmented reality'.


  • RSS feeds in Orchard

    - by Bertrand Le Roy
    When we added RSS to Orchard, we wanted to make it easy for any module to expose any content as a feed. We also wanted the rendering of the feed to be handled by Orchard in order to minimize the amount of work from the module developer. A typical example of such a feed is of course a blog feed. We have an IFeedManager interface for which you can get the built-in implementation through dependency injection. Look at the BlogController constructor for an example:

        public BlogController(
            IOrchardServices services,
            IBlogService blogService,
            IBlogSlugConstraint blogSlugConstraint,
            IFeedManager feedManager,
            RouteCollection routeCollection) {

    If you look a little further in that same controller, in the Item action, you'll see a call to the Register method of the feed manager:

        _feedManager.Register(blog);

    This is in reality a call into an extension method that is specialized for blogs, but we could have made the two calls to the actual generic Register directly in the action instead; that is just an implementation detail:

        feedManager.Register(blog.Name, "rss", new RouteValueDictionary { { "containerid", blog.Id } });
        feedManager.Register(blog.Name + " - Comments", "rss", new RouteValueDictionary { { "commentedoncontainer", blog.Id } });

    What those two effective calls do is register two feeds: one for the blog itself and one for the comments on the blog. For each call, the name of the feed is provided, then the type of feed ("rss") and some values to be injected into the generic RSS route that will be used later to route the feed to the right providers. This is all you have to do to expose a new feed. If you're only interested in exposing feeds, you can stop right there. If on the other hand you want to know what happens under the hood after that, carry on.

    What happens after that is that the feed manager will take care of formatting the link tag for the feed (see FeedManager.GetRegisteredLinks). The GetRegisteredLinks method itself will be called from a specialized filter, FeedFilter. FeedFilter is an MVC filter and the event we're interested in hooking into is OnResultExecuting, which happens after the controller action has returned an ActionResult and just before MVC executes that action result. In other words, our feed registration has already been called but the view is not yet rendered. Here's the code for OnResultExecuting:

        model.Zones.AddAction("head:after", html => html.ViewContext.Writer.Write(_feedManager.GetRegisteredLinks(html)));

    This is another piece of code whose execution is deferred. It is saying that whenever it comes time to render the "head" zone, this code should be called right after. The code itself renders the link tags. As a result of all that, here's what can be found in an Orchard blog's head section:

        <link rel="alternate" type="application/rss+xml" title="Tales from the Evil Empire" href="/rss?containerid=5" />
        <link rel="alternate" type="application/rss+xml" title="Tales from the Evil Empire - Comments" href="/rss?commentedoncontainer=5" />

    The generic action that these two feeds point to is Index on FeedController. That controller has three important dependencies: an IFeedBuilderProvider, an IFeedQueryProvider and an IFeedItemProvider. Different implementations of these interfaces can provide different formats of feeds, such as RSS and Atom. The Match method enables each of the competing providers to provide a priority for themselves based on arbitrary criteria that can be found on the FeedContext.
    This means that a provider can be selected based not only on the desired format, but also on the nature of the objects being exposed as a feed, or on something even more arbitrary such as the destination device (you could imagine, for example, giving shorter text-only excerpts of posts on mobile devices, and full HTML on desktop). The key here is extensibility and dynamic competition and collaboration from unknown and loosely coupled parts. You'll find this pattern pretty much everywhere in the Orchard architecture.

    The RssFeedBuilder implementation of IFeedBuilderProvider is also a regular controller with a Process action that builds an RssResult, which is itself a thin ActionResult wrapper around an XDocument.

    Let's get back to the FeedController's Index action. After having called into each known feed builder to get its priority on the currently requested feed, it will select the one with the highest priority. The next thing it needs to do is to actually fetch the data for the feed. This again is a collaborative effort from a priori unknown providers, the implementations of IFeedQueryProvider. There are several implementations by default in Orchard, and the choice between them is again made through a Match method. ContainerFeedQuery, for example, chimes in when a "containerid" parameter is found in the context (see the URL in the link tag above):

        public FeedQueryMatch Match(FeedContext context) {
            var containerIdValue = context.ValueProvider.GetValue("containerid");
            if (containerIdValue == null) return null;
            return new FeedQueryMatch { FeedQuery = this, Priority = -5 };
        }

    The actual work is done in the Execute method, which finds the right container content item in the Orchard database and adds elements for each of its items. In other words, the feed query provider knows how to retrieve the list of content items to add to the feed.

    The last step is to translate each of the content items into feed entries, which is done by implementations of IFeedItemBuilder. There is no Match method this time. Instead, all providers are called with the collection of items (or more accurately with the FeedContext, but this contains the list of items, which is what's relevant in most cases). Each provider can then choose to pick the items that it knows how to treat and transform them into the format requested. This enables the construction of heterogeneous feeds that expose content items of various types in a single feed. That will be extremely important when you want to expose a single feed for your whole site.

    So here are feeds in Orchard in a nutshell. The main point here is that there is a fair number of components involved, with some complexity in the implementation in order to allow for extreme flexibility, but the part that you use to expose a new feed is extremely simple and light: declare that you want your content exposed as a feed and you're done. There are cases where you'll have to dive in and provide new implementations for some or all of the interfaces involved, but that requirement will only arise as needed. For example, you might need to create a new feed item builder to include your custom content type, but that effort will be extremely focused on the specialized task at hand. The rest of the system won't need to change. So what do you think?
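    To make that query-provider extension point a bit more concrete, here is a rough sketch of what a custom provider might look like. Only the Match shape is taken from the ContainerFeedQuery excerpt above; the IFeedQuery split, the Execute signature, the context members used inside it, and the "authorid" parameter are assumptions made for illustration, not the exact Orchard API:

        // Hypothetical provider that bids on feed requests carrying an "authorid" parameter.
        public class AuthorFeedQuery : IFeedQueryProvider, IFeedQuery
        {
            public FeedQueryMatch Match(FeedContext context)
            {
                // Only compete for the feed when our parameter is present in the route values.
                var authorIdValue = context.ValueProvider.GetValue("authorid");
                if (authorIdValue == null) return null;
                return new FeedQueryMatch { FeedQuery = this, Priority = -5 };
            }

            public void Execute(FeedContext context)
            {
                // Assumed shape: look up that author's content items (query omitted) and
                // hand each one to the context so the feed item builders can format it.
            }
        }

    The feed item builders would then take over from there, exactly as described above for the built-in providers.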

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important concept in SQL Server. I have recently started to explore it and I am really learning some good concepts. Here are two very important blog posts which one should go over before continuing with this post: Installing Data Quality Services (DQS) on SQL Server 2012, and Connecting Error to Data Quality Services (DQS) on SQL Server 2012. This article is an introduction to Data Quality Services for beginners. We will be using an Excel file as our sample data.

    In the first article we learned to install DQS. In this article we will see how we can go about building a Knowledge Base and using it to help us identify the quality of the data, as well as help correct bad quality data. Here are the two very important steps we will be learning in this tutorial:

    Building a New Knowledge Base
    Creating a New Data Quality Project

    Let us start building the Knowledge Base. Click on New Knowledge Base. In our project we will be using Excel as the knowledge base source. Here is the Excel file which we will be using. There are two columns: one is Colors and another is Shade. They are independent columns and not related to each other. The point which I am trying to show is that Column A contains unique data and Column B contains duplicate records.

    Clicking on New Knowledge Base will bring up the following screen. Enter the name of the new knowledge base. Clicking NEXT will bring up the following screen, where it will allow you to select the Excel file and also let you select the source columns. I have selected both Colors and Shade as source columns.

    Creating a domain is very important. Here you can create a unique domain per column, or a domain which is built as a composite of Colors and Shade. As this is the first example, I will create a unique domain for each column: for Colors I will create the domain Colors and for Shade I will create the domain Shade. Here is the screen which demonstrates how it will look after creating the domains.

    Clicking NEXT will bring you to the following screen where you can do the data discovery. Clicking on START will start the processing of the source data provided. The pre-processed data will show various information related to the source data. In our case it shows that the Colors column has unique data whereas Shade has non-unique data, and its unique data rows are only two. In the next screen you can actually add more rows, as well as see the frequency of the data, as the values are listed as unique values. Clicking NEXT will publish the knowledge base which was just created.

    Now the knowledge base is created. We will take some random data and attempt a DQS implementation over it. I am using another Excel sheet here for simplicity; in reality you can easily use a SQL Server table for the same purpose. Click on New Data Quality Project to start a DQS project. In the next screen it will ask which knowledge base to use. We will be using our Colors knowledge base which we have just created. In the Colors knowledge base we had two columns – 1) Colors and 2) Shade. In our case we will be using both of the mappings here. The user can select one or multiple column mappings over here.

    Now comes the most important phase of the complete project. Click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed and it completed the task with the necessary information. It showed that in the Colors column it has not corrected any value by itself, but for the Shade column it does have a suggestion.
    We can train DQS to correct values, but let us keep that subject for future blog posts. Now click Next and keep the domain Colors selected on the left side. It will show that there are two incorrect values which need to be corrected. This is the place where, once a value has been corrected, it will be auto-corrected in the future. I manually corrected the values here and clicked on the Approve radio buttons. As soon as I click on the Approve buttons, the rows disappear from this tab and move to the Corrected tab. If I had rejected them, the rows would have moved to the Invalid tab instead. In this screen you can see how the two corrected rows are shown. You can click on the Correct tab and see the six previously validated rows which passed the DQS process.

    Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer as Dark with a confidence level of 77%. That is quite a high confidence level, and manual observation also demonstrates that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab.

    On the next screen DQS shows the summary of all the activities. It also demonstrates how the correction of the quality of the data was performed. The user can export their data to a SQL Server table, CSV file or Excel. The user also has an option to export either the data with all the associated cleansing info, or the data only. I will select Data only for demonstration purposes. Clicking Export will generate the file. Let us open the generated file. It looks as follows, and it looks pretty complete and corrected.

    Well, we have successfully completed the DQS process. The process is indeed very easy. I suggest you try this out yourself and you will find it very easy to learn. In the future we will go over advanced concepts. Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need? It will be indeed interesting to see where it is implemented.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • Prioritizing Product Features

    - by Robert May
    A very common task in Agile environments is prioritization. Teams that are functioning well will prioritize new features, old features, the backlog, and any other source of stories for the team, and they'll do it regularly. Not all teams are good at prioritizing according to the real return on investment that building stories will yield to the company. This is unfortunate. Too often, teams end up building features that are less valuable, and everyone seems to know it except perhaps the product owner! Most features built into software are never even used. Clearly, there's not much return for features that go unused.

    So how does a company avoid building features that add little value to the company? This is a tough question to answer, but usually, this prioritization starts at the top with the executives of the company. After all, they're responsible for the overall vision of the company. Here's what I recommend:

    Know your market.
    Know your customers and users.
    Know where you're going and what you want to achieve.
    Implement the vision.

    Know Your Market

    We often see companies that don't know their market. Personally, I'm surprised by this. These companies don't know who their competitors are, don't know what features make their product desirable in the market, and in many cases, get by with saying, "I've been doing this for XX years. I know what the market wants!" In many cases, they equate "marketing" with "advertising" and don't understand the difference. This is almost never true. Good companies will spend significant amounts of time and money finding out who they're competing against and what makes their competitors successful in the marketplace. Good companies understand that marketing involves more than just advertising. Often, marketing is mostly research and analysis, not sales. Until you understand your market, you cannot know what features will give you the best return on your investment dollar. Good companies have a marketing department and can answer the next important step, which is to know your customers and your users.

    Know Your Customers and Users

    First, note that I included both customers and users. They're often not the same thing. Users use the product that you build. Customers buy the product that you build. It's a subtle difference, but too often, I've seen companies that focus exclusively on one or the other and are not successful simply because they ignore an important part of the group. If your company is doing appropriate marketing, you know that these are two different aspects of your product and that both deserve attention to have a product that is successful in your target market. Your marketing department should be spending a lot of time understanding these personas and then conveying that information to the company.

    I'm always surprised when development teams think that they can build a product that people want to use without understanding the users of that product. Developers think differently than most people in the world. They know what the computer is doing. The computer isn't magic to them. So when they assume that they know how to build something, they bring with them quite a bit of baggage. Never assume that you know your customer unless you're regularly having interaction with them. Also, don't just leave this to Marketing or Product Management. Take the time to get your developers out with the customers as well.
    Developers are very smart people, and often, seeing how someone uses their software inspires them to make a much better product. Very often, because the users and customers aren't known, teams will spend a significant amount of time building apps that are super flexible and configurable so that any possible combination of features can be used. This demonstrates a clear lack of understanding of the customer. Most configuration questions can quickly be answered by talking to the customer. In most cases, if your software requires significant setup and configuration before it's usable, you probably don't know your customers and users very well. Until you know your customers, you cannot know what features will be most valuable to them, and you cannot build those features in a way that your customers can use.

    Know Where You're Going and What You Want to Achieve

    Many companies suffer from not having a plan. Executives will tell the team to make them a plan. The team, not knowing their market and customers and users, will come up with a plan that doesn't reflect reality and doesn't consider ROI. Management then wonders why the product is doing poorly in the marketplace. Instead of leaving this up to the teams, as executives, work with Marketing to understand what broad categories of features will sell the most product in the marketplace. Then, once you've determined that, give this vision to the team and let them run with it. Revise the vision as needed, but avoid changing streams frequently. Sure, sometimes you need to, but often, executives will change priorities many times a month, leading to nothing more than confusion. If the team has a vision, they'll be able to execute that vision far better than they could otherwise. By knowing what products are most important, you can set budgetary goals and guidelines that will help you achieve the vision that was created.

    Implement the Vision

    Creating the vision is often where the general executives stop participating in the plan. The team is responsible for implementing that vision. Executives should attend showcases and should remain aware of the progress that the team is making towards meeting the vision, however. Once a broad vision has been created, the team should break that vision down into minimal market features (MMFs). These MMFs should be sized using story points so that, using the team's velocity, an estimated cost can be determined for each feature. The product management team should then try to quantify the relative value of the MMFs based on customer feedback and interviews. Once the value and cost of creating a feature are understood, a return on investment can be calculated. The features should then be prioritized, with the MMFs that have the highest value and lowest cost rising to the top of the features to implement. Don't let politics get in the way! Once the MMFs have been prioritized, they should go through release planning to schedule them for implementation.

    Conclusion

    By having a good grasp on the strategy of the company, your Agile teams can be much more effective. Each and every story the team is implementing will roll up into features that matter to the company and provide ROI to them. The steps outlined in this post should be repeated on a regular basis. I recommend reviewing them at least once per quarter to make sure that the vision hasn't shifted and that the teams are still working on what matters most to the company.

    Technorati Tags: Agile,Product Owner,ROI

    Read the article

  • A simple Dynamic Proxy

    - by Abhijeet Patel
    Frameworks such as EF4 and MOQ do what most developers consider "dark magic". For instance in EF4, when you use a POCO for an entity you can opt in to behaviors such as "lazy-loading" and "change tracking" at runtime merely by ensuring that your type has the following characteristics:

    The class must be public and not sealed.
    The class must have a public or protected parameterless constructor.
    The class must have public or protected properties.

    Adhere to this and your type is magically endowed with these behaviors without any additional programming on your part. Behind the scenes the framework subclasses your type at runtime and creates a "dynamic proxy" which has these additional behaviors, and when you navigate properties of your POCO, the framework replaces the POCO type with derived type instances.

    The MOQ framework does similar magic. Let's say you have a simple interface:

        public interface IFoo
        {
            int GetNum();
        }

    We can verify that GetNum() was invoked on a mock like so:

        var mock = new Mock<IFoo>(MockBehavior.Default);
        mock.Setup(f => f.GetNum());
        var num = mock.Object.GetNum();
        mock.Verify(f => f.GetNum());

    Behind the scenes the MOQ framework is generating a dynamic proxy by implementing IFoo at runtime. The call to mock.Object returns the dynamic proxy, on which we then call GetNum, and then we verify that this method was invoked. No dark magic at all; just clever programming is what's going on here, just not visible, and hence it appears magical!

    Let's create a simple dynamic proxy generator which accepts an interface type and dynamically creates a proxy implementing the interface type specified at runtime.

        public static class DynamicProxyGenerator
        {
            public static T GetInstanceFor<T>()
            {
                Type typeOfT = typeof(T);
                var methodInfos = typeOfT.GetMethods();
                AssemblyName assName = new AssemblyName("testAssembly");
                var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
                var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
                var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

                typeBuilder.AddInterfaceImplementation(typeOfT);
                var ctorBuilder = typeBuilder.DefineConstructor(
                    MethodAttributes.Public,
                    CallingConventions.Standard,
                    new Type[] { });
                var ilGenerator = ctorBuilder.GetILGenerator();
                ilGenerator.EmitWriteLine("Creating Proxy instance");
                ilGenerator.Emit(OpCodes.Ret);

                foreach (var methodInfo in methodInfos)
                {
                    var methodBuilder = typeBuilder.DefineMethod(
                        methodInfo.Name,
                        MethodAttributes.Public | MethodAttributes.Virtual,
                        methodInfo.ReturnType,
                        methodInfo.GetParameters().Select(p => p.ParameterType).ToArray());
                    var methodILGen = methodBuilder.GetILGenerator();
                    methodILGen.EmitWriteLine("I'm a proxy");

                    if (methodInfo.ReturnType == typeof(void))
                    {
                        methodILGen.Emit(OpCodes.Ret);
                    }
                    else
                    {
                        if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
                        {
                            MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance", new Type[] { typeof(Type) });
                            LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
                            methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
                            methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
                            methodILGen.Emit(OpCodes.Callvirt, getMethod);
                            methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
                        }
                        else
                        {
                            methodILGen.Emit(OpCodes.Ldnull);
                        }
                        methodILGen.Emit(OpCodes.Ret);
                    }
                    typeBuilder.DefineMethodOverride(methodBuilder, methodInfo);
                }

                Type constructedType = typeBuilder.CreateType();
                var instance = Activator.CreateInstance(constructedType);
                return (T)instance;
            }
        }

    Dynamic proxies are created by calling into the following main types: AssemblyBuilder, TypeBuilder, ModuleBuilder and ILGenerator. These types enable dynamically creating an assembly and emitting .NET modules and types in that assembly, all using IL instructions. Let's break down the code above a bit and examine it piece by piece.

        Type typeOfT = typeof(T);
        var methodInfos = typeOfT.GetMethods();
        AssemblyName assName = new AssemblyName("testAssembly");
        var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
        var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
        var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

    We are instructing the runtime to create an assembly called "test.dll", and in this assembly we then emit a new module called "testModule". Into this new module we then emit a new type definition whose name is the name of T with a "Proxy" suffix. This is the definition for the "dynamic proxy" for type T.

        typeBuilder.AddInterfaceImplementation(typeOfT);
        var ctorBuilder = typeBuilder.DefineConstructor(
            MethodAttributes.Public,
            CallingConventions.Standard,
            new Type[] { });
        var ilGenerator = ctorBuilder.GetILGenerator();
        ilGenerator.EmitWriteLine("Creating Proxy instance");
        ilGenerator.Emit(OpCodes.Ret);

    The newly created type implements type T and defines a default parameterless constructor in which we emit a call to Console.WriteLine. This call is not necessary, but we do it so that we can see firsthand when the proxy is constructed and our default constructor is invoked.

        var methodBuilder = typeBuilder.DefineMethod(
            methodInfo.Name,
            MethodAttributes.Public | MethodAttributes.Virtual,
            methodInfo.ReturnType,
            methodInfo.GetParameters().Select(p => p.ParameterType).ToArray());

    We then iterate over each method declared on type T and add a method definition of the same name into our "dynamic proxy" definition.

        if (methodInfo.ReturnType == typeof(void))
        {
            methodILGen.Emit(OpCodes.Ret);
        }

    If the return type specified in the method declaration of T is void, we simply return.
        if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
        {
            MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance", new Type[] { typeof(Type) });
            LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
            methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
            methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
            methodILGen.Emit(OpCodes.Callvirt, getMethod);
            methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
        }

    If the return type in the method declaration of T is either a value type or an enum, then we need to create an instance of the value type and return that instance to the caller. In order to accomplish that we need to do the following:

    1) Get a handle to the Activator.CreateInstance method.
    2) Declare a local variable of the method's return type (obtained from the MethodInfo) and push the token for that return type onto the evaluation stack. In reality, a RuntimeTypeHandle is what is pushed onto the stack.
    3) Invoke the "GetTypeFromHandle" method (a static method on the Type class), passing in the RuntimeTypeHandle pushed onto the stack previously as an argument; the result of this invocation is a Type object (representing the method's return type) which is pushed onto the top of the evaluation stack.
    4) Invoke Activator.CreateInstance, passing in the Type object from step 3; the result of this invocation is an instance of the value type, boxed as a reference type and pushed onto the top of the evaluation stack.
    5) Unbox the result and place it into the local variable of the return type defined in step 2.

        methodILGen.Emit(OpCodes.Ldnull);

    If the return type is a reference type, then we just load a null onto the evaluation stack.

        methodILGen.Emit(OpCodes.Ret);

    Emit a return statement to return whatever is on top of the evaluation stack (null or an instance of a value type) back to the caller.

        Type constructedType = typeBuilder.CreateType();
        var instance = Activator.CreateInstance(constructedType);
        return (T)instance;

    Now that we have a definition of the "dynamic proxy" implementing all the methods declared on T, we can create an instance of the proxy type and return it typed as T. The caller can now invoke the generator and request a dynamic proxy for any type T. In our example, when the client invokes GetNum() we get back "0". Let's add a new method on the interface called DayOfWeek GetDay():

        public interface IFoo
        {
            int GetNum();
            DayOfWeek GetDay();
        }

    When GetDay() is invoked, the "dynamic proxy" returns "Sunday", since that is the default value for the DayOfWeek enum. This is a very trivial example of dynamic proxies; frameworks like MOQ have a far more sophisticated implementation of this paradigm, wherein you can instruct the framework to create proxies which return specified values for a method implementation.
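    Pulling it together, here is roughly how the generator would be exercised from a console app, using the IFoo interface and DynamicProxyGenerator class defined above (the comments note the output produced by the EmitWriteLine calls baked into the generated proxy):

        using System;

        class Program
        {
            static void Main()
            {
                // Prints "Creating Proxy instance" from the emitted constructor.
                IFoo proxy = DynamicProxyGenerator.GetInstanceFor<IFoo>();

                Console.WriteLine(proxy.GetNum());  // prints "I'm a proxy", then 0 (default int)
                Console.WriteLine(proxy.GetDay());  // prints "I'm a proxy", then Sunday (default DayOfWeek)
            }
        }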

    Read the article

  • Brighton Rocks: UA Europe 2011

    - by ultan o'broin
    User Assistance Europe 2011 was held in Brighton, UK. Having seen Quadrophenia a dozen times, I just had to go along (OK, I wanted to talk about messages in enterprise applications). Sadly, it rained a lot, though that was still eminently more tolerable than being stuck home in Dublin during Bloomsday. So, here are my somewhat selective highlights and observations from the conference, massively skewed towards my own interests, as usual.

    I enjoyed Leah Guren's (Cow TC) great opening 'keynote' on the Cultural Dimensions of Software Help Usage. Starting out by revisiting Hofstede's and Hall's work on culture (how many times have I done this for Multilingual magazine?) and then Nielsen's findings on age as an indicator of performance, Leah showed how it is the expertise of the user that user assistance (UA) needs to be designed for (especially for high-end users), with some considerations made for age, while the gender and culture of users are not major factors. Help also needs to be contextual and concise, embedded close to the action. That users are saying things like "If I want help on Office, I go to Google" isn't all that profound at this stage, but it is always worth reiterating how search can be optimized to return better results for users. Interestingly, regardless of user education level, the issue of information quality--hinging on the lynchpin of terminology reflecting that of the user--is critical. Major takeaway for me there.

    Matthew Ellison's sessions on embedded help and demos were also impressive. Embedded help that is concise and contextual is definitely a powerful UX enabler, and I'm pleased to say that in Oracle Fusion Applications we have embraced the concept fully. Matthew also mentioned in his session about successful software demos that the principle of modality with demos is a must. Look no further than the Oracle User Productivity Kit demos' See It!, Try It!, Know It!, and Do It! modes, for example.

    I also found some key takeaways in the presentation by Marie-Louise Flacke on notes and warnings. Here, legal considerations seemed to take precedence over providing any real information to users. I was delighted when Marie-Louise called out the Oracle JDeveloper documentation as an exemplar of how to use notes and instructions instead of trying to scare the bejaysus out of people while not providing them with any real information they'd find useful.

    My own session on designing messages for enterprise applications was well attended. Know your user profiles (remember, user expertise is the king maker for UA, so write for each audience involved), how users really work, the required application business and UI rules, what your application technology supports, and how messages integrate with the enterprise help desk and support policies, and you will go much further than relying solely on the guideline of "writing messages in plain language". And remember the value in warnings and confirmation messages too, and how you can use them smartly. I hope y'all got something from my presentation and from my answers to questions afterwards.

    Ellis Pratt stole the show with his presentation on applying game theory to software UA, using plenty of colorful, relevant examples (check out the Atlassian and DropBox approaches, for example), and striking just the right balance between theory and practice.
    I completely agree that the approach to take here is not to make UA itself a game, but to invoke UA as part of a bigger game dynamic (time-to-task completion, personal and communal goals, personal achievement and status, and so on). Sure, there are gotchas and limitations to gamification, and we need to do more research. However, we'll hear a lot more about this subject in coming years, particularly in the enterprise space. I hope.

    I also heard good things about the different sessions about DITA usage (including one by Sonja Fuga that clearly opens the door for major innovation in the community content space using WordPress), the progressive disclosure of information (Cerys Willoughby), an overview of controlled language (or "information quality", as I like to position it) solutions and rationale by Dave Gash, and others. I also spent time chatting with Mike Hamilton of MadCap Software, who showed me a cool demo of their Flare product, and the Lingo translation solution. I liked the idea of their licensing model for workers-on-the-go; that's smart UX-awareness in itself. Also chatted with Julian Murfitt of Mekon about uptake of DITA in the enterprise space.

    In all, it's worth attending UA Europe. I was surprised, however, not to see conference topics about mobile UA, community conversation and content, and search in its own right. These are unstoppable forces now, and the latter is pretty central to providing assistance to all but the most irredentist of hard-copy fetishists or advanced technical or functional users working away on the back end of applications and systems. Only saw one iPad too (says the guy who carries three laptops). Tweeting was pretty much nonexistent during the event, so no community energy there. Perhaps all this can be addressed next year. I would love to see the next UA Europe event come to Dublin (despite Bloomsday, it's not a bad place, really) now that hotels are so cheap and all.

    So, what is my overall impression of the state of user assistance in Europe? Clearly, there are still many people in the industry who feel there is something broken with the traditional forms of user assistance (particularly printed doc) and something needs to be done about it. I would suggest they move on and try to embrace change instead. Many others see new possibilities, offered by UX and technology, as well as the reality of online user behavior in an increasingly connected world, and that is encouraging. Such thought leaders need to be listened to.

    As Ellis Pratt says in his great book, Trends in Technical Communication - Rethinking Help: "To stay relevant means taking a new perspective on the role (of technical writer), and delivering "products" over and above the traditional manual and online Help file... there are a number of new trends in this field - some complementary, some conflicting. Whatever trends emerge as the norm, it's likely the status quo will change." It already has, IMO.

    I hear similar debates in the professional translation world about the onset of translation crowdsourcing (the Facebook model) and machine translation (trust me, that battle is over). Neither of these initiatives has put anyone out of a job and probably won't, though the nature of the work might change. If anything, such innovations have increased the overall need for professional translators as user expectations rise, new audiences emerge, and organizations need to collate and curate user-generated content, combining it with their own.
Perhaps user assistance professionals can learn from other professions and grow accordingly.

    Read the article

  • Why It Is So Important to Know Your Customer

    - by Christie Flanagan
    Over the years, I endured enough delayed flights, air turbulence and misadventures in airport security clearance to watch my expectations for the air travel experience fall to abysmally low levels. The extent of my loyalty to any one carrier had more to do with the proximity of the airport parking garage to their particular gate than to any effort on the airline's part to actually earn and retain my business. That all changed one day when I found myself at the airport hoping to catch a return flight home a few hours earlier than expected, using an airline I had flown with for the first time just that week.

    When you travel regularly for business, being able to catch a return flight home that's even an hour or two earlier than originally scheduled is a big deal. It can mean the difference between having a normal evening with your family and having to sneak in like a cat burglar after everyone is fast asleep. And so I found myself on this particular day hoping to catch an earlier flight home. I approached the gate agent and was told that I could go on standby for their next flight out. Then I asked how much it was going to cost to change the flight, knowing full well that I wouldn't get reimbursed by my company for any change fees. "Oh, there's no charge to fly on standby," the gate agent told me. I made a funny look. I couldn't believe what I was hearing. This airline was going to let me fly on standby, at no additional charge, even though I was a new customer with no status or points. It had been years since I'd seen an airline pass up a short term revenue generating opportunity in favor of a long term loyalty generating one. At that moment, this particular airline gained my loyal business.

    Since then, this airline has had the opportunity to learn a lot about me. They know where I live, where I fly from, where I usually fly to, and where I like to sit on the plane. In general, I've found their customer service to be quite good, whether at the airport, via call center and even through social channels. They email me occasionally, and when they do, they demonstrate that they know me by promoting deals for flights from where I live to places that I'd be interested in visiting. And that's part of why I'm always so puzzled when I visit their website.

    Does this company with the great service, customer friendly policies, and clean planes demonstrate that they know me at all when I visit their website? The answer is no. Even when I log in using my loyalty program credentials, it's pretty obvious that they're presenting the same old home page and same old offers to every single one of their site visitors. I mean, those promotional offers that they're featuring so prominently -- they're for flights that originate thousands of miles from where I live! There's no way I'd ever book one of those flights, and I'm sure I'm not the only one of their customers to feel that way.

    My reason for recounting this story is not to pick on the one customer experience flaw I've noticed with this particular airline; in fact, they do so many things right that I'll continue to fly with them. But I did want to illustrate just how glaringly obvious it is to customers today when a touch point they have with a brand is impersonal, unconnected and out of sync.
    As someone who's spent a number of years in the web experience management and online marketing space, it particularly peeves me when that out-of-sync touch point is a brand's website, perhaps because I know how important it is to make a customer's online experience relevant and how many powerful tools are available for making a relevant experience a reality.

    The fact is, delivering a one-size-fits-all online customer experience is no longer acceptable or particularly effective in today's world. Today's savvy customers expect you to know who they are and to understand their preferences, behavior and relationship with your brand. Not only do they expect you to know about them, but they also expect you to demonstrate this knowledge across all of their touch points with your brand in a consistent and compelling fashion, whether it be on your traditional website, your mobile web presence or through various social channels.

    Delivering the kind of personalized online experiences that customers want can have tremendous business benefits. This is not just about generating feelings of goodwill and higher customer satisfaction ratings either. More relevant and personalized online experiences boost the effectiveness of online marketing initiatives, and the statistics prove this out. Personalized web experiences can help increase online conversion rates by 70% -- that's a huge number.1 And more than three quarters of consumers indicate that they've made additional online purchases based on personalized product recommendations.2

    Now if only this airline would get on board with delivering a more personalized online customer experience. I'd certainly be happier and more likely to spring for one of their promotional offers. And by targeting relevant offers on their home page to appropriate segments of their site visitors, I bet they'd be happier and generating additional revenue too.

    *****

    If you're interested in hearing more perspectives on the benefits of demonstrating that you know your customers by delivering a more personalized experience, check out this white paper on creating a successful and meaningful customer experience on the web. Also catch the video below on the business value of CX in attracting new customers, featuring Oracle's VP of Customer Experience Strategy, Brian Curran.

    1 Search Engine Watch
    2 Marketing Charts

    Read the article

  • How to configure VPN in Windows XP

    - by SAMIR BHOGAYTA
    VPN Overview

    A VPN is a private network created over a public one. It's done with encryption; this way, your data is encapsulated and secure in transit – this creates the 'virtual' tunnel. A VPN is a method of connecting to a private network by way of a public network like the Internet. An Internet connection in a company is common. An Internet connection in a home is common too. With both of these, you could create an encrypted tunnel between them and pass traffic, safely and securely. If you want to create a VPN connection you will have to use encryption to make sure that others cannot intercept the data in transit while it traverses the Internet. Windows XP provides a certain level of security by using Point-to-Point Tunneling Protocol (PPTP) or Layer Two Tunneling Protocol (L2TP). They are both considered tunneling protocols – simply because they create that virtual tunnel just discussed, by applying encryption.

    Configure a VPN with XP

    If you want to configure a VPN connection from a Windows XP client computer, you only need what comes with the Operating System itself; it's all built right in. To set up a connection to a VPN, do the following:

    1. On the computer that is running Windows XP, confirm that the connection to the Internet is correctly configured.
       • You can try to browse the Internet.
       • Ping a known host on the Internet, like yahoo.com, something that isn't blocking ICMP.
    2. Click Start, and then click Control Panel.
    3. In Control Panel, double click Network Connections.
    4. Click Create a new connection in the Network Tasks task pad.
    5. In the Network Connection Wizard, click Next.
    6. Click Connect to the network at my workplace, and then click Next.
    7. Click Virtual Private Network connection, and then click Next.
    8. If you are prompted, you need to select whether you will use a dialup connection or if you have a dedicated connection to the Internet, either via Cable, DSL, T1, Satellite, etc. Click Next.
    9. Type a host name, IP or any other description you would like to appear in the Network Connections area. You can change this later if you want. Click Next.
    10. Type the host name or the Internet Protocol (IP) address of the computer that you want to connect to, and then click Next.
    11. You may be asked if you want to use a Smart Card or not.
    12. You are just about done; the rest of the screens just verify your connection. Click Next.
    13. Click to select the Add a shortcut to this connection to my desktop check box if you want one; if not, then leave it unchecked and click Finish.
    14. You are now done making your connection, but by default, it may try to connect. You can try the connection now if you know it's valid; if not, then just close it down for now.
    15. In the Network Connections window, right-click the new connection and select Properties. Let's take a look at how you can customize this connection before it's used.
    16. The first tab you will see is the General tab. This only covers the name of the connection, which you can also rename from the Network Connections dialog box by right-clicking the connection and selecting to rename it. You can also configure a "First connect", which means that Windows can connect to the public network (like the Internet) before starting to attempt the 'VPN' connection. This is a perfect example of when you would have configured the dialup connection; this would have been the first thing that you would have to do. It's simple: you have to be connected to the Internet first before you can encrypt and send data over it.
    This setting makes sure that this is a reality for you.
    17. The next tab is the Options tab. The Options tab has a lot you can configure in it. For one, you have the option to connect to a Windows domain: if you select this check box (unchecked by default), your VPN client will request Windows logon domain information while establishing the VPN connection. Also, you have options here for redialing. Redial attempts are configured here if you are using a dial-up connection to get to the Internet. It is very handy to redial if the line is dropped, as dropped lines are very common.
    18. The next tab is the Security tab. This is where you would configure basic security for the VPN client. This is where you would set any advanced IPSec configurations and other security protocols, as well as requiring encryption and credentials.
    19. The next tab is the Networking tab. This is where you can select what networking items are used by this VPN connection.
    20. The last tab is the Advanced tab. This is where you can configure options for a firewall and/or sharing.

    Connecting to Corporate

    Now that you have your XP VPN client all set up and ready, the next step is to attempt a connection to the Remote Access or VPN server set up at the corporate office. To use the connection, follow these simple steps. To open the client again, go back to the Network Connections dialog box.

    1. Once you are in the Network Connections dialog box, double-click the connection, or right-click it and select 'Connect' from the menu – this will initiate the connection to the corporate office.
    2. Type your user name and password, and then click Connect. Clicking Properties brings you back to what we just discussed in this article: all the global settings for the VPN client you are using.
    3. To disconnect from a VPN connection, right-click the icon for the connection, and then click "Disconnect".

    Summary

    In this article we covered the basics of building a VPN connection using Windows XP. This is very handy when you have a VPN device but don't have the 'client' that may come with it. If the VPN server doesn't use highly proprietary protocols, then you can use the XP client to connect with it. In a future article I will get into the nuts and bolts of both IPSec and more detail on how to configure the advanced options in the Security tab of this client.

    Common error codes you may run into:

    678: The remote computer did not respond.
    930: The authentication server did not respond to authentication requests in a timely fashion.
    800: Unable to establish the VPN connection.
    623: The system could not find the phone book entry for this connection.
    720: A connection to the remote computer could not be established.

    More on: http://www.windowsecurity.com/articles/Configure-VPN-Connection-Windows-XP.html
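    As an aside not covered in the original walkthrough: once the connection has been created in Network Connections, it can also be dialed and dropped from a script or from code via the built-in rasdial utility. A minimal C# sketch follows; the connection name and credentials are placeholders, and it assumes the connection was already configured as described above.

        using System.Diagnostics;

        class VpnDialer
        {
            static void Main()
            {
                // "Work VPN" is the placeholder name given to the connection in step 9 above.
                // rasdial ships with Windows and dials an existing Network Connections entry.
                var connect = Process.Start("rasdial", "\"Work VPN\" myUser myPassword");
                connect.WaitForExit();   // exit code 0 means the tunnel came up

                // ... use the tunnel ...

                // Hang up when finished.
                Process.Start("rasdial", "\"Work VPN\" /disconnect").WaitForExit();
            }
        }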

    Read the article

  • Of C# Iterators and Performance

    - by James Michael Hare
    Some of you reading this will be wondering, "what is an iterator?" and think I'm locked in the world of C++. Nope, I'm talking C# iterators. No, not enumerators, iterators. So, for those of you who do not know what iterators are in C#, I will explain them in summary, and for those of you who know what iterators are but are curious about the performance impacts, I will explore that as well.

    Iterators have been around for a bit now, and there are still a bunch of people who don't know what they are or what they do. I don't know how many times at work I've had a code review on my code and had someone ask me, "what's that yield word do?"

    Basically, this post came to me as I was writing some extension methods to extend IEnumerable<T> -- I'll post some of the fun ones in a later post. Since I was filtering the resulting list down, I was using the standard C# iterator concept; but that got me wondering: what are the performance implications of using an iterator versus returning a new enumeration?

    So, to begin, let's look at a couple of methods. This is a new (albeit contrived) method called Every(...). The goal of this method is to access an enumeration and return every nth item in the enumeration (including the first). So Every(2) would return items 0, 2, 4, 6, etc. Now, if you wanted to write this in the traditional way, you may come up with something like this:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            List<T> newList = new List<T>();
            int count = 0;

            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    newList.Add(i);
                }
            }

            return newList;
        }

    So basically this method takes any IEnumerable<T> and returns a new IEnumerable<T> that contains every nth item. Pretty straightforward.

    The problem? Well, Every<T>(...) will construct a list containing every nth item whether or not you care. What happens if you were searching this result for a certain item and found that item after five tries? You would have generated the rest of the list for nothing.

    Enter iterators. This C# construct uses the yield keyword to effectively defer evaluation of the next item until it is asked for. This can be very handy if the evaluation itself is expensive or if there's a fair chance you'll never want to fully evaluate a list. We see this all the time in Linq, where many expressions are chained together to do complex processing on a list. This would be very expensive if each of these expressions evaluated its entire possible result set on call. Let's look at the same example function, this time using an iterator:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            int count = 0;
            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    yield return i;
                }
            }
        }

    Notice it does not create a new return value explicitly; the only evidence of a return is the "yield return" statement. What this means is that when an item is requested from the enumeration, execution enters this method and evaluates until it either hits a yield return (in which case that item is returned), or exits the method or hits a yield break (in which case the iteration ends).
    Behind the scenes, this is all done with a class that the compiler generates to keep track of the state of the iteration, so that every time the next item is asked for, it finds that item and then updates the current position so it knows where to start next time.

    It doesn't seem like a big deal, does it? But keep in mind the key point here: it only returns items as they are requested. Thus if there's a good chance you will only process a portion of the return list and/or if the evaluation of each item is expensive, an iterator may be of benefit. This is especially true if you intend your methods to be chainable, similar to the way Linq methods can be chained. For example, perhaps you have a List<int> and you want to take every tenth one until you find one greater than 10. We could write that as:

        List<int> someList = new List<int>();
        // fill list here
        someList.Every(10).TakeWhile(i => i <= 10);

    Now is the difference more apparent? If we use the first form of Every, the one that makes a copy of the list, it's going to copy the entire list whether we will need those items or not, and that can be costly! With the iterator version, however, it will only take items from the list until it finds one that is > 10, at which point no further items in the list are evaluated.

    So, sounds neat, eh? But what's the cost, is what you're probably wondering. So I ran some tests using the two forms of Every above on lists varying from 5 to 500,000 integers and tried various things. Now, iteration isn't free. If you are more likely than not to iterate the entire collection every time, the iterator has some very slight overhead:

    Copy vs Iterator on 100% of Collection (10,000 iterations)

        Collection Size    Num Iterated    Type        Total ms
        5                  5               Copy        5
        5                  5               Iterator    5
        50                 50              Copy        28
        50                 50              Iterator    27
        500                500             Copy        227
        500                500             Iterator    247
        5,000              5,000           Copy        2,266
        5,000              5,000           Iterator    2,444
        50,000             50,000          Copy        24,443
        50,000             50,000          Iterator    24,719
        500,000            500,000         Copy        250,024
        500,000            500,000         Iterator    251,521

    Notice that when iterating over the entire produced list, the times for the iterator are a little better for smaller lists, then get just a slight bit worse for larger lists. In reality, given the number of items and iterations, the result is near negligible, but it goes to show that iterators come at a price. However, it should also be noted that the form of Every that returns a copy will have a left-over collection to garbage collect.

    However, if we only partially evaluate the list, the savings start to show and make it well worth the overhead. Let's look at what happens if you stop looking after 80% of the list:

    Copy vs Iterator on 80% of Collection (10,000 iterations)

        Collection Size    Num Iterated    Type        Total ms
        5                  4               Copy        5
        5                  4               Iterator    5
        50                 40              Copy        27
        50                 40              Iterator    23
        500                400             Copy        215
        500                400             Iterator    200
        5,000              4,000           Copy        2,099
        5,000              4,000           Iterator    1,962
        50,000             40,000          Copy        22,385
        50,000             40,000          Iterator    19,599
        500,000            400,000         Copy        236,427
        500,000            400,000         Iterator    196,010

    Notice that the iterator form is now operating quite a bit faster.
    But the savings really add up if you stop on average at 50% (which most searches would typically do):

    Copy vs Iterator on 50% of Collection (10,000 iterations)

        Collection Size    Num Iterated    Type        Total ms
        5                  2               Copy        5
        5                  2               Iterator    4
        50                 25              Copy        25
        50                 25              Iterator    16
        500                250             Copy        188
        500                250             Iterator    126
        5,000              2,500           Copy        1,854
        5,000              2,500           Iterator    1,226
        50,000             25,000          Copy        19,839
        50,000             25,000          Iterator    12,233
        500,000            250,000         Copy        208,667
        500,000            250,000         Iterator    122,336

    Now we see that if we only expect to go on average 50% into the results, we tend to shave off around 40% of the time. And this is only for one level deep. If we are using this in a chain of query expressions, it only adds to the savings.

    So my recommendation? If you have a reasonable expectation that someone may only want to partially consume your enumerable result, I would always tend to favor an iterator. The cost if they iterate the whole thing does not add much at all -- and if they consume only partially, you reap some really good performance gains.

    Next time I'll discuss some of my favorite extensions I've created to make development life a little easier and maintainability a little better.
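    Not from the original post, but a quick way to see the deferred evaluation for yourself is to wrap the source in an enumerable that logs each item as it is pulled. A minimal sketch, assuming the iterator form of Every shown above is in scope as an extension method:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class LazinessDemo
        {
            // Yields 1..count, logging each item the moment a consumer asks for it.
            static IEnumerable<int> Source(int count)
            {
                for (int i = 1; i <= count; i++)
                {
                    Console.WriteLine("pulled " + i);
                    yield return i;
                }
            }

            static void Main()
            {
                // With the iterator version of Every, only the first handful of "pulled" lines
                // print (the chain stops as soon as TakeWhile sees a value over 10); with the
                // copying version, all 1000 would print before TakeWhile ever ran.
                var result = Source(1000).Every(3).TakeWhile(i => i <= 10).ToList();
                Console.WriteLine(string.Join(", ", result)); // 1, 4, 7, 10
            }
        }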

    Read the article

  • Lessons from rewriting POP Forums for MVC, open source-like

    - by Jeff
    It has been a ton of work, interrupted over the last two years by unemployment, moving, a baby, failing to sell houses and other life events, but it's really exciting to see POP Forums v9 coming together. I'm not even sure when I decided to really commit to it as an open source project, but working on the same team as the CodePlex folks probably had something to do with it. Moving along the roadmap I set for myself, the app is now running on a quasi-production site... we launched MouseZoom last weekend. (That's a post-beta 1 build of the forum. There's also some nifty Silverlight DeepZoom goodness on that site.)I have to make a point to illustrate just how important starting over was for me. I started this forum thing for my sites in old ASP more than ten years ago. What a mess that stuff was, including SQL injection vulnerabilities and all kinds of crap. It went to ASP.NET in 2002, but even then, it felt a little too much like script. More than a year later, in 2003, I did an honest to goodness rewrite. If you've been in this business of writing code for any amount of time, you know how much you hate what you wrote a month ago, so just imagine that with seven years in between. The subsequent versions still carried a fair amount of crap, and that's why I had to start over, to make a clean break. Mind you, much of that crap is still running on some of my production sites in a stable manner, but it's a pain in the ass to maintain.So with that clean break, there is much that I have learned. These are a few of those lessons, in no particular order...Avoid shiny object syndromeOver the years, I've embraced new things without bothering to ask myself why. I remember spending the better part of a year trying to adapt this app to use the membership and profile API's in ASP.NET, just because they were there. They didn't solve any known problem. Early on in this version, I dabbled in exotic ORM's, even though I already had the fundamental SQL that I knew worked. I bloated up the client side code with all kinds of jQuery UI and plugins just because, and it got in the way. All the new shiny can be distracting, and I've come to realize that I've allowed it to be a distraction most of my professional life.Just query what you needI've spent a lot of time over-thinking how to query data. In the SQL world, this means exotic joins, special caches, the read-update-commit loop of ORM's, etc. There are times when you have to remind yourself that you aren't Facebook, you'll never be Facebook, and that databases are in fact intended to serve data. In a lot of projects, back in the day, I used to have these big, rich data objects and pass them all over the place, through various application tiers, when in reality, all I needed was some ID from the entity. I try to be mindful of how many queries hit the database on a given request, but I don't obsess over it. I just get what I need.Don't spend too much time worrying about your unit testsIf you've looked at any of the tests for POP Forums, you might offer an audible WTF. That's OK. There's a whole lot of mocking going on. In some cases, it points out where you're doing too much, and that's good for improving your design. In other cases it shows where your design sucks. But the biggest trap of unit testing is that you worry it should be prettier. That's a waste of time. When you write a test, in many cases before the production code, the important part is that you're testing the right thing. 
If you have to mock up a bunch of stuff to test the outcome, so be it, but it's not wasted time. You're still doing up the typical arrange-action-assert deal, and you'll be able to read that later if you need to.Get back to your HTTP rootsASP.NET Webforms did a reasonably decent job at abstracting us away from the stateless nature of the Web. A lot of people criticize it, but I think it all worked pretty well. These days, with MVC, jQuery, REST services, and what not, we've gone back to thinking about the wire. The nuts and bolts passing between our Web browser and server matters. This doesn't make things harder, in my opinion, it makes them easier. There is something incredibly freeing about how we approach development of Web apps now. HTTP is a really simple protocol, and the stuff we push through it, in particular HTML and JSON, are pretty simple too. The debugging points are really easy to trap and trace.Premature optimization is prematureI'll go back to the data thing for a moment. I've been known to look at a particular action or use case and stress about the number of calls that are made to the database. I'm not suggesting that it's a bad thing to keep these in mind, but if you worry about it outside of the context of the actual impact, you're wasting time. For example, I query the database for last read times in a forum separately of the user and the list of forums. The impact on performance barely exists. If I put it under load, exceeding the kind of load I expect, it still barely has an impact. Then consider it only counts for logged in users. The context of this "inefficient" action is that it doesn't matter. Did I mention I won't be Facebook?Solve your own problems firstThis is another trap I've fallen into. I've often thought about what other people might need for some feature or aspect of the app. In other words, I was willing to make design decisions based on non-existent data. How stupid is that? When I decided to truly open source this thing, building for myself first was a stated design goal. This app has to server the audiences of CoasterBuzz, MouseZoom and other sites first. In this development scenario, you don't have access to mountains of usability studies or user focus groups. You have to start with what you know.I'm sure there are other points I could make too. It has been a lot of fun to work on, and I look forward to evolving the UI as time goes on. That's where I hope to see more magic in the future.
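To make the arrange-act-assert point concrete, here is a generic sketch of a mock-heavy test.  It uses xUnit and Moq, and the types involved (IUserRepository, UserService) are hypothetical stand-ins for illustration, not code from the POP Forums test suite.

    using Moq;
    using Xunit;

    public interface IUserRepository
    {
        string GetEmail(string userName);
    }

    public class UserService
    {
        private readonly IUserRepository _repository;
        public UserService(IUserRepository repository) { _repository = repository; }
        public string GetEmail(string userName) { return _repository.GetEmail(userName); }
    }

    public class UserServiceTests
    {
        [Fact]
        public void GetEmail_DelegatesToRepository()
        {
            // Arrange: mock the dependency so the test needs no database.
            var repository = new Mock<IUserRepository>();
            repository.Setup(r => r.GetEmail("jeff")).Returns("jeff@example.com");
            var service = new UserService(repository.Object);

            // Act
            var email = service.GetEmail("jeff");

            // Assert: the outcome is what matters, not how pretty the setup is.
            Assert.Equal("jeff@example.com", email);
        }
    }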

    Read the article

  • CodePlex Daily Summary for Tuesday, October 25, 2011

    CodePlex Daily Summary for Tuesday, October 25, 2011Popular ReleasesScrum Task Board Card Creator: TaskCardCreator 2.5.0.0: What's New: UX improvement: Loading of work items is done in a worker thread Use memory, not the file system, when creating reports for better performance UI improvement: Better parent/child relation between the TeamProjectPicker and MainWindow Microsoft Visual Studio Scrum 1.0: Dedicated Impediment card added Supported Templates: Microsoft Visual Studio Scrum 1.0 Product Backlog Item, Task, Impediment, and Bug MSF for Agile Software Development v5.0 User Story, Task, and BugSubtitleTools: SubtitleTools 1.9: - Improved: Some DVD players need a mandatory UTF8-BOM (http://en.wikipedia.org/wiki/Byteordermark) at the beginning of the file to show RTL subtitles correctly.THE NVL Maker: The NVL Maker Ver 3.06: 3.06 ??? ??3.04,???CG MODE?? ??3.05—— ??????????????????????????????? ?????????config.tjs????????(??????????????????????????) ????config.tjs???,????????????,????????????Config.tjs。(???????,?????????,???KAGConfig??) ???????“????”??,??????????????????(Data) ??????????????????? ???????????,????????,??????????? ?fadeoutbgm?????????,???????????????,?????????? ????????,???“??????”??。(??????????macro_edu.ks??) ????????,????????A??????,???“??????”。 ?????????????,???????B??????,???“...Xomega Framework: Xomega.Framework 1.2: Release 1.2 of Xomega Framework refactors projects to allow generating assemblies for different target frameworks and profiles and allows deploying it as a NuGet package. It also includes some small fixes to the service and presentation layer classes.People's Note: People's Note 0.31: Added note tag editing. Changed note edit conflict resolution to keep the latest version. To install: copy the appropriate CAB file onto your WM device and run it.Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 – September 2011 Upgraded the tools tools to support the Windows Phone Developer Tools RTW Update SQL Azure only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package) Changed Shared Access Signature service interface to support more operations Refactored Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...xUnit.net Contrib: xunitcontrib-resharper 0.4.4 (dotCover): xunitcontrib release 0.4.4 (ReSharper runner) This release provides a test runner plugin for Resharper 6.0 RTM, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release addresses the following issues:Support for dotCover code coverage 4132 Note that this build work against ALL VERSIONS of xunit. The files are compiled against xunit.dll 1.8 - DO NOT REPLACE THIS FILE. Thanks to xunit's version independent runner system, this package can r...BookShop: BookShop: BookShop WP7 clientRibbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2122.266): Added CodePlex and PayPal links New icon Bug fix: can't connect to an IFD deployment when the discovery service url has been customizedDotNet.Framework.Common: DotNet.Framework.Common 4.0: ??????????,????????????XML Explorer: XML Explorer 4.0.5: Changes in 4.0.5: Added 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64 encoded strings, and copying them to the clipboard. 
Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on-demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...Media Companion: MC 3.419b Weekly: A couple of minor bug fixes, but the important fix in this release is to tackle the extremely long load times for users with large TV collections (issue #130). A note has been provided by developer Playos: "One final note, you will have to suffer one final long load and then it should be fixed... alternatively you can delete the TvCache.xml and rebuild your library... The fix was to include the file extension so it doesn't have to look for the video file (checking to see if a file exists is a...CODE Framework: 4.0.11021.0: This build adds a lot of our WPF components, including our MVVC and MVC components as well as a "Metro" and "Battleship" style.GridLibre para Visual FoxPro: GridLibre para Visual FoxPro v3.5: GridLibre Para Visual FoxPro: esta herramienta ayudara a los usuarios y programadores en los manejos de los datos, como Filtrar, multiseleccion y el autoformato a las columnas como la asignacion del controlsource.WiX Toolset: WiX v3.6 Beta: First beta release of WiX v3.6. The primary focus is on Burn but there are also many small bug fixes to the core toolset. For more information see: http://robmensching.com/blog/posts/2011/10/24/WiX-v3.6-Beta-releasedUmbraco CMS: Umbraco 5.0 CMS Alpha 3: Umbraco 5 Alpha 3Umbraco 5 (aka Jupiter) will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out the Alpha of v5 today! If you're new to Umbraco and would like to get a low-down on our popular and easy-to-learn approach to content management, check out our intro video. What's Alpha 3?This is our third Alpha release. It's intended for developers looking to become familiar with the codebase & architecture, or for thos...Vkontakte WP: Vkontakte: source codeWay2Sms Applications for Android, Desktop/Laptop & Java enabled phones: Way2SMS Desktop App v2.0: 1. Fixed issue with sending messages due to changes to Way2Sms site 2. Updated the character limit to 160 from 140GART - Geo Augmented Reality Toolkit: 1.0.1: About Release 1.0.1 Release 1.0.1 is a service release that addresses several issues and improves performance. As always, check the Documentation tab for instructions on how to get started. If you don't have the Windows Phone SDK yet, grab it here. Breaking Change Please note: There is a breaking change in this release. As noted below, the WorldCalculationMode property of ARItem has been replaced by a user-definable function. ARItem is now automatically wired up with a function that perform...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.32: Fix for issue #16710 - string literals in "constant literal operations" which contain ASP.NET substitutions should not be considered "constant." Move the JS1284 error (Misplaced Function Declaration) so it only fires when in strict mode. 
I got a couple complaints that people didn't like that error popping up in their existing code when they could verify that the location of that function, although not strict JS, still functions as expected cross-browser.New ProjectsApiDiff.Net: API Difference/Reporting project written in C# using the wonderful Cecil engine (http://www.mono-project.com/Cecil) to show differences between versions of a public API. Something like the unsupported MS tool "libcheck"Aspect Oriented Programming with .Net: Demo project using PostSharp for AOP in .Net.CCBBAA: CCBBAACdf.Iris.MiniProjetCalculateur: Application Client / Serveur calculateur en langage CCmjSiverlight: a silverlight libraryCSharpKaartSpel: Super Kaart Spel.Developer Team Article System Management: Developer Team Article System Management Makes it easier for Authors to publish their articles . It developed in VB.NET .DirectoryMonitor: DirectoryMonitorDNN Administrador de Tareas: Un Proyecto de codigo abierto Domek: Taki sobie domekEDXGraphics: This is Edward(Shiqiu) Liu's personal code repository. Projects here are primarily about Graphics, AI and Algorithms. For more introduction, please visit [url:www.edxgraphics.com] Enterprise SSIS Framework: The Enterprise SSIS Framework is a project intended to simplify the management of an organization’s administration and ETL assets by abstracting the coordination and control flow into metadata. This project consists of the four parts: - Core SSIS Packages: A set of packages that are responsible for workflow within the framework - SQL Server 2008 DB: Holds all the metadata necessary to drive the application - Administration App: A web-based front end to facilitate managing assets running ...FIM Task Sequencer "RunJob": Runjob is a simple batch controller for FIM (MIIS and ILM) for starting and monitoring the execution of FIM Run Profiles that allows for complex (looping and branching) sequences to be put together with a simple XML 'task' file. Key features include: * Works with MIIS / ILM / FIM * Optionally Connects to the SQL database to verify that run profiles exist * Can execute arbitrary tasks via a command line. * Based on the results of command line or previous run history can loop and jump ...Ham Bone Soup amateur radio software and electronic log: Ham Bone Soup is a open source software written for satisfying the needs of Amateur Radio operators. The central feature is an especially flexible logging system for acommodating contests and general purpose logs, but will grow to include many other vital Amateur radio features. It doesn't do anything yet, we are just geting started.H? th?ng qu?n lý chi tiêu - windows phone: H? th?ng qu?n lý chi tiêu - windows phoneJavascript Editor for SharePoint: Javascript Editor for SharePoint is a lightweight, in-browser editor that lets you quickly prototype and test custom javascript code in SharePointLegoWeb: Open source Web CMS base on ASP.NET Webparts + MARCXML metadata: LegoWeb is an open source web content management solution developed base on combination of ASP.NET 2.0 Webparts and MARCXML Metadata. It is very simple and very flexible. LuaXna: An simple engine that allow to devolop in LUA for XNA. It have logging feature and cycles.Manager Game: Student project for game developement course. 
Using XNA 4.0 for Windows platform.Metroed: A Metro version of the PhoneyTools (a WP7 toolkit) for use with C#/VB WinRT applications.My Masters Sample Project for Sharepoint 2010: The feature stabling example project which is provide to deploy custom master page to personel sites on Sharepoint 2010 The solution is anwering fallowing questions : * How to deploy a custom master page ? * How to customize a masterpage ? * How to attach custom master page to personal sites using stabling feature ? NDepend TFS 2010 integration: NDepend TFS 2010 integration provides build activities, a build report customization addin, and extends the TFS Warehouse in order to leverage NDepend quality statistics for your projects. Open Source Renderer: Open Source RendererPearTunes: Peartunes is a university project.Plugin Framework Web: Lighweight plugin framework for web applicationsQTP TFS Generic Test Integration: In Microsoft Visual Studio 2010 there is a Generic Test type that allows you to integrate with automation tools like QTP. I have created a pretty simple solution that allows you to use your existing QTP automation scripts within the Microsoft Visual Studio testing framework and here's a sample below. The key part of this solution is transforming HP's QTP format to Microsoft’s Generic Test format so that you can publish the results to TFS. The added benefit of this integration is that you c...RequiemDream: the project for manage workflow records reexecute to helping developers for debugging.Rift Addon Studio 2010: Rift AddOn Studio (AOS) is an open source development environment designed to bring a Visual Studio-like experience to building Rift AddOns. For more information on exactly what AOS does, check out the list of features below. Small Database Tools: This is a project for me to create small tools I created for database related functions.SQL Azure Membership, Profile, Role provider starter kit for MVC3 project: When you use out of box MVC 3 site template the Membership, Profile and Role provider DB is created as attached MDF file by the name AspNetDB.mdf. This project has needed steps to easily migrate this DB into SQL Azure. This is a complete MVC3 Starter site project and could be used for your next 'Big Thing' as soon as properly setup. Enjoy :)SQL Server BareMetal Hands-on-Labs: The SQL Server BareMetal Hands-on-Labs project allows you to build a SQL Server Hands-on-Lab Enviroment from scratch, using Windows Server 2008 R2 SP1 Hyper-V. State Theater Website: Building a website designed to provide information about live theater throughout a state. Basically, it a port of NJTheater.com from classic ASP to Asp.Net making it customizable to any state along the way.Static web generator: Zoltar let you generate your website (blog, personl website,etc) from a set of md files. Ukulele Chord Finder: Ukulele Chord FinderUser Profile Cleaner: This application allows you to manage via GUI or command line user profiles, removed the profiles no longer used by a number of days. The application allows you to set a list of profiles that can be excluded from deletion. Virtual Visit: This project have made the virtual visite for your siteWP7 Open source project collection by eLite: ??Windows Phone????????zebrawebservice: ??AWB,??OPS??

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about.  After analyzing the differences of time between a Dictionary with locking versus the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations.  Then, I made the classic blunder of thinking that because the original Dictionary with locking was faster for those write-heavy uses, it was the best choice for those types of tasks.  In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer is coding very early for a perceived (whether rightly-or-wrongly) performance gain and sacrificing good design and maintainability in the process.  At best, the performance gains are usually negligible and at worst, can either negatively impact performance, or can degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above.  I'm not talking about valid performance decisions.  There are decisions one should make when designing and writing an application that are valid performance decisions.  Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performance algorithms (linear search vs. binary search).  But these in my mind are macro optimizations.  The error is not in deciding to use a better data structure or algorithm, the anti-pattern as stated above is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking.  This would have been a mistake as I would be trading maintainability (ConcurrentDictionary requires no locking which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified and you don't risk the developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.  I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer.  I began college in in the 90s when C and C++ was king and hardware speed and memory were still relatively priceless commodities and not to be squandered.  In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance.  This is why in many cases such early code-bases were very hard to maintain.  I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead.  Now back then, that may have been a valid concern, but with today's modern hardware even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway) the results of removing method calls to speed up performance are negligible for the great majority of applications.  Now, obviously, there are those coding applications where speed is absolutely king (for example drivers, computer games, operating systems) where such sacrifices may be made.  
But I would strongly advice against such optimization because of it's cost.  Many folks that are performing an optimization think it's always a win-win.  That they're simply adding speed to the application, what could possibly be wrong with that?  What they don't realize is the cost of their choice.  For every piece of straight-forward code that you obfuscate with performance enhancements, you risk the introduction of bugs in the long term technical debt of the application.  It will become so fragile over time that maintenance will become a nightmare.  I've seen such applications in places I have worked.  There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for their application to try to squeeze out every ounce of performance.  Unfortunately, the application stability often suffers as a result and it is very difficult for anyone other than the original designer to maintain. I've even seen this recently where I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation).  To me this was almost a joke.  Twice as slow sounds bad, but it almost never as bad as you think -- especially if you're gaining safety.  The only time twice is really that much slower is when once was too slow to begin with.  Think about it.  2 minutes is slow as a response time because 1 minute is slow.  But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial!  The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations).  To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can.  I know it can be a hard habit to break, especially if you started out your career early or in a language such as C where they are very performance conscious.  But in reality, these type of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization.  I'm not sure where I first heard these, but they are so true: For beginners: Do not optimize. For experts: Do not optimize yet. This is so true.  If you're a beginner, resist the urge to optimize at all costs.  And if you are an expert, delay that decision.  As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient.  Chances are it will be network, database, or disk hits that will be your slow-down, not your code.  As they say, 98% of your code's bottleneck is in 2% of your code so premature-optimization may add maintenance and safety debt that won't have any measurable impact.  Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, then you should go back and optimize further.
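As a concrete illustration of the maintainability trade-off described above (a sketch of my own, not code from the original analysis): with a plain Dictionary every caller has to remember the lock, while ConcurrentDictionary keeps the synchronization out of the calling code entirely.

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    public class HitCounters
    {
        private readonly object _sync = new object();
        private readonly Dictionary<string, int> _locked = new Dictionary<string, int>();
        private readonly ConcurrentDictionary<string, int> _concurrent =
            new ConcurrentDictionary<string, int>();

        // Locked form: correct only as long as every reader and writer
        // remembers to take the same lock.
        public void IncrementLocked(string key)
        {
            lock (_sync)
            {
                int current;
                _locked.TryGetValue(key, out current);
                _locked[key] = current + 1;
            }
        }

        // Concurrent form: no locking in the caller, and the dictionary is
        // safe to enumerate even while it is being modified.
        public void IncrementConcurrent(string key)
        {
            _concurrent.AddOrUpdate(key, 1, (k, current) => current + 1);
        }
    }

The locked form may well benchmark slightly faster in some write-heavy scenarios, which is exactly the temptation the post warns about: the readability and safety you give up usually isn't worth it.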

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that Hard Disk Drive storage is significantly cheaper per GiByte than Solid State Devices – this is wholly inaccurate within the database space. People need to look at the cost of the complete solution and not just a single component part in isolation to what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5” SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012); I’ve not included FusionIO ioDrive because there is no public pricing available for it – something I never understand and personally when companies do this I immediately think what are they hiding, luckily in FusionIO’s case the product is proven though is expensive compared to OCZ enterprise offerings. On the face of it the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies – do not be fooled by this bullshit people! What is the requirement? The requirement is the database will have a static size of 400GB kept static through archiving so growth and trim will balance the database size, the client requires resilience, there will be several hundred call centre staff querying the database where queries will read a small amount of data but there will be no hot spot in the data so the randomness will come across the entire 400GB of the database, estimates predict that the IOps required will be approximately 4,000IOps at peak times, because it’s a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle – space required, estimated IOps and maximum latency per IO. Something to consider with regard SQL Server; write activity requires synchronous IO to the storage media specifically the transaction log; that means the write thread will wait until the IO is completed and hardened off until the thread can continue execution, the requirement has stated that 30% of the system activity will be write so we can expect a high amount of synchronous activity. The hardware solution needs to be defined; two possible solutions: hard disk or solid state based; the real question now is how many hard disks are required to achieve the IO throughput, the latency and resilience, ditto for the solid state. Hard Drive solution On a test on an HP DL380, P410i controller using IOMeter against a single 15Krpm 146GB SAS drive, the throughput given on a transfer size of 8KiB against a 40GiB file on a freshly formatted disk where the partition is the only partition on the disk thus the 40GiB file is on the outer edge of the drive so more sectors can be read before head movement is required: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk but the test file was 130GiB: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 528 IOps at an average latency of 217.49ms (4 MiB/s). 
From the result it is clear random performance gets worse as the disk fills up – I’m currently writing an article on short stroking which will cover this in detail. Given the work load is random in nature looking at the random performance of the single drive when only 40 GiB of the 146 GB is used gives near the IOps required but the latency is way out. Luckily I have tested 6 x 15Krpm 146GB SAS 15Krpm drives in a RAID 0 using the same test methodology, for the same test above on a 130 GiB for each drive added the performance boost is near linear, for each drive added throughput goes up by 5 MiB/sec, IOps by 700 IOps and latency reducing nearly 50% per drive added (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives 130 / 1, 130 / 2, 130 / 3 etc. so implicit short stroking is occurring because there is less file on each drive so less head movement required. The best latency is still 30 ms but we have the IOps required now, but that’s on a 130GiB file and not the 400GiB we need. Some reality check here: a) the drive randomness is more likely to be 50/50 and not a full 100% but the above has highlighted the effect randomness has on the drive and the more a drive fills with data the worse the effect. For argument sake let us assume that for the given workload we need 8 disks to do the job, for resilience reasons we will need 16 because we need to RAID 1+0 them in order to get the throughput and the resilience, RAID 5 would degrade performance. Cost for hard drives: 16 x £239.60 = £3,833.60 For the hard drives we will need disk controllers and a separate external disk array because the likelihood is that the server itself won’t take the drives, a quick spec off DELL for a PowerVault MD1220 which gives the dual pathing with 16 disks 146GB 15Krpm 2.5” disks is priced at £7,438.00, note its probably more once we had two controller cards to sit in the server in, racking etc. Minimum cost taking the DELL quote as an example is therefore: {Cost of Hardware} / {Storage Required} £7,438.60 / 400 = £18.595 per GB £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146 disks in RAID 10 (therefore 8 usable) giving an effective usable storage availability of 1168GB but the actual storage requirement is only 400 and the extra disks have had to be purchased to get the  IOps up. Solid State Drive solution A single card significantly exceeds the IOps and latency required, for resilience two will be required. ( £2,316.54 * 2 ) / 400 = £11.58 per GB With the SSD solution only two PCIe sockets are required, no external disk units, no additional controllers, no redundant controllers etc. Conclusion I hope by showing you an example that the myth that hard disk drives are cheaper per GiB than Solid State has now been dispelled - £11.58 per GB for SSD compared to £18.59 for Hard Disk. I’ve not even touched on the running costs, compare the costs of running 18 hard disks, that’s a lot of heat and power compared to two PCIe cards!Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :)I'll also deal with the myth that SSD's wear out at a later date as well - that's just way over done still, yes, 5 years ago, but now - no.
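The price-per-GB arithmetic above is easy to sanity-check.  Here is a quick throwaway calculation using only the figures quoted in the post (the program structure is mine, purely for illustration):

    using System;

    class CostPerGb
    {
        static void Main()
        {
            const decimal storageRequiredGb = 400m;

            // Hard disk solution: DELL PowerVault quote for 16 x 146GB 15Krpm drives in RAID 1+0.
            const decimal hddSolutionCost = 7438.60m;

            // Solid state solution: two OCZ 600GB Z-Drive R4 PCIe cards for resilience.
            const decimal ssdSolutionCost = 2316.54m * 2m;

            Console.WriteLine("HDD: £{0:F2} per GB", hddSolutionCost / storageRequiredGb);   // ~£18.60
            Console.WriteLine("SSD: £{0:F2} per GB", ssdSolutionCost / storageRequiredGb);   // ~£11.58
        }
    }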

    Read the article

  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems, not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions which will be consistent from one functional area to the next.Companies should take a project management approach when implementing an environment for a sustainable documentation program and do the following:1. Identify an Executive Champion2. Put together a winning team3. Assign ownership4. Centralize publishing5. Establish the Document Maintenance Process Up Front6. Document critical activities only7. Document actual practice8. Minimize documentation9. Support continuous improvement10. Keep it simple 1. Identify an Executive ChampionAppoint a top down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, the vice-president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when known to express the thinking for the chief executive officer and / or the president and to have his or her full support. 2. Put Together a Winning TeamChoose a strong Project Management Leader and staff the procedure planning team with management members from cross functional groups. Make sure team members have the responsibility - and the authority - to make things happen.The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles. 3. Assign Ownership It is virtually impossible to keep process documentation simple and meaningful if employees who are far removed from the activity itself create it. It is impossible to keep documentation up-to-date when responsibility for the document is not clearly understood.Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate practices.In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document. 4. Centralize Publishing Although it is tempting (especially in a networked environment and with document management software solutions) to decentralize the control of all documents -- with each owner updating and distributing his own -- Tutor promotes centralized publishing by assigning the Document Administrator (gate keeper) to manage the updates and distribution of the procedures library. 5. 
Establish a Document Maintenance Process Up Front (and stick to it) Everyone in your organization should know they are invited to suggest changes to procedures and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated. 6. Document Critical Activities Only The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity. If it does not help the employee, then there is no reason to maintain the document. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes, an activity will evolve to the point where documentation is necessary. For example, an activity performed by single employee may be straightforward and uncomplicated -- that is, until the activity is performed by multiple employees. Sometimes, it is the interaction between co-workers that necessitates documentation; sometimes, it is the complexity or the diversity of the activity.7. Document Actual Practices The only reason to maintain process documentation is to enhance the performance of the employee performing the activity. And documentation can only enhance performance if it reflects reality -- that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten.Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.8. Minimize Documentation One way to keep it simple is to document at the highest level possible. That is, include in your documents only as much detail as is absolutely necessary.When writing a document, you should ask yourself, What is the purpose of this document? That is, what problem will it solve?By focusing on this question, you can target the critical information.• What questions are the end users likely to have?• What level of detail is required?• Is any of this information extraneous to the document's purpose? Short, concise documents are user friendly and they are easier to keep up to date. 9. 
Support Continuous Improvement Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself -- but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees.Traditionally, process documentation has been used to dictate performance, to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content -- does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, to build consensus among the employees, to identify "best practices," and to communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action -- but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution, but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee. 10. Keep it Simple Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up-to-date. Keep it simply by:• Minimizing skills and training required• Following the established Tutor document format and layout• Avoiding technology just for technology's sake No other rule has as major an impact on the success of your internal documentation as -- keep it simple. Learn More For more information about Tutor, visit Oracle.Com or the Tutor Blog. Post your questions at the Tutor Forum.   Emily Chorba Principle Product Manager Oracle Tutor & BPM 

    Read the article

  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In proper functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing by simply looking at what we call Big Visible Charts. If you’ve done Scrum, you’ve seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I’ll focus on the Iteration Burndown and Cumulative Flow charts. I’ve included a copy of the spreadsheet that I used to create the charts, and if you don’t have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time. The Iteration Burndown and Cumulative Flow Charts This is the main chart that teams use. Although less useful to stakeholders, this chart is critical to the team and provides quite a bit of information to the team about how their iteration is going. Most charts are a combination of the charts below, so you may need to combine aspects of each section to understand what is happening in your iterations. Ideal Ah, isn’t that a pretty picture? Unfortunately, it’s also very unrealistic. I’ve seen iterations that come close to ideal, but never that match perfectly. If your iteration matches perfectly, chances are, someone is playing with the numbers. Reality is just too difficult to have a burndown chart that matches this exactly. Late Planning Iteration started, but the team didn’t. You can tell this by the fact that the real number of estimated hours didn’t appear until day two. In the cumulative flow, you can also see that nothing was defined in Day one and two. You want to avoid situations like this. You’ll note that the team had to burn faster than is ideal to meet the iteration because of the late planning. This often results in long weeks and days. Testing Starved Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more that developers not completing stories early enough or could be caused by stories being too big. With the cumulative flow, however, you see that only small bites are in progress and stories were completed early, but testing didn’t start testing until the end of the iteration, and didn’t complete testing all stories in the iteration. When this happens, question whether or not your testing resources are sufficient for your team and whether or not acceptance is adequately defined. No Testing With this one, both graphs show the same thing; the team needs testers and testing! Without testing, what was completed cannot be verified to make sure that it is acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today. Late Development With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not correctly estimating their hours or the team could have had illness or some other issue that affected them. Often, when teams are tackling something that is more unknown, they’ll run into technical barriers that cause the burn down to happen slower than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. 
This could be because of illness or technical barriers or simply poor estimation. Testing was able to keep up with everything that was completed, however. No Tool Updating When you see graphs that look like this, you can be assured that it’s because the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require that each team member updates the tool at least once daily. You should also check to see how well the team is breaking down stories into tasks. If they’re creating few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration. Scope Increase I always encourage team members to enter in however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably o.k., but estimation needs to get better. However, with the charts above, that’s clearly not what happened and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no. Scope Decrease Scope decreases are just as bad as scope increases. Usually, graphs above show that the team did a poor job of estimating their stories and part way through had to reduce scope to change the iteration. This will happen once in a while, but if you find it’s a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I’ll introduce a task I call “Uncertainty.” With Uncertainty, the team estimates how many hours they might need if things don’t go well with the tasks they’ve defined. They try to estimate things that could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and high estimate. Uncertainty should not just be an arbitrary buffer. It must correlate to real uncertainty in the tasks that have been defined. Stories are too Big Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at cumulative flow, you can see that at one point, everything is in progress. This is a bad thing. When you see graphs like this, you’re in one of two states. You may just have a very small team and can only handle one or two stories in your iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large high hour stories into smaller pieces that can be completed independently and accepted independently. If you don’t, you’ll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration and you’re much more likely to have the entire iteration fail, because of the limited amount of things that can be completed. Summary There are other charts that can be useful when doing scrum. 
If you don’t have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you’re seeing on your team, contact me with a screen capture of the charts and I’ll tell you what I’m seeing in those charts. I always want this information to be useful, so please let me know if you have other questions. Technorati Tags: Agile
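For reference, the "ideal" line in a burndown chart is simply a straight line from the planned hours down to zero over the iteration.  Here is a minimal sketch of how such a series could be generated if you are building your own spreadsheet or chart; the planned hours and iteration length are hypothetical values, not taken from the article.

    using System;

    class IdealBurndown
    {
        static void Main()
        {
            const double plannedHours = 400;   // hypothetical iteration plan
            const int iterationDays = 10;      // hypothetical iteration length

            // Ideal remaining hours decrease linearly to zero by the last day.
            for (int day = 0; day <= iterationDays; day++)
            {
                double remaining = plannedHours * (1.0 - (double)day / iterationDays);
                Console.WriteLine("Day {0}: {1:F1} hours remaining", day, remaining);
            }
        }
    }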

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
      Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bothered to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec, and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations. Dreaded @BlobEater What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to be have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathologic table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!) 
Failed Hypotheses Earlier on this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier on this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]) …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and setup a test. Yup, that'll do it The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column: CREATE DATABASE SparseColTest; GO USE SparseColTest; GO CREATE TABLE testTable (testCol smalldatetime SPARSE NULL); GO INSERT INTO testTable (testCol) VALUES (NULL); GO 1000000 That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. In my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; This runs extremely fast, as least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column: CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL; With the index in place now, let’s run DBCC CHECKDB one more time: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. 
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any of such indexes – they’re likely causing your consistency checks to run very, very slow. Happy CHECKDBing, -Argenis ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.
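If you want to act on the "go check" advice above, the suspect combination is a filtered index whose key includes a sparse column.  Here is one way to look for it from C# (a sketch only -- the connection string is a placeholder and the catalog-view query is mine, not Argenis'):

    using System;
    using System.Data.SqlClient;

    class FindSparseFilteredIndexes
    {
        static void Main()
        {
            const string sql = @"
                SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name
                FROM sys.indexes i
                JOIN sys.index_columns ic
                    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
                JOIN sys.columns c
                    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
                WHERE i.has_filter = 1
                  AND c.is_sparse = 1;";

            using (var connection = new SqlConnection("Server=.;Database=SparseColTest;Integrated Security=SSPI"))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Each hit is an index worth reviewing before your next CHECKDB window.
                        Console.WriteLine("{0}.{1}", reader.GetString(0), reader.GetString(1));
                    }
                }
            }
        }
    }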

    Read the article

  • Rapid Evolution of Society & Technology

    - by Michael Snow
    We caught up with Brian Solis on the phone the other day and Christie Flanagan had a chance to chat with him and learn a bit more about him and some of the concepts he'll be addressing in our Social Business Thought Leaders Webcast on Thursday 12/13/12. «--- Interview with Brian Solis. Be sure and register for this week's webcast ---» ------------------- Guest post by Brian Solis. Reposted (Borrowed) from his posting of May 24, 2012 Dear [insert business name], what’s your promise? - Brian Solis You say you want to get closer to customers, but your actions are different than your words. You say you want to “surprise and delight” customers, but your product development teams are too busy building against a roadmap without consideration of the 5th P of marketing…people. Your employees are your number one asset; however, the infrastructure of the organization has turned once optimistic and ambitious intrapreneurs into complacent cogs or, worse, your greatest detractors. You question the adoption of disruptive technology by your internal champions, yet you’ve not tried to find the value for yourself. You’re a change agent and you truly wish to bring about change, but you’ve not invested time or resources to answer “why” in your endeavors to become a connected or social business. If we are to truly change, we must find purpose. We must uncover the essence of our business and the value it delivers to traditional and connected consumers. We must rethink the spirit of today’s embrace and clearly articulate how transformation is going to improve customer and employee experiences and relationships now and over time. Without doing so, any attempts at evolution will be thwarted by reality. In an era of Digital Darwinism, no business is too big to fail or too small to succeed. These are undisciplined times which require alternative approaches to recognize and pursue new opportunities. But everything begins with acknowledging that the 360 view of the world that you see today is actually a filtered view of managed and efficient convenience. Today, many organizations that were once inspired by innovation and engagement have fallen into a process of marketing, operationalizing, managing, and optimizing. That might have worked for the better part of the last century, but for the next 10 years and beyond, new vision, leadership and supporting business models will be written to move businesses from rigid frameworks to adaptive and agile entities. I believe that today’s executives will undergo a great test; a test of character, vision, intention, and universal leadership. It starts with a simple, but essential question…what is your promise? Notice, I didn’t ask about your brand promise. Nor did I ask for you to cite your mission and vision statements.
This is much more than value propositions or manufactured marketing language designed to hook audiences and stakeholders. I asked for your promise to me as your consumer, stakeholder, and partner. This isn’t about B2B or B2C, but instead, people to people, person to person. It is this promise that will breathe new life into an organization that, on the outside, could be misdiagnosed as catatonic by those who are disrupting your markets. A promise, for example, is meant to inspire. It creates alignment. It serves as the foundation for your vision, mission, and all business strategies and it must come from the top to mean anything. For without it, we cannot genuinely voice what it is we stand for or stand behind. Think for a moment about the definition of community. It’s easy to confuse a workplace or a market where everyone simply shares common characteristics. However, a community in this day and age is much more than belonging to something; it’s about doing something together that makes belonging matter. The next few years will force a divide where companies are separated by intention as measured by actions and words. But, becoming a social business is not enough. Becoming more authentic and transparent doesn’t serve as a mantra for a renaissance. A promise is the ink that inscribes the spirit of the relationship between you and me. A promise serves as the words that influence change from within and change beyond the halls of our business. It is the foundation for a renewed embrace, one that must then find its way to every aspect of the organization. It’s the difference between a social business and an adaptive business. While an adaptive business can also be social, it is the culture of the organization that strives to not just use technology to extend current philosophies or processes into new domains, but instead give rise to a new culture where striving for relevance is among its goals. The tools and networks simply become enablers of a greater mission. You are reading this because you believe in something more than what you’re doing today. While you fight for change within your organization, remember to aim for a higher purpose. Organizations that strive for innovation, imagination, and relevance will outperform those that do not. Part of your job is to lead a missionary push that unites the groundswell with a top down cascade. Change will only happen because you and other internal champions see what others can’t and will do what others won’t. It takes resolve. It takes the ability to translate new opportunities into business value. And, it takes courage. “This is a very noisy world, so we have to be very clear what we want them to know about us” - Steve Jobs ----------------------------------------------------------------- So -- where do you begin to evaluate the kind of experience you are delivering for your customers, partners, and employees?  Take a look at this White Paper: Creating a Successful and Meaningful Customer Experience on the Web and then have a cup of coffee while you listen to the sage advice of Guy Kawasaki in a short video below.   An interview with Guy Kawasaki on Maximizing Social Media Channels

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback to running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is outputted instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment.
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)
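For illustration, the drift check described above could be wired into the loop body along these lines. This is a rough sketch only: it reuses the $scriptsPath and $SQLComparePath variables and the command-line switches shown earlier, and it assumes (as noted in the post) that sqlcompare.exe returns exit code 0 when the schemas are identical and 79 when differences are found – verify the exit codes against your version before relying on them.

# Sketch: drift check before deployment (place inside the ForEach-Object block above).
# Assumes exit code 0 = identical, 79 = differences, as described in the post.
$driftArgs = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")
& $SQLComparePath $driftArgs

if ($LASTEXITCODE -eq 0) {
    # Target matches the expected state - safe to deploy.
    $deployArgs = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium", "/sync")
    & $SQLComparePath $deployArgs
    "Deployed to $($_.serverName).$($_.databaseName) (exit code $LASTEXITCODE)"
}
elseif ($LASTEXITCODE -eq 79) {
    # Differences found - generate a drift report instead of deploying.
    $reportArgs = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/report:drift_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive")
    & $SQLComparePath $reportArgs
    "Drift detected on $($_.serverName).$($_.databaseName) - see the drift report before deploying"
}
else {
    "SQL Compare returned unexpected exit code $LASTEXITCODE for $($_.databaseName)"
}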

    Read the article

  • CodePlex Daily Summary for Sunday, October 23, 2011

    CodePlex Daily Summary for Sunday, October 23, 2011Popular ReleasesView Layout Replicator for Microsoft Dynamics CRM 2011: View Layout Replicator (1.0.921.51): Added CodePlex and PayPal links New iconSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.921.340): Added CodePlex and PayPal links New iconRibbon Browser for Microsoft Dynamics CRM 2011: Ribbon Browser (1.0.922.41): Added CodePlex and PayPal links New iconMVCQuick: MVCQuick 0.3.1: Features??NHibernate 3.2??Repository(ORuM) ??Spring.Net 1.3.2??Container(IoC) ??Common.Logging 1.2??Logging ASP.NET Security Provider?? ??MVCQuick.Framework??MusicStoreElysium: Elysium Theme 1.1 (CTP 1): === Version history === Elysium Theme: Version 1.1 This is pre-release Community Technology Preview version. We recommended use it only for testing and studying project's possibilities. This version included: styles for: ContextMenu MenuItem (partially) bug fixes for: CommandButton: bug #598 ComboBox: bug #599 Window: bug #605 Elysium Theme: Version 1.0 This version included: classes: ThemeManager (with standart Windows Phone colors) CommandButton, RepeatCommandButton, ToggleC...DotNet.Framework.Common: DotNet.Framework.Common 4.0: ??????????,????????????XML Explorer: XML Explorer 4.0.5: Changes in 4.0.5: Added 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64 encoded strings, and copying them to the clipboard. Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on-demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...Media Companion: MC 3.419b Weekly: A couple of minor bug fixes, but the important fix in this release is to tackle the extremely long load times for users with large TV collections (issue #130). A note has been provided by developer Playos: "One final note, you will have to suffer one final long load and then it should be fixed... alternatively you can delete the TvCache.xml and rebuild your library... The fix was to include the file extension so it doesn't have to look for the video file (checking to see if a file exists is a...CODE Framework: 4.0.11021.0: This build adds a lot of our WPF components, including our MVVC and MVC components as well as a "Metro" and "Battleship" style.GridLibre para Visual FoxPro: GridLibre para Visual FoxPro v3.5: GridLibre Para Visual FoxPro: esta herramienta ayudara a los usuarios y programadores en los manejos de los datos, como Filtrar, multiseleccion y el autoformato a las columnas como la asignacion del controlsource.Self-Tracking Entity Generator for WPF and Silverlight: Self-Tracking Entity Generator v 0.9.9: Self-Tracking Entity Generator v 0.9.9 for Entity Framework 4.0Umbraco CMS: Umbraco 5.0 CMS Alpha 3: Umbraco 5 Alpha 3Umbraco 5 (aka Jupiter) will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out the Alpha of v5 today! If you're new to Umbraco and would like to get a low-down on our popular and easy-to-learn approach to content management, check out our intro video. What's Alpha 3?This is our third Alpha release. 
It's intended for developers looking to become familiar with the codebase & architecture, or for thos...Vkontakte WP: Vkontakte: source codeWay2Sms Applications for Android, Desktop/Laptop & Java enabled phones: Way2SMS Desktop App v2.0: 1. Fixed issue with sending messages due to changes to Way2Sms site 2. Updated the character limit to 160 from 140GART - Geo Augmented Reality Toolkit: 1.0.1: About Release 1.0.1 Release 1.0.1 is a service release that addresses several issues and improves performance. As always, check the Documentation tab for instructions on how to get started. If you don't have the Windows Phone SDK yet, grab it here. Breaking Change Please note: There is a breaking change in this release. As noted below, the WorldCalculationMode property of ARItem has been replaced by a user-definable function. ARItem is now automatically wired up with a function that perform...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.32: Fix for issue #16710 - string literals in "constant literal operations" which contain ASP.NET substitutions should not be considered "constant." Move the JS1284 error (Misplaced Function Declaration) so it only fires when in strict mode. I got a couple complaints that people didn't like that error popping up in their existing code when they could verify that the location of that function, although not strict JS, still functions as expected cross-browser.Naked Objects: Naked Objects Release 4.0.110.0: Corresponds to the packaged version 4.0.110.0 available via NuGet. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wish to re-build the framework, consul the file HowToBuild.txt in the release. Documentation Please note that after ...myCollections: Version 1.5: New in this version : Added edit type for selected elements Added clean for selected elements Added Amazon Italia Added Amazon China Added TVDB Italia Added TVDB China Added Turkish language You can now manually add artist Added Order by Rating Improved Add by Media Improved Artist Detail Upgrade Sqlite engine View, Zoom, Grouping, Filter are now saved by category Added group by Artist Added CubeCover View BugFixingIronPython: 2.7.1 RC: This is the first release candidate of IronPython 2.7.1. Like IronPython 54498, this release requires .NET 4 or Silverlight 4. This release will replace any existing IronPython installation. If there are no showstopping issues, this will be the only release candidate for 2.7.1, so please speak up if you run into any roadblocks. The highlights of 2.7.1 are: Updated the standard library to match CPython 2.7.2. Add the ast, csv, and unicodedata modules. Fixed several bugs. IronPython To...Rawr: Rawr 4.2.6: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. 
The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...New Projects"Cupa Timisului" evaluation app: The application is used for evaluating CABRILLO log file for "Cupa Timisului" HAM contest. You can use this code as a startup for ham contest log evaluating software... It's developed in C#.Afrihost Capped Account Monitoring Gadget: The Afrihost Monitoring Gadget is a Windows gadget to monitor the usage on your Afrihost capped account. This project is independently developed and not associated with Afrihost. It has been developed by an Afrihost client and not Afrihost themselves. Custom ORM for .NET: This project represents tiny "Custom ORM" system written in .NET (3.5 as of now). It has strongly typed mapping like in FluentNH. It allows you to change underlying data access logic on the fly. It is simple enough to grag-&-drop in your project and than change as you like.diagnostic medical system: Medical diagnostic system. Simple academic project using BiztTalk Bussines Rule Engine. DotNetNuke Kitchen Sink: A sample module project for DotNetNuke with a variety of different scenarios covered.ecBlog: ecBlog is a very simple blog application. Just run and use. Technology Choice I developed the site as expected with the MVC and HTML 5. Why MVC? In fact there is no one reason. I developed with one of the many features of MVC . MVC comes with a specific architecture, it also conFastPizza: This is a project to delivery stores, restaurants, and othersGB2312 for Silverlight: This class is for support GB2312 simplified Chinese characters for Silverlight(include Windows Phone 7) Application and inherited from Encoding abstract class. It's developed in CSharp. ?????? Silverlight(?? Windows Phone 7)?????? GB2312 ???????,? Encoding ?????。?? C# ????。Ginnay Distributed Downloader: Distributed Downloader using multiple proxiesIn for Consideration - EGR101 Rocket Launch Sequencer: In for Consideration's EGR101 RLS is an executable version of the simplified launch sequence presented in class materials of "Introduction to Engineering" at Embry-Riddle Aeronautical University in Daytona Beach, FL. Source code is available for those interested (C# only).luminji's core lib: luminji's core lib, provide the common utility of the c#.Muki erp System: MukiERP, features. MukiERP is a free, user-friendly, web-based ERP system. MukiERP is Open Source licensed on GPL. MukiERP is in active development and is constantly improved according to its users needs. MukiERP is written in .Net C# language. MukiERP is running well on a ASP.NET and MSSQL. NameDOB: This is for sharing a specific sample with a specific group.network utility: this is a project for working with network API.PGS: (functional) Program Generator from Spreadsheets: This project allows the generation of a functional program semantically equivalent to a given spreadsheet. Using this system, you can: - solve the calculation expressed by the user using a compiled approach. - use spreadsheets as a tool for programming by example.pkrss: c++ version:pkrss.sf.net csharp version:is here. pkrss.sf.net is c++ version desktop productor written by qt 4.7.3. 
pkrss.codeplex is csharp version web productor.SharePoint Log Browser: The SharePoint log browser is yet another way to view the log of SharePoint.SharpChip-8: Chip-8 Emulator written in C#SQL Server Stored Procedure best practices: This SQL Server stored procedure best practice guide contains documentations of best practices and helper tools to enhance further match with the best practices. sqlsearch: Hi, Googling gives me many search tools. But all tools are not efficient or not able to search into data. So I thought why developers on codeplex and I will not find out some solution for this same. All of you are invited to contribute in this project. Thank you, Hiren V.Suffix Tree in C# and F#: SuffixTree builds a suffix tree structure. A simple client shows how to find substrings in it, and the visual client shows the actual tree. Implemented in C# and F#.Test11: it is a test projectThe Seal: The Seal is a basic Open Source 2D Fantasy Based RPG(Role Playing Game) for Windows. More info coming soon.Toolpack: Updated and improves version silverlight toolkit and wpf toolkit.Unity Azure Setting Injector: Using Unity in Windows Azure made simple. Ever considered moving to Windows Azure, but didn't know how to inject setting from your Service Configuration file? Just reference this project and you will be able to inject Azure Storage Account Connection Strings & Local Storage Paths

    Read the article

  • CodePlex Daily Summary for Thursday, November 17, 2011

    CodePlex Daily Summary for Thursday, November 17, 2011Popular ReleasesSharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblySQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 5: 1. added basic schema support 2. added server instance name and process id 3. fixed problem with object search index out of range 4. improved version comparison with previous/next difference navigation 5. remeber main window spliter and object explorer spliter positionAJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release Version 51116November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.7: Reworked API to be more consistent. See Supported formats table. Added some more helper methods - e.g. OpenEntryStream (RarArchive/RarReader does not support this) Fixed up testsFiles Name Copier: 1.0.0.1: Files Name Copier is a simple easy to use utility that allows you to drag and drop any number of files onto it. The clipboard will now contain the list of files you dropped.Silverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new ListPicker once again works in a ScrollViewer LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored to be the public type LongListSelectorItem to provide users better access to the values in selection changed handlers. 
PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others) There is no longer a GestureListener dependency with the C...DotNetNuke® Community Edition: 06.01.01: Major Highlights Fixed problem with the core skin object rendering CSS above the other framework inserted files, which caused problems when using core style skin objects Fixed issue with iFrames getting removed when content is saved Fixed issue with the HTML module removing styling and scripts from the content Fixed issue with inserting the link to jquery after the header of the page Security Fixesnone Updated Modules/Providers ModulesHTML version 6.1.0 ProvidersnoneDotNetNuke Performance Settings: 01.00.00: First release of DotNetNuke SQL update queries to set the DNN installation for optimimal performance. Please review and rate this release... (stars are welcome)SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with following changes since last version: Added "Wake On LAN" action. WOL.EXE is now included. Added new action "Get all active advertisements" to list all machine based advertisements on remote computers. Added new action "Get all active user advertisements" to list all user based advertisements for logged on users on remote computers. Added config.ini setting "enablePingTest" to control whether ping test is ru...QuickGraph, Graph Data Structures And Algorithms for .Net: 3.6.61116.0: Portable library build that allows to use QuickGraph in any .NET environment: .net 4.0, silverlight 4.0, WP7, Win8 Metro apps.Devpad: 4.7: Whats new for Devpad 4.7: New export to Rich Text New export to FlowDocument Minor Bug Fix's, improvements and speed upsC.B.R. : Comic Book Reader: CBR 0.3: New featuresAdd magnifier size and scale New file info view in the backstage Add dynamic properties on book and settings Sorting and grouping in the explorer with new design Rework on conversion : Images, PDF, Cbr/rar, Cbz/zip, Xps to the destination formats Images, Cbz and XPS ImprovmentsSuppress MainViewModel and ExplorerViewModel dependencies Add view notifications and Messages from MVVM Light for ViewModel=>View notifications Make thread better on open catalog, no more ihm freeze, less t...Desktop Google Reader: 1.4.2: This release remove the like and the broadcast buttons as Google Reader stopped supporting them (no, we don't like this decission...) Additionally and to have at least a small plus: the login window now automaitcally logs you in if you stored username and passwort (no more extra click needed) Finally added WebKit .NET to the about window and removed Awesomium MD5-Hash: 5fccf25a2fb4fecc1dc77ebabc8d3897 SHA-Hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdbRDRemote: Remote Desktop remote configurator V 1.0.0: Remote Desktop remote configurator V 1.0.0Rawr: Rawr 4.2.7: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...VidCoder: 1.2.2: Updated Handbrake core to svn 4344. Fixed the 6-channel discrete mixdown option not appearing for AAC encoders. 
Added handling for possible exceptions when copying to the clipboard, added retries and message when it fails. Fixed issue with audio bitrate UI not appearing sometimes when switching audio encoders. Added extra checks to protect against reported crashes. Added code to upgrade encoding profiles on old queued items.Composite C1 CMS: Composite C1 3.0 RC3 (3.0.4332.33416): This is currently a Release Candidate. Upgrade guidelines and "what's new" are pending.Media Companion: MC 3.422b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Made the TV Shows folder list sorted. Re-visibled 'Manually Add Path' in Root Folders. Sorted list to process during new tv episode search Rebuild Movies now processes thru folders alphabetically Fix for issue #208 - Display Missing Episodes is not popu...DotSpatial: DotSpatial Release Candidate: Supports loading extensions using System.ComponentModel.Composition. DemoMap compiled as x86 so that GDAL runs on x64 machines. How to: Use an Assembly from the WebBe aware that your browser may add an identifier to downloaded files which results in "blocked" dll files. You can follow the following link to learn how to "Unblock" files. Right click on the zip file before unzipping, choose properties, go to the general tab and click the unblock button. http://msdn.microsoft.com/en-us/library...New Projects#WP7_chameleon: Simple augmented reality app for WP7 that explores the new Mango webcam API. ASP.NET MVC Manialink Extensions: ASP.NET MVC Manialink extensions are a set of extensions for the Razor HtmlHelper class that programmatically generate Manialink XML elements. It's developed in C# (.NET 4.0) for the ASP.NET MVC 3 framework.Basic Text Games in C#: Have fun converting some old B.A.S.I.C. text games into C# using MEF for plug-in capabilities. This is a good project for beginners to learn how to follow logic and convert between languages.Chuanzhuo: Chuanzhuo projCountriesGame: A simple game, players needs to find a country starts with given letterDeployAssistant: You can use it to update you website EntSGE: SGEExtended Immediate Window Mark 2: An Extended Immediate Window for Visual Studio that can use multiple lines of code at a time rather then just one.Facebook Suite Connect Orchard module: Part of the Facebook Suite Orchard module that provides simple login functionality with Facebook ConnectGet TFS 2010 Users: Use this tool to read extended group membership of a TFS 2010 Server. GoogleTestAddin: GoogleTestAddin is an Add-In for Visual Studio 2010. It makes it easier to execute/debug googletest functions by selecting them. You'll no longer have to set the command arguments of your test application. 
The googletest output is redirected to the Visual Studio output window.Home Financial Manager: ;photel_organizer: Hotel organizer project from a group of students at the university deggendorfInfoCine: Aplicación en Windows Phone 7 para dar información acerca de la cartelera de los cines.Injection of values using Annotation and Reflection: Example of reflection and annotation to inject values in variables in java.Interviews: My implementation of some common problems asked at various interviews.iPipal: adsfadsfadISDS: Informacni systemy a datove skladyivvy vendor management CMS: This is vendor CMS module of the ivvy suite Ross BrodskiyKaartenASPApplicatie: ASP.Net applicatieMap Viet Nam: Control Map VietNamMoney Transfer: A project for rotating money. Software for brokering service.Music Renamer: Renames music files into the format of "00 Track Name.ext"NetOS: NetOS is an Operating System designed for Netbooks and Tablets.Orchard Audit: For Orchard CMS, a framework for logging common audit events such as viewing, editing, deleting content - allowing you to track who changed what and when, with an extensible API to audit any custom events, and configurable in admin with the Rules engine in Orchard 1.3.Orchard Layout Selector: A simple part for switching to different versions of Layout.cshtml when editing Orchard content items.PT Kurejas: This is a small tool to create image tests for web sites. Now is only available only in lithuanian language.Reactive State Machine: Reactive State Machine is a State Machine implementation based on Reactive Extensions. It plays nicely with the Visual State Manager of WPF.Root finding and related routines: C++ generic algorithms for finding roots of floating point functions.Shaping Menu Manager: Shaping Menu ManagerShuttleBus: ShuttleBus is a lightweight Service Bus implementation designed to be easily extensible and unobtrusive.silverlight ? flash? policy ??: silverlight? flash? policy socket server? ??Technical trading indicators: Small classes for computing indicators used in technical analysis.Umbraco Active Directory Role Provider: Active Directory Role Provider for umbraco membership (and sample user control) With this role provider, a user control and some IIS configuration you can do seamless authentication (no login prompt) onto your umbraco site and user the Active Directory roles insider umbraco.UtilityLibrary.Security: UtilityLibrary.SecurityUtilityLibrary.Serialize: UtilityLibrary.SerializeUtilityLibrary.StringOperator: UtilityLibrary.StringOperatorUtilityLibrary.Task: UtilityLibrary.TaskVSTweaker: VSTweaker allows you to modify the components that Visual Studio loads. This can lead to performance benefits by reducing the overhead of unused components. XNA Simple Game Engine: XNA Simple Game Engine is a small and simple engine that makes it easy to make simple projects with XNA 4.0r and C#.

    Read the article

  • DRY and SRP

    - by Timothy Klenke
Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx Kent Beck’s XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that when applied to your code generally yield a great design.  As you’ll see from the above link the list has slightly evolved over time.  I find today they are usually listed as: (1) All Tests Pass, (2) Don’t Repeat Yourself (DRY), (3) Express Intent, (4) Minimalistic. These are prioritized.  If your code doesn’t work (rule 1) then everything else is forfeit.  Go back to rule one and get the code working before worrying about anything else. Over the years the community have debated whether the priority of rules 2 and 3 should be reversed.  Some say a little duplication in the code is OK as long as it helps express intent.  I’ve debated it myself.  This recent post got me thinking about this again, hence this post.   I don’t think it is fair to compare “Expressing Intent” against “DRY”.  This is a comparison of apples to oranges.  “Expressing Intent” is a principle of code quality.  “Repeating Yourself” is a code smell.  A code smell is merely an indicator that there might be something wrong with the code.  It takes further investigation to determine if a violation of an underlying principle of code quality has actually occurred. For example “using nouns for method names”, “using verbs for property names”, or “using Booleans for parameters” are all code smells that indicate that code probably isn’t doing a good job at expressing intent.  They are usually very good indicators.  But what principle is the code smell of Duplication pointing to and how good of an indicator is it? Duplication in the code base is bad for a couple of reasons.  If you need to make a change and that change needs to be made in a number of locations it is difficult to know if you have caught all of them.  This can lead to bugs if/when one of those locations is overlooked.  By refactoring the code to remove all duplication you will be left with only one place to change, thereby eliminating this problem. With most projects the code becomes the single source of truth for a project.  If a production code base is inconsistent with a five-year-old requirements or design document the production code that people are currently living with is usually declared as the current reality (or truth).  Requirement or design documents at this age in a project life cycle are usually of little value. Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth.  When code is duplicated small discrepancies will creep in between the two copies over time.  The question then becomes which copy is correct?  As different factions debate how the software should work, trust in the software and the team behind it erodes. The code smell of Duplication points to a violation of the “Single Source of Truth” principle.  Let me define that as: A stakeholder’s requirement for a software change should never cause more than one class to change. Violation of the Single Source of Truth principle will always result in duplication in the code.  However, the inverse is not always true.  Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle. To illustrate this, let’s look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer.  
Although these are two separate features of the system, they are closely related.  The reason for printing the receipt is usually to provide an audit trail back to the bank transaction.  Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, et cetera.  Because both features use much of the same data, there is likely to be a lot of duplication between them.  This duplication can be removed by making both features use the same data access layer. Then start coming the divergent requirements.  The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer’s privacy.  That can be solved with a small IF statement whilst still eliminating all duplication in the system.  Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN number for enhanced security.  Then the receipt owner wants to pull data from a completely different system to report the customer’s loyalty program point total. After a while you realize that the two stakeholders have somewhat similar, but ultimately different responsibilities.  They have their own reasons for pulling the data access layer in different directions.  Then it dawns on you, the Single Responsibility Principle: There should never be more than one reason for a class to change. In this example we have two stakeholders giving two separate reasons for the data access class to change.  It is a clear violation of the Single Responsibility Principle.  That’s a problem because it can often lead the project owner to pit the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth.  But that doesn’t exist.  There are two completely valid truths that the developers need to support.  How is this to be supported while honouring the Single Responsibility Principle?  The solution is to duplicate the data access layer and let each stakeholder control their own copy. The Single Source of Truth and Single Responsibility Principles are very closely related.  SST tells you when to remove duplication; SRP tells you when to introduce it.  They may seem to be fighting each other, but really they are not.  The key is to clearly identify the different responsibilities (or sources of truth) over a system.  Sometimes there is a single person with that responsibility, other times there are many.  This can be especially difficult if the same person has dual responsibilities.  They might not even realize they are wearing multiple hats. In my opinion Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three.  Investigation of the DRY code smell should yield the proper application of SST, without violating SRP.  When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them.  Knowing all the people with responsibilities over a system is the higher priority because you’ll need to know this before you can express it.  Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY or that the code is bad.
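To make the retail example slightly more concrete, here is a minimal sketch of what “letting each stakeholder control their own copy” might look like. The class and property names are hypothetical illustrations, not taken from the original post.

using System;

// Before: one shared class serving both stakeholders. It becomes an SRP violation
// as soon as the bank's and the receipt owner's requirements start to diverge.
public class TransactionData
{
    public decimal Amount { get; set; }
    public string AccountNumber { get; set; }
    public DateTime TransactionDate { get; set; }
    public string CustomerName { get; set; }
}

// After: the bank owns a copy shaped by its own source of truth -
// full account number plus the new security-capture requirements.
public class BankTransaction
{
    public decimal Amount { get; set; }
    public string AccountNumber { get; set; }
    public DateTime TransactionDate { get; set; }
    public byte[] CustomerPhoto { get; set; }       // enhanced security requirement
}

// The receipt owner's copy masks the account number and adds loyalty data
// pulled from a completely different system.
public class ReceiptLine
{
    public decimal Amount { get; set; }
    public string MaskedAccountNumber { get; set; } // e.g. "****1234"
    public DateTime TransactionDate { get; set; }
    public int LoyaltyPointTotal { get; set; }
}

The duplication between BankTransaction and ReceiptLine is deliberate: each class now has exactly one stakeholder, and therefore exactly one reason to change.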

    Read the article

  • RIF PRD: Presentation syntax issues

    - by Charles Young
    Over Christmas I got to play a bit with the W3C RIF PRD and came across a few issues which I thought I would record for posterity. Specifically, I was working on a grammar for the presentation syntax using a GLR grammar parser tool (I was using the current CTP of ‘M’ (MGrammer) and Intellipad – I do so hope the MS guys don’t kill off M and Intellipad now they have dropped the other parts of SQL Server Modelling). I realise that the presentation syntax is non-normative and that any issues with it do not therefore compromise the standard. However, presentation syntax is useful in its own right, and it would be great to iron out any issues in a future revision of the standard. The main issues are actually not to do with the grammar at all, but rather with the ‘running example’ in the RIF PRD recommendation. I started with the code provided in Example 9.1. There are several discrepancies when compared with the EBNF rules documented in the standard. Broadly the problems can be categorised as follows: ·      Parenthesis mismatch – the wrong number of parentheses are used in various places. For example, in GoldRule, the RHS of the rule (the ‘Then’) is nested in the LHS (‘the If’). In NewCustomerAndWidgetRule, the RHS is orphaned from the LHS. Together with additional incorrect parenthesis, this leads to orphanage of UnknownStatusRule from the entire Document. ·      Invalid use of parenthesis in ‘Forall’ constructs. Parenthesis should not be used to enclose formulae. Removal of the invalid parenthesis gave me a feeling of inconsistency when comparing formulae in Forall to formulae in If. The use of parenthesis is not actually inconsistent in these two context, but in an If construct it ‘feels’ as if you are enclosing formulae in parenthesis in a LISP-like fashion. In reality, the parenthesis is simply being used to group subordinate syntax elements. The fact that an If construct can contain only a single formula as an immediate child adds to this feeling of inconsistency. ·      Invalid representation of compact URIs (CURIEs) in the context of Frame productions. In several places the URIs are not qualified with a namespace prefix (‘ex1:’). This conflicts with the definition of CURIEs in the RIF Datatypes and Built-Ins 1.0 document. Here are the productions: CURIE          ::= PNAME_LN                  | PNAME_NS PNAME_LN       ::= PNAME_NS PN_LOCAL PNAME_NS       ::= PN_PREFIX? ':' PN_LOCAL       ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* PN_CHARS)? PN_CHARS       ::= PN_CHARS_U                  | '-' | [0-9] | #x00B7                  | [#x0300-#x036F] | [#x203F-#x2040] PN_CHARS_U     ::= PN_CHARS_BASE                  | '_' PN_CHARS_BASE ::= [A-Z] | [a-z] | [#x00C0-#x00D6] | [#x00D8-#x00F6]                  | [#x00F8-#x02FF] | [#x0370-#x037D] | [#x037F-#x1FFF]                  | [#x200C-#x200D] | [#x2070-#x218F] | [#x2C00-#x2FEF]                  | [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD]                  | [#x10000-#xEFFFF] PN_PREFIX      ::= PN_CHARS_BASE ((PN_CHARS|'.')* PN_CHARS)? The more I look at CURIEs, the more my head hurts! The RIF specification allows prefixes and colons without local names, which surprised me. However, the CURIE Syntax 1.0 working group note specifically states that this form is supported…and then promptly provides a syntactic definition that seems to preclude it! However, on (much) deeper inspection, it appears that ‘ex1:’ (for example) is allowed, but would really represent a ‘fragment’ of the ‘reference’, rather than a prefix! Ouch! 
This is so completely ambiguous that it surely calls into question the whole CURIE specification.   In any case, RIF does not allow local names without a prefix. ·      Missing ‘External’ specifiers for built-in functions and predicates.  The EBNF specification enforces this for terms within frames, but does not appear to enforce (what I believe is) the correct use of External on built-in predicates. In any case, the running example only specifies ‘External’ once on the predicate in UnknownStatusRule. External() is required in several other places. ·      The List used on the LHS of UnknownStatusRule is comma-delimited. This is not supported by the EBNF definition. Similarly, the argument list of pred:list-contains is illegally comma-delimited. ·      Unnecessary use of conjunction around a single formula in DiscountRule. This is strictly legal in the EBNF, but redundant.   All the above issues concern the presentation syntax used in the running example. There are a few minor issues with the grammar itself. Note that Michael Kiefer stated in his paper “Rule Interchange Format: The Framework” that: “The presentation syntax of RIF … is an abstract syntax and, as such, it omits certain details that might be important for unambiguous parsing.” ·      The grammar cannot differentiate unambiguously between strategies and priorities on groups. A processor is forced to resolve this by detecting the use of IRIs and integers. This could easily be fixed in the grammar.   ·      The grammar cannot unambiguously parse the ‘->’ operator in frames. Specifically, ‘-’ characters are allowed in PN_LOCAL names and hence a parser cannot determine if ‘status->’ is (‘status’ ‘->’) or (‘status-’ ‘>’).   One way to fix this is to amend the PN_LOCAL production as follows: PN_LOCAL ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* ((PN_CHARS)-('-')))? However, unilaterally changing the definition of this production, which is defined in the SPARQL Query Language for RDF specification, makes me uncomfortable. ·      I assume that the presentation syntax is case-sensitive. I couldn’t find this stated anywhere in the documentation, but function/predicate names do appear to be documented as being case-sensitive. ·      The EBNF does not specify whitespace handling. A couple of productions (RULE and ACTION_BLOCK) are crafted to enforce the use of whitespace. This is not necessary. It seems inconsistent with the rest of the specification and can cause parsing issues. In addition, the Const production exhibits whitespaces issues. The intention may have been to disallow the use of whitespace around ‘^^’, but any direct implementation of the EBNF will probably allow whitespace between ‘^^’ and the SYMSPACE. Of course, I am being a little nit-picking about all this. On the whole, the EBNF translated very smoothly and directly to ‘M’ (MGrammar) and proved to be fairly complete. I have encountered far worse issues when translating other EBNF specifications into usable grammars.   I can’t imagine there would be any difficulty in implementing the same grammar in Antlr, COCO/R, gppg, XText, Bison, etc. A general observation, which repeats a point made above, is that the use of parenthesis in the presentation syntax can feel inconsistent and un-intuitive.   It isn’t actually inconsistent, but I think the presentation syntax could be improved by adopting braces, rather than parenthesis, to delimit subordinate syntax elements in a similar way to so many programming languages. 
The familiarity of braces would communicate the structure of the syntax more clearly to people like me.  If braces were adopted, parentheses could be retained around ‘var (frame | ‘new()’) constructs in action blocks. This use of parenthesis feels very LISP-like, and I think that this is my issue. It’s as if the presentation syntax represents the deformed love-child of LISP and C. In some places (specifically, action blocks), parenthesis is used in a LISP-like fashion. In other places it is used like braces in C. I find this quite confusing. Here is a corrected version of the running example (Example 9.1) in compliant presentation syntax: Document(    Prefix( ex1 <http://example.com/2009/prd2> )    (* ex1:CheckoutRuleset *)  Group rif:forwardChaining (     (* ex1:GoldRule *)    Group 10 (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"Silver"])        (Forall ?shoppingCart such that ?customer[ex1:shoppingCart->?shoppingCart]           (If Exists ?value (And(?shoppingCart[ex1:value->?value]                                  External(pred:numeric-greater-than-or-equal(?value 2000))))            Then Do(Modify(?customer[ex1:status->"Gold"])))))      (* ex1:DiscountRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Or( ?customer[ex1:status->"Silver"]                ?customer[ex1:status->"Gold"])         Then Do ((?s ?customer[ex1:shoppingCart-> ?s])                  (?v ?s[ex1:value->?v])                  Modify(?s [ex1:value->External(func:numeric-multiply (?v 0.95))]))))      (* ex1:NewCustomerAndWidgetRule *)    Group (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"New"] )        (If Exists ?shoppingCart ?item                   (And(?customer[ex1:shoppingCart->?shoppingCart]                        ?shoppingCart[ex1:containsItem->?item]                        ?item # ex1:Widget ) )         Then Do( (?s ?customer[ex1:shoppingCart->?s])                  (?val ?s[ex1:value->?val])                  (?voucher ?customer[ex1:voucher->?voucher])                  Retract(?customer[ex1:voucher->?voucher])                  Retract(?voucher)                  Modify(?s[ex1:value->External(func:numeric-multiply(?val 0.90))]))))      (* ex1:UnknownStatusRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Not(Exists ?status                       (And(?customer[ex1:status->?status]                            External(pred:list-contains(List("New" "Bronze" "Silver" "Gold") ?status)) )))         Then Do( Execute(act:print(External(func:concat("New customer: " ?customer))))                  Assert(?customer[ex1:status->"New"]))))  ) )   I hope that helps someone out there :-)

    Read the article

  • The Presentation Isn't Over Until It's Over

    - by Phil Factor
    The senior corporate dignitaries settled into their seats looking important in a blue-suited sort of way. The lights dimmed as I strode out in front to give my presentation.  I had ten vital minutes to make my pitch.  I was about to dazzle the top management of a large software company who were considering the purchase of my software product. I would present them with a dazzling synthesis of diagrams, graphs, followed by  a live demonstration of my software projected from my laptop.  My preparation had been meticulous: It had to be: A year’s hard work was at stake, so I’d prepared it to perfection.  I stood up and took them all in, with a gaze of sublime confidence. Then the laptop expired. There are several possible alternative plans of action when this happens     A. Stare at the smoking laptop vacuously, flapping ones mouth slowly up and down     B. Stand frozen like a statue, locked in indecision between fright and flight.     C. Run out of the room, weeping     D. Pretend that this was all planned     E. Abandon the presentation in favour of a stilted and tedious dissertation about the software     F. Shake your fist at the sky, and curse the sense of humour of your preferred deity I started for a few seconds on plan B, normally referred to as the ‘Rabbit in the headlamps of the car’ technique. Suddenly, a little voice inside my head spoke. It spoke the famous inane words of Yogi Berra; ‘The game isn't over until it's over.’ ‘Too right’, I thought. What to do? I ran through the alternatives A-F inclusive in my mind but none appealed to me. I was completely unprepared for this. Nowadays, longevity has since taught me more than I wanted to know about the wacky sense of humour of fate, and I would have taken two laptops. I hadn’t, but decided to do the presentation anyway as planned. I started out ignoring the dead laptop, but pretending, instead that it was still working. The audience looked startled. They were expecting plan B to be succeeded by plan C, I suspect. They weren’t used to denial on this scale. After my introductory talk, which didn’t require any visuals, I came to the diagram that described the application I’d written.  I’d taken ages over it and it was hot stuff. Well, it would have been had it been projected onto the screen. It wasn’t. Before I describe what happened then, I must explain that I have thespian tendencies.  My  triumph as Professor Higgins in My Fair Lady at the local operatic society is now long forgotten, but I remember at the time of my finest performance, the moment that, glancing up over the vast audience of  moist-eyed faces at the during the poignant  scene between Eliza and Higgins at the end, I  realised that I had a talent that one day could possibly  be harnessed for commercial use I just talked about the diagram as if it was there, but throwing in some extra description. The audience nodded helpfully when I’d done enough. Emboldened, I began a sort of mime, well, more of a ballet, to represent each slide as I came to it. Heaven knows I’d done my preparation and, in my mind’s eye, I could see every detail, but I had to somehow project the reality of that vision to the audience, much the same way any actor playing Macbeth should do the ghost of Banquo.  My desperation gave me a manic energy. If you’ve ever demonstrated a windows application entirely by mime, gesture and florid description, you’ll understand the scale of the challenge, but then I had nothing to lose. 
With a brief sentence of description here and there, and arms flailing whilst outlining the size and shape of graphs and diagrams, I used the many tricks of mime, gesture and body language learned from playing Captain Hook, or the Sheriff of Nottingham, in pantomime. I set out determinedly on my desperate venture. There wasn’t time to do anything but focus on the challenge of the task: the world around me narrowed down to ten faces and my presentation: ten souls who had to be hypnotized into seeing a Windows application, one that was slick, well organized and functional. I don’t remember the details. Eight minutes of my life are gone completely. I was a thespian berserker. I know, however, that I followed the basic plan of building the presentation in a carefully controlled crescendo until the dazzling finale where the results were displayed on-screen. ‘And here you see the results, neatly formatted and grouped carefully to enhance the significance of the figures, together with running trend-graphs!’ I mimed an animated window opening and looked up, in my first pause, to gaze defiantly at the audience. It was a sight I’ll never forget. Ten pairs of eyes were gazing in rapt attention at the imaginary window, and several pairs of eyes were glancing at the imaginary graphs and figures. I hadn’t had an audience like that since my starring role in Beauty and the Beast. At that moment, I realized that my desperate ploy might work. I sat down, slightly winded, when my ten minutes were up. For the first and last time in my life, the audience of a ‘PowerPoint’ presentation burst into spontaneous applause. ‘Any questions?’ ‘Yes. Have you got an agent?’ Yes, in case you’re wondering, I got the deal. They bought the software product from me there and then. However, it was a life-changing experience for me, and I have never again trusted technology as part of a presentation. Even when things can’t go wrong, they’ll go wrong, and they’ll kill the flow of what you’re presenting. If you can’t do something without the techno-props, then you shouldn’t do it. The greatest lesson of all is that great presentations require preparation and ‘stage presence’ rather than fancy graphics. They’re a great supporting aid, but they should never dominate to the point that you’re lost without them.

    Read the article

  • Possible SWITCH Optimization in DAX – #powerpivot #dax #tabular

    - by Marco Russo (SQLBI)
    In one of the Advanced DAX Workshops I taught this year, I had an interesting discussion about how to optimize a SWITCH statement (which is frequently used to check a slicer, as in the Parameter Table pattern). Let’s start with the problem. What happens when you have a statement like this one?

Sales :=
    SWITCH (
        VALUES ( Period[Period] ),
        "Current", [Internet Total Sales],
        "MTD", [MTD Sales],
        "QTD", [QTD Sales],
        "YTD", [YTD Sales],
        BLANK ()
    )

The SWITCH statement is in reality just syntactic sugar for a nested IF statement. When you place such a measure in a pivot table, the IF branches are evaluated for every cell of the pivot table. In order to optimize performance, the DAX engine usually does not compute cell by cell, but tries to compute the values in bulk mode. However, if a measure contains an IF statement, every cell might have a different execution path, so the current implementation might evaluate all the possible IF branches in bulk mode, so that for every cell the result of each branch is already available in a pre-calculated dataset. The price for that can be high. If you consider the previous Sales measure, the YTD Sales measure could be evaluated for all the cells where it’s not required, and even when YTD is not selected at all in the pivot table. The actual optimization made by the DAX engine can be different in every build, and I expect newer builds of Tabular and Power Pivot to be better than older ones. However, we still don’t live in an ideal world, so it can be worth helping the engine find a better execution plan. One student (Niek de Wit) proposed this approach:

Selection :=
IF (
    HASONEVALUE ( Period[Period] ),
    VALUES ( Period[Period] )
)

Sales :=
CALCULATE (
    [Internet Total Sales],
    FILTER (
        VALUES ( 'Internet Sales'[Order Quantity] ),
        'Internet Sales'[Order Quantity]
            = IF (
                [Selection] = "Current",
                'Internet Sales'[Order Quantity],
                -1
            )
    )
)
    + CALCULATE (
        [MTD Sales],
        FILTER (
            VALUES ( 'Internet Sales'[Order Quantity] ),
            'Internet Sales'[Order Quantity]
                = IF (
                    [Selection] = "MTD",
                    'Internet Sales'[Order Quantity],
                    -1
                )
        )
    )
    + CALCULATE (
        [QTD Sales],
        FILTER (
            VALUES ( 'Internet Sales'[Order Quantity] ),
            'Internet Sales'[Order Quantity]
                = IF (
                    [Selection] = "QTD",
                    'Internet Sales'[Order Quantity],
                    -1
                )
        )
    )
    + CALCULATE (
        [YTD Sales],
        FILTER (
            VALUES ( 'Internet Sales'[Order Quantity] ),
            'Internet Sales'[Order Quantity]
                = IF (
                    [Selection] = "YTD",
                    'Internet Sales'[Order Quantity],
                    -1
                )
        )
    )

At first sight, you might think it’s impossible that this approach could be faster. However, if you examine what happens with the profiler, there is a different story. Every original IF execution branch is now a separate CALCULATE statement, which applies a filter that does not execute the required measure calculation if the result of the FILTER is empty.
I used the ‘Internet Sales’[Order Quantity] column in this example just because in Adventure Works it has only one value (every row has 1): in the real world, you should use a column that has a very low number of distinct values, or a column that always has the same value for every row (so it will be compressed very well!). Because the value –1 never appears in this column, the IF comparison in the filter discards all the values iterated by FILTER whenever the selection does not match the desired value. I hope to have time in the future to write a longer article about this optimization technique, but in the meantime I’ve seen it prove useful in many other implementations. Please write your feedback if you find scenarios (in either Power Pivot or Tabular) where you obtain performance improvements using this technique!

    Read the article

  • Free Document/Content Management System Using SharePoint 2010

    - by KunaalKapoor
    That’s right, it’s true: you can use the free version of SharePoint 2010 to meet your document and content management needs and even run your public-facing website or an internal knowledge bank. SharePoint Foundation 2010 is free. It may not have all the features that you get in the enterprise license, but it still has enough to cater to your needs, build a document management system, and replace age-old file shares or folders. I’ve built a dozen content management sites for internal and public use exploiting SharePoint. There are hundreds of web content management systems out there (see CMS Matrix). On one hand we have commercial platforms like SharePoint, SiteCore, and Ektron, which are the most frequently used; on the other hand there are free options like WordPress, Drupal, Joomla, and Plone, which are pretty popular as well. But I would be very surprised if anyone was able to find a single CMS platform that is all things to all people. In fact, not a lot of people consider SharePoint’s free version on the free CMS side, but it’s high time organizations benefited from it. Through this blog post I wanted to present SharePoint Foundation as an option for running a FREE CMS platform. Even if you knew that there is a free version of SharePoint, what most people don’t realize is that SharePoint Foundation is a great option for running web sites of all kinds – not just team sites. It is a great option for many reasons: it is supported by Microsoft, above all it is FREE (yay!), and it is extremely easy to get started. From a functionality perspective, it’s hard to beat SharePoint. Even the free version, SharePoint Foundation, offers simple data connectivity (through BCS), cross-browser support, accessibility, support for Office Web Apps, blogs, wikis, templates, document support, a health analyzer, support for presence, and MUCH more. I often get asked: “Can I use SharePoint 2010 as a document management system?” The answer really depends on:
·         What are your specific requirements?
·         What systems do you currently have in place for managing documents?
·         And, of course, how much money you have.
Benefits? Not many large organizations have benefited from SharePoint yet. For some it has been an IT project to see what they can achieve with it; for others it has been used as a collaborative platform or, in many cases, an extended intranet. SharePoint 2010 has changed the game slightly, as the improvements that Microsoft has made have been noted by organizations, and we are seeing a lot of companies starting to build specific business applications using SharePoint as the basis; nearly every business process will require documents at some stage. If you require a document management system and have SharePoint in place, then it can be a relatively straightforward decision to use SharePoint, as long as you have reviewed the considerations just discussed. The collaborative nature of SharePoint 2010 is also a massive advantage, as specific departmental or project sites can be created quickly and easily that allow workers to interact in a variety of different ways using one source of information. This also benefits an organization with regard to how it manages the knowledge that it has: if all of its information is in one source, then it is naturally easier to search and manage. Is SharePoint right for your organization?
As just discussed, this can only be determined after defining your requirements and also planning a longer-term strategy for how you will manage your documents and information. A key factor to look at is how the users would interact with the system and how much value it would deliver for your organization. The amount of data and documents that organizations are creating is increasing rapidly each year. Therefore the ability to archive this information, whilst keeping the ability to know what you have and where it is, is vital to any organization’s management of its information life cycle. SharePoint is best used for the initial life of business documents, where they need to be referenced and accessed; it is often beneficial to archive them later to avoid storage and performance issues.
FREE CMS – SharePoint, Really? In order to show some of the completeness of what comes with this free version of SharePoint 2010, I thought it would make sense to use Wikipedia (since everyone trusts it as a credible source). Wikipedia shows that a web content management system typically has the following components:
Document Management: CMS software may provide a means of managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction. SharePoint is king when it comes to document management: version history, exclusive check-out, security, publication, workflow, and so much more (see the short object-model sketch after this excerpt).
Content Virtualization: CMS software may provide a means of allowing each user to work within a virtual copy of the entire Web site, document set, and/or code base. This enables changes to multiple interdependent resources to be viewed and/or executed in-context prior to submission. Through the use of versioning, each content manager can preview, publish, and roll back content of pages, wiki entries, blog posts, documents, or any other type of content stored in SharePoint. The idea of each user having an entire copy of the website virtualized is a bit odd to me – not sure why anyone would need that for anything but the simplest of websites.
Automated Templates: Create standard output templates that can be automatically applied to new and existing content, allowing the appearance of all content to be changed from one central place. Through the use of Master Pages and Themes, SharePoint provides the ability to change the entire look and feel of a site. Of course, the older brother version of SharePoint – SharePoint Server 2010 – also introduces the concept of Page Layouts, which allows page-template-level customization and even switching the layout of an individual page using different page templates. I think many organizations really think they want this but rarely end up using this bit of functionality.
Easy Edits: Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical individuals to create and edit content. This is probably easier described with a screen cap of a vanilla SharePoint Foundation page in edit mode. Notice the page editing toolbar and the multiple layout options… It’s actually easier to use than Microsoft Word.
Workflow management: Workflow is the process of creating cycles of sequential and parallel tasks that must be accomplished in the CMS.
For example, a content creator can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it. Workflow? It’s in there. In fact, the same workflow engine runs under SharePoint Foundation as under the other versions of SharePoint. The primary difference is that with SharePoint Foundation you need to configure the workflows yourself.
Web Standards: Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards. SharePoint is in its fourth major iteration under Microsoft with the 2010 release. In addition to the innovation that Microsoft continuously adds, you have the entire global ecosystem available.
Scalable Expansion: Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains. SharePoint Foundation can run multiple sites using multiple URLs on a single server install. Even more powerful, SharePoint Foundation is scalable and can be part of a multi-server farm to ensure that it will handle any amount of traffic that can be thrown at it.
Delegation & Security: Some CMS software allows for various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management. SharePoint Foundation provides very granular security capabilities. Read @ http://msdn.microsoft.com/en-us/library/ee537811.aspx
Content Syndication: CMS software often assists in content distribution by generating RSS and Atom data feeds to other systems. It may also e-mail users when updates are available as part of the workflow process. SharePoint Foundation nails it: with RSS syndication and email alerts available out of the box, content syndication is already in the platform.
Multilingual Support: Ability to display content in multiple languages. SharePoint Foundation 2010 supports more than 40 languages.
Read more @ http://msdn.microsoft.com/en-us/library/dd776256(v=office.12).aspx
You can download the free version from http://www.microsoft.com/en-us/download/details.aspx?id=5970
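
To make the document-management claims above a little more concrete, here is a minimal sketch of provisioning a document library with version history and forced check-out through the SharePoint Foundation server object model. It is only an illustration, not the author's own code: the site URL, the library name and the description are placeholders, and it assumes it runs on a machine where SharePoint Foundation is installed (referencing Microsoft.SharePoint.dll, compiled for x64 against .NET 3.5).

    using System;
    using Microsoft.SharePoint;

    class ProvisionContractsLibrary
    {
        static void Main()
        {
            // Placeholder URL for illustration only - point this at your own site.
            using (SPSite site = new SPSite("http://intranet"))
            using (SPWeb web = site.OpenWeb())
            {
                // Create a document library; the name and description are made up.
                Guid listId = web.Lists.Add(
                    "Contracts",
                    "Working library for contract documents",
                    SPListTemplateType.DocumentLibrary);

                SPDocumentLibrary library = (SPDocumentLibrary)web.Lists[listId];

                // The document-management features called out above:
                library.EnableVersioning = true;      // keep major version history
                library.EnableMinorVersions = true;   // distinguish drafts from published versions
                library.ForceCheckout = true;         // require exclusive check-out before editing
                library.Update();
            }
        }
    }

Once a library like this exists, the syndication point above needs no code at all: users can subscribe to the library's RSS feed or set up e-mail alerts directly from the ribbon.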

    Read the article

  • Oracle Enterprise Manager 12c Ops Center Jump-Start for Partners

    - by Get_Specialized!
    Following the announcement at Oracle OpenWorld Tokyo, partners can check out these resources to further learn about Oracle Enterprise Manager 12c Ops Center and then use it to optimize your solution/services or offer new ones:
Product Documentation
Oracle Technical Network Resources
Online Learning Series for Partners in the OPN Enterprise Manager KnowledgeZone
Whitepaper: Making Infrastructure-as-a-Service in the Enterprise a Reality
IDC report: Oracle Enterprise Manager 12c Embraces the Cloud with Integrated Lifecycle Management
Follow-up webcast, April 12th: Total Cloud Control for Systems
Oracle Enterprise Manager Ops Center 12c is no extra charge and is included in the support contract of Oracle Systems customers. To learn more, see the Ops Center Everywhere Program. And if you're not already a member, be sure to join the Oracle Enterprise Manager KnowledgeZone on the Oracle PartnerNetwork Portal.

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32  | Next Page >