Search Results

Search found 1292 results on 52 pages for 'martin vobr'.


  • ASP.NET MVC 2 / Localization / Dynamic Default Value?

    - by cyberblast
    Hello In an ASP.NET MVC 2 application, i'm having a route like this: routes.MapRoute( "Default", // Route name "{lang}/{controller}/{action}/{id}", // URL with parameters new // Parameter defaults { controller = "Home", action = "Index", lang = "de", id = UrlParameter.Optional }, new { lang = new AllowedValuesRouteConstraint(new string[] { "de", "en", "fr", "it" }, StringComparison.InvariantCultureIgnoreCase) } Now, basically I would like to set the thread's culture according the language passed in. But there is one exception: If the user requests the page for the first time, like calling "http://www.mysite.com" I want to set the initial language if possible to the one "preferred by the browser". How can I distinguish in an early procesing stage (like global.asax), if the default parameter has been set because of the default value or mentioned explicit through the URL? (I would prefer a solution where the request URL is not getting parsed). Is there a way to dynamically provide a default-value for a paramter? Something like a hook? Or where can I override the default value (good application event?). This is the code i'm actually experimenting with: protected void Application_AcquireRequestState(object sender, EventArgs e) { string activeLanguage; string[] validLanguages; string defaultLanguage; string browsersPreferredLanguage; try { HttpContextBase contextBase = new HttpContextWrapper(Context); RouteData activeRoute = RouteTable.Routes.GetRouteData(new HttpContextWrapper(Context)); if (activeRoute == null) { return; } activeLanguage = activeRoute.GetRequiredString("lang"); Route route = (Route)activeRoute.Route; validLanguages = ((AllowedValuesRouteConstraint)route.Constraints["lang"]).AllowedValues; defaultLanguage = route.Defaults["lang"].ToString(); browsersPreferredLanguage = GetBrowsersPreferredLanguage(); //TODO: Better way than parsing the url bool defaultInitialized = contextBase.Request.Url.ToString().IndexOf(string.Format("/{0}/", defaultLanguage), StringComparison.InvariantCultureIgnoreCase) > -1; string languageToActivate = defaultLanguage; if (!defaultInitialized) { if (validLanguages.Contains(browsersPreferredLanguage, StringComparer.InvariantCultureIgnoreCase)) { languageToActivate = browsersPreferredLanguage; } } //TODO: Where and how to overwrtie the default value that it gets passed to the controller? 
contextBase.RewritePath(contextBase.Request.Path.Replace("/de/", "/en/")); SetLanguage(languageToActivate); } catch (Exception ex) { //TODO: Log Console.WriteLine(ex.Message); } } protected string GetBrowsersPreferredLanguage() { string acceptedLang = string.Empty; if (HttpContext.Current.Request.UserLanguages != null && HttpContext.Current.Request.UserLanguages.Length > 0) { acceptedLang = HttpContext.Current.Request.UserLanguages[0].Substring(0, 2); } return acceptedLang; } protected void SetLanguage(string languageToActivate) { CultureInfo cultureInfo = new CultureInfo(languageToActivate); if (!Thread.CurrentThread.CurrentUICulture.TwoLetterISOLanguageName.Equals(languageToActivate, StringComparison.InvariantCultureIgnoreCase)) { Thread.CurrentThread.CurrentUICulture = cultureInfo; } if (!Thread.CurrentThread.CurrentCulture.TwoLetterISOLanguageName.Equals(languageToActivate, StringComparison.InvariantCultureIgnoreCase)) { Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(cultureInfo.Name); } } The RouteConstraint to reproduce the sample: public class AllowedValuesRouteConstraint : IRouteConstraint { private string[] _allowedValues; private StringComparison _stringComparism; public string[] AllowedValues { get { return _allowedValues; } } public AllowedValuesRouteConstraint(string[] allowedValues, StringComparison stringComparism) { _allowedValues = allowedValues; _stringComparism = stringComparism; } public AllowedValuesRouteConstraint(string[] allowedValues) { _allowedValues = allowedValues; _stringComparism = StringComparison.InvariantCultureIgnoreCase; } public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection) { if (_allowedValues != null) { return _allowedValues.Any(a => a.Equals(values[parameterName].ToString(), _stringComparism)); } else { return false; } } } Can someone help me out with that problem? Thanks, Martin
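One possible direction, offered here only as an untested sketch (the route names and fallback logic are illustrative; the constraint class and the GetBrowsersPreferredLanguage/SetLanguage helpers are the ones from the question): register the language route without a default for lang, plus a plain fallback route. Then lang is only non-empty when it was explicit in the URL, so the "default or explicit?" question answers itself without parsing the URL.

// In RegisterRoutes:
routes.MapRoute(
    "DefaultWithLanguage",
    "{lang}/{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = UrlParameter.Optional },
    new { lang = new AllowedValuesRouteConstraint(
        new string[] { "de", "en", "fr", "it" }, StringComparison.InvariantCultureIgnoreCase) });

routes.MapRoute(
    "Default",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = UrlParameter.Optional, lang = "" });

// In global.asax:
protected void Application_AcquireRequestState(object sender, EventArgs e)
{
    RouteData routeData = RouteTable.Routes.GetRouteData(new HttpContextWrapper(Context));
    if (routeData == null) { return; }

    string lang = routeData.Values["lang"] as string;
    if (string.IsNullOrEmpty(lang))
    {
        // Only the fallback route leaves lang empty, i.e. the URL carried no explicit
        // language, so it is safe to fall back to the browser's preference here.
        // (A real implementation would also check it against the allowed languages.)
        lang = GetBrowsersPreferredLanguage();
        if (string.IsNullOrEmpty(lang)) { lang = "de"; }
    }
    SetLanguage(lang);
    // Optionally redirect to the language-prefixed URL here so the chosen language
    // becomes explicit in the URL for subsequent requests.
}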

    Read the article

  • Incremental PCA

    - by smichak
    Hi, Lately, I've been looking into an implementation of an incremental PCA algorithm in python - I couldn't find something that would meet my needs so I did some reading and implemented an algorithm I found in some paper. Here is the module's code - the relevant paper on which it is based is mentioned in the module's documentation. I would appreciate any feedback from people who are interested in this. Micha #!/usr/bin/env python """ Incremental PCA calculation module. Based on P.Hall, D. Marshall and R. Martin "Incremental Eigenalysis for Classification" which appeared in British Machine Vision Conference, volume 1, pages 286-295, September 1998. Principal components are updated sequentially as new observations are introduced. Each new observation (x) is projected on the eigenspace spanned by the current principal components (U) and the residual vector (r = x - U(U.T*x)) is used as a new principal component (U' = [U r]). The new principal components are then rotated by a rotation matrix (R) whose columns are the eigenvectors of the transformed covariance matrix (D=U'.T*C*U) to yield p + 1 principal components. From those, only the first p are selected. """ __author__ = "Micha Kalfon" import numpy as np _ZERO_THRESHOLD = 1e-9 # Everything below this is zero class IPCA(object): """Incremental PCA calculation object. General Parameters: m - Number of variables per observation n - Number of observations p - Dimension to which the data should be reduced """ def __init__(self, m, p): """Creates an incremental PCA object for m-dimensional observations in order to reduce them to a p-dimensional subspace. @param m: Number of variables per observation. @param p: Number of principle components. @return: An IPCA object. """ self._m = float(m) self._n = 0.0 self._p = float(p) self._mean = np.matrix(np.zeros((m , 1), dtype=np.float64)) self._covariance = np.matrix(np.zeros((m, m), dtype=np.float64)) self._eigenvectors = np.matrix(np.zeros((m, p), dtype=np.float64)) self._eigenvalues = np.matrix(np.zeros((1, p), dtype=np.float64)) def update(self, x): """Updates with a new observation vector x. @param x: Next observation as a column vector (m x 1). """ m = self._m n = self._n p = self._p mean = self._mean C = self._covariance U = self._eigenvectors E = self._eigenvalues if type(x) is not np.matrix or x.shape != (m, 1): raise TypeError('Input is not a matrix (%d, 1)' % int(m)) # Update covariance matrix and mean vector and centralize input around # new mean oldmean = mean mean = (n*mean + x) / (n + 1.0) C = (n*C + x*x.T + n*oldmean*oldmean.T - (n+1)*mean*mean.T) / (n + 1.0) x -= mean # Project new input on current p-dimensional subspace and calculate # the normalized residual vector g = U.T*x r = x - (U*g) r = (r / np.linalg.norm(r)) if not _is_zero(r) else np.zeros_like(r) # Extend the transformation matrix with the residual vector and find # the rotation matrix by solving the eigenproblem DR=RE U = np.concatenate((U, r), 1) D = U.T*C*U (E, R) = np.linalg.eigh(D) # Sort eigenvalues and eigenvectors from largest to smallest to get the # rotation matrix R sorter = list(reversed(E.argsort(0))) E = E[sorter] R = R[:,sorter] # Apply the rotation matrix U = U*R # Select only p largest eigenvectors and values and update state self._n += 1.0 self._mean = mean self._covariance = C self._eigenvectors = U[:, 0:p] self._eigenvalues = E[0:p] @property def components(self): """Returns a matrix with the current principal components as columns. 
""" return self._eigenvectors @property def variances(self): """Returns a list with the appropriate variance along each principal component. """ return self._eigenvalues def _is_zero(x): """Return a boolean indicating whether the given vector is a zero vector up to a threshold. """ return np.fabs(x).min() < _ZERO_THRESHOLD if __name__ == '__main__': import sys def pca_svd(X): X = X - X.mean(0).repeat(X.shape[0], 0) [_, _, V] = np.linalg.svd(X) return V N = 1000 obs = np.matrix([np.random.normal(size=10) for _ in xrange(N)]) V = pca_svd(obs) print V[0:2] pca = IPCA(obs.shape[1], 2) for i in xrange(obs.shape[0]): x = obs[i,:].transpose() pca.update(x) U = pca.components print U

    Read the article

  • Is it possible to embed Cockburn style textual UML Use Case content in the code base to improve code

    - by fooledbyprimes
    experimenting with Cockburn use cases in code I was writing some complicated UI code. I decided to employ Cockburn use cases with fish,kite,and sea levels (discussed by Martin Fowler in his book 'UML Distilled'). I wrapped Cockburn use cases in static C# objects so that I could test logical conditions against static constants which represented steps in a UI workflow. The idea was that you could read the code and know what it was doing because the wrapped objects and their public contants gave you ENGLISH use cases via namespaces. Also, I was going to use reflection to pump out error messages that included the described use cases. The idea is that the stack trace could include some UI use case steps IN ENGLISH.... It turned out to be a fun way to achieve a mini,psuedo light-weight Domain Language but without having to write a DSL compiler. So my question is whether or not this is a good way to do this? Has anyone out there ever done something similar? c# example snippets follow Assume we have some aspx page which has 3 user controls (with lots of clickable stuff). User must click on stuff in one particular user control (possibly making some kind of selection) and then the UI must visually cue the user that the selection was successful. Now, while that item is selected, the user must browse through a gridview to find an item within one of the other user controls and then select something. This sounds like an easy thing to manage but the code can get ugly. In my case, the user controls all sent event messages which were captured by the main page. This way, the page acted like a central processor of UI events and could keep track of what happens when the user is clicking around. So, in the main aspx page, we capture the first user control's event. using MyCompany.MyApp.Web.UseCases; protected void MyFirstUserControl_SomeUIWorkflowRequestCommingIn(object sender, EventArgs e) { // some code here to respond and make "state" changes or whatever // // blah blah blah // finally we have this (how did we know to call fish level method?? because we knew when we wrote the code to send the event in the user control) UpdateUserInterfaceOnFishLevelUseCaseGoalSuccess(FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase) } protected void UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow goal) { switch (goal) { case FishLevel.SomeNamedUIWorkflow.NewMasterItemSelected: //call some UI related methods here including methods for the other user controls if necessary.... break; case FishLevel.SomeNamedUIWorkFlow.DrillDownOnDetails: //call some UI related methods here including methods for the other user controls if necessary.... break; case FishLevel.SomeNamedUIWorkFlow.CancelMultiSelect: //call some UI related methods here including methods for the other user controls if necessary.... break; // more cases... } } } //also we have protected void UpdateUserInterfaceOnSeaLevelGoalSuccess(SeaLevel.SomeNamedUIWorkflow goal) { switch (goal) { case SeaLevel.CheckOutWorkflow.ChangedCreditCard: // do stuff // more cases... } } } So, in the MyCompany.MyApp.Web.UseCases namespace we might have code like this: class SeaLevel... class FishLevel... class KiteLevel... The workflow use cases embedded in the classes could be inner classes or static methods or enumerations or whatever gives you the cleanest namespace. I can't remember what I did originally but you get the picture.
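For readers trying to picture the wrapper types referenced above, a minimal sketch might look like the following; the member names are taken from the snippets in the post, while the exact shape (nested enums inside static classes) is my assumption:

namespace MyCompany.MyApp.Web.UseCases
{
    // Fish-level (sub-function) goals for one named UI workflow.
    public static class FishLevel
    {
        public enum SomeNamedUIWorkflow
        {
            SelectedItemForPurchase,
            NewMasterItemSelected,
            DrillDownOnDetails,
            CancelMultiSelect
        }
    }

    // Sea-level (user-goal) workflows.
    public static class SeaLevel
    {
        public enum CheckOutWorkflow
        {
            ChangedCreditCard
        }
    }
}

With types like these, a switch over FishLevel.SomeNamedUIWorkflow reads almost like the textual use case, and reflecting over the enum names can put the English step names into error messages, as the post describes.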

    Read the article

  • Design Question - how do you break the dependency between classes using an interface?

    - by Seth Spearman
    Hello, I apologize in advance but this will be a long question. I'm stuck. I am trying to learn unit testing, C#, and design patterns - all at once. (Maybe that's my problem.) As such I am reading the Art of Unit Testing (Osherove), and Clean Code (Martin), and Head First Design Patterns (O'Reilly). I am just now beginning to understand delegates and events (which you would see if you were to troll my SO questions of recent). I still don't quite get lambdas. To contextualize all of this I have given myself a learning project I am calling goAlarms. I have an Alarm class with members you'd expect (NextAlarmTime, Name, AlarmGroup, Event Trigger etc.) I wanted the "Timer" of the alarm to be extensible so I created an IAlarmScheduler interface as follows... public interface AlarmScheduler { Dictionary<string,Alarm> AlarmList { get; } void Startup(); void Shutdown(); void AddTrigger(string triggerName, string groupName, Alarm alarm); void RemoveTrigger(string triggerName); void PauseTrigger(string triggerName); void ResumeTrigger(string triggerName); void PauseTriggerGroup(string groupName); void ResumeTriggerGroup(string groupName); void SetSnoozeTrigger(string triggerName, int duration); void SetNextOccurrence (string triggerName, DateTime nextOccurrence); } This IAlarmScheduler interface define a component that will RAISE an alarm (Trigger) which will bubble up to my Alarm class and raise the Trigger Event of the alarm itself. It is essentially the "Timer" component. I have found that the Quartz.net component is perfectly suited for this so I have created a QuartzAlarmScheduler class which implements IAlarmScheduler. All that is fine. My problem is that the Alarm class is abstract and I want to create a lot of different KINDS of alarm. For example, I already have a Heartbeat alarm (triggered every (int) interval of minutes), AppointmentAlarm (triggered on set date and time), Daily Alarm (triggered every day at X) and perhaps others. And Quartz.NET is perfectly suited to handle this. My problem is a design problem. I want to be able to instantiate an alarm of any kind without my Alarm class (or any derived classes) knowing anything about Quartz. The problem is that Quartz has awesome factories that return just the right setup for the Triggers that will be needed by my Alarm classes. So, for example, I can get a Quartz trigger by using TriggerUtils.MakeMinutelyTrigger to create a trigger for the heartbeat alarm described above. Or TriggerUtils.MakeDailyTrigger for the daily alarm. I guess I could sum it up this way. Indirectly or directly I want my alarm classes to be able to consume the TriggerUtils.Make* classes without knowing anything about them. I know that is a contradiction, but that is why I am asking the question. I thought about putting a delegate field into the alarm which would be assigned one of these Make method but by doing that I am creating a hard dependency between alarm and Quartz which I want to avoid for both unit testing purposes and design purposes. I thought of using a switch for the type in QuartzAlarmScheduler per here but I know it is bad design and I am trying to learn good design. If I may editorialize a bit. I've decided that coding (predefined) classes is easy. Design is HARD...in fact, really hard and I am really fighting feeling stupid right now. 
I guess I want to know whether it took you really smart people a while to truly understand and master this stuff, or whether I should feel stupid (as I do) because I haven't grasped it better in the couple of weeks/months I have been studying. You guys are awesome, and thanks in advance for your answers. Seth
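For illustration, here is one hedged sketch of a way to break the dependency (not necessarily the best one): let each Alarm subclass describe its recurrence in plain domain terms, and keep all knowledge of Quartz, including the TriggerUtils.Make* calls mentioned above, inside QuartzAlarmScheduler. The names below are illustrative, not taken from Quartz or the original code except where noted.

using System;

// A Quartz-free description of when an alarm should fire.
public enum ScheduleKind { EveryNMinutes, DailyAt, Once }

public class ScheduleSpec
{
    public ScheduleKind Kind { get; set; }
    public int IntervalMinutes { get; set; }   // used by EveryNMinutes
    public TimeSpan TimeOfDay { get; set; }    // used by DailyAt
    public DateTime At { get; set; }           // used by Once
}

public abstract class Alarm
{
    public string Name { get; set; }
    public abstract ScheduleSpec GetSchedule();
}

public class HeartbeatAlarm : Alarm
{
    public int IntervalMinutes { get; set; }
    public override ScheduleSpec GetSchedule()
    {
        return new ScheduleSpec { Kind = ScheduleKind.EveryNMinutes, IntervalMinutes = IntervalMinutes };
    }
}

// Only this class references Quartz; the rest of the system (and the unit tests
// for the Alarm subclasses) never see it.
public class QuartzAlarmScheduler // : IAlarmScheduler, remaining members omitted
{
    public void AddTrigger(string triggerName, string groupName, Alarm alarm)
    {
        ScheduleSpec spec = alarm.GetSchedule();
        // Switch on spec.Kind here and call e.g. TriggerUtils.MakeMinutelyTrigger
        // or TriggerUtils.MakeDailyTrigger to build the actual Quartz trigger.
    }
}

The switch on ScheduleKind still exists, but it lives in exactly one adapter class at the edge of the system, which is usually considered acceptable; a test double implementing the scheduler interface can ignore Quartz entirely.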

    Read the article

  • Increasing speed of python code

    - by Curious2learn
    Hi, I have some python code that has many classes. I used cProfile to find that the total time to run the program is 68 seconds. I found that the following function in a class called Buyers takes about 60 seconds of those 68 seconds. I have to run the program about 100 times, so any increase in speed will help. Can you suggest ways to increase the speed by modifying the code? If you need more information that will help, please let me know. def qtyDemanded(self, timePd, priceVector): '''Returns quantity demanded in period timePd. In addition, also updates the list of customers and non-customers. Inputs: timePd and priceVector Output: count of people for whom priceVector[-1] < utility ''' ## Initialize count of customers to zero ## Set self.customers and self.nonCustomers to empty lists price = priceVector[-1] count = 0 self.customers = [] self.nonCustomers = [] for person in self.people: if person.utility >= price: person.customer = 1 self.customers.append(person) else: person.customer = 0 self.nonCustomers.append(person) return len(self.customers) self.people is a list of person objects. Each person has customer and utility as its attributes. EDIT - responsed added ------------------------------------- Thanks so much for the suggestions. Here is the response to some questions and suggestions people have kindly made. I have not tried them all, but will try others and write back later. (1) @amber - the function is accessed 80,000 times. (2) @gnibbler and others - self.people is a list of Person objects in memory. Not connected to a database. (3) @Hugh Bothwell cumtime taken by the original function - 60.8 s (accessed 80000 times) cumtime taken by the new function with local function aliases as suggested - 56.4 s (accessed 80000 times) (4) @rotoglup and @Martin Thomas I have not tried your solutions yet. I need to check the rest of the code to see the places where I use self.customers before I can make the change of not appending the customers to self.customers list. But I will try this and write back. (5) @TryPyPy - thanks for your kind offer to check the code. Let me first read a little on the suggestions you have made to see if those will be feasible to use. EDIT 2 Some suggested that since I am flagging the customers and noncustomers in the self.people, I should try without creating separate lists of self.customers and self.noncustomers using append. Instead, I should loop over the self.people to find the number of customers. I tried the following code and timed both functions below f_w_append and f_wo_append. I did find that the latter takes less time, but it is still 96% of the time taken by the former. That is, it is a very small increase in the speed. @TryPyPy - The following piece of code is complete enough to check the bottleneck function, in case your offer is still there to check it with other compilers. Thanks again to everyone who replied. 
import numpy

class person(object):
    def __init__(self, util):
        self.utility = util
        self.customer = 0

class population(object):
    def __init__(self, numpeople):
        self.people = []
        self.cus = []
        self.noncus = []
        numpy.random.seed(1)
        utils = numpy.random.uniform(0, 300, numpeople)
        for u in utils:
            per = person(u)
            self.people.append(per)

popn = population(300)

def f_w_append():
    '''Function with append'''
    P = 75
    cus = []
    noncus = []
    for per in popn.people:
        if per.utility >= P:
            per.customer = 1
            cus.append(per)
        else:
            per.customer = 0
            noncus.append(per)
    return len(cus)

def f_wo_append():
    '''Function without append'''
    P = 75
    for per in popn.people:
        if per.utility >= P:
            per.customer = 1
        else:
            per.customer = 0
    numcustomers = 0
    for per in popn.people:
        if per.customer == 1:
            numcustomers += 1
    return numcustomers

    Read the article

  • Architecture choice about representation of collections in Business Objects

    - by Rajarshi
    I have made certain choices in my architecture which I request the community to review and comment. I am breaking up the post in smaller sections to make it easier to understand the context and then suggest/comment. I am sorry that the post is long, but is required to explain the context. What am I building A typical business application where there are application users, security roles, business operation/action rights based on roles and several business modules like Stock Receive, Stock Transfer, Sale Order, Sale Invoice, Sale Return, Stock Audit etc. and several reports. The application is a WinForm application since it has a lot of rich and responsive UI requirements and has to operate in disconnected mode (with a local SQL Server), most of the time. What have I done I have built a framework - nothing to boast about, but just a set of libraries that serves the repetative requirements of my application, e.g. authentication, role based authorization, data access, validation, exception handling, logging, change status tracking, presentation model compliance and reasonable loose coupling between components. No, I have not written everything from scratch, you can say I have consolidated many things together like some concepts from CSLA, Martin Fowler for Presentation Model, blocks from Enterprise Library, Unity etc. to build a set of libraries that will help my developers be productive quickly without having to look up Google for many of the technical requirements. I have tried to keep the framework generic so that it can be used in typical business applications and also tried to follow some best practices that will support the same Business Objects to be used in an ASP.NET MVC environment also. My present architecture serves my objectives well, and have built several modules (on WinForm) without much trouble. The architecture also lent itself well to build some usable prototype on ASP.NET MVC with the same set of business objects, without changing a single line of code. My Dilemma I have used Custom Business Objects since that gives me a clearer OOP representation of the problem scope in my solution scope, and helps me visualize my entire solution as collection of objects with data and behavior rather than having a set of relational data (DataSet) and implement behaviours (business logic, validation) etc. separately. With rich databinding support in .NET 2.0 binding Custom Business Objects to UI was a breeze. Now while building my business objects, I am still in a dilemma about representation of collections in business objects. Currently I am using DataSets to represent collections while I have seen many suggestions to implement custom collections. For example, in my vision, a typical Sale Invoice Object will contain 'Sales Invoice Items' as a collection. Now theoritically, I can accept that the each 'Sales Invoice Item' should have its own behavior along with their data (ItemCode, Name, Qty, Price etc.) but typically managing of Sale Invoice Items in a Sale Invoice is handled by the Sale Invoice Object itself, e.g. adding/removing Items from collection. Additionally, we can also put business logic/rules for the Sales Invoice Items like "Qty should not be greater than the ordered qty", "Price should be max 10% above the price in Sale Order" etc. in the Sale Invoice object itself. 
With that kind of a vision, I felt that most business object child collections can be managed by the parent itself, including add/remove from collection as well and implementing business logic for the collection items, hence the collection items hold nothing but data. Additionally, typical collections are represented in UI in Grids, where ability to support DataBinding becomes very important for any collection. Implementing a custom collection, in that case would also mean, I have to implement robust DataBinding support as well, for the collection, which is of course time consuming. Now, considering child collection behaviors are implemented in the parent and the need for DataBinding of child collections, I chose DataSet to represent any child collection in my business objects. In the above example of Sale Invoice I will have 'Invoice Number', 'Date', 'Customer' etc. as attributes of the 'Sale Invoice' but 'InvoiceItems' as a DataSet. Of course, when I say DataSet, it is not a vanilla dataset but an extended DataSet that supports business rule validation and the same role based security model of my framework to allow/deny any business operation to rows/columns of the DataSet, automatically. This approach has allowed easier collection management and databinding in my business objects and my developers are able to deliver modules rapidly. Questions Do you feel that the approach is reasonable? Do you see any shortcomings of this approach? I am recently thinking of using 'Typed DataSets' as child collections, for easier representation in code, that will allow me to write 'currentInvoice.InvoiceItems' (for the DataTable) and 'invoiceItem.ProductCode' or 'invoiceItem.Qty', instead of 'drow["ProductCode"].ToString()' or '(int)drow["Qty"]' etc. Does this choice have any demerits? Thank you if you have read so far and a salute if you still have the Energy to answer.
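As a point of comparison for the questions above, here is a hedged, minimal sketch of what the custom-collection alternative can look like when BindingList<T> is used for the child collection; it keeps grid databinding without a DataSet, while the parent still owns the rules. All names are illustrative.

using System;
using System.ComponentModel;

public class SaleInvoiceItem
{
    public string ProductCode { get; set; }
    public int Qty { get; set; }
    public decimal Price { get; set; }
}

public class SaleInvoice
{
    private readonly BindingList<SaleInvoiceItem> _items = new BindingList<SaleInvoiceItem>();

    public string InvoiceNumber { get; set; }
    public DateTime Date { get; set; }

    // Bindable to a grid much like a DataTable would be.
    public BindingList<SaleInvoiceItem> InvoiceItems { get { return _items; } }

    // Add/remove and the business rules stay on the parent, as described above.
    public void AddItem(SaleInvoiceItem item, int orderedQty)
    {
        if (item.Qty > orderedQty)
            throw new InvalidOperationException("Qty should not be greater than the ordered qty.");
        _items.Add(item);
    }
}

Whether this beats the extended-DataSet approach depends mostly on how much of the validation and security plumbing the framework already provides for DataSets; typed DataSets would give similar member-style access (currentInvoice.InvoiceItems, invoiceItem.Qty) at the cost of generated code and tighter schema coupling.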

    Read the article

  • C# Generic Arrays and math operations on it

    - by msedi
    Hello, I'm currently involved in a project where I have very large image volumes. This volumes have to processed very fast (adding, subtracting, thresholding, and so on). Additionally most of the volume are so large that they event don't fit into the memory of the system. For that reason I have created an abstract volume class (VoxelVolume) that host the volume and image data and overloads the operators so that it's possible to perform the regular mathematical operations on volumes. Thereby two more questions opened up which I will put into stackoverflow into two additional threads. Here is my first question. My volume is implemented in a way that it only can contain float array data, but most of the containing data is from an UInt16 image source. Only operations on the volume can create float array images. When I started implementing such a volume the class looked like following: public abstract class VoxelVolume<T> { ... } but then I realized that overloading the operators or return values would get more complicated. An example would be: public abstract class VoxelVolume<T> { ... public static VoxelVolume<T> Import<T>(param string[] files) { } } also adding two overloading operators would be more complicated: ... public static VoxelVolume<T> operator+(VoxelVolume<T> A, VoxelVolume<T> B) { ... } Let's assume I can overcome the problems described above, nevertheless I have different types of arrays that contain the image data. Since I have fixed my type in the volumes to float the is no problem and I can do an unsafe operation when adding the contents of two image volume arrays. I have read a few threads here and had a look around the web, but found no real good explanation of what to do when I want to add two arrays of different types in a fast way. Unfortunately every math operation on generics is not possible, since C# is not able to calculate the size of the underlying data type. Of course there might by a way around this problem by using C++/CLR, but currently everything I have done so far, runs in 32bit and 64bit without having to do a thing. Switching to C++/CLR seemed to me (pleased correct me if I'm wrong) that I'm bound to a certain platform (32bit) and I have to compile two assemblies when I let the application run on another platform (64bit). Is this true? So asked shortly: How is it possible to add two arrays of two different types in a fast way. Is it true that the developers of C# haven't thought about this. Switching to a different language (C# - C++) seems not to be an option. I realize that simply performing this operation float []A = new float[]{1,2,3}; byte []B = new byte[]{1,2,3}; float []C = A+B; is not possible and unnecessary although it would be nice if it would work. My solution I was trying was following: public static class ArrayExt { public static unsafe TResult[] Add<T1, T2, TResult>(T1 []A, T2 []B) { // Assume the length of both arrays is equal TResult[] result = new TResult[A.Length]; GCHandle h1 = GCHandle.Alloc (A, Pinned); GCHandle h2 = GCHandle.Alloc (B, Pinned); GCHandle hR = GCHandle.Alloc (C, Pinned); void *ptrA = h1.ToPointer(); void *ptrB = h2.ToPointer(); void *ptrR = hR.ToPointer(); for (int i=0; i<A.Length; i++) { *((TResult *)ptrR) = (TResult *)((T1)*ptrA + (T2)*ptrB)); } h1.Free(); h2.Free(); hR.Free(); return result; } } Please excuse if the code above is not quite correct, I wrote it without using an C# editor. Is such a solution a shown above thinkable? Please feel free to ask if I made a mistake or described some things incompletely. 
Thanks for your help Martin
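For what it's worth, here is a hedged sketch of one commonly used workaround that stays in plain C# and avoids pointers entirely: let the caller supply the arithmetic as a delegate, since the compiler cannot infer "+" for unconstrained generic types. This is illustrative only and makes no claim about being the fastest possible option.

using System;

public static class ArrayExt
{
    // Adds two arrays element-wise; the caller provides the conversion/arithmetic.
    public static TResult[] Add<T1, T2, TResult>(T1[] a, T2[] b, Func<T1, T2, TResult> add)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Arrays must have the same length.");

        TResult[] result = new TResult[a.Length];
        for (int i = 0; i < a.Length; i++)
            result[i] = add(a[i], b[i]);
        return result;
    }
}

// Usage (e.g. inside Main) for the float[] + byte[] example from the question:
float[] A = new float[] { 1, 2, 3 };
byte[] B = new byte[] { 1, 2, 3 };
float[] C = ArrayExt.Add(A, B, (x, y) => x + y);

The delegate call has a cost per element; if profiling shows it matters, the usual next steps are a handful of non-generic overloads for the hot type combinations, or unsafe code written against concrete element types rather than generics.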

    Read the article

  • Design pattern for cost calculator app?

    - by Anders Svensson
    Hi, I have a problem that I’ve tried to get help for before, but I wasn’t able to solve it then, so I’m trying to simplify the problem now to see if I can get some more concrete help with this because it is driving me crazy… Basically, I have a working (more complex) version of this application, which is a project cost calculator. But because I am at the same time trying to learn to design my applications better, I would like some input on how I could improve this design. Basically the main thing I want is input on the conditionals that (here) appear repeated in two places. The suggestions I got before was to use the strategy pattern or factory pattern. I also know about the Martin Fowler book with the suggestion to Refactor conditional with polymorphism. I understand that principle in his simpler example. But how can I do either of these things here (if any would be suitable)? The way I see it, the calculation is dependent on a couple of conditions: 1. What kind of service is it, writing or analysis? 2. Is the project small, medium or large? (Please note that there may be other parameters as well, equally different, such as “are the products new or previously existing?” So such parameters should be possible to add, but I tried to keep the example simple with only two parameters to be able to get concrete help) So refactoring with polymorphism would imply creating a number of subclasses, which I already have for the first condition (type of service), and should I really create more subclasses for the second condition as well (size)? What would that become, AnalysisSmall, AnalysisMedium, AnalysisLarge, WritingSmall, etc…??? No, I know that’s not good, I just don’t see how to work with that pattern anyway else? I see the same problem basically for the suggestions of using the strategy pattern (and the factory pattern as I see it would just be a helper to achieve the polymorphism above). So please, if anyone has concrete suggestions as to how to design these classes the best way I would be really grateful! Please also consider whether I have chosen the objects correctly too, or if they need to be redesigned. (Responses like "you should consider the factory pattern" will obviously not be helpful... 
I've already been down that road and I'm stumped at precisely how in this case) Regards, Anders The code (very simplified, don’t mind the fact that I’m using strings instead of enums, not using a config file for data etc, that will be done as necessary in the real application once I get the hang of these design problems): public abstract class Service { protected Dictionary<string, int> _hours; protected const int SMALL = 2; protected const int MEDIUM = 8; public int NumberOfProducts { get; set; } public abstract int GetHours(); } public class Writing : Service { public Writing(int numberOfProducts) { NumberOfProducts = numberOfProducts; _hours = new Dictionary<string, int> { { "small", 125 }, { "medium", 100 }, { "large", 60 } }; } public override int GetHours() { if (NumberOfProducts <= SMALL) return _hours["small"] * NumberOfProducts; if (NumberOfProducts <= MEDIUM) return (_hours["small"] * SMALL) + (_hours["medium"] * (NumberOfProducts - SMALL)); return (_hours["small"] * SMALL) + (_hours["medium"] * (MEDIUM - SMALL)) + (_hours["large"] * (NumberOfProducts - MEDIUM)); } } public class Analysis : Service { public Analysis(int numberOfProducts) { NumberOfProducts = numberOfProducts; _hours = new Dictionary<string, int> { { "small", 56 }, { "medium", 104 }, { "large", 200 } }; } public override int GetHours() { if (NumberOfProducts <= SMALL) return _hours["small"]; if (NumberOfProducts <= MEDIUM) return _hours["medium"]; return _hours["large"]; } } public partial class Form1 : Form { public Form1() { InitializeComponent(); List<int> quantities = new List<int>(); for (int i = 0; i < 100; i++) { quantities.Add(i); } comboBoxNumberOfProducts.DataSource = quantities; } private void comboBoxNumberOfProducts_SelectedIndexChanged(object sender, EventArgs e) { Service writing = new Writing((int) comboBoxNumberOfProducts.SelectedItem); Service analysis = new Analysis((int) comboBoxNumberOfProducts.SelectedItem); labelWriterHours.Text = writing.GetHours().ToString(); labelAnalysisHours.Text = analysis.GetHours().ToString(); } }
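Since concrete suggestions were explicitly asked for, here is one hedged sketch (assuming the tier breakpoints and rates really are the only thing that varies per service, and that marginal pricing like Writing's is the common case). The idea is to turn the repeated small/medium/large conditionals into data, so adding a service or a size means adding a table entry rather than another if/else or another subclass per combination:

using System;
using System.Collections.Generic;

public class RateTier
{
    public int UpTo;        // highest product count this tier still covers
    public int HoursEach;   // hours charged per product inside the tier
    public RateTier(int upTo, int hoursEach) { UpTo = upTo; HoursEach = hoursEach; }
}

public abstract class Service
{
    protected List<RateTier> Tiers;
    public int NumberOfProducts { get; set; }

    // Marginal pricing: each product is billed at the rate of the tier it falls in.
    public virtual int GetHours()
    {
        int hours = 0, counted = 0;
        foreach (RateTier tier in Tiers)
        {
            int inTier = Math.Min(NumberOfProducts, tier.UpTo) - counted;
            if (inTier <= 0) break;
            hours += inTier * tier.HoursEach;
            counted += inTier;
        }
        return hours;
    }
}

public class Writing : Service
{
    public Writing(int numberOfProducts)
    {
        NumberOfProducts = numberOfProducts;
        Tiers = new List<RateTier>
        {
            new RateTier(2, 125), new RateTier(8, 100), new RateTier(int.MaxValue, 60)
        };
    }
}

Analysis, which charges a flat figure per bracket rather than per product, would override GetHours() with a simple bracket lookup, or receive a different "pricing policy" object; that is where the strategy pattern re-enters without needing a class per service/size combination. A further parameter such as "new vs. existing products" then becomes another input to the table lookup rather than another axis of subclasses.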

    Read the article

  • Oracle Database 12c: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c????Information Lifecycle Management ILM ?????????Storage Enhancements ???????? Lifecycle Management ILM ????????? Automatic Data Placement ??????, ??ADP? ?????? 12c???????Datafile??? Online Move Datafile, ????????????????datafile???????,??????????????? ????(12.1.0.1)Automatic Data Optimization?heat map????????: ????????? (CDB)?????Automatic Data Optimization?heat map Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns. Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column. Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level. ADO does not perform checks for storage space in a target tablespace when using storage tiering. ADO is not supported on tables with object types or materialized views. ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler. If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later. Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode. ADO has restrictions related to moving tables and table partitions. ??????row,segment???????????ADO??,?????create table?alter table?????? ????ADO??,??????????????,???????????????? storage tier , ?????????storage tier?????????, ??????????????ADO??????????? segment?row??group? ?CREATE TABLE?ALERT TABLE???ILM???,??????????????????ADO policy? ??ILM policy???????????????? ??????? ????ADO policy, ?????alter table  ???????,?????????????? CREATE TABLE sales_ado (PROD_ID NUMBER NOT NULL, CUST_ID NUMBER NOT NULL, TIME_ID DATE NOT NULL, CHANNEL_ID NUMBER NOT NULL, PROMO_ID NUMBER NOT NULL, QUANTITY_SOLD NUMBER(10,2) NOT NULL, AMOUNT_SOLD NUMBER(10,2) NOT NULL ) ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled 2 FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLED -------------------- -------------------------- -------------- P41 DATA MOVEMENT YES ALTER TABLE sales MODIFY PARTITION sales_1995 ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLE ------------------------ ------------- ------ P1 DATA MOVEMENT YES P2 DATA MOVEMENT YES /* You can disable an ADO policy with the following */ ALTER TABLE sales_ado ILM DISABLE POLICY P1; /* You can delete an ADO policy with the following */ ALTER TABLE sales_ado ILM DELETE POLICY P1; /* You can disable all ADO policies with the following */ ALTER TABLE sales_ado ILM DISABLE_ALL; /* You can delete all ADO policies with the following */ ALTER TABLE sales_ado ILM DELETE_ALL; /* You can disable an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2; /* You can delete an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2; ILM ???????: ?????ILM ADP????,???????: ?????? ???? 
activity tracking, ????2????????,???????????????????: SEGMENT-LEVEL???????????????????? ROW-LEVEL????????,??????? ????????: 1??????? SEGMENT-LEVEL activity tracking ALTER TABLE interval_sales ILM  ENABLE ACTIVITY TRACKING SEGMENT ACCESS ???????INTERVAL_SALES??segment level  activity tracking,?????????????????? 2? ??????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME); 3????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING  (READ TIME); ?12.1.0.1.0?????? ??HEAT_MAP??????????, ?????system??session?????heap_map????????????? ?????????HEAT MAP??,? ALTER SYSTEM SET HEAT_MAP = ON; ?HEAT MAP??????,??????????????????????????  ??SYSTEM?SYSAUX????????????? ???????HEAT MAP??: ALTER SYSTEM SET HEAT_MAP = OFF; ????? HEAT_MAP????, ?HEAT_MAP??? ?????????????????????? ?HEAT_MAP?????????Automatic Data Optimization (ADO)??? ??ADO??,Heat Map ?????????? ????V$HEAT_MAP_SEGMENT ??????? HEAT MAP?? SQL> select * from V$heat_map_segment; no rows selected SQL> alter session set heat_map=on; Session altered. SQL> select * from scott.emp; EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 7369 SMITH CLERK 7902 17-DEC-80 800 20 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30 7566 JONES MANAGER 7839 02-APR-81 2975 20 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 30 7782 CLARK MANAGER 7839 09-JUN-81 2450 10 7788 SCOTT ANALYST 7566 19-APR-87 3000 20 7839 KING PRESIDENT 17-NOV-81 5000 10 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30 7876 ADAMS CLERK 7788 23-MAY-87 1100 20 7900 JAMES CLERK 7698 03-DEC-81 950 30 7902 FORD ANALYST 7566 03-DEC-81 3000 20 7934 MILLER CLERK 7782 23-JAN-82 1300 10 14 rows selected. SQL> select * from v$heat_map_segment; OBJECT_NAME SUBOBJECT_NAME OBJ# DATAOBJ# TRACK_TIM SEG SEG FUL LOO CON_ID -------------------- -------------------- ---------- ---------- --------- --- --- --- --- ---------- EMP 92997 92997 23-JUL-13 NO NO YES NO 0 ??v$heat_map_segment???,?v$heat_map_segment??????????????X$HEATMAPSEGMENT V$HEAT_MAP_SEGMENT displays real-time segment access information. Column Datatype Description OBJECT_NAME VARCHAR2(128) Name of the object SUBOBJECT_NAME VARCHAR2(128) Name of the subobject OBJ# NUMBER Object number DATAOBJ# NUMBER Data object number TRACK_TIME DATE Timestamp of current activity tracking SEGMENT_WRITE VARCHAR2(3) Indicates whether the segment has write access: (YES or NO) SEGMENT_READ VARCHAR2(3) Indicates whether the segment has read access: (YES or NO) FULL_SCAN VARCHAR2(3) Indicates whether the segment has full table scan: (YES or NO) LOOKUP_SCAN VARCHAR2(3) Indicates whether the segment has lookup scan: (YES or NO) CON_ID NUMBER The ID of the container to which the data pertains. Possible values include:   0: This value is used for rows containing data that pertain to the entire CDB. This value is also used for rows in non-CDBs. 1: This value is used for rows containing data that pertain to only the root n: Where n is the applicable container ID for the rows containing data The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored. ??HEAP MAP??????????????????,????DBA_HEAT_MAP_SEGMENT???????? ???????HEAT_MAP_STAT$?????? ??Automatic Data Optimization??????: ????1: SQL> alter system set heat_map=on; ?????? ????????????? scott?? http://www.askmaclean.com/archives/scott-schema-script.html SQL> grant all on dbms_lock to scott; ????? 
SQL> grant dba to scott; ????? @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf @tktgilm_demo_env_setup SQL> connect scott/tiger ; ???? SQL> select count(*) from scott.employee; COUNT(*) ---------- 3072 ??? 1 ?? SQL> set serveroutput on SQL> exec print_compression_stats('SCOTT','EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 3072 Adv/basic compressed : 0 Others : 0 PL/SQL ???????? ???????3072?????? ????????? ????policy ???????????? alter table employee ilm add policy row store compress advanced row after 3 days of no modification / SQL> set serveroutput on SQL> execute list_ilm_policies; -------------------------------------------------- Policies defined for SCOTT -------------------------------------------------- Object Name------ : EMPLOYEE Subobject Name--- : Object Type------ : TABLE Inherited from--- : POLICY NOT INHERITED Policy Name------ : P1 Action Type------ : COMPRESSION Scope------------ : ROW Compression level : ADVANCED Tier Tablespace-- : Condition type--- : LAST MODIFICATION TIME Condition days--- : 3 Enabled---------- : YES -------------------------------------------------- PL/SQL ???????? SQL> select sysdate from dual; SYSDATE -------------- 29-7? -13 SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6); Object check time reset ... -------------------------------------- Object Name : EMPLOYEE Object Number : 93123 D.Object Numbr : 93123 Policy Number : 1 Object chktime : 23-7? -13 08.13.42.000000 ?? Distnt chktime : 0 -------------------------------------- PL/SQL ???????? ?policy?chktime???6??, ????set_back_chktime???????????????“????”?,?????????,???????? ?????? alter system flush buffer_cache; alter system flush buffer_cache; alter system flush shared_pool; alter system flush shared_pool; SQL> execute set_window('MONDAY_WINDOW','OPEN'); Set Maint. Window OPEN ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : TRUE ----------------------------- PL/SQL ???????? SQL> exec dbms_lock.sleep(60) ; PL/SQL ???????? SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 338 Adv/basic compressed : 2734 Others : 0 PL/SQL ???????? ??????????????? Adv/basic compressed : 2734 ??????? SQL> col object_name for a20 SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE'; OBJECT_ID OBJECT_NAME ---------- -------------------- 93123 EMPLOYEE SQL> execute list_ilm_policy_executions ; -------------------------------------------------- Policies execution details for SCOTT -------------------------------------------------- Policy Name------ : P22 Job Name--------- : ILMJOB48 Start time------- : 29-7? -13 08.37.45.061000 ?? End time--------- : 29-7? -13 08.37.48.629000 ?? ----------------- Object Name------ : EMPLOYEE Sub_obj Name----- : Obj Type--------- : TABLE ----------------- Exec-state------- : SELECTED FOR EXECUTION Job state-------- : COMPLETED SUCCESSFULLY Exec comments---- : Results comments- : --- -------------------------------------------------- PL/SQL ???????? ILMJOB48?????policy?JOB,?12.1.0.1??J00x???? ?MMON_SLAVE???M00x???15????????? select sample_time,program,module,action from v$active_session_history where action ='KDILM background EXEcution' order by sample_time; 29-7? -13 08.16.38.369000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.38.388000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.39.390000000 ?? 
ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.23.38.681000000 ?? ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.32.38.968000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.39.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.40.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.36.40.066000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.42.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.43.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.44.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.38.42.386000000 ?? ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution select distinct action from v$active_session_history where action like 'KDILM%' KDILM background CLeaNup KDILM background EXEcution SQL> execute set_window('MONDAY_WINDOW','CLOSE'); Set Maint. Window CLOSE ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : FALSE ----------------------------- PL/SQL ???????? SQL> drop table employee purge ; ????? ???? ????? spool ilm_usecase_1_cleanup.lst @ilm_demo_cleanup ; spool off

    Read the article

  • Ubuntu server has slow performance

    - by Rich
    I have a custom built Ubuntu 11.04 server with a 6 disk software RAID 10 primary drive. On it I'm primarily running a PostgreSQL and a few other utilities that stream data from the web. I often find after a few hours of uptime the server starts to lag with all kinds of processes. For example, it may take 10-15 seconds after log-in to get a shell prompt. It might take 5-10 seconds for top to come up. An ls might take a second or two. When I look at top there is almost no CPU usage. There's a fair amount of memory used by the PostgreSQL server but not enough to bleed into swap. I have no idea where to go from here, other than to suspect the RAID10 (I've only ever had software RAID 1's before). Edit: Output from top: top - 11:56:03 up 1:46, 3 users, load average: 0.89, 0.73, 0.72 Tasks: 119 total, 1 running, 118 sleeping, 0 stopped, 0 zombie Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 93.5%id, 6.2%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 16325596k total, 3478248k used, 12847348k free, 20880k buffers Swap: 19534176k total, 0k used, 19534176k free, 3041992k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1747 woodsp 20 0 109m 10m 4888 S 1 0.1 0:42.70 python 357 root 20 0 0 0 0 S 0 0.0 0:00.40 jbd2/sda3-8 1 root 20 0 24324 2284 1344 S 0 0.0 0:00.84 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root 20 0 0 0 0 S 0 0.0 0:00.24 ksoftirqd/0 6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0 7 root RT 0 0 0 0 S 0 0.0 0:00.01 watchdog/0 8 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1 10 root 20 0 0 0 0 S 0 0.0 0:00.02 ksoftirqd/1 12 root RT 0 0 0 0 S 0 0.0 0:00.01 watchdog/1 13 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/2 14 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/2:0 15 root 20 0 0 0 0 S 0 0.0 0:00.00 ksoftirqd/2 16 root RT 0 0 0 0 S 0 0.0 0:00.01 watchdog/2 17 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/3 18 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/3:0 19 root 20 0 0 0 0 S 0 0.0 0:00.02 ksoftirqd/3 20 root RT 0 0 0 0 S 0 0.0 0:00.01 watchdog/3 21 root 0 -20 0 0 0 S 0 0.0 0:00.00 cpuset 22 root 0 -20 0 0 0 S 0 0.0 0:00.00 khelper 23 root 20 0 0 0 0 S 0 0.0 0:00.00 kdevtmpfs 24 root 0 -20 0 0 0 S 0 0.0 0:00.00 netns 26 root 20 0 0 0 0 S 0 0.0 0:00.00 sync_supers df -h rpsharp@ncp-skookum:~$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 1.8T 549G 1.2T 32% / udev 7.8G 4.0K 7.8G 1% /dev tmpfs 3.2G 492K 3.2G 1% /run none 5.0M 0 5.0M 0% /run/lock none 7.8G 0 7.8G 0% /run/shm /dev/sda2 952M 128K 952M 1% /boot/efi /dev/md0 5.5T 562G 4.7T 11% /usr/local free -m psharp@ncp-skookum:~$ free -m total used free shared buffers cached Mem: 15942 3409 12533 0 20 2983 -/+ buffers/cache: 405 15537 Swap: 19076 0 19076 tail -50 /var/log/syslog Jul 3 06:31:32 ncp-skookum rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="1070" x-info="http://www.rsyslog.com"] rsyslogd was HUPed Jul 3 06:39:01 ncp-skookum CRON[14211]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! 
-execdir fuser -s {} 2>/dev/null \; -delete) Jul 3 06:40:01 ncp-skookum CRON[14223]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jul 3 07:00:01 ncp-skookum CRON[14328]: (woodsp) CMD (/home/woodsp/bin/mail_tweetupdate # email an update) Jul 3 07:00:01 ncp-skookum CRON[14327]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jul 3 07:00:28 ncp-skookum sendmail[14356]: q63E0SoZ014356: from=woodsp, size=2328, class=0, nrcpts=2, msgid=<[email protected]>, relay=woodsp@localhost Jul 3 07:00:29 ncp-skookum sm-mta[14357]: q63E0Si6014357: from=<[email protected]>, size=2569, class=0, nrcpts=2, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1] Jul 3 07:00:29 ncp-skookum sendmail[14356]: q63E0SoZ014356: to=Spencer Wood <[email protected]>,Martin Lacayo <[email protected]>, ctladdr=woodsp (1004/1005), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=62328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (q63E0Si6014357 Message accepted for delivery) Jul 3 07:00:29 ncp-skookum sm-mta[14359]: STARTTLS=client, relay=mx3.stanford.edu., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256 Jul 3 07:00:29 ncp-skookum sm-mta[14359]: q63E0Si6014357: to=<[email protected]>,<[email protected]>, ctladdr=<[email protected]> (1004/1005), delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=152569, relay=mx3.stanford.edu. [171.67.219.73], dsn=2.0.0, stat=Sent (Ok: queued as 8F3505802AC) Jul 3 07:09:08 ncp-skookum CRON[14396]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete) Jul 3 07:17:01 ncp-skookum CRON[14438]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jul 3 07:20:01 ncp-skookum CRON[14453]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jul 3 07:39:01 ncp-skookum CRON[14551]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete) Jul 3 07:40:01 ncp-skookum CRON[14562]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jul 3 08:00:01 ncp-skookum CRON[14668]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jul 3 08:09:01 ncp-skookum CRON[14724]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete) Jul 3 08:17:01 ncp-skookum CRON[14766]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jul 3 08:20:01 ncp-skookum CRON[14781]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jul 3 08:39:01 ncp-skookum CRON[14881]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete) Jul 3 08:40:01 ncp-skookum CRON[14892]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Output of hdparm -t /dev/sd{a,b,c,d,e,f} This looks suspicious? 
/dev/sda: Timing buffered disk reads: 2 MB in 4.84 seconds = 423.39 kB/sec /dev/sdb: Timing buffered disk reads: 420 MB in 3.01 seconds = 139.74 MB/sec /dev/sdc: Timing buffered disk reads: 390 MB in 3.00 seconds = 129.87 MB/sec /dev/sdd: Timing buffered disk reads: 416 MB in 3.00 seconds = 138.51 MB/sec /dev/sde: Timing buffered disk reads: 422 MB in 3.00 seconds = 140.50 MB/sec /dev/sdf: Timing buffered disk reads: 416 MB in 3.01 seconds = 138.26 MB/sec

    Read the article

  • How to use sudo with WinSCP and ProFTPd?

    - by Gaia
    I need to run the SFTP fileserver binary as root, but direct root login is not allowed. In WinSCP, if I use "default" on SFTP server protocol option everything works as expected. Following the instructions to sudo in WinSCP, I tried using "sudo /usr/sbin/proftpd" (works on the command line without any prompts) but it brings up "Cannot initialize SFTP protocol. Is the host running a SFTP server?" How to use sudo with WinSCP and ProFTPd? WinSCP 4.3.7 GUI Protocol: SFTP-3 CentOS 6.2 Webmin/Virtualmin (Current Version) PS: only cert based login is allowed . 2012-06-17 11:05:56.998 -------------------------------------------------------------------------- . 2012-06-17 11:05:56.998 WinSCP Version 4.3.7 (Build 1679) (OS 6.1.7601 Service Pack 1) . 2012-06-17 11:05:56.998 Configuration: HKEY_CURRENT_USER\Software\Martin Prikryl\WinSCP 2\ . 2012-06-17 11:05:56.999 Login time: Sunday, June 17, 2012 11:05:56 AM . 2012-06-17 11:05:56.999 -------------------------------------------------------------------------- . 2012-06-17 11:05:56.999 Session name: KVM1 (Modified stored session) . 2012-06-17 11:05:57.047 Host name: mykvm.com (Port: 22) . 2012-06-17 11:05:57.048 User name: adminuser (Password: No, Key file: Yes) . 2012-06-17 11:05:57.048 Tunnel: No . 2012-06-17 11:05:57.048 Transfer Protocol: SFTP (SCP) . 2012-06-17 11:05:57.048 Ping type: -, Ping interval: 30 sec; Timeout: 15 sec . 2012-06-17 11:05:57.048 Proxy: none . 2012-06-17 11:05:57.048 SSH protocol version: 2; Compression: Yes . 2012-06-17 11:05:57.048 Bypass authentication: No . 2012-06-17 11:05:57.048 Try agent: Yes; Agent forwarding: No; TIS/CryptoCard: No; KI: Yes; GSSAPI: No . 2012-06-17 11:05:57.048 Ciphers: aes,blowfish,3des,WARN,arcfour,des; Ssh2DES: No . 2012-06-17 11:05:57.048 SSH Bugs: -,-,-,-,-,-,-,-,- . 2012-06-17 11:05:57.048 SFTP Bugs: -,- . 2012-06-17 11:05:57.048 Return code variable: Autodetect; Lookup user groups: Yes . 2012-06-17 11:05:57.048 Shell: default . 2012-06-17 11:05:57.048 EOL: 0, UTF: 2 . 2012-06-17 11:05:57.048 Clear aliases: Yes, Unset nat.vars: Yes, Resolve symlinks: Yes . 2012-06-17 11:05:57.048 LS: ls -la, Ign LS warn: Yes, Scp1 Comp: No . 2012-06-17 11:05:57.048 Local directory: default, Remote directory: home, Update: No, Cache: Yes . 2012-06-17 11:05:57.048 Cache directory changes: Yes, Permanent: Yes . 2012-06-17 11:05:57.048 DST mode: 1 . 2012-06-17 11:05:57.048 -------------------------------------------------------------------------- . 2012-06-17 11:05:57.113 Looking up host "mykvm.com" . 2012-06-17 11:05:57.132 Connecting to xxx.xxx.128.59 port 22 . 2012-06-17 11:05:57.499 Server version: SSH-2.0-OpenSSH_5.3 . 2012-06-17 11:05:57.499 Using SSH protocol version 2 . 2012-06-17 11:05:57.499 We claim version: SSH-2.0-WinSCP_release_4.3.7 . 2012-06-17 11:05:57.679 Server supports delayed compression; will try this later . 2012-06-17 11:05:57.679 Doing Diffie-Hellman group exchange . 2012-06-17 11:05:58.077 Doing Diffie-Hellman key exchange with hash SHA-1 . 2012-06-17 11:05:58.498 Host key fingerprint is: . 2012-06-17 11:05:58.498 ssh-rsa 2048 bd:e4:34:b1:d4:69:d6:4e:e4:26:04:8b:b7:b3:de:c3 . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR client->server encryption . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 client->server MAC algorithm . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR server->client encryption . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 server->client MAC algorithm . 2012-06-17 11:05:58.922 Reading private key file "D:\id_rsa.ppk" ! 
2012-06-17 11:05:58.924 Using username "adminuser". . 2012-06-17 11:05:59.550 Offered public key . 2012-06-17 11:05:59.743 Offer of public key accepted ! 2012-06-17 11:05:59.743 Authenticating with public key "masterkey for admin" . 2012-06-17 11:05:59.764 Prompt (3, SSH key passphrase, , Passphrase for key "masterkey for admin": ) . 2012-06-17 11:06:02.938 Sent public key signature . 2012-06-17 11:06:03.352 Access granted . 2012-06-17 11:06:03.352 Initiating key re-exchange (enabling delayed compression) . 2012-06-17 11:06:03.765 Doing Diffie-Hellman group exchange . 2012-06-17 11:06:03.955 Doing Diffie-Hellman key exchange with hash SHA-1 . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR client->server encryption . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 client->server MAC algorithm . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) compression . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR server->client encryption . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 server->client MAC algorithm . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) decompression . 2012-06-17 11:06:04.839 Opened channel for session . 2012-06-17 11:06:05.247 Started a shell/command . 2012-06-17 11:06:05.253 -------------------------------------------------------------------------- . 2012-06-17 11:06:05.253 Using SFTP protocol. . 2012-06-17 11:06:05.253 Doing startup conversation with host. > 2012-06-17 11:06:05.259 Type: SSH_FXP_INIT, Size: 5, Number: -1 . 2012-06-17 11:06:05.354 Server sent command exit status 0 . 2012-06-17 11:06:05.354 Disconnected: All channels closed * 2012-06-17 11:06:05.380 (ESshFatal) Connection has been unexpectedly closed. Server sent command exit status 0. * 2012-06-17 11:06:05.380 Cannot initialize SFTP protocol. Is the host running a SFTP server?

    Read the article

  • Use Extension method to write cleaner code

    - by Fredrik N
    This blog post will show you, step by step, how to refactor some code to be more readable (at least in my opinion). Patrik Löwnedahl gave me some of the ideas when we were talking about making code much cleaner. The following is a simple application that has a list of movies (Normal and Transfer). The task of the application is to calculate the total price of each kind of movie and also display the price of each movie. class Program { enum MovieType { Normal, Transfer } static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies) { if (movie == MovieType.Normal) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } else if (movie == MovieType.Transfer) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } } } private static IEnumerable<MovieType> GetMovies() { return new List<MovieType>() { MovieType.Normal, MovieType.Transfer, MovieType.Normal }; } } In the code above I'm using an enum, a good way to add types (isn't it ;)). I also use one foreach loop to calculate the price; the loop has a condition statement to check what kind of movie was added to the list of movies. I reuse the single foreach only to increase performance, and let it do two things (isn't that smart of me?! ;)). First of all I can admit, I'm not a big fan of enums. Enums often result in ugly condition statements and can be hard to maintain (if a new type is added we need to check all the code in our app to see if we use the enum somewhere else). I don't often care about pre-optimizations when it comes to writing code (of course I have performance in mind), and I would rather use two foreach loops and let each of them do one thing instead of two. So based on what I don't like, and on Martin Fowler's Refactoring catalog, I'm going to refactor this code into what I will call more elegant and cleaner code. First of all I'm going to use Split Loop to make sure each foreach does one thing, not two; it will result in two foreach loops. (Don't worry about performance here; if it results in bad performance you can refactor later, but computers are so fast today that iterating through a list is not often very time consuming.) Note: the foreach actually does four things, I will come to this later.
var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies) { if (movie == MovieType.Normal) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } } foreach (var movie in movies) { if (movie == MovieType.Transfer) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } } To remove the condition statement we can use the Where extension method, which is added to IEnumerable<T> and is located in the System.Linq namespace: foreach (var movie in movies.Where(m => m == MovieType.Normal)) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } foreach (var movie in movies.Where(m => m == MovieType.Transfer)) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } The above code will still do two things: calculate the total price, and display the price of the movie. I will not take care of that at the moment; instead I will focus on the enum and try to remove it. One way to remove an enum is by using Replace Conditional with Polymorphism. So I will create two classes, one base class called Movie, and one called MovieTransfer.
The Movie class will have a property called Price, so the Movie will now hold the price: public class Movie { public virtual int Price { get { return 2; } } } public class MovieTransfer : Movie { public override int Price { get { return 3; } } } The following code has no enum and uses the new Movie classes instead: class Program { static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies.Where(m => m is Movie)) { totalPriceOfNormalMovie += movie.Price; Console.WriteLine(movie.Price); } foreach (var movie in movies.Where(m => m is MovieTransfer)) { totalPriceOfTransferMovie += movie.Price; Console.WriteLine(movie.Price); } } private static IEnumerable<Movie> GetMovies() { return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() }; } } If you take a look at the foreach now, you can see it still actually does two things: calculate the price and display the price.
We can do some more refactoring here by using the Sum extension method to calculate the total price of the movies: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = movies.Where(m => m is Movie) .Sum(m => m.Price); int totalPriceOfTransferMovie = movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); foreach (var movie in movies.Where(m => m is Movie)) Console.WriteLine(movie.Price); foreach (var movie in movies.Where(m => m is MovieTransfer)) Console.WriteLine(movie.Price); } Now that the Movie object holds the price, there is no need to use two separate foreach loops to display the price of the movies in the list, so we can use only one instead: foreach (var movie in movies) Console.WriteLine(movie.Price); If we want to increase the Maintainability Index we can use Extract Method to move the Sum of the prices into two separate methods.
The names of the methods will explain what we are doing: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); foreach (var movie in movies) Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m is Movie) .Sum(m => m.Price); } Now to the last thing: I love the ForEach method of List<T>, but IEnumerable<T> doesn't have it, so I created my own ForEach extension. Here is the code of the ForEach extension method (the Contract class used for the argument checks lives in the System.Diagnostics.Contracts namespace): public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } } I will now replace the foreach by using this ForEach method: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(m => Console.WriteLine(m.Price)); } The ForEach on the movies will now display the price of each movie, but maybe we also want to display the name of the movie etc., so we can use Extract Method to move the lambda expression into a method instead, and let the method explain what we are displaying: movies.ForEach(DisplayMovieInfo); private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); }
Now the refactoring is done! Here is the complete code: class Program { static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(DisplayMovieInfo); } private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m is Movie) .Sum(m => m.Price); } private static IEnumerable<Movie> GetMovies() { return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() }; } } public class Movie { public virtual int Price { get { return 2; } } } public class MovieTransfer : Movie { public override int Price { get { return 3; } } } public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } } I think the new code is much cleaner than the first version, and I love the ForEach extension on IEnumerable<T>; I can use it for different kinds of things, for example: movies.Where(m => m is Movie) .ForEach(DoSomething); By using the Where and ForEach extension methods, some if statements can be removed, which makes the code much cleaner. But beauty is in the eye of the beholder.
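One detail worth noting about the filters above (my observation, not part of the original post): because MovieTransfer derives from Movie, the check "m is Movie" is also true for MovieTransfer instances, so TotalPriceOfMovie as written sums every movie in the list. If only the base Movie objects should be counted, an exact-type filter is one possible variation:

    // Hypothetical variation: only count objects whose runtime type is exactly Movie
    private static int TotalPriceOfMovie(IEnumerable<Movie> movies)
    {
        return movies.Where(m => m.GetType() == typeof(Movie))
                     .Sum(m => m.Price);
    }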
    What would you have done differently? Do you think the first example in the blog post looks cleaner than my result? Comments are welcome! If you want to know when I publish a new blog post, you can follow me on Twitter: http://www.twitter.com/fredrikn

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices, STAGE 4: AUTOMATED DEPLOYMENT. If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk, written by Phil Factor, about the need for continuous delivery and how it applies to the database – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and, hey presto, I’m ready? If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices. You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
- 2008 to present: overall development costs reduced by 40%
- Number of programs under development increased by 140%
- Development costs per program down 78%
- Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)
But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans. Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
- What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
- What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we’ve got: In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
- What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
- What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
- What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.
The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
- Dedicated development databases,
- An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
- QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
- Pre-production – the environment you use to test the production release process,
- Production.
* A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user. Actions: Get your environments set up and ready. Set access permissions appropriately. Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”
1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod,
2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
3. The user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
5. If all is working, run the script on production.*
* This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: immediately restore from backup; have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!); have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the most difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from, and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Design by Contract with Microsoft .Net Code Contract

    - by Fredrik N
    I have done some talks at different events and summits about Defensive Programming and Design by Contract; the last one was at Cornerstone’s Developer Summit 2010, and the next will be at SweNug (Sweden .Net User Group). I decided to write a blog post about some of the things I was talking about. Users are a terrible thing! Protect yourself from them. ”Human users have a gift for doing the worst possible thing at the worst possible time.” – Michael T. Nygard, Release It! The kind of users Michael T. Nygard is talking about are the users of a system. We also have users that use our code; the users I’m going to focus on are the users of our code: me, you and other developers. “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” – Martin Fowler Good programmers also write code that humans know how to use, and good programmers make sure software behaves in a predictable manner despite inputs or user actions. Design by Contract Design by Contract (DbC) is a way for us to make a contract between us (the code writers) and the users of our code. It’s about “If you give me this, I promise to give you this”. It’s not about business validations; that is something completely different that should be part of the domain model. DbC is there to make sure the users of our code use it in a correct way, and so that we can rely on the contract and write code knowing that the users will follow the contract. It will make it much easier for us to write code with a contract specified. Something like the following code is something we may see often: public void DoSomething(Object value) { value.DoIKnowThatICanDoThis(); } Here “value” can be used directly or passed to other methods and used later. What some of us can easily forget is that “value” can be “null”. We will probably not pass a null value, but someone else using our code might do it. I think most of you (including me) have passed “null” into a method because you don’t know if the argument needs to be set to a valid value etc. I bet most of you have also got the “Null reference exception”. Sometimes this “Null reference exception” can be hard and take time to fix, because we need to search through our code to see where the “null” value was passed in. Wouldn’t it be much better if we could specify, as early as possible, that the value can’t be null, so the users of our code know it when they start to use our code, and before run-time execution of the code? This is where DbC comes into the picture. We can use DbC to specify what we need, and by doing so we can rely on the contract when we write our code. So the code above can actually use the DoIKnowThatICanDoThis() method on the value object without being worried that “value” can be null.
The contract between the users of the code and us writing the code says that “value” can’t be null. Pre- and Postconditions When working with DbC we specify pre- and postconditions. A precondition is a condition that should be met before a query or command is executed. An example of a precondition is: “The Value argument of the method can’t be null”, and we make sure “value” isn’t null before the method is called. A postcondition is a condition that should be met when a command or query is completed; a postcondition will make sure the result is correct. An example of a postcondition is “The method will return a list with at least 1 item”. Commands and Queries When using DbC, we need to know what a Command and a Query is, because some principles that are good to follow are based on commands and queries. A Command is something that will not return anything, like SQL’s CREATE, UPDATE and DELETE. There are two kinds of Commands when using DbC: the Creation commands (for example a Constructor), and Others. Others can for example be a Command to add a value to a list, or to remove or update a value etc. //Creation commands public Stack(int size) //Other commands public void Push(object value); public void Remove(); A Query is something that will return something, for example an Attribute, Property or a Function, like SQL’s SELECT. There are two kinds of Queries: the Basic Queries (queries that aren’t based on other queries), and the Derived Queries (queries that are based on other queries). Here is an example of queries of a Stack: //Basic Queries public int Count; public object this[int index] { get; } //Derived Queries //Is related to Count Query public bool IsEmpty() { return Count == 0; } To understand the principles that are good to follow when using DbC, we need to know about the Commands and the different Queries. The 6 Principles When working with DbC, it’s advisable to follow some principles to make it easier to define and use contracts. The DbC principles are: 1. Separate commands and queries. 2. Separate basic queries from derived queries. 3. For each derived query, write a postcondition that specifies what result will be returned, in terms of one or more basic queries.
4. For each command, write a postcondition that specifies the value of every basic query. 5. For every query and command, decide on a suitable precondition. 6. Write invariants to define unchanging properties of objects. Before I write about each of them I want you to know that I’m going to use .Net 4.0 Code Contracts. In the rest of the post I will use a simple Stack (yes I know, .Net already has a Stack class) to give you a basic understanding of using DbC. A Stack is a data structure where the last item in will be the first item out. Here is a basic implementation of a Stack where no contract is specified yet: public class Stack { private object[] _array; //Basic Queries public uint Count; public object this[uint index] { get { return _array[index]; } set { _array[index] = value; } } //Derived Queries //Is related to Count Query public bool IsEmpty() { return Count == 0; } //Is related to Count and this[] Query public object Top() { return this[Count]; } //Creation commands public Stack(uint size) { Count = 0; _array = new object[size]; } //Other commands public void Push(object value) { this[++Count] = value; } public void Remove() { this[Count] = null; Count--; } } Note: The Stack is implemented in a way that demonstrates the use of Code Contracts in a simple manner; the implementation may not look like how you would implement it, so don’t think of this as the perfect Stack implementation, it is only used for demonstration. Before I go deeper into the principles I will briefly mention how we can use .Net Code Contracts. As I mentioned before, pre- and postconditions are about requiring something and ensuring something. When using Code Contracts, we use a static class called “Contract”, which is located in the “System.Diagnostics.Contracts” namespace. The contract must be specified at the top of our member statement block. To specify a precondition with Code Contracts we use the Contract.Requires method, and to specify a postcondition we use the Contract.Ensures method.
Here is an example where both a pre- and postcondition are used: public object Top() { Contract.Requires(Count > 0, "Stack is empty"); Contract.Ensures(Contract.Result<object>() == this[Count]); return this[Count]; } The contract above requires that Count is greater than 0; if not, we can’t get the item at the top of the Stack. We also ensure that the result of the Top query (by using the Contract.Result method we can specify a postcondition that checks if the value returned from a method is correct) is equal to this[Count]. 1. Separate Commands and Queries When working with DbC, it’s important to separate Commands and Queries. A method should either be a command that performs an action, or a query that returns information to the caller, not both. Asking a question shouldn’t change the answer. The following is an example of a Command and a Query of a Stack: public void Push(object value) public object Top() The Push is a command and will not return anything, it just adds a value to the Stack; the Top is a query to get the item at the top of the stack. 2. Separate basic queries from derived queries There are two different kinds of queries: the basic queries that don’t rely on other queries, and derived queries that use a basic query. The “Separate basic queries from derived queries” principle is about the fact that derived queries can be specified in terms of basic queries. So this principle is mostly about recognizing whether a query is a derived query or a basic query. It will then make it much easier to follow the other principles.
The following code shows a basic query and a derived query: //Basic Queries public uint Count; //Derived Queries //Is related to Count Query public bool IsEmpty() { return Count == 0; } We can see that IsEmpty will use the Count query, and that makes IsEmpty a Derived query. 3. For each derived query, write a postcondition that specifies what result will be returned, in terms of one or more basic queries. When the derived query is recognized we can follow the 3rd principle. For each derived query, we can create a postcondition that specifies what result our derived query will return in terms of one or more basic queries. Remember that DbC is about contracts between the users of the code and us writing the code. So we can’t just demand that the users pass in a valid value; we must also ensure that we give the users what they want when they follow our contract. The IsEmpty query of the Stack will use the Count query and that makes IsEmpty a Derived query, so we should now write a postcondition that specifies what result will be returned, in terms of a basic query, in this case the Count query: //Basic Queries public uint Count; //Derived Queries public bool IsEmpty() { Contract.Ensures(Contract.Result<bool>() == (Count == 0)); return Count == 0; } The Contract.Ensures is used to create a postcondition. The above code will make sure that the result of IsEmpty (by using Contract.Result to get the result of the IsEmpty method) is correct, that is to say that IsEmpty will be either true or false based on whether Count is equal to 0 or not. The postcondition uses a basic query, so IsEmpty now follows the 3rd principle. We also have another Derived Query, the Top query; it will also need a postcondition, and it uses all basic queries. The result of the Top method must be the same value as the this[] query returns. //Basic Queries public uint Count; public object this[uint index] { get { return _array[index]; } set { _array[index] = value; } } //Derived Queries //Is related to Count and this[] Query public object Top() { Contract.Ensures(Contract.Result<object>() == this[Count]); return this[Count]; } 4. For each command, write a postcondition that specifies the value of every basic query.
For each command we will create a postcondition that specifies the value of the basic queries. If we look at the Stack implementation we have three Commands: one Creation command, the Constructor, and two other commands, Push and Remove. Those commands need a postcondition, and they should include basic queries to follow the 4th principle. //Creation commands public Stack(uint size) { Contract.Ensures(Count == 0); Count = 0; _array = new object[size]; } //Other commands public void Push(object value) { Contract.Ensures(Count == Contract.OldValue<uint>(Count) + 1); Contract.Ensures(this[Count] == value); this[++Count] = value; } public void Remove() { Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1); this[Count] = null; Count--; } As you can see, the Creation command ensures that Count will be 0 when the Stack is created; when a Stack is created there shouldn’t be any items in the stack. The Push command will take a value and put it into the Stack; when an item is pushed into the Stack, the Count needs to be increased to reflect the number of items added to the Stack, and we must also make sure the item is really added to the Stack. The postcondition of the Push method will make sure that the old value of Count (by using Contract.OldValue we can get the value a Query had before the method was called) plus 1 will be equal to the Count query; this is the way we can ensure that Push will increase Count by one. We also make sure the this[] query will now contain the item we pushed onto the Stack. The Remove method must make sure Count is decreased by one when the top item is removed from the Stack. The Commands now follow the 4th principle, where each command has a postcondition that uses the value of basic queries. Note: The principle says every basic Query; the Remove only uses one Query, the Count. That is because this command can’t use the this[] query, because an item is removed, so the only way to make sure an item is removed is to just use the Count query, and so Remove still follows the principle. 5. For every query and command, decide on a suitable precondition. We have so far focused only on postconditions; now it’s time for some preconditions. The 5th principle is about deciding on a suitable precondition for every query and command. If we start to look at one of our basic queries (I will not go through all Queries and Commands here, just some of them), the this[] query, we can’t pass an index that is lower than 1 (.Net arrays and lists are zero based, but not the stack in this blog post ;)) and the index can’t be greater than the number of items in the stack. So here we will need a precondition.
public object this[uint index] { get { Contract.Requires(index >= 1); Contract.Requires(index <= Count); return _array[index]; } } Think about the Contract as documentation on how to use the code in a correct way, so if the contract could be specified elsewhere (not as part of the method body), we could simply write "return _array[index]" and there would be no need to check if index is greater or lesser than Count, because that is specified in a "contract". The implementation of Code Contracts requires that the contract is specified in the code. As a developer I would rather have this contract elsewhere (like Spec#), or implemented the way Eiffel does it, as part of the language. Now that we have looked at one Query, we can also look at one command, the Remove command (you can see the whole implementation of the Stack at the end of this blog post, where preconditions are added to more queries and commands than what I’m going to show in this section). We can only Remove an item if the Count is greater than 0, so we can write a precondition that requires that Count is greater than 0. public void Remove() { Contract.Requires(Count > 0); Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1); this[Count] = null; Count--; } 6. Write invariants to define unchanging properties of objects. The last principle is about making sure the object is feeling great! This is done by using invariants. When using Code Contracts we can specify invariants by adding a method with the ContractInvariantMethod attribute; the method must be private or public and can only contain calls to Contract.Invariant. To make sure the Stack feels great, the Stack must have 0 or more items; the Count can never be a negative value, so that each command and query of the Stack can be used.
Here is our invariant for the Stack object:

[ContractInvariantMethod]
private void ObjectInvariant()
{
    Contract.Invariant(Count >= 0);
}

Note: The ObjectInvariant method will be called every time after a query or command is called. Here is the full example using Code Contracts:

public class Stack
{
    private object[] _array;

    //Basic Queries
    public uint Count;

    public object this[uint index]
    {
        get
        {
            Contract.Requires(index >= 1);
            Contract.Requires(index <= Count);

            return _array[index];
        }
        set
        {
            Contract.Requires(index >= 1);
            Contract.Requires(index <= Count);

            _array[index] = value;
        }
    }

    //Derived Queries
    //Is related to the Count query
    public bool IsEmpty()
    {
        Contract.Ensures(Contract.Result<bool>() == (Count == 0));

        return Count == 0;
    }

    //Is related to the Count and this[] queries
    public object Top()
    {
        Contract.Requires(Count > 0, "Stack is empty");
        Contract.Ensures(Contract.Result<object>() == this[Count]);

        return this[Count];
    }

    //Creation commands
    public Stack(uint size)
    {
        Contract.Requires(size > 0);
        Contract.Ensures(Count == 0);

        Count = 0;
        _array = new object[size];
    }

    //Other commands
    public void Push(object value)
    {
        Contract.Requires(value != null);
        Contract.Ensures(Count == Contract.OldValue<uint>(Count) + 1);
        Contract.Ensures(this[Count] == value);

        this[++Count] = value;
    }

    public void Remove()
    {
        Contract.Requires(Count > 0);
        Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1);

        this[Count] = null;
        Count--;
    }

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        Contract.Invariant(Count >= 0);
    }
}

Summary

By using Design By Contract we can make sure that users use our code correctly, and that they get the expected results when they do. This is done by specifying contracts. To make Design By Contract easy to apply, some principles are good to follow, like the separation of commands and queries. With .NET 4.0 we can use the Code Contracts feature to specify these contracts.
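To see the contracts in action, here is a small usage sketch of my own (not from the original post), assuming the project is built with runtime contract checking enabled: the first calls respect the contracts, while the final Remove violates its precondition and is reported by the contract runtime at the call site.

var stack = new Stack(10);
stack.Push("first");        //postcondition: Count == old Count + 1, this[Count] == "first"
object top = stack.Top();   //returns "first"
stack.Remove();             //Count is back to 0

stack.Remove();             //precondition Contract.Requires(Count > 0) does not hold,
                            //so the contract runtime reports a contract failure here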

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.

What is WCF

As Juval Löwy puts it in his Programming WCF Services book, WCF provides an SDK for developing and deploying services on Windows: a runtime environment to expose CLR types as services and to consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing, including service hosting, instance management, reliability, transaction management, security, etc., which greatly increases productivity.

Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount, and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, the data access layer, and unit tests to verify end-to-end communication; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes.

In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide the functionality to execute various pieces of business logic on the server, while clients provide the interaction with the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against the TCP or HTTP protocol without any significant changes to the code.

Solution

Services
·       Profile: for creating a new bank customer and getting details about an existing customer (ProfileContract, ProfileService)
·       Checking Account: to get the checking account balance, deposit or withdraw an amount (CheckingAccountContract, CheckingAccountService)
·       Savings Account: to get the savings account balance, deposit or withdraw an amount (SavingsAccountContract, SavingsAccountService)
ServiceHost: to host the services, i.e. run the services at a particular address, binding and contract where clients can connect
Client: helps the end user use the services, like creating an account and transferring amounts between accounts
BankDAL: data access layer to work with the database

BankDAL

It's a no-brainer to use an ORM here, as many mature products are currently available in the market, including Linq2Sql, Entity Framework (EF), LLblGenPro, etc. For this exercise I am going to use Entity Framework 4.0, CTP 5, with the code first approach. There are two approaches when working with data: data driven and code driven. In the data driven approach we start by designing tables and their constraints in the database and generate entities in code, while in the code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by EF to create the tables and table constraints. In previous versions the entity classes had to derive from EF specific base classes; in EF 4 it is not required to derive from any EF classes, so the entities are not only persistence ignorant but also enable full test driven development using mock frameworks. The application consists of three entities: Customer, which contains the customer details, and CheckingAccount and SavingsAccount, which hold the respective account balances.
We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a 1-to-1 mapping between entities and tables. Let's start out by defining a class called Customer which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all.

using System;

namespace BankDAL.Model
{
    public class Customer
    {
        public int Id { get; set; }
        public string FullName { get; set; }
        public string Address { get; set; }
        public DateTime DateOfBirth { get; set; }
    }
}

In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration.

using System.Data.Entity;

namespace BankDAL.Model
{
    public class BankDbContext: DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }
}

Entity constraints can be defined through attributes on the Customer class or using fluent syntax (no need to wrestle with XML files) in the CustomerConfiguration class. By defining constraints in a separate class we can keep the POCOs clean, without corrupting the entity classes with database specific information.

using System;
using System.Data.Entity.ModelConfiguration;

namespace BankDAL.Model
{
    public class CustomerConfiguration: EntityTypeConfiguration<Customer>
    {
        public CustomerConfiguration()
        {
            Initialize();
        }

        private void Initialize()
        {
            //Setting the Primary Key
            this.HasKey(e => e.Id);

            //Setting required fields
            this.HasRequired(e => e.FullName);
            this.HasRequired(e => e.Address);
            //Todo: Can't create required constraint as DateOfBirth is not reference type, research it
            //this.HasRequired(e => e.DateOfBirth);
        }
    }
}

Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention EF looks for a connection string with the key BankDbContext when working with the context.

We are going to define a helper class to work with the Customer entity, with methods for querying, adding a new entity, etc. These are known as repository classes, i.e. CustomerRepository.

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    public class CustomerRepository
    {
        private readonly IDbSet<Customer> _customers;

        public CustomerRepository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null)
                throw new ArgumentNullException();
            _customers = bankDbContext.Customers;
        }

        public IQueryable<Customer> Query()
        {
            return _customers;
        }

        public void Add(Customer customer)
        {
            _customers.Add(customer);
        }
    }
}

From the above code you can see that the Query method returns customers as IQueryable, i.e. customers are retrieved only when actually used (iterated). Returning IQueryable also allows filtering and joining statements to be executed from the business logic using lambda expressions, without cluttering the data access layer with tens of methods.
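The post does not show the account entities themselves. Judging from how the repositories and services below use them (bankDbContext.CheckingAccounts, act.CustomerId, account.Balance), they presumably look roughly like the following sketch; the property names are inferred rather than the author's actual code, and SavingsAccount would look the same.

using System;

namespace BankDAL.Model
{
    //Inferred shape of the checking account entity; treat this as an assumption,
    //the original post never lists it. BankDbContext would also need a
    //DbSet<CheckingAccount> CheckingAccounts property (and the savings equivalents).
    public class CheckingAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }
}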
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other.

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    public class CheckingAccountRepository
    {
        private readonly IDbSet<CheckingAccount> _checkingAccounts;

        public CheckingAccountRepository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null)
                throw new ArgumentNullException();
            _checkingAccounts = bankDbContext.CheckingAccounts;
        }

        public IQueryable<CheckingAccount> Query()
        {
            return _checkingAccounts;
        }

        public void Add(CheckingAccount account)
        {
            _checkingAccounts.Add(account);
        }

        public IQueryable<CheckingAccount> GetAccount(int customerId)
        {
            return (from act in _checkingAccounts
                    where act.CustomerId == customerId
                    select act);
        }
    }
}

The repository classes look very similar to each other for the Query and Add methods; with the help of C# generics and the repository pattern (Martin Fowler) we can reduce the repeated code (a rough sketch of that idea follows after the Services heading below). Jarod from ElegantCode has posted an article on how to use the repository pattern with EF, which we will implement in the subsequent articles along with WCF Unity lifetime managers by Drew.

Contracts

It is very easy to follow the contract first approach with WCF: define the interface and append the ServiceContract and OperationContract attributes. The IProfile contract exposes functionality for creating a customer and getting customer details.

using System;
using System.ServiceModel;
using BankDAL.Model;

namespace ProfileContract
{
    [ServiceContract]
    public interface IProfile
    {
        [OperationContract]
        Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);

        [OperationContract]
        Customer GetCustomer(int id);
    }
}

The ICheckingAccount contract exposes functionality for working with the checking account, i.e. getting the balance, and depositing and withdrawing an amount. The ISavingsAccount contract looks the same as the checking account contract.

using System.ServiceModel;

namespace CheckingAccountContract
{
    [ServiceContract]
    public interface ICheckingAccount
    {
        [OperationContract]
        decimal? GetCheckingAccountBalance(int customerId);

        [OperationContract]
        void DepositAmount(int customerId, decimal amount);

        [OperationContract]
        void WithdrawAmount(int customerId, decimal amount);
    }
}

Services

Having covered the data access layer and the contracts so far, here comes the core of the business logic, i.e. the services.
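As an aside before the services: the generic repository mentioned above is not implemented in this post (it is deferred to a later article), but a minimal sketch of the idea, with an assumed Repository<T> name and shape, could look like this:

using System;
using System.Data.Entity;
using System.Linq;

namespace BankDAL.Repositories
{
    //Illustrative only: a generic repository that removes the duplicated
    //Query/Add code from the entity specific repositories.
    public class Repository<T> where T : class
    {
        private readonly IDbSet<T> _set;

        public Repository(IDbSet<T> set)
        {
            if (set == null)
                throw new ArgumentNullException("set");
            _set = set;
        }

        public IQueryable<T> Query()
        {
            return _set;
        }

        public void Add(T entity)
        {
            _set.Add(entity);
        }
    }
}

The entity specific repositories could then derive from or wrap this class and only add their own queries, such as GetAccount.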
ProfileService implements the IProfile contract for creating a customer and getting customer details using CustomerRepository.
using System;
using System.Linq;
using System.ServiceModel;
using BankDAL;
using BankDAL.Model;
using BankDAL.Repositories;
using ProfileContract;

namespace ProfileService
{
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class Profile: IProfile
    {
        public Customer CreateAccount(string customerName, string address, DateTime dateOfBirth)
        {
            Customer cust = new Customer
            {
                FullName = customerName,
                Address = address,
                DateOfBirth = dateOfBirth
            };

            using (var bankDbContext = new BankDbContext())
            {
                new CustomerRepository(bankDbContext).Add(cust);
                bankDbContext.SaveChanges();
            }
            return cust;
        }

        public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth)
        {
            return CreateAccount(customerName, address, dateOfBirth);
        }

        public Customer GetCustomer(int id)
        {
            return new CustomerRepository(new BankDbContext()).Query()
                .Where(i => i.Id == id).FirstOrDefault();
        }
    }
}

From the above code you will observe that we call bankDbContext's SaveChanges method and that there is no save method specific to the Customer entity: EF manages all changes centrally at the context level, and all pending changes are submitted in one batch, representing a Unit of Work.

Similarly, the Checking service implements the ICheckingAccount contract using CheckingAccountRepository; notice that we throw an overdraft exception if the balance falls below zero. WCF has its own way of raising exceptions using fault contracts, which will be explained in subsequent articles. SavingsAccountService is similar to CheckingAccountService.

using System;
using System.Linq;
using System.ServiceModel;
using BankDAL.Model;
using BankDAL.Repositories;
using CheckingAccountContract;

namespace CheckingAccountService
{
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class Checking:ICheckingAccount
    {
        public decimal? GetCheckingAccountBalance(int customerId)
        {
            using (var bankDbContext = new BankDbContext())
            {
                CheckingAccount account = (new CheckingAccountRepository(bankDbContext)
                    .GetAccount(customerId)).FirstOrDefault();

                if (account != null)
                    return account.Balance;

                return null;
            }
        }

        public void DepositAmount(int customerId, decimal amount)
        {
            using (var bankDbContext = new BankDbContext())
            {
                var checkingAccountRepository = new CheckingAccountRepository(bankDbContext);
                CheckingAccount account = (checkingAccountRepository.GetAccount(customerId))
                    .FirstOrDefault();

                if (account == null)
                {
                    account = new CheckingAccount()
                    {
                        CustomerId = customerId
                    };
                    checkingAccountRepository.Add(account);
                }

                account.Balance = account.Balance + amount;
                if (account.Balance < 0)
                    throw new ApplicationException("Overdraft not accepted");

                bankDbContext.SaveChanges();
            }
        }

        public void WithdrawAmount(int customerId, decimal amount)
        {
            DepositAmount(customerId, -1*amount);
        }
    }
}

BankServiceHost

The host acts as the glue binding contracts with their services and exposing the endpoints. The services can be exposed either through code or through the configuration file; the configuration file is preferred as it allows runtime changes to service behavior even after deployment. We have 3 services, and for each service you need to define a name (the class that implements the service, with its fully qualified namespace) and an endpoint, known as ABC, i.e. address, binding and contract.
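For comparison, the same endpoints could also be added in code rather than configuration. The following is a minimal sketch of my own (not part of the original sample) showing the Profile service hosted with an endpoint added programmatically; the contract and address match the ones used in the configuration below.

using System;
using System.ServiceModel;

namespace ProgrammaticHosting
{
    class Program
    {
        static void Main()
        {
            //Host the Profile service and add its endpoint in code instead of config.
            var host = new ServiceHost(typeof(ProfileService.Profile));
            host.AddServiceEndpoint(
                typeof(ProfileContract.IProfile),       //contract
                new NetTcpBinding(),                    //binding
                "net.tcp://localhost:1000/Profile");    //address
            host.Open();

            Console.WriteLine("Profile service listening; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }
}

The post sticks with the configuration approach, which is usually preferable because endpoints can be changed after deployment without recompiling.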
We are using netTcpBinding and have defined a base address for each of the contracts.

<system.serviceModel>
  <services>
    <service name="ProfileService.Profile">
      <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Profile"/>
        </baseAddresses>
      </host>
    </service>
    <service name="CheckingAccountService.Checking">
      <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Checking"/>
        </baseAddresses>
      </host>
    </service>
    <service name="SavingsAccountService.Savings">
      <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Savings"/>
        </baseAddresses>
      </host>
    </service>
  </services>
</system.serviceModel>

We then have to open the services by creating service hosts, which will handle the incoming requests from clients.

using System;

namespace ServiceHost
{
    class Program
    {
        static void Main(string[] args)
        {
            CreateHosts();
            Console.ReadLine();
        }

        private static void CreateHosts()
        {
            CreateHost(typeof(ProfileService.Profile), "Profile Service");
            CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service");
            CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service");
        }

        private static void CreateHost(Type type, string hostDescription)
        {
            System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type);
            host.Open();

            if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0
                && host.ChannelDispatchers[0].Listener != null)
                Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri);
            else
                Console.WriteLine("Failed to start:" + hostDescription);
        }
    }
}

BankClient

The client has no knowledge of the service business logic other than the functionality the service exposes through its contract, the endpoints, and a proxy to work against. The endpoint data and service proxy can be generated by right clicking on the project references, choosing 'Add Service Reference', and entering the service endpoint address. Alternatively, if you have access to the source and both server and client are built on the .NET framework, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoint. One of the pros of the manual approach is that you don't have to work with messy generated code files.
<system.serviceModel>
  <client>
    <endpoint name="tcpProfile"
              address="net.tcp://localhost:1000/Profile"
              binding="netTcpBinding"
              contract="ProfileContract.IProfile"/>
    <endpoint name="tcpCheckingAccount"
              address="net.tcp://localhost:1000/Checking"
              binding="netTcpBinding"
              contract="CheckingAccountContract.ICheckingAccount"/>
    <endpoint name="tcpSavingsAccount"
              address="net.tcp://localhost:1000/Savings"
              binding="netTcpBinding"
              contract="SavingsAccountContract.ISavingsAccount"/>
  </client>
</system.serviceModel>

The client uses a facade to connect to the services.

using System.ServiceModel;
using CheckingAccountContract;
using ProfileContract;
using SavingsAccountContract;

namespace Client
{
    public class ProxyFacade
    {
        public static IProfile ProfileProxy()
        {
            return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel();
        }

        public static ICheckingAccount CheckingAccountProxy()
        {
            return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount"))
                .CreateChannel();
        }

        public static ISavingsAccount SavingsAccountProxy()
        {
            return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount"))
                .CreateChannel();
        }
    }
}

With that in place, let's get our unit tests going.

using System;
using System.Diagnostics;
using BankDAL.Model;
using NUnit.Framework;
using ProfileContract;

namespace Client
{
    [TestFixture]
    public class Tests
    {
        private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
        {
            ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
        }

        private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount)
        {
            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount);
        }

        [Test]
        public void CreateAndGetProfileTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            const string customerName = "Tom";
            int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id;
            Customer customer = profile.GetCustomer(customerId);
            Assert.AreEqual(customerName, customer.FullName);
        }

        [Test]
        public void DepositWithDrawAndTransferAmountTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss");
            var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1));

            // Deposit to Savings
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100);
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25);
            Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));

            // Deposit to Checking
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60);
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40);
            Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Savings to Checking
            TransferFundsFromSavingsToCheckingAccount(customer.Id, 10);
            Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Checking to Savings
            TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
            Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
        }

        [Test]
        public void FundTransfersWithOverDraftTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

            var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
            TransferFundsFromSavingsToCheckingAccount(customerId, 80);
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

            try
            {
                TransferFundsFromSavingsToCheckingAccount(customerId, 30);
            }
            catch (Exception e)
            {
                Debug.WriteLine(e.Message);
            }

            Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
        }
    }
}

We are creating a new instance of the channel for every operation; we will look into instance management, and how creating a new channel instance affects it, in subsequent articles. The first two test cases deal with the creation of a customer and the deposit, withdrawal, and transfer of money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from savings to checking, which results in an overdraft exception on the savings account after the $30 has already been deposited to checking. As we are not running both requests inside a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
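One detail worth noting: the sample never closes the channels it creates, and each ProxyFacade call builds a new ChannelFactory and channel. In a longer-running client you would typically close (or abort) both. Here is a minimal sketch of my own, not part of the original code, showing how a single call could be wrapped:

using System;
using System.ServiceModel;
using BankDAL.Model;
using ProfileContract;

namespace Client
{
    public static class ChannelUsageExample
    {
        public static Customer GetCustomerAndCloseChannel(int customerId)
        {
            var factory = new ChannelFactory<IProfile>("tcpProfile");
            IProfile proxy = factory.CreateChannel();
            try
            {
                Customer customer = proxy.GetCustomer(customerId);
                //Channels created by ChannelFactory<T> also implement IClientChannel,
                //so they can be closed explicitly once the call has finished.
                ((IClientChannel)proxy).Close();
                factory.Close();
                return customer;
            }
            catch
            {
                //On failure the channel may be faulted and must be aborted instead.
                ((IClientChannel)proxy).Abort();
                factory.Abort();
                throw;
            }
        }
    }
}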

    Read the article

  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
    @font-face { font-family: "Arial"; }@font-face { font-family: "Courier New"; }@font-face { font-family: "Wingdings"; }@font-face { font-family: "Cambria"; }p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0cm 0cm 0.0001pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpFirst, li.MsoListParagraphCxSpFirst, div.MsoListParagraphCxSpFirst { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpMiddle, li.MsoListParagraphCxSpMiddle, div.MsoListParagraphCxSpMiddle { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }p.MsoListParagraphCxSpLast, li.MsoListParagraphCxSpLast, div.MsoListParagraphCxSpLast { margin: 0cm 0cm 0.0001pt 36pt; font-size: 12pt; font-family: "Times New Roman"; }div.Section1 { page: Section1; }ol { margin-bottom: 0cm; }ul { margin-bottom: 0cm; } Abstract   Reference architecture provides needed architectural information that can be provided in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems, based on Reference Architecture goals, principles, and standards. It helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems, to handle complex promotions, discounts, and settlements with multiple parties. This paper attempts to describe the Reference Architecture for the Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes generic reference architecture for telecom enterprises and moves on to explain how to achieve Enterprise Reference Architecture by using SOA.   Introduction   A Reference Architecture provides a methodology, set of practices, template, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns that describe observations in a number of successful implementations. It helps as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparisons for various departments/business entities of a company, or for the various companies for an enterprise. It provides multiple views for multiple stakeholders.   Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.   Purpose of Reference Architecture   In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel as their peers in other organizations or even the same organization have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.   
Reference architecture provides missing architectural information that can be supplied in advance to project team members to enable consistent architectural best practices.

Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:

·       Reference architecture is more of a communication channel to an enterprise
·       Helps the business owners to actualize their strategies, vision, objectives, and principles
·       Evaluates the IT systems based on Reference Architecture principles
·       Reduces IT spending through increased functionality, availability, scalability, etc.
·       A real-time integration model helps to reduce the latency of data updates
·       Is used to define a single source of information
·       Provides a clear view on how to manage information and security
·       Defines the policy around data ownership, product boundaries, etc.
·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets
·       Leads to shorter implementation time and cost

Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensure that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).

Common pitfalls for Telecom Service Providers

Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations; some of these indicate a lack of maturity of the telecom service provider:

·       In markets that are growing and not so mature, it has been observed that telcos have a significant amount of in-house or home-grown applications. In some of these markets, the growth has been so rapid that IT has been unable to cope with business demands. Telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs.
·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications.
·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality.
·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications.
·       Application boundaries are not clear, and functionality that is not in the initial scope of that application gets pushed onto it. This results in the development of multiple, small applications without proper boundaries.
·       Usage of legacy OSS/BSS systems and poor integration across multiple COTS products and internal systems. Most of the integrations are developed on an ad-hoc basis as point-to-point integration.
·       Redundancy of business functions in different applications
·       Fragmented data across the different applications and no integrated view of the strategic data
·       Many performance issues due to the complex integration across OSS and BSS systems

However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum.
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.

Industry Trends in Telecom Reference Architecture

Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.

"The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail." – Martin Creaner, President & CTO, TM Forum.

The following trends have been observed in telecom reference architecture:

·       Transformation of business structures to align with customer requirements
·       Adoption of more Internet-like technical architectures; the Web 2.0 concept is increasingly being used
·       Virtualization of the traditional operations support system (OSS)
·       Adoption of SOA to support development of IP-based services
·       Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks
·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products
·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products

Drivers of Reference Architecture

The drivers of the Reference Architecture are the Reference Architecture Goals, Principles, Enterprise Vision, and Telecom Transformation. The details are depicted in the diagram below.

Figure 1. Drivers for Reference Architecture

Today's telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems).
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems, and has implications for the service provider's organizational requirements and structure.

Telecom reference architectures are today expected to:

·       Integrate voice, messaging, email and other VAS over fixed and mobile networks, and back end systems
·       Be able to provision multiple services and service bundles
·       Deliver converged voice, video and data services
·       Leverage the existing network infrastructure
·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties
·       Support charging of advanced data services such as VoIP, On-Demand Services (e.g. video), IMS/SIP services, Mobile Money, content services and IPTV
·       Help in faster deployment of new services
·       Serve as an effective platform for collaboration between network, IT and business organizations
·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP)
·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management
·       Lower operating costs to drive profitability

Enterprise Reference Architecture

The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for Telecom Application Usage and how to achieve an Enterprise Level Reference Architecture using SOA.

·       Telecom Reference Architecture
·       Enterprise SOA based Reference Architecture

Telecom Reference Architecture

TeleManagement Forum's New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:

·       The enhanced Telecom Operations Map (eTOM) is a business process framework.
·       The Shared Information/Data (SID) model provides a comprehensive information framework that may be specialized for the needs of a particular organization.
·       The Telecom Application Map (TAM) is an application framework to depict the functional footprint of applications, relative to the horizontal processes within eTOM.
·       The Technology Neutral Architecture (TNA) is an integrated framework; TNA is an architecture that is sustainable through technology changes.

NGOSS Architecture Standards are:

·       Centralized data
·       Loosely coupled distributed systems
·       Application components/re-use
·       A technology-neutral system framework with technology specific implementations
·       Interoperability to service provider data/processes
·       Allows more re-use of business components across multiple business scenarios
·       Workflow automation

The traditional operator systems architecture consists of four layers:

·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information.
·       Operations Support System (OSS) layer, built around product, service, and resource inventories.
·       Networks layer, consisting of network elements and 3rd party systems.
·       Integration layer, to maximize application communication and overall solution flexibility.

The reference architecture for telecom enterprises is depicted below.

Figure 2. Telecom Reference Architecture

The major building blocks of any Telecom Service Provider architecture are as follows:

1. Customer Relationship Management

CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering, and service activation, customer care and support, proactive campaigns, cross sell/up sell, and retention/loyalty.

CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.

The key functionalities related to Customer Relationship Management are:

·       Manage the end-to-end lifecycle of a customer request for products.
·       Create and manage customer profiles.
·       Manage all interactions with customers – inquiries, requests, and responses.
·       Provide updates to Billing and other southbound systems on customer/account related changes such as customer/account creation, deletion, modification, request bills, final bill, duplicate bills, and credit limits, through Middleware.
·       Work with the Order Management System, Product, and Service Management components within CRM.
·       Manage customer preferences – involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as via any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.).
·       Support a single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.

CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. The requests by customers are sent via fulfillment/provisioning to the billing system for order processing.
2. Billing and Revenue Management

Billing and Revenue Management handles the collection of appropriate usage records and the production of timely and accurate bills – for providing pre-bill usage information and billing to customers, for processing their payments, and for performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.

The key functionalities provided by these applications are:

·       To ensure that enterprise revenue is billed and invoices delivered appropriately to customers.
·       To manage customers' billing accounts, process their payments, perform payment collections, and monitor the status of the account balance.
·       To ensure the timely and effective fulfillment of all customer bill inquiries and complaints.
·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing.
·       Support revenue sharing; split charging where usage is guided to an account different from the service consumer.
·       Support prepaid and post-paid rating.
·       Send notification on approaching / exceeding the usage thresholds as enforced by the subscribed offer, and/or as set up by the customer.
·       Support prepaid, post-paid, and hybrid (where some services are prepaid and the rest of the services post-paid) customers and conversion from post-paid to prepaid, and vice versa.
·       Support different billing function requirements like charge prorating, promotion, discount, adjustment, waiver, write-off, accounts receivable, GL interface, late payment fee, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalty, etc.
·       Initiate direct debit to collect payment against an outstanding invoice.
·       Send notification to Middleware on different events; for example, payment receipt, pre-suspension, threshold exceeded, etc.

Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can directly get usage details from network elements.

3. Mediation

Mediation systems transform/translate the raw or native Usage Data Records into a general format that is acceptable to billing for rating purposes.

The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution:

·       Collect Usage Data Records from different data sources – like network elements, routers, servers – via different protocols and interfaces.
·       Process Usage Data Records – Mediation will process Usage Data Records as per the source format.
·       Validate Usage Data Records from each source.
·       Segregate Usage Data Records coming from each source into multiple streams, based on the segregation requirements of the end applications.
·       Aggregate Usage Data Records from different sources based on the aggregation rule, if any.
·       Consolidate multiple Usage Data Records from each source.
·       Deliver formatted Usage Data Records to different end applications like Billing, Interconnect, Fraud Management, etc.
·       Generate an audit trail for incoming Usage Data Records and keep track of all the Usage Data Records at various stages of the mediation process.
·       Check duplicate Usage Data Records across files for a given time window.

4. Fulfillment

This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order, and ensures completion on time, as well as ensuring a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer updates on customer order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.

The key functionalities provided by these applications are:

·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders;
·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable;
·       Checking the credit worthiness of customers as part of the customer order process;
·       Testing the completed offering to ensure it is working correctly;
·       Updating the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled;
·       Assigning and tracking customer provisioning activities;
·       Managing customer provisioning jeopardy conditions; and
·       Reporting progress on customer orders and other processes to the customer.

These applications typically get orders from CRM systems. They interact with network elements and billing systems for the fulfillment of orders.

5. Enterprise Management

This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that:

·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;
·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;
·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.

(i) Enterprise Risk Management

Enterprise Risk Management focuses on assuring that risks and threats to the enterprise value and/or reputation are identified, and that appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts.
Two key areas covered in Risk Management by telecom operators are:

·       Revenue Assurance: The revenue assurance system will be responsible for identifying revenue loss scenarios across components/systems, and will help in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution.
o   Identify all usage information dropped when networks are being upgraded.
o   Interconnect bill verification.
o   Identify where services are routinely provisioned but never billed.
o   Identify poor sales policies that are intensifying collections problems.
o   Find leakage where usage is sent to an error bucket and never billed for.
o   Find leakage where field service, CRM, and network build-out are not optimized.

·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns, in order to report suspicious activity that might suggest fraudulent usage of resources resulting in revenue losses to the operator.

The key roles and responsibilities of the system component are as follows:

o   The fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically.
o   Fraud management will be able to detect unauthorized access to services for certain subscribers. These subscribers may have been provided unauthorized services by employees. The component will raise an alert to the operator the very first time such illegal calls, or calls which are not billed, occur.
o   The solution will be to have an alarm management system that delivers alarms to the operator/provider whenever it detects fraud, thus minimizing fraud by catching it the first time it occurs.
o   The Fraud Management system will be capable of interfacing with switches, mediation systems, and billing systems.

(ii) Knowledge Management

This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.

Key responsibilities of knowledge base management are to:

·       Maintain the knowledge base – creation and updating of the knowledge base on an ongoing basis.
·       Search the knowledge base – search on keywords or browse by category.
·       Maintain metadata – management of metadata on the knowledge base to ensure effective management and search.
·       Run the report generator.
·       Provide content – add content to the knowledge base, e.g., user guides, operational manuals, etc.

(iii) Document Management

It focuses on maintaining a repository of all electronic documents, or images of paper documents, relevant to the enterprise, using a system.

(iv) Data Management

It manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.

Key responsibilities of Data Management are:

·       Using ETL, extract the data from CRM, Billing, web content, ERP, campaign management, financial, network operations, asset management info, customer contact data, customer measures, benchmarks, and process data (e.g., process inputs, outputs, and measures) into the Enterprise Data Warehouse.
·       Management of data traceability with source, data related business rules/decisions, data quality, data cleansing, data reconciliation, and competitors' data – storage for all the enterprise data (customer profiles, products, offers, revenues, etc.)
·       Get online updates through night time replication or a physical backup process at regular frequency.
·       Provide data access to business intelligence and other systems for their analysis, report generation, and use.

(v) Business Intelligence

It uses the enterprise data to provide the various analyses and reports that contain prospects and analytics for customer retention, acquisition of new customers due to the offers, and SLAs. It will generate the right, optimized plans and bolt-ons for the customers.

The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the enterprise level:

·       It will do pattern analysis and report problems.
·       It will do data analysis – statistical analysis, data profiling, affinity analysis of data, customer segment wise usage patterns on offers, products, and services, and revenue generation against services and customer segments.
·       It will do performance (business, system, and forecast) analysis, churn propensity, response time, and SLA analysis.
·       It will support online and offline analysis, and report drill down capability.
·       It will collect, store, and report various SLA data.
·       It will provide the necessary intelligence for marketing and working on campaigns, etc., with cost benefit analysis and predictions.
·       It will advise on customer promotions with additional services based on the loyalty and credit history of the customer.
·       It will interface with the Enterprise Data Management system for data to run reports and analysis tasks. It will interface with the campaign schedules, based on historical success evidence.

(vi) Stakeholder and External Relations Management

It manages the enterprise's relationship with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, the local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.

(vii) Enterprise Resource Planning

It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform and enterprise wide system environment.

The key roles and responsibilities for the Enterprise System are given below:

·       It will handle responsibilities such as core accounting, financial, and management reporting.
·       It will interface with CRM for capturing customer accounts and details.
·       It will interface with billing to capture the billing revenue and other financial data.
·       It will be responsible for executing the dunning process. Billing will send the required feed to ERP for execution of dunning.
·       It will interface with the CRM and Billing through batch interfaces.

Enterprise management systems are like horizontals in the enterprise and typically interact with all major telecom systems.
For example, an ERP system interacts with the CRM, Fulfillment, and Billing systems for different kinds of data exchange.

6. External Interfaces/Touch Points

The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions between a Service Provider and other parties can be achieved by a variety of mechanisms, including:

·       Exchange of emails or faxes
·       Call Centers
·       Web Portals
·       Business-to-Business (B2B) automated transactions

These applications provide an Internet-technology-driven interface that allows external parties to undertake a variety of business functions directly for themselves. They can provide fully or partially automated service to external parties through various touch points.

Typical characteristics of these touch points are:

·       A pre-integrated self-service system, including a stand-alone web framework or an integration front end with a portal engine
·       A self-service layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment
·       Portlet-driven connectivity exposing data and services interoperability through a portal engine or web application

These touch points mostly interact with the CRM systems for requests, inquiries, and responses.

7. Middleware

This component is primarily responsible for integrating the different system components under a common platform. It should provide a standards-based platform for building Service Oriented Architecture and composite applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution.

·       Act as an integration framework, covering interfaces in both directions.
·       Provide a web service framework with a service registry.
·       Support an SOA framework with an SOA service registry.
·       Each interface from/to the Middleware and other components will handle data transformation, translation, and mapping of data points.
·       Receive data from the caller, and activate and/or forward the data to the recipient system in XML format.
·       Use standard XML for data exchange.
·       Provide the response back to the service/call initiator.
·       Provide tracking until the response is complete.
·       Keep a store of transitional data against each call/transaction.
·       Provide the interface through which enterprise systems obtain any information that is available and allowed from the existing systems, e.g., customer profile and customer history, etc.
·       Provide the data in a common unified format to the SOA calls across systems, following the Enterprise Architecture directive.
·       Provide an audit trail for all transactions handled by the component.

8. Network Elements

The term Network Element means a facility or equipment used in the provision of a telecommunications service. The term also includes the features, functions, and capabilities that are provided by means of such a facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.

Typical network elements in a GSM network are the Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value added services like Push-to-talk (PTT), Ring Back Tone (RBT), etc.
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems like mediation and billing for rating and billing. They also integrate with provisioning systems for order/service fulfillment.

9. 3rd Party Applications

3rd party systems are applications like content providers, payment gateways, point-of-sale terminals, and databases/applications maintained by the Government.

Depending on applicability and the type of functionality a 3rd party application provides, it is integrated with different telecom systems such as CRM, provisioning, and billing.

10. Service Delivery Platform

A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value added telecom services. SDPs are based on the concepts of SOA and layered architecture. They support the delivery of voice, data services, and content in a network- and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.

SOA Reference Architecture

The SOA concept is based on the principle of developing reusable business services and building applications by composing those services, instead of building monolithic applications in silos. It is about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.

In an SOA, resources are made available to participants in a value net, enterprise, or line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization's business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:

·       The business to specify processes as orchestrations of reusable services
·       Technology-agnostic business design, with technology hidden behind the service interface
·       A contract-like interaction between business and IT, based on service SLAs
·       Accountability and governance, better aligned to business services
·       Untangling of application interconnections by allowing access only through service interfaces, reducing the daunting side effects of change
·       Reduced pressure to replace legacy applications and an extended lifetime for them, through encapsulation in services
·       A cloud computing paradigm, using web services technologies, that makes service outsourcing possible on an on-demand, utility-like, pay-per-usage basis

The following section presents the Reference Architecture as a logical view of the Telecom Solution. Any new custom-built application needs to align with this logical architecture in the long run to achieve the EA benefits.

Packaged implementation applications, such as ERP and billing applications, need to expose their functions as service providers (which other applications consume) and to interact with other applications as service consumers.

COTS applications need to expose services through wrappers, such as adapters, to utilize existing resources and at the same time achieve the Enterprise Architecture goals and objectives.
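To make the wrapper/adapter idea concrete, the following is a minimal, illustrative Java sketch of how a COTS billing capability might be exposed as a reusable service. This is a sketch under assumptions, not a prescribed implementation: the names BalanceService, LegacyBillingClient, LegacyBillingAdapter, and the balance-lookup method are all introduced here purely for the example.

    // A consumer-facing service contract. Callers depend only on this interface,
    // never on the packaged/COTS billing application behind it.
    interface BalanceService {
        double getOutstandingBalance(String accountId);
    }

    // Stand-in for the existing COTS/legacy billing API (hypothetical names).
    class LegacyBillingClient {
        double fetchBalanceByAccount(String accountId) {
            // A real implementation would call the packaged application here.
            return 125.40;
        }
    }

    // The wrapper (adapter): it exposes the legacy capability as the service above.
    // Middleware or an ESB would typically publish this as a web service endpoint.
    class LegacyBillingAdapter implements BalanceService {
        private final LegacyBillingClient client = new LegacyBillingClient();

        @Override
        public double getOutstandingBalance(String accountId) {
            // Translate between the service contract and the legacy call,
            // mapping and transforming data points as needed.
            return client.fetchBalanceByAccount(accountId);
        }
    }

    class WrapperDemo {
        public static void main(String[] args) {
            BalanceService service = new LegacyBillingAdapter();
            System.out.println("Balance: " + service.getOutstandingBalance("ACC-1001"));
        }
    }

Because consumers program against the BalanceService contract rather than against the packaged application itself, the underlying billing system can be upgraded or replaced without affecting them, which is the loose coupling the layers described below are intended to preserve.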
The following are the various layers for enterprise-level deployment of SOA. This view captures an abstract picture of the Enterprise SOA layers and the important components of each layer. A layered architecture means decomposing services such that most interactions occur between adjacent layers; however, there is no strict rule that top layers should not communicate directly with bottom layers.

The diagram below represents the important logical pieces that would result from the overall SOA transformation.

Figure 3. Enterprise SOA Reference Architecture

1. Operational System Layer: This layer consists of all packaged applications like CRM and ERP, custom-built applications, COTS-based applications like Billing, Revenue Management, and Fulfilment, and the enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS Transformation.

ERP holds the data for Asset Lifecycle Management, Supply Chain, Advanced Procurement, Human Capital Management, etc.

CRM holds the data related to Order, Sales and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc.

Content Management handles Enterprise Search and Query. The Billing application consists of the following components:

·       Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges

Enterprise databases will hold both the application and service data, whether structured or unstructured.

MDM – Master data mainly consists of Customer, Order, Product, and Service data.

2. Enterprise Component Layer: This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies such as application servers to implement the components, workload management, high availability, and load balancing.

Application Services: This service layer enables application, technology, and database abstraction so that the complex access logic is hidden from the other service layers. This is a basic service layer, which exposes application functionality and data as reusable services.
The three types of Application Access Services are:

·       Application Access Service: This service layer exposes application-level functionality as reusable services for BSS-to-BSS and BSS-to-OSS integration. It is enabled using disparate technologies such as web services, integration servers, adaptors, etc.

·       Data Access Service: This service layer exposes application data as reusable reference data services. This is done via direct interaction with the application data, and it provides federated query capability.

·       Network Access Service: This service layer exposes the provisioning layer as a reusable service for OSS-to-OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.

Common Services encompass the management of structured, semi-structured, and unstructured data and include information services, portal services, interaction services, infrastructure services, security services, etc.

3. Integration Layer: This consists of service infrastructure components like the service bus, a service gateway for partner integration, the service registry, the service repository, and the BPEL processor. The service bus carries the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary-based routing, distributed caching of routing information, transformations, and all messaging qualities of service, such as reliability, scalability, and availability. The service registry holds all service contracts (WSDL) and helps developers locate or discover services at design time or runtime.

The BPEL processor is useful for orchestrating services to compose a complex business scenario or process. Workflow and business rules management are also required, to support manual triggering of certain activities within a business process based on the rules set up and on state machine information. The application, data, and service mediation layers typically form the overall composite application development framework, or SOA framework.

4. Business Process Layer: This is typically the intermediate services layer and represents Shared Business Process Services. At the enterprise level, these services come from the Customer Management, Order Management, Billing, Finance, and Asset Management application domains.

5. Access Layer: This layer consists of the enterprise portals and provides a single view of enterprise information management and dashboard services.

6. Channel Layer: This consists of the various devices, the applications that form part of the extended enterprise, and the browsers through which users access the applications.

7. Client Layer: This designates the different types of users accessing the enterprise applications. The type of user is typically an important factor in determining the level of access to applications.

8. Vertical pieces like management, monitoring, security, and development cut across all horizontal layers. Management and monitoring cover all aspects of SOA, such as services, SLAs, and other QoS lifecycle processes for both applications and services, surrounding SOA governance.

9. EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices: EA Governance is important in terms of providing the overall direction to the SOA implementation within the enterprise.
This requires board-level involvement, in addition to business and IT executives. At a high level, it involves managing the implementation of SOA projects, managing the SOA infrastructure, and controlling the entire effort through fine-tuned IT processes in accordance with COBIT (Control Objectives for Information and Related Technologies).

Devising tools and techniques to promote a reuse culture and the SOA way of doing things requires competency centers to be established, in addition to training the workforce to take up the new roles that the SOA journey demands.

Conclusions

Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference Architectures provide best practices and approaches in a way that is independent of how any particular vendor deals with technology and standards. They model the abstract architectural elements of an enterprise independently of the technologies, protocols, and products that are used to implement an SOA. Telecom enterprises today are facing significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices can go a long way toward meeting these challenges. The use of an SOA-based architecture for communication with each of the external systems, like Billing, CRM, etc., in the OSS/BSS system makes the architecture very loosely coupled and more flexible. Any change in the external systems is absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easy, with better performance in terms of lead time, quality, and cost. And because the architecture is based on standards, it lowers the cost of deploying and managing OSS/BSS applications over their lifecycles.


  • Agile Development

    - by James Oloo Onyango
A lot of literature has been and is being written about agile development and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, I have found Robert C. Martin's "A Satire of Two Companies" to be both the most concise and the most thorough! Enjoy the read!

Rufus Inc: Project Kick Off

Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting.

"We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market. "We must have this new project up and working by fourth quarter, October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.

His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?" You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts.

"Sir, we can't tell you how long the analysis will take until we have some requirements." "The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?" Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?" "Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mms are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere.

"PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?" Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months."

"I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle. BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working.
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your crossfunctional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."   Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"   You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.   Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.   After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describes a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.   At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159- page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.   You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return? 
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.   Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.   While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked.   The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.   On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3. **** And then, a miracle happens.   **** On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."   As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer. They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)   As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawedno, useless; no, worse than useless. 
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it."

So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.

The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrink-wrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week."

Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagrams are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association.

Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."

You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it." Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents.
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more, preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"

You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements. Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek.

Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing, the marketing folks are complaining that the product isn't anything like they specified, and the liquor store won't accept your credit card anymore. Something has to give.

On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1." As he leaves the conference room, he is heard to mutter: "Empowerment, bah!"

* * *

Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done?" he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.   In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.     Rupert Industries: Project Alpha   Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."   "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."   Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.   * * *   "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second" you respond. "I need to be clear about this." 
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.   During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.   At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."   A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.   The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.   As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.   At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.   * * *   The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfectweek for every 3 real-weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."   Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. 
I know the drill by now."Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."   You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."   "So," says Jane, can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."   So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.   Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation." Says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."   "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."   One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is best to let the developers volunteer for tasks than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.   The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks.   Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.   "I was worried that that might happen," you say, "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.   So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper." says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.   The negotiation is not painless. 
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."   In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels that can be achieved in the next 3 weeks.   And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things had quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements,   Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour.   "I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."   * * *   The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!"   "Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replied Elaine, "It doesn't do anything at all; there is no code."   "So, consider our task; can you think of something the code should do?" "Sure," Elaine said with youthful assurance, "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughed Elaine. "I think we'd have to get the database object from some registry and call the Connect() method. "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hay!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.   As development proceeded, the developers changed partners once or twice a day. 
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.   Whenever a pair finished something significant whether a whole task or simply an important part of a task they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.   The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.   Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"   You grimace. You'd been waiting for a good time to mention this to Jane but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."   Jane eyes you with suspicion. The stress of last Monday's negotiations had still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."   You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."   "Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.   Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."   Painful though the last iteration was, it had calibrated your velocity numbers. 
The next iteration went much better. Not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.   By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.   The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.   The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.   Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.   Indeed, things are looking up!
