Search Results

Search found 1125 results on 45 pages for 'bhoomi nature'.


  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    As with many of the existing HTTP APIs for cloud services, AWS provides a set of platform SDKs that hide many of the complexities present in the raw APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for doing I/O operations such as making HTTP calls or using cryptography for signing or encrypting data, two aspects that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous and use the TPL model for .NET applications (HTML and JavaScript Metro applications use a model based on promises, which is a similar concept). In the case of S3, the HTTP Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered while in transit. For doing that, it uses a signature or hash of the message content and some of the headers, generated with a symmetric key (that's just one of the available mechanisms). Windows Azure, for example, also uses the same mechanism in many of its APIs. There are three challenges that any developer working in Metro for the first time will have to face to consume S3: the new WinRT APIs, their asynchronous nature, and the complexity involved in generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I ran into while trying to consume this Amazon service.

    1. Generating the signature for the Authorization header

    All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations available in these APIs use the concept of buffers (IBuffer) for representing a chunk of binary data. As you will see in the example below, these buffers are mainly generated with the use of static methods in a WinRT class, CryptographicBuffer, available as part of the namespace previously mentioned.

        private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
        {
            // Canonical "string to sign" for a CreateBucket (PUT) request: the HTTP verb,
            // three empty header lines, the x-amz-date header and the canonicalized resource.
            var stringToSign = string.Format(
                "{0}\n" +
                "\n" +
                "\n" +
                "\n" +
                "x-amz-date:{1}\n" +
                "/{2}/",
                httpMethod, timestamp, resource);

            // Sign the string with HMAC-SHA1 using the AWS secret key and Base64-encode the result.
            var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
            var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
            var hmacKey = algorithm.CreateKey(keyMaterial);
            var signature = CryptographicEngine.Sign(
                hmacKey,
                CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign)));

            return CryptographicBuffer.EncodeToBase64String(signature);
        }

    The algorithm that determines the information or content you need to use for generating the signature is very well described in the AWS documentation. In this case, this method is generating the signature required for creating a new bucket. An HMAC-SHA1 hash is computed using a secret (symmetric) key provided by AWS in the management console.
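    For concreteness, here is a hypothetical example (the bucket name and timestamp are made up, not taken from the original post) of the string to sign that the format above produces for a PUT request creating a bucket named "mybucket", with an x-amz-date of "Sun, 01 Jul 2012 12:00:00 GMT". The three blank lines here correspond to the empty Content-MD5, Content-Type and Date values:

        PUT



        x-amz-date:Sun, 01 Jul 2012 12:00:00 GMT
        /mybucket/

    This exact string is what gets HMAC-SHA1 signed and Base64-encoded, and the result is then placed after the access key in the "AWS key:signature" Authorization header used in the next section.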
    2. Sending an HTTP request to the S3 service

    WinRT also ships with the System.Net.Http.HttpClient that was first introduced a few months ago with ASP.NET Web API. This client provides a richer interface on top of the traditional HttpWebRequest class and also solves some of the limitations found in that class. There are a few things that don't work with a raw HttpWebRequest, such as setting the Host header, which is absolutely required for consuming S3. HttpClient is also friendlier for unit testing, as it receives an HttpMessageHandler in its constructor that can be faked to emulate a real HTTP call. This is what the code for consuming the service with HttpClient looks like:

        public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
        {
            var timestamp = string.Format("{0:r}", DateTime.UtcNow);
            var auth = DeriveAuthToken(name, "PUT", timestamp);

            var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
            request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
            request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
            request.Headers.Add("x-amz-date", timestamp);

            var client = new HttpClient();
            var response = await client.SendAsync(request);

            return new S3Response
            {
                Succeed = response.StatusCode == HttpStatusCode.OK,
                Message = (response.Content != null) ?
                    await response.Content.ReadAsStringAsync() :
                    null
            };
        }

    You will notice a few additional things in this code. By default, HttpClient validates the values of some well-known headers, and Authorization is one of them. It won't allow you to set a value with ":" in it, which is something S3 expects. However, that's not a problem at all, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords so that the asynchronous calls read like synchronous code. If you want to unit test this code and fake the call to the real S3 service, you will have to modify it to inject a custom HttpMessageHandler into the HttpClient.
    The following implementation illustrates this concept:

        public class FakeHttpMessageHandler : HttpMessageHandler
        {
            HttpResponseMessage response;

            public FakeHttpMessageHandler(HttpResponseMessage response)
            {
                this.response = response;
            }

            protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                System.Threading.CancellationToken cancellationToken)
            {
                // Complete immediately with the canned response instead of hitting the network.
                var tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }
        }

    You can use this handler for injecting any canned response while you are unit testing the code.
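    Below is a minimal sketch (my own illustration, not code from the original post) of how the fake handler might be wired up in a test. It assumes the calling code has been modified, as suggested above, to accept the HttpClient or handler from the outside, and that the snippet lives inside an async test method:

        // Hypothetical test sketch: build a canned 200 OK response, inject the fake
        // handler into HttpClient, and verify the canned body comes back without
        // any real network call being made.
        var fakeResponse = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("bucket created")
        };
        var client = new HttpClient(new FakeHttpMessageHandler(fakeResponse));

        var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
        var response = await client.SendAsync(request);
        var message = await response.Content.ReadAsStringAsync();
        // message == "bucket created" and response.StatusCode == HttpStatusCode.OK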

    Read the article

  • The Future of Project Management is Social

    - by Natalia Rachelson
    A guest post by Kazim Isfahani, Director, Product Marketing, Oracle.
    Rapid Ascent. Breakneck Speed. Lightning Fast. Perhaps even overwhelming. No matter which set of adjectives we use to describe it, social media's rise into the enterprise mainstream has been unprecedented. Indeed, the big 4 social media powerhouses (Facebook, Google+, LinkedIn, and Twitter) have nearly 2 billion users between them. You may be asking (as you really should), "That's all well and good for the consumer, but for me at my company, what's your point? Beyond the fact that I can check and post updates, that is." Good question, kind sir.
    Impact of Social and Collaboration on Project Management
    I'll dovetail this discussion to the project management realm, since that's what I'm writing about. Speed is a big challenge for project-driven organizations. Anything that can help speed up project delivery, be it a new product introduction effort or a geographical expansion project, is a good thing. So where does this whole social thing fit, particularly since there is already a host of tools to help with traditional project execution? The fact is companies have seen improvements in their productivity by deploying departmental collaboration and other social-oriented solutions. McKinsey's survey on social tools shows we have reached critical scale: 72% of respondents report that their companies use at least one, and over 40% say they are using social networks and blogs. We don't hear as much about the impact of social media technologies at the project and project manager level, but that does not mean there is none. Consider the new hire. The type of individual entering the workforce and executing on projects is a generation of worker expecting visually appealing, easy-to-use and easy-to-understand technology that meshes hand-in-hand with business processes. Consider the project manager. The social era has enhanced the role that the project manager must play. Today's project manager must be a supreme communicator, an influencer, a sympathizer, a negotiator, and still manage to keep all stakeholders in the loop on project progress. Social tools play a significant role in this effort. Now consider the impact on the project team. The way that a project team functions has changed, with newer, social-oriented technologies making the process of information dissemination and team communications much more fluid. It's clear that a shift is occurring where "social" is intersecting with project management.
    The Rise of Social Project Management
    We refer to the melding of project management and social networking as Social Project Management.
    Social Project Management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable and unique abilities exist within the larger organization. For this reason, Social Project Management systems should be integrated into the collaborative platform(s) of an organization, allowing communication to proceed outside the project boundaries. What makes social project management "social" is an implicit awareness where distributed teams build connected links in ways that were previously restricted to teams that were co-located. Just as critical, Social Project Management embraces the vision of seamless online collaboration within a project team, but also provides for (and enhances) the use of rigorous project management techniques. Social Project Management acknowledges that projects (particularly large projects) are a social activity: people doing work with people, for other people, with commitments to yet other people. The more people (larger projects), the more interpersonal the interactions, and the more the social dimension affects the project.
    The Epitome of Social: Fusion Project Portfolio Management
    If I take this one level further to discuss Fusion Project Portfolio Management, the notion of Social Project Management is on full display. With Fusion Project Portfolio Management, project team members have a single place for interaction on projects and access to any other resources working within the Fusion ERP applications. This gives team members the opportunity to stay informed, participate more, and provide better information. The application's visual appeal and highly graphical nature make it easy to navigate information. The project activity stream adds to the intuitive user experience. The goal of productivity is pervasive throughout Fusion Project Portfolio Management. Field research conducted with Oracle customers and partners showed that users needed a way to stay in the context of their core transactions and yet easily access social networking tools. This is manifested in the application so that when a user executes a business process, they not only have the transactional application at their fingertips, but also have things like e-mail, SMS, text, instant messaging and chat, all providing a number of different ways to interact with people and/or groups of people, both internal and external to the project and enterprise. But in the end, connecting people is relatively easy. The larger issue is finding a way to serve up relevant, system-generated, actionable information, in real time, which will allow for more streamlined execution of key business processes. Fusion Project Portfolio Management's design concept enables users to create project communities, establish discussion threads, and manage event calendars, as well as deliver project-based workspaces to organize communications within the context of a project, all within a secure business environment. We'd love to hear from you and get your thoughts and ideas about how Social Project Management is impacting your organization. To learn more about Oracle Fusion Project Portfolio Management, please visit this link.

    Read the article

  • H1 Visa interview tips - What you must know before attending the interview?

    - by Gopinath
    USA's H1 visa allows highly qualified professionals from other countries to work in America. Many IT professionals in India aspire to go to the USA on an H1 and work for their clients. Recently I had a chance to study the H1 visa process to help one of my friends, and I would like to share what I learned. With the assumption that your H1 petition is approved and you have an interview scheduled at the US Embassy for your visa stamping, here are the tips you must know before attending the interview.
    Dress Code - Formals
    Say no to casuals or any fancy dress when you attend the interview. It's not a party or a friend's home you are visiting. Consider the H1 visa interview as your job interview and dress up in formals. There is no option B for you; you must be in formals. A plain formal shirt with matching pants is suggested for men. A tie and suit would not be required, but if you are a professional at management level you can consider wearing a suit. Women can wear either a formal salwar or a formal pant-shirt. Avoid heavy jewellery; wear only what is a must as per your tradition or culture.
    Body Language - Smile on your face
    Your body language reflects what you are and what's going on in your mind. Don't be nervous or restless; be relaxed and wear a beautiful smile on your face. A smile is a curve that sets everything straight. When you are called for the interview, greet the interviewer with a beautiful smile. Say Good Morning/Afternoon/Evening depending on the time you are visiting them. Whenever appropriate, say thank you. Generally American professionals are very friendly people and they reciprocate your greetings. Make sure that you make them comfortable to start the interview.
    Carry original documents in a separate folder
    I don't want to talk much about the documents that are required for your H1B interview as it's a big subject on its own and requires a separate post. I assume that your consultant or employer helped you in gathering all the required documents like the petition, DS-160 forms, education and job related documents, resume, interview call letters, client letters, etc. For all the documents you are going to submit at the interview, make sure that you have the originals in a separate folder. If required, the interviewer may ask you to show the originals of any of the documents you submitted for visa processing. Don't mix the original documents with the documents you need to submit for the interview; have a separate folder for them. Those who are going for stamping along with their spouse and children need to carry a few extra original documents like the marriage certificate, marriage photos (around 30)/album, birth certificates, passports, and education and profession related certificates of the spouse and children.
    Know your role & responsibilities
    The interviewer will ask you questions on your roles and responsibilities at the client location. Be clear about your day-to-day tasks at the client place and be prepared to face detailed questions on the same. When asked, explain clearly and also make sure what you say is in line with what is mentioned in your petition and client invitation letter. At times they may ask you questions specific to the project/technology you are going to work on, so doing some homework in this area will help you easily answer the questions. Failing to answer basic questions on your role & responsibilities may result in rejection.
    You work for your Employer at the Client location but NOT FOR THE CLIENT
    One of the important things to keep in mind is that you work for your employer and you are being deputed to the client location on a work visa. Your employer is going to be solely responsible for your salary, work, promotions, pay hikes or whatsoever during your stay in the USA. Your client will not be responsible for anything. Let's say you are employed with Company X in India and they are applying for an H1B for you to work at your client (for example, Microsoft) in the USA; you must keep in mind that Microsoft is not your employer. Microsoft will not pay your salary or be responsible for any employment related activities. Company X will be solely responsible for all your employment related activities. If you don't get this right and tell the visa interviewer that your client is responsible, then you may get into trouble.
    Know your client
    It's always good to know the clients with whom you are going to work in the USA and their business. If your client is a well-known organisation then you may not get many questions from the interviewer; otherwise you need to be well prepared to provide details like the nature of the business, location, size of the organisation, etc. Get to know the basic details about your client and be confident while providing those details to the interviewer. Also make sure that you never talk about any confidential details of your client's projects and business. Revealing confidential details of your client may land your job itself in the soup.
    Make sure that your spouse is also in sync with you
    If you've applied for an H4 visa for your spouse along with your H1, make sure that your spouse is in sync with you. Your spouse should also know the basic details of your job, your employer, your client and the location where you will be travelling. Your spouse should also be prepared to answer questions related to your marriage, their profession (if working), kids, education, etc. Interviewers will try to assess your spouse's communication skills, their whereabouts while staying in the USA and whether they would prefer to work in the USA or not. H4 is a dependent visa, so your spouse is not allowed to work in the USA, and at no point should your spouse show any intention to search for work in the USA.
    Less luggage, more comfort
    You would have definitely heard that there are a lot of restrictions on what you can carry along with you to a US Embassy while attending the interview. To be frank, "many restrictions" doesn't quite cover it; there are a hell of a lot of them, and they are for the safety of everyone. You are not allowed to carry mobile phones, CDs/DVDs, USB drives, bank cards, cameras, cosmetics, food (except baby food), water, wallets, backpacks, sealed covers, etc. Trust me, most of the things we carry with us every day are not allowed inside. As there are hundreds of restrictions, it is easier to understand what you can carry along with you and carry just those things alone. Ask your employer/consultant to provide you with a checklist of items that you can carry. Most of what you would require is: the H1B related documents provided by the employer/consultant, photographs, all original documents supporting your H1B, passports, some cash for your travel expenses (avoid coins), and any important phone numbers/details written on a piece of paper (like your cab driver's number). If you carry restricted items then you will be stopped at the security checks and you will have to find someone who can safely keep all the restricted items for you.
    Due to the heavy restrictions in and around the US Embassy, you will not find any place to keep your luggage, so just carry the bare minimum required so that you feel more comfortable.
    Useful Links
    THE U.S. NON IMMIGRANT VISA APPLICATION PROCESS
    U.S VISA SECURITY REGULATIONS
    GENERAL FAQS
    Hope this information is helpful to you, and best of luck for your interview.
    Creative Commons image credit: Flickr/alexfrance, vinothchandar, hughelectronic, architratan, striatic

    Read the article

  • Community Outreach - Where Should I Go

    - by Roger Brinkley
    A few days ago I was talking to a person new to community development and they asked me what guidelines I used to determine the worthiness of a particular event. After our conversation was over I thought about it a little bit more and figured out there are three ways to determine if any event (be it a conference, blog, podcast or other social media) is worth doing: Transferability, Multiplication, and Impact.
    Transferability - Is what I have to say useful to the people who are going to hear it? For instance, consider a company that has a product offering that can connect up using a number of languages like Scala, Groovy or Java. Sending a Scala expert to talk about Scala and the product is not transferable to a Java User Group, but a Java expert doing the same talk with a Java slant is. Similarly, talking about JavaFX to any Java User Group meeting in Brazil was pretty much a wasted effort until it was open sourced. Once it was open sourced it was well received. You can also look at transferability in relation to the subject matter that you're dealing with. How transferable is a presentation that I create? Can I, or a technical writer on the staff, turn it into some technical document? Could it be converted into some type of screencast? If we have a regular podcast, can we make a reference to the document, catch the high points or turn it into an interview? Is there a way of using this in the sales group? In other words, is the document purely one-dimensional or can it be repurposed in other forms?
    Multiplication - On every trip I'm looking for 2 to 5 solid connections that I can make with developers. These are long term connections, because I know that once that relationship is established it will lead to another 2 - 5 from that connection, and within a couple of years we're talking about some 100 connections from just one developer. For instance, when I was working on JavaHelp in 2000 I hired a science teacher with a programming background. We've developed a very tight relationship over the years though we rarely see each other more than once a year. But at this JavaOne, one of his employees came up to me and said, "Richard (Rick Hard in Czech) told me to tell you that he couldn't make it to JavaOne this year but if I saw you to tell you hi". Another example is from my Mobile & Embedded days in Brazil. On our very first FISL trip about 5 years ago there were two university students who had created a project called "Marge". Marge was a Bluetooth framework that made connecting Bluetooth devices easier. I invited them to a "Sun" dinner that evening. Originally they were planning on leaving that afternoon, but they changed their plans, recognizing the opportunity. Their eyes were as big as saucers when they realized the level of engineers at the meeting. They went home and started a JUG in Florianópolis that we've visited more than a couple of times. One of them went to work for a Brazilian government lab similar to Berkeley Labs, MIT labs, Johns Hopkins Applied Physics Lab or Lincoln Labs in the US. That presented us with an opportunity to show Embedded Java as a possibility for some of the work they were doing there.
    Impact - The final criterion is how life-changing what I'm going to say will be to the individuals I'm reaching. A t-shirt is just a token, but when I reach down and tug at their developer hearts then I know I've succeeded. I'll never forget one time we flew all night to reach João Pessoa in northern Brazil.
    We arrived at 2 am and went immediately to our hotel, only to be woken up at 6 am to travel 2 hours by car to the presentation hall. When we arrived we were totally exhausted. Outside the facility there were 500 people lined up to hear 6 speakers for the day. That in itself was uplifting. I delivered one of my favorite talks, on "I have passion". It was a talk on golf and embedded Java development, "Find your passion". When we finished, a couple of first-year students came up to me and said how much my talk had inspired them. FISL is another great example. I had been about 4 years in a row. FISL is a very young group of developers, so capturing their attention is important. Several of the students will come back 2 or 3 years later and ask me questions about research or jobs. And then there's Louis. Louis is one of my favorite Brazilians. I can only describe him as a big Brazilian teddy bear. I see him every year at FISL. He works primarily in Java EE but he's attended every single one of my talks over the last 4 years. I can't tell you why, but he always greets me and gives me a hug. For some reason I've had a real impact. And of course when it comes to impact you don't just measure a presentation but every single interaction you have at an event. It's the hallway conversations, the booth conversations, but more importantly it's the conversations at dinner tables or in the cars when you're being transported to an event. There's a good story that illustrates this. Last year in the spring I was traveling to Goiânia in Brazil. I've been there many times and the leaders there know me well. One young man has picked me up at the airport on more than one occasion. We were going out to dinner one evening and he brought his girlfriend along. One thing led to another and I eventually asked him, in front of her, "Why haven't you asked her to marry you?" There were all kinds of excuses and she just looked at him and smiled. When I came back in December for JavaOne he sought me out. "I just want to tell you that I thought a lot about what you said, and I asked her to marry me. We're getting married next spring." Sometimes just one presentation is all it takes to make an impact. Other times it takes years. Some impacts are directly related to the company and some are more personal in nature. It doesn't matter which it is, because it's having the impact that matters.

    Read the article

  • Free Document/Content Management System Using SharePoint 2010

    - by KunaalKapoor
    That's right, it's true. You can use the free version of SharePoint 2010 to meet your document and content management needs and even run your public facing website or an internal knowledge bank. SharePoint Foundation 2010 is free. It may not have all the features that you get with an enterprise license, but it still has enough to cater to your needs to build a document management system and replace age-old file shares or folders. I've built a dozen content management sites for internal and public use exploiting SharePoint. There are hundreds of web content management systems out there (see CMS Matrix). On one hand we have commercial platforms like SharePoint, SiteCore, and Ektron etc., which are the most frequently used, and on the other hand there are free options like WordPress, Drupal, Joomla, and Plone etc., which are pretty popular as well. But I would be very surprised if anyone was able to find a single CMS platform that is all things to all people. In fact, not a lot of people consider SharePoint's free version on the free CMS side, but it's high time organizations benefited from this. Through this blog post I wanted to present SharePoint Foundation as an option for running a FREE CMS platform. Even if you knew that there is a free version of SharePoint, what most people don't realize is that SharePoint Foundation is a great option for running web sites of all kinds, not just team sites. It is a great option for many reasons, but in reality it is supported by Microsoft, and above all it is FREE (yay!), and it is extremely easy to get started. From a functionality perspective it's hard to beat SharePoint. Even the free version, SharePoint Foundation, offers simple data connectivity (through BCS), cross-browser support, accessibility, support for Office Web Apps, blogs, wikis, templates, document support, a health analyzer, support for presence, and MUCH more. I often get asked: "Can I use SharePoint 2010 as a document management system?" The answer really depends on:
    - What are your specific requirements?
    - What systems do you currently have in place for managing documents?
    - And of course, how much money you have :)
    Benefits? Not many large organizations have benefited from SharePoint yet. For some it has been an IT project to see what they can achieve with it; for others it has been used as a collaborative platform or, in many cases, an extended intranet. SharePoint 2010 has changed the game slightly, as the improvements that Microsoft has made have been noted by organizations, and we are seeing a lot of companies starting to build specific business applications using SharePoint as the basis, and nearly every business process will require documents at some stage. If you require a document management system and have SharePoint in place then it can be a relatively straightforward decision to use SharePoint, as long as you have reviewed the considerations just discussed. The collaborative nature of SharePoint 2010 is also a massive advantage, as specific departmental or project sites can be created quickly and easily that allow workers to interact in a variety of different ways using one source of information. This also benefits an organization with regards to how it manages the knowledge that it has, as if all of its information is in one source then it is naturally easier to search and manage. Is SharePoint right for your organization?
    As just discussed, this can only be determined after defining your requirements and also planning a longer-term strategy for how you will manage your documents and information. A key factor to look at is how the users would interact with the system and how much value it would bring to your organization. The amount of data and documents that organizations are creating is increasing rapidly each year. Therefore the ability to archive this information, whilst keeping the ability to know what you have and where it is, is vital to any organization's management of its information life cycle. SharePoint is best used for the initial life of business documents, where they need to be referenced and accessed over time. It is often beneficial to archive these later on to avoid storage and performance issues.
    FREE CMS - SharePoint, Really? In order to show some of the completeness of what comes with this free version of SharePoint 2010, I thought it would make sense to use Wikipedia (since everyone trusts it as a credible source). Wikipedia shows that a web content management system typically has the following components:
    Document Management: CMS software may provide a means of managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction. SharePoint is king when it comes to document management: version history, exclusive check-out, security, publication, workflow, and so much more.
    Content Virtualization: CMS software may provide a means of allowing each user to work within a virtual copy of the entire Web site, document set, and/or code base. This enables changes to multiple interdependent resources to be viewed and/or executed in-context prior to submission. Through the use of versioning, each content manager can preview, publish, and roll back content of pages, wiki entries, blog posts, documents, or any other type of content stored in SharePoint. The idea of each user having an entire copy of the website virtualized is a bit odd to me; I'm not sure why anyone would need that for anything but the simplest of websites.
    Automated Templates: Create standard output templates that can be automatically applied to new and existing content, allowing the appearance of all content to be changed from one central place. Through the use of Master Pages and Themes, SharePoint provides the ability to change the entire look and feel of a site. Of course, the older brother version of SharePoint, SharePoint Server 2010, also introduces the concept of Page Layouts, which allows page-template-level customization and even switching the layout of an individual page using different page templates. I think many organizations really think they want this but rarely end up using this bit of functionality.
    Easy Edits: Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical individuals to create and edit content. This is probably easier described with a screen cap of a vanilla SharePoint Foundation page in edit mode. Notice the page editing toolbar and the multiple layout options... It's actually easier to use than Microsoft Word.
    Workflow Management: Workflow is the process of creating cycles of sequential and parallel tasks that must be accomplished in the CMS.
    For example, a content creator can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it. Workflow? It's in there. In fact, the same workflow engine is running under SharePoint Foundation that is running under the other versions of SharePoint. The primary difference is that with SharePoint Foundation you need to configure the workflows yourself.
    Web Standards: Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards. SharePoint is in its fourth major iteration under Microsoft with the 2010 release. In addition to the innovation that Microsoft continuously adds, you have the entire global ecosystem available.
    Scalable Expansion: Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains. SharePoint Foundation can run multiple sites using multiple URLs on a single server install. Even more powerful, SharePoint Foundation is scalable and can be part of a multi-server farm to ensure that it will handle any amount of traffic that can be thrown at it.
    Delegation & Security: Some CMS software allows for various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management. SharePoint Foundation provides very granular security capabilities. Read more at http://msdn.microsoft.com/en-us/library/ee537811.aspx
    Content Syndication: CMS software often assists in content distribution by generating RSS and Atom data feeds to other systems. They may also e-mail users when updates are available as part of the workflow process. SharePoint Foundation nails it. With RSS syndication and email alerts available out of the box, content syndication is already in the platform.
    Multilingual Support: Ability to display content in multiple languages. SharePoint Foundation 2010 supports more than 40 languages.
    Read More
    Read more at http://msdn.microsoft.com/en-us/library/dd776256(v=office.12).aspx
    You can download the free version from http://www.microsoft.com/en-us/download/details.aspx?id=5970

    Read the article

  • Master-slave vs. peer-to-peer architecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don't "trample" on each other's toes, resulting in data corruption. In a nutshell, here's the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and, when everyone follows the rules, there are no accidents at intersections. As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there's clearly no danger of a collision, you know what I mean. The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held by detecting whether there were one or more transactions that needed conflicting access; if there were no conflicts, you could get by without holding any lock at all. In many "well-behaved" workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the "normal" behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case, so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness.
    So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the change synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time it will be the case that some copies have received the update and others haven't. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they're all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all's good. If not, then the system provides a mechanism to reconcile the discrepancy and to update the stale copies. So what's the problem with this? There are two major issues: First, IT'S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies.
    That's pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the outdated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency). Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies. In this case, the application program also has to resolve conflicts and then propagate the correct value to the copies that are outdated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don't handle this correctly, and worse, don't even know it. Imagine having hundreds of millions of records in your database: how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was read a data item. Wouldn't it be simpler to avoid this problem in the first place?
    Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and they are also aware of the "currency" of a replica. In other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many).
    So, back to my original point. A well designed and well architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient and high performance system. If you've ever programmed an Oracle NoSQL Database application, you'll know the difference!
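    To make the contrast concrete, here is a minimal, illustrative C# sketch of the two read paths described above. This is my own illustration, not the Oracle NoSQL Database API; all types and member names are hypothetical. The peer-to-peer style reads every copy, keeps the newest version and repairs stale copies, while the master-slave style routes a single read based on how much staleness the caller will tolerate. Conflict resolution for concurrent independent writes, discussed above, is deliberately left out to keep the sketch short.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical replica abstraction: each copy returns a value with a version,
        // knows whether it is the master, and reports how far it lags behind the master.
        public interface IReplica
        {
            (string Value, long Version) Read(string key);
            void Write(string key, string value, long version);
            bool IsMaster { get; }
            TimeSpan Lag { get; }   // TimeSpan.Zero for the master copy
        }

        public static class ReadPaths
        {
            // Peer-to-peer style: contact every copy on every read, keep the newest
            // version, and push it back to any stale copies (read repair).
            public static string ReadWithReconciliation(IReadOnlyList<IReplica> copies, string key)
            {
                var results = copies.Select(c => (Copy: c, Result: c.Read(key))).ToList();
                var newest = results.OrderByDescending(r => r.Result.Version).First();
                foreach (var stale in results.Where(r => r.Result.Version < newest.Result.Version))
                    stale.Copy.Write(key, newest.Result.Value, newest.Result.Version);
                return newest.Result.Value;
            }

            // Master-slave style: one request, routed to the master when the caller needs
            // the latest value, otherwise to the freshest replica within the tolerance.
            public static string ReadWithRouting(IReadOnlyList<IReplica> copies, string key, TimeSpan maxStaleness)
            {
                var target = maxStaleness == TimeSpan.Zero
                    ? copies.First(c => c.IsMaster)
                    : copies.Where(c => c.Lag <= maxStaleness).OrderBy(c => c.Lag).First();
                return target.Read(key).Value;
            }
        }

    Even in this toy form, the difference in message count is visible: the first path touches every copy on every read, while the second touches exactly one.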

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far, far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past but they never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling with alternative dates and a small attendee survey on the preferred option. All the pre-work was nicely executed. At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like that children could come too, and I was sold on this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great!
    LUGM - user group meeting on 15.06.2013 in L'Escalier
    Quick overview of Linux & the LUGM
    With a little bit of delay the LUGM meeting officially started with a quick overview of and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity around the island some years ago, but unfortunately it has been quiet in recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction to the LUGM, Ajay joined the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially considering that I've been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune.
    OpenELEC Mediacenter
    Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR) as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With an EPG, scheduled recordings and general multimedia centre features it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future.
    LUGM - our next generation of Linux users (15.06.2013)
    Project Evil Genius (PEG)
    Don't be scared of the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius: it's because of the time of day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSuSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experiences he has had with other Linux users from all over the world. Over the next couple of days Ish promised to put his script on GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in. Highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu).
    Exploring the beach of L'Escalier after the meeting
    'After-party' at the beach of L'Escalier
    Puh, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we should all head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic.
    Resumé & outlook
    It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well chosen, with enough space for each and everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and the preparation of lunch. I'm kind of sure that this was an exceptional meeting of the LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully, soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green
    If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise: smartphone adoption is reaching the farthest corners of the globe, and the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping the Golden Elixir for the business world. No one would argue their importance. So what are 'advanced workforce analytics'? Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated, it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience, the net result of which is happier customers.
    The obvious benefits, and the lesser realised impact
    Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is less well known. In actuality, mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality, however, is that mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it the Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change are much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and the ability to anticipate, staff satisfaction and retention are higher, and real-time feedback is constant.
    The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships.
    IT and HR - partners and stewards of mobility
    One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT, from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect.
    No turning back, and no desire to
    With real-time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear that, as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisation's evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, compliance is simpler with mobile-enabled automation and quality data. Their world has changed for the better. For the CIO, mobility also assists in optimising performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them mean that users don't need high-level technical expertise or extensive training. It reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line, all the while lowering the cost of assets and related maintenance work by simplifying processes.
    Rewards of a mobile enterprise outweigh the risks
    With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience, but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile.
Several state government websites have advice on how to create mobile apps and more. And as recently as last week the Victorian Minister for Technology Gordon Rich-Phillips unveiled his State government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops. Aaron is the Vice President of HCM Strategy at Oracle Corporation where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include, ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this as my previous question was being treated as if I am "Shopping or seeking Product Recommendations" even though I was NOT - BTW they have deleted my comments too which were not offensive in nature. anyway - I have re-phrased some parts of my question and I hope SF Admins "Do Not Modify / Edit" this one - will be most grateful for that. I have a lot of respect for the People who visit this SITE and help others ! Just To clarify : Just to go by SF rules - I am not seeking someone to Design this solution, I am simply seeking real world examples, experiences, technical expert opinions / suggestions, any tips or tricks they may have or any problems they may have faced while doing something similar above with these products. I am also not asking for Capacity Planning for Storage, We have done some research and I am seeking Expert Assurance / Suggestions. We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000 - 4000 workstations (Windows7 32 and 64) with a few 100s with Windows XP 32/64 Bit. I have read the implementation guide for SEP and have read tech-notes for Desktop Recovery 2011. Our team have planned to deploy this as follows : 1 x dedicated SQL 2008R2 for Symantec Endpoint Protection (Instead of using the Embedded Database) 1 x Dedicated SQL 2008R2 for Symantec Desktop Recovery 2011 (Instead of using the Embedded Database) 1 x Dedicated W2K8 R2 Box for the SEPM (Symantec Endpoint Protection Manager - Mgmt. APP) 1 x Dedicated W2K8 R2 Box for the Symantec Desktop Recovery 2011 Management Application Agent Deployment : As per Symantec Documentation for both of the above, an agent can be pushed via the Mgmt. Application (provided no firewalls are blocking ports required etc. - we have Windows firewall disabled already). Server Hardware : Per SQL Server : 16GB RAM + SAS DISKS + Dual XEON, RAID-10 for the SQL DB or I can always mount a LUN from our existing Hitachi or EMC SAN. SEPM Server : 16GB RAM + SAS DISKS + DUAL XEON System Recovery MGMT SERVER : 16GB RAM + SAS DISKS + DUAL XEON Above is the initial plan we have for 3000 - 4000 client workstation (Windows) Now my Questions :-) a) If we had these users distributed amongst two sites with AD DC / GC in each site, How would I restrict SEPM and Desktop Mgmt. solution to only check for users in their respective site ? b) At present all users are under one building but we are going to move some dept. to a new location (with dedicated connectivity), How would we control which SEPM / MGMT Server is responsible for which site ? c) We have netbackup in our environment backing up other servers, I am planning to protect these 4 (2 x SQL, 1 x SEPM, 1 x System Recovery Mgmt. Server) via netbackup or I can use System recovery 2011 server edition on all 4 of these boxes as well. (License is not an issue as we have the complete symantec portfolio included in our license). d) Now - Saving Desktop backups - What strategies have you implemented ? Any best practice recommendation for a large user base ? I was thinking to either mount a LUN from our Hitachi SAN on the Symantec Recovery Server itself or backup to the users hard drive locally and then copy it over to a network location ? Suggestions welcome :-) If you have anything to add / correct - that will be really helpful before diving into the actual implementation phase. Will be most grateful with your suggestions, recommendations and corrections with above - Many Thanks !

    Read the article

  • What's up with LDoms: Part 5 - A few Words about Consoles

    - by Stefan Hinker
    Back again to look at a detail of LDom configuration that is often forgotten - the virtual console server. Remember, LDoms are SPARC systems.  As such, each guest will have it's own OBP running.  And to connect to that OBP, the administrator will need a console connection.  Since it's OBP, and not some x86 BIOS, this console will be very serial in nature ;-)  It's really very much like in the good old days, where we had a terminal concentrator where all those serial cables ended up in.  Just like with other components in LDoms, the virtualized solution looks very similar. Every LDom guest requires exactly one console connection.  Envision this similar to the RS-232 port on older SPARC systems.  The LDom framework provides one or more console services that provide access to these connections.  This would be the virtual equivalent of a network terminal server (NTS), where all those serial cables are plugged in.  In the physical world, we'd have a list somewhere, that would tell us which TCP-Port of the NTS was connected to which server.  "ldm list" does just that: root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- UART 16 7680M 0.4% 27d 8h 22m jupiter bound ------ 5002 20 8G mars active -n---- 5000 2 8G 0.5% 55d 14h 10m venus active -n---- 5001 2 8G 0.5% 56d 40m pluto inactive ------ 4 4G The column marked "CONS" tells us, where to reach the console of each domain. In the case of the primary domain, this is actually a (more) physical connection - it's the console connection of the physical system, which is either reachable via the ILOM of that system, or directly via the serial console port on the chassis. All the other guests are reachable through the console service which we created during the inital setup of the system.  Note that pluto does not have a port assigned.  This is because pluto is not yet bound.  (Binding can be viewed very much as the assembly of computer parts - CPU, Memory, disks, network adapters and a serial console cable are all put together when binding the domain.)  Unless we set the port number explicitly, LDoms Manager will do this on a first come, first serve basis.  For just a few domains, this is fine.  For larger deployments, it might be a good idea to assign these port numbers manually using the "ldm set-vcons" command.  However, there is even better magic associated with virtual consoles. You can group several domains into one console group, reachable through one TCP port of the console service.  This can be useful when several groups of administrators are to be given access to different domains, or for other grouping reasons.  Here's an example: root@sun # ldm set-vcons group=planets service=console jupiter root@sun # ldm set-vcons group=planets service=console pluto root@sun # ldm bind jupiter root@sun # ldm bind pluto root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- UART 16 7680M 6.1% 27d 8h 24m jupiter bound ------ 5002 200 8G mars active -n---- 5000 2 8G 0.6% 55d 14h 12m pluto bound ------ 5002 4 4G venus active -n---- 5001 2 8G 0.5% 56d 42m root@sun # telnet localhost 5002 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. sun-vnts-planets: h, l, c{id}, n{name}, q:l DOMAIN ID DOMAIN NAME DOMAIN STATE 2 jupiter online 3 pluto online sun-vnts-planets: h, l, c{id}, n{name}, q:npluto Connecting to console "pluto" in group "planets" .... Press ~? for control options .. 
What I did here was add the two domains pluto and jupiter to a new console group called "planets" on the service "console" running in the primary domain.  Simply using a group name will create such a group, if it doesn't already exist.  By default, each domain has its own group, using the domain name as the group name.  The group will be available on port 5002, chosen by LDoms Manager because I didn't specify it.  If I connect to that console group, I will now first be prompted to choose the domain I want to connect to from a little menu. Finally, here's an example how to assign port numbers explicitly: root@sun # ldm set-vcons port=5044 group=pluto service=console pluto root@sun # ldm bind pluto root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- UART 16 7680M 3.8% 27d 8h 54m jupiter active -t---- 5002 200 8G 0.5% 30m mars active -n---- 5000 2 8G 0.6% 55d 14h 43m pluto bound ------ 5044 4 4G venus active -n---- 5001 2 8G 0.4% 56d 1h 13m With this, pluto would always be reachable on port 5044 in its own exclusive console group, no matter in which order other domains are bound. Now, you might be wondering why we always have to mention the console service name, "console" in all the examples here.  The simple answer is because there could be more than one such console service.  For all "normal" use, a single console service is absolutely sufficient.  But the system is flexible enough to allow more than that single one, should you need them.  In fact, you could even configure such a console service on a domain other than the primary (or control domain), which would make that domain a real console server.  I actually have a customer who does just that - they want to separate console access from the control domain functionality.  But this is definately a rather sophisticated setup. Something I don't want to go into in this post is access control.  vntsd, which is the daemon providing all these console services, is fully RBAC-aware, and you can configure authorizations for individual users to connect to console groups or individual domain's consoles.  If you can't wait until I get around to security, check out the man page of vntsd. Further reading: The Admin Guide is rather reserved on this subject.  I do recommend to check out the Reference Manual. The manpage for vntsd will discuss all the control sequences as well as the grouping and authorizations mentioned here.

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation, they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder about: 1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers- each having their own skillset. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is, whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system. Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor. 2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, cloud allowed each end user to create databases of their liking resulting in hundreds of version and patch levels and thousands of individual databases. Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare. 
That leads us to our next question. 3. Does it satisfy your Security Officers and Compliance Auditors? Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with them as they keep getting reconfigured and moved around. This leads to the proverbial needle in the haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise. Bottomline is, flexibility and agility should not come at the expense of compliance and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all approach’ i.e. OS level isolation, can we think smartly about database isolation or schema based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there with heterogeneous common-denominator approach have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database, should be governed by the need of the applications and not by putting compliance considerations in the backburner. 4. Does it satisfy your CIO? Finally, does it satisfy your higher ups? As the sponsor of cloud initiative, the CIO is expected to lead an IT transformation project, not merely a run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation would flatten out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, " the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance. Last but not the least, lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. In Oracle, we always emphasize on freedom of choosing a platform; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any Operating System (that the agent is certified on) Oracle’s hardware as well as 3rd party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • How broken is routing strategy that causes a martian packet (so far only) during tracepath?

    - by lkraav
    I believe I've achieved a table that routes packets from and to eth1/192.168.3.x through 192.168.3.1, and packets from and to eth0/192.168.1.x through 192.168.1.1 (helpful source). Question: when doing tracepath from 192.168.3.20 (from within vserver), I'm getting kernel: [318535.927489] martian source 192.168.3.20 from 212.47.223.33, on dev eth0 at or near the target IP, while intermediary hops go without (log below). I don't understand why this packet is arriving on eth0, instead of eth1, even after reading this: Note that you may see packets from non-routable IP addresses when running the traceroute or tracepath commands. While packets cannot be routed to these routers, packets sent between 2 routers only need to know the address of the next hop within the local networks, which could be a non-routable address. Can someone explain that paragraph in human language? Based on short initial trials so far, everything else seems to work without causing martians. Is this contained to the nature of tracepath operation or do I have some other bigger routing problem that will cause work traffic breakage? Side note: is it possible to inspect martian packet with tcpdump or wireshark or anything of the sort? I'm have not been able to get it to show up on my own. vserver-20 / # tracepath -n 212.47.223.33 1: 192.168.3.2 0.064ms pmtu 1500 1: 192.168.3.1 1.076ms 1: 192.168.3.1 1.259ms 2: 90.191.8.2 1.908ms 3: 90.190.134.194 2.595ms 4: 194.126.123.94 2.136ms asymm 5 5: 195.250.170.22 2.266ms asymm 6 6: 212.47.201.86 2.390ms asymm 7 7: no reply 8: no reply 9: no reply ^C Host routing: $ sudo ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo 2: sit0: <NOARP> mtu 1480 qdisc noop state DOWN link/sit 0.0.0.0 brd 0.0.0.0 3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:24:1d:de:b3:5d brd ff:ff:ff:ff:ff:ff inet 192.168.1.2/24 scope global eth0 4: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:46:46:a3:6a brd ff:ff:ff:ff:ff:ff inet 192.168.3.2/27 scope global eth1 inet 192.168.3.20/27 brd 192.168.3.31 scope global secondary eth1 # linux-vserver instance $ sudo ip route default via 192.168.1.1 dev eth0 metric 3 unreachable 127.0.0.0/8 scope host 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.2 192.168.3.0/27 dev eth1 proto kernel scope link src 192.168.3.2 $ sudo ip rule 0: from all lookup local 32764: from all to 192.168.3.0/27 lookup dmz 32765: from 192.168.3.0/27 lookup dmz 32766: from all lookup main 32767: from all lookup default $ sudo ip route show table dmz default via 192.168.3.1 dev eth1 metric 4 192.168.3.0/27 dev eth1 scope link metric 4 Gateway routing # ip route 10.24.0.2 dev tun0 proto kernel scope link src 10.24.0.1 10.24.0.0/24 via 10.24.0.2 dev tun0 192.168.3.0/24 dev br-dmz proto kernel scope link src 192.168.3.1 192.168.1.0/24 dev br-lan proto kernel scope link src 192.168.1.1 $ISP_NET/23 dev eth0.1 proto kernel scope link src $WAN_IP default via $ISP_GW dev eth0.1 Additional background Options for non-virtualized network interface isolation?

    Read the article

  • outgoing DNS flood targeted to non-ISP hosts

    - by radudani
    Below is the specific traffic monitored at the network perimeter and originating from a user PC on Vista platform. My question is not about the effects of the flood, but about the nature of the source of it. Is this some known infection, or just an application went out of control? a standard NOD32 scan didn't find anything, as the user told me. Thank you for any hint. 14:40:10.115876 IP 192.168.7.42.4122 > 67.228.0.181.53: S 2742536765:2742536765(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.115943 IP 192.168.7.42.4124 > 67.228.181.207.53: S 3071079888:3071079888(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116015 IP 192.168.7.42.4126 > 67.228.0.181.53: S 3445199428:3445199428(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116086 IP 192.168.7.42.4128 > 67.228.181.207.53: S 2053198691:2053198691(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116154 IP 192.168.7.42.4130 > 67.228.0.181.53: S 2841660872:2841660872(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116222 IP 192.168.7.42.4132 > 67.228.181.207.53: S 3150822465:3150822465(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116290 IP 192.168.7.42.4134 > 67.228.0.181.53: S 1692515021:1692515021(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116358 IP 192.168.7.42.4136 > 67.228.181.207.53: S 3358275919:3358275919(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116430 IP 192.168.7.42.4138 > 67.228.0.181.53: S 930184999:930184999(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116498 IP 192.168.7.42.4140 > 67.228.181.207.53: S 1504984630:1504984630(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116566 IP 192.168.7.42.4142 > 67.228.0.181.53: S 546074424:546074424(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116634 IP 192.168.7.42.4144 > 67.228.181.207.53: S 4241828590:4241828590(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116702 IP 192.168.7.42.4146 > 67.228.0.181.53: S 668634627:668634627(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.116769 IP 192.168.7.42.4148 > 67.228.181.207.53: S 3768119461:3768119461(0) win 16384 <mss 1460,nop,nop,sackOK> 14:40:10.117360 IP 192.168.7.42.4111 > 67.228.0.181.53: 12676 op8 Resp12*- [2128q][|domain] 14:40:10.117932 IP 192.168.7.42.4112 > 67.228.181.207.53: 44190 op7 NotAuth*|$ [29103q],[|domain] 14:40:10.118726 IP 192.168.7.42.4113 > 67.228.0.181.53: 49196 inv_q [b2&3=0xeea] [64081q] [28317a] [43054n] [23433au] Type63482 (Class 5889)? 
M-_^OS>M-JM-m^_M-i.[|domain] 14:40:10.119934 IP 192.168.7.42.4114 > 67.228.181.207.53: 48131 updateMA Resp12$ [43850q],[|domain] 14:40:10.121164 IP 192.168.7.42.4115 > 67.228.0.181.53: 46330 updateM% [b2&3=0x665b] [23691a] [998q] [32406n] [11452au][|domain] 14:40:10.121866 IP 192.168.7.42.4116 > 67.228.181.207.53: 34425 op7 YXRRSet* [39927q][|domain] 14:40:10.123107 IP 192.168.7.42.4117 > 67.228.0.181.53: 56536 notify+ [b2&3=0x27e6] [59761a] [23005q] [33341n] [29705au][|domain] 14:40:10.123961 IP 192.168.7.42.4118 > 67.228.181.207.53: 19323 stat% [b2&3=0x14bb] [32491a] [41925q] [2038n] [5857au][|domain] 14:40:10.132499 IP 192.168.7.42.4119 > 67.228.0.181.53: 50432 updateMA+ [b2&3=0x6bc2] [10733a] [9775q] [46984n] [15261au][|domain] 14:40:10.133394 IP 192.168.7.42.4120 > 67.228.181.207.53: 2171 notify Refused$ [26027q][|domain] 14:40:10.134421 IP 192.168.7.42.4121 > 67.228.0.181.53: 25802 updateM NXDomain*-$ [28641q][|domain] 14:40:10.135392 IP 192.168.7.42.4122 > 67.228.181.207.53: 2073 updateMA+ [b2&3=0x6d0b] [43177a] [54332q] [17736n] [43636au][|domain] 14:40:10.136638 IP 192.168.7.42.4123 > 67.228.0.181.53: 15346 updateD+% [b2&3=0x577a] [61686a] [19106q] [15824n] [37833au] Type28590 (Class 64856)? [|domain] 14:40:10.137265 IP 192.168.7.42.4124 > 67.228.181.207.53: 60761 update+ [b2&3=0x2b66] [43293a] [53922q] [23115n] [11349au][|domain] 14:40:10.148122 IP 192.168.7.42.4125 > 67.228.0.181.53: 3418 op3% [b2&3=0x1a92] [51107a] [60368q] [47777n] [56081au][|domain]

    Read the article

  • Can someone explain the "use-cases" for the default munin graphs?

    - by exhuma
When installing munin, it activates a default set of plugins (at least on Ubuntu). Alternatively, you can simply run munin-node-configure to figure out which plugins are supported on your system. Most of these plugins plot straightforward data. My question is not to explain the nature of the data (well... maybe for some) but what is it that you look for in these graphs? It is easy to install munin and see fancy graphs. But having the graphs and not being able to "read" them renders them totally useless. I am going to list the standard plugins which are enabled by default on my system, so it's going to be a long list. For completeness, I am also going to list plugins which I think I understand and give a short explanation as to what I think each is used for. Please correct me if I am wrong with any of them. So let me split this question in three parts:
Plugins where I don't even understand the data
Plugins where I understand the data but don't know what I should look out for
Plugins which I think I understand
Plugins where I don't even understand the data
These may contain questions that are not necessarily aimed at munin alone. Not understanding the data usually means a gap in fundamental knowledge of operating systems/hardware... ;) Feel free to respond with a "giyf" answer. These are plugins where I can only guess what's going on... I hardly want to look at these while just "guessing"...
Disk IOs per device (IOs/second): What's an IO? I know it stands for input/output, but that's as far as it goes.
Disk latency per device (Average IO wait): Not a clue what an "IO wait" is...
IO Service Time: This one is a huge mess, and it's near impossible to see anything in the graph at all.
Plugins where I understand the data but don't know what I should look out for
IOStat (blocks/second read/written): I assume the thing to look out for in here are spikes, which would mean that the device is in heavy use?
Available entropy (bytes): I assume that this is important for random number generation? Why would I graph this? So far the value has always been near constant.
VMStat (running/I/O sleep processes): What's the difference between this one and the "processes" graph? Both show running/sleeping processes, whereas the "Processes" graph seems to have more details.
Disk throughput per device (bytes/second read/written): What's the difference between this one and the "IOStat" graph?
inode table usage: What should I look for in this graph?
Plugins which I think I understand
I'll be guessing some things here... correct me if I am wrong.
Disk usage in percent (percent): How much disk space is used/remaining. As this approaches 100%, you should consider cleaning up or extending the partition. This is extremely important for the root partition.
Firewall Throughput (packets/second): The number of packets passing through the firewall. If this is spiking for a longer period of time, it could be a sign of a DOS attack (or we are simply receiving a large file). It can also give you an idea about your firewall performance. If it's levelling out and you need more "power", you should consider load balancing. If it's levelling out and you see a correlation with your CPU load, it could also mean that your hardware is not fast enough. Correlations with disk usage could point to excessive LOG targets in your FW config.
eth0 errors (packets in/out): Network errors. If this value is increasing, it could be a sign of faulty hardware.
eth0 traffic (bits/second in/out): Raw network traffic. This should correlate with Firewall throughput.
number of threads: An ever-increasing value might point to a process not properly closing threads. Investigate!
processes: Breakdown of active processes (including sleeping). A quick spike in here might point to a fork-bomb. A slowly but ever-increasing value might point to an application spawning sub-processes but not properly closing them. Investigate using ps faux.
process priority: This shows the distribution of process priorities. Having only high-priority processes is not of much use. Consider de-prioritizing some.
cpu usage: Fairly straightforward. If this is spiking, you may have an attack going on, or a process is hogging the CPU. If it's slowly increasing and approaching max in normal operations, you should consider upgrading your hardware (or load-balancing).
file table usage: Number of actively open files. If this is reaching max, you may have a process opening, but not properly releasing, files.
load average: Shows a summarized value for the system load. Should correlate with CPU usage. Increasing values can come from a number of sources. Look for correlations with other graphs.
memory usage: A graphical representation of your memory. As long as you have a lot of unused+cache+buffers you are fine.
swap in/out: Shows the activity on your swap partition. This should always be 0. If you see activity on this, you should add more memory to your machine!

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes: RAID 1, 2 17GB scsi disks, Seagate ST318417W RAID 5, 3 4GB scsi disks, 2 Seagate ST34573W and 1 ST34572W. We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures I have serious doubts that I'll be able to avoid catastrophic failure from this server through the November target as is without restoring the RAID redundancy — it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot in to Netware, at which point I can use CI/O Array Management Software Version 2.0 to actually look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good. As for finding reliable spares, I have no clue where to even begin looking to find a new 4GB scsi drive, or even which exact scsi system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (hyper-v), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower level knowledge of netware and dos than I ever developed, or if I did have since forgotten (I'm not exactly a dos neophyte, either). Part of my problem is this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a hyper-v vm from an old netware server, a line on a floppy with better software for the RAID controller, recommendation on a good Novell consultant in Nebraska that would be able to put things right, a whole other option I haven't considered yet, etc. 
Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things. Update 2: Just a progress report that I currently have a working Netware 3.12 install in VMWare Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and netware volumes on my existing server, and figuring out from that information what modules need added to netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then. Final Update (Jan 5, 2011): I was able to get spares working in both raid arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMWare Server 2.0. The spare can run and use our erp software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
Parallel LINQ and the Task Parallel Library contain many options for configuration.  Although the default configuration options are often ideal, there are times when customizing the behavior is desirable.  Both frameworks provide full configuration support. When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine.  By default, PLINQ and the TPL both use the ThreadPool to schedule tasks.  Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal.  However, there are times that the default behavior is not appropriate.  For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system.  Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class.  All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property.  For example, let’s revisit one of the simple data parallel examples from Part 2: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, we’re looping through an image, and calling a method on each pixel in the image.  If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system.  This could be accomplished easily by doing: var options = new ParallelOptions(); options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1); Parallel.For(0, pixelData.GetUpperBound(0), options, row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Now, we’re restricting this routine to using no more than half the cores in our system.  Note that I included a check to prevent a single core system from supplying zero; without this check, we’d potentially cause an exception.  I also did not hard code a specific value for the MaxDegreeOfParallelism property.  One of our goals when parallelizing a routine is allowing it to scale on better hardware.  Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.  
The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads.  In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let’s revisit our declarative data parallelism sample from Part 6: double min = collection.AsParallel().Min(item => item.PerformComputation()); Here, we’re performing a computation on each element in the collection, and saving the minimum value of this operation.  If we wanted to restrict this to a limited number of threads, we would add our new extension method: int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1); double min = collection .AsParallel() .WithDegreeOfParallelism(maxThreads) .Min(item => item.PerformComputation()); This automatically restricts the PLINQ query to half of the threads on the system. PLINQ provides some additional configuration options.  By default, PLINQ will occasionally revert to processing a query in parallel.  This occurs because many queries, if parallelized, typically actually cause an overall slowdown compared to a serial processing equivalent.  By analyzing the “shape” of the query, PLINQ often decides to run a query serially instead of in parallel.  This can occur for (taken from MSDN): Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices. Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order. Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)). Queries that contain Concat, unless it is applied to indexable data sources. Queries that contain Reverse, unless applied to an indexable data source. If the specific query follows these rules, PLINQ will run the query on a single thread.  However, none of these rules look at the specific work being done in the delegates, only at the “shape” of the query.  There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly.  In these cases, you can override the default behavior by using the WithExecutionMode extension method.  This would be done like so: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .Select(i => i.PerformComputation()) .Reverse(); Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>.  We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain. Finally, PLINQ has the ability to configure how results are returned.  When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result.  For example, the method above returns a new, reversed collection.  In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread. This streaming introduces overhead.  IEnumerable<T> isn’t designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.  
There are two extremes of how this could be accomplished, but both extremes have disadvantages. The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller.  This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach.  However, it also means that every item is introducing synchronization overhead, since each item needs to be merged individually. On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot.  The advantage here is that the least amount of synchronization is added to the system, which means the query will, on a whole, run the fastest.  However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned. The default behavior in PLINQ is actually between these two extremes.  By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain.  Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks.  This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios. However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes.  This can be done by using the WithMergeOptions extension method.  For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no bufferring.  This can be done by changing our above routine to: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.NotBuffered) .Select(i => i.PerformComputation()) .Reverse(); On the other hand, if are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.FullyBuffered) .Select(i => i.PerformComputation()) .Reverse(); Notice, also, that you can specify multiple configuration options in a parallel query.  By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.
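As a quick, self-contained illustration of the two throttling knobs discussed above, here is a minimal sketch (not from the original post). The PerformComputation body is a stand-in for whatever per-item work you actually do, and the timings are only there to make the effect of restricting the degree of parallelism visible:

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class DegreeOfParallelismDemo
{
    // Stand-in for a per-item computation; it just burns a little CPU.
    static double PerformComputation(int item)
    {
        double acc = 0;
        for (int i = 1; i < 200000; i++)
            acc += Math.Sqrt(item + i);
        return acc;
    }

    static void Main()
    {
        var data = Enumerable.Range(0, 200).ToArray();

        // Same guard as in the post: never ask for zero threads on a single-core box.
        int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);

        // TPL: restrict Parallel.ForEach via ParallelOptions.MaxDegreeOfParallelism.
        var options = new ParallelOptions { MaxDegreeOfParallelism = maxThreads };
        var sw = Stopwatch.StartNew();
        Parallel.ForEach(data, options, item => { PerformComputation(item); });
        Console.WriteLine("Parallel.ForEach on {0} threads: {1} ms",
            maxThreads, sw.ElapsedMilliseconds);

        // PLINQ: the equivalent restriction via WithDegreeOfParallelism.
        sw.Restart();
        double min = data.AsParallel()
                         .WithDegreeOfParallelism(maxThreads)
                         .Min(item => PerformComputation(item));
        Console.WriteLine("PLINQ Min on {0} threads: {1} ms (min = {2})",
            maxThreads, sw.ElapsedMilliseconds, min);
    }
}

Running it with different values for maxThreads is an easy way to see, on your own hardware, where adding threads stops paying off.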

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate here's a simple example. If you have a Web API method like this:[HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead like this:[HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP Request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back of why I'd want to do this in the first place - after all HTML Form submissions are the domain of MVC and not WebAPI which is a valid point. However, the more common use case is using POST Variables with AJAX calls. The following is quite common for passing simple values:$.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work. How ASP.NET Web API handles Content Bodies Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API Action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from async nature of Web API where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values FormDataCollection - collection of POST keys/values ModelBinding POST Values - Binding POST data to Object Properties The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter. 
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: You have to provide a strongly typed object that can receive the data and this object has to map the inbound data. To rewrite the example above to use ModelBinding I have to create a class maps the properties that I need as parameters:public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method:[HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/jsonContent-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model binding and in fact it's a common practice to use (view) model object for inputs coming back from the client and mapping them into these models. But it can be  kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data, but sometimes there are just a couple of simple response values that need to be sent back. If all you need is to pass a couple operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further  purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding so the following would still work:[HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work. Collecting POST values with FormDataCollection Another more dynamic approach to handle POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general) and then read the values out individually by querying each. [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such as setup as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use and especially with string parameters is easy to deal with. 
It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functionality and a viable choice for collecting POST values. What about [FromBody]? Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info in how FromBody works and several related issues to how POST data is mapped, you can check out Mike Stalls post: How WebAPI does Parameter Binding Not really sure where the Web API team thought [FromBody] would really be a good fit other than a down and dirty way to send a full string buffer. Extending Web API to make multiple POST Vars work? Don't think so Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and Web API community I didn't hear anything that even suggests that this is possible. From what I could find I'd say it's not possible primarily because Web API's Routing engine does not account for the POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work there's not much that can be done. If somebody has an idea how this could be accomplished I would love to hear about it. Do we really need multi-value POST mapping? I think that that POST value mapping is a feature that one would expect of any API tool to have. If you look at common APIs out there like Flicker and Google Maps etc. they all work with POST data. POST data is very prominent much more so than JSON inputs and so supporting as many options that enable would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allows you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior and Web APIs non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer to this question. 
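Not from Rick's post, but one hedged sketch of how the type-conversion hassle with FormDataCollection could be smoothed over: a small hypothetical extension method (the GetValue name, the SamplesController class and the RememberMe field are made up for illustration; FormDataCollection.Get() is the only Web API member it relies on):

using System;
using System.Globalization;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Web.Http;

// Hypothetical helper (not part of Web API): wraps FormDataCollection.Get()
// and converts the raw POST string to the requested type, with a fallback.
public static class FormDataCollectionExtensions
{
    public static T GetValue<T>(this FormDataCollection form, string key,
                                T defaultValue = default(T))
    {
        string raw = form.Get(key);
        if (string.IsNullOrEmpty(raw))
            return defaultValue;

        try
        {
            return (T)Convert.ChangeType(raw, typeof(T), CultureInfo.InvariantCulture);
        }
        catch (Exception)
        {
            // Conversion failed (bad format, overflow, unsupported type) - fall back.
            return defaultValue;
        }
    }
}

public class SamplesController : ApiController
{
    [HttpPost]
    public HttpResponseMessage Authenticate(FormDataCollection form)
    {
        // Strings come straight through; non-string values are converted in one place.
        string username = form.GetValue<string>("Username");
        string password = form.GetValue<string>("Password");
        bool rememberMe  = form.GetValue<bool>("RememberMe", false);

        // ... authentication logic would go here ...
        return Request.CreateResponse(System.Net.HttpStatusCode.OK);
    }
}

It keeps the dynamic, no-model-required nature of the FormDataCollection approach while hiding the string-to-type plumbing behind one reusable method.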
I hope for V.next of WebAPI Microsoft will consider this a feature that's worth adding.
Related Articles:
Passing multiple POST parameters to Web API Controller Methods
Mike Stall's post: How Web API does Parameter Binding
Where does ASP.NET Web API Fit?
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • First round playing with Memcached

    - by Shaun
    To be honest I have not been very interested in the caching before I’m going to a project which would be using the multi-site deployment and high connection and concurrency and very sensitive to the user experience. That means we must cache the output data for better performance. After looked for the Internet I finally focused on the Memcached. What’s the Memcached? I think the description on its main site gives us a very good and simple explanation. Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages. The original Memcached was built on *nix system are is being widely used in the PHP world. Although it’s not a problem to use the Memcached installed on *nix system there are some windows version available fortunately. Since we are WISC (Windows – IIS – SQL Server – C#, which on the opposite of LAMP) it would be much easier for us to use the Memcached on Windows rather than *nix. I’m using the Memcached Win X64 version provided by NorthScale. There are also the x86 version and other operation system version.   Install Memcached Unpack the Memcached file to a folder on the machine you want it to be installed, we can see that there are only 3 files and the main file should be the “memcached.exe”. Memcached would be run on the server as a service. To install the service just open a command windows and navigate to the folder which contains the “memcached.exe”, let’s say “C:\Memcached\”, and then type “memcached.exe -d install”. If you are using Windows Vista and Windows 7 system please be execute the command through the administrator role. Right-click the command item in the start menu and use “Run as Administrator”, otherwise the Memcached would not be able to be installed successfully. Once installed successful we can type “memcached.exe -d start” to launch the service. Now it’s ready to be used. The default port of Memcached is 11211 but you can change it through the command argument. You can find the help by typing “memcached -h”.   Using Memcached Memcahed has many good and ready-to-use providers for vary program language. After compared and reviewed I chose the Memcached Providers. It’s built based on another 3rd party Memcached client named enyim.com Memcached Client. The Memcached Providers is very simple to set/get the cached objects through the Memcached servers and easy to be configured through the application configuration file (aka web.config and app.config). Let’s create a console application for the demonstration and add the 3 DLL files from the package of the Memcached Providers to the project reference. Then we need to add the configuration for the Memcached server. Create an App.config file and firstly add the section on top of it. Here we need three sections: the section for Memcached Providers, for enyim.com Memcached client and the log4net. 
    <configSections>
      <section name="cacheProvider"
               type="MemcachedProviders.Cache.CacheProviderSection, MemcachedProviders"
               allowDefinition="MachineToApplication"
               restartOnExternalChanges="true"/>
      <sectionGroup name="enyim.com">
        <section name="memcached"
                 type="Enyim.Caching.Configuration.MemcachedClientSection, Enyim.Caching"/>
      </sectionGroup>
      <section name="log4net"
               type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/>
    </configSections>

    Then we add the configuration for each of these three sections to the App.config file. The Memcached server information is defined under the enyim.com section, since that component is responsible for connecting to the Memcached servers. Assuming I installed Memcached on two servers with the default port, the configuration would look like this:

    <enyim.com>
      <memcached>
        <servers>
          <!-- put your own server(s) here-->
          <add address="192.168.0.149" port="11211"/>
          <add address="10.10.20.67" port="11211"/>
        </servers>
        <socketPool minPoolSize="10" maxPoolSize="100" connectionTimeout="00:00:10" deadTimeout="00:02:00"/>
      </memcached>
    </enyim.com>

    Memcached supports multi-server deployment, which means you can install Memcached on as many servers as you need. The Memcached protocol is responsible for routing the cached objects to the proper server, so it is very easy to scale out your system with Memcached. Next, define the Memcached Providers configuration. The defaultExpireTime setting indicates how long an object cached in Memcached lives before it expires; the default value is 2000 ms.

    <cacheProvider defaultProvider="MemcachedCacheProvider">
      <providers>
        <add name="MemcachedCacheProvider"
             type="MemcachedProviders.Cache.MemcachedCacheProvider, MemcachedProviders"
             keySuffix="_MySuffix_"
             defaultExpireTime="2000"/>
      </providers>
    </cacheProvider>

    The last configuration section is for log4net:

    <log4net>
      <!-- Define some output appenders -->
      <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/>
        </layout>
      </appender>
      <!--<threshold value="OFF" />-->
      <!-- Setup the root category, add the appenders and set the default priority -->
      <root>
        <priority value="WARN"/>
        <appender-ref ref="ConsoleAppender">
          <filter type="log4net.Filter.LevelRangeFilter">
            <levelMin value="WARN"/>
            <levelMax value="FATAL"/>
          </filter>
        </appender-ref>
      </root>
    </log4net>

    Get, Set and Remove the Cached Objects Once the configuration is finished it is very simple to work with the Memcached servers. Memcached Providers gives us a static class named DistCache that is used to operate on the Memcached servers. Get<T>: retrieves a cached object from the Memcached servers; if it fails it returns null or the default value. Add: adds an object with a unique key to the Memcached servers. Assume we have a time-consuming operation that retrieves an email address from a name; this is exactly the kind of operation that should be cached. The method would look like this (I used Thread.Sleep to simulate the long-running operation):

    static string GetEmailByNameSlowly(string name)
    {
        Thread.Sleep(2000);
        return name + "@ethos.com.cn";
    }

    Then, in the real retrieval method, we first check whether the name/email pair has been looked up previously and cached.
    If it has, we simply return it from Memcached; otherwise we invoke the slow method to retrieve it and then cache the result.

    static string GetEmailByName(string name)
    {
        var email = DistCache.Get<string>(name);
        if (string.IsNullOrEmpty(email))
        {
            Console.WriteLine("==> The name/email not be in memcached so need slow loading. (name = {0})==>", name);
            email = GetEmailByNameSlowly(name);
            DistCache.Add(name, email);
        }
        else
        {
            Console.WriteLine("==> The name/email had been in memcached. (name = {0})==>", name);
        }
        return email;
    }

    Finally, let's finish the calling method and run it.

    static void Main(string[] args)
    {
        var name = string.Empty;
        while (name != "q")
        {
            Console.Write("==> Please enter the name to find the email: ");
            name = Console.ReadLine();

            var email = GetEmailByName(name);
            Console.WriteLine("==> The email of {0} is {1}.", name, email);
        }
    }

    The first time I entered "ziyanxu" it took about 2 seconds to get the email, since nothing was cached, but the next time I entered "ziyanxu" it returned very quickly from Memcached.   Summary In this post I explained a bit about why we need a cache, what Memcached is, and how to use it from a C# application. The example is fairly simple but hopefully demonstrates how to use it. Memcached is very easy and simple to use, since it gives you full control over what, when and how to cache objects. And when using Memcached you don't need to think about the individual cache servers; Memcached behaves like one huge object pool in front of you. The next steps I'm thinking about are: What kind of data should be cached, and how should the key be determined? How can the cache be implemented as a layer on top of the business layer, so that the application does not notice the cache is there? How can the cache be implemented with AOP, so that the business logic does not need to consider the cache at all? I will investigate these in the future (a rough sketch of the layering idea follows below) and will share my thoughts and results.   Hope this helps, Shaun All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
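    One way the "cache as a layer" idea could look is a small caching decorator that sits in front of the real lookup component, so callers never talk to DistCache directly. This is only a rough sketch that reuses the DistCache API shown in this post; the IEmailLookup interface, the SlowEmailLookup and CachedEmailLookup class names, and the namespace import are assumptions made purely for illustration.

    using System;
    using System.Threading;
    using MemcachedProviders.Cache; // namespace assumed from the provider types in the config above

    // Hypothetical abstraction over the slow lookup shown in the post.
    public interface IEmailLookup
    {
        string GetEmailByName(string name);
    }

    // The real, slow implementation (equivalent to GetEmailByNameSlowly above).
    public class SlowEmailLookup : IEmailLookup
    {
        public string GetEmailByName(string name)
        {
            Thread.Sleep(2000); // simulate the expensive call
            return name + "@ethos.com.cn";
        }
    }

    // Caching decorator: the business layer only sees IEmailLookup and never
    // knows Memcached is involved.
    public class CachedEmailLookup : IEmailLookup
    {
        private readonly IEmailLookup inner;

        public CachedEmailLookup(IEmailLookup inner)
        {
            this.inner = inner;
        }

        public string GetEmailByName(string name)
        {
            var cached = DistCache.Get<string>(name);
            if (!string.IsNullOrEmpty(cached))
                return cached;

            var email = inner.GetEmailByName(name);
            DistCache.Add(name, email);
            return email;
        }
    }

    Wiring it up would just be IEmailLookup lookup = new CachedEmailLookup(new SlowEmailLookup()); swapping the decorator out (for testing, or to remove caching) then never touches the business code.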

    Read the article

  • "Content is not allowed in prolog" when parsing perfectly valid XML on GAE

    - by Adrian Petrescu
    Hey guys, I've been beating my head against this absolutely infuriating bug for the last 48 hours, so I thought I'd finally throw in the towel and try asking here before I throw my laptop out the window. I'm trying to parse the response XML from a call I made to AWS SimpleDB. The response is coming back on the wire just fine; for example, it may look like: <?xml version="1.0" encoding="utf-8"?> <ListDomainsResponse xmlns="http://sdb.amazonaws.com/doc/2009-04-15/"> <ListDomainsResult> <DomainName>Audio</DomainName> <DomainName>Course</DomainName> <DomainName>DocumentContents</DomainName> <DomainName>LectureSet</DomainName> <DomainName>MetaData</DomainName> <DomainName>Professors</DomainName> <DomainName>Tag</DomainName> </ListDomainsResult> <ResponseMetadata> <RequestId>42330b4a-e134-6aec-e62a-5869ac2b4575</RequestId> <BoxUsage>0.0000071759</BoxUsage> </ResponseMetadata> </ListDomainsResponse> I pass in this XML to a parser with XMLEventReader eventReader = xmlInputFactory.createXMLEventReader(response.getContent()); and call eventReader.nextEvent(); a bunch of times to get the data I want. Here's the bizarre part -- it works great inside the local server. The response comes in, I parse it, everyone's happy. The problem is that when I deploy the code to Google App Engine, the outgoing request still works, and the response XML seems 100% identical and correct to me, but the response fails to parse with the following exception: com.amazonaws.http.HttpClient handleResponse: Unable to unmarshall response (ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog.): <?xml version="1.0" encoding="utf-8"?> <ListDomainsResponse xmlns="http://sdb.amazonaws.com/doc/2009-04-15/"><ListDomainsResult><DomainName>Audio</DomainName><DomainName>Course</DomainName><DomainName>DocumentContents</DomainName><DomainName>LectureSet</DomainName><DomainName>MetaData</DomainName><DomainName>Professors</DomainName><DomainName>Tag</DomainName></ListDomainsResult><ResponseMetadata><RequestId>42330b4a-e134-6aec-e62a-5869ac2b4575</RequestId><BoxUsage>0.0000071759</BoxUsage></ResponseMetadata></ListDomainsResponse> javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(Unknown Source) at com.sun.xml.internal.stream.XMLEventReaderImpl.nextEvent(Unknown Source) at com.amazonaws.transform.StaxUnmarshallerContext.nextEvent(StaxUnmarshallerContext.java:153) ... (rest of lines omitted) I have double, triple, quadruple checked this XML for 'invisible characters' or non-UTF8 encoded characters, etc. I looked at it byte-by-byte in an array for byte-order-marks or something of that nature. Nothing; it passes every validation test I could throw at it. Even stranger, it happens if I use a Saxon-based parser as well -- but ONLY on GAE, it always works fine in my local environment. It makes it very hard to trace the code for problems when I can only run the debugger on an environment that works perfectly (I haven't found any good way to remotely debug on GAE). Nevertheless, using the primitive means I have, I've tried a million approaches including: XML with and without the prolog With and without newlines With and without the "encoding=" attribute in the prolog Both newline styles With and without the chunking information present in the HTTP stream And I've tried most of these in multiple combinations where it made sense they would interact -- nothing! I'm at my wit's end. 
Has anyone seen an issue like this before that can hopefully shed some light on it? Thanks!

    Read the article

  • NHibernate which cache to use for WinForms application

    - by chiccodoro
    I have a C# WinForms application with a database backend (oracle) and use NHibernate for O/R mapping. I would like to reduce communication to the database as much as possible since the network in here is quite slow, so I read about second level caching. I found this quite good introduction, which lists the following available cache implementations. I'm wondering which implementation I should use for my application. The caching should be simple, it should not significantly slow down the first occurrence of a query, and it should not take much memory to load the implementing assemblies. (With NHibernate and Castle, the application already takes up to 80 MB of RAM!) Velocity: uses Microsoft Velocity which is a highly scalable in-memory application cache for all kinds of data. Prevalence: uses Bamboo.Prevalence as the cache provider. Bamboo.Prevalence is a .NET implementation of the object prevalence concept brought to life by Klaus Wuestefeld in Prevayler. Bamboo.Prevalence provides transparent object persistence to deterministic systems targeting the CLR. It offers persistent caching for smart client applications. SysCache: Uses System.Web.Caching.Cache as the cache provider. This means that you can rely on ASP.NET caching feature to understand how it works. SysCache2: Similar to NHibernate.Caches.SysCache, uses ASP.NET cache. This provider also supports SQL dependency-based expiration, meaning that it is possible to configure certain cache regions to automatically expire when the relevant data in the database changes. MemCache: uses memcached; memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Basically a distributed hash table. SharedCache: high-performance, distributed and replicated memory object caching system. See here and here for more info My considerations so far were: Velocity seems quite heavyweight and overkill (the files totally take 467 KB of disk space, haven't measured the RAM it takes so far because I didn't manage to make it run, see below) Prevalence, at least in my first attempt, slowed down my query from ~0.5 secs to ~5 secs, and caching didn't work (see below) SysCache seems to be for ASP.NET, not for winforms. MemCache and SharedCache seem to be for distributed scenarios. Which one would you suggest me to use? There would also be a built-in implementation, which of course is very lightweight, but the referenced article tells me that I "(...) should never use this cache provider for production code but only for testing." Besides the question which fits best into my situation I also faced problems with applying them: Velocity complained that "dcacheClient" tag not specified in the application configuration file. Specify valid tag in configuration file," although I created an app.config file for the assembly and pasted the example from this article. Prevalence, as mentioned above, heavily slowed down my first query, and the next time the exact same query was executed, another select was sent to the database. Maybe I should "externalize" this topic into another post. I will do that if someone tells me it is absolutely unusual that a query is slowed down so much and he needs further details to help me.
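    For what it's worth, whichever provider ends up being chosen, the NHibernate side of enabling the second-level cache looks roughly the same. The sketch below is only an illustration, not a recommendation of a particular provider: the SysCache provider class name, the CacheBootstrap class and the assumption that mappings come from hibernate.cfg.xml are all placeholders to swap for your own choices.

    using NHibernate.Cfg;

    public static class CacheBootstrap
    {
        public static Configuration BuildConfiguration()
        {
            var cfg = new Configuration();

            // Read mappings and connection settings from hibernate.cfg.xml as usual.
            cfg.Configure();

            // Turn on the second-level cache and, optionally, the query cache.
            cfg.SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache, "true");
            cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "true");

            // Provider class name is an assumption here; substitute whichever
            // provider you settle on (SysCache, Prevalence, MemCache, SharedCache, ...).
            cfg.SetProperty(NHibernate.Cfg.Environment.CacheProvider,
                "NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache");

            return cfg;
        }
    }

    Each cached class or collection also needs a cache usage declared in its mapping (for example <cache usage="read-write"/> inside the <class> element), otherwise nothing is actually stored in the second-level cache.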

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment, or Cti, in StreamInsight is definitely one of the most important concepts to learn if you want your streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article, but a very good explanation, in addition to Books Online, can be found in these three articles by a member of the StreamInsight team at Microsoft, Ciprian Gerea. Time in StreamInsight Series http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx A lot of the problems I see with unresponsive or stuck streams on the MSDN forums are to do with how Ctis are enqueued, or in a lot of cases not enqueued. If you enqueue events and never enqueue a Cti then StreamInsight will be perfectly happy; you, on the other hand, will never see data on the output, as you have not told StreamInsight to flush the stream. This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem. The stream of data I was dealing with on this project was very bursty; that is to say, when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engine it is best practice to do so with a StartTime that is given to you by the system producing the event. StreamInsight processes events, and it doesn't matter whether those events are being pushed into the engine by a source system or read from something like a flat file in a directory somewhere; you can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important: we could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream, but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that will show exactly the problem I had.

    CREATE TABLE [dbo].[t]
    (
        [c1] [int] PRIMARY KEY,
        [c2] [datetime] NULL
    )

    INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810')

    Column c2 defines the StartTime of the event on the source, and as you can see the values in all three rows of data are the same. If we read Ciprian’s articles we know that we can define how Ctis get injected into the stream in three different places: the stream definition, the input factory, or the input adapter. I personally have always been a fan of enqueuing Ctis through the factory.
    Below is code typical of what I would use to do this. On the class itself I do some inheriting:

    public class SimpleInputFactory :
        ITypedInputAdapterFactory<SimpleInputConfig>,
        ITypedDeclareAdvanceTimeProperties<SimpleInputConfig>

    And then I implement the following function:

    public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
    {
        return new AdapterAdvanceTimeSettings(
            new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)),
            AdvanceTimePolicy.Adjust);
    }

    The configInfo.CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected, which in turn will flush the stream of data; I usually pass a value of 1 for this setting. The second parameter determines the Cti timestamp in terms of a delay relative to the events: -1 ticks in the past results in 1 tick in the future, i.e. ahead of the event. The problem with this method, though, is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events:

    currEvent.StartTime = (DateTimeOffset)dt.c2;

    If I go ahead and run my StreamInsight process with this configuration I can see on the output adapter that two events have been removed. To see this in a little more depth I can use the StreamInsight Debugger and see what happens internally. What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (also the EndTime of the event). The second event arrives and it has a StartTime before the Cti, and even though we specified AdvanceTimePolicy.Adjust on the factory we know that a point event can never be adjusted like this, so the event is dropped. The same happens for the third event as well (the second and third events get trumped by the Cti). For a more detailed discussion of why this happens look here: http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx We end up with a single event being pushed into the output adapter, and our result now makes sense. The next way I tried to solve this problem was by changing the value of the second parameter to TimeSpan.Zero. Here is how my factory code now looks:

    public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
    {
        return new AdapterAdvanceTimeSettings(
            new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero),
            AdvanceTimePolicy.Adjust);
    }

    What I am doing here is declaring a policy that says: inject a Cti together with every event and stamp it with a StartTime equal to the StartTime of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The downside is that because the Cti is declared with the StartTime of the event itself, it does not actually flush that particular event, because in the StreamInsight algebra a Cti commits only those events that occurred strictly before it. To flush the events we need a Cti to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration: as you can see, all we got through was the Cti and none of the events. The debugger output shows the stamps on the Cti and the events themselves.
    Because the Cti issued has the same timestamp (StartTime) as the events, none of the events get flushed. I was nearly there, but not quite. Because my stream was bursty it was possible that the next event would not come along for a few seconds, and this was far too long for an event to be enqueued and not flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind, although only one of them made sense when I explored it some more: where multiple events have the same StartTime I could add 1 tick to the first event, 2 to the second, 3 to the third, and so on, thereby giving them unique StartTime values; or I could add a timer to manually inject Ctis. The problem with the first implementation is that I would be giving the events a new StartTime. This would cause me the following problems: if I want to define windows over the stream then some events may not be captured in the right windows, and therefore any calculations I did on those windows would be wrong. And what would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick; it is now too far in the past as far as my stream is concerned and would be dropped. Not what I would want at all. I decided then to look at the timer-based solution. I created a timer on my input adapter that elapses every 200 ms.

    private Timer tmr;

    public SimpleInputAdapter(SimpleInputConfig configInfo)
    {
        ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString);
        this.configInfo = configInfo;

        tmr = new Timer(200);
        tmr.Elapsed += new ElapsedEventHandler(t_Elapsed);
        tmr.Enabled = true;
    }

    void t_Elapsed(object sender, ElapsedEventArgs e)
    {
        ts = DateTime.Now - dtCtiIssued;
        if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false)
        {
            EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100));
            TimerIssuedCti = true;
        }
    }

    In the t_Elapsed event handler I find the difference in time between now and when the last event was processed (dtCtiIssued). I then check whether that difference is greater than or equal to 200 ms and whether the last issuing of a Cti was done by the timer or by a genuine event (TimerIssuedCti). If I didn't do this check then I would enqueue a Cti every time the timer elapsed, which is not something I wanted. If the difference between the two times is greater than or equal to 200 ms and the last Cti was issued by a real event, then I issue a Cti through the timer to flush the event queue; otherwise I do nothing. When I enqueue the Ctis into my stream in my ProduceEvents method I also set the values of dtCtiIssued and TimerIssuedCti.

    currEvent = CreateInsertEvent();
    currEvent.StartTime = (DateTimeOffset)dt.c2;
    TimerIssuedCti = false;
    dtCtiIssued = currEvent.StartTime;

    If I go ahead and run this configuration I see the following in my output: the first Cti gets enqueued as before, but then another is enqueued by the timer, and because this has a later timestamp it flushes the enqueued events through the engine. Conclusion Hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is, for me, one of the most important things you can learn. I have attached my solution for the demos. It is all in one project, and testing each variation is a simple matter of commenting and un-commenting the parts of the code we have been dealing with here.
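    The timer guard above is easy to get subtly wrong (issuing a Cti on every tick, or never resetting the flag), so it can help to pull the decision into a tiny class that can be exercised outside the adapter. The sketch below is not part of the original solution: CtiThrottle, its member names and the injected issueCti callback are all invented here, and a real adapter would pass its own EnqueueCtiEvent call into it.

    using System;

    // Encapsulates the rule: only let the timer issue a Cti if no event has arrived
    // for at least the quiet period AND the last Cti came from a real event.
    public class CtiThrottle
    {
        private readonly TimeSpan quietPeriod;
        private readonly Action<DateTimeOffset> issueCti; // e.g. t => EnqueueCtiEvent(t) in the adapter
        private DateTimeOffset lastEventTime;
        private bool timerIssuedCti;

        public CtiThrottle(TimeSpan quietPeriod, Action<DateTimeOffset> issueCti)
        {
            this.quietPeriod = quietPeriod;
            this.issueCti = issueCti;
            this.lastEventTime = DateTimeOffset.Now;
        }

        // Call from ProduceEvents whenever a genuine event is enqueued.
        public void EventEnqueued(DateTimeOffset eventStartTime)
        {
            lastEventTime = eventStartTime;
            timerIssuedCti = false;
        }

        // Call from the timer's Elapsed handler.
        public void TimerTick(DateTimeOffset now)
        {
            if (!timerIssuedCti && now - lastEventTime >= quietPeriod)
            {
                // Stamp the Cti slightly behind "now", as in the post, so genuine
                // events arriving immediately afterwards are not dropped.
                issueCti(now.AddTicks(-100));
                timerIssuedCti = true;
            }
        }
    }

    In the adapter you would construct it once, for example var throttle = new CtiThrottle(TimeSpan.FromMilliseconds(200), t => EnqueueCtiEvent(t)); then call throttle.EventEnqueued(currEvent.StartTime) where the post sets dtCtiIssued, and throttle.TimerTick(DateTimeOffset.Now) from t_Elapsed.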

    Read the article

  • CodePlex Daily Summary for Tuesday, February 23, 2010

    CodePlex Daily Summary for Tuesday, February 23, 2010New Projects.NET Beginners: This project is a summary of project for first time developer and .net beginners. the aim is to provide tools and libraries to get startet with dev...A simple ASP.NET Currency / Money TextBox User Control: A ASP.NET TextBox control used with AJAX maskeditextender makes it possible to enter numbers but it's not very intuitive to use. CurrencyTextBox co...Academiki: Academik The project aims to be a university social network with content sharing and intellectual property. Academik makes it easier for students t...Acessando Campos com XPath Expression: Esse é um exemplo de como usar Xpath Expression na orchestration do Biztalk 2006. O Artigo do exemplo esta em www.biztalkbrasil.blogspot.comAg.CommandManager: A command manager implementation for Silverlights. Supports commanding to more or less any event using the ICommand interfaceApunta Notas: Apunta notas is just a note taking program that I created to learn WPF. Now you can write everything you need to remember or tell somebody. Or you...AzureKit: AzureKit provides a more direct approach to Azure's Table Service, which takes more advantage of the NoSQL nature of the storage medium.CRM 4.0 Distribute Workflow Activity: This plugin allows to execute a workflow for each entity that has a 1:N or N:N relationship to a given entity. For example: execute a workflow for...Dbg Shell: Dbg Shell replaces WinDbg for debugging dumps. All standard Dbg commands are supported. You can also write scripts in .Net assemblies to automated ...Egg Timer: Egg Timer is a very simple Windows Form application for setting short time-frame alarms.Enterprise Library Extensions: Extensions for the Microsoft Enterprise Library applications blocks which makes programming applications even easier.Event Calendar 2.0 Data Extractor: Really simple app to recover event calendar 2.0 information from iGoogle settings files and turn them into CSV format info for importing to other c...iTwiit: Silverlight Twitter Clientlibtym: Have your complete movie collection at a glance to manage all your movie files very comfortably!Metabolite Enterprise Libraries for EPiServer CMS using Page Type Builder: The Metabolite Enterprise libraries are a set of C# Class Libraries developed for use with EPiServer 5 R2 SP2+ projects using PageTypeBuilder. They...Metro UX: Metro UXMTI: -Personal Expense Tracker: Personal Expense Tracker helps you track your expenses. I tried to find simple win forms expense tracker but found none interesting, so i made one...rarouš: repository for rarouš.weblog articlesSacDotNetUG: SacDotNetUG is an ASP.NET MVC 2 Web application intended for the Sacramento .NET user group. This project servers 2 main goals: to promote the adop...ShellLight: ShellLight is essentially a graphical shell for Silverlight applications that enables a quick auto-complete launcher for features in your solution.Sina Weibo(Microblog): sina weibo .NET libraries and applications.Terrain Independant Navigating Automaton v2.0: This is where members of the Robot Design Team from Stony Brook University come together and work on our unique TINA. This project is for a self-go...Url Rewrite.Net: Url Rewrite.Net is an open-source SEO project which contains Custom Http Module example and Custom Configrutaion Module.It is developed in C#.NET 2...WebPart and WebService Currency Converter: This is only a sample code how to get data from yahoo finance and how to implementing on Sharepoint WebPart or WebServices. 
This code it is freely...WPF AutoComplete TextBox Control: A AutoComplete TextBox Control written in WPF, Looks like the system built-in auto-completion(SHAutoComplete).ZWaveAPI: This project is aim to create an open class library on ZWave. It is based on article from digiWave.dk New Releases.NET Beginners: MathLab Visual Studio Project Template: First preview to a mathlab beginners library..NET Beginners: Turtle Visual Studio Project Template: The turtle engine is a very simlpe turtle which runs over a beach and leaves a track.A simple ASP.NET Currency / Money TextBox User Control: CurrencyTextBox Source v1.0: Source code with a test project.A simple ASP.NET Currency / Money TextBox User Control: CurrencyTextBox.dll: The User Control for use in projects.Acessando Campos com XPath Expression: Source Code SampleXPathExpression: O Source code contem o Projeto em Visual Studio 2005.Actipro WPF Controls Contrib: v2009.2 build 515: Minor tweaks and updated to target Actipro WPF Studio 2009.2 (build 515).Analysis Management System: 1.0.0.1 Update: Fix - Issue 4004 Nieuw - Beschikbare klanten kunnen nu bekeken worden via Extra/Aanvragers (Ctrl R)Apunta Notas: Apunta Notas 1.0 Release Candidate: There is the Release Candidate of Apunta Notas.ARSoft.Tools.Net - C# DNS and SPF Library: 1.2.0: Added asynchronous operations for DNS client.CRM 4.0 Distribute Workflow Activity: Beta: Initial release. Complete functionality, limited testing.Dbg Shell: First Public Release: First ReleaseDnDns and PocketDnDns - A .NET DNS Client Resolver Library: DnDns Library Release 2: A DNS protocol library written completely in managed code (C#). Supports common DNS records types like A, CNAME, MX, SRV, and more. Works on Window...Egg Timer: Egg Timer v1.0: Pretty simple application. Set the time directly or use the 5 minute and 1 hour increment/decrement buttons.EnhSim: EnhSim v1.9.7.2 BETA: 1.9.7.2 BETAImportant!: This beta version includes the changes to the Flame shock damage-over-time component which are currently on the PTR. Downlo...EnhSim: EnhSim v1.9.7.3 BETA: Important!: This beta version includes the changes to the Flame shock damage-over-time component which are currently on the PTR. Download 1.9.7.1 f...Esendex SMS SDK and Downloads for Microsoft.NET languages: Esendex .NET SDK v0.4.0: Features Messaging Service: Send a single SMS message and multiple SMS messages. Send a single Voice Message and multiple Voice Messages. Send...Event Calendar 2.0 Data Extractor: V1.0: First ever build of extractorExcelDna: ExcelDna Version 0.22: An important bugfix release that fixes a critical bug in the MultiThreaded marshaling support (under Excel 2007).InfoService: InfoService v1.5 Beta 8: InfoService Beta Release Please note this is a BETA. It should be stable, but i can't gurantee that! So use it on your own risk. Please read Plugi...Krypton Palette Selectors: Release 1.2: Adds the new KryptonPaletteContextMenu and refactors the KryptonPaletteDropButton to use it.kuuy static system: kss_v1.0beta: kuuy static sytem 1.0 beta editionlibtym: libtym: First public release version. Full functionality, tested. Please notify about bugs.mojoPortal: 2.3.3.9: see release notes on mojoPortal.com http://www.mojoportal.com/mojoportal-2339-released.aspx This release makes it easy to use Artisteer html templa...Net Tool: v1.01: User interface has been changedOpen NFSe: Open NFSe 0.1 (Salvador): Open NFSe 0.1 (Salvador)Paint.NET PSD Plugin: 1.0.7: Further substantial improvements in speed of both loading and saving. 
In particular, loading is about 5x as fast as in version 1.0.6. Saving is ...Project Starlight: 2.0: Release 2.0 final Changes: -Numerous stability fixes -Firefox 3.5 support -Safari 64 bit support (Snow Leopard) Mac and Windows Binaries are avai...RoTwee: RoTwee 5.0.0.1: This version fix for 16620Secure Data: Secure Data 2010.02.22.01: This version has been rewritten and contains many enhancements and I encourage anyone interested to download the source code and work through the Q...TreeSizeNet: TreeSizeNet 0.10.1: - Complete Redesign - Improved Stability - Improved Performance - PieChart for directory contentUrl Rewrite.Net: Beta V0.1 Release: Includes : -Custom Http Module -Custom Configuration Section Module ( in web.config) -Rewriting ModuleVCC: Latest build, v2.1.30222.0: Automatic drop of latest buildWebPart and WebService Currency Converter: CurrencyConverter.zip: CurrencyConverter.zip Have 3 Project on this files: 1. Library Project 2. Load Library Project Using Web Services 3. Load Library Project Using We...WSUS Smart Approve: 1.0.0.2: Fix: 25903ZuneConsole: ZuneConsole: Console GUI for the Zune Customs library, which should make everything work. This is what you should download.ZuneConsole: ZuneConsole How-To Manual: A quick (only 9 pages lol) tutorial on how to add custom artist info to your Zune.Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETDotNetNuke® Community EditionImage Resizer Powertoy Clone for WindowsMicrosoft SQL Server Community & SamplesMost Active ProjectsDinnerNow.netRawrBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSharpyInfoServicejQuery Library for SharePoint Web ServicesPHPExcelpatterns & practices – Enterprise LibrarySharePoint Contrib

    Read the article

  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
    The following is effectively a repost of the BizTalk 2004 naming guidlines that I have previously detailed.  I have posted these again for completeness under BizTalk 2009 and to allow an element of separation in case I find some reason to amend these for BizTalk 2009. These guidlines should be universal across any version of BizTalk you may wish to apply them to. General Rules All names should be named with a Pascal convention. Project Namespaces For message schemas: [CompanyName].XML.Schemas.[FunctionalName]* Examples:  ABC.XML.Schemas.Underwriting DEF.XML.Schemas.MarshmellowTradingExchange * Donates potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation For web services: [CompanyName].Web.Services.[FunctionalName] Examples: ABC.Web.Services.OrderJellyBeans For the main BizTalk Projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]* Examples: ABC.BizTalk.Mappings.Underwriting ABC.BizTalk.Orchestrations.Underwriting * Donates potential for multiple levels of functional name, such as Mappings.Underwriting.Valuations Assemblies BizTalk Assembly names should match the associated Project Namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to the formal assembly name and the DLL name. The Solution name should take the name of the main project within the solution, and also therefore the namespace for that project. Although long names such as this can be unwieldy to work with, the benefits of having the full scope available when the assemblies are installed on the target server are generally judged to outweigh this inconvenience. Messaging Artifacts Artifact Standard Notes Example Schema <DescriptiveName>.xsd   .NET Type name should match, without file extension.    .NET Namespace will likely match assembly name. PurchaseOrderAcknowledge_FF.xsd  or FNMA100330_FF.xsd Property Schema <DescriptiveName>.xsd Should be named to reflect possible common usage across multiple schemas  IspecMessagePropertySchema.xsd UnderwritingOrchestrationKeys.xsd Map <SourceSchema>2<DestinationSchema>.btm Exceptions to this may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps InstructionResponse2CustomEmailRequest.btm (exception example) AccountCustomerAddressSummaryRequest2MainframeRequest.btm Orchestration <DescriptiveName>.odx   GetValuationReports.odx SendMTEDecisionResponse.odx Send/Receive Pipeline <DescriptiveName>.btp   ValidatingXMLReceivePipeline.btp FlatFileAssembler.btp Receive Port A plainly worded phrase that will clearly explain the function.    FraudPreventionServices LetterProcessing   Receive Location A plainly worded phrase that will clearly explain the function.  ? Do we want to include the transport type here ? Arrears Web Service Send Port Group A plainly worded phrase that will clearly explain the function.   Customer Updates Send Port A plainly worded phrase that will clearly explain the function.    ABCProductUpdater LogLendingPolicyOutput Parties A meaningful name for a Trading Partner. If dealing with multiple entities within a Trading Partner organization, the Organization name could be used as a prefix.   Roles A meaningful name for the role that a Trading Partner plays.     Orchestration Workflow Shapes Shape Standard Notes Example Scopes <DescriptionOfContainedWork> or <DescOfcontainedWork><TxType>   Including info about transaction type may be appropriate in some situations where it adds significant documentation value to the diagram. 
HandleReportResponse         Receive Receive<MessageName> Typically, MessageName will be the same as the name of the message variable that is being received “into”. ReceiveReportResponse Send Send<MessageName> Typically, MessageName will be the same as the name of the message variable that is being sent. SendValuationDetailsRequest Expression <DescriptionOfEffect> Expression shapes should be named to describe the net effect of the expression, similar to naming a method.  The exception to this is the case where the expression is interacting with an external .NET component to perform a function that overlaps with existing BizTalk functionality – use closest BizTalk shape for this case. CreatePrintXML Decide <DescriptionOfDecision> A description of what will be decided in the “if” branch Report Type? Perform MF Save? If-Branch <DescriptionOfDecision> A (potentially abbreviated) description of what is being decided Mortgage Valuation Yes Else-Branch Else Else-branch shapes should always be named “Else” Else Construct Message (Assign) Create<Message> (for Construct)     <ExpressionDescription> (for expression) If a Construct shape contains a message assignment, it should be prefixed with “Create” followed by an abbreviated name of the message being assigned.    The actual message assignment shape contained should be named to describe the expression that is contained. CreateReportDataMV   which contains expression: ExtractReportData Construct Message (Transform) Create<Message> (for Construct)   <SourceSchema>2<DestSchema> (for transform) If a Construct shape contains a message transform, it should be prefixed with “Create” followed by an abbreviated name of the message being assigned.   The actual message transform shape contained should generally be named the same as the called map.  CreateReportDataMV   which contains transform: ReportDataMV2ReportDataMV                 Construct Message (containing multiple shapes)   If a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix.     Call/Start Orchestration Call<OrchestrationName>   Start<OrchestrationName>     Throw Throw<ExceptionType> The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased. ThrowRuleException, which references the “ruleException” variable.     Parallel <DescriptionOfParallelWork> Parallel shapes should be named by a description of what work will be done in parallel   Delay <DescriptionOfWhatWaitingFor> Delay shapes should be named by a description of what is being waited for.  POAcknowledgeTimeout Listen <DescriptionOfOutcomes> Listen shapes should be named by a description that captures (to the degree possible) all the branches of the Listen shape POAckOrTimeout FirstShippingBid Loop <DescriptionOfLoop> A (potentially abbreviated) description of what the loop is. ForEachValuationReport WhileErrorFlagTrue Role Link   See “Roles” in messaging naming conventions above.   Suspend <ReasonDescription> Describe what action an administrator must take to resume the orchestration.  More detail can be passed to error property – and should include what should be done by the administrator before resuming the orchestration. ReEstablishCreditLink Terminate <ReasonDescription> Describe why the orchestration terminated.  More detail can be passed to error property. TimeoutsExpired Call Rules Call<PolicyName> The policy name may need to be abbreviated. 
CallLendingPolicy Compensate Compensate or Compensate<TxName> If the shape compensates nested transactions, names should be suffixed with the name of the nested transaction – otherwise it should simple be Compensate. CompensateTransferFunds Orchestration Types Type Standard Notes Example Multi-Part Message Types <LogicalDocumentType>   Multi-part types encapsulate multiple parts.  The WSDL spec indicates “parts are a flexible mechanism for describing the logical abstract content of a message.”  The name of the multi-part type should correspond to the “logical” document type, i.e. what the sum of the parts describes. InvoiceReceipt   (which might encapsulate an invoice acknowledgement and a payment voucher.) Multi-Part Messsage Part <SchemaNameOfPart> Should be named (most often) simply for the schema (or simple type) associated with the part. InvoiceHeader Messages <SchemaName> or <MuliPartMessageTypeName> Should be named based on the corresponding schema type or multi-part message type.  If there is more than one variable of a type, name for its use within the orchestration. ReportDataMV UpdatedReportDataMV Variables <DescriptiveName>   TargetFilePath StringProcessor Port Types <FunctionDescription>PortType Should be named to suggest the nature of an endpoint, with pascal casing and suffixed with “PortType”.   If there will be more than one Port for a Port Type, the Port Type should be named according to the abstract service supplied.   The WSDL spec indicates port types are “a named set of abstract operations and the abstract messages involved” that also encapsulates the message pattern (i.e. one-way, request-response, solicit-response) that all operations on the port type adhere to. ReceiveReportResponsePortType  or CallEAEPortType (This is a two way port, so Receove or Send alone would not be appropriate.  Could have been ProcessEAERequestPortType etc....) Ports <FunctionDescription>Port Should be named to suggest a grouping of functionality, with pascal casing and suffixed with “Port.”  ReceiveReportResponsePort CallEAEPort Correlation types <DescriptiveName> Should be named based on the logical name of what is being used to correlate.  PurchaseOrderNumber Correlation sets <DescriptiveName> Should be named based on the corresponding correlation type.  If there is more than one, it should be named to reflect its specific purpose within the orchestration.   PurchaseOrderNumber Orchestration parameters <DescriptiveName> Should be named to match the caller’s names for the corresponding variables where appropriate.
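    If you want to enforce conventions like these automatically (for example in a build verification step), a check can be as simple as a regular expression over the assembly names. The sketch below is not part of the guidelines: the NamingCheck class and the list of assembly types in the pattern are assumptions made purely for illustration, using the example names given above.

    using System;
    using System.Text.RegularExpressions;

    // Hypothetical build-time check: verifies that a BizTalk assembly name follows
    // the [CompanyName].BizTalk.[AssemblyType].[FunctionalName] pattern with Pascal casing.
    public static class NamingCheck
    {
        // Assembly types listed here are an assumption; extend as needed.
        private static readonly Regex BizTalkAssemblyPattern = new Regex(
            @"^[A-Z][A-Za-z0-9]*\.BizTalk\.(Schemas|Maps|Mappings|Orchestrations|Pipelines)(\.[A-Z][A-Za-z0-9]*)+$");

        public static bool IsValidBizTalkAssemblyName(string assemblyName)
        {
            return BizTalkAssemblyPattern.IsMatch(assemblyName);
        }

        public static void Main()
        {
            // Examples taken from the guidelines above.
            Console.WriteLine(IsValidBizTalkAssemblyName("ABC.BizTalk.Mappings.Underwriting"));       // True
            Console.WriteLine(IsValidBizTalkAssemblyName("ABC.BizTalk.Orchestrations.Underwriting")); // True
            Console.WriteLine(IsValidBizTalkAssemblyName("abc.bizTalk.mappings.underwriting"));       // False (not Pascal-cased)
        }
    }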

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July in Microsoft’s offices in London Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course along side Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber and they have worked together very effectively in brining this course to fruition. This course is the brain child of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attending the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken if you will Find a course and register Download this syllabus Download the Scrum Guide What is the Professional Scrum Developer course all about? Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010. Who should attend this course? This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team. What should you know by the end of the course? Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas: Form effective teams Explore and understand legacy “Brownfield” architecture Define quality attributes, acceptance criteria, and “done” Create automated builds How to handle software hotfixes Verify that bugs are identified and eliminated Plan releases and sprints Estimate product backlog items Create and manage a sprint backlog Hold an effective sprint review Improve your process by using retrospectives Use emergent architecture to avoid technical debt Use Test Driven Development as a design tool Setup and leverage continuous integration Use Test Impact Analysis to decrease testing times Manage SQL Server development in an Agile way Use .NET and T-SQL refactoring effectively Build, deploy, and test SQL Server databases Create and manage test plans and cases Create, run, record, and play back manual tests Setup a branching strategy and branch code Write more maintainable code Identify and eliminate people and process dysfunctions Inspect and improve your team’s software development process What does the week look like? 
This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it. The Sprints Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule: Component Description Minutes Instruction Presentation and demonstration of new and relevant tools & practices 60 Sprint planning meeting Product owner presents backlog; each team commits to delivering functionality 10 Sprint planning meeting Each team determines how to build the functionality 10 The Sprint The team self-organizes and self-manages to complete their tasks 120 Sprint Review meeting Each team will present their increment of functionality to the other teams = 30 Sprint Retrospective A group retrospective meeting will be held to inspect and adapt 10 Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support. Module 1: INTRODUCTION This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Trainer and student introductions Professional Scrum Developer program Agenda Logistics Team formation Retrospective Module 2: SCRUMDAMENTALS This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Scrum overview Scrum roles Scrum timeboxes (ceremonies) Scrum artifacts Simulation Retrospective It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course. MODULE 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010 This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Mapping Scrum to Visual Studio 2010 User Story work items Task work items Bug work items Demonstration Simulation Retrospective Module 4: THE CASE STUDY In this module the team is introduced to their problem domain for the week. 
A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what that will take during the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Introduction to the case study Download the source code, build, and explore the application Define the quality attributes for the project Define “done” How to file effective bugs in Visual Studio 2010 Retrospective Module 5: HOTFIX This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. How to use Architecture Explorer to visualize and explore Create a unit test to validate the existence of a bug Find and fix the bug Validate and close the bug Retrospective Module 6: PLANNING This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Release vs. Sprint planning Release planning and the Product Backlog Product Backlog prioritization Acceptance criteria and tests Sprint planning and the Sprint Backlog Creating and linking Sprint tasks Retrospective At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done. Module 7: EMERGENT ARCHITECTURE This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Architecture and Scrum Emergent architecture Principles, patterns, and practices Visual Studio 2010 modeling tools UML and layer diagrams SPRINT 1 Retrospective Module 8: TEST DRIVEN DEVELOPMENT This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should setup Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Continuous integration Team Foundation Build Test Driven Development (TDD) Refactoring Test Impact Analysis SPRINT 2 Retrospective Module 9: AGILE DATABASE DEVELOPMENT This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. 
Agile database development Visual Studio database projects Importing schema and scripts Building and deploying Generating data Unit testing SPRINT 3 Retrospective Module 10: SHIP IT Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Acceptance criteria Testing in Visual Studio 2010 Microsoft Test Manager Writing and running manual tests Branching SPRINT 4 Retrospective Module 11: OVERCOMING DYSFUNCTION This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Scrum-butts and flaccid Scrum Best practices working as a team Team challenges ScrumMaster challenges Product Owner challenges Stakeholder challenges Course Retrospective What will be expected of you and you team? This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to: Pay attention to all lectures and demonstrations Participate in team and group discussions Work collaboratively with other team members Obey the timebox for each activity Commit to work and do your best to deliver All teams should have these skills: Understanding of Scrum Familiarity with Visual Studio 201 C#, .NET 4.0 & ASP.NET 4.0 experience*  SQL Server 2008 development experience Software testing experience * Check with the instructor ahead of time for the exact technologies Self-organising teams Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum! Who should NOT take this course? 
Because of the nature of this course, as explained above, certain types of people should probably not attend this course: Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course Students who are unwilling to work within a timebox Students who are unwilling to work collaboratively on a team Students who don’t have any skill in any of the software development disciplines Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience Find a course and register Download this syllabus Download the Scrum Guide Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • Software Engineering Practices &ndash; Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics.

    The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the three criteria I listed above for determining its place within the SEMS we can see why:

    Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).
    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.
    Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).
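    For what it’s worth, here is a minimal sketch of what that kind of left-end, Autonomous View style code might look like. The real thing would likely bind to designer-generated typed datasets; to keep the sketch short it uses an untyped DataTable, and the connection string, table, and column names are all made up:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Windows.Forms;

    // Autonomous View style: the form talks straight to the database and binds
    // the results to a grid. No layers, no abstractions, no tests.
    // Connection string, table, and columns are hypothetical.
    public class AttendeesForm : Form
    {
        private readonly DataGridView grid = new DataGridView { Dock = DockStyle.Fill };
        private readonly DataTable attendees = new DataTable();

        public AttendeesForm()
        {
            Controls.Add(grid);
            Load += (s, e) => LoadAttendees();
        }

        private void LoadAttendees()
        {
            using (var connection = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=EventPlanning;Integrated Security=True"))
            using (var adapter = new SqlDataAdapter("SELECT Id, Name, Flight, Hotel FROM Attendees", connection))
            {
                adapter.Fill(attendees);   // pull everything straight into the DataTable
            }
            grid.DataSource = attendees;   // bind the view directly to the data
        }

        [STAThread]
        public static void Main()
        {
            Application.Run(new AttendeesForm());
        }
    }

    For a low-importance, low-complexity, short-lived tool, something like this can be exactly the right amount of engineering.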
    At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow, phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior->intermediate->senior->…, they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
    We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (exactly which practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but, more importantly, being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill is being able to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe each project *should* be on the SEMS based on the business needs, and, for those that don’t match up (i.e. most of them), come up with a plan to improve the situation.
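    As a rough illustration of what one small, incremental step to the right might look like (this is my own sketch under the same hypothetical schema as above, not a prescription): the direct data access from the earlier form could be pulled out behind a simple interface, which makes it possible to add automated tests now and swap in an ORM or wire things up through an IoC container later, without rewriting the whole tool.

    using System.Collections.Generic;
    using System.Data.SqlClient;

    // Hypothetical first step to the right on the SEMS: hide the data access
    // behind an interface so the form no longer talks to the database directly.
    public class Attendee
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Flight { get; set; }
        public string Hotel { get; set; }
    }

    public interface IAttendeeRepository
    {
        IList<Attendee> GetAll();
        void Cancel(int attendeeId);   // one place to remove a person from every list
    }

    // The existing ADO.NET code moves behind the interface largely unchanged.
    public class SqlAttendeeRepository : IAttendeeRepository
    {
        private readonly string connectionString;

        public SqlAttendeeRepository(string connectionString)
        {
            this.connectionString = connectionString;
        }

        public IList<Attendee> GetAll()
        {
            var result = new List<Attendee>();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT Id, Name, Flight, Hotel FROM Attendees", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        result.Add(new Attendee
                        {
                            Id = reader.GetInt32(0),
                            Name = reader.GetString(1),
                            Flight = reader.GetString(2),
                            Hotel = reader.GetString(3)
                        });
                    }
                }
            }
            return result;
        }

        public void Cancel(int attendeeId)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("DELETE FROM Attendees WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", attendeeId);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }

    // The form (or a test) now depends only on IAttendeeRepository, so a fake
    // implementation can stand in for automated tests, and an ORM-backed one
    // can replace this class later without touching the callers.

    Whether that step is worth taking for a given tool is exactly the kind of SEMS judgment call discussed above.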

    Read the article
