Search Results

Search found 4050 results on 162 pages for 'requirements'.


  • Web App vs Portal Platform - convincing the customer

    - by shinynewbike
    We're evaluating a set of requirements for a customer who wants Liferay. The requirements are mainly AAA and Web CMS, plus allowing users to upload their own content; all integration is via web services. However, there is no need for other features such as actual "portlets", i18n, mashups, skins, themes, tagging, social presence, or collaboration. So we feel we can build this as a standard JEE web app and not use Liferay (or any other portal product), since those features are overhead we don't need. The customer feels the Web CMS requirements plus user upload justify the "portal" product. Can anyone help me with some points to convince the customer? Assuming our point of view is right.


  • Very original V&V explanation (Bohm) - I cannot understand its point

    - by user970696
    Hopefully my last thread about V&V, as I found the B. Boehm text below, which I just do not understand well (likely my technical English is not that good): http://csse.usc.edu/csse/TECHRPTS/1979/usccse79-501/usccse79-501.pdf Basically he says that verification is about checking that products derived from the requirements baseline correspond to that baseline, and that deviations lead only to changes in these derived products (design, code). But he also says it begins with design and ends with acceptance tests (you can check the V-model inside). The thing is, I have accepted the ISO 12207 view that all testing is validation, yet it does not make any sense here: in order to be sure the product complies with requirements (acceptance test) I need to test it. The text also says that validation problems mean the requirements are bad and need to be changed - which does not happen with the testing that testers do, who just check correspondence with requirements.


  • What are the requirements to consider someone as a Software Engineer? Is it just a job title? [closed]

    - by mike
    I am employed as a Software Engineer with a background in C# and .NET. I am trying to interface with a web API. I asked for help because I didn't know how to connect to it or handle the results. Someone told me that I shouldn't consider myself a Software Engineer because I didn't know how to do it. They said the API was well documented, required no authentication, and returned XML that I could easily parse, and that the documentation should be enough to figure out how to use the API. Isn't having the job title of Software Engineer the only consideration for being a Software Engineer?
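
    For what it's worth, fetching and parsing an unauthenticated XML API from C#/.NET usually takes only a few lines. Below is a minimal sketch; the URL and the "item" element name are placeholders made up for illustration, not details from the actual API:

        using System;
        using System.Net;
        using System.Xml.Linq;

        class ApiClientSketch
        {
            static void Main()
            {
                // Hypothetical endpoint -- substitute the documented URL.
                const string url = "http://api.example.com/data?format=xml";

                using (var client = new WebClient())
                {
                    string xml = client.DownloadString(url);   // plain GET, no authentication
                    XDocument doc = XDocument.Parse(xml);

                    // "item" is an assumed element name; take the real one from the API docs.
                    foreach (XElement item in doc.Descendants("item"))
                    {
                        Console.WriteLine(item.Value);
                    }
                }
            }
        }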


  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having the option to use a distributed environment can make both the deployment and even the development process much faster.

    Implementation: When an organization designs code, it essentially becomes a Software-as-a-Service (SaaS) provider to its own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) provider to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:

      - Requirements
      - Analysis
      - Design and Development
      - Implementation
      - Testing
      - Deployment to Production
      - Maintenance

    In an on-premise environment, this often equates to the following process map:

      - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
      - Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
      - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
      - Implementation: Code checked into the main branch. Code forked as needed.
      - Testing: Code deployed to on-premise testing servers. If no server capacity is available, more resources are procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. testing performed.
      - Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity is available, the standard budgeting and procurement process is followed, and systems are built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
      - Maintenance: Code changes evaluated and altered according to need.

    In a distributed computing environment like Windows Azure, the process maps a bit differently:

      - Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
      - Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
      - Design and Development: Code written according to the organization's chosen methodology, either on-premise or by multiple development teams on and off premise.
      - Implementation: Code checked into the main branch. Code forked as needed.
      - Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. testing performed.
      - Deployment to Production: Code deployed to Azure. Point-in-time backup and recovery plans configured and put into place. (HA/DR and automated backups are already present in the Azure fabric.)
      - Maintenance: Code changes evaluated and altered according to need.

    This means that several steps can be removed or expedited. It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further, since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the "Azure Marketplace". In effect this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt into their current code, possibly saving development time.

    Resources:
      - Whitepaper download - What is ALM? http://go.microsoft.com/?linkid=9743693
      - Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
      - LiveMeeting recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011


  • Software Architecture versus Software Design

    Recently, I was asked what the differences between software architecture and software design are. At a very superficial level both architecture and design seem to mean relatively the same thing. However, if we examine both of these terms further we will find that they are in fact very different due to the level of detail they encompass.

    Software architecture can be defined as the essence of an application because it deals with high-level concepts that do not include any details as to how they will be implemented. To me this gives stakeholders a view of a system or application as if someone were viewing the earth from outer space. At this distance only very basic elements of the earth can be detected, like land, weather and water. As the viewer comes closer to earth, the details in this view start to become more defined. Details about the earth's surface will start to take form, and man-made structures will be detected. The process of transitioning a view from outer space to inside our earth's atmosphere is similar to how an architectural concept is transformed into an architectural design. From this vantage point stakeholders can start to see buildings and other structures as if they were looking out of a small plane window. This distance is still high enough to see a large area of the earth's surface while still being able to see some details about the surface. This viewing point is very similar to the actual design process of an application in that it takes one or more very high-level architectural concepts and applies concrete design details to form a software design that encompasses the actual implementation details in the form of responsibilities and functions. Examples of these details include interfaces, components, data, and connections.

    In review, software architecture deals with high-level concepts without regard to any implementation details. Software design, on the other hand, takes high-level concepts and applies concrete details so that software can be implemented. As part of the transition from software architecture to software design, an evaluation of the architecture is recommended. There are several benefits to including this step as part of the transition process: it allows projects to ensure that they are on the correct path to meeting the stakeholders' requirement goals, it identifies possible cost savings, and it can be used to find missing or nonspecific requirements that cause ambiguity in a design. In the book "Evaluating Software Architectures: Methods and Case Studies", the authors define key benefits of adding an architectural review process to ensure that an architecture is ready to move on to the design phase.

    Benefits of evaluating software architecture:
      - Gathers all stakeholders to communicate about the project
      - Goals are clearly defined in regard to the creation or validation of specific requirements
      - Goals are prioritized so that when conflicts occur, decisions are made based on goal priority
      - Defines a clear expectation of the architecture so that all stakeholders have a keen understanding of the project
      - Ensures high-quality documentation of the architecture
      - Enables discovery of architectural reuse
      - Increases the quality of architecture practices

    I can remember a few projects that I worked on that could have really used an architectural review prior to being passed on to developers. One such project was to create some new advertising space on the company's website in order to sell space based on location and some other criteria. I was one of the developers selected to lead this project, and I was given a high-level design concept and a long list of ever-changing requirements, because the sales department had no clear direction as to what exactly the project was going to do or how they were going to bill the clients once they actually agreed to purchase the ad space. In my personal opinion, IT should have pushed back to have the requirements further articulated instead of forcing programmers to code blindly, attempting to build such an ambiguous project. Unfortunately, we had to suffer with this project for about 4 months when it should have taken only 1.5, due to the constantly changing and unclear requirements.

    References: Clements, P., Kazman, R., & Klein, M. (2002). Evaluating Software Architectures. Westford, Massachusetts: Courier Westford.


  • Organizational characteristics that impact the selection of Development Methodology concepts applied to a project

    Based on my experience, no one really follows a specific methodology exactly as it is formally defined. In fact, the key concepts of a few methodologies are usually combined to form a hybrid methodology for each project, based on the current organizational makeup and the project needs/requirements to be accomplished.

    Organizational characteristics that impact the selection of methodology concepts applied to a project. Prior subject knowledge pertaining to a project can be critical when deciding on what methodology or combination of methodologies to apply. For example, if a project is very straightforward and the development staff has experience in developing applications that are similar, then the waterfall method could possibly be the best choice, because little to no research is needed in order to complete the project tasks and there is very little need for changes to occur. On the other hand, if the development staff has limited subject knowledge, or the requirements/specification of the project could change as the project progresses, then the use of spiral, iterative, incremental, agile, or any combination would be preferred. The methodologies previously used by an organization typically do not change much from project to project unless the needs of a project dictate differently. For example, if the waterfall method is the preferred development methodology, then most projects will be developed with the waterfall method. The time that can be allotted to a project each day can also impact the selection of a development methodology. For example, if the staff can only devote a few hours a day to a project, then the incremental methodology might be ideal, because modules can be added to the final project as they are developed. On the other hand, if daily time allocation is not an issue, then a multitude of methodologies could work well for the project.

    Project characteristics that impact the selection of methodology concepts applied to a project. The type of project being developed can often dictate the type of methodology used for the project. Based on my experience, projects that tend to have a lot of user interaction follow a more iterative, incremental, or agile approach, typically using a prototype that develops into a final project. These methodologies favor back-and-forth communication between users, clients, and developers to allow requirements to change and functionality to be enhanced. Conversely, limited-interaction applications or automated services can still sometimes get away with using the waterfall or transactional approach. The timeline of a project can also force an organization to prefer a particular methodology over the rest. For instance, if the project must be completed within 24 hours, then there is very little time for discussions back and forth between clients, users and the development team. In this scenario, the waterfall method would be perfect, because the only interaction with the client occurs prior to the development project to outline the system requirements, and the development team can quickly move through the software development stages in order to complete the project within the deadline. If the team had more time, then the other methodologies could also be considered, because there would be more time for clients and users to review the project and make changes as they see fit, and/or more time to review the project in order to enhance the business performance and functionality.

    Sometimes client and/or user involvement can dictate the selection of methodologies applied to a project. One example of this is a client who is highly motivated to get a project completed and desires to play an active part in the development process; the agile development approach would work perfectly with this client, because it allows for frequent interaction between clients, users and the development team. The inverse of this situation is a client that just wants to provide the project requirements and only wants to get involved when the project is delivered. In this case the waterfall method would work well, because there is no room for changes and no back and forth between the users, clients or the development team.


  • InfoPath 2010 Form Design and Web Part Deployment

    - by JKenderdine
    In January I had the pleasure of speaking at SharePoint Saturday Virginia Beach. I presented a session on InfoPath 2010 form design which included some of the basics of form design, a description of some of the new options in InfoPath 2010 and SharePoint 2010, and other integration possibilities. Included below is the information presented, as well as the solution used to create the demo.

    The first thing you need to understand is the difference between an InfoPath list form and a form library form.

    SharePoint list forms: Store data directly in a SharePoint list. Each control (e.g. text box) in the form is bound to a column in the list. SharePoint list forms are directly connected to the list, which means that you don't have to worry about setting up the publish and submit locations. You also do not have the option of back-end code.

    Form library forms: Store data in XML files in a SharePoint form library. This means they are more flexible and you can do more with them. For example, they can be configured to save drafts and submit to different locations. However, they are more complex to work with and require more decisions to be made during configuration. You do have the option of back-end code with this type of form.

    Next steps: You need to create your file architecture plan. Plan the location for the saved template - both test and production. (This is pretty much a given, but just in case: always make sure to have a test environment.) Then plan for the location of the published template.

    Then you need to document your form template design plan. Some questions to ask to gather your requirements:
      - What will the form be designed to do?
      - Will it gather user information?
      - Will it display data from a data source?
      - Do we need to show different views to different users? What do we base this on?
      - How will it be implemented for the users? Browser or client based form?
      - Site collection content type - published through Central Admin?
      - Form library - published directly to the form library?

    So what are the requirements for this template? Business Card Request Form Template Design Plan:
      - Gather user information and requirements for the card
      - Pull in as much user information as possible; use data from the user profile web services as a data source
      - Show and hide fields as necessary for requirements
      - Create multiple views - one for those submitting the form and another view for the executive assistants placing the orders
      - Browser-based form integrated into a SharePoint team site
      - Published directly to the form library

    The form was published through Central Administration and incorporated into the site as a content type. Utilizing the new InfoPath web part, the form is integrated into the page, and users can complete the form directly from within that page. For now, if you are interested in the final form XSN, contact me using the Contact link above. I will post soon with the details on how the form was created and how it integrated the requirements detailed above.


  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation - John Courtright, Structural Integrity Engineering

    - There is "heightened" FAA attention to technical issues related to IFE and Wi-Fi systems installations
    - The Aging Aircraft Safety Rule - EWIS & Damage Tolerance Analysis
    - The challenge: maximize flight safety while minimizing costs
    - Issue papers & testing, testing, testing
    - The role of Airworthiness Directives (ADs) in the design of many IFE systems and all antenna systems; the goal is safety AND cost-effective maintenance intervals and inspection techniques

    The STC Process Briefly Stated:
    - Type Certifications (TC)
    - Supplemental Type Certifications (STC)

    The STC Process:
    - Project Specific Certification Plan (PSCP), managed by the FAA Aircraft Certification Office (ACO)
    - Type of project (electrical/mechanical systems or structural)
    - Specific type of aircraft being modified
    - Schedule
    - Design & installation location

    What does the STC Plan (PSCP) cover?
    - System description - what does the system do?
    - System qualification - are the components qualified?
    - Certification requirements - what FARs are applicable?
    - Installation detail - what is being modified?
    - Prototype installation - what is new?
    - Functional Hazard Assessment (FHA) - is it safe?
    - EZAP-EWIS requirements - any aging aircraft issues?
    - Certification data - how is compliance achieved?
    - Delegation and FAA involvement - who is doing the work?
    - Proposed certification schedule - when is the installation?
    - Certification documentation - what the FAA expects to see

    Cabin Systems Certification Concerns. In addition to meeting the requirements of DO-160, cabin system certification needs to address issues related to:
    - Power management: generally, IFE and Wi-Fi systems are classified as "non-essential equipment" from a certification viewpoint. They are connected to "non-essential" power buses, and it must be possible to shed IFE & Wi-Fi systems in a smoke/fire event or other electrical emergency (FAA Policy 00-111-160).
    - The FAA is more relaxed with testing Wi-Fi. It used to be that you had to have 150 seats with laptops running Wi-Fi, but now it is down to around 50.
    - Aging aircraft concerns - electrical and structural
    - Issue papers addressing technical concerns involving "Structural Certification Criteria for Large Antenna Installations" and antenna "Vibration/Buffeting Compliance Criteria"

    DO-160: Environmental Test Procedures
    - DO-160, "Environmental Conditions and Test Procedures for Airborne Equipment", issued by RTCA
    - Provides guidance to equipment manufacturers as to testing requirements
    - Temperature: -40C to +55C
    - Vibration and shock
    - Contaminant susceptibility - fluids and dust
    - Electro-magnetic interference
    - Cabin systems are generally classified as "non-essential"
    - Swissair 111 crashed (in part) due to non-standard wiring practices

    EWIS Design Implications. Installation design must take EWIS requirements into account. This generally means:
    - Aircraft surveys are needed to identify proper wire routing
    - Ensure existing wiring diagrams are correct
    - Identify primary/secondary/tertiary bus locations
    - Verify proper separation of wire bundles exists
    - Required separation from the fuel quantity indicator system (FQIS) to prevent fuel tank ignition
    - Enhanced Zonal Analysis Procedure (EZAP) performed. EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC) and is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.

    Certification Considerations for Wi-Fi Systems:
    - Electrical - all existing DO-160 testing required; issue papers required
    - Onboard EMI testing - any interference with aircraft systems when multiple Wi-Fi users are logged on?
    - Vibration/buffeting compliance criteria - what is the effect of the antenna on aircraft flight characteristics?
    - Structural certification criteria - what are the stress loads on the aircraft at the antenna location, and what is the impact on maintenance inspection criteria for the airline? Damage tolerance analysis required.
    - Goal - minimize maintenance inspection intervals


  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This Blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked, i.e. "when I am gathering requirements on a Project, should I use a Process Modeling approach, or should I use a Use Case approach?" Not surprisingly, the short answer is "it depends"!

    Let's take a scenario where you are working on a Sales Force Automation project. We'll call the process that is being implemented "Lead-to-Order". I would typically think of this type of project as being "Process Centric". In other words, the focus will be on orchestrating a series of human and system related Tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined "best-practice" business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business.

    Now let's take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications, etc. Would you call this type of project "Process Centric"? Well, you could, but it might also fall into the category of complex configuration plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions.

    At this point you might be asking "why couldn't I use a Process Modeling approach for my Content Management project?" Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, for example:

      Task Name               | Best suited to | Notes
      Manage outbound calls   | Process Model  | A series of linked human and system tasks for calling and following up with prospects
      Manage content revision | Use Case       | Updating the content on a website
      Update User Preferences | Use Case       | Updating a user's display preferences
      Assign Lead             | Process Model  | Reviewing a lead, then assigning it to a sales person
      Convert Lead to Quote   | Process Model  | Updating the status of a lead, and then converting it to a sales order

    As you can see, it's not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering.

    Now let's take one final scenario. As you captured the To-Be process flows for the Sales Force Automation project, you discovered a "Gap" in terms of what the client requires and what the standard COTS application can provide.
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.


  • Pros/Cons of document based database vs relational database

    - by damian
    I've been trying to see if I can satisfy some requirements with a document-based database, in this case CouchDB. Two generic requirements:
      - CRUD of entities with some fields which have a unique index on them
      - an e-commerce web app like eBay (better description here)
    And I'm beginning to think that a document-based database isn't the best choice to address these requirements. Furthermore, I can't imagine a use for a document-based database (maybe my imagination is too limited). Can you tell me whether I'm asking an elm tree for pears when I try to use a document-based database for these requirements?


  • Shell script to emulate warnings-as-errors?

    - by talkaboutquality
    Some compilers let you treat warnings as errors, so that you'll never leave any compiler warnings behind - if you do, the code won't build. This is a Good Thing. Unfortunately, some compilers don't have a flag for warnings-as-errors. I need to write a shell script or wrapper that provides the feature. Presumably it parses the compilation console output and returns failure if there were any compiler warnings (or errors), and success otherwise. "Failure" also means (I think) that object code should not be produced.

    What's the shortest, simplest UNIX/Linux shell script you can write that meets the explicit requirements above, as well as the following implicit requirements of otherwise behaving just like the compiler:
      - accepts all flags, options, arguments
      - supports redirection of stdout and stderr
      - produces object code and links as directed
    Key words: elegant, meets all requirements. Extra credit: easy to incorporate into a GNU make file. Thanks for your help.

    === Clues ===
    This solution to a different problem ("Append text to stderr redirects in bash"), which uses shell functions, might figure in. Wonder how to invite litb's friend "who knows bash quite well" to address my question?

    === Answer status ===
    Thanks to Charlie Martin for the short answer, but that, unfortunately, is what I started out with. A while back I used it, released it for office use, and within a few hours had its most severe drawback pointed out to me: it will PASS a compilation that produces errors but no warnings. That's really bad, because then we're delivering object code that the compiler is sure won't work. The simple solution also doesn't meet the other requirements listed. Thanks to Adam Rosenfield for the shorthand, and to Chris Dodd for introducing pipefail into the solution. Chris's answer looks closest, because I think pipefail should ensure that if compilation actually fails on an error, we'll get failure as we should. Chris, does pipefail work in all shells? And do you have any ideas on the rest of the implicit requirements listed above?
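
    Not the requested shell script, but to make the wrapper logic concrete, here is the same idea sketched in C# (my own illustration: the compiler name "gcc" and the "warning:" diagnostic prefix are assumptions, and it does not yet delete object files produced before a warning appeared):

        using System;
        using System.Diagnostics;

        class WarningsAsErrors
        {
            static int Main(string[] args)
            {
                var startInfo = new ProcessStartInfo
                {
                    FileName = "gcc",                    // assumed compiler
                    Arguments = string.Join(" ", args),  // naive pass-through; quoting not handled
                    UseShellExecute = false,
                    RedirectStandardError = true
                };

                int warningCount = 0;
                using (Process compiler = Process.Start(startInfo))
                {
                    string line;
                    while ((line = compiler.StandardError.ReadLine()) != null)
                    {
                        Console.Error.WriteLine(line);   // pass diagnostics through unchanged
                        if (line.Contains("warning:"))
                            warningCount++;
                    }
                    compiler.WaitForExit();

                    // Real compiler errors win; otherwise any warning fails the build.
                    if (compiler.ExitCode != 0)
                        return compiler.ExitCode;
                    return warningCount > 0 ? 1 : 0;
                }
            }
        }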


  • installing ruby 1.9.3 on osx mavericks

    - by user1648962
    I am trying to install Ruby 1.9.3 on my OS X 10.9 system and I keep getting the following error:

    Error running 'requirements_osx_port_update_system ruby-1.9.3-p448',
    please read /Users/ramesh/.rvm/log/1383430694_ruby-1.9.3-p448/update_system.log
    Requirements installation failed with status: 1.

    I am using the following command to do the installation: rvm install 1.9.3. The complete log is as given below:

    Ramesh:Downloads ramesh$ rvm install 1.9.3
    Searching for binary rubies, this might take some time.
    No binary rubies available for: osx/10.9/x86_64/ruby-1.9.3-p448.
    Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
    Checking requirements for osx.
    Installing requirements for osx.
    Updating system.....................................................................
    Error running 'requirements_osx_port_update_system ruby-1.9.3-p448',
    please read /Users/ramesh/.rvm/log/1383430694_ruby-1.9.3-p448/update_system.log
    Requirements installation failed with status: 1.


  • TFS Integration with Rational ClearQuest and Requirement Manager

    - by Kangkan
    I am working on an integration approach for integrating Rational (IBM Jazz) Requirements Manager (RM) and ClearQuest (CQ) with TFS. As the teams are moving from ClearCase to TFS, what we are looking at is still being able to manage the requirements in RM and manage testing using CQ. The flow will be something like:
      - Requirements are planned and detailed in RM
      - Create work items in TFS connected to the requirements in RM
      - Create design and code using VS2010, managing version control in TFS
      - Create test plans and test cases in CQ (connected to requirements in RM)
      - Run tests against builds in TFS
      - Publish test results in CQ against builds in TFS
      - Run reports in RM, CQ and TFS that link up the items across the platforms
    I have started looking at the TFS Integration Platform, but I would appreciate your guidance toward an early resolution and a better solution approach.


  • How do I imply code contracts of chained methods to avoid superfluous checks while chaining?

    - by Sandor Drieënhuizen
    I'm using Code Contracts in C# 4.0. I'm applying the usual static method chaining to simulate optional parameters (I know C# 4.0 supports optional parameters but I really don't want to use them). The thing is that my contract requirements are executed twice (or as many times as there are chained overloads) if I call the Init(string, string[]) method -- an obvious effect of the sample source code below. This can be expensive, especially with relatively expensive requirements like the File.Exists I use.

        public static void Init(string configurationPath, string[] mappingAssemblies)
        {
            // The static contract checker 'makes' me put these here as well as
            // in the overload below.
            Contract.Requires<ArgumentNullException>(configurationPath != null, "configurationPath");
            Contract.Requires<ArgumentException>(configurationPath.Length > 0, "configurationPath is an empty string.");
            Contract.Requires<FileNotFoundException>(File.Exists(configurationPath), configurationPath);
            Contract.Requires<ArgumentNullException>(mappingAssemblies != null, "mappingAssemblies");
            Contract.Requires(Contract.ForAll(mappingAssemblies, n => File.Exists(n)));

            Init(configurationPath, mappingAssemblies, null);
        }

        public static void Init(string configurationPath, string[] mappingAssemblies, string optionalArgument)
        {
            // This is the main implementation of Init and all calls to chained
            // overloads end up here.
            Contract.Requires<ArgumentNullException>(configurationPath != null, "configurationPath");
            Contract.Requires<ArgumentException>(configurationPath.Length > 0, "configurationPath is an empty string.");
            Contract.Requires<FileNotFoundException>(File.Exists(configurationPath), configurationPath);
            Contract.Requires<ArgumentNullException>(mappingAssemblies != null, "mappingAssemblies");
            Contract.Requires(Contract.ForAll(mappingAssemblies, n => File.Exists(n)));

            //...
        }

    If, however, I remove the requirements from the first method, the static checker complains that the requirements of the Init(string, string[], string) overload are not met. I reckon the static checker doesn't understand that the requirements of the Init(string, string[], string) overload implicitly apply to the Init(string, string[]) method as well; something that would be perfectly deducible from the code, IMO. This is the situation I would like to achieve:

        public static void Init(string configurationPath, string[] mappingAssemblies)
        {
            // I don't want to repeat the requirements here because they will always
            // be checked in the overload called here.
            Init(configurationPath, mappingAssemblies, null);
        }

        public static void Init(string configurationPath, string[] mappingAssemblies, string optionalArgument)
        {
            // This is the main implementation of Init, with the same contract
            // requirements as above.
            //...
        }

    So, my question is this: is there a way to have the requirements of Init(string, string[], string) implicitly apply to Init(string, string[]) automatically?
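
    For completeness, one pattern that addresses this (my own sketch, not from the original question): factor the shared preconditions into a contract abbreviator method and route both public overloads through a contract-free private core, so each public call validates exactly once. ContractAbbreviatorAttribute ships in System.Diagnostics.Contracts from .NET 4.5 onward; on .NET 4.0 you can declare an attribute with that exact name in your own assembly and the contract tools will recognize it. The class name and the core method below are hypothetical.

        using System;
        using System.Diagnostics.Contracts;
        using System.IO;

        public static class Initializer
        {
            // Shared preconditions, written once and expanded into each caller
            // by the contract rewriter.
            [ContractAbbreviator]
            private static void RequiresValidInitArgs(string configurationPath, string[] mappingAssemblies)
            {
                Contract.Requires<ArgumentNullException>(configurationPath != null, "configurationPath");
                Contract.Requires<ArgumentException>(configurationPath.Length > 0, "configurationPath is an empty string.");
                Contract.Requires<FileNotFoundException>(File.Exists(configurationPath), configurationPath);
                Contract.Requires<ArgumentNullException>(mappingAssemblies != null, "mappingAssemblies");
                Contract.Requires(Contract.ForAll(mappingAssemblies, n => File.Exists(n)));
            }

            public static void Init(string configurationPath, string[] mappingAssemblies)
            {
                RequiresValidInitArgs(configurationPath, mappingAssemblies);
                InitCore(configurationPath, mappingAssemblies, null);
            }

            public static void Init(string configurationPath, string[] mappingAssemblies, string optionalArgument)
            {
                RequiresValidInitArgs(configurationPath, mappingAssemblies);
                InitCore(configurationPath, mappingAssemblies, optionalArgument);
            }

            // No contracts here: both public overloads have already validated,
            // so a chained call is checked exactly once.
            private static void InitCore(string configurationPath, string[] mappingAssemblies, string optionalArgument)
            {
                // ... actual initialization ...
            }
        }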


  • Circular Shifts on Strings in Bash

    - by Kyle Van Koevering
    I have a homework assignment where I need to take input from a file and continuously remove the first word in a line and append it to the end of the line until all combinations have been produced. I really don't know where to begin and would be thankful for any sort of direction. The part that has me confused is that this is supposed to be performed without the use of arrays. I'm not just fishing for someone to solve the problem for me, I'm just looking for some direction. Thank you very much for your time and help.

    SAMPLE INPUT:
    Pipes and Filters
    Java Swing
    Software Requirements Analysis

    SAMPLE OUTPUT:
    Analysis Software Requirements
    Filters Pipes and
    Java Swing
    Pipes and Filters
    Requirements Analysis Software
    Software Requirements Analysis
    Swing Java


  • Symfony routing with an API Web Service

    - by fesja
    Hi, I'm finishing the API of our web service. Now I'm thinking about how to structure the routes, so that if we decide to make a new version we don't break the first API.

    Right now:

      url: /api/:action
      param: { module: api, action: :action }
      requirements:
        sf_format: (?:xml|json)

    What I've thought of:

      url: /api/v1/:module/:action
      param: { module: api1, action: :action }
      requirements:
        sf_format: (?:xml|json)

      url: /api/v2/:module/:action
      param: { module: api2, action: :action }
      requirements:
        sf_format: (?:xml|json)

    That's easy, but the perfect solution would be to have the following kind of route:

      # Automatically redirects to one module or another
      url: /api/v:version/:module/:action
      param: { module: api:version, action: :action }
      requirements:
        sf_format: (?:xml|json)

    Any ideas on how to do it? What do you recommend we do? Thanks!


  • Issue with InstallShield

    - by Alnahas
    I use InstallShield 2009 to deploy a VS2005 project with a SQL Express 2005 DB. I put in my exe and DB files, and I typed "/qn SQLAUTOSTART=1 ADDLOCAL=ALL DISABLENETWORKPROTOCOLS=1" on the command line to make a silent install. My problem is that after I build this project and try using it, it only works in one click if the computer already has the requirements. If the computer needs these requirements installed, the user must click the setup icon again to finish the setup: a first click to install the requirements, and a second click to install my project. I need the progress to continue without stopping until all the installation is done (both the needed requirements and my project).


  • SQL Server and Hyper-V Dynamic Memory Part 3

    - by SQLOS Team
    In parts 1 and 2 of this series we looked at the basics of Hyper-V Dynamic Memory and SQL Server memory management. In this part Serdar looks at configuration guidelines for SQL Server memory management.

    Part 3: Configuration Guidelines for Hyper-V Dynamic Memory and SQL Server

    Now that we understand SQL Server memory management and Hyper-V Dynamic Memory basics, let's take a look at general configuration guidelines for getting the benefits of Hyper-V Dynamic Memory in your SQL Server VMs.

    Requirements

    Host operating system requirements: The Hyper-V Dynamic Memory feature was introduced with Windows Server 2008 R2 SP1. Therefore, in order to use Dynamic Memory for your virtual machines, you need Windows Server 2008 R2 SP1 or Microsoft Hyper-V Server 2008 R2 SP1 on your Hyper-V host.

    Guest operating system requirements: In addition, Dynamic Memory is only supported in the Standard, Web, Enterprise and Datacenter editions of Windows running inside VMs. Make sure that your VM is running one of these editions. For additional requirements on each operating system see "Dynamic Memory Configuration Guidelines" here.

    SQL Server requirements: All versions of SQL Server support Hyper-V Dynamic Memory. However, only certain editions of SQL Server are aware of dynamically changing system memory. To have a truly dynamic environment for your SQL Server VMs, make sure that you are running one of the SQL Server editions listed below:
      - SQL Server 2005 Enterprise
      - SQL Server 2008 Enterprise / Datacenter editions
      - SQL Server 2008 R2 Enterprise / Datacenter editions
    Configuration guidelines for other versions of SQL Server are covered below in the FAQ section.

    Guidelines for configuring Dynamic Memory parameters. Here is how to configure Dynamic Memory for your SQL VMs in a nutshell:

      Hyper-V Dynamic Memory Parameter | Recommendation
      Startup RAM                      | 1 GB + SQL Min Server Memory
      Maximum RAM                      | > SQL Max Server Memory
      Memory Buffer %                  | 5
      Memory Weight                    | Based on performance needs

    Startup RAM: In order to ensure that your SQL Server VMs can start correctly, ensure that Startup RAM is higher than the configured SQL Min Server Memory for your VMs. Otherwise the SQL Server service will need to page in order to start, since it will not be able to see enough memory during startup. Also note that Startup Memory will always be reserved for your VMs. This guarantees a certain level of performance for your SQL Servers; however, setting it too high will limit the consolidation benefits you'll get out of your virtualization environment.

    Maximum RAM: This one is obvious. If you've configured SQL Max Server Memory for your SQL Server, make sure that the Dynamic Memory Maximum RAM configuration is higher than this value. Otherwise your SQL Server will not grow to memory values higher than the value configured for Dynamic Memory.

    Memory Buffer %: The memory buffer configuration is used to provision file cache to virtual machines in order to improve performance. Because SQL Server manages its own buffer pool, the Memory Buffer setting should be configured to the lowest value possible, 5%. Configuring a higher memory buffer will prevent low-resource notifications from the Windows Memory Manager and will prevent reclaiming memory from SQL Server VMs.

    Memory Weight: The memory weight configuration defines the importance of memory to a VM. Configure higher values for the VMs that have higher performance requirements. VMs with higher memory weight will get more memory under high memory pressure conditions on your host.

    Questions and Answers

    Q1 - Which SQL Server memory model is best for Dynamic Memory? The best SQL Server memory model for Dynamic Memory is the "Locked Pages" memory model. This memory model ensures that SQL Server memory is never paged out, and it is also adaptive to dynamically changing memory in the system. This is extremely useful when Dynamic Memory is attempting to remove memory from SQL Server VMs, ensuring no SQL Server memory is paged out. You can find instructions on configuring the "Locked Pages" memory model for your SQL Servers here.

    Q2 - What about other SQL Server editions; how should I configure Dynamic Memory for them? Other editions of SQL Server do not adapt to dynamically changing environments. They determine how much memory they should allocate during startup and don't change this value afterwards. Therefore make sure that you configure a higher startup memory for your VM, because that will be all the memory that SQL Server utilizes. Tune Maximum Memory and Memory Buffer based on the other workloads running on the system. If there are no other workloads, consider using Static Memory for these editions.

    Q3 - What if I have multiple SQL Server instances in a VM? Having multiple SQL Server instances in a VM is not a general recommendation for predictable performance, manageability and isolation. In order to achieve predictable behavior, make sure that you configure SQL Min Server Memory and SQL Max Server Memory for each instance in the VM, and make sure that:
      - Dynamic Memory Startup Memory is greater than the sum of the SQL Min Server Memory values for the instances in the VM
      - Dynamic Memory Maximum Memory is greater than the sum of the SQL Max Server Memory values for the instances in the VM

    Q4 - I'm using the Large Page memory model for my SQL Server. Can I still use Dynamic Memory? The short answer is no. SQL Server does not dynamically change its memory size when configured with the Large Page memory model. In virtualized environments Hyper-V provides large page support by default, so most of the time the Large Page memory model doesn't bring any benefits to a SQL Server running in a virtualized environment.

    Q5 - How do I monitor SQL performance when I'm trying Dynamic Memory on my VMs? Use the performance counters below to monitor memory performance for SQL Server:
      - Process - Working Set: available in the VM via process performance counters; represents the actual amount of physical memory being used by the SQL Server process in the VM.
      - SQL Server - Buffer Cache Hit Ratio: available in the VM via SQL Server counters; represents the paging being done by SQL Server. A rate of 90% or higher is desirable.

    Conclusion: These blog posts are a quick start to a story that will be developing more in the near future. We're still continuing our testing and investigations to provide more detailed configuration guidelines, with example performance numbers, in a white paper in the upcoming months. Now it's time to give SQL Server and Hyper-V Dynamic Memory a try. Use these guidelines to kick-start your environment, see what you think about it, and let us know of your experiences. - Serdar Sutay

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
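
    As a side note, since SQL Min/Max Server Memory come up throughout this post: these are instance settings changed with sp_configure. Below is a minimal sketch of doing that from C# via ADO.NET (my own illustration, not from the original post; the connection string and memory values are placeholders):

        using System;
        using System.Data.SqlClient;

        class ConfigureSqlMemory
        {
            static void Main()
            {
                // Placeholder -- point this at the SQL Server instance in the VM.
                const string connectionString = "Server=.;Database=master;Integrated Security=true";

                // 'show advanced options' must be enabled before the memory options are visible.
                const string sql = @"
                    EXEC sp_configure 'show advanced options', 1;
                    RECONFIGURE;
                    EXEC sp_configure 'min server memory (MB)', 1024;  -- keep below Dynamic Memory Startup RAM
                    EXEC sp_configure 'max server memory (MB)', 4096;  -- keep below Dynamic Memory Maximum RAM
                    RECONFIGURE;";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();
                    Console.WriteLine("min/max server memory reconfigured.");
                }
            }
        }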


  • Technology Selection for a dynamic product

    - by Kuntal Shah
    We are building a product for the procurement domain in Java. The main technical requirements are:
      - Platform independent
      - Database independent
      - Browser independent
    In its functional requirements the product is very dynamic in nature, the main reason being that the procurement process differs from client to client around the world. Briefly, we need a dynamic workflow engine and a dynamic template engine: a workflow engine with which we can define any kind of workflow, and a template engine that allows us to define any kind of data structure and, based on that definition, collect user input through the workflow. We have been developing this product for almost 2 years; it has taken a long time to get a handle on the dynamics of the requirements. So far we have developed a basic workflow and template engine, which is in use at one client. We have been using the following technologies:
      - GWT-Ext (front-end framework)
      - Hibernate (database layer)
    Along the way we faced issues with GWT-Ext (mainly browser compatibility) and with database optimization due to subclassing in Hibernate. To resolve the GWT-Ext issue (it has a dying community), we decided to move to SmartGWT. In SmartGWT we faced issues related to loading, and we have now almost settled on GWT 2.3 as the way to go, as the library is rich and the performance is up to the mark. We have almost finalized a GWT and Spring based front and middle layer. In Hibernate, we found the main issues were with subclassing: it was generating astronomical queries, and sometimes it would stop firing queries for 5-10 seconds, maybe even around 30 seconds, and then resume again. A few days back I came across an article about ORM. I am a traditional .NET SQL developer and I have always worked with relational databases. Reading through the article, I found it related to the issues I face. I am still not completely convinced about using Hibernate, and the article just supported my opinion. Here are the questions for which I am looking for answers:
      - Should we be going with Hibernate when the database requirements are dynamic and the data load will be heavy in future? How can we partition the data, join it efficiently, and optimize the queries?
      - If the answer is no, how do we achieve database independence?
      - Is our choice of GWT and Spring proper, or do we need to change that too?
      - Should we use some other key-value pair database if the data is dynamic in nature and it is very difficult to make it relational?


  • WebLogic 12c hands-on bootcamps for partners – free of charge

    - by JuergenKress
    For all WebLogic Partner Community members we offer our WebLogic 12c hands-on bootcamps - free of charge! If you want to become Application Grid Specialized: Register Here!

      Country        | Date             | Location                 | Registration
      Germany        | 3-5 April 2012   | Oracle Düsseldorf        | Click here
      France         | 24-26 April 2012 | Oracle Colombes          | Click here
      Spain          | 08-10 May 2012   | Oracle Madrid            | Click here
      Netherlands    | 22-24 May 2012   | Oracle Amsterdam         | Click here
      United Kingdom | 06-08 June 2012  | Oracle Reading           | Click here
      Italy          | 19-21 June 2012  | Oracle Cinisello Balsamo | Click here
      Portugal       | 10-12 July 2012  | Oracle Lisbon            | Click here

    Skill requirements. Attendees need to have the following skills, as required by the product set and to make sure they get the most out of the training:
      - Basic knowledge of Java and Java EE
      - Understanding of the application server concept
      - Basic knowledge of older releases of WebLogic Server would be beneficial
      - Membership in the WebLogic Partner Community; for registration please visit http://www.oracle.com/partners/goto/wls-emea

    Hardware requirements. Every participant works on his own notebook. The minimal hardware requirements are:
      - 4 GB physical RAM (we will boot the image with 2 GB RAM)
      - dual-core CPU
      - 15 GB HD

    Software requirements:
      - Please install Oracle VM VirtualBox 4.1.8

    Follow-up and certification. With the workshop registration you agree to the following next step: attend and pass the Oracle Application Grid Certified Implementation Specialist assessment. For details and registration please visit Register Here.

    Free WebLogic certification (free assessment voucher to become certified): For all WebLogic experts, we offer free vouchers worth $195 for the Oracle Application Grid Certified Implementation Specialist assessment. To demonstrate your WebLogic knowledge you first have to pass the free online assessment Oracle Application Grid PreSales Specialist. For free vouchers, please send an e-mail with a screenshot of your Oracle Application Grid PreSales certificate to [email protected], including your name, company, e-mail and country. Note: this offer is limited to partners from Europe, the Middle East and Africa. Partners from other countries, please contact your Oracle partner manager.

    WebLogic specialization. To become specialized in Application Grid, please make sure that you access the Application Grid Specialization Guide and the Application Grid Specialization Checklist. If you have any questions please contact the Oracle Partner Business Center.

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


  • Five Ways Enterprise 2.0 Can Transform Your Business - Q&A from the Webcast

    - by [email protected]
    A few weeks ago, Vince Casarez and I presented with KMWorld on the Five Ways Enterprise 2.0 Can Transform Your Business. It was an enjoyable, interactive webcast in which Vince and I discussed the ways Enterprise 2.0 can transform your business and more importantly, highlighted key customer examples of how to do so. If you missed the webcast, you can catch a replay here. We had a lot of audience participation in some of the polls we conducted and in the Q&A session. We weren't able to address all of the questions during the broadcast, so we attempted to answer them here: Q: Which area within your firm focuses on Web 2.0? Meaning, do you find new departments developing just to manage the web 2.0 (Twitter, Facebook, etc.) user experience or are you structuring current departments? A: There are three distinct efforts within Oracle. The first is around delivery of these Web 2.0 services for enterprise deployments. This is the focus of the WebCenter team. The second effort is injecting these Web 2.0 services into use cases that drive the different enterprise applications. This effort is focused on how to manage these external services and bring them into a cohesive flow for marketing programs, customer care, and purchasing. The third effort is how we consume these services internally to enhance Oracle's business delivery. It leverages the technologies and use cases of the first two but also pushes the envelope with regards to future directions of these other two areas. Q: In a business, Web 2.0 is mostly like action logs. How can we leverage the official process practice versus the logs of a recent action? Example: a system configuration modified last night on a call out versus the official practice that everybody would use in the morning.A: The key thing to remember is that most Web 2.0 actions / activity streams today are based on collaboration and communication type actions. At least with public social sites like Facebook and Twitter. What we're delivering as part of the WebCenter Suite are not just these types of activities but also enterprise application activities. These enterprise application activities come from different application modules: purchasing, HR, order entry, sales opportunity, etc. The actions within these systems are normally tied to a business object or process: purchase order/customer, employee or department, customer and supplier, customer and product, respectively. Therefore, the activities or "logs" as you name them are able to be "typed" so that as a viewer, you can filter or decide to see only certain types of information. In your example, you could have a view that only showed you recent "configuration" changes and this could be right next to a view that showed off the items to be watched every morning. Q: It's great to hear about customers using the software but is there any plan for future webinars to show what the products/installs look like? That would be very helpful.A: We don't have a webinar planned to show off the install process. However, we have a viewlet that's posted on Oracle Technology Network. 
You can see it here:http://www.oracle.com/technetwork/testcontent/wcs-install-098014.htmlAnd we've got excellent documentation that walks you through the steps here:http://download.oracle.com/docs/cd/E14571_01/install.1111/e12001/install.htmAnd there's a whole set of demos and examples of what WebCenter can do at this URL:http://www.oracle.com/technetwork/middleware/webcenter/release11-demos-097468.html Q: How do you anticipate managing metadata across the enterprise to make content findable?A: We need to first make sure we are all talking about the same thing when we use a word like "metadata". Here's why...  For a developer, metadata means information that describes key elements of the portal or application and what the portal or application can do. For content systems, metadata means key terms that provide a taxonomy or folksonomy about the information that is being indexed, ordered, and managed. For business intelligence systems, metadata means key terms that provide labels to groups of data that most non-mathematicians need to understand. And for SOA, metadata means labels for parts of the processes that business owners should understand that connect development terminology. There are also additional requirements for metadata to be available to the team building these new solutions as well as requirements to make this metadata available to the running system. These requirements are often separated by "design time" and "run time" respectively. So clearly, a general goal of managing metadata across the enterprise is very challenging. We've invested a huge amount of resources around Oracle Metadata Services (MDS) to be able to provide a more generic system for all of these elements. No other vendor has anything like this technology foundation in their products. This provides a huge benefit to our customers as they will now be able to find content, processes, people, and information from a common set of search interfaces with consistent enterprise wide results. Q: Can you give your definition of terms as to document and content, please?A: Content applies to a broad category of information from Word documents, presentations and reports through attachments to invoices and/or purchase orders. Content is essentially any type of digital asset including images, video, and voice. A document is just one type of content. Q: Do you have special integration tools to realize an interaction between UCM and WebCenter Spaces/Services?A: Yes, we've dedicated a whole team of engineers to exploit the key features of Oracle UCM within WebCenter.  While ensuring that WebCenter can connect to other non-Oracle systems, we've made sure that with the combined set of Oracle technology, no other solution can match the combined power and integration.  This is part of the Oracle Fusion Middleware strategy which is to provide best in class capabilities for Content and Portals.  When combined together, the synergy between the two products enables users to quickly add capabilities when they are needed.  For example, simple document sharing is part of the combined product offering, but if legal discovery or archiving is required, Oracle UCM product includes these capabilities that can be quickly added.  There's no need to move content around or add another system to support this, it's just a feature that gets turned on within Oracle UCM. 
Q: All customers have some interaction with their applications and have many older versions. How do you see these new Enterprise 2.0 capabilities adding value to existing enterprise application deployments?

A: Just as service-oriented architectures allowed the processes of different application systems to work together, a similar approach is needed for these Enterprise 2.0 capabilities. Oracle WebCenter is built on a core architecture that allows these Enterprise 2.0 services to be delivered as services themselves, so that one scalable set can be used and integrated directly into any type of application. In this way, users get immediate value out of the Enterprise 2.0 capabilities without having to wait for the next major release or upgrade. These centrally managed WebCenter services expose a set of standard interfaces that make it extremely easy to add them to existing applications, no matter what technology the application has been implemented in.

Q: We've heard about Oracle's next-generation applications, called "Fusion Applications" — can you tell me how this all works together?

A: Oracle WebCenter powers the core collaboration and social computing services found within Fusion Applications, and it is the core user experience technology with which all the application screens have been implemented. Its core concept of task flows allows all the Fusion Applications modules to be adapted and composed by business users and IT without needing a professional developer. Oracle WebCenter is at the heart of the new Fusion Applications. In addition, the same patterns and technologies are now being added to the existing applications, including JD Edwards, Siebel, PeopleSoft, and E-Business Suite. The core technology gives all these customers a much smoother upgrade path to Fusion Applications: they get the immediate benefit of injecting new user interactions into their existing applications without having to move to Fusion Applications completely, and when the time comes, their users will already be well versed in how the new capabilities work.

Q: Does any of this work with non-Oracle software? Other databases? Other application servers?

A: We have made sure that Oracle WebCenter delivers the broadest set of development choices, so that no matter what technology your developers are using, WebCenter capabilities can be quickly and easily added to the site or application. In addition, we have certified Oracle WebCenter to run against non-Oracle databases like DB2 and SQL Server, and we have stated plans for certification against MySQL as well. Later in CY 2011, Oracle will provide certification on non-Oracle application servers such as WebSphere and JBoss.

Q: How do we balance user and IT requirements with regard to Enterprise 2.0 technologies?

A: Wrong decisions are often made because employee knowledge is not tapped efficiently, and opportunities to innovate are often missed because the right people do not work together. Collaboration among workers in the right business context is critical for success. While standalone Enterprise 2.0 technologies can improve collaboration for collaboration's sake, using social collaboration tools in the context of business applications and processes will improve business responsiveness and lead companies to a more competitive position. As these systems become more mission-critical, it is essential that they maintain the highest level of performance and availability while scaling to support larger communities.
Q: What are the ways in which Enterprise 2.0 can improve business responsiveness?

A: With a wide range of Enterprise 2.0 tools in the marketplace, CIOs need to deploy solutions that meet the requirements of users as well as those of IT. Workers want a next-generation user experience that is personalized and aggregates their daily tools and tasks, while IT needs to ensure the solution is secure, scalable, flexible, reliable, and easily integrated with existing systems. An open and integrated approach to deploying portals, content management, and collaboration can enhance your business by addressing both the knowledge workers' need for better information and the IT mandate to conserve resources by simplifying, consolidating, and centralizing infrastructure and administration.
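As a small illustration of the "typed" activity streams described in the action-log answer above, here is a hedged C# sketch (C# 9+). The ActivityEntry shape and the type names are assumptions of mine for illustration only — they are not the WebCenter activity stream API — but they show the filtering idea: a "view" is just a typed filter over one shared stream.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical activity record; the fields are illustrative,
// not the actual WebCenter activity stream schema.
record ActivityEntry(string Type, string Actor, string Summary, DateTime When);

class ActivityStreamDemo
{
    static void Main()
    {
        var stream = new List<ActivityEntry>
        {
            new("configuration", "ops.oncall", "Changed connection pool size",    DateTime.Now.AddHours(-9)),
            new("purchasing",    "j.smith",    "Approved purchase order 4711",    DateTime.Now.AddHours(-5)),
            new("configuration", "ops.oncall", "Rotated application credentials", DateTime.Now.AddHours(-2)),
        };

        // A "view" is a typed filter over the stream: show only
        // configuration changes from the last 24 hours, newest first.
        var configView = stream
            .Where(e => e.Type == "configuration" && e.When > DateTime.Now.AddDays(-1))
            .OrderByDescending(e => e.When);

        foreach (var e in configView)
            Console.WriteLine($"{e.When:t}  {e.Actor}: {e.Summary}");
    }
}
```

The same stream could back a second view filtered to a different type — the morning "items to watch" list from the question — without duplicating any data.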

    Read the article

  • VS 2010 ALM Whitepapers – link from Neno Loje

    - by johndoucette
    Overview of Visual Studio ALM Whitepapers by Microsoft

Overview
- Visual Studio 2010 Quick Reference Guidance

Installation, Configuration & Administration
- Team Foundation Installation Guide for Visual Studio Team System 2010
- Administration Guide for Microsoft Visual Studio 2010 Team Foundation Server
- Visual Studio 2010 TFS Upgrade Guide
- Visual Studio 2010 and Team Foundation Server 2010 VM Factory
- TFS Integration Platform
- Visual Studio 2010 Licensing White Paper

Requirements
- Visual Studio 2010 Team Foundation Server Requirements
- Requirements Management Guidance

Version Control & Configuration Management
- Visual Studio TFS Branching Guide 2010

Some guide or whitepaper missing? Let me know!

    Read the article

  • How do you think about an Application Generator? [closed]

    - by Mehdi Sheyda
    I'm designing an application-generating application: one that takes a customer's requirements as input, analyzes them, creates classes, and produces C# program files. I am at the beginning of this project and have a long way to go. Do you have experience designing similar kinds of projects? What risks might I encounter?
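For the C# output step specifically, one well-trodden starting point is the .NET CodeDOM, which emits source files from an object model of the code. Below is a minimal sketch under the assumption that a requirement has already been analyzed down to a class name and a set of member names — the EmitClass helper and its inputs are placeholders of my own, not part of any framework:

```csharp
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class ClassGenerator
{
    // Emit a C# file containing one class with simple public string fields.
    // In a real generator the class and member names would come from the
    // analyzed customer requirements rather than being passed in directly.
    static void EmitClass(string className, string[] fieldNames, string path)
    {
        var unit = new CodeCompileUnit();
        var ns = new CodeNamespace("Generated");
        unit.Namespaces.Add(ns);

        var cls = new CodeTypeDeclaration(className) { IsClass = true };
        foreach (var name in fieldNames)
            cls.Members.Add(new CodeMemberField(typeof(string), name)
            {
                Attributes = MemberAttributes.Public
            });
        ns.Types.Add(cls);

        using (var provider = new CSharpCodeProvider())
        using (var writer = new StreamWriter(path))
        {
            provider.GenerateCodeFromCompileUnit(unit, writer, new CodeGeneratorOptions());
        }
    }

    static void Main()
    {
        EmitClass("Customer", new[] { "Name", "Email" }, "Customer.cs");
    }
}
```

As for risk: the emission step shown here is rarely the hard part. The analysis step before it — turning ambiguous, natural-language customer requirements into an unambiguous class model — is where projects like this most often stall, so plan the bulk of your effort there.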

    Read the article

  • Getting your bearings and defining the project objective

    - by johndoucette
    I wrote this two years ago and thought it was worth posting…

Some may think this is a daunting task, and some may even say "what a waste of time" and want to open MS Project and start typing out tasks because someone asked for an estimate and a task list. Hell, maybe you even use Excel and pump out a spreadsheet with some real scientific formula for guessing how long it will take to code a bunch of classes. However, this short exercise provides the basis for the entire project, whether small or large, and will be a great friend when communicating with anyone on your team or even your client. I call this the Project Brief. If you find yourself going beyond a single page, you must decompose the sections and summarize your findings so there is a complete and clear picture of the project in a relatively short statement.

Here is a great quote from the PMBOK (Project Management Body of Knowledge) on what a project is: "A project is a temporary endeavor undertaken to create a unique product, service or result."

With this in mind, the project brief should encompass the entirety (objective) of the endeavor in its explanation and what it will take (goals) to create the product, service, or result (deliverables). Normally the process of identifying the project objective is done during the first stage of a project, the Project Kickoff, but you can perform this very important step anytime to help you get a bearing. There are many more parts to keeping a project on course, but this is usually the foundation it can be grounded on. Through a series of three exercises, you should be able to come up with the objective, goals, and deliverables of your project. Follow these steps, and in no time (about half an hour) you will have the foundation of your project plan. (See the examples below.)

Exercise 1 – Objectives

Begin with the end in mind. Think about your project in business terms, with a couple of things to help you understand the objective:
- Reference the business benefit in terms of cost, speed, and/or quality.
- Provide a higher-level picture of what the outcome will look like (future sense).
- It should be non-measurable — that's what the goals are for.

The output should be a single paragraph of three sentences and take 10 minutes to write. Typically, agreement must be reached on the objectives of the project before you proceed to the next steps.

Exercise 2 – Goals

A project goal is a statement that answers questions about who, what, why, where, and when. A good project goal statement:
- Answers the five "W" questions for the project
- Is measurable in each of its parts
- Is published and agreed on by all the owners

This helps the project manager receive confirmation on defining the project target. Using the project objective established in the first exercise, think about the things it will take to get the job done — tangible activities, which are the top-level tasks in a typical Work Breakdown Structure (WBS). The overall goal statement plus all the deliverables (next exercise) can be seen as the project team's contract with the project owners. Write 3–5 goals in about 10 minutes. You should not write the words "who, what, why, where, and when," but should merely be able to answer those questions when you read a goal.

Exercise 3 – Deliverables

Every project creates some type of output, and these outputs are called deliverables.
There are two classes of deliverables:
- Internal – produced for project team members to meet their goals
- External – produced for project owners to meet their expectations

The list you enter here provides a checklist for the team's delivery and/or a statement of all the expectations of the project owners. Here are some typical project deliverables:

Product and product documentation
- End product/system
- Requirements/feature documents
- Installation guides
- Demo/prototype
- System design documents
- User guides/help files

Plans
- Project plan
- Training plan
- Conversion/installation/delivery plan
- Test plans
- Documentation plan
- Communication plan

Reports and general documentation
- Progress reports
- System acceptance tests
- Outstanding bug list
- Procedures
- Risk and issue logs
- Project history

Deliverables should go with each of the goals: have 3–5 deliverables for each goal. When you are done, you will have established a great foundation for the clarity of your project. This exercise can take some time, but with practice you should be able to whip this one out in 10 minutes as well, especially if you are intimate with an ongoing project.

Samples

Sample 1

Objective
[Client] is implementing a series of MOSS sites to support external public (Internet), internal employee (Intranet), and external secure (password-protected Internet) applications. This project will focus on the public-facing web site and will provide [Client] with architectural recommendations based on the current design being done by their design partner [Partner] and the internal Content Team. In addition, it will provide [Client] with the development plan and confidence they need to deploy a world-class public Internet website.

Goals
1. [Consultant] will provide technical guidance and set project team expectations for the implementation of the MOSS Internet site based on the provided features/functions within three weeks.
2. [Consultant] will understand the phase 2 secure, password-protected Internet site design and provide recommendations.

Deliverables
1.1 Public Internet (unsecure) Architectural Recommendation Plan
1.2 Physical site construction Work Breakdown Structure and plan (time, cost, and resources needed)
2.1 Two-factor authentication recommendation document

Sample 2

Objective
[Client] is currently using an application developed by [Consultant] many years ago called "XXX". This application, although functional, does not meet their updated business requirements and contains a few defects for which [Client] has developed work-around processes. [Client] would like a "new and improved" system to support their membership management needs by expanding membership and subscription capabilities, providing accounting integration with internal (GL) and external (VeriSign) systems, and implementing hooks to the current CRM solution. This effort will take place through a series of phases, beginning with envisioning.

Goals
1. Through discussions with users, [Consultant] will discover the current issues/bugs that need to be resolved while preserving current functionality, within three weeks.
2. [Consultant] will gather requirements from the users about what is "needed" vs. what they have today, and provide a high-level document supporting their needs.
3. [Consultant] will meet with the team members through a series of meetings and help define the overall project plan to deliver a new and improved solution.
Deliverables
1.1 Prioritized list of current application issues/bugs that need to be resolved
1.2 A resolution plan for the issues/bugs identified in the current application
1.3 Risk Assessment Document
2.1 A Requirements Document showing high-level [Client] needs for the new XXX application:
    · New feature functionality not in the application today
    · Existing functionality that will remain in the new application
2.2 Reporting Requirements Document
3.1 A project plan showing the deliverables and cost for the next (second) phase of this project
3.2 A Statement of Work for the next (second) phase of this project
3.3 An estimate of any work that would need to follow the second phase
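To make the brief's structure concrete, here is a small, hypothetical C# model — entirely my own shape, not part of any methodology or tool — reflecting the guidance above: one non-measurable objective, 3–5 goals, and 3–5 deliverables numbered against each goal:

```csharp
using System.Collections.Generic;

// Hypothetical model of the Project Brief described above:
// one objective, several goals, each goal carrying its deliverables.
class Deliverable
{
    public string Id;          // e.g. "1.1" -- numbered against its goal
    public string Description;
    public bool IsInternal;    // internal (team) vs. external (owner) deliverable
}

class Goal
{
    public int Number;                      // e.g. 1, 2, 3
    public string Statement;                // answers who, what, why, where, when
    public List<Deliverable> Deliverables = new List<Deliverable>(); // aim for 3-5
}

class ProjectBrief
{
    public string Objective;                    // single, non-measurable paragraph
    public List<Goal> Goals = new List<Goal>(); // aim for 3-5
}
```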

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today's web, mobile, social, enterprise, and cloud applications. Ensuring data is always available is a top priority for any organization – minutes of downtime will result in significant loss of revenue and reputation. There is no "one size fits all" approach to delivering High Availability (HA): unique application attributes, business requirements, operational capabilities, and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA – "people and processes" are just as critical as the technology itself. For this reason, MySQL Enterprise Edition supports a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not an expensive add-on; it is included within the core Enterprise Edition offering, along with the management tools, consulting, and 24x7 support needed to deliver true HA.

At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris:
- DRBD for MySQL
- Oracle Solaris Clustering for MySQL

DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver highly available database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node, and supports automatic failover and recovery. Together, Linux, DRBD, Corosync, and Pacemaker provide an integrated stack of mature and proven open source technologies.

[Figure: DRBD Stack – Providing Synchronous Replication for the MySQL Database with InnoDB]

Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure, and provision DRBD with MySQL.

Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution suited equally to small clusters in local datacenters and to larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies.

Putting it All Together

When you add MySQL Replication and MySQL Cluster into the HA mix, along with third-party solutions, users have extensive choice (and decisions to make) in delivering HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology for selecting the best HA solution for your new web, cloud, and mobile services, while also discussing the importance of people and process in ensuring service continuity. This subject was recently presented at Oracle OpenWorld, and the slides are available here.

Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL Consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.
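Whichever HA topology you choose, monitoring belongs squarely in the "people and processes" bucket. As a small, hedged sketch (not an official tool), here is how an application might poll a replica's health from C# using the MySQL Connector/Net driver. The connection details are placeholders, and note that on MySQL 8.0.22+ the statement is SHOW REPLICA STATUS rather than SHOW SLAVE STATUS:

```csharp
using System;
using MySql.Data.MySqlClient;

class ReplicationHealthCheck
{
    static void Main()
    {
        // Placeholder connection string -- point it at the replica host.
        var connStr = "Server=replica-host;Uid=monitor;Pwd=secret;";

        using (var conn = new MySqlConnection(connStr))
        {
            conn.Open();
            using (var cmd = new MySqlCommand("SHOW SLAVE STATUS", conn))
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                {
                    Console.WriteLine("Not configured as a replica.");
                    return;
                }

                // Key health indicators from the replication status row:
                // both threads should report "Yes" and lag should stay low.
                var ioRunning  = reader["Slave_IO_Running"].ToString();
                var sqlRunning = reader["Slave_SQL_Running"].ToString();
                var lag        = reader["Seconds_Behind_Master"];

                Console.WriteLine($"IO thread: {ioRunning}, SQL thread: {sqlRunning}, lag: {lag}s");
            }
        }
    }
}
```

A check like this is only a starting point; in production the same signals would feed an alerting system so the failover procedures your team has rehearsed actually get triggered.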

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >