Search Results

Search found 5267 results on 211 pages for 'use cases'.


  • PHP specifying a fixed include source for scripts in different directories

    - by Extrakun
    I am currently doing unit testing and use folders to organize my test cases. All cases pertaining to managing user accounts, for example, go under \tests\accounts. Over time there are more test cases, and I have begun to separate the cases by type, such as \tests\accounts\create, \tests\accounts\update, etc. However, one annoying problem is that I have to specify the path to a set of common includes:

        include_once ("../../../../autoload.php");
        include_once ("../../../../init.php");

    A test case in \tests\accounts\ would require a change to the include (one less directory level down). Is there any way to have them locate my two common includes automatically? I understand I could set include paths in my PHP configuration or use server environment variables, but I would like to avoid such solutions: they make the application less portable and couple it to a layer the programmer can't control (some web hosts don't allow changes to PHP configuration settings, for example).
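    One common workaround is a sketch like the following (file names follow the question; the loop assumes autoload.php sits in the test root). Each test walks up from its own directory until it finds the shared includes, so the snippet is identical at every depth:

        <?php
        // Walk up from this test file's directory until the shared includes
        // are found. Assumes autoload.php lives in the \tests root.
        $dir = __DIR__;  // use dirname(__FILE__) on PHP < 5.3
        while (!file_exists($dir . '/autoload.php') && dirname($dir) !== $dir) {
            $dir = dirname($dir);
        }
        include_once $dir . '/autoload.php';
        include_once $dir . '/init.php';

    This keeps the suite portable: no php.ini changes, no environment variables, and moving a test deeper or shallower requires no edits.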

    Read the article

  • Why doesn't the C# compiler call Dispose() automatically?

    - by Wernight
    It seems that in most cases the C# compiler could call Dispose() automatically. Most uses of the using pattern look like:

        public void SomeMethod()
        {
            ...
            using (var foo = new Foo())
            {
                ...
            } // Foo isn't used after here (obviously).
            ...
        }

    Since foo isn't used afterwards (that's a very simple detection) and isn't passed as an argument to another method (a supposition that applies to many use cases and could be extended), the compiler could call Dispose() automatically without requiring the developer to do it. This means that in most cases the using statement is pretty useless if the compiler does some smart work. IDisposable seems low-level enough to me to be taken into account by a compiler. Now why isn't this done? Wouldn't it improve performance (if the developers are... dirty)?
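    For what it's worth, the "simple detection" stops being simple as soon as the object can escape the method. A sketch (illustrative types, not from the question) of two escape routes that would make an automatic Dispose() at the end of the method incorrect:

        using System;

        class Foo : IDisposable
        {
            public void Dispose() { Console.WriteLine("disposed"); }
        }

        static class Holder
        {
            public static Foo Cached;       // outlives the method call
            public static Func<Foo> Lazy;   // closure capture, also outlives it
        }

        class Program
        {
            static void SomeMethod()
            {
                var foo = new Foo();
                Holder.Cached = foo;        // escapes via a static field
                Holder.Lazy = () => foo;    // escapes via a captured closure
                // Auto-disposing foo here would break both readers above.
            }

            static void Main()
            {
                SomeMethod();
                Holder.Lazy().Dispose();    // only the real owner knows when
            }
        }

    Proving the absence of every such escape amounts to whole-program analysis, which is presumably why the language leaves the decision to the developer.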

    Read the article

  • Setting a value to a parameter - it always comes back null

    - by Amina
    So I have this schedule-visit page with two groups. Group 1 contains a list of Cases. Group 2 contains a list of Parties. Each group has checkboxes next to its items for the user to select. My issue: when I select a case and/or a party and save, then go to the edit page of the visit I just saved, only my selected case is checked; the party I selected is not checked. After debugging I realized that the partyId is not being saved properly during create, and thus not showing as selected or saved on the edit page. I really need help on how to properly save the selected party and set a value for it from the parameter. Here is the code I have for saving a case; I would like to know how to save the party along with it.

    Controller:

        [HttpPost]
        [ValidateInput(false)]
        public ActionResult Create(VisitViewModel viewModel, Guid[] associatedCasesSelected, Guid[] selectedParties)
        {
            if (!ModelState.IsValid)
            {
                viewModel.Time = _timeEntryHelper.Value;
                AddLookupsToViewModel(viewModel);
                return View(viewModel);
            }
            var visitEntry = Mapper.Map<VisitViewModel, VisitEntry>(viewModel);
            ...
            viewModel.CasePartyIds = selectedParties;
            try
            {
                _visitEntryService.Create(visitEntry, associatedCasesSelected);
                this.FlashInfo(string.Format(Message.ConfirmationMessageCreate, Resources.Entities.Visit.EntityName));
            }
            catch (RulesException ex)
            {
                ex.CopyTo(ModelState);
            }
            if (ModelState.IsValid)
                return RedirectToAction("Edit", "Case", new { caseId = viewModel.CaseId });
            AddLookupsToViewModel(viewModel);
            return View(viewModel);
        }

    VisitEntryService:

        public void Create(VisitEntry visitEntry, IList<Guid> caseIds)
        {
            EnsureValid(visitEntry);
            _visitEntryRepository.Save(visitEntry);
            caseIds = AddCurrentCaseToCases(visitEntry.CaseId, caseIds);
            foreach (var caseId in caseIds.Distinct())
            {
                var visit = new Visit { CaseId = caseId, VisitEntryId = visitEntry.VisitEntryId };
                _visitService.Create(visit);
            }
        }

    AddCurrentCaseToCases:

        private static IList<Guid> AddCurrentCaseToCases(Guid caseId, IEnumerable<Guid> caseIds)
        {
            var cases = new List<Guid>();
            if (caseIds != null)
            {
                cases.AddRange(caseIds);
                if (!caseIds.Contains(caseId))
                    cases.Add(caseId);
            }
            else
                cases.Add(caseId);
            return cases;
        }

    VisitService:

        public Visit Get(Guid visitId)
        {
            return DataContext.Visits.SingleOrDefault(v => v.VisitId == visitId);
        }

        public void Save(Visit visit)
        {
            if (visit.VisitId == Guid.Empty)
            {
                visit.VisitId = Guid.NewGuid();
                DataContext.Visits.InsertOnSubmit(visit);
            }
            else
            {
                var currentVisit = Get(visit.VisitId);
                if (currentVisit == null)
                    throw RepositoryExceptionFactory.Create("Visit", "VisitId");
            }
            DataContext.SubmitChanges();
        }

    As you can see, selectedParties is only copied onto viewModel.CasePartyIds and is never passed to the service. Any tips or ideas are greatly appreciated at this time :) The entity for the parties will be VisitEntryParty.

    Read the article

  • Struts2 Value Stack

    - by vipul12389
    I want to understand the Struts 2 value stack vs. request scope. I want the value stack to work the same way as request scope. For example: I invoke action1 in Struts 2; the action performs some DB task and returns. It performs some operation on an object called cases (of type Cases, where Cases is a bean class with getters and setters). The cases object is declared at class level. action1 renders a view, say jsp1. jsp1 in turn submits an action called action2. action2 maps to the same Java class as action1 but to a different method. Now I want to access the object that was used in action1. During action1, cases was pushed onto the value stack and was accessed on jsp1. I tried simply calling its getter methods, but they return a null value! Any solution on how to do this, or is it even possible? And if it is possible, then what is the difference between the value stack and request scope?
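    A minimal sketch of the setup being described (names follow the question; the service is a hypothetical collaborator). Struts 2 creates a fresh action instance per request, so a field populated during action1 is gone by the time action2 runs, which matches the null the getters return:

        import com.opensymphony.xwork2.ActionContext;
        import com.opensymphony.xwork2.ActionSupport;

        public class CaseAction extends ActionSupport {
            private Cases cases;                  // class-level, but per-request lifetime
            private CaseService caseService;      // hypothetical collaborator

            public String action1() {
                cases = caseService.load();       // available on the value stack for jsp1
                return SUCCESS;
            }

            public String action2() {
                // 'cases' is null here: this is a brand-new CaseAction instance.
                // To survive across requests it must be re-loaded, or parked in a
                // longer-lived scope, e.g. the session map:
                // ActionContext.getContext().getSession().put("cases", cases);
                return SUCCESS;
            }

            public Cases getCases() { return cases; }
            public void setCases(Cases c) { this.cases = c; }
        }

    The value stack is rebuilt for each request just like request scope; the two differ in how expressions resolve against them, not in lifetime.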

    Read the article

  • What is the correlation between the quality of the software development process and the quality of the product?

    - by Ophir Yoktan
    I used to believe that practicing "good" software development methods tends to yield a better product in the long run. However, I've seen quite a few cases where "quick-and-dirty" / "brute-force" / "copy-paste" programming appeared to give decent results quicker and cheaper. This happens especially in cases where time to market is more critical than maintenance overhead. Is there a correlation between the quality of the development process and techniques and the quality of the product?

    Read the article

  • Confusion about inheritance

    - by Samuel Adam
    I know I might get downvoted for this, but I'm really curious. I was taught that inheritance is a very powerful polymorphism tool, but I can't seem to use it well in real cases. So far, I can only use inheritance when the base class is an abstract class. Examples: If we're talking about Product and Inventory, I quickly assumed that a Product is an Inventory, because a Product must be inventorized as well. But a problem occurred when the user wanted to sell their Inventory item. It just doesn't seem right to change an Inventory object to its subtype (Product); it's almost like trying to convert a parent to its child. Another case is Customer and Member. It is logical (at least for me) to think that a Member is a Customer with some more privileges. The same problem occurred when the user wanted to upgrade an existing Customer to become a Member. A very trivial case is the Employee one, where Manager, Clerk, etc. can be derived from Employee. Still the same upgrading issue. I tried to use composition instead for some cases, but I really want to know whether I'm missing some inheritance solution here. My composition solutions for those cases: Create a reference to Inventory inside a Product. Here I'm assuming that Product and Inventory speak in different contexts: while Product is in the context of sales (price, volume, discount, etc.), Inventory is in the context of physical management (stock, movement, etc.). Make a reference to Membership inside the Customer class instead of the previous inheritance solution. Upgrading a Customer is then only about instantiating the Customer's Membership property. The Employee example keeps being taught in basic programming classes, but I think it's more proper to have Manager, Clerk, etc. derive from an abstract Role class and make that a property of Employee. I find it difficult to find an example of a concrete class deriving from another concrete class. Is there any inheritance solution with which I can solve those cases? Being new to this OOP thing, I really, really need guidance. Thanks!
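    A sketch of the Customer/Membership composition described above (hypothetical names): the upgrade mutates state on the same object, so no parent-to-child conversion is ever needed and existing references stay valid.

        // Upgrading a Customer never changes its type.
        class Membership {
            private final String level;
            Membership(String level) { this.level = level; }
            String level() { return level; }
        }

        class Customer {
            private Membership membership;        // null until upgraded

            boolean isMember() { return membership != null; }

            void upgrade(Membership m) {          // no cast, no object replacement
                this.membership = m;
            }
        }

        class Demo {
            public static void main(String[] args) {
                Customer c = new Customer();      // plain customer
                c.upgrade(new Membership("GOLD"));
                System.out.println(c.isMember()); // true; same object as before
            }
        }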

    Read the article

  • How have Guava unit tests been generated automatically?

    - by dzieciou
    Guava has automatically generated unit test cases: "Guava has staggering numbers of unit tests: as of July 2012, the guava-tests package includes over 286,000 individual test cases. Most of these are automatically generated, not written by hand, but Guava's test coverage is extremely thorough, especially for com.google.common.collect." How were they generated? What techniques and technologies were used to design and generate them?
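    For context on the mechanism: a large share of those generated cases come from suite builders in the companion guava-testlib library, which expand a small element generator plus a list of declared contract features into a combinatorial JUnit suite. A sketch of the pattern, with plain ArrayList standing in as the class under test:

        import com.google.common.collect.testing.ListTestSuiteBuilder;
        import com.google.common.collect.testing.TestStringListGenerator;
        import com.google.common.collect.testing.features.CollectionFeature;
        import com.google.common.collect.testing.features.CollectionSize;
        import com.google.common.collect.testing.features.ListFeature;
        import junit.framework.Test;
        import junit.framework.TestCase;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class ArrayListSuiteTest extends TestCase {
            public static Test suite() {
                return ListTestSuiteBuilder
                        .using(new TestStringListGenerator() {
                            @Override
                            protected List<String> create(String[] elements) {
                                return new ArrayList<String>(Arrays.asList(elements));
                            }
                        })
                        .named("ArrayList contract tests")
                        .withFeatures(
                                ListFeature.GENERAL_PURPOSE,
                                CollectionFeature.ALLOWS_NULL_VALUES,
                                CollectionSize.ANY)
                        .createTestSuite(); // expands into hundreds of generated cases
            }
        }

    Every combination of declared feature and collection size becomes its own family of generated tests, which is how a handful of generators multiplies into six-figure case counts.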

    Read the article

  • Where can I find useful Maven documentation?

    - by jprete
    I don't have to play with Maven configuration often, but when I do, it's usually to do something odd, like use a build plugin that hasn't matured enough to be in the main Maven repositories. The Maven documentation that I find doesn't cover unusual use cases well and tends to assume you already know what you're doing. Where can I find really useful, well-written documentation on Maven that also covers the corner cases of usage?

    Read the article

  • What are the ways to build a failover cluster?

    - by light
    I have a task where I need to build a failover cluster in two cases: first with servers on Red Hat Enterprise 5.1, and second with SUSE Linux Enterprise 11 SP1. Both cases have a SAN. I know there are many ways to build a failover cluster, but I can't find out more, so I need the following: What are the ways to build it? I know only virtualization. Any good book or resource to broaden my mind? I'll be glad to hear any suggestion. Thanks! EDIT #1: Failover of servers with a business application on them. EDIT #2: It would be great to hear a summary of solutions for SLES servers. EDIT #3: So, if I understand correctly, in my cases the main ways are to use internal solutions or virtualization. Now I have additional questions: (Without virtualization) Do the manufacturers of blades, for example HP or IBM, provide some solution? Do I need an additional server to control the "heartbeat" between the main and redundant servers? (Virtualization) For example, I have several physical servers with VMs. Do I need an additional server to monitor the availability of the VMs and to move VMs to another physical server if their physical server fails? Sorry for my poor English. EDIT #4: Failover of a VM, or of an OS on a physical server. In both cases a SAN will be used; it's not specified, but I think with a file system image on it. I'm starting to think my question is poorly formed and that I need to rework it.
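    To make the non-virtualization route concrete: on SLES 11 SP1 the High Availability Extension ships Corosync/Pacemaker, where the cluster nodes exchange heartbeats among themselves, so no extra control server is required for two nodes. A rough sketch with the crm shell; the resource names, init script, and address are made up:

        # floating IP plus an init-script-managed service, kept together and
        # failing over between the two nodes as a unit
        crm configure primitive app-ip ocf:heartbeat:IPaddr2 \
            params ip=192.168.1.100 cidr_netmask=24 \
            op monitor interval=10s
        crm configure primitive app-svc lsb:myapp op monitor interval=30s
        crm configure group app-group app-ip app-svc
        crm configure property no-quorum-policy=ignore  # usual for 2-node clusters

    On RHEL 5.1 the equivalent in-distribution stack is the Red Hat Cluster Suite (cman/rgmanager) rather than Pacemaker.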

    Read the article

  • SQL SERVER – Example of Performance Tuning for Advanced Users with DB Optimizer

    - by Pinal Dave
    Performance tuning is such a subject that everyone wants to master it. In the beginning everybody is at a novice level and spends lots of time learning how to master the art of performance tuning. However, as we progress further, tuning the system keeps getting more difficult. I understood early in my career that there should be no room for ego in the technology field. There are always better solutions and better ideas out there, and we should not resist them. Instead of resisting change and the new wave, I personally adopt it. Here is a similar example: as I progress toward the master level of performance tuning, I find it harder to come up with optimal solutions. In such scenarios I rely on various tools to teach me how I can do things better. Once I learn about the tools, I am often able to come up with better solutions the next time I face a similar situation. A few days ago I received a query where the user wanted to tune it further to get the maximum performance out of it. I have re-written a similar query with the help of the AdventureWorks sample database.

        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh
                ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s
                ON edh.ShiftID = s.ShiftID;

    The user had a query similar to the one above; it was used in a very critical report, and he wanted to get the best out of it. When I looked at the query, here were my initial thoughts: use only the columns the application actually needs in the SELECT statement, and look at the query pattern and data workload to find the optimal index for it. Before I could give further solutions, I was told by the user that they need all the columns from all the tables and that creating an index was not allowed in their system. He could only re-write the query or use hints to tune it further. Now I was boxed in by constraints: I believe * is not a great idea, but if they want all the columns, we can't do much besides using *. Additionally, if I cannot create an index, I must come up with some creative way to write this query. I personally do not like to use hints in my applications, but there are cases when hints work out magically and give optimal solutions. Finally, I decided to use Embarcadero's DB Optimizer. It is a fantastic tool and very helpful when it comes to performance tuning. I have previously explained how it works over here. First open DB Optimizer and open a Tuning Job from File >> New >> Tuning Job. Once the Tuning Job is open, follow the steps indicated in the accompanying diagram. Essentially we take our original script and paste it into Step 1: New SQL Text; right after that we enable Step 2 for generating various cases, Step 3 for detailed analysis, and Step 4 for executing each generated case. Finally we click Analysis in Step 5, which generates the detailed analysis report in the result pane. It generates various cases of T-SQL based on the original query: it applies the available hints to the query, generates the various execution plans, and displays them in the result. You can clearly notice that the original query had a cost of 0.0841 and about 607 logical reads, whereas the variations that follow it have different execution costs and logical reads. There are a few cases with higher logical reads and a few with very low logical reads.
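    The kind of rewrite permutations such a tool generates can be reproduced by hand for the join-hint family; a sketch against the same AdventureWorks query, with only the OPTION clause varying (compare each under SET STATISTICS IO ON):

        -- each variant forces a different physical join strategy for the plan
        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh
                ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s
                ON edh.ShiftID = s.ShiftID
        OPTION (LOOP JOIN);  -- also try OPTION (HASH JOIN) and OPTION (MERGE JOIN)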
    If we pay attention, the row immediately after the original query has Merge_Join_Query in its description, the lowest execution cost (0.044), and the lowest logical reads (29). This row contains the query which is the most optimal re-write of the original query. Let us double-click on it. Here is the query:

        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh
                ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s
                ON edh.ShiftID = s.ShiftID
        OPTION (MERGE JOIN);

    Notice that the query above has an additional Merge Join hint. With the help of this query hint, the query now performs much better than before. The entire process takes less than 60 seconds. Please note that the Merge Join hint was optimal for this query, but it is not necessarily true that the same hint will be helpful in all queries. Additionally, if the workload or data pattern changes, the merge join hint may no longer be the optimal join; in that case, we would have to redo the entire exercise once again. This is the reason I do not like to use hints in my queries, and I discourage all of my users from using them. However, if you look at this example, this is a great case where hints optimize the performance of the query. It is humanly not possible to test every query hint and index option against a query to figure out the most optimal solution. Sometimes we need to depend on efficiency tools like DB Optimizer to guide us and then select the best option from the suggestions provided. Let me know what you think of this article, as well as your experience with DB Optimizer. Please leave a comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to link data in different worksheets

    - by user2961726
    I tried consolidation, but I cannot get the following to work; it keeps saying "no data consolidated". Could somebody try this dummy application and, if they figure out how to do what is described below, give me a step-by-step guide so I can learn to do it myself? I'm not sure whether I need any coding for this. In the dummy application I have two worksheets, one named "1st" and the other "Cases". In the "1st" worksheet you can insert and delete records in the "Case" table at the bottom. What I want is this: when I insert a row into the Case table in worksheet "1st" and enter the data for that row, that data should automatically appear in the table in the "Cases" worksheet. I can't seem to get this to work. Also, if I delete a row from the table in worksheet "1st", it should automatically remove that record from the "Cases" worksheet table. Please help. The spreadsheet is here: http://ge.tt/8sjdkVx/v/0
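    For the read-only half of this (mirroring values, not propagating row inserts and deletes), plain cell links are enough. A sketch, assuming the Case table starts at cell A2 of sheet "1st"; sheet names beginning with a digit must be quoted:

        ='1st'!A2

    Dragging that formula down mirrors the column into "Cases". Automatically adding and removing whole rows on insert/delete, though, generally needs a worksheet-change macro in VBA (or a refreshed query/pivot) rather than formulas.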

    Read the article

  • Agile Development

    - by James Oloo Onyango
    A lot of literature has been and is being written about agile development and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, I have found Robert C. Martin's "A Satire of Two Companies" to be both the most concise and the most thorough! Enjoy the read!

    Rufus Inc: Project Kick-Off

    Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting. "We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market. "We must have this new project up and working by fourth quarter, October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table. His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter. Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?" You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts. "Sir, we can't tell you how long the analysis will take until we have some requirements." "The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?" Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?" "Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mms are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere. "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?" Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months." "I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle. BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working.
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your cross-functional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon." Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!" You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do. Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress. After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describe a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus. At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159-page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard. You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return?
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is. Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents. While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked. The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom. On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3. **** And then, a miracle happens. **** On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing." As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer. They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.) As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawed; no, useless; no, worse than useless.
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it." So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you. The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrinkwrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week." Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagrams are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association. Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it." You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it." Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents.
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more, preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!" You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements. Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek. Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing, the marketing folks are complaining that the product isn't anything like they specified, and the liquor store won't accept your credit card anymore. Something has to give. On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1." As he leaves the conference room, he is heard to mutter: "Empowerment, bah!" * * * Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done?" he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five-hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy. In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.

Rupert Industries: Project Alpha

Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity." "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit." Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product. * * * "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second," you respond. "I need to be clear about this."
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing. During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed. At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system." A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true. The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation. As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost. At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning. * * * The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfect week for every 3 real weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it." Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry.
I know the drill by now."Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."   You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."   "So," says Jane, can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."   So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.   Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation." Says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."   "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."   One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is best to let the developers volunteer for tasks than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.   The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks.   Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.   "I was worried that that might happen," you say, "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.   So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper." says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.   The negotiation is not painless. 
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny." In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels can be achieved in the next 3 weeks. And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things had quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements, Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour. "I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration." * * * The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!" "Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replied Elaine, "It doesn't do anything at all; there is no code." "So, consider our task; can you think of something the code should do?" "Sure," Elaine said with youthful assurance, "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughed Elaine. "I think we'd have to get the database object from some registry and call the Connect() method." "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hay!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case. As development proceeded, the developers changed partners once or twice a day.
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.   Whenever a pair finished something significant whether a whole task or simply an important part of a task they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.   The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.   Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"   You grimace. You'd been waiting for a good time to mention this to Jane but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."   Jane eyes you with suspicion. The stress of last Monday's negotiations had still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."   You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."   "Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.   Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."   Painful though the last iteration was, it had calibrated your velocity numbers. 
The next iteration went much better. Not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.   By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.   The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.   The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.   Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.   Indeed, things are looking up!

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical. He is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose, and repeats the same examples several times. I'm not sure he properly describes corner cases. The first project he outsourced was a failure. I think he over-described some details but under-described corner cases. That, and/or the coder he hired didn't think through the corner cases and ask appropriate questions; I'm not sure. I got on IM with him, and it took me half an hour to dig out a description that should have taken five minutes or less to give. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed. He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it, or remove these rows under these conditions. So the challenge is different from spec'ing a web app.

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to check-ins, or seeing the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release, all of that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below, we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to the Business, has a role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, then with knowledge of which Builds are in which environments you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which Releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

        select <fields>
        from WorkItems
        [where <condition>]
        [order by <fields>]
        [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.

    Figure: Saving Queries locally can be useful
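    The same "time machine" query can also be run programmatically. The following is a minimal sketch using the TFS 2010 client object model (the Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client assemblies); the collection URL and the 'Contoso' project name are hypothetical placeholders, and the field names are the standard System.* reference names.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class AsOfQuerySketch
        {
            static void Main()
            {
                // Hypothetical collection URL -- replace with your own server.
                var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://tfs.example.com:8080/tfs/DefaultCollection"));
                var store = collection.GetService<WorkItemStore>();

                // Which Requirements were active on 1 March 2010?
                const string wiql =
                    @"select [System.Id], [System.Title], [System.State]
                      from WorkItems
                      where [System.TeamProject] = 'Contoso'
                        and [System.WorkItemType] = 'Requirement'
                      order by [System.Id]
                      asof '2010-03-01'";

                foreach (WorkItem workItem in store.Query(wiql))
                    Console.WriteLine("{0}: {1} ({2})", workItem.Id, workItem.Title, workItem.State);
            }
        }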
    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release Build is created, as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between that time and the branch's creation. A better method is to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that enables the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it both makes it difficult for them to reject a Requirement when it passes all of the tests and provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states.

    The second scenario is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test, and its test results, for a Unit Test designed to test for the existence, and then the re-occurrence, of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many automated Integration and Regression tests that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not recur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it be broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, as with Bugs, if the Change Request is big enough that it requires breaking down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member.

    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?

    - by Nupul
    We are planning to adopt user-stories to capture stakeholder 'intent' in a lightweight fashion rather than a heavy SRS (software requirements specification). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into SRS-like language with all the attributes, priorities, inputs, outputs, sources, destinations, etc. User-stories 'eliminate' the need for a formal SRS-like artifact to begin with, so what's the point in having an SRS? How should I convince my team (who are all very qualified CS folks, by the way, both by education and practice) that the SRS would be 'eliminated' if we adopted user-stories for capturing the functional requirements of the system? (NFRs etc. can be captured too, but that's not the intent of the question.) So here's my 'work-flow' argument: capture initial requirements as user-stories and later elaborate them into use-cases (which need to be documented at a low level, i.e. describing interactions with the UI prototypes/mockups, and are a deliverable post-deployment), thus going from user-stories to use-cases rather than from user-stories to SRS to use-cases. How are you all currently capturing user-stories at your workplace (if at all), and how do you suggest I 'make a case' for the absence of an SRS in the presence of user-stories?

    Read the article

  • What is a good support knowledge base tool?

    - by Guillaume
    I have been searching for a tool to help my team organize its knowledge for resolving recurring support cases. I know this question will probably be closed, but I'll try my luck anyway, because I know I can get some good answers about this. Context: our team is developing and supporting a huge application (lots of different screens and workflow processes). We already have a good tool for managing our documentation, but we are struggling with support cases. Support actions often involve quite a lot of manual steps to fix stuff, and the knowledge for these actions is passed on more by 'oral transmission' than through modern tools. We need an efficient way to store it in a knowledge base, so that we can retrieve similar cases based on patterns (a stacktrace, an error message, a component name, a workflow step, ...) and ranked by similarity. Our wiki search is not very powerful when it comes to this kind of search, and the team members don't want to 'waste' time writing a report that will never be found... Do you know of an efficient knowledge base tool for this kind of use case?

    Read the article

  • Working with multiple interfaces on a single mock.

    - by mehfuzh
    Today I will cover a very simple topic, which can be useful in cases where we want to mock different interfaces on our expected mock object. Our target interface is simple and looks like:

        public interface IFoo : IDisposable
        {
            void Do();
        }

    Now, as we can see, our target interface implements IDisposable. In normal cases, if we implement it in a class, the language rules require us to implement IDisposable as well [no doubt about it]. Whether or not there are more complex cases, we want to ensure that, rather than having an extra call (..As()) or constructs to prepare it for us, we do it in the simplest way possible. Therefore, keeping that in mind, first we create a mock of IFoo:

        var foo = Mock.Create<IFoo>();

    Then, as we are interested in IDisposable, we simply do:

        var iDisposable = foo as IDisposable;

    Finally, we proceed with our existing mock code. Considering the current context, I will check whether the Dispose method has invoked our mock code successfully:

        bool called = false;

        Mock.Arrange(() => iDisposable.Dispose()).DoInstead(() => called = true);

        iDisposable.Dispose();

        Assert.True(called);

    Further, we assert our expectation as follows:

        Mock.Assert(() => iDisposable.Dispose(), Occurs.Once());

    Hopefully that will help a bit. Stay tuned. Enjoy!!
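    Pulled together, the snippets above form the following self-contained sketch. It assumes Telerik JustMock's Mock/Occurs API as used in the post; the final check is written with a plain exception instead of Assert.True so the sketch carries no test-framework dependency, so wire in your own test attributes and assertions.

        using System;
        using Telerik.JustMock;

        public interface IFoo : IDisposable
        {
            void Do();
        }

        public class MultipleInterfaceMockSketch
        {
            public void DisposeArrangementIsHonoured()
            {
                // One mock, addressed through two interfaces.
                var foo = Mock.Create<IFoo>();
                var iDisposable = foo as IDisposable;

                bool called = false;
                Mock.Arrange(() => iDisposable.Dispose()).DoInstead(() => called = true);

                iDisposable.Dispose();

                // Both checks verify the call was routed to our arrangement.
                if (!called) throw new Exception("Dispose arrangement was not invoked");
                Mock.Assert(() => iDisposable.Dispose(), Occurs.Once());
            }
        }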

    Read the article

  • Extending AutoVue Through the API

    - by GrahamOracle
    The AutoVue API (previously called the “VueBean” API) is a great way to extend AutoVue Client/Server Deployment – specifically the client component – beyond the out-of-the-box capabilities and into new use-cases. In addition to having a solid grasp of J2SE programming, make sure to leverage the following resources if you’re developing, or interested in developing, customizations/extensions to AutoVue Client/Server Deployment:

    Programmer’s Guide: Before all else, read through the AutoVue API Programmer’s Guide to get an understanding of the architecture of the API. The Programmer’s Guide is included with the installation of AutoVue, and is posted on the Oracle Technology Network (OTN) website for the recent versions of AutoVue: http://www.oracle.com/technetwork/documentation/autovue-091442.html

    Javadocs: The AutoVue API Javadocs document the many packages, classes, and methods available to you. The Javadocs are included in the product installation under the \docs\JavaDocs\VueBean folder (the easiest starting point is the file index.html).

    Integrations Forum: If you have development questions that aren’t answered through the documentation, feel free to register and post in the public AutoVue Integrations Forum. For more information refer to the following blog post from October 2010: https://blogs.oracle.com/enterprisevisualization/entry/exciting_news_autovue_integrat

    Code Samples: Although the Oracle Support team’s scope of Support for API/customization topics is to answer questions regarding information already provided in the documentation (i.e. not to design or develop custom solutions), there are cases where Support comes across interesting samples or code snippets that may benefit various customers. In those cases, our Support team posts the samples into the Oracle knowledge base, and tracks them through a single reference note. The link to the KM Note depends on how you currently access the My Oracle Support portal:

    Flash interface: https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=REFERENCE&id=1325990.1

    (New) HTML interface: https://supporthtml.oracle.com/epmos/faces/ui/km/SearchDocDisplay.jspx?type=DOCUMENT&id=1325990.1

    Happy coding!

    Read the article

  • Adding tolerance to a point in polygon test

    - by David Gouveia
    I've been using this method, which was taken from Game Coding Complete, to detect whether a point is inside of a polygon. It works in almost every case, but is failing on a few edge cases, and I can't figure out the reason. For example, given a polygon with vertices at (0,0), (0,100) and (100,100), the algorithm is returning:

    - True for any point strictly inside the polygon
    - False for any of the vertices
    - False for (0, 50), which lies on one of the edges of the polygon
    - True (?) for (50,50), which is also on one of the edges of the polygon

    I'd actually like to relax the algorithm so that it returns true in all of these cases. In other words, it should return true for points that are strictly inside, for the vertices themselves, and for points on the edges of the polygon. If possible I'd also like to give it enough tolerance so that it always tends towards "true" in the face of floating-point fluctuations. For example, I have another method that, given a line segment and a point, returns the closest location on the line segment to the given point. Currently, given a point outside the polygon and one of its edges, there are cases where the result is categorized as being inside by the method above, while other points are considered outside. I'd like to give it enough tolerance so that it always returns true in this situation. The way I've currently solved the problem is a hack, which consists of using an external library to inflate the polygon by a few pixels and performing the tests on the inflated polygon, but I'd really like to replace this with a proper solution.
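    For reference, here is a minimal sketch of one way to get the relaxed behaviour described above (this is not the Game Coding Complete routine itself, and the Vec2 type is a stand-in for whatever point type is in use): run the usual even-odd crossing test for the strict interior, and separately accept any point within an epsilon distance of an edge, which also covers the vertices.

        using System;

        struct Vec2 { public double X, Y; public Vec2(double x, double y) { X = x; Y = y; } }

        static class PolygonTest
        {
            // Squared distance from point p to segment ab.
            static double DistSqToSegment(Vec2 p, Vec2 a, Vec2 b)
            {
                double abx = b.X - a.X, aby = b.Y - a.Y;
                double t = ((p.X - a.X) * abx + (p.Y - a.Y) * aby)
                           / (abx * abx + aby * aby + double.Epsilon); // guard zero-length edges
                t = Math.Max(0.0, Math.Min(1.0, t)); // clamp to the segment
                double cx = a.X + t * abx, cy = a.Y + t * aby;
                return (p.X - cx) * (p.X - cx) + (p.Y - cy) * (p.Y - cy);
            }

            public static bool Contains(Vec2[] poly, Vec2 p, double epsilon = 1e-6)
            {
                bool inside = false;
                for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
                {
                    // Tolerant boundary test: on or near an edge (or vertex) counts as inside.
                    if (DistSqToSegment(p, poly[j], poly[i]) <= epsilon * epsilon)
                        return true;

                    // Standard even-odd (crossing number) test for the strict interior.
                    if ((poly[i].Y > p.Y) != (poly[j].Y > p.Y) &&
                        p.X < (poly[j].X - poly[i].X) * (p.Y - poly[i].Y)
                              / (poly[j].Y - poly[i].Y) + poly[i].X)
                        inside = !inside;
                }
                return inside;
            }
        }

    With the triangle from the question, Contains returns true for the strict interior, for all three vertices, and for both (0,50) and (50,50); increasing epsilon makes the test lean further towards "true" under floating-point noise.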

    Read the article

  • Nullable types and ?? operator C# [en-US]

    - by ruimachado
    Nullable types vs. non-nullable types

    While developing our C# projects, we frequently compare variables against null to avoid null exceptions. This simple check is usually coded as an "x == null" comparison inside an if clause. However, not all types of variables are nullable, which means that setting a variable to null is not allowed in every case; it depends on what kind of type you are defining. But what if there were an extension to your non-nullable type that would convert it to a nullable one? This extension really exists. As I said before, in C# you have nullable types, which represent all the values of an underlying type plus an additional null value, and they can be declared easily using "T?", where T is the type of the variable. For example, the normal int type cannot be null, so it is a non-nullable type; however, if you define an "int?", your variable can be null. What you do is convert a non-nullable type into a nullable type.

    Example:

        int x = null;    // not allowed
        int? x = null;   // allowed

    While using nullable types you can check whether a variable is null the same way you do with reference types. But what about setting a default value when a certain variable is null? In these cases the C# language lets you set a default value when you assign a nullable type to a non-nullable type, using the ?? operator. If you don't use this operator, you can still catch the InvalidOperationException that is thrown in these cases. Using the ?? operator your code becomes cleaner and easier to read, and you get a bonus: you can set a default value for multiple variables by chaining ?? operators.

    That's it. Thanks, Rui Machado rpmachado.wordpress.com
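    The post's inline code screenshots did not survive extraction here, so the following is a small illustrative sketch (not the author's original snippets) of the null check, the exception case, and the ?? operator, including a chained default:

        using System;

        class NullCoalescingDemo
        {
            static void Main()
            {
                int? maybeCount = null;

                // Nullable value types can be compared to null like reference types.
                if (maybeCount == null)
                    Console.WriteLine("no value yet");

                // Without ??: accessing .Value throws InvalidOperationException when null.
                try
                {
                    int crash = maybeCount.Value;
                }
                catch (InvalidOperationException)
                {
                    Console.WriteLine("Nullable object must have a value.");
                }

                // With ??: fall back to a default instead of throwing.
                int count = maybeCount ?? 0;

                // ?? chains: the first non-null value wins.
                int? a = null, b = null;
                int firstSet = a ?? b ?? -1;

                Console.WriteLine("{0} {1}", count, firstSet);
            }
        }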

    Read the article

  • Automatically generate table of function pointers in C.

    - by jeremytrimble
    I'm looking for a way to automatically (as part of the compilation/build process) generate a "table" of function pointers in C. Specifically, I want to generate an array of structures something like:

        typedef struct
        {
            void (*p_func)(void);
            char * funcName;
        } funcRecord;

        /* Automatically generate the lines below: */
        extern void func1(void);
        extern void func2(void);
        /* ... */

        funcRecord funcTable[] =
        {
            { .p_func = &func1, .funcName = "func1" },
            { .p_func = &func2, .funcName = "func2" }
            /* ... */
        };
        /* End automatically-generated code. */

    ...where func1 and func2 are defined in other source files. So, given a set of source files, each of which contains a single function that takes no arguments and returns void, how would one automatically (as part of the build process) generate an array like the one above that contains each of the functions from the files? I'd like to be able to add new files and have them automatically inserted into the table when I re-compile. I realize that this probably isn't achievable using the C language or preprocessor alone, so consider any common *nix-style tools fair game (e.g. make, perl, shell scripts (if you have to)).

    But why? You're probably wondering why anyone would want to do this. I'm creating a small test framework for a library of common mathematical routines. Under this framework there will be many small "test cases," each of which has only a few lines of code that will exercise each math function. I'd like each test case to live in its own source file as a short function. All of the test cases will get built into a single executable, and the test case(s) to be run can be specified on the command line when invoking the executable. The main() function will search through the table and, if it finds a match, jump to the test case function. Automating the process of building up the "catalog" of test cases ensures that test cases don't get left out (for instance, because someone forgets to add one to the table) and makes it very simple for maintainers to add new test cases in the future (just create a new source file in the correct directory, for instance). Hopefully someone out there has done something like this before. Thanks, StackOverflow community!

    Read the article

  • PHP & MySQL deleting multiple rows script problem.

    - by oReiLLy
    I'm trying to delete two table rows from two different tables at once when a user clicks the delete button, but for some reason I can't get the table rows to delete. Can someone help me figure out what is wrong with my script? Thanks.

    Here are the MySQL tables:

        CREATE TABLE cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            file VARCHAR(255) NOT NULL,
            case VARCHAR(255) NOT NULL,
            name VARCHAR(255) NOT NULL,
            PRIMARY KEY (id)
        );

        CREATE TABLE users_cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            cases_id INT UNSIGNED NOT NULL,
            user_id INT UNSIGNED NOT NULL,
            PRIMARY KEY (id)
        );

    Here is the PHP & MySQL script:

        if(isset($_POST['delete_case'])) {
            $cases_ids = array();
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli, "SELECT cases.*, users_cases.* FROM cases INNER JOIN users_cases ON users_cases.cases_id = cases.id WHERE users_cases.user_id='$user_id'");
            if (!$dbc) {
                print mysqli_error($mysqli);
            } else {
                while($row = mysqli_fetch_array($dbc)) {
                    $cases_ids[] = $row["cases_id"];
                }
            }
            foreach($_POST['delete_id'] as $di) {
                if(in_array($di, $cases_ids)) {
                    $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                    $dbc = mysqli_query($mysqli, "DELETE FROM users_cases WHERE cases_id = '$delete_id'");
                    $dbc2 = mysqli_query($mysqli, "DELETE FROM cases WHERE id = '$delete_id'");
                }
            }
        }

    Here is the XHTML:

        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>
        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>
        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>

    Read the article

  • Dictionary w/ null key?

    - by Ralph
    Firstly, why doesn't Dictionary<TKey, TValue> support a single null key? Secondly, is there an existing dictionary-like collection that does? I want to store an "empty" or "missing" or "default" System.Type, and I thought null would work well for this. More specifically, I've written this class:

        class Switch
        {
            private Dictionary<Type, Action<object>> _dict;

            public Switch(params KeyValuePair<Type, Action<object>>[] cases)
            {
                _dict = new Dictionary<Type, Action<object>>(cases.Length);
                foreach (var entry in cases)
                    _dict.Add(entry.Key, entry.Value);
            }

            public void Execute(object obj)
            {
                var type = obj.GetType();
                if (_dict.ContainsKey(type))
                    _dict[type](obj);
            }

            public static void Execute(object obj, params KeyValuePair<Type, Action<object>>[] cases)
            {
                var type = obj.GetType();
                foreach (var entry in cases)
                {
                    if (entry.Key == null || type.IsAssignableFrom(entry.Key))
                    {
                        entry.Value(obj);
                        break;
                    }
                }
            }

            public static KeyValuePair<Type, Action<object>> Case<T>(Action action)
            {
                return new KeyValuePair<Type, Action<object>>(typeof(T), x => action());
            }

            public static KeyValuePair<Type, Action<object>> Case<T>(Action<T> action)
            {
                return new KeyValuePair<Type, Action<object>>(typeof(T), x => action((T)x));
            }

            public static KeyValuePair<Type, Action<object>> Default(Action action)
            {
                return new KeyValuePair<Type, Action<object>>(null, x => action());
            }
        }

    It's for switching on types. There are two ways to use it:

    - Statically: just call Switch.Execute(yourObject, Switch.Case<YourType>(x => x.Action()))
    - Precompiled: create a Switch, and then use it later with switchInstance.Execute(yourObject)

    It works great except when you try to add a default case to the "precompiled" version (null argument exception).
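    There is no BCL dictionary that accepts a null key, but a thin wrapper that keeps the null-key value in a side slot is a common workaround. The sketch below is illustrative (the type and its members are invented for this example) and could back the Switch class's _dict so that the Default case also works in the precompiled form:

        using System.Collections.Generic;

        // Minimal sketch: Dictionary semantics plus exactly one null key.
        class NullKeyDictionary<TKey, TValue> where TKey : class
        {
            private readonly Dictionary<TKey, TValue> _inner = new Dictionary<TKey, TValue>();
            private bool _hasNullKey;
            private TValue _nullValue;

            public void Add(TKey key, TValue value)
            {
                if (key == null)
                {
                    if (_hasNullKey)
                        throw new System.ArgumentException("An item with the null key has already been added.");
                    _hasNullKey = true;
                    _nullValue = value;
                }
                else
                {
                    _inner.Add(key, value);
                }
            }

            public bool ContainsKey(TKey key)
            {
                return key == null ? _hasNullKey : _inner.ContainsKey(key);
            }

            public TValue this[TKey key]
            {
                get
                {
                    if (key != null) return _inner[key];
                    if (!_hasNullKey) throw new KeyNotFoundException();
                    return _nullValue;
                }
            }
        }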

    Read the article

  • Where’s my MD.050?

    - by Dave Burke
    A question that I’m sometimes asked is “where’s my MD.050 in OUM?” For those not familiar with an MD.050, it serves the purpose of being a Functional Design Document (FDD) in one of Oracle’s legacy Methods. Functional Design Documents have existed for many years, with their primary purpose being to describe the functional aspects of one or more components of an IT system, typically a Custom Extension of some sort. So why don’t we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into “Design mode” too early in the process, whereas OUM places more emphasis on gathering and describing the functional requirements of a system ahead of the formal Analysis and Design process. So that just means more work up front for the Business Analyst or Functional Consultants, right? Well, no… the design of a solution, particularly when it involves a complex custom extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite: by putting more emphasis on clearly understanding the nuances of the functional requirements early in the process, the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost effective) to update a document or a diagram than to change lines of code.

    So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a “User Story”, or it could be what Cockburn1 describes as a “fully dressed Use Case”. It is worth mentioning at this point the highly scalable nature of OUM, in the sense that “documents” should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more Agile approach to describing functional requirements; through “User Stories” perhaps. In any event, it is quite common for a Custom Extension to involve the creation of several “components”, i.e. some new screens, an interface, a report etc. Therefore several Use Cases might be required, which in turn can then be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail, and assembled into a Package, you can now create an Analysis Model for the Package. An Analysis Model is conceptual in nature and, depending on the solution being developed, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams etc.) which collectively describe the Data, Behavior and User Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Oriented approach, the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, the various elements of the Analysis Model may be represented within the Analysis Specification. If we now return to the original question of “Where’s my MD.050?”:
    The full answer would be:

    - Capture the functional requirements within a Use Case
    - Group related Use Cases into a Package
    - Create an Analysis Model for each Package
    - Consider creating an Analysis Specification (AN.100) as an index to each Analysis Model artifact

    An alternative answer for a relatively simple Custom Extension would be:

    - Capture the functional requirements within a Use Case
    - Optionally, group related Use Cases into a Package
    - Create an Analysis Specification (AN.100) for each Package

    1 Cockburn, A., 2000, Writing Effective Use Cases, Addison-Wesley Professional; Edition 1

    Read the article
