Search Results

Search found 1204 results on 49 pages for 'agile'.


  • Code review versus pair programming

    - by mericano1
    I was wondering what the general idea is about code review and pair programming. I have my own opinion, but I'd like to hear from somebody else as well. Here are a few questions; please give me your opinion, even if only on some of the points. First of all, are you aware of a way to measure the effectiveness of these practices? Do you think that if you pair program, code reviews are unnecessary, or is it still good to have both? Do you think anybody can do code review, or is it better done only by seniors? In terms of productivity, do you think it suffers from pairing all the time, or will you eventually gain it back in the long run?

    Read the article

  • How do bug reports factor in to a sprint?

    - by Mark Ingram
    I've been reading up on Scrum recently. From my understanding, a meeting is held before the sprint starts, to decide what gets moved from the product backlog to the upcoming sprint backlog. Once a feature is completed in the current sprint, it will go into the "Ready to QA" bucket, and it's at this point that I'm getting confused. Do bug reports go back into the product backlog? I assume they can't go back into the sprint backlog as we've already decided what work will be done for this cycle? What happens when QA finds a bug? Where does it go?

    Read the article

  • Requesting quality analysis test cases ahead of implementation/change

    - by arin
    Recently I was assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps in approaching this problem:

    1) Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes".

    2) Knowing the widespread use and effects of this requirement, I asked whether, should it come to a point where the requirement could not be finished prior to release, it would be a viable option to scrap the current state and revert to the state prior to the ex-senior's implementation. The answer was "Most likely: no".

    3) Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written by QA prior to the implementation and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request given my responsibility for this requirement, I did insist, and have fallen out of favor with some of those folks, leaving me in a state of "baffledness".

    Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly?

    P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state due to the complexity of the problem and the lack of time. This only happened after a two-hour meeting with other seniors held to convince the aforementioned folks.

    Read the article

  • Creating Multiple Queries for Running Objects

    - by edurdias
    Running Objects combines the power of LINQ with metadata definition to let you leverage multiple perspectives on your queries of objects. By default, RO brings back all the objects in natural insertion order, including all the visible properties of your class. In this post, we will look at how the QueryAttribute class is structured and how to make use of it. The QueryAttribute class: this class is responsible for specifying all the possible perspectives of a list of objects. In other words, it is...(read more)

    Read the article

  • Naming your unit tests

    - by kerry
    When you create a test for your class, what kind of naming convention do you use for the tests? How thorough are your tests? I have lately switched from the conventional camel-case test names to lower-case letters with underscores. I have found this increases readability and causes me to write better tests. A simple utility class:

    public class ArrayUtils {
      public static <T> T[] gimmeASlice(T[] anArray, Integer start, Integer end) {
        // implementation (feeling lazy today)
      }
    }

    I have seen some people write a test like this:

    public class ArrayUtilsTest {
      @Test
      public void testGimmeASliceMethod() {
        // do some tests
      }
    }

    A more thorough and readable test class would be:

    public class ArrayUtilsTest {
      @Test
      public void gimmeASlice_returns_appropriate_slice() { ... }

      @Test
      public void gimmeASlice_throws_NullPointerException_when_passed_null() { ... }

      @Test
      public void gimmeASlice_returns_end_of_array_when_slice_is_partly_out_of_bounds() { ... }

      @Test
      public void gimmeASlice_returns_empty_array_when_slice_is_completely_out_of_bounds() { ... }
    }

    Looking at this test class, you have no doubt what the method is supposed to do. And when one test fails, you will know exactly what the issue is.
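
    A runnable sketch of how one of these tests might look when filled in, assuming JUnit 4 and an Arrays.copyOfRange-based implementation (both are my assumptions; the original post leaves the method body out):

    import static org.junit.Assert.assertArrayEquals;
    import java.util.Arrays;
    import org.junit.Test;

    public class ArrayUtilsSliceTest {

      // Illustrative implementation; clamping keeps out-of-bounds slices safe.
      static <T> T[] gimmeASlice(T[] anArray, Integer start, Integer end) {
        int from = Math.min(start, anArray.length);
        int to = Math.min(end, anArray.length);
        return Arrays.copyOfRange(anArray, from, to);
      }

      @Test
      public void gimmeASlice_returns_appropriate_slice() {
        Integer[] input = {1, 2, 3, 4, 5};
        assertArrayEquals(new Integer[]{2, 3}, gimmeASlice(input, 1, 3));
      }

      @Test(expected = NullPointerException.class)
      public void gimmeASlice_throws_NullPointerException_when_passed_null() {
        gimmeASlice(null, 0, 1); // dereferencing the null array throws
      }
    }

    Either test's name alone tells you which contract broke when it fails, which is exactly the point the post is making.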

    Read the article

  • Testing Workflows – Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx

    This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one focuses on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt, the better. So here is a partial list of test-first methodologies.

    Ping Pong. Ping Pong is a methodology commonly used in pair programming. One developer writes a new failing test, then hands the keyboard to their partner. The partner writes the production code to get the test passing, then writes the next test before passing the keyboard back to the original developer. The reasoning behind this testing methodology is to facilitate pair programming; that is to say, it shares all the benefits of pair programming, including ensuring that multiple team members are familiar with the code base (i.e. a low bus number).

    Test Blazer. Test Blazing is, in some respects, also a pairing strategy, but the developers don't work side by side on the same task at the same time. Instead, one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code; with these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task. The next day, or later the same day, another developer fetches the latest test suite. Their job is to write the production code that gets those tests passing. Once all the tests pass, they fetch the latest version of the test project from source control to get the newer tests. This methodology has some of the benefits of pair programming, namely lowering the bus number, and it can be a good way of adding an extra developer to a project without slowing it down too much: the production coder isn't slowed down writing tests, and because the tests live in a separate project from the production code, there shouldn't be any merge conflicts despite two developers working on the same solution. This methodology is also a good test for the tests: can another developer figure out what the system should do just by reading the tests? That question gets answered as the production coder works their way through the test blazer's tests.

    Test Driven Development (TDD). TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes, with strict rules for when you should be writing test or production code. You start by writing a failing test (red), then write the simplest production code possible to get the test passing (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle (sketched in code after this excerpt). The goal of TDD isn't the creation of a suite of tests; that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design: the practice is meant to push the design toward small, decoupled, modularized components, which is generally considered a better design than a large, highly coupled ball of mud. TDD accomplishes this through the refactoring cycle, since refactoring can only be done safely when tests are in place. To use TDD, developers must be trained to look for and repair code smells in the system; through repairing these sections of smelly code (i.e. refactoring), the design of the system emerges. For further information on TDD, I highly recommend the series "Is TDD Dead?", which discusses its pros and cons and when it is best used.

    Acceptance Test Driven Development (ATDD). Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, acceptance tests focus on the larger integrated environment. Acceptance tests usually correspond to user stories, which come directly from the customer; unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer. ATDD generally uses the same tools as TDD, though with fewer mocks and test doubles. ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.

    Behaviour Driven Development (BDD). BDD is more about audience than workflow. BDD pushes the testing realm out towards the client: developers, managers and the client all work together to define the tests. Typically, different tooling is used for BDD than for acceptance and unit testing, because the audience is not just developers. Tools using the Gherkin family of languages allow test scenarios to be described in an English-like format, and other tools such as MSpec or FitNesse also strive for highly readable behaviour-driven test suites. Because these tests are public facing (viewable by people outside the development team), the terminology usually changes: you can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests; they're called "examples", "behaviours", "scenarios", or "specifications". This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project, so when you say you need to define the tests at the start of the project, many will immediately give that a lower priority on the project schedule. But if you say you need to define the specification or behaviour of the system before you can start, you'll get more cooperation.

    Keep these test-first and test-after workflows in your tool belt, and you'll find new opportunities to apply them.
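
    As an illustration of the red-green-refactor cycle described above, here is a minimal sketch (JUnit 4, with class and rule names invented for the example, not taken from the post):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceCalculatorTest {

      // Red: write a failing test first; PriceCalculator does not exist yet.
      @Test
      public void total_applies_ten_percent_discount_over_100() {
        assertEquals(99.0, new PriceCalculator().total(110.0), 0.001);
      }
    }

    // Green: the simplest production code that makes the test pass.
    class PriceCalculator {
      double total(double amount) {
        return amount > 100.0 ? amount * 0.9 : amount;
      }
      // Refactor: with the test green, extract constants and restructure
      // safely, then write the next failing test.
    }

    The cycle repeats every few minutes: each new rule arrives as a failing test before any production code changes.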

    Read the article

  • Expected time for a CakePHP MVC form/controller and DB make-up

    - by hephestos
    I would like to know what the average time is for building a form in the MVC pattern with, for example, CakePHP. I built 8 functions; two of them run custom queries, return JSON data, split it, expand it in a model in memory, and deliver it to the view. That amounts to three queries, if you count the array that feeds the view to populate some combo boxes. Why all this? Because I receive data as JSON and split it in order to build rows of data, like a table. Along the way I changed edit.ctp a bit, but not a lot. I also created an external JavaScript file with three functions: one collects data; another, upon a change of a combo, returns the selected values and also does some redirection flow. On average, how much time should all this take?

    Read the article

  • Right mix of planning and programming on a new project

    - by WarrenFaith
    I am about to start a new project (a game, but that's unimportant). The basic idea is in my head, but not all the details. I don't want to start programming without planning, but I am seriously fighting the urge to just do it. I want some planning up front to avoid refactoring the whole app just because a new feature I think of requires it. On the other hand, I don't want to spend multiple months planning (in my spare time) before starting, because I fear I would lose my motivation in that time. What I am looking for is a way of combining both without one dominating the other. Should I run the project Scrum-style? Should I create user stories and then realize them? Should I work feature-driven? (I have some experience with Scrum and the classic "specification to code" way.)

    Read the article

  • Pair programming and unit testing

    - by TheSilverBullet
    My team follows the Scrum development cycle. We have received feedback that our unit testing coverage is not very good. A team member is suggesting the addition of an external testing team to assist the core team, but I feel this will backfire in a bad way. I am thinking of suggesting a pair programming approach instead. I have a feeling that this should help make the code more "test-worthy", and soon the team could move to test-driven development! What are the potential problems that might arise out of pair programming?

    Read the article

  • Scrum - how to carry over a partially complete User Story to the next Sprint without skewing the backlog

    - by Nick
    We're using Scrum and occasionally find that we can't quite finish a User Story in the sprint in which it was planned. In true Scrum style, we ship the software anyway and consider including the User Story in the next sprint during the next Sprint Planning session. Given that the User Story we are carrying over is partially complete, how do we estimate for it correctly in the next Sprint Planning session? We have considered:

    a) Adjusting the number of Story Points down to reflect just the work that remains to complete the User Story. Unfortunately this will mess up reporting on the Product Backlog.

    b) Closing the partially completed User Story and raising a new one, with fewer Story Points, to implement the remainder of that feature. This will hurt our ability to see retrospectively what we didn't complete in that sprint, and seems a bit time-consuming.

    c) Not bothering with either a or b, and continuing to guess during Sprint Planning, saying things like "Well, that User Story may be X story points, but I know it's 95% finished, so I'm sure we can fit it in."

    Read the article

  • Is it a good idea to appoint one of the scrum team members or the scrum master as Product Owner?

    - by Sandy
    Lately we had a project in which the client was busy touring. As usual, a scrum team was formed, and management decided to appoint our analyst as Product Owner, since the client wouldn't be able to participate actively. The analyst was the one who had worked closely with the client on requirement analysis and specification drafting. The client didn't have the time to review the first two releases. Everything went smoothly until the client saw the third release; he wasn't satisfied with some functionality, and it had been introduced by the makeshift Product Owner (our analyst). We were told to wait until the design team finished mock-ups of all pages and the client had checked each one and approved continuing the work. The scrum team was there, but there were no sprints; we finished the work almost as in the classic waterfall method. Is it a good idea to appoint a scrum team member or the scrum master as Product Owner? Do we need to follow scrum in the absence of client/product owner participation?

    Read the article

  • What is the next promotion for a scrum master?

    - by gnebar
    I'm currently a scrum master. I have been offered a promotion to a role that will allow me to have a wider impact (more involvement in company-wide architectural decisions, possible secondment to kick-start major projects, etc.). The role and title of the job have yet to be decided, but my company is open to guidance from me. I'm happy that I can mould the role to suit me and the company, but I'm unsure about the job title that fits this role. Technical Evangelist has been suggested, but I'm not sure that is the correct title. I'm keen to proceed down the technical route. What would you suggest? What other roles do people take after scrum master/technical lead? EDIT: I am aware that my current role is a mix of technical lead and scrum master, but that's how we do it in my company :)

    Read the article

  • What level/format of access should a client be given to the issue tracking system?

    - by dukeofgaming
    So, I used to think that it would be a good idea to give the customer access to the issue tracking system, but now I've seen that it creates less than ideal situations, like:

    - The customer judging progress solely on ticket count

    - Developers declining to add issues, to avoid the customer thinking there is less progress

    - The customer appointing people on their side to add issues who don't always do a good job (lots of duplicate issues, insufficient information to reproduce, and other things that distract people from doing their real job)

    However, I think customers should have access to some indicators or proof that progress is being made, as well as the right to report bugs. So what would be the ideal solution to this situation, especially getting out of, or improving, the first situation described?

    Read the article

  • Is a hierarchical product backlog a good idea in TFS 2012-2013?

    - by Matías Fidemraizer
    I'd like to check that I'm not going about this the wrong way. My team project is using Visual Studio Scrum 2.x. Since each area/product has many kinds of requirements (security, user interface, HTTP/REST services...), I have tried to manage this by creating "parent backlogs" which are "open forever" and contain generic requirements. Those parent backlogs contain other "open forever" backlogs and/or sprint backlogs. For example:

    HTTP/REST Services (forever)
      Profiles API (forever)
        POST profile (forever)
          We need a basic HTTP/REST profiles API to register new user profiles (sprint backlog)

    Is this the right way of organizing the product backlog? Note: I know there are different points of view, and what is right for some would be wrong for others. I'm looking for validation of whether this is a possible good practice in TFS with Visual Studio Scrum.

    Read the article

  • Are there any arguments that can make a contractor reconsider working on a fixed price?

    - by julien
    I've been working for a contractor who brings in some good projects, but they are all fixed-price and often fixed-time. As a result he always has me making a quote against loose requirements, which never fails to create a lot of tension due to feature creep. He claims he'd never get a contract if he couldn't agree on a price with his clients first, but as far as I'm concerned I don't want to go through another project under these terms. Is there any argument I could make to have him pay me by the hour, or should I just suck less at estimating?

    Read the article

  • What modelling technique do you use for your continuous design?

    - by d3prok
    Together with my teammates, I'm trying to self-learn XP and apply its principles. We're successfully working with TDD and happily refactoring our code and design. However, we're having problems with the overall view of the project's design. Lately we've been wondering what the "good" practices would be for effective continuous design of the code. We're not strictly seeking the right modelling technique, like CRC cards, communication diagrams, etc.; instead we're looking for a way to constantly collaborate on the high-level view of the system (not too high, though). I'll try to explain myself better: I'm interested in the way CRC cards are used to brainstorm a model, and I would mix them with some very rough UML diagrams (which we already use). However, what we're looking for are some principles for deciding when, how and how much to model during our iterations. Do you have any suggestions on this matter? For example, how do you and your teammates know when you need a design session, and how do your design meetings work?

    Read the article

  • What arguments can I use to "sell" the BDD concept to a team reluctant to adopt it?

    - by S.Robins
    I am a bit of a vocal proponent of the BDD methodology. I've been applying BDD for a couple of years now, and have adopted StoryQ as my framework of choice when developing DotNet applications. Even though I have been unit testing for many years, and had previously shifted to a test-first approach, I've found that I get much more value out of using a BDD framework, because my tests capture the intent of the requirements in relatively clear English within my code, and because my tests can execute multiple assertions without ending the test halfway through - meaning I can see at a glance which specific assertions pass or fail, without debugging to prove it. This has really been the tip of the iceberg for me, as I've also noticed that I am able to debug both test and implementation code in a more targeted manner, with the result that my productivity has grown significantly, and that I can more easily determine where a failure occurs if a problem happens to make it all the way to the integration build, thanks to the output that makes its way into the build logs. Further, the StoryQ API has a lovely fluent syntax that is easy to learn and can be applied in an extraordinary number of ways, requiring no external dependencies in order to use it.

    So with all of these benefits, you would think it would be easy to introduce the concept to the rest of the team. Unfortunately, the other team members are reluctant to even look at StoryQ to evaluate it properly (let alone entertain the idea of applying BDD), and have convinced each other to try to remove a number of StoryQ elements from our own core testing framework, even though they originally supported the use of StoryQ and it doesn't impact any other part of our testing system. Doing so would increase my workload significantly overall, and really goes against the grain, as I am convinced through practical experience that it is a better way to work test-first in our particular environment, and can only lead to improvements in the quality of our software, given that I've found it easier to stick with test-first using BDD. So the question really comes down to the following:

    1. What arguments can I use to really drive the point home that it would be better to use StoryQ, or at the very least apply the BDD methodology?

    2. Can you point me to any anecdotal evidence that I can use to support my argument to adopt BDD as our standard method of choice?

    3. What counterarguments can you think of that could suggest that my wish to convert the team's efforts to BDD might be in error? Yes, I'm happy to be proven wrong provided the argument is a sound one.

    NOTE: I am not advocating that we rewrite our tests in their entirety, but rather that we simply start working in a different manner for all future testing work.
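
    For readers unfamiliar with the style being advocated, here is a rough sketch of a given/when/then specification written as a plain JUnit test. This is generic illustrative Java, not StoryQ's actual C# fluent API, and the domain names are invented:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class WithdrawCashBehaviour {

      // The test reads as a specification: a failure names the broken business rule.
      @Test
      public void given_a_funded_account_when_the_user_withdraws_then_the_balance_is_reduced() {
        Account account = new Account(100);   // given
        account.withdraw(40);                 // when
        assertEquals(60, account.balance());  // then
      }

      static class Account {
        private int balance;
        Account(int openingBalance) { this.balance = openingBalance; }
        void withdraw(int amount) { this.balance -= amount; }
        int balance() { return balance; }
      }
    }

    BDD frameworks such as StoryQ layer a fluent, English-readable structure over the same idea so that people outside the development team can follow the scenarios.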

    Read the article

  • Project Management Techniques (high level)

    - by Sam J
    Our software dev team is currently using kanban for our development lifecycle and, from the reasonably short experience of a few months, I think it's going quite well (certainly compared to a few months ago, when we didn't really have a methodology). Our team, however, is directed to do work defined by project managers (not software project managers, just general business), and they're using the PMBOK methodology. The question is: how does a traditional methodology like PMBOK or PRINCE2 fit with a lean software development methodology like kanban or scrum? Is it just wasting everyone's time, given that all the requirements are effectively drawn up to start with (although inevitably changed along the way)?

    Read the article

  • How to approach scrum task burn-down when tasks involve multiple people?

    - by AgileMan
    In my company, a single task can never be completed by one individual: a separate person will QA and code review each task. This means each individual gives their own estimate, per task, of how much time their part will take. The problem is, how should I approach burn-down? If I aggregate the hours together, assume the following estimate:

    10 hrs - Dev time
    4 hrs - QA
    4 hrs - Code Review
    Task Estimate = 18 hrs

    At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally thinks only about their own part. Should they mark their own effort remaining and then ADD the other effort estimates to it? How are you doing this? (A small sketch of the summing approach follows below.)

    UPDATE: To clarify a few things, at my organization each task within a story requires 3 people: someone to develop the task (and write unit tests, etc.), a QA specialist to review the task (they primarily do integration and regression tests), and a tech lead to do code review. I don't think there is a wrong way or a right way, but this is our way ... and that won't be changing. We work as a team to complete even the smallest slice of a story whenever possible. You cannot actually test whether something works until it is dev-complete, and you cannot review the quality of the code before then either ... so the best you can do is split things into small logical slices so that the bare minimum functionality can be tested and reviewed as early in the process as possible. My question to those who work this way is how to burn down a "task" under this setup. Unless a task has its own sub-tasks (which JIRA doesn't allow) ... I'm not sure of the best way to track "what's left" on a daily basis.
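
    A minimal sketch of the summing approach hinted at above: each role tracks only its own remaining hours, and the task's burn-down value is computed as their sum, so nobody has to remember to add the other estimates back in. The numbers and names are illustrative only:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class TaskBurndown {
      public static void main(String[] args) {
        // Remaining effort per role for one task, updated daily by each person.
        Map<String, Integer> remaining = new LinkedHashMap<>();
        remaining.put("Dev", 6);          // of the original 10 hrs
        remaining.put("QA", 4);           // untouched until dev-complete
        remaining.put("Code Review", 4);  // untouched until dev-complete

        // The task burns down on the sum across roles.
        int total = remaining.values().stream().mapToInt(Integer::intValue).sum();
        System.out.println("Hours remaining on task: " + total); // prints 14
      }
    }

    Whether the tracker supports this natively or it is kept on the task board, the principle is the same: per-role remaining hours, aggregated per task.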

    Read the article

  • How to make sprint planning fun

    - by Jacob Spire
    Not only are our sprint planning meetings not fun, they're downright dreadful. The meetings are tedious and boring, and take forever (a day, but it feels like a lot longer). The developers complain about them and dread upcoming plannings. Our routine is pretty standard (user story inserted into the sprint backlog by priority; story is broken down into tasks; tasks are estimated in hours; repeat), and I can't figure out what we're doing wrong. How can we make the meetings more enjoyable?

    Some more details, in response to requests for more information:

    Why are the backlog items not inserted and prioritized before sprint kickoff? User stories are indeed prioritized; we have no idea how long they'll take until we break them down into tasks! From the (excellent) answers here, I see that maybe we shouldn't estimate tasks at all, only the user stories. The reason we estimate tasks (and not stories) is because we've been getting story estimates terribly wrong -- but I guess that's the subject of an altogether different question.

    Why are the developers complaining? Meetings are long. Meetings are monotonous: story after story, task after task, struggling (yes, struggling) to estimate how long it will take and what it involves. Estimating tasks makes user-story estimation seem pointless. The longer the meeting, the less focus in the room; the less focused colleagues are, the longer the meeting takes. A recursive hate-spiral develops. We've considered splitting the meeting over two days in order to keep people focused, but the developers wouldn't hear of it: one day of planning is bad enough; now we'll have two?! Part of our problem is that we go into very small detail (in order to get more accurate estimates), but when we estimate roughly, we go way off the mark!

    To sum up the question: what are we doing wrong? What additional ways are there to make the meeting generally more enjoyable?

    Read the article

  • Feature Driven Development in the workplace?

    - by FXquincy
    Question: please explain Feature Driven Development in a nutshell.

    Situation: my Business Analyst calls their documentation FDD, but it just seems overwhelmed by details.

    In a nutshell: an "in a nutshell" example would be good, since I'm trying to reduce unnecessary detail and confusion. I want to add clarity and an Occam's razor approach to the documentation. Thanks for your help. Here's what I found

    Read the article

  • Are there references discussing the use of parallel pair programming as a development methodology? [closed]

    - by ahsteele
    I work on a team which employs many of the extreme programming practices. We've gone to great lengths to utilize pair programming as much as possible. Unfortunately the practice sometimes breaks down and becomes ineffective. In looking for ways to tweak our process, I came across two articles describing parallel pair programming: "Parallel Pair Programming" and "Death of paired programming. Its 2008 move on to parallel pairing". While these are good resources, I wanted to read a bit more on the topic. As you can imagine, Googling for variations on "parallel pair programming" nets mostly results relating to parallel programming. What I'm after is additional discussion on the topic of parallel pair programming. Do additional references exist that my Google-fu is unable to discern? Has anyone used the practice and cares to share here (thus creating a reference)?

    Read the article

  • How do you track existing requirements over time?

    - by CaptainAwesomePants
    I'm a software engineer working on a complex, ongoing website. It has a lot of moving parts and a small team of UI designers and business folks adding new features and tweaking old ones. Over the last year or so, we've added hundreds of interesting little edge cases. Planning, implementing, and testing them is not a problem. The problem comes later, when we want to refactor or add another new feature. Nobody remembers half of the old features and edge cases from a year ago. When we want to add a new change, we notice that code does all sorts of things in there, and we're not entirely sure which things are intentional requirements and which are meaningless side effects. Did someone last year request that the login token was supposed to only be valid for 30 minutes, or did some programmers just pick a sensible default? Can we change it? Back when the product was first envisioned, we created some documentation describing how the site worked. Since then we created a few additional documents describing new features, but nobody ever goes back and updates those documents when new features are requested, so the only authoritative documentation is the code itself. But the code provides no justification, no reason for its actions: only the how, never the why. What do other long-running teams do to keep track of what the requirements were and why?
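
    One way to attack the closing question, suggested by the login-token example above: encode each agreed requirement as an executable specification, a named test that fails loudly if someone changes the rule without knowing why it existed. A hedged sketch, with the constant and all names purely hypothetical stand-ins for wherever the timeout really lives:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class LoginTokenRequirementsTest {

      // Hypothetical stand-in for the real configuration value.
      static final int TOKEN_VALIDITY_MINUTES = 30;

      // The test name carries the requirement; anyone changing the value must
      // consciously update this test, and ideally the reason recorded with it.
      @Test
      public void login_token_is_valid_for_exactly_30_minutes_by_requirement() {
        assertEquals(30, TOKEN_VALIDITY_MINUTES);
      }
    }

    This doesn't capture the "why" by itself, but it turns silent side effects into deliberate decisions, and the test file becomes a natural place to keep the one-line justification next to the rule.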

    Read the article
