Search Results

Search found 4783 results on 192 pages for 'tests'.


  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer smoke-tests it to make sure the deployment went smoothly. To me this feels wrong; it essentially means it takes two people to deploy an application, and in our case those two people are on opposite sides of the planet, so timezones come into play and cause havoc. But the fact remains that developers know what the minimum set of tests is, and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow.

    The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each. On top of this, one of our near-future plans is an automated daily deploy environment, so then we'll have to instruct a computer how to deploy a given version of our app, and I would dearly like to add to that instructions for how to smoke-test the app.

    Now, developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be to use a combination of nUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the WatiN or Selenium APIs to run through the obvious browser steps and call the web service, then explain to the operations guys how to run those tests. I can do that; I have mostly done it already. But wouldn't it be nice if I could make that process simpler still? At this point, the operations guys and the computer have to know which set of tests relates to which version of the app, and they have to tell the nUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if I could just give the test runner a base URL and let it download, say, a zip file, unpack it, and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than nUnit, maybe FitNesse?

    For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.
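
    Since the poster is explicitly open to non-.NET tools, here is a minimal sketch of the parameterised-smoke-test idea in Python's unittest, assuming the environment-specific base URL is injected from outside (an environment variable here) so the same tests can be pointed at UAT or Production. The SMOKE_BASE_URL variable and the /service/ping endpoint are hypothetical placeholders, not part of the poster's setup.

        import os
        import unittest
        import urllib2  # Python 2 stdlib; on Python 3 use urllib.request

        # Hypothetical variable: the deploy script (or the operations person)
        # sets this to whichever environment is being smoke-tested.
        BASE_URL = os.environ.get("SMOKE_BASE_URL", "http://test.example.com")

        class SmokeTest(unittest.TestCase):
            def test_homepage_responds(self):
                self.assertEqual(urllib2.urlopen(BASE_URL + "/").getcode(), 200)

            def test_web_service_responds(self):
                # Hypothetical health-check endpoint; substitute a real call.
                self.assertEqual(urllib2.urlopen(BASE_URL + "/service/ping").getcode(), 200)

        if __name__ == "__main__":
            unittest.main()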

    Read the article

  • correct approach to store in database

    - by John
    I'm developing an online website (using Django and MySQL). I have a Tests table and a User table. There are 50 tests in the table, and each user completes them at their own pace. How do I store the status of the tests in my DB?

    One idea that came to my mind is to create an additional column in the User table, containing the test IDs separated by a comma or some other delimiter:

        userid | username | testscompleted
        1      | john     | 1, 5, 34
        2      | tom      | 1, 10, 23, 25

    Another idea was to create a separate table to store userid and testid pairs. Then I'll have only 2 columns, but thousands of rows (number of tests * number of users), and they will always continue to increase:

        userid | testid
        1      | 1
        1      | 5
        2      | 1
        1      | 34
        2      | 10
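
    For what it's worth, the second idea (a junction table) is the standard relational design; a comma-separated column cannot be indexed, joined, or queried efficiently. In Django the junction table comes for free with a ManyToManyField. A minimal sketch, with hypothetical model and field names:

        from django.db import models

        class Test(models.Model):
            name = models.CharField(max_length=100)

        class Profile(models.Model):
            username = models.CharField(max_length=50)
            # Django creates the two-column junction table (profile_id, test_id)
            # behind the scenes; a row is added only when a test is completed.
            completed_tests = models.ManyToManyField(Test, related_name="completed_by")

        # Usage:
        #   profile.completed_tests.add(test)              # mark one test as done
        #   profile.completed_tests.filter(pk=5).exists()  # has the user done test 5?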

    Read the article

  • Unit testing custom controls in Silverlight

    - by Hrvoje
    I have several custom controls (some kind of frames for content and layout management, like a wrap panel), and I would like to write unit tests for them. It's hard to find any good examples apart from the Silverlight Control Toolkit, which has some helper classes for unit tests and is quite complicated. For MVVM classes it's easy to write tests because they don't use the SL dependency system and infrastructure. Questions:

    - how do I unit test a DependencyProperty, and what do I need to test?
    - how do I test an attached property?
    - do I test bindings with a theme or UserControl (like a simple TextBlock content binding), or command/event binding in MVVM with a UserControl?
    - what else do I test in my custom controls, besides my business logic?
    - any good tutorial for achieving tests like those in the Control Toolkit?

    How do I start? Is the SL Control Toolkit the only option for learning? For the testing framework I'm using the one from the Control Toolkit, and for continuous integration on a TFS build server I planned to use StatLight (from CodePlex). Any advice on that?

    Read the article

  • JUnit test failing - complaining of missing data that was just inserted

    - by Collin Peters
    I have an extremely odd problem in my JUnit tests that I just can't seem to nail down. I have a multi-module Java webapp project with a fairly standard structure (DAOs, service classes, etc.). Within this project I have a 'core' project which contains some abstracted setup code that inserts a test user along with the necessary items for a user (in this case an 'enterprise': a user must belong to an enterprise, and this is enforced at the database level).

    Fairly simple so far... but here is where the strangeness begins. Some tests fail to run and throw a database exception complaining that a user cannot be inserted because an enterprise does not exist. But it just created the enterprise in the preceding line of code! And there were no errors in the insertion of the enterprise. Stranger yet, if this test class is run by itself, everything works fine. It is only when the test is run as part of the project that it fails! And the exact same abstracted code was run by 10+ tests before the one that fails!

    I have been banging my head against a wall with this for days and haven't really made any progress. I'm not even sure what information to offer up to help diagnose this. What I know so far:

    - Using JUnit 4.4, Spring 2.5.6, iBatis 2.3.0, PostgreSQL 8.3.
    - Switching to org.springframework.jdbc.datasource.DriverManagerDataSource from org.apache.commons.dbcp.BasicDataSource changed the problem: using DriverManagerDataSource the tests work for the first time, but now all of a sudden a lot of data isn't rolled back in the database. It leaves everything behind, all with no errors.
    - Tests fail when run via both Eclipse and Maven.

    Please ask for any info which may help me solve my problem!

    Read the article

  • MSBuild CreateItem condition include based on config file

    - by Mac
    I'm trying to select a list of test DLLs that have corresponding config files:

        MyTest.Tests.dll
        MyTest.Tests.config

    I have to use CreateItem, as the DLLs are not available at the time the script is loaded:

        <CreateItem Include="$(AssemblyFolder)\*.Tests.dll" Condition="???">
          <Output TaskParameter="Include" ItemName="TestBinariesWithConfig"/>
        </CreateItem>

    Is there a condition I can use, or is this the wrong approach? Thanks, Mac

    Read the article

  • Caching result of setUp() using Python unittest

    - by dbr
    I currently have a unittest.TestCase that looks like:

        class test_appletrailer(unittest.TestCase):
            def setUp(self):
                self.all_trailers = Trailers(res="720", verbose=True)

            def test_has_trailers(self):
                self.failUnless(len(self.all_trailers) > 1)

            # ..more tests..

    This works fine, but the Trailers() call takes about 2 seconds to run. Given that setUp() is called before each test is run, the tests now take almost 10 seconds to run (with only 3 test functions). What is the correct way of caching the self.all_trailers variable between tests?

    Removing the setUp function and doing:

        class test_appletrailer(unittest.TestCase):
            all_trailers = Trailers(res="720", verbose=True)

    ..works, but then it claims "Ran 3 tests in 0.000s", which is incorrect. The only other way I could think of is to have a cache_trailers global variable (which works correctly, but is rather horrible):

        cache_trailers = None

        class test_appletrailer(unittest.TestCase):
            def setUp(self):
                global cache_trailers
                if cache_trailers is None:
                    cache_trailers = self.all_trailers = Trailers(res="720", verbose=True)
                else:
                    self.all_trailers = cache_trailers
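
    A minimal sketch of one cleaner way to do this, assuming Python 2.7+ (or the unittest2 backport), where unittest supports setUpClass: the expensive object is built once per TestCase class instead of once per test. Trailers is the poster's own class.

        class test_appletrailer(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                # Runs once for the whole class, not before every test,
                # so the 2-second Trailers() call happens a single time.
                cls.all_trailers = Trailers(res="720", verbose=True)

            def test_has_trailers(self):
                self.failUnless(len(self.all_trailers) > 1)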

    Read the article

  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product, and we have unit tests that use MSTest (Visual Studio unit tests). What solutions exist for implementing a continuous integration (i.e. build) machine? The requirements are:

    - A build should be kicked off when necessary (i.e. code has changed in the repositories we care about).
    - Before the actual build, the latest version of the source code must be acquired from the repository we are building from.
    - The build must build the entire product.
    - The build must build all unit tests.
    - The build must execute all unit tests.
    - A summary of success/failure must be sent out after the build has finished; this must include information about the build itself, but also about which unit tests failed and which ones succeeded.
    - The summary must contain which changesets were in this build that were not yet in the previous successful (!) build.
    - The system must be configurable so that it can build from multiple branches/repositories.

    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot? Thanks

    Read the article

  • How to do concurrent modification testing for a Grails application

    - by werner5471
    I'd like to run tests that simulate users modifying certain data at the same time in a Grails application. Are there any plug-ins / tools / mechanisms I can use to do this efficiently? They don't have to be Grails-specific, but it should be possible to fire multiple actions in parallel. I'd prefer to run the tests at the functional level (so far I'm using Selenium for other tests) to see the results from the user's perspective. Of course this can be done in addition to integration testing, if you'd recommend running concurrent modification tests at the integration level as well.
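
    Not Grails-specific, but the core mechanism is just firing the same state-changing request from several threads at once and then asserting on the outcome. A minimal sketch in Python; the URL and parameters are hypothetical placeholders for whatever controller action the application exposes:

        import threading
        import urllib
        import urllib2  # Python 2 stdlib; on Python 3 use urllib.request

        def submit_edit(i):
            # Each thread posts a conflicting modification to the same record.
            payload = urllib.urlencode({"id": "1", "name": "edit-%d" % i})
            urllib2.urlopen("http://localhost:8080/app/book/update", payload)

        threads = [threading.Thread(target=submit_edit, args=(i,)) for i in range(5)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Afterwards, assert on the application state, e.g. that an
        # optimistic-locking failure was reported to all but one user.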

    Read the article

  • Convincing why testing is good

    - by FireAphis
    Hello. In my team of real-time embedded C/C++ developers, most people don't have any culture of testing their code beyond casual manual sanity checks. I personally strongly believe in the advantages of autonomous automatic tests, but when I try to convince people I get some recurring arguments, like:

    - We will spend more time on writing the tests than writing the code.
    - It takes a lot of effort to maintain the tests.
    - Our code is spaghetti; there is no way we can unit-test it.
    - Our requirements are not sealed; we'll have to rewrite all the tests every time the requirements change.

    Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We at IBM/Microsoft/Google, surveying 3475 active projects, found that putting 50% more development time into testing decreased the time spent on fixing bugs by 75%", or "after half a year, the time needed to write code with tests was only marginally longer than what it used to take without tests". Any ideas?

    P.S.: I'm adding the C++ tag too in case someone has specific experience with convincing this, usually elitist, type of developer :-)

    Read the article

  • Is there a way to undo Mocha stubbing of any_instance?

    - by Steve Weet
    Within my controller specs I am stubbing out valid? for some routing tests (based on Ryan Bates' nifty_scaffold), as follows:

        it "create action should render new template when model is invalid" do
          Company.any_instance.stubs(:valid?).returns(false)
          post :create
          response.should render_template(:new)
        end

    This is fine when I test the controllers in isolation. I also have the following in my model spec:

        it "is valid with valid attributes" do
          @company.should be_valid
        end

    Again, this works fine when tested in isolation. The problem comes if I run specs for both models and controllers: the model test always fails because the valid? method has been stubbed out. Is there a way for me to remove the stubbing of any_instance when the controller test is torn down? I have worked around the problem by running the tests in reverse alphabetic sequence, so that the model tests run before the controllers, but I really don't like my tests being sequence-dependent.

    Read the article

  • Python doctests / Sphinx: how do you use them and keep the code readable?

    - by Sébastien Piquemal
    Hi! I love doctests; it is the only testing framework I use, because it is so quick to write, and because used with Sphinx it makes great documentation with almost no effort... However, very often I end up doing things like this:

        """
        Descriptions
        =============
        bla bla bla ...

        >>> test 1
        bla bla bla

        + tests tests tests * 200 lines = poor readability of the actual code
        """

    What I mean is that I put all my tests, with documentation explanations, at the top of the module, so you have to scroll stupidly far to find the actual code, and this is quite ugly (in my opinion). However, I think the doctests should still stay in the module, because you should be able to read them while reading the source code. So here comes my question, Sphinx/doctest lovers: how do you organize your doctests so that the readability of the actual code doesn't suffer?
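
    One common compromise, sketched below: keep the module docstring short, give each function a small doctest of its own, and push the long tutorial-style examples into the module-level __test__ mapping at the bottom of the file, where doctest still finds them. The frobnicate function is a hypothetical stand-in.

        """Short module summary, without the 200-line example block."""

        def frobnicate(x):
            """Return x doubled.

            >>> frobnicate(2)
            4
            """
            return x * 2

        # The long narrative examples live at the bottom, out of the way;
        # doctest checks every string in the __test__ mapping.
        __test__ = {
            "tutorial": """
            >>> frobnicate(21)
            42
            """,
        }

        if __name__ == "__main__":
            import doctest
            doctest.testmod()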

    Read the article

  • Given a short (2-week) sprint, is it ever acceptable to forgo TDD to "get things done"?

    - by Ben Aston
    Given a short sprint, is it ever acceptable to forgo TDD to "get things done" within the sprint? For example, a given piece of work might need, say, a third of the sprint to design the object model around an existing implementation. Under this scenario you might well end up with implemented code halfway through the sprint without any tests (implementing unit tests during this "design" stage would add significant effort, and the tests would likely be thrown away a few times until the final "design" is settled upon). You might then spend a day or two in the second week adding unit / integration tests after the fact. Is this acceptable?

    Read the article

  • NUnit - Multiple properties of the same name? Linking to requirements

    - by Ryan Ternier
    I'm linking all of our system tests to test cases and to our requirements. Every requirement has an ID. Every test case / system test covers a variety of requirements. Every module of code links to multiple requirements. I'm trying to find the best way to link every system test to its driving requirements. I was hoping to do something like:

        [NUnit.Framework.Property("Release", "6.0.0")]
        [NUnit.Framework.Property("Requirement", "FR50082")]
        [NUnit.Framework.Property("Requirement", "FR50084")]
        [NUnit.Framework.Property("Requirement", "FR50085")]
        [TestCase(....)]
        public void TestSomething(string a, string b...)

    However, that breaks because Property is a key-value pair: the system will not allow me to have multiple properties with the same key. The reason I want this is to be able to test specific requirements in our system when a module that touches those requirements changes. Rather than run over 1,000 system tests on every build, this would allow us to target what to test based on changes done to our code. Some system tests run upwards of 5 minutes (enterprise healthcare system), so "just run all of them" isn't a viable solution. We do that, but only before promoting through our environments. Thoughts?

    Read the article

  • Is it possible to use the ScalaTest BDD syntax in a JUnit environment?

    - by ebruchez
    I would like to describe tests in BDD style, e.g. with FlatSpec, but keep JUnit as the test runner. The ScalaTest Quick Start does not seem to show any example of this: http://www.scalatest.org/getting_started_with_junit_4

    I first naively tried writing the tests within @Test methods, but that doesn't work and the assertion is never tested:

        @Test def foobarBDDStyle {
          "The first name control" must "be valid" in {
            assert(isValid("name·1"))
          }
          // etc.
        }

    Is there any way to achieve this? It would be even better if regular tests could be mixed and matched with BDD-style tests.

    Read the article

  • Is it possible to use Maven only for running the Selenium plugin?

    - by tputkonen
    Our pom.xml currently contains both the build settings and the execution of Selenium tests via the selenium-maven-plugin. I would like to split it into two pom files: one for the build and unit tests, and a second one for executing the Selenium tests. (This way I could first build the project in Hudson, and after a successful build execute the Selenium tests using another project.) Is it possible to configure Maven to only execute the selenium-maven-plugin?

    Read the article

  • Android Test testPreconditions

    - by user1184113
    In the Android developer documentation I've seen that the testPreconditions() method is supposed to be launched before all other tests. But in my app's test suite it acts like a normal test: it does not run before all the others. Is there something wrong? Here is the description of testPreconditions() from the Android developer docs: "A preconditions test checks the initial application conditions prior to executing other tests. It's similar to setUp(), but with less overhead, since it only runs once."

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:

        new_project/
            antigravity/
                antigravity.py
            test/
                test_antigravity.py
            setup.py
            etc.

    for example, see this Python project howto. My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail because the module is not on the path.

    I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way; it's fine if you're the developer, but not realistic to expect your users to use it if they just want to check that the tests are passing. The other alternative is just to copy the test file into the other directory, but it seems a bit dumb and misses the point of having them in a separate directory to start with.

    So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests, do X."
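
    A minimal sketch of one conventional answer, assuming Python 2.7+ (where unittest gained test discovery) and a test/ directory containing an __init__.py so it is importable: ship a tiny runner at the project root, next to setup.py. Because it runs from the root, the antigravity package is importable without any PYTHONPATH tricks, and "do X" becomes "python run_tests.py". The file name run_tests.py is a hypothetical choice.

        # run_tests.py -- lives at the project root, next to setup.py
        import unittest

        if __name__ == "__main__":
            # Discover every test_*.py under test/; top_level_dir="." keeps
            # the project root (and thus the antigravity package) importable.
            suite = unittest.defaultTestLoader.discover(
                "test", pattern="test_*.py", top_level_dir=".")
            unittest.TextTestRunner(verbosity=2).run(suite)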

    Read the article

  • Problem with anonymous delegate within foreach

    - by geting
        public Form1()
        {
            InitializeComponent();
            Collection<Test> tests = new Collection<Test>();
            tests.Add(new Test("test1"));
            tests.Add(new Test("test2"));
            foreach (Test test in tests)
            {
                Button button = new Button();
                button.Text = test.name;
                button.Click += new EventHandler((object obj, EventArgs arg) => { this.CreateTest(test); });
                this.flowLayoutPanel1.Controls.Add(button);
            }
        }

        public void CreateTest(Test test)
        {
            MessageBox.Show(test.name);
        }

    When I click the button whose text is 'test1', the message box shows 'test2', but I expected 'test1'. So, would anyone please tell me why, or what's wrong with my code?

    Read the article

  • How to ignore a test within the JUnit test method itself

    - by Benju
    We have a number of integration tests that fail when our staging server goes down for weekly maintenance. When the staging server is down we send a specific response that I can detect in my integration tests. When I get this response, rather than failing the test, I'm wondering if it is possible to skip/ignore it even though it has started running. This would keep our test reports a bit cleaner. Does anybody have suggestions?

    Read the article

  • Robotium BDD with Cucumber

    - by LucasGomes
    I want to know how to write BDD tests with Robotium. From my research, Robotium runs on a different virtual machine (Dalvik), so I cannot run the tests as plain JUnit tests (only as Android JUnit tests). I found a possible solution for running Robotium with JUnit in RoboRemote (https://github.com/groupon/robo-remote), but when I tried to integrate it with Cucumber the tests became unstable. Do you know of some way to write BDD tests using Robotium?

    Read the article

  • How can I run Gcov over an installed Cocoa application?

    - by Joe
    I have a Cocoa application which uses an installer. I want to be able to run code coverage over the code after it has been installed. This is not the usual unit-test scenario where a single binary runs a suite of tests. Rather, the tests in question will interact with the UI and the app back-end whilst it is running, so ideally I want to be able to start the application knowing that Gcov is profiling it, and then run tests against it. Any ideas?

    Read the article

  • Java test framework for Selenium RC

    - by sebstein.hpfsc.de
    I'm going to use Selenium RC to replay some tests for a website. I want to kick off those tests from a Java test framework so that I get nice reports on how many tests failed, etc. Which Java test framework should I use? Is JUnit the preferred framework for this purpose?

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this.

    Unit Testing Entity Framework Dependent Code using TypeMock Isolator
    by Muhammad Mosa

    Introduction

    Unit testing data access code is, in my opinion, a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies, such as a physical database connection to insert, update, delete or retrieve your data. However, when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward, but the version of Entity Framework released with .NET 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult.

    Faking Entity Framework

    As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this, but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include:

    - Not able to fake calls to non-virtual methods
    - Not able to fake sealed classes
    - Not able to fake LINQ to Entities queries (replace database calls with in-memory collection calls)

    There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I'm going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator.

    Mocking Entity Framework with MoQ

    One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing any connection string. You have to pass a correct Entity Framework connection string that specifies CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we'll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ.

    I have the following repository method that adds an EntityObject (an instance of a Blog entity) to the Blogs entity set in an ObjectContext:

        public override void Add(Blog blog)
        {
            if (BlogContext.Blogs.Any(b => b.Name == blog.Name))
            {
                throw new InvalidOperationException("Blog with same name already exists!");
            }
            BlogContext.AddToBlogs(blog);
        }

    The method does a very simple check that the name of the new Blog entity instance doesn't already exist. This is done through the simple LINQ query above. If the blog doesn't already exist, it simply adds it to the current context, to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an exception (InvalidOperationException) will be thrown.

    Let us now create a unit test for the Add method using MoQ:

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exits()
        {
            //(1) We shouldn't depend on configuration when doing unit tests!
            //    But it's a workaround to fake the ObjectContext
            string connectionString = ConfigurationManager
                                          .ConnectionStrings["MyBlogConnString"]
                                          .ConnectionString;

            //(2) Arrange: Fake ObjectContext
            var fakeContext = new Mock<MyBlogContext>(connectionString);

            //(3) Next line will pass, as ObjectContext can now be faked with a proper connection string
            var repo = new BlogRepository(fakeContext.Object);

            //(4) Create fake ObjectQuery<Blog>. Will be used to substitute the MyBlogContext.Blogs property
            var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object);

            //(5) Arrange: Set expectations
            //Next line will throw an exception from MoQ:
            //System.ArgumentException: Invalid setup on a non-overridable member
            fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);
            fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

            //Act
            repo.Add(new Blog { Name = "NewBlog" });
        }

    This test method is checking that the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that already exists.

    On (1) a connection string is initialized from the configuration file, to retrieve the full Entity Framework connection string. We shouldn't depend on configuration when doing unit tests, but it's a workaround to fake the ObjectContext.

    On (2) a fake ObjectContext is created. The ObjectContext here is MyBlogContext, and this way a fake context is created using MoQ.

    On (3) a BlogRepository instance is created. BlogRepository has a dependency on the generated Entity Framework ObjectContext, MyBlogContext, and so the fake context is passed to the constructor.

    On (4) a fake instance of ObjectQuery<Blog> is created to use as a substitute for the MyBlogContext.Blogs property, as we will see in (5).

    On (5) we set up an expectation for calling the Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created in (4).

    When you run this test it will fail, with MoQ throwing an exception because of this line:

        fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);

    This happens because the generated property MyBlogContext.Blogs is not virtual/overridable. And even assuming it were virtual, or you managed to make it virtual, the test would fail at the following line, throwing the same exception:

        fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

    This time the test fails because the Any extension method is not virtual/overridable. You won't be able to replace ObjectQuery<Blog> with a fake in-memory collection to test your LINQ to Entities queries. Now let's see how replacing MoQ with TypeMock Isolator can help.

    Mocking Entity Framework with TypeMock Isolator

    The following is the same test method we had above for MoQ, but this time implemented using TypeMock Isolator:

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException()
        {
            //(1) Create fake in-memory collection of blogs
            var fakeInMemoryBlogs = new List<Blog> { new Blog { Name = "FakeBlog" } };

            //(2) Create fake context
            var fakeContext = Isolate.Fake.Instance<MyBlogContext>();

            //(3) Setup expected call to the MyBlogContext.Blogs property through the fake context
            Isolate.WhenCalled(() => fakeContext.Blogs)
                   .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable());

            //(4) Create a new blog with a name that already exists in the fake in-memory collection from (1)
            var blog = new Blog { Name = "FakeBlog" };

            //(5) Instantiate an instance of BlogRepository (the class under test)
            var repo = new BlogRepository(fakeContext);

            //(6) Act by adding the newly created blog
            repo.Add(blog);
        }

    When running the above test method it will pass, as the Add method of BlogRepository is going to throw an InvalidOperationException, which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking the ObjectContext at (2) didn't require a connection string.

    On (3) Isolator sets up a fake result for MyBlogContext.Blogs when it is called through the fake instance fakeContext created in (2). The fake result is just the in-memory collection declared and initialized in (1).

    Finally, at (6) we act by calling the Add method of BlogRepository, passing a new Blog instance whose name already exists in the fake in-memory collection we set up in (1). As expected, the test passes because it throws the expected exception defined on top of the test method: InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease.

    Conclusion

    We explored how to write a simple unit test using TypeMock Isolator for code which uses Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is successfully able to handle. There are workarounds that you can use to overcome limitations when using MoQ or Rhino Mocks; however, the workarounds will require you to write more code and your tests will likely be more complex. For a comparison between different mocking frameworks take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks.

    I hope you enjoyed this post.

    Muhammad Mosa
    http://mosesofegypt.net/
    http://twitter.com/mosessaur

    Screencast of unit testing Entity Framework

    Related Links:
    - GuestPost: Introduction to Mocking
    - GuestPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July, in Microsoft's offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam, and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints.

    The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director; both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will.

    Find a course and register | Download this syllabus | Download the Scrum Guide

    What is the Professional Scrum Developer course all about?

    The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.

    Who should attend this course?

    This course is suitable for any member of a software development team: architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.

    What should you know by the end of the course?

    Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:

    - Form effective teams
    - Explore and understand legacy "Brownfield" architecture
    - Define quality attributes, acceptance criteria, and "done"
    - Create automated builds
    - How to handle software hotfixes
    - Verify that bugs are identified and eliminated
    - Plan releases and sprints
    - Estimate product backlog items
    - Create and manage a sprint backlog
    - Hold an effective sprint review
    - Improve your process by using retrospectives
    - Use emergent architecture to avoid technical debt
    - Use Test Driven Development as a design tool
    - Set up and leverage continuous integration
    - Use Test Impact Analysis to decrease testing times
    - Manage SQL Server development in an Agile way
    - Use .NET and T-SQL refactoring effectively
    - Build, deploy, and test SQL Server databases
    - Create and manage test plans and cases
    - Create, run, record, and play back manual tests
    - Set up a branching strategy and branch code
    - Write more maintainable code
    - Identify and eliminate people and process dysfunctions
    - Inspect and improve your team's software development process

    What does the week look like?

    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application, delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.

    The Sprints

    Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the half-day sprints will roughly follow this schedule:

    - Instruction (60 minutes): presentation and demonstration of new and relevant tools & practices
    - Sprint planning meeting (10 minutes): the product owner presents the backlog; each team commits to delivering functionality
    - Sprint planning meeting (10 minutes): each team determines how to build the functionality
    - The Sprint (120 minutes): the team self-organizes and self-manages to complete their tasks
    - Sprint Review meeting (30 minutes): each team presents their increment of functionality to the other teams
    - Sprint Retrospective (10 minutes): a group retrospective meeting is held to inspect and adapt

    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support.

    Module 1: Introduction

    This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day-by-day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin.

    - Trainer and student introductions
    - Professional Scrum Developer program
    - Agenda
    - Logistics
    - Team formation
    - Retrospective

    Module 2: Scrumdamentals

    This module provides a level-setting understanding of the Scrum framework, including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings.

    - Scrum overview
    - Scrum roles
    - Scrum timeboxes (ceremonies)
    - Scrum artifacts
    - Simulation
    - Retrospective

    It's required that you read Ken Schwaber's Scrum Guide in preparation for this module and course.

    Module 3: Implementing Scrum in Visual Studio 2010

    This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation, this time using Visual Studio to manage their product development.

    - Mapping Scrum to Visual Studio 2010
    - User Story work items
    - Task work items
    - Bug work items
    - Demonstration
    - Simulation
    - Retrospective

    Module 4: The Case Study

    In this module the team is introduced to their problem domain for the week. A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and the what of the upcoming sprints. The team will then define the quality attributes of the project and their definition of "done." The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported.

    - Introduction to the case study
    - Download the source code, build, and explore the application
    - Define the quality attributes for the project
    - Define "done"
    - How to file effective bugs in Visual Studio 2010
    - Retrospective

    Module 5: Hotfix

    This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application's architecture and code in order to locate and fix the Product Owner's high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug.

    - How to use Architecture Explorer to visualize and explore
    - Create a unit test to validate the existence of a bug
    - Find and fix the bug
    - Validate and close the bug
    - Retrospective

    Module 6: Planning

    This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information.

    - Release vs. Sprint planning
    - Release planning and the Product Backlog
    - Product Backlog prioritization
    - Acceptance criteria and tests
    - Sprint planning and the Sprint Backlog
    - Creating and linking Sprint tasks
    - Retrospective

    At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done.

    Module 7: Emergent Architecture

    This module introduces the architectural practices and tools a team can use to develop a valid design on which to build new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner's prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint.

    - Architecture and Scrum
    - Emergent architecture
    - Principles, patterns, and practices
    - Visual Studio 2010 modeling tools
    - UML and layer diagrams
    - SPRINT 1
    - Retrospective

    Module 8: Test Driven Development

    This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member's code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio's Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring.

    - Continuous integration
    - Team Foundation Build
    - Test Driven Development (TDD)
    - Refactoring
    - Test Impact Analysis
    - SPRINT 2
    - Retrospective

    Module 9: Agile Database Development

    This module lets the SQL Server database developers in on a little secret: they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle.

    - Agile database development
    - Visual Studio database projects
    - Importing schema and scripts
    - Building and deploying
    - Generating data
    - Unit testing
    - SPRINT 3
    - Retrospective

    Module 10: Ship It

    Teams need to know that just because they like the functionality doesn't mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline.

    - Acceptance criteria
    - Testing in Visual Studio 2010
    - Microsoft Test Manager
    - Writing and running manual tests
    - Branching
    - SPRINT 4
    - Retrospective

    Module 11: Overcoming Dysfunction

    This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class.

    - Scrum-butts and flaccid Scrum
    - Best practices working as a team
    - Team challenges
    - ScrumMaster challenges
    - Product Owner challenges
    - Stakeholder challenges
    - Course Retrospective

    What will be expected of you and your team?

    This is a unique course in that it's technically focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to:

    - Pay attention to all lectures and demonstrations
    - Participate in team and group discussions
    - Work collaboratively with other team members
    - Obey the timebox for each activity
    - Commit to work and do your best to deliver

    All teams should have these skills:

    - Understanding of Scrum
    - Familiarity with Visual Studio 2010
    - C#, .NET 4.0 & ASP.NET 4.0 experience*
    - SQL Server 2008 development experience
    - Software testing experience

    * Check with the instructor ahead of time for the exact technologies

    Self-organising teams

    Another unique attribute of this course is that it's a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!

    Who should NOT take this course?

    Because of the nature of this course, as explained above, certain types of people should probably not attend:

    - Students requiring command-and-control style instruction: there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course
    - Students who are unwilling to work within a timebox
    - Students who are unwilling to work collaboratively on a team
    - Students who don't have any skill in any of the software development disciplines
    - Students who are unable to commit fully to their team: not only will this diminish the student's learning experience, but it will also impact their team's learning experience

    Find a course and register | Download this syllabus | Download the Scrum Guide

    Technorati Tags: Scrum, SSW, Pro Scrum Dev

    Read the article
