Search Results

Search found 383 results on 16 pages for 'isolation'.

Page 9/16 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • How to figure the read/write ratio in Sql Server?

    - by Bill Paetzke
    How can I query the read/write ratio in Sql Server 2005? Are there any caveats I should be aware of? Perhaps it can be found in a DMV query, a standard report, a custom report (e.g. the Performance Dashboard), or by examining a Sql Profiler trace. I'm not sure exactly. Why do I care? I'm taking time to improve the performance of my web app's data layer. It deals with millions of records and thousands of users. One of the points I'm examining is database concurrency. Sql Server uses pessimistic concurrency by default--good for a write-heavy app. If my app is read-heavy, I might switch it to optimistic concurrency (isolation level: read committed snapshot) like Jeff Atwood did with StackOverflow.
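
    One DMV-based sketch (my own illustration, not the asker's solution; the database name is hypothetical): sys.dm_db_index_usage_stats accumulates per-index read and write counters, so a rough ratio can be derived from it. One caveat: the counters reset whenever the SQL Server service restarts.

      -- Rough read/write ratio from index usage stats (SQL Server 2005+).
      -- Counters reset on service restart, so let the workload run a while first.
      SELECT SUM(user_seeks + user_scans + user_lookups)      AS total_reads,
             SUM(user_updates)                                AS total_writes,
             CAST(SUM(user_seeks + user_scans + user_lookups) AS float)
                 / NULLIF(SUM(user_updates), 0)               AS read_write_ratio
      FROM sys.dm_db_index_usage_stats
      WHERE database_id = DB_ID('MyWebAppDb'); -- hypothetical database name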

    Read the article

  • .NET TDD with a Database and ADO.NET Entity Framework - Integration Tests

    - by Brian
    Hello, I'm using the ADO.NET Entity Framework, with an AdventureWorks database attached to my local database server. For unit testing, what approaches have people taken to work with a database? Obviously, the database has to be in a pre-defined state so that the tests have some isolation from each other... so I need to be able to run through the inserts and updates, then roll back either between tests or after the batch of tests is done. Any advice? Thanks.
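
    One common pattern (a sketch of one approach, assuming MSTest and that the Entity Framework connection enlists in the ambient System.Transactions transaction) is to open a TransactionScope per test and never complete it, so every insert and update rolls back automatically:

      using System.Transactions;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class AdventureWorksIntegrationTests
      {
          private TransactionScope _scope;

          [TestInitialize]
          public void Setup()
          {
              // Everything the test does enlists in this ambient transaction.
              _scope = new TransactionScope();
          }

          [TestCleanup]
          public void Teardown()
          {
              // Dispose without calling Complete() => automatic rollback,
              // leaving the database in its pre-defined state for the next test.
              _scope.Dispose();
          }

          [TestMethod]
          public void Insert_Person_Then_Query_Succeeds()
          {
              // ... exercise the ObjectContext here; changes vanish on Dispose.
          }
      }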

    Read the article

  • Obtain Update Table Lock at start of Stored Procedure in SQL Server

    - by Jim Hurne
    I'm writing a SQL Server stored procedure in which I want to lock a table for update before executing the body of the stored procedure. I don't want to prevent other processes from reading the table, but I do want to prevent other processes updating the table. Here is my first attempt: CREATE PROCEDURE someProcedure BEGIN SET TRANSACTION ISOLATION LEVEL READ COMMITTED BEGIN TRANSACTION SELECT COUNT(*) FROM TheTable WITH (UPDLOCK, TABLOCK) -- Pause procedure so that we can view the locks with sp_lock WAITFOR DELAY '00:15' -- Do stuff COMMIT END When I execute the stored procedure and invoke sp_lock, I see that the table is indeed locked. However, it's locked with an exclusive lock instead of an update lock: spid | dbid | ObjId | IndId | Type | Resource | Mode | Status ------------------------------------------------------------------ 63 | 10 | 233208031 | 0 | TAB | | X | GRANT How can I get an update (U) lock instead?
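
    One hedged alternative (my suggestion, not from the original thread): if every writer goes through this procedure, a named application lock serializes the writers without touching plain readers of the table at all, though it only restrains processes that cooperate by taking the same lock:

      CREATE PROCEDURE someProcedure AS
      BEGIN
          BEGIN TRANSACTION
          DECLARE @rc int
          -- 'TheTable_writer' is a hypothetical lock name; all writers must use it.
          EXEC @rc = sp_getapplock @Resource = 'TheTable_writer',
                                   @LockMode = 'Exclusive',
                                   @LockOwner = 'Transaction',
                                   @LockTimeout = 10000 -- milliseconds
          IF @rc < 0
          BEGIN
              ROLLBACK
              RETURN
          END
          -- Do stuff; readers of TheTable are unaffected.
          COMMIT -- the app lock is released with the transaction
      END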

    Read the article

  • Concurrency issues with scheduling app

    - by Sazug
    Our application needs a simple scheduling mechanism - we can schedule only one visit per room for the same time interval (but one visit can use one or more rooms). Using SQL Server 2005, a sample procedure could look like this: CREATE PROCEDURE CreateVisit @start datetime, @end datetime, @roomID int AS BEGIN DECLARE @isFreeRoom INT BEGIN TRANSACTION SELECT @isFreeRoom = COUNT(*) FROM visits V INNER JOIN visits_rooms VR on VR.VisitID = V.ID WHERE @start = start AND @end = [end] AND VR.RoomID = @roomID IF (@isFreeRoom = 0) BEGIN INSERT INTO visits (start, [end]) VALUES (@start, @end) INSERT INTO visits_rooms (visitID, roomID) VALUES (SCOPE_IDENTITY(), @roomID) END COMMIT TRANSACTION END In order not to have the same room scheduled for two visits at the same time, how should we handle this problem in the procedure? Should we use the SERIALIZABLE transaction isolation level or maybe use table hints (locks)? Which one is better?
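
    A sketch of the hint-based approach (keeping the original equality condition, which only detects exact-interval clashes): adding UPDLOCK and HOLDLOCK to the existence check makes competing calls for the same room queue behind one another, and holds the range until COMMIT so nothing can sneak in between the check and the insert:

      BEGIN TRANSACTION
      DECLARE @isFreeRoom INT
      -- UPDLOCK: concurrent checks for the same rows block each other.
      -- HOLDLOCK: keeps the range locked until COMMIT (serializable semantics
      -- scoped to this statement), so no conflicting insert can slip in.
      SELECT @isFreeRoom = COUNT(*)
      FROM visits V WITH (UPDLOCK, HOLDLOCK)
      INNER JOIN visits_rooms VR WITH (UPDLOCK, HOLDLOCK) ON VR.VisitID = V.ID
      WHERE @start = V.start AND @end = V.[end] AND VR.RoomID = @roomID
      IF (@isFreeRoom = 0)
      BEGIN
          INSERT INTO visits (start, [end]) VALUES (@start, @end)
          INSERT INTO visits_rooms (visitID, roomID) VALUES (SCOPE_IDENTITY(), @roomID)
      END
      COMMIT TRANSACTION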

    Read the article

  • IIS 6 session timing out a lot quicker than expected

    - by Echiban
    I am working with a web application that has its sessions timing out a lot quicker than expected. We expected a timeout of 15 minutes but it's timing out at 3-4 minutes. Info about the environment: IIS 6; classic ASP / COM+ app; timeout OK on current PROD, much quicker in dev / QA environments. We already disabled app pool recycling, and even put IIS in isolation mode - no effect. The HTTP error log doesn't show any lines when a session times out. We've done a close comparison of the PROD and DEV / QA environments, and given we use virtual machines for all of them, settings should be preserved. I tried to find IIS blog notes from David Wang but many of them now return HTTP 404 errors, and I don't know what else to do. Please help! At the very least, is there a way to get IIS to log every time a session expires? Some means of logging / debugging IIS would be useful. Thanks in advance.

    Read the article

  • What are the main advantages of adding your custom functions to a JavaScript library's namespace?

    - by yaya3
    It is fairly well known in JavaScript that declaring variables within the global scope is a bad thing, so the code I tend to work on contains namespaced JavaScript. There seem to be two different approaches taken to this - Adding your application-specific functions to the library's namespace, e.g. $.myCarouselFunction Creating your own namespace, e.g. MyApplication.myCarouselFunction I wanted to know whether or not there is a 'better' solution, or if they tend to meet somewhere close in terms of pros and cons. The reason for me personally deciding not to go with the library is separation / isolation / lack of conflict with library code and potential plugins that are likely to share that namespace. But I am sure there is more to this. Thanks
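
    For illustration, a minimal sketch of the second approach (all names here are hypothetical):

      // One global root object; everything application-specific hangs off it.
      var MyApplication = MyApplication || {};

      MyApplication.carousel = {
          init: function (elementId) {
              // The library stays in its own namespace ($ / jQuery), so a
              // plugin that adds $.fn.carousel can never collide with this.
              this.el = document.getElementById(elementId);
          },
          next: function () { /* advance one slide */ }
      };

      MyApplication.carousel.init('photo-strip'); // hypothetical element id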

    Read the article

  • What is the fastest way to learn JPA?

    - by Jacques René Mesrine
    I'm looking for the best resources (books, frameworks, tutorials) that will help me get up to speed with JPA. I've been happily using iBatis/JDBC for my persistence needs, so I need resources that will hopefully cover comparable functions, e.g. how do I set the isolation level for each transaction? I know there might be 10 books on the topic, so hopefully your recommendation could narrow it down to the best 2 books. Should I start with OpenJPA, or are there other open source JPA frameworks to use? P.S. Do suggest whether I should learn JPA2 or JPA1. My goal ultimately is to be able to write a Google App Engine app (which uses JPA1). Thanks Jacque

    Read the article

  • How Do I get the current instance from an AppDomain?

    - by Spanners
    Hi, I use the default appdomain (AD) to create new appdomains (AD1) when required for running plugins in isolation. When creating the new domain I also wire up the AppDomainUnload event to allow me to call clean-up code etc. The issue I seem to have is: 1) Create AD1 from AD 2) Run code in AD1 3) Call AD.Unload(AD1) The code switches to AD1 and calls the unloading event, passing in a reference to the current AppDomain (AD1). At this point I'd like to get a reference to the current instance running in AD1 to call a shutdown method, however there is no GetInstance on the AppDomain class. Any ideas how I can go about getting it?
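
    AppDomain keeps no public registry of the objects living inside it, so the usual pattern (a sketch; IPlugin and Shutdown are hypothetical names) is to hold on to the cross-domain proxy you got when you created the plugin, and call shutdown through it before unloading:

      using System;

      public interface IPlugin { void Shutdown(); } // hypothetical plugin contract

      public class PluginHost
      {
          private AppDomain _domain;
          private IPlugin _plugin; // transparent proxy into the plugin domain

          public void Load(string assemblyName, string typeName)
          {
              _domain = AppDomain.CreateDomain("AD1");
              _plugin = (IPlugin)_domain.CreateInstanceAndUnwrap(assemblyName, typeName);
          }

          public void Unload()
          {
              _plugin.Shutdown();        // executes inside AD1 via the proxy
              AppDomain.Unload(_domain); // then tear the domain down
          }
      }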

    Read the article

  • Constructing mocks in unit tests

    - by Flynn1179
    Is there any way to have a mock constructed instead of a real instance when testing code that calls a constructor? For example: public class ClassToTest { public void MethodToTest() { MyObject foo = new MyObject(); Console.WriteLine(foo.ToString()); } } In this example, I need to create a unit test that confirms that calling MethodToTest on an instance of ClassToTest will indeed output whatever the ToString() method of a newly created instance of MyObject returns. I can't see a way of realistically testing the 'ClassToTest' class in isolation; testing this method would actually test the 'myObject.ToString()' method as well as the MethodToTest method.
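
    The usual way out (a sketch of the standard refactoring, not the asker's code) is to stop calling the constructor directly and inject a factory, so a test can hand in a delegate that returns a mock:

      using System;

      public class ClassToTest
      {
          private readonly Func<MyObject> _factory;

          // Production code passes () => new MyObject(); a test passes () => mock.
          public ClassToTest(Func<MyObject> factory)
          {
              _factory = factory;
          }

          public void MethodToTest()
          {
              MyObject foo = _factory();
              Console.WriteLine(foo.ToString());
          }
      }

    The test can then verify the output against the mock's ToString() without ever constructing a real MyObject.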

    Read the article

  • How to issue SQL Server lock hints in WCF Entity Framework?

    - by TMN
    I'm just learning the WCF entity framework (and .NET in general), and I'm running into a problem specifying lock hints in an embedded SQL query. I'm trying to specify a query that has lock hints in it (e.g., "SELECT * FROM xyz WITH (XLOCK, ROWLOCK)") and I keep getting errors from the runtime that the query syntax is not valid. The query works if I enter it directly into SQL Server. Is there some special syntax for lock hints when creating IQueryable result sets, or do I have to somehow mess with the transaction isolation level (probably not possible, as I need to specify different lock hints on a subquery within the main query)?
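
    Entity SQL and LINQ to Entities have no hint syntax, so lock hints generally mean dropping to real T-SQL. A sketch assuming EF 4, where ObjectContext.ExecuteStoreQuery can materialize entities from a raw command (on the first EF release, a plain SqlCommand over the store connection is the fallback); Xyz and groupId are hypothetical names:

      // Run the hinted T-SQL directly and let EF materialize the results.
      var rows = context.ExecuteStoreQuery<Xyz>(
          "SELECT * FROM xyz WITH (XLOCK, ROWLOCK) WHERE GroupId = {0}",
          groupId);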

    Read the article

  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi All, I'd like to move some data from one table to another (with a possibly different schema). The straightforward solution that comes to mind is - start a transaction with serializable isolation level; INSERT INTO dest_table SELECT data FROM orig_table,other-tables WHERE <condition>; DELETE FROM orig_table USING other-tables WHERE <condition>; COMMIT; Now what if the amount of data is rather big, and the <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete data on the fly, evaluating the condition only once. Which solution is better? Are there other options?
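
    Since PostgreSQL 9.1 there is a third option worth knowing (a sketch with stand-in table names): a writable CTE deletes and inserts in one atomic statement, evaluating the condition exactly once:

      WITH moved AS (
          DELETE FROM orig_table o
          USING other_table t           -- stand-in for the other tables
          WHERE o.key = t.key           -- stand-in for the real <condition>
          RETURNING o.*
      )
      INSERT INTO dest_table
      SELECT * FROM moved;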

    Read the article

  • Snapshot on, still deadlocks, ROWLOCK

    - by Patto
    I turned snapshot isolation on in my database using the following code: ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON and got rid of lots of deadlocks. But my database still produces deadlocks when I run a script every hour to clean up 100,000+ rows. Is there a way I can avoid deadlocks in my cleanup script? Do I need to set ROWLOCK specifically in that query? Is there a way to increase the number of row-level locks that a database uses? How are locks promoted - from row level to page level to table level? My delete script is rather simple: delete statvalue from statValue, (select dateadd(minute,-60, getdate()) as cutoff_date) cd where temporaryStat = 1 and entrydate < cutoff_date Right now I am looking for a quick solution, but a long term solution would be even nicer. Thanks a lot, Patrikc
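
    The usual quick fix (a sketch, with a hypothetical batch size) is to delete in small chunks, so each transaction holds few enough locks to stay at row/page granularity and finishes before other work piles up behind it:

      DECLARE @cutoff datetime
      SET @cutoff = DATEADD(minute, -60, GETDATE())

      WHILE 1 = 1
      BEGIN
          -- Small batches keep each transaction well under the ~5,000-lock
          -- threshold at which SQL Server considers escalating to a table lock.
          DELETE TOP (1000) FROM statValue
          WHERE temporaryStat = 1 AND entrydate < @cutoff

          IF @@ROWCOUNT = 0 BREAK
      END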

    Read the article

  • Visualizing branch topology in git

    - by Benjol
    I'm playing with git in isolation on my own machine, and even so I find it difficult to maintain a mental model of all my branches and commits. I know I can do a git log to see the commit history from where I am, but is there a way to see the entire branch topology, something like these ASCII maps that seem to be used everywhere for explaining branches? .-A---M---N---O---P / / / / / I B C D E \ / / / / `-------------' It just feels like someone coming along and trying to pick up my repository would have difficulty working out exactly what was going on. I guess I'm influenced by AccuRev's stream browser...
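
    git can draw a map like that itself, for example:

      # ASCII topology of every branch (--oneline needs git 1.6.3+;
      # older versions can use --pretty=oneline --abbrev-commit instead)
      git log --graph --oneline --decorate --all

      # or the bundled GUI viewer
      gitk --all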

    Read the article

  • [.NET] What's the point of MarshalByValue Object?

    - by smwikipedia
    Hi awesome! We know that MarshalByRef allows us to create an object in a different AppDomain and use a proxy object to access it. The behavior of that object runs in the context of the AppDomain where it actually lives. Well, this sounds fairly reasonable with regard to isolation and safety. But why is there still MarshalByValue? MarshalByValue just leads to a newly deserialized object which is an exact copy of the object in a different AppDomain. If we need that object, why not just create it in the current AppDomain? Why bother to first create it in a different AppDomain and then get it back by MarshalByValue? Many thanks.

    Read the article

  • Drawing with GDI+ under IIS

    - by Zac
    I'm running a web application under IIS that draws graphs which are sent to the clients. We were previously running under IIS 6; while migrating to Windows Server 2008 (IIS 7) we have encountered some very weird issues with the graphing. I stumbled across the MSDN docs for GDI+ stating that "GDI+ functions and classes are not supported for use within a Windows service." I suspect that my issues are probably related to further isolation of services: http://msdn.microsoft.com/en-us/library/ms533798%28VS.85%29.aspx My question is: how the heck are we supposed to draw graphics? Raw GDI? OpenGL - but doesn't that still require a DC?

    Read the article

  • Separating Spring and Dojo single page app codebases

    - by Pie21
    I'm working on a web application with a Spring Roo REST API backend and a Dojo single-page client app. Spring Roo provides a handy admin interface (with Web MVC) out of the box, so I'd like to keep that accessible for administrative duties. However I'd like the Dojo app to be as decoupled from the server application as possible. The server is a platform for which we expect new client applications to be developed, so I'd like to develop the 'official' client web app in isolation. What is the best (or even just one good) way to structure the application code? It's easy to leave the server alone as an API and admin interface, but where does all the Dojo JS live? It works okay when it's in the Spring webapp directory, but gets complicated quickly as soon as the structure gets more complex or is moved anywhere else (Spring URL mappings are still awfully opaque to me). Ideally it could be hosted locally on a development machine, which introduces all the cross-domain JS issues as well.

    Read the article

  • Reset JPA generated value between tests

    - by Rythmic
    I'm running Spring + Hibernate + JUnit with SpringJUnit4ClassRunner and transactions set to default rollback, using an in-memory Derby database. Hibernate is used as the JPA provider and I am successfully testing CRUD kinds of stuff. However, I have a problem with JPA and the behaviour of @GeneratedValue. If I run one of my tests in isolation, two entities are persisted with ids 1 and 2. If I run the whole test suite the ids are instead 6 and 7. Spring does rollbacks just fine, so there are only these two entities in the database after the addition and of course zero before. But the behaviour of @GeneratedValue doesn't allow me to reliably findById unless I return the id from the dao.add(Entity e) method. I don't feel like doing that for the sake of testing - or is it good practice to return the entity that was persisted, so I should be doing it anyway?
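
    If predictable ids really matter, Derby can reset an identity column between tests - a sketch, assuming the id is a Derby identity column (rather than a Hibernate-managed sequence table), the table and column names are hypothetical, and the table is empty after rollback:

      -- Derby DDL: the next generated value for MY_ENTITY.ID becomes 1 again.
      ALTER TABLE MY_ENTITY ALTER COLUMN ID RESTART WITH 1;

    That said, having dao.add return the persisted entity (or its id) is a common and perfectly respectable pattern outside of testing too.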

    Read the article

  • What scenarios/settings will result in a query on SQL Server (2008) returning stale data

    - by s1mm0t
    Most applications rarely need to display 100% accurate data. For example, if this Stack Overflow question displays that there have been 0 views when there have really been 10, it doesn't really matter. This is one way that the (perceived) performance of applications can be improved: by caching results and therefore sometimes not showing 100% accurate results. There are some cases where the data does need to be 100% accurate though. So if I run the query select * from Foo I want to be sure that the results are not stale. Now, depending on how my database is set up, other activity on the database, the use of transactions and isolation levels, etc., this query may or may not be a true reflection of the world. What scenarios and settings can people think of that will result in this query returning stale results? And given that another connection is partway through a transaction that has updated this table, how can I guarantee that when the above query returns, the results will be accurate?
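
    The classic stale case is a dirty read under READ UNCOMMITTED (or a NOLOCK hint) - a two-session sketch against the Foo table (the views column is hypothetical):

      -- Session 1: update inside a transaction that has not committed yet.
      BEGIN TRANSACTION
      UPDATE Foo SET views = views + 10 WHERE id = 1
      -- (no COMMIT yet)

      -- Session 2: NOLOCK sees the uncommitted value, which may be rolled back.
      SELECT * FROM Foo WITH (NOLOCK) WHERE id = 1

      -- Session 2 without the hint: under READ COMMITTED this blocks until
      -- session 1 finishes; under READ_COMMITTED_SNAPSHOT it returns the last
      -- committed value instead - consistent, but a moment out of date.
      SELECT * FROM Foo WHERE id = 1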

    Read the article

  • SQL Server 2005 Transactions

    - by mcallec
    I have a long running stored proc (approx 30 mins) which currently runs within a transaction (isolation level snapshot). I've set the transaction to snapshot to avoid locking records and preventing other processes from accessing the data. What I'm trying to do is write to and read from a status table, but although we're in a transaction I'd like to write to and read from the status table as if I were not in a transaction. I need this so that other processes can read any updates to this table by my stored proc, and this stored proc can also read any inserts made by other processes. I realise that having my entire stored proc run within a transaction isn't recommended, but this has been done for other reasons and we need to stick with that approach. So my question is: within a transaction, is it possible to execute a query or call a stored proc which effectively isn't enlisted in the transaction?

    Read the article

  • SSIS - Access Denied with UNC paths - The file name is a device or contains invalid characters

    - by simonsabin
    I spent another day tearing my hair out yesterday trying to resolve an issue with SSIS packages running in SQL Agent (not got much left at the moment, maybe I should contact the SSIS team for a wig). My situation was that I was deploying packages to a development server, and to provide isolation I was running jobs with a proxy account that only had access to the development servers. Proxies are an awesome feature and mean that you should never have to "just run the job as sysadmin". The issue I was facing was that the job step was failing. The job step was a simple execution of the package. The following errors appeared in my log file. I always check the "Log step output in history" option for a job step; this ensures you get all the output from the command that you run. I'll blog about this later. If you look at the output in sysdtslog90 then you will have an entry with datacode -1073573533 and error message File or directory "<filename>" represented by connection "<connection>" does not exist. Not exactly helpful. If you get the output from the console then you will also get these errors. 0xC0202070 "The file name property is not valid. The file name is a device or contains invalid characters." 0xC001401E "specified in the connection was not valid." It appears this error is due to the use of a UNC path and the account running the package not having access to all the folders in the path. Solution To solve this you need to ensure that the proxy account has access to ALL folders in the path you are accessing. To check this works, log on as the relevant proxy user, or run a command window as the specified user. Then try to do net use \\server\share and then do a dir for each folder in the path and check you have access. If these work and you still have the problem then you have some other problem, sorry. The following are posts on Experts Exchange that also discuss this: http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_24056047.html http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_23968903.html This blog had a post about it being a 64-bit issue. That definitely wasn't the issue for me as I was on a 32-bit server: http://blogs.perkinsconsulting.com/post/64-bit-SQL-Server-2005-SSIS-and-UNC-paths-Part-2.aspx
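
    A minimal sketch of that check, with hypothetical account and path names:

      rem Open a shell as the proxy account, then probe every hop in the path.
      runas /user:DOMAIN\SsisProxy cmd
      net use \\server\share
      dir \\server\share
      dir \\server\share\packages\config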

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations. Acknowledgements Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010. Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010. Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team. Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will double in price to recover costs; the work is approved by the guys with budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches. As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys. As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others, it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to Better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new baselines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. - Branching and Merging Primer by Chris Birmele, Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual.
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line - this is where your stable code lives, where any build has known entities, always passes, and has a happy test that passes as well. Many development projects consist of a single “Main” line of source and artifacts. This is good; at least there is source control. There are however a couple of issues that need to be considered. What happens if: you and your team are working on a new set of features and the customer wants a change to his current version? you are working on two features and the customer decides to abandon one of them? you have two teams working on different feature sets and their changes start interfering with each other? I just use labels instead of branches? That's a lot of “what if’s”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch. It's standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has its permissions changed so that contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have no artifacts in version control and hence no need for isolation.   Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development: The Team completes their work according to their definition of done. Merge from “Main” into “Sprint1” (Forward Integration) Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (We will talk about release branches later.) Merge from “Sprint1” into “Main” to commit your changes.
(Reverse Integration) Check-in Delete the Sprint1 branch Note: The Sprint 1 branch is no longer required as its useful life has been concluded. Check-in Done But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet; we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing. In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brainwashing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result you fix it on the Release branch. If during your final testing or your “Test Please” you find there are issues or bugs then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch; instead, that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product you can create an RTM branch. Once you have fixed all the bugs you can, and added any you can’t to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an Audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not Auditable, and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check.
Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.   Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main: you should do a forward merge before a reverse merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.   Figure: After the Sprint Planning meeting the process begins again. Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new Branch to develop in. How do we handle a bug(s) in production that can’t wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging Sprint 2 back into Main.” covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost): Backlog: Add the bug to the backlog and fix it in the next Sprint Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal. Cancel Sprint: Cancel the sprint and concentrate all resources on fixing the bug(s) Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost. The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3 as they are uncomplicated but severe bugs. Figure: Real world issue where a bug needs to be fixed in the current release.
If the bug(s) is urgent enough then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a forward and then a reverse merge again to get the new code into Main. But what about the branch? Are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of sync with Main. Note: Avoid the “Big bang merge” at all costs.   Figure: Merging Sprint 2 back into Main, the Reverse Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don’t overuse it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010
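
    For reference, the sprint cycle described above maps onto a handful of tf.exe commands - a sketch with hypothetical server paths:

      rem Create the sprint branch from Main
      tf branch $/Product/Main $/Product/Sprint2

      rem Forward Integration: merge Main into the sprint branch, then check in
      tf merge $/Product/Main $/Product/Sprint2 /recursive
      tf checkin /comment:"FI from Main"

      rem Reverse Integration: merge the sprint branch back into Main
      tf merge $/Product/Sprint2 $/Product/Main /recursive
      tf checkin /comment:"RI from Sprint2"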

    Read the article

  • SQLAuthority News – Monthly Roundup of Best SQL Posts

    - by pinaldave
    After receiving lots of requests from different readers for a long time, I have decided to write my first monthly roundup. If all of you like it I will continue writing the same every month. In fact, I really like the idea, as I was able to go back and read all of my posts written this month. The month started with answering one of the most common questions asked of me: What is AdventureWorks? Many of you know the answer, but to my surprise quite a few readers did not. There were a few extra blog posts along the same lines. SQL SERVER – The Difference between Dual Core vs. Core 2 Duo SQLAuthority News – Wireless Router Security and Attached Devices – Complex Password SQL SERVER – DATE and TIME in SQL Server 2008 DMVs are also among the most handy tools available in SQL Server; I have written the following blog posts where DMVs are used in scripts. SQL SERVER – Get Latest SQL Query for Sessions – DMV SQL SERVER – Find Most Expensive Queries Using DMV SQL SERVER – List All the DMV and DMF on Server I was able to write two follow-ups to my earlier series where I was finding the size of indexes using different SQL scripts, and in fact one of the articles uses PowerShell as well. This was my very first attempt to use PowerShell. SQL SERVER – Size of Index Table for Each Index – Solution 2 SQL SERVER – Size of Index Table for Each Index – Solution 3 – Powershell SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup Without realizing it, I wrote a series of blog posts on disabled indexes; here is the complete list. I plan to write one more follow-up list on the same. SQL SERVER – Disable Clustered Index and Data Insert SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index SQL SERVER – Disabled Index and Update Statistics Two special posts which I found very interesting to write are the following. SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008 SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions In personal adventures, I won the Community Impact Award for Last Year from Microsoft. Please leave your comment about how I can improve this roundup or what more details I should include in the same. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Big Data – Buzz Words: What is NoSQL – Day 5 of 21

    - by Pinal Dave
    In yesterday's blog post we explored the basic architecture of Big Data. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – NoSQL. What is NoSQL? NoSQL stands for Not Relational SQL or Not Only SQL. Lots of people think that NoSQL means there is No SQL, which is not true – they sound the same but the meaning is totally different. NoSQL does use SQL but it uses more than SQL to achieve its goal. As per Wikipedia's NoSQL Database Definition – “A NoSQL database provides a mechanism for storage and retrieval of data that uses looser consistency models than traditional relational databases.“ Why use NoSQL? A traditional relational database usually deals with predictable, structured data. As the world has moved forward with unstructured data, we often see the limitations of the traditional relational database in dealing with it. For example, nowadays we have data in the form of SMS messages, wave files, photos and video. It is a bit difficult to manage them by using a traditional relational database. I often see people using a BLOB field to store such data. A BLOB can store the data, but when we have to retrieve or process it, the same BLOB is extremely slow at handling unstructured data. A NoSQL database is the type of database that can handle the unstructured, unorganized and unpredictable data that our business needs. Along with support for unstructured data, the other advantages of a NoSQL database are high performance and high availability. Eventual Consistency Additionally, note that a NoSQL database may not provide 100% ACID (Atomicity, Consistency, Isolation, Durability) compliance. Though NoSQL databases may not support ACID, they provide eventual consistency. That means that over a long period of time all updates can be expected to propagate eventually through the system, and the data will be consistent. Taxonomy Taxonomy is the practice of classifying things or concepts according to a set of principles. The NoSQL taxonomy supports column stores, document stores, key-value stores, and graph databases. We will discuss the taxonomy in detail in later blog posts. Here are a few examples of each NoSQL category. Column: Hbase, Cassandra, Accumulo Document: MongoDB, Couchbase, Raven Key-value: Dynamo, Riak, Azure, Redis, Cache, GT.m Graph: Neo4J, Allegro, Virtuoso, Bigdata As of now there are over 150 NoSQL databases and you can read everything about them in this single link. Tomorrow In tomorrow's blog post we will discuss the buzz word Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • What's new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on Visual Studio Team System (VSTS) ALM-related parts: Visual Studio Ultimate 2010 NEW: IntelliTrace® (aka the historical debugger) NEW: Architecture Tools New Project Type: Modeling Project UML Diagrams UML Use Case Diagram UML Class Diagram UML Sequence Diagram (supports reverse engineering) UML Activity Diagram UML Component Diagram Layer Diagram (with Team Build integration for layer validation) Architecture Explorer Dependency visualization DGML Web & Load Tests Visual Studio Premium 2010 NEW: Architecture Tools Read-only model viewer Development Tools Code Analysis New Rules like SQL Injection detection Rule Sets Code Profiler Multi-Tier Profiling JScript Profiling Profiling applications on virtual machines in sampling mode Code Metrics Test Tools Code Coverage NEW: Test Impact Analysis NEW: Coded UI Test Database Tools (DB schema versioning & deployment) Visual Studio Professional 2010 Debugger Mixed Mode Debugging for 64-bit Applications Export/Import of Breakpoints and data tips Visual Studio Test Professional 2010 Microsoft Test Manager (MTM, formerly known as "Camano") Fast Forward Testing Visual Studio Team Foundation Server 2010 Work Item Tracking and Project Management New MSF templates for Agile and CMMI (V 5.0) Hierarchical Work Items Custom Work Item Link Types Ready to use Excel agile project management workbooks for managing your backlogs (including capacity planning) Convert Work Item query to an Excel report MS Excel integration Support for Work Item hierarchies Formatting is preserved after doing a 'Refresh' MS Project integration Hierarchy and successor/predecessor info is now synchronized NEW: Test Case Management Version Control Public Workspaces Branch & Merge Visualization Tracking of Changesets & Work Items Gated Check-In Team Build Build Controllers and Agents Workflow 4-based build process NEW: Lab Management (only a pre-release is available at the moment!) Project Portal & Reporting Dashboards (on SharePoint Portal) Burndown Chart TFS Web Parts (to show data from TFS) Administration & Operations Topology enhancements Application tier network load balancing (NLB) SQL Server scale out Improved Sharepoint flexibility Report Server flexibility Zone support Kerberos support Separation of TFS and SQL administration Setup Separate install from configure Improved installation wizards Optional components Simplified account requirements Improved Reporting Services configuration Setup consolidation Upgrading from previous TFS versions Improved IIS flexibility Administration Consolidation of command line tools User rename support Project Collections Archive/restore individual project collections Move Team Project Collections Server consolidation Team Project Collection Split Team Project Collection Isolation Server request cancellation Licensing: TFS server license included in MSDN subscriptions Removed features (former features not part of Visual Studio 2010): Debug » Start With Application Verifier Object Test Bench IntelliSense for C++ / CLI Debugging support for SQL 2000

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like this: I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus, register on EventBrite if you want to come along. Synopsis Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes; Suggested issues and risks around caching including staleness of data, resource usage, performance and testing; Walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services - which I'll be releasing on CodePlex shortly; Listed common options for cache providers and their offerings. Discussion Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL and cost (e.g. caching results of paid-for services). Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern. Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers only accessible to cache clients via IPsec. Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, that serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade. Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model making it a good enterprise choice. .NET port of memcached well thought of for performance but lack of replication makes it less suitable for these shared scenarios. Replicated fork – repcached – untried and less active than memcached. NCache also well thought of, but Express version too limited for enterprise scenarios, and commercial versions look costly compared to AppFabric.
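
    For flavour, a cache-aside sketch against AppFabric Caching (the cache name, key scheme and TTL are assumptions, and the cached type must be serializable):

      using System;
      using Microsoft.ApplicationServer.Caching;

      public class CustomerCache
      {
          private static readonly DataCacheFactory Factory = new DataCacheFactory();
          private readonly DataCache _cache = Factory.GetCache("EnterpriseCache"); // assumed cache name

          public Customer Get(int id)
          {
              string key = "customer:" + id;
              var cached = (Customer)_cache.Get(key);
              if (cached != null) return cached;

              Customer fresh = LoadFromDatabase(id);            // hypothetical loader
              _cache.Put(key, fresh, TimeSpan.FromMinutes(10)); // TTL also isolates brief outages
              return fresh;
          }

          private Customer LoadFromDatabase(int id) { /* query the system of record */ return null; }
      }

      [Serializable]
      public class Customer { /* ... */ }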

    Read the article
