Search Results

Search found 3479 results on 140 pages for 'chris boesch'.

Page 123/140 | < Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >

  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 Hospitals and 5,000 Clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1]. FairWarning has used MySQL as their solutions' database from their start in 2005 to worldwide expansion and market leadership. FairWarning recently migrated their solutions from MyISAM to InnoDB and upgraded from MySQL 5.5 to 5.6. Following are some of the benefits they've had as a result of those changes, and reasons for their continued reliance on MySQL (from the FairWarning MySQL Case Study).

    Scalability to Handle Terabytes of Data: FairWarning's customers have a lot of data. On average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months).

    Low or Zero Admin = Few DBAs: "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business." - Chris Arnold, FairWarning Vice President of Product Management and Engineering.

    Performance Schema: As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6's new maintenance and management features have helped FairWarning keep up. In particular, MySQL 5.6 Performance Schema's low-level metrics have provided critical insight into how the system is performing and why.

    Support for Multi-CPU Threads: MySQL 5.6's support for multiple concurrent CPU threads, and FairWarning's custom data loader, allow multiple files to load into a single table simultaneously vs. one at a time. As a result, their data load time has been reduced by 500%.

    MySQL Enterprise Hot Backup: Because hospitals and clinics never stop, FairWarning solutions can't either. FairWarning changed from using mysqldump to MySQL Enterprise Hot Backup, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%.

    MySQL Enterprise Edition and Product Roadmap Provide a Complete Solution: "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules." - Chris Arnold

    Learn More>> FairWarning MySQL Case Study | Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation | Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and / or slides + Q&A transcript) | MyISAM to InnoDB – Why and How on-demand webinar (audio and / or slides + Q&A transcript) | Top 10 Reasons to Use MySQL as an Embedded Database white paper

    [1] 2013 Best in KLAS: Software & Services report, January, 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.
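
    As an aside, here is a minimal sketch of the two MySQL pieces mentioned above, the MyISAM-to-InnoDB migration and the Performance Schema metrics; the table name and column choices are illustrative assumptions, not FairWarning's schema:

        -- Storage engine migration of the kind described above (run per table):
        ALTER TABLE events ENGINE=InnoDB;

        -- MySQL 5.6 Performance Schema: an example of the low-level "how and why"
        -- metrics, here the ten statement digests with the highest average latency.
        SELECT DIGEST_TEXT, COUNT_STAR, AVG_TIMER_WAIT
        FROM performance_schema.events_statements_summary_by_digest
        ORDER BY AVG_TIMER_WAIT DESC
        LIMIT 10;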

    Read the article

  • Make seems to think a prerequisite is an intermediate file, removes it

    - by James
    For starters, this exercise in GNU make was admittedly just that: an exercise rather than a practicality, since a simple bash script would have sufficed. However, it brought up interesting behavior I don't quite understand. I've written a seemingly simple Makefile to handle generation of SSL key/cert pairs as necessary for MySQL. My goal was for make <name> to result in <name>-key.pem, <name>-cert.pem, and any other necessary files (specifically, the CA pair if any of it is missing or needs updating, which leads into another interesting follow-up exercise of handling reverse deps to reissue any certs that had been signed by a missing/updated CA cert). After executing all rules as expected, make seems to be too aggressive at identifying intermediate files for removal; it removes a file I thought would be "safe", since it should have been generated as a prereq to the main rule I'm invoking. (Humbly translated, I have likely misinterpreted make's documented behavior to suit my expectation, but don't understand how. ;-)

    Edited (thanks, Chris!): Adding %-cert.pem to .PRECIOUS does, of course, prevent the deletion. (I had been using the wrong syntax.)

    Makefile:

        OPENSSL = /usr/bin/openssl  # Corrected, thanks Chris!
        .PHONY: clean
        default: ca
        clean:
            rm -I *.pem
        %: %-key.pem %-cert.pem
            @# Placeholder (to make this implicit create a rule and not cancel one)
        Makefile:
            @# Prevent the catch-all from matching Makefile
        ca-cert.pem: ca-key.pem
            $(OPENSSL) req -new -x509 -nodes -days 1000 -key ca-key.pem > $@
        %-key.pem:
            $(OPENSSL) genrsa 2048 > $@
        %-cert.pem: %-csr.pem ca-cert.pem ca-key.pem
            $(OPENSSL) x509 -req -in $< -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > $@

    Output:

        $ make host1
        /usr/bin/openssl genrsa 2048 > ca-key.pem
        /usr/bin/openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem
        /usr/bin/openssl genrsa 2048 > host1-key.pem
        /usr/bin/openssl req -new -days 1000 -nodes -key host1-key.pem > host1-csr.pem
        /usr/bin/openssl x509 -req -in host1-csr.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > host1-cert.pem
        rm host1-csr.pem host1-cert.pem

    This is driving me crazy, and I'll happily try any suggestions and post results. If I'm just totally noobing out on this one, feel free to jibe away. You can't possibly hurt my feelings. :)
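
    For reference, a minimal sketch of the fix described in the edit above: listing the pattern targets under .PRECIOUS so make's intermediate-file cleanup leaves them alone. The %-csr.pem rule shown here is an assumption reconstructed from the command output, not the poster's exact Makefile:

        # Keep files matched by these patterns even when make classifies them as
        # intermediate (this is what prevents "rm host1-csr.pem host1-cert.pem").
        .PRECIOUS: %-key.pem %-csr.pem %-cert.pem

        # Assumed CSR rule, inferred from the output above; $< is the key file.
        %-csr.pem: %-key.pem
            $(OPENSSL) req -new -days 1000 -nodes -key $< > $@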

    Read the article

  • Python DictReader - Skipping rows with missing columns?

    - by victorhooi
    heya, I have an Excel .CSV file I'm attempting to read in with DictReader. All seems to be well, except it seems to omit rows, specifically those with missing columns. Our input looks like:

        mail,givenName,sn,lorem,ipsum,dolor,telephoneNumber
        [email protected],ian,bay,3424,8403,2535,+65(2)34523534545
        [email protected],mike,gibson,3424,8403,2535,+65(2)34523534545
        [email protected],ross,martin,,,,+65(2)34523534545
        [email protected],david,connor,,,,+65(2)34523534545
        chris[email protected],chris,call,3424,8403,2535,+65(2)34523534545

    So some of the rows have missing lorem/ipsum/dolor columns, and it's just a string of commas for those. We're reading it in with:

        def read_gd_dump(input_file="blah 20100423.csv"):
            gd_extract = csv.DictReader(open('blah 20100423.csv'), restval='missing', dialect='excel')
            return dict([(row['something'], row) for row in gd_extract])

    And I checked that "something" (the key for our dict) isn't one of the missing columns; I had originally suspected it might be that. It's one of the columns after that. However, DictReader seems to completely skip over the rows. I tried setting restval to something, didn't seem to make any difference. I can't seem to find anything in Python's CSV docs (http://docs.python.org/library/csv.html) that would explain this behaviour, but I may have misread something. Any ideas? Thanks, Victor
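
    A runnable sketch of the behaviour in question (with placeholder addresses, since the real ones are obfuscated above): csv.DictReader does not drop rows that merely have empty or missing columns, it pads them with restval, so the omission is probably happening elsewhere (for example, fully blank lines are skipped, and duplicate keys collapse in the dict comprehension):

        import csv
        import io

        # Placeholder data: the second row has empty lorem/ipsum/dolor values,
        # the third row is genuinely short (fewer fields than the header).
        data = io.StringIO(
            "mail,givenName,sn,lorem,ipsum,dolor,telephoneNumber\n"
            "ian@example.com,ian,bay,3424,8403,2535,+65(2)34523534545\n"
            "ross@example.com,ross,martin,,,,+65(2)34523534545\n"
            "dave@example.com,david,connor\n"
        )

        rows = list(csv.DictReader(data, restval='missing', dialect='excel'))
        for row in rows:
            # All three data rows are present; the short row gets restval
            # ('missing') for the keys it has no value for.
            print(row['mail'], row['lorem'])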

    Read the article

  • Using XPath to get the correct value in selectName

    - by Sangeeta
    Below I have written the code. I need help to get the right value in selectName. I am new to XPath. Basically, with this code I am trying to achieve the following: if employeeName = Chris, I need to return 23549 to the calling function. Please help. Part of the code:

        public static string getEmployeeeID(string employeeName)
        {
            Cache cache = HttpContext.Current.Cache;
            string cacheNameEmployee = employeeName + "Tech";
            if (cache[cacheNameEpm] == null)
            {
                XPathDocument doc = new XPathDocument(HttpContext.Current.Request.MapPath("inc/xml/" + SiteManager.Site.ToString() + "TechToolTip.xml"));
                XPathNavigator navigator = doc.CreateNavigator();
                string selectName = "/Technologies/Technology[FieldName='" + fieldName + "']/EpmId";
                XPathNodeIterator nodes = navigator.Select(selectName);
                if (nodes.Count > 0)
                {
                    if (nodes.Current.Name.Equals("FieldName"))
                        //nodes.Current.MoveToParent();
                        //nodes.Current.MoveToParent();
                        //nodes.Current.MoveToChild("//EpmId");
                    cache.Add(cacheNameEpm, nodes.Current.Value, null, DateTime.Now + new TimeSpan(4, 0, 0), System.Web.Caching.Cache.NoSlidingExpiration, CacheItemPriority.Default, null);
                }
            }
            return cache[cacheNameEpm] as string;
        }

    XML file:

        <?xml version="1.0" encoding="utf-8" ?>
        <Employees>
          <Employee>
            <EName>Chris</EName>
            <ID>23556</ID>
          </Employee>
          <Employee>
            <EName>CBailey</EName>
            <ID>22222</ID>
          </Employee>
          <Employee>
            <EName>Meghan</EName>
            <ID>12345</ID>
          </Employee>
        </Employees>

    PLEASE NOTE: This XML file has 100 nodes. I just put 1 to indicate the structure.
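
    For comparison, a minimal sketch (not the poster's caching code) of an XPath expression that matches the Employees XML shown above and returns the ID for a given EName; the file path and method name are illustrative assumptions:

        using System.Xml.XPath;

        public static string GetEmployeeId(string employeeName)
        {
            // Load the XML shown above (the path is a placeholder).
            XPathDocument doc = new XPathDocument("Employees.xml");
            XPathNavigator navigator = doc.CreateNavigator();

            // Select the <ID> of the <Employee> whose <EName> matches.
            string selectName = "/Employees/Employee[EName='" + employeeName + "']/ID";
            XPathNavigator node = navigator.SelectSingleNode(selectName);

            return node != null ? node.Value : null;   // "Chris" -> "23556"
        }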

    Read the article

  • WP7 - listbox Binding

    - by Jeff V
    I have an ObservableCollection that I want to bind to my listbox...

        lbRosterList.ItemsSource = App.ViewModel.rosterItemsCollection;

    However, in that collection I have another collection within it:

        [DataMember]
        public ObservableCollection<PersonDetail> ContactInfo
        {
            get { return _ContactInfo; }
            set
            {
                if (value != _ContactInfo)
                {
                    _ContactInfo = value;
                    NotifyPropertyChanged("ContactInfo");
                }
            }
        }

    PersonDetail contains 2 properties: name and e-mail. I would like the listbox to have those values for each item in rosterItemsCollection:

        RosterId = 0; RosterName = "test";  ContactInfo.name = "Art";   ContactInfo.email = "[email protected]";
        RosterId = 0; RosterName = "test";  ContactInfo.name = "bob";   ContactInfo.email = "[email protected]";
        RosterId = 1; RosterName = "test1"; ContactInfo.name = "chris"; ContactInfo.email = "[email protected]";
        RosterId = 1; RosterName = "test1"; ContactInfo.name = "Sam";   ContactInfo.email = "[email protected]";

    I would like the listbox to display the ContactInfo information. I hope this makes sense... My XAML that I've tried with little success:

        <listbox x:Name="lbRosterList" ItemsSource="rosterItemCollection">
        <textblock x:name="itemText" text="{Binding Path=name}"/>

    What am I doing incorrectly?
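
    A minimal sketch of an ItemTemplate that reaches into the nested collection; it assumes ItemsSource is still set in code-behind as above, and that the property names and casing (RosterName, ContactInfo, name, email) match the real classes:

        <!-- Sketch only: one row per roster item, with an inner list over ContactInfo. -->
        <ListBox x:Name="lbRosterList">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding RosterName}" />
                        <ListBox ItemsSource="{Binding ContactInfo}">
                            <ListBox.ItemTemplate>
                                <DataTemplate>
                                    <StackPanel Orientation="Horizontal">
                                        <TextBlock Text="{Binding name}" Margin="0,0,12,0" />
                                        <TextBlock Text="{Binding email}" />
                                    </StackPanel>
                                </DataTemplate>
                            </ListBox.ItemTemplate>
                        </ListBox>
                    </StackPanel>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>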

    Read the article

  • Easy Oracle Log-Shipping

    - by ItsAMystery
    Hi All, I am looking for a decent way of keeping a secondary Oracle database up to date without exporting and importing the database each time. There are 3 users on the instance that I would essentially like to 'log ship', if that's what it is called on Oracle! Can anyone suggest anything? The database is well under a GB total and we are running 10g express (although I have thought about using 10g standard as we have a spare license). Cheers Chris

    Read the article

  • Windows 2003 Dynamic Disk error

    - by ChrisH
    Hi, I was trying to ghost a partition on a Windows 2003 server, using Ghost 2003. Unfortunately things went horribly wrong, and now I can't boot back into my system. As you can see, Ghost creates a wee little partition to do its dirty work, and has dislodged my other partitions. Partition 2 in the image below is my C drive. Any suggestions as to how I might get this active again so that it boots? Cheers, Chris

    Read the article

  • Debugger for Iptables

    - by chris_l
    Hi, I'm looking for an easy way to follow a packet through the iptables rules. This is not so much about logging, because I don't want to log all traffic (and I only want to have LOG targets for very few rules). Something like Wireshark for Iptables. Or maybe even something similar to a debugger for a programming language. Thanks Chris
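
    One option that behaves a bit like single-stepping, without putting LOG targets on every rule, is the kernel's TRACE target in the raw table; a sketch, assuming an IPv4 host and using port 80 purely as an example filter (the logging module name varies by kernel version):

        # Load a logging backend for TRACE (name depends on the kernel).
        modprobe nf_log_ipv4 2>/dev/null || modprobe ipt_LOG

        # Mark the packets you care about; TRACE is only valid in the raw table.
        iptables -t raw -A PREROUTING -p tcp --dport 80 -j TRACE
        iptables -t raw -A OUTPUT     -p tcp --dport 80 -j TRACE

        # Every rule the packet traverses is then logged as "TRACE: table:chain:type:rulenum".
        tail -f /var/log/kern.log | grep 'TRACE:'

        # Clean up when done.
        iptables -t raw -D PREROUTING -p tcp --dport 80 -j TRACE
        iptables -t raw -D OUTPUT     -p tcp --dport 80 -j TRACE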

    Read the article

  • Automated incremental backups from Plesk on Centos to Amazon S3

    - by ChrisS
    Hi, I've done a fair bit of research on this via Google and there seem to be quite a few ways of possibly doing this. I'm looking to incrementally back up new and updated files in two directories on my Plesk-run CentOS 5.2 server: /backups and /var/www/vhosts (preferably only httpdocs within each vhost). Has anyone got some great feedback from using the various solutions? There seem to be various Java, Perl and Ruby based solutions out there. Many thanks, Chris
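
    As one possible approach (a Python/GnuPG based tool rather than the Java/Perl/Ruby options mentioned), a sketch using duplicity, assuming an existing S3 bucket and placeholder credentials; it does a full backup on the first run and incrementals afterwards:

        # Placeholder credentials for duplicity's S3 backend.
        export AWS_ACCESS_KEY_ID=YOUR_KEY
        export AWS_SECRET_ACCESS_KEY=YOUR_SECRET
        export PASSPHRASE=YOUR_GPG_PASSPHRASE   # the archives are encrypted

        # Incremental by default after the first full backup.
        duplicity /backups        s3+http://my-backup-bucket/backups
        duplicity /var/www/vhosts s3+http://my-backup-bucket/vhosts

        # Example nightly cron entry:
        # 0 2 * * * root duplicity /var/www/vhosts s3+http://my-backup-bucket/vhosts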

    Read the article

  • Mint Linux - Downgrade Java to 1.5

    - by Chrisc
    Hello, I posted this over at stack overflow, but had a suggestion to post it here for better help. Currently, I am running Mint Linux (Release 9). I need to downgrade Java from version 1.6 to 1.5, and have been trying to figure out how to go about this. So far, I've had no luck. The package manager doesn't seem to have it. Does anyone have any suggestions? Thanks, - Chris

    Read the article

  • Software to Stream Media Content from Dedicated Server [closed]

    - by Christian
    We have Windows 2008 R2 Servers and we want to stream content (avi, wmv, mpeg etc) to Windows/Mac OS X/iOS etc devices. The visitor must be able to select the file (s)he wants to view within the library. We tried to accomplish this using VLC, Windows Media Services (WMS) and MediaPortal. VLC: we didn't find a way to publish the content as a library. WMS: only supports WMV/WMA and needs Media Player. MediaPortal: it is not supported on W2k8R2 Server. Any suggestions? /chris

    Read the article

  • Configure a File Type Item through GPO for a Win2008 R2 Terminal server

    - by user40021
    Hello, I am trying to configure a file-type item for the .axd file type, but I am having trouble with the associated class for this file type. For example, I tried "XML Document" (the .axd files contain XML data), but it does not work: the .axd file will not be opened with the associated application. Any ideas how to solve this? Many thanks in advance. Best regards, Chris

    Read the article

  • Silverlight 4 missing from Visual Studio 2010

    - by mouters
    Hi, I've just finished installing Visual Studio 2010 Professional onto Vista, but I don't seem to have Silverlight 4. If I try to create a new project I can see Silverlight project templates but only seem to be able to target Silverlight 3. Is Silverlight 4 not part of VS2010 Pro by default? I also noticed the MSBuild targets are missing, i.e. the v4.0 folder doesn't exist under the following folder: C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\ Any help/thoughts would be much appreciated. Thanks, Chris

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams and I will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS which have been developed during our own Scrum implementations.

    Acknowledgements: Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010. Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance including the Branching Guidance for TFS 2010. Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

    Scenario: A product is developed, RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will have double the price to pay to recover costs; work is approved by the guys with the budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches.

    As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.

    As I mentioned previously we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well? Many development projects consist of, a single “Main” line of source and artifacts. This is good; at least there is source control . There are however a couple of issues that need to be considered. What happens if: you and your team are working on a new set of features and the customer wants a change to his current version? you are working on two features and the customer decides to abandon one of them? you have two teams working on different feature sets and their changes start interfering with each other? I just use labels instead of branches? That's a lot of “what if’s”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch.   Its standard practice for large projects with lots of developers to use Feature branching and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has the permissions changes so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2 as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.   Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developer’s work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development: The Team completes their work according to their definition of done. Merge from “Main” into “Sprint1” (Forward Integration) Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (we will talk about release branches later) Merge from “Sprint1” into “Main” to commit your changes. 
(Reverse Integration) Check-in Delete the Sprint1 branch Note: The Sprint 1 branch is no longer required as its useful life has been concluded. Check-in Done But you are not yet done with the Sprint. The goal in Scrum is to have a "potentially shippable product" at the end of every Sprint, and we do not have that yet, we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing In 99% of all projects I have been involved in or watched, a "shippable product" only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable unchanging code. This is where you do what at SSW we call a "Test Please". This is first an internal test of the product to make sure it meets the needs of the customer and you generally use a resource external to your Team. Then a "Test Please" is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result you fix it on the Release branch. If during your final testing or your "Test Please" you find there are issues or bugs then you should fix them on the release branch. If you can't fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product owner. Make sure you leave plenty of time between your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch; instead, that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product you can create an RTM branch. Once you have fixed all the bugs you can, and added any you can't to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you "shipped". This is really an Audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not Auditable, and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check. 
Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch the RTM branch should never be deleted, or only deleted according to your companies legal policy, which in the UK is usually 7 years.   Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web test and all the other good engineering practices that help produce reliable software.     Figure: After the Sprint Planning meeting the process begins again. Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2 you can create your new Branch to develop in. How do we handle a bug(s) in production that can’t wait? Although in Scrum the only work done should be on the backlog there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging print 2 back into Main.” covers, what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost): Backlog: Add the bug to the backlog and fix it in the next Sprint Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s) Note: This can be a very costly if the current sprint has already had a lot of work completed as it will be lost. The choice will depend on the complexity and severity of the bug(s) and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3 as they are uncomplicated but severe bugs. Figure: Real world issue where a bug needs fixed in the current release. 
If the bug(s) is urgent enough then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer "Test Please" before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch, are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main and Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of Sync with Main. Note: Avoid the "Big bang merge" at all costs.   Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it as I find these exploratory posts coupled with real world experience really help harden my understanding.  Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid "Anti-Patterns" where possible. Although the diagram above looks complicated I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010
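
    As a footnote to the walkthrough above, the branch/merge steps described can also be driven from plain tf.exe commands; a sketch with placeholder server paths (your Team Project layout will differ):

        rem Create the Sprint branch from Main (start of the sprint).
        tf branch $/Product/Main $/Product/Sprint1
        tf checkin /comment:"Branch Sprint1 from Main"

        rem Forward Integration: merge Main into the sprint branch and stabilise.
        tf merge /recursive $/Product/Main $/Product/Sprint1
        tf checkin /comment:"FI Main -> Sprint1"

        rem Reverse Integration: merge the sprint branch back into Main.
        tf merge /recursive $/Product/Sprint1 $/Product/Main
        tf checkin /comment:"RI Sprint1 -> Main"

        rem The sprint branch's useful life is over.
        tf delete $/Product/Sprint1
        tf checkin /comment:"Delete Sprint1 branch"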

    Read the article

  • GWB | 30 Posts in 60 Days Update

    - by Staff of Geeks
    One month after the contest started, we definitely have some leaders and one blogger who has reached the mark.  Keep up the good work guys, I have really enjoyed the content being produced by our bloggers. Current Winners: Enrique Lima (37 posts) - http://geekswithblogs.net/enriquelima Almost There: Stuart Brierley (28 posts) - http://geekswithblogs.net/StuartBrierley Dave Campbell (26 posts) - http://geekswithblogs.net/WynApseTechnicalMusings Eric Nelson (23 posts) - http://geekswithblogs.net/iupdateable Coming Along: Liam McLennan (17 posts) - http://geekswithblogs.net/liammclennan Christopher House (13 posts) - http://geekswithblogs.net/13DaysaWeek mbcrump (13 posts) - http://geekswithblogs.net/mbcrump Steve Michelotti (10 posts) - http://geekswithblogs.net/michelotti Michael Freidgeim (9 posts) - http://geekswithblogs.net/mnf MarkPearl (9 posts) - http://geekswithblogs.net/MarkPearl Brian Schroer (8 posts) - http://geekswithblogs.net/brians Chris Williams (8 posts) - http://geekswithblogs.net/cwilliams CatherineRussell (7 posts) - http://geekswithblogs.net/CatherineRussell Shawn Cicoria (7 posts) - http://geekswithblogs.net/cicorias Matt Christian (7 posts) - http://geekswithblogs.net/CodeBlog James Michael Hare (7 posts) - http://geekswithblogs.net/BlackRabbitCoder John Blumenauer (7 posts) - http://geekswithblogs.net/jblumenauer Scott Dorman (7 posts) - http://geekswithblogs.net/sdorman   Technorati Tags: Standings,Geekswithblogs,30 in 60

    Read the article

  • Oracle for PCI-DSS Security Webcast

    - by Alex Blyth
    Thanks to everyone who attended the Oracle for PCI-DSS security webcast today. It was good to see how the products we talked about last week can be used to address the PCI standard requirements. A big thanks to Chris Pickett for presenting a great session and running us through a very cool demo showing how the data is protected throughout its life. The replay of the session can be downloaded here. Slides can be downloaded here. Oracle for PCI-DSS Security Compliance. View more presentations from Oracle Australia. Next week we resume our regular schedule with Andrew Clarke taking us through Oracle Application Express (APEX) - one of the best kept secrets in the Oracle Database. Enroll for this session here (and now :) ) Till next week Cheers Alex

    Read the article

  • Umbraco Developer's Christmas Office :)

    - by Vizioz Limited
    This weekend my colleague and I decided it was a good idea to decorate our office for Christmas. It's quite difficult to actually photograph it to its full effect, but you'll have to take our word for it, it looks pretty Christmasy :) We have a 7' Tree covered in lights and decorations, lights around our PC's, tinsel everywhere we could fit it, and even large snowflakes hanging from the ceiling... You'd think we have no work on, but in fact it's the opposite: we're manically busy! But hey, it's a bit of fun and it seems to be cheering everyone up in this otherwise rather Dull Regus Serviced Office ;-) We can definitely recommend doing something a bit different, as it's got us noticed and we've already won enough extra work from companies in the building to pay for our office for a year, not bad :) So here's a photo of our office. Has anyone else decorated their office? I'd be happy to update this post with any good Christmas office photos that you send me! Happy Christmas all! Chris

    Read the article

  • LINQ for SQL Developers and DBA’s

    - by AtulThakor
    Firstly I'd just like to thank the guys who organise the SQL Server User Group (Martin/Tony/Chris) for giving me the opportunity to speak at the recent event. Sorry about the slides taking so long, but here they are along with some extra information. Firstly the demos were all done using LINQPad 4.0 which can be downloaded here: http://www.linqpad.net/ There are 2 versions, 3.5 and 4.0. With 3.5 you should be able to replicate the problem I showed where a query using a parameter which is X characters long would create a different execution plan to a query which uses a parameter which is Y characters long; otherwise I would just use 4.0. The sample database used is AdventureWorksLT2008 which can be downloaded from here: http://msftdbprodsamples.codeplex.com/releases/view/37109 The scripts have been named so that you can select the appropriate way to run them, i.e. C# expression / C# statement; each script can be run individually by highlighting the query and clicking the play symbol or hitting F5. Scripts and Slides: http://sqlblogcasts.com/blogs/atulthakor/An%20Introduction%20to%20LINQ.zip Please don't hesitate to send any questions via email/twitter, I'll try my best to answer your questions! Thanks, Atul

    Read the article

  • Google I/O 2010 - Fireside chat with the Enterprise team

    Google I/O 2010 - Fireside chat with the Enterprise team. Fireside Chats, Enterprise: Chris Vander Mey, Scott McMullan, Ryan Boyd, David Glazer, Evan Gilbert. With the launch of the Google Apps Marketplace, we've introduced a new way to expose your software to businesses - and a new way to extend Google Apps. If you're interested in building apps, what we're thinking about, or if you have other questions about the Marketplace, pull up a chair. For all I/O 2010 sessions, please go to code.google.com

    Read the article

  • Oracle Extends Life Sciences Edition in New Release

    - by charles.knapp
    By Chris Kanaracus, IDG News Service Oracle (ORCL) announced the 17th version of its on-demand CRM (customer relationship management) application Wednesday and made a fresh push into pharmaceutical sales with a Life Sciences edition of the software. New features in CRM on Demand Release 17 include tools for managing sales pipelines and performing forecasts of future business; a redesigned user interface; and added language support. But one CRM industry observer flagged the Life Sciences product as a particular point of interest. Read the full article here.

    Read the article

  • Google I/O 2010 - Fireside chat with the Social Web team

    Google I/O 2010 - Fireside chat with the Social Web team. Fireside Chats, Social Web: David Glazer, DeWitt Clinton, John Panzer, Joseph Smarr, Sami Shalabi, Todd Jackson, Chris Chabot (moderator). Social is quickly becoming an integral part of how we experience the web, and this is your chance to pick the brains of the people who are working on Buzz, the Buzz API and the underlying open protocols such as Activity Streams and OAuth which are an essential component of a truly open & social web. For all I/O 2010 sessions, please go to code.google.com

    Read the article

  • Google I/O 2010 - Fireside chat with the GWT team

    Google I/O 2010 - Fireside chat with the GWT team. Fireside Chats, GWT: Bruce Johnson, Joel Webber, Ray Ryan, Amit Manjhi, Jaime Yap, Kathrin Probst, Eric Ayers, Ian Stewart, Christian Dupuis, Chris Ramsdale (moderator). If you're interested in what the GWT team has been up to since 2.0, here's your chance. We'll have several of the core engineers available to discuss the new features and frameworks in GWT, as well as to answer any questions that you might have. For all I/O 2010 sessions, please go to code.google.com

    Read the article

  • SAPPHIRE 2012: "80% of our customers are SMEs", SAP on how its Cloud solutions are evolving to meet their needs

    SAPPHIRE 2012: "80% of our customers are SMEs". SAP discusses its Cloud solutions and how they are evolving to meet those customers' needs. SAP does not have the image of a vendor that targets small businesses, and yet 80% of its customers are SMEs. The figure was given by Chris Horak, vice president in charge of Cloud solutions, in an interview with Developpez.com at SAPPHIRE NOW 2012. It is true that the SME category covers a wide range of organisations, from the small to the very large. A solution like Business One, which today has 30,000 customers, does however specifically target the smallest of them. Suited to companies with between...

    Read the article

  • Review - Professional Android Programming with Mono for Android and .NET/C#

    - by Wallym
    Mike Riley of Dev Pro Connections Magazine has a review of our Mono for Android book. You can read the full review on their site: Mono for Android has been available for more than a year. The documentation for the product is adequate and has been improving over time, but until recently, finding a good book about the technology was difficult. Such a constraint has been lifted thanks to Wiley's Professional Android Programming with Mono for Android and .NET/C#. Written under the Wrox imprint by several contributors (Wallace B. McClure, Nathan Blevins, John J. Croft, Jonathan Dick, and Chris Hardy), the book is one of the most comprehensive and helpful Mono for Android titles currently on the market. Please buy 8-10 copies of our book for the ones you love, they make great romantic gifts.

    Read the article
