Search Results

Search found 28280 results on 1132 pages for 'having clause'.


  • Memory is leaked only in some machines.

    - by Jorge Córdoba
    We've got a situation where our application is leaking memory while doing some periodic action. The test scenario is composed of a series of processes across two relatively complex WPF windows. The weird thing is that memory only gets leaked on SOME machines, while others, with exactly the same hardware, can run for a really long time (repeating the process every minute) with their memory almost unchanged (once the GC gets rid of used memory, etc). This is .NET + WPF. Any ideas about where to start looking? What can cause leaks on only some machines? (We're talking about a 30-machine test scenario.) I have little experience with WPF; could the graphics card have anything to do with it?

    Read the article

  • How do I put these: @{$subset}, [@ext_subset], [$last_item] in PHP?

    - by Alex
    I'm having trouble translating a subroutine from Perl to PHP (I'm new to Perl). The entire subroutine is as follows:

        sub find_all_subsets {
            if (1 == scalar (@_)) { return [@_] }
            else {
                my @all_subsets = () ;
                my $last_item = pop (@_) ;
                my @first_subsets = find_all_subsets (@_) ;
                foreach my $subset (@first_subsets) {
                    push (@all_subsets, $subset) ;
                    my @ext_subset = @{$subset} ;
                    push (@ext_subset, $last_item) ;
                    push (@all_subsets, [@ext_subset]) ;
                }
                push (@all_subsets, [$last_item]) ;
                return (@all_subsets) ;
            }
        }

    My problem is that I really don't quite understand the Perl syntax, so I'm having trouble writing @{$subset}, [@ext_subset] and [$last_item] in PHP. Thanks, and sorry if the question is stupid.

    Read the article

  • Any idea why this query always returns duplicate items?

    - by Kardo
    I want to get all Images not used by the current ItemID. I tried this subquery, but it always returns duplicate Images: EDITED

        select Images.ImageID, Images.ItemStatus, Images.UserName, Images.Url,
               Image_Item.ItemID, Image_Item.ItemID
        from Images
        left join (select ImageID, ItemID, MAX(DateCreated) x
                   from Image_Item
                   where ItemID != '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e'
                   group by ImageID, ItemID
                   having count(*) = 1) image_item
               on Images.imageid = image_item.imageid
        where ItemID is not null

    I guess the problem is with the subquery, in which I can't avoid duplicate rows:

        select ImageID, ItemID, MAX(DateCreated) x
        from Image_Item
        where ItemID != '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e'
        group by ImageID, ItemID
        having count(*) = 1

    Result:

        F2EECBDC-963D-42A7-90B1-4F82F89A64C7    0578AC61-3C32-4A1D-812C-60A09A661E71
        F2EECBDC-963D-42A7-90B1-4F82F89A64C7    9A4EC913-5AD6-4F9E-AF6D-CF4455D81C10
        42BC8B1A-7430-4915-9CDA-C907CBC76D6A    CB298EB9-A105-4797-985E-A370013B684F
        16371C34-B861-477C-9A7C-DEB27C8F333D    44E6349B-7EBF-4C7E-B3B0-1C6E2F19992C

    Table: Images

        ImageID        uniqueidentifier
        UserName       nvarchar(100)
        DateCreated    smalldatetime
        Url            nvarchar(250)
        ItemStatus     char(1)

    Table: Image_Item

        ImageID        uniqueidentifier
        ItemID         uniqueidentifier
        UserName       nvarchar(100)
        ItemStatus     char(1)
        DateCreated    smalldatetime

    Any kind of help is highly appreciated.

    Read the article

  • Find where a variable is defined in PHP (And/or SMARTY)?

    - by Jon
    I'm currently working on a very large project, and am under a lot of pressure to finish it soon, and I'm having a serious problem. The last programmer who worked on this defined variables in a very odd way - the config variables aren't all in the same file, they're spread out across the entire project of over 500 files and 100k+ lines of code, and I'm having a hell of a time figuring out where a certain variable is so I can fix an issue. Is there a way to track this variable down? I believe he's using SMARTY (which I cannot stand, due to issues like this), and the variable is a template variable. I'm fairly sure that the variable I'm looking for was initially defined as a PHP variable and then passed into SMARTY, so I'd like to track down the PHP one; however, if that's impossible, how can I track down where he defined the variable for SMARTY? P.S. I'm on Vista, and don't have ssh access to the server, so 'grep' is out of the question.

    Read the article

  • Sort List based on dynamically generated numbers in C++

    - by user367322
    I have a list of objects ("Move"s in this case) that I want to sort based on their calculated evaluation. So, I have the List, and a bunch of numbers that are "associated" with the elements in the list. I now want to sort the List elements with the first element having the lowest associated number, and the last having the highest. Once the items are ordered I can discard the associated numbers. How do I do this? This is what my code looks like (kind of):

        list<Move> moves = board.getLegalMoves(board.turn);
        for (i = moves.begin(); i != moves.end(); ++i) {
            // ...
            a = max; // <-- number associated with current Move
        }

    Read the article

  • How to Use a Windows App with Embedded files and folders to copy to a destination. In C#

    - by Mark Sweetman
    I am trying to write a small application whose only purpose is to copy some folders and .cs source files into a user-specified directory. I can do it easily enough by having the application look for the files and folders in its own install directory and then copy them to their destination directory, but I was wondering if it's possible to embed the folders and files into the application, so that when you run it, it creates or copies them from the exe directly to the target directory, rather than searching for them in the app's install directory and copying them over. Basically I'm trying to have only a single exe file rather than an exe file with a bunch of folders and files alongside it. Is this possible to do with just a Windows Forms app, without using an actual installer class?
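
    One common way to get a single exe is to add the payload files to the project with Build Action set to Embedded Resource and write them back out at run time. Below is a minimal sketch of that idea, not a drop-in answer: it assumes the standard manifest-resource APIs, the class and method names (EmbeddedPayload, ExtractEmbeddedFiles, targetDir) are made up, and because resource names are dot-separated ("MyApp.Templates.Foo.cs") the folder structure is flattened here; rebuilding folders would need a naming convention of your own.

        using System.IO;
        using System.Reflection;

        internal static class EmbeddedPayload
        {
            // Writes every embedded resource in the running exe out to targetDir.
            // Assumes the files were added with Build Action = Embedded Resource.
            public static void ExtractEmbeddedFiles(string targetDir)
            {
                Assembly asm = Assembly.GetExecutingAssembly();
                Directory.CreateDirectory(targetDir);

                foreach (string resourceName in asm.GetManifestResourceNames())
                {
                    // Resource names look like "MyApp.Templates.Foo.cs"; keep the
                    // last two segments ("Foo.cs") and drop the folder information.
                    string[] parts = resourceName.Split('.');
                    string fileName = parts[parts.Length - 2] + "." + parts[parts.Length - 1];

                    using (Stream source = asm.GetManifestResourceStream(resourceName))
                    using (FileStream target = File.Create(Path.Combine(targetDir, fileName)))
                    {
                        source.CopyTo(target); // Stream.CopyTo needs .NET 4; copy in a loop on older frameworks
                    }
                }
            }
        }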

    Read the article

  • [ruby] How to convert STDIN contents to an array?

    - by miketaylr
    I've got a file INPUT that has the following contents:

        123\n
        456\n
        789

    I want to run my script like so: script.rb < INPUT and have it convert the contents of the INPUT file to an array, splitting on the newline character. So, I'd have something like myArray = [123,456,789]. Here's what I've tried to do, and I'm not having much luck:

        myArray = STDIN.to_s
        myArray.split(/\n/)
        puts field.size

    I'm expecting this to print 3, but I'm getting 15. I'm really confused here. Any pointers?

    Read the article

  • Create Object using ObjectBuilder

    - by dhinesh
    I want to create objects using ObjectBuilder or ObjectBuilder2 (I do not want to use StructureMap). I was able to create an object with a parameterless constructor using the code below:

        public class ObjectFactory : BuilderBase<BuilderStage>
        {
            public static T BuildUp<T>()
            {
                var builder = new Builder();
                var locator = new Locator { { typeof(ILifetimeContainer), new LifetimeContainer() } };
                var buildUp = builder.BuildUp<T>(locator, null, null);
                return buildUp;
            }
        }

    To create a Customer you just call ObjectFactory.BuildUp<Customer>(). However, this only works for classes with a parameterless constructor; I need to create objects whose constructors take parameters.
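
    ObjectBuilder can be configured to call parameterised constructors through its policy pipeline, but the exact policy setup differs between ObjectBuilder versions, so it isn't sketched here. As a hedge, the same factory shape can be had with plain reflection: the following is a fallback sketch using Activator.CreateInstance, not ObjectBuilder itself, and the SimpleFactory name and Customer arguments are purely illustrative.

        using System;

        public static class SimpleFactory
        {
            // Plain-reflection fallback, NOT ObjectBuilder: Activator picks the
            // constructor whose parameter list matches the supplied arguments.
            public static T BuildUp<T>(params object[] constructorArgs)
            {
                return (T)Activator.CreateInstance(typeof(T), constructorArgs);
            }
        }

        // Usage, assuming Customer has a matching constructor:
        // var customer = SimpleFactory.BuildUp<Customer>("Jane Doe", 42);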

    Read the article

  • C - How to manipulate typedef structure pointer?

    - by AbhishekJoshi
        typedef struct {
            int id;
            char* first;
            char* last;
        }* person;

        person* people;

    Hi. How can I use the above, all set globally, to fill people with different "person"s? I am having issues wrapping my head around the typedef'd struct pointer. I am aware pointers are like arrays, but I'm having trouble putting this all together... I would like to keep the above code as is, as well. Edit 1: char first should be char* first.

    Read the article

  • Vim: `cd` to path stored in variable

    - by tsyu80
    I'm pretty new to vim, and I'm having a hard time understanding some subtleties with vim scripting. Specifically, I'm having trouble working with commands that expect an unquoted string (is there a name for this?). For example:

        cd some/unquoted/string/path

    The problem is that I'd like to pass a variable, but calling

        let pathname = 'some/path'
        cd pathname

    will try to change the current directory to 'pathname' instead of 'some/path'. One way around this is to use

        let cmd = 'cd ' . pathname
        execute cmd

    but this seems a bit roundabout. This StackOverflow question actually uses cd with a variable, but it doesn't work on my system ("a:path" is treated as the path, as described above). I'm using cd as a specific example, but this behavior isn't unique to cd; for example, the edit command also behaves this way. (Is there a name for this type of command?)

    Read the article

  • Why does my computer run slowly and freeze sometimes?

    - by Brae
    EDIT: I disabled sound in the BIOS, rebooted, and it hung. I removed the (previously) faulty HDD, rebooted, and it hung. I have managed to get my Realtek audio manager open again after its mysterious disappearance, so my microphone is now working again; to fix it I had to uninstall the audio drivers, disable audio in the BIOS, install the audio drivers, then enable audio in the BIOS. Access via the network (with the faulty HDD in) seems not to be triggering hangs at the moment. I think with the sound problem fixed it might play a little nicer, but I think it's still going to hang. If it does, then I'm fairly sure it's been narrowed down to the mobo.

    EDIT: I'm pretty convinced my motherboard is the culprit, because nothing else seems to have any obvious problems (bar the hard drive, and the PC still hangs without it being plugged in). So thank you all for helping; once I get more rep I'll upvote a few of the answers.

    My PC is doing some weird things... Sometimes when I open up a program, let's say Adobe Photoshop, it will load everything normally, nice and quick, no problems, and I can use it fine. Other times it's a little odd, and it loads the program as if it's only using half of the CPU. It's pretty obvious when it does it: normally the loading screen skims past, but when it does this weird load you see it slowly tick through each thing, and the program itself becomes incredibly slow. Even Google Chrome does it sometimes. Yet when I exit and reopen the program without doing anything else, it typically opens fine without lagging.

    I think this problem is probably a symptom of something bigger, because of other problems I'm having. Random hangs: no matter what I have open or what I'm doing, my PC will sort of freeze up for a few seconds. If music is playing it will either loop the last second or two, or it will buzz. This only lasts for a few seconds, then it returns to normal without having to restart. During this time all programs lock up and freeze, and the mouse and keyboard are useless.

    I am also having a weird issue with my audio jacks: without touching my PC at all, sometimes I will see a popup saying that I have unplugged something or plugged something in, neither of which has actually happened. I'm pretty sure this is caused by the motherboard. I recently had a 'Pink Screen of Death' (yes, pink) which pointed to my audio drivers.

    The lockups seem to happen with some consistency when someone is accessing my shared files via my home LAN, which leads me to believe either one of my hard drives is acting up or, more likely, the controller. One of my drives had a bit of a crash before; I used Spinrite and managed to recover my stuff, and now the drive appears to be working okay. This is possibly adding to the problems.

    My best guess is that something has gone wrong with my motherboard, possibly a power issue or a dead chip; I really don't know. So what I would like to know is: have I missed some obvious diagnostic to help figure out what it is, or should I just bite the bullet, assume it's my motherboard acting up and buy a new system?

    dxdiag [64-bit] - http://pastebin.com/G30kb2TL
    PSOD (minidump) - http://pastebin.com/aZsv0H56
    HWiNFO64 (system info / specs) - http://pastebin.com/X6h3K8g6

    Read the article

  • MAAS nodes stuck on "maas-enlisting-node"

    - by CyberJacob
    I am trying to set up an MAAS cluster, but am having some issues adding nodes. The nodes boot up from the TFTP server on the master server, display a login screen (with the hostname "maas-enlisting-node") and stop. I cannot log in with ubuntu/ubuntu or with my SSH keys. The MAAS server is running DHCP and DNS on the network, and the TFTP boot appears to be loading everything without issues (I can screencap if you want) Any ideas what's going wrong? Thanks!

    Read the article

  • Dividing web.config into multiple files in asp.net

    - by Jalpesh P. Vadgama
    When you have different people working remotely on one project you will run into problems with web.config, because everybody ends up with a different version of it. Once you check in your web.config with your latest changes, the other people have to get that latest web.config and then reapply the specific changes their local environment needs. Most people who have worked remotely have faced this problem; the most common examples are connection string and app settings changes.

    For this kind of situation there is a good solution: we can split particular sections of web.config out into multiple files. For example, we could have a separate ConnectionStrings.config file for connection strings and an AppSettings.config file for app settings. Many people do not know that there is an attribute called 'configSource' with which we can define the path of an external config file, and that section will then be loaded from the external file. Just like below:

        <configuration>
          <appSettings configSource="AppSettings.config"/>
          <connectionStrings configSource="ConnectionStrings.config"/>
        </configuration>

    And you could have your ConnectionStrings.config file like the following:

        <connectionStrings>
          <add name="DefaultConnection"
               connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=aspnet-WebApplication1-20120523114732;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    In the same way you have another AppSettings.config file like the following:

        <appSettings>
          <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
          <add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" />
        </appSettings>

    That's it. Hope you like this post. Stay tuned for more..
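
    Nothing changes on the consuming side: code reads these sections through System.Configuration as usual, whether they live inline or in an external file referenced by configSource. A small sketch of that, with the key and connection-string names taken from the snippets above (add a reference to System.Configuration.dll; shown as a console-style program purely for illustration):

        using System;
        using System.Configuration;

        class ConfigSourceDemo
        {
            static void Main()
            {
                // Both lookups behave identically whether the sections are inline
                // in the config file or split out via configSource.
                string conn = ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString;
                string mode = ConfigurationManager.AppSettings["ValidationSettings:UnobtrusiveValidationMode"];

                Console.WriteLine(conn);
                Console.WriteLine(mode);
            }
        }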

    Read the article

  • How do I install axis2 into tomcat6 on 10.04?

    - by spinlock
    I'm trying to install axis2 onto tomcat6 and I'm having some difficulties. I've installed tomcat6 using apt and I've downloaded the axis2.war file and placed it in /usr/share/tomcat6/webapps/. From the instructions I'm following, tomcat should now unpack the war file and create an axis2 directory in webapps/, but this is not happening. I can see the default tomcat page on http://localhost:8080/ but I cannot see the axis2 page on http://localhost:8080/axis2/ Any help would be greatly appreciated.

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams and I will not be going in depth with Scrum but you can find out more about Scrum by reading the Scrum Guide and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS which have been developed during our own Scrum implementations. Acknowledgements Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010 Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance including the Branching Guidance for TFS 2010 Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs own project, backlog, budget and Team. Scenario: A product is developed RTM 1.0 is released and gets great sales.  Extra features are demanded but the new version will have double to price to pay to recover costs, work is approved by the guys with budget and a few sprints later RTM 2.0 is released.  Sales a very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches. As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms merge (forward integration) from parent to child (same direction as the branch), and merge  (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.   As I mentioned previously we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds don’t you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching  structures from he... We should somehow convince everyone that there is a happy between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc. 
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well? Many development projects consist of, a single “Main” line of source and artifacts. This is good; at least there is source control . There are however a couple of issues that need to be considered. What happens if: you and your team are working on a new set of features and the customer wants a change to his current version? you are working on two features and the customer decides to abandon one of them? you have two teams working on different feature sets and their changes start interfering with each other? I just use labels instead of branches? That's a lot of “what if’s”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch.   Its standard practice for large projects with lots of developers to use Feature branching and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has the permissions changes so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2 as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.   Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developer’s work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development: The Team completes their work according to their definition of done. Merge from “Main” into “Sprint1” (Forward Integration) Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (we will talk about release branches later) Merge from “Sprint1” into “Main” to commit your changes. 
(Reverse Integration) Check-in Delete the Sprint1 branch Note: The Sprint 1 branch is no longer required as its useful life has been concluded. Check-in Done But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet, we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result you fix it on the Release branch. If during your final testing or your “Test Please” you find there are issues or bugs then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product owner. Make sure you leave plenty of time between your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch instead that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product you can create a RTM branch. Once you have fixed all the bugs you can, and added any you can’t to the Product Backlog, and you Team is happy with the result you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an Audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not Auditable and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check. 
Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch the RTM branch should never be deleted, or only deleted according to your companies legal policy, which in the UK is usually 7 years.   Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web test and all the other good engineering practices that help produce reliable software.     Figure: After the Sprint Planning meeting the process begins again. Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2 you can create your new Branch to develop in. How do we handle a bug(s) in production that can’t wait? Although in Scrum the only work done should be on the backlog there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging print 2 back into Main.” covers, what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost): Backlog: Add the bug to the backlog and fix it in the next Sprint Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s) Note: This can be a very costly if the current sprint has already had a lot of work completed as it will be lost. The choice will depend on the complexity and severity of the bug(s) and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3 as they are uncomplicated but severe bugs. Figure: Real world issue where a bug needs fixed in the current release. 
If the bug(s) is urgent enough then then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch, are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main and Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of Sync with Main. Note: Avoid the “Big bang merge” at all costs.   Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it as I find these exploratory posts coupled with real world experience really help harden my understanding.  Branching is a tool; it is not a silver bullet. Don’t over use it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010

    Read the article

  • Tulsa SharePoint Interest Group – Meeting Reminder

    - by dmccollough
    Just a quick reminder that the Tulsa SharePoint Interest Group is having its monthly meeting this coming Monday, April 12th @ 6:00 PM. Please come see Corey Roth's presentation on SharePoint 2010 Business Connectivity Services. We are going to be giving away some GREAT prizes:

        XBox 360 – Halo 3 ODST
        Telerik Premium Collection ($1,300.00 value)
        ReSharper ($199.00 value)
        SQL Sets ($149.00 value)
        64 Bit Windows 7
        Infragistics NetAdvantage for .NET Platform ($1,195.00 value)

    You can click here for more information. You can click here to RSVP for the meeting.

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (today) In today’s post I’m going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walkthrough some code examples of each of them.  Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages: When run, the above code generates output like below: Notice above how we were able to embed two code nuggets within the content of the foreach loop.  One of them outputs the name of the Product, and the other embeds the ProductID within a hyperlink.  Notice that we didn’t have to explicitly wrap these code-nuggets - Razor was instead smart enough to implicitly identify where the code began and ended in both of these situations.  How Razor Enables Implicit Code Nuggets Razor does not define its own language.  Instead, the code you write within Razor code nuggets is standard C# or VB.  This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of C#/VB code nuggets you write.  This makes coding more fluid and productive, and enables a nice, clean, concise template syntax.  Below are a few scenarios that Razor supports where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you: Property Access Razor allows you to output a variable value, or a sub-property on a variable that is referenced via “dot” notation: You can also use “dot” notation to access sub-properties multiple levels deep: Array/Collection Indexing: Razor allows you to index into collections or arrays: Calling Methods: Razor also allows you to invoke methods: Notice how for all of the scenarios above how we did not have to explicitly end the code nugget.  Razor was able to implicitly identify the end of the code block for us. 
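
    The code snippets in the original post are images and did not survive the import here. As a rough reconstruction, the product-list example from the start of this section reads along these lines (assuming the view's Model is an enumerable of Product objects exposing Name and ProductID, and a /Products/ URL pattern; the exact markup in the original may differ):

        <ul>
            @foreach (var product in Model) {
                <li><a href="/Products/@product.ProductID">@product.Name</a></li>
            }
        </ul>

    Each iteration switches back into markup at the <li> tag, and the two implicit code nuggets (@product.ProductID inside the href attribute and @product.Name as the link text) end wherever the expression stops being valid C#, with no explicit delimiter needed.
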
Razor’s Parsing Algorithm for Code Nuggets The below algorithm captures the core parsing logic we use to support “@” expressions within Razor, and to enable the implicit code nugget scenarios above: Parse an identifier - As soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2 Check for brackets - If we see "(" or "[", go to step 2.1., otherwise, go to step 3  Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments) Go back to step 2 Check for a "." - If we see one, go to step 3.1, otherwise, DO NOT ACCEPT THE "." as code, and go to step 4 If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1, otherwise, go to step 4 Done! Differentiating between code and content Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content: Notice how in the snippet above we have ? and ! characters at the end of our code nuggets.  These are both legal C# identifiers – but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression because there is whitespace after them.  This is pretty cool and saves us keystrokes. Explicit Code Nuggets in Razor Razor is smart enough to implicitly identify a lot of code nugget scenarios.  But there are still times when you want/need to be more explicit in how you scope the code nugget expression.  The @(expression) syntax allows you to do this: You can write any C#/VB code statement you want within the @() syntax.  Razor will treat the wrapping () characters as the explicit scope of the code nugget statement.  Below are a few scenarios where we could use the explicit code nugget feature: Perform Arithmetic Calculation/Modification: You can perform arithmetic calculations within an explicit code nugget: Appending Text to a Code Expression Result: You can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code: Above we have embedded a code nugget within an <img> element’s src attribute.  It allows us to link to images with URLs like “/Images/Beverages.jpg”.  Without the explicit parenthesis, Razor would have looked for a “.jpg” property on the CategoryName (and raised an error).  By being explicit we can clearly denote where the code ends and the text begins. Using Generics and Lambdas Explicit expressions also allow us to use generic types and generic methods within code expressions – and enable us to avoid the <> characters in generics from being ambiguous with tag elements. One More Thing….Intellisense within Attributes We have used code nuggets within HTML attributes in several of the examples above.  One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this. Below is an example of C# code intellisense when using an implicit code nugget within an <a> href=”” attribute: Below is an example of C# code intellisense when using an explicit code nugget embedded in the middle of a <img> src=”” attribute: Notice how we are getting full code intellisense for both scenarios – despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn’t support).  
This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere. Summary Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using a @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Community Megaphone Podcast

    - by Steve Michelotti
    Last week I had the pleasure of being a guest on the Community Megaphone Podcast with Andrew Duthie and Dane Morgridge. We discussed .NET 4, C# 4, MVC 2, “geek religious wars”, and of course community. You can check out Show #5 here or directly download it. Thanks to Dane and Andrew for having me on the show!

    Read the article

  • Agile SOA Governance: SO-Aware and Visual Studio Integration

    - by gsusx
    One of the major limitations of traditional SOA governance platforms is the lack of integration with the development process. Tools like HP-Systinet or SOA Software are designed to operate on models in which the architects dictate the governance procedures and policies and the rest of the team members follow along. Consequently, those procedures are frequently rejected by developers and testers, given that they can't incorporate them as part of their daily activities. Having SOA governance products...(read more)

    Read the article

  • Game Design Dilemma

    - by Chris Williams
    I'm working on a 2D tile-mapped RPG. I've actually made quite a fair amount of progress, but I'm at a point where I need to make a UI decision. I have the overland world completely mapped out, and I have several towns and special areas. I'm on the fence about how to integrate the two.

    Scenario 1: I have one ginormous map, where everything is the same scale. This means you can walk in and out of towns without having to load or wait and transition in any way. With everything the same scale, movement costs the same no matter where you are (in terms of time/turns/energy/hunger/whatever/etc...). The potential downside is that it could take quite a long time to get anywhere on foot.

    Scenario 2: I have an overland map, a set of town maps, overland tactical maps, dungeon maps & special area maps. The overland map is at a different scale than the other maps. This means that time/turns/energy/hunger/whatever/etc is calculated at a different rate than on the other maps, which have a 1:1 scale. When entering a town, dungeon, or special area, or having a random encounter, you would effectively zoom in from the overland scale to the tactical scale. When you are done with combat, or exit a dungeon or town, it would zoom back out to the overland map. The downside is that at the zoomed-out scale, the overland map isn't all that big (comparatively) and you can traverse it fairly quickly (in real time, not game-world time).

    Options:

        1) Go with scenario 1, as is.
        2) Go with scenario 1 and introduce a slightly speedier version of overland travel, such as a horse.
        3) Go with scenario 1 and introduce "instant" travel, via portals or some kind of "click the big map" mechanism. This would only work with places you've already been, or somehow unlocked (perhaps via a quest).
        4) Go with scenario 2, as is.

    Thoughts, opinions, suggestions? Feedback appreciated.

    Read the article

  • Office Live add-in 1.5 cannot be installed

    - by wisecarver
    Having trouble with a recent Windows Update that failed to install the Office Live add-in 1.5? This has been driving me nuts on a Windows 7 Ultimate 64-bit system for three days: Windows Update would fail, I'd click the "Try again" button and… it would fail again. So like a good boy I used http://www.bing.com and have been searching for resolutions. Success! The Microsoft Social forums. http://social.answers.microsoft.com/Forums/en-US/officeinstall/thread/4c62e615-a3e5-4cf9-ae6a-5fd870dfb0bc http://support.microsoft...(read more)

    Read the article
