Search Results

Search found 19606 results on 785 pages for 'the thing'.

Page 53/785 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In properly functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing, simply by looking at what we call Big Visible Charts. If you’ve done Scrum, you’ve seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I’ll focus on the Iteration Burndown and Cumulative Flow charts. I’ve included a copy of the spreadsheet that I used to create the charts, and if you don’t have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time.

    The Iteration Burndown and Cumulative Flow Charts

    The Iteration Burndown is the main chart that teams use. Although less useful to stakeholders, this chart is critical to the team and provides quite a bit of information about how the iteration is going. Most real charts are a combination of the patterns below, so you may need to combine aspects of each section to understand what is happening in your iterations.

    Ideal

    Ah, isn’t that a pretty picture? Unfortunately, it’s also very unrealistic. I’ve seen iterations that come close to ideal, but never one that matches perfectly. If your iteration matches perfectly, chances are someone is playing with the numbers. Reality is just too messy for a burndown chart to match this exactly.

    Late Planning

    The iteration started, but the team didn’t. You can tell this by the fact that the real number of estimated hours didn’t appear until day two. In the cumulative flow, you can also see that nothing was defined on days one and two. You want to avoid situations like this. You’ll note that the team had to burn faster than ideal to complete the iteration because of the late planning. This often results in long days and weeks.

    Testing Starved

    Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more than developers not completing stories early enough, or could be caused by stories being too big. With the cumulative flow, however, you can see that only small bites are in progress and stories were completed early, but the testers didn’t start testing until the end of the iteration, and didn’t complete testing of all the stories in the iteration. When this happens, question whether your testing resources are sufficient for your team and whether acceptance is adequately defined.

    No Testing

    With this one, both graphs show the same thing: the team needs testers and testing! Without testing, what was completed cannot be verified as acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today.

    Late Development

    With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not estimating their hours correctly, or by illness or some other issue that affected them. Often, when teams are tackling something more unknown, they’ll run into technical barriers that cause the burndown to happen slower than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. This could be because of illness, technical barriers, or simply poor estimation. Testing was able to keep up with everything that was completed, however.

    No Tool Updating

    When you see graphs that look like this, you can be assured that the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require each team member to update the tool at least once daily. You should also check how well the team is breaking down stories into tasks. If they’re creating a few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration.

    Scope Increase

    I always encourage team members to enter however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably OK, but estimation needs to get better. However, with the charts above, that’s clearly not what happened, and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no.

    Scope Decrease

    Scope decreases are just as bad as scope increases. Usually, graphs like those above show that the team did a poor job of estimating their stories and partway through had to reduce scope to make the iteration. This will happen once in a while, but if you find it’s a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I’ll introduce a task I call “Uncertainty.” With Uncertainty, the team estimates how many hours they might need if things don’t go well with the tasks they’ve defined. They try to estimate what could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and a high estimate. Uncertainty should not just be an arbitrary buffer; it must correlate to real uncertainty in the tasks that have been defined.

    Stories are too Big

    Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at the cumulative flow, you can see that at one point everything is in progress. This is a bad thing. When you see graphs like this, you’re in one of two states. You may just have a very small team that can only handle one or two stories in an iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large, high-hour stories into smaller pieces that can be completed and accepted independently. If you don’t, you’ll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration, and you’re much more likely to have the entire iteration fail because of the limited number of things that can be completed.

    Summary

    There are other charts that can be useful when doing Scrum. If you don’t have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you’re seeing on your team, contact me with a screen capture of the charts and I’ll tell you what I’m seeing in them. I always want this information to be useful, so please let me know if you have other questions.
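    As a footnote to the spreadsheet mentioned above: the ideal line is also simple to compute yourself. Below is a minimal Python sketch of a burndown chart using matplotlib; the daily hour totals are invented illustration data, not figures from this article.

        # Minimal iteration-burndown sketch. The numbers here are invented
        # illustration data; substitute your team's daily "hours remaining".
        import matplotlib.pyplot as plt

        days = list(range(11))        # a two-week (10 working day) iteration
        actual = [200, 200, 185, 170, 150, 140, 120, 90, 60, 25, 0]

        # Ideal line: burn the starting total down to zero at a constant rate.
        start = actual[0]
        ideal = [start - start * d / days[-1] for d in days]

        plt.plot(days, ideal, linestyle="--", label="Ideal")
        plt.plot(days, actual, marker="o", label="Actual")
        plt.xlabel("Iteration day")
        plt.ylabel("Hours remaining")
        plt.title("Iteration Burndown")
        plt.legend()
        plt.show()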

    Read the article

  • Increasing deadlocks with NoLock

    - by Dave Ballantyne
    One of my personal pet issues is the inappropriate use of the NOLOCK hint (and read uncommitted). Don’t get me wrong, I have used it in exceptional circumstances, but as a general statement it is a bad thing. Mostly, when NOLOCK is used, the discussion is around a single statement: “it runs faster with NOLOCK for XYZ reason.” However, IMO, this is quite a short-sighted view. What about the transaction? What about other concurrent users? What is good for one statement in isolation is not necessarily good for the system as a whole. I have seen on a number of occasions deadlocks happen when tasks that would have (and should have) been blocked continue to execute, only for a deadlock to occur at a later data-writing (INSERT, UPDATE, DELETE) statement. Writers will block writers regardless of isolation level.

    By way of a (fairly contrived) example, let’s generate some dummy tables and populate them with some data:

        DROP TABLE a
        GO
        DROP TABLE b
        GO
        CREATE TABLE a (col1 integer)
        GO
        INSERT INTO a VALUES (1)
        INSERT INTO a VALUES (2)
        GO
        CREATE TABLE b (col1 integer)
        GO
        INSERT INTO b VALUES (1)
        INSERT INTO b VALUES (2)
        GO

    Now make two connections. In connection one, execute:

        SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        BEGIN TRAN
        SELECT * FROM a
        SELECT * FROM b
        DELETE FROM a

    In connection two, execute:

        SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        BEGIN TRAN
        SELECT * FROM a
        SELECT * FROM b
        DELETE FROM b

    Right now the ‘SELECT * FROM a’ in connection two is being blocked by the ‘DELETE FROM a’ in connection one. This is, IMO, quite a healthy and natural thing to be happening, though some see this as a ‘slow down’, a drop in performance. So, let’s reach for our NOLOCK magic pill. Cancel the blocked query and ROLLBACK both transactions, then in connection one execute:

        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
        BEGIN TRAN
        SELECT * FROM a
        SELECT * FROM b
        DELETE FROM b

    and then in connection two execute:

        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
        BEGIN TRAN
        SELECT * FROM a
        SELECT * FROM b
        DELETE FROM a

    We have now solved our performance problem, no more blocking. Let’s finish the work required by the transaction; in connection one, execute:

        DELETE FROM a

    Oh, ‘performance problem’ again, it’s now being blocked. Still, let’s complete the work in connection two…

        DELETE FROM b

    DEADLOCK!! It is important to be clear about the role of the SELECT statements. They do not participate in the deadlock themselves, but when they were allowed to block they prevented the code that would have deadlocked from executing. Additionally, without the SELECT readers to block, a deadlock would occur on the deletes even with READ COMMITTED. Naturally, other isolation levels will exhibit different behaviour as to where and when they will and won’t block, and I would encourage you to read BOL and satisfy yourself that you really do NEED to NOLOCK.

    Read the article

  • Ubuntu 11.04 Broadcom BCM4312 Not Working

    - by ptran221
    I have an HP Mini 210-1010NR and just installed Ubuntu 11.04, and I can't get my wireless to work. I have checked multiple Q&As throughout this FAQ and tried them all. When I hover over the wireless indicator at the top it says "Wireless Networks device not ready (firmware missing)." Okay, now here is my output:

        $ lspci -vvnn | grep 14e4
        02:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] (rev 01)

    Also, when I try to open Additional Drivers it says "Downloading package indexes failed, please check your network status."

    Read the article

  • Is hashing of just "username + password" as safe as salted hashing

    - by randomA
    I want to hash "user + password". EDIT: prehashing "user" would be an improvement, so my question also covers hashing "hash(user) + password". If the same user across different sites is a problem, then the scheme changes to hashing "hash(serviceName + user) + password". From what I've read about salted hashes, using "user + password" as the input to the hash function helps us avoid attacks based on precomputed reverse-hash tables. The same can be said about rainbow tables. Is there any reason why this is not as good as salted hashing?
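    To make the comparison concrete, here is a minimal Python sketch of the two schemes side by side; the function names, test values, and the choice of SHA-256 are illustration only, not from the question:

        # Sketch contrasting the poster's scheme with a conventional random salt.
        # In a real system a deliberately slow KDF (bcrypt, scrypt, PBKDF2)
        # would be preferred over a bare SHA-256.
        import hashlib
        import os

        def username_as_salt(user: str, password: str) -> str:
            # The scheme from the question: the hashed username plays the salt role.
            inner = hashlib.sha256(user.encode()).hexdigest()
            return hashlib.sha256((inner + password).encode()).hexdigest()

        def random_salt(password: str) -> tuple[str, str]:
            # Conventional approach: a per-user random salt, stored beside the hash.
            salt = os.urandom(16).hex()
            digest = hashlib.sha256((salt + password).encode()).hexdigest()
            return salt, digest

        print(username_as_salt("alice", "hunter2"))
        salt, digest = random_salt("hunter2")
        print(salt, digest)

    The practical difference is that a username is predictable before an attack starts, while a random salt is not, so an attacker targeting a known account can still precompute a table against the first scheme.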

    Read the article

  • October Update to Rules-Driven Maintenance

    - by merrillaldrich
    Happy Fall! It’s a beautiful October here in Minneapolis / Saint Paul. In preparation for my home town SQL Saturday this weekend, as well as the PASS Summit, I offer an update to the Rules-Driven Maintenance code I originally published back in August 2012. It’s hard to believe this thing is now more than two years old – it’s been an incredible help as the number of databases and instances my team manages has grown. One enhancement with this update is the ability to set overrides for both Index and...(read more)

    Read the article

  • Inserting data to database from Android

    - by Angel
    I have to build an application where the requirement is that my clients will send data from their Android devices and I have to save that data to a database. I have done the part of the coding that inserts data from the Android emulator into my XAMPP database on localhost; now I have to implement the real thing. How can I connect the devices on which my application will be installed to the XAMPP database I have created, so that the data they send can be inserted into it?
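    A common pattern here is to put a small web script in front of the database and have the app POST to it over HTTP; the device then needs the server's LAN or public address rather than localhost. A rough sketch of that round trip, written in Python for brevity (the address and the insert.php endpoint are placeholders, not part of the question):

        # Sketch of the HTTP round trip the app would make, in Python for brevity.
        # The host and insert.php endpoint are placeholders for whatever script
        # actually fronts the XAMPP/MySQL database.
        import urllib.parse
        import urllib.request

        SERVER = "http://192.168.1.50/myapp/insert.php"   # LAN address of the XAMPP box, not localhost

        def send_record(name: str, score: int) -> str:
            data = urllib.parse.urlencode({"name": name, "score": score}).encode()
            with urllib.request.urlopen(urllib.request.Request(SERVER, data=data)) as resp:
                return resp.read().decode()               # server script's success/error message

        print(send_record("alice", 42))

    On the Android side the same request would be made with its own HTTP client; the point of the sketch is the shape of the protocol, not the client language.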

    Read the article

  • How should I pitch moving to an agile/iterative development cycle with mandated 3-week deployments?

    - by Wayne M
    I'm part of a small team of four, and I'm the unofficial team lead (I'm lead in all but title, basically). We've largely been a "cowboy" environment, with no architecture or structure and everyone doing their own thing. Previously, our production deployments would happen every few months without being on a set schedule, as things were added/removed to the task list of each developer. Recently, our CIO (semi-technical but not really a programmer) decided we will do deployments every three weeks; because of this I instantly thought that adopting an iterative development process (not necessarily full-blown Agile/XP, which would be a huge thing to convince everyone else to do) would go a long way towards helping manage expectations properly, so there isn't this far-fetched idea that any new feature will be done in three weeks.

    IMO the biggest hurdle is that we don't have ANY kind of development approach in place right now (among other things, no CI or automated tests whatsoever). We don't even use Waterfall; we use "Tell Developer X to do a task, expect him to do everything and get it done." Are there any pointers that would help me start to ease us towards an iterative approach and A) get the other developers on board with it and B) get management to understand how iterative development works?

    So far my idea involves trying to set up a CI server and get our build process automated (it takes about 10-20 minutes right now simply to build the application and put it on our development server), since pushing tests and/or TDD will be met with a LOT of resistance at this point, and to constantly force us to break larger projects into smaller chunks that could be done iteratively in a three-week cycle. My only concern is that, unless I'm misunderstanding, an agile/iterative process may or may not release the software (depending on the project scope you might have "working" software after three weeks, but not enough of it works to let users make use of it), while I think the expectation here from management is that there will always be something "ready to go" in three weeks, and that disconnect could cause problems.

    On that note, is there any literature or reference that explains the agile/iterative approach from a business standpoint? Everything I've seen focuses only on the developers and how to do it, but nothing seems to describe it from the perspective of actually getting buy-in from the businesspeople.

    Read the article

  • Payment Gateway options other than Paypal, for sending out mass payments

    - by Rishav Rastogi
    We were using PayPal Payments Pro earlier for the same thing, but PayPal has issued some new guidelines which kind of hinder the way we need to send out payments at the moment. We receive payments from clients and then send payments back to vendors on a weekly basis (deducting our cut). Can you let me know what options are available for such transactions other than PayPal, and which is the best in terms of setup cost etc.? Thanks

    Read the article

  • Is it possible to have multiple sets of key columns in a table?

    - by Peter Larsson
    Filtered indexes are one of my favorite new things in SQL Server 2008. I am currently working on designing a new data warehouse. There are two constraints on doing this: it has to be fed from the old legacy system with both historical data and new data, and it has to be fed from the new business system with new data. When we incorporate the new business system, we are going to do that for one market only. That means the old legacy business system will still produce new data for the other markets (together with historical data for all markets), and the new business system will produce new data for that one market only. Sound interesting so far? To accomplish this I did thorough research into the business requirements and the business intelligence needs. Then I went on to design the sucker.

    How does this relate to filtered indexes, you ask? I'll give one example: the stock transaction table. The key columns for the old legacy system are different from the key columns in the new business system. The old legacy system has a key of five columns:

        Movement date
        Movement time
        Product code
        Order number
        Sequence number within shipment

    And to top it all off, I found out that the Movement Time column is not really a time. It starts out like a time, HH:MM:SS, but seconds are added for each delivery within the shipment, so a Movement Time can look like "12:11:68". The sequence number is ordered over the distributors for the shipment. As I said, it is a legacy system. The new business system has one key column, the Movement DateTime (accuracy down to 100 nanoseconds).

    So how to deal with this? One option would be to have two stock transaction tables, one for the legacy system and one for the new business system. But that would lead to maintenance overhead, and to using partitioned views to get data out of the warehouse. Filtered indexes are of great use here:

        MovementDate   DATETIME2(7)
        MovementTime   CHAR(8) NULL
        ProductCode    VARCHAR(15) NOT NULL
        OrderNumber    VARCHAR(30) NULL
        SequenceNumber INT NULL

    The sequence number is not even used in the new system, so I created a new IDENTITY column, with the clustered index on it, which can be shared by both systems. Then I created one unique filtered index for the old system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Legacy
            (MovementDate, MovementTime, ProductCode, SequenceNumber)
            INCLUDE (OrderNumber, Col5, Col6, ... )
            WHERE SequenceNumber IS NOT NULL

    And then I created a unique filtered index for the new business system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Business
            (MovementDate)
            INCLUDE (ProductCode, OrderNumber, Col12, ... )
            WHERE SequenceNumber IS NULL

    This way I can have multiple sets of key columns on the same base table, which is shared by both systems.

    Read the article

  • CSS menu hover background color problem

    - by Guisasso
    I have a very simple question for many, but complicated enough for me. I have tried to fix this for the last hour with no luck. I downloaded a CSS menu and made all the modifications I needed with no issue, but there's one thing that I'm having no luck trying to fix: when hovering over "ccc" (for example) and going down to "1" (this happens in all the other cells too), the black background doesn't extend all the way to the right. Here's the link: http://riversidesheetmetal.net/gui/questions/aaa.html

    Read the article

  • NoFollow and DoFollow Blog - Comment Links Affect Search Engine Optimization

    Many sellers of information products leave comments on blogs, especially popular ones with high Google PageRank, thinking that they are getting good inbound links to their sites. But there's a problem. Most blogs put a "nofollow" attribute on the link to your website. Sure, readers can click on it and check you out, and that's a good thing. But you get no SEO benefit.
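    Concretely, the difference is a single attribute on the anchor tag; the URL below is a placeholder:

        <!-- A comment link that passes PageRank -->
        <a href="http://example.com/">My site</a>

        <!-- The same link as most blog comment systems emit it: no SEO benefit -->
        <a href="http://example.com/" rel="nofollow">My site</a>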

    Read the article

  • BBC flash videos don't play in Firefox (Youtube videos do, and all is fine in Chrome)

    - by Cocoro Cara
    Ubuntu 10.10, 32-bit. Firefox 3.6.14. Why don't BBC videos play in Firefox when YouTube has no problem? Moreover, videos play fine in Chrome. Another strange thing: there seem to be two Flash plugins in about:plugins:

        File: libflashplayer.so  Version: Shockwave Flash 10.1 r102
        File: libflashplayer.so  Version: Shockwave Flash 10.2 r152

    But there is only one Flash plugin in the plugins directory:

        /usr/lib/firefox/plugins/flashplugin-alternative.so -> /etc/alternatives/firefox-flashplugin

        $ update-alternatives --list firefox-flashplugin
        /usr/lib/flashplugin-installer/libflashplayer.so

    Any ideas?

    Read the article

  • Filtering your offices' IPs from Google Analytics when each has a dynamic IP?

    - by leeand00
    I found the documentation for filtering IPs from Google Analytics, but the several locations of our company all have dynamic IP addresses that change every 30 days, from what I'm told. I know from working with dynamic DNS that the provider usually gives you a script, which you configure your router to run when its IP address changes or when it is restarted, that passes the new IP address to the DDNS server. I'm wondering if there might be a way to write, or use a preexisting, script to do the same thing with the Google Analytics API.
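    One possible shape for such a script, sketched in Python: poll a "what is my IP" service and react when the address changes. api.ipify.org is one such public service; update_ga_filter() is a deliberately unimplemented stub, since pushing the new address into an exclude filter would go through the Google Analytics Management API and depends on your account setup.

        # Rough sketch: detect when an office's public IP changes, DDNS-style.
        import time
        import urllib.request

        def current_public_ip() -> str:
            with urllib.request.urlopen("https://api.ipify.org") as resp:
                return resp.read().decode().strip()

        def update_ga_filter(new_ip: str) -> None:
            # Placeholder: authenticate and update the Analytics exclude filter here.
            print(f"would update the Analytics exclude filter to {new_ip}")

        last_ip = None
        while True:
            ip = current_public_ip()
            if ip != last_ip:
                update_ga_filter(ip)
                last_ip = ip
            time.sleep(3600)    # re-check hourly; the lease only changes every ~30 days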

    Read the article

  • Play Your Position Until the Play Breaks Down…then Do Whatever it Takes.

    - by AjarnMark
    If I didn’t know better, I would think that K. Brian Kelley (blog | twitter) has been listening in on conversations with my boss. In his recent blog post Successful Teams: Knowing When to Step Out of Your Role, Brian describes quite clearly a philosophy that my boss has been trying to get across to everyone in the department.  We have been using sports analogies, like how important it is to play your position until the play breaks down (such as a fumble), and then do whatever it takes to cover each other / recover the ball / win.  While we like having very skilled people who could do a lot of different tasks, it is important that you first do your assigned tasks, and only once those are complete, or failure of the larger mission is probable, do you consider walking away from them to help someone else with their responsibilities.

    The thing that you cannot afford, especially on a lean team, is the really nice guy who is always trying to help out other people, but in doing so never quite gets his own responsibilities taken care of.  Yes, if the Running Back drops the football, you want any member of the team in the vicinity to jump on it, whether that is the leading blocker or the Quarterback.  But until the fumble happens, you want the leading blocker to focus on doing his job, and block for the Running Back.  If the blocker is doing any job other than his primary responsibility, you’re probably going to lose.

    This sounds logical enough, but it is really easy to go astray with the best of intentions.  This is especially true on a small, tight-knit team, where it is really easy to get sucked into someone else’s task or problem, doubly so if you think you can do it better or faster than them.  Now you are really setting yourself up for failure.  The right thing is to let the other person do the job, even if it seems less efficient in the short run, so that you can focus on the tasks which require your expertise.  Don’t break formation… don’t abandon your assignment until it is clear that mission failure is imminent, and even then, as Brian writes, it should be with the agreement of the mission leader.

    Thanks, Brian, for putting it so well.  This has been distributed throughout our department.

    Read the article

  • Live Mail folder and Thunderbird

    - by Umair Mustafa
    Hello guys, I'm facing a small issue. I created a new account (an MS Live email account) in Thunderbird and set the incoming protocol to IMAP with server "pop3.live.com", port 995, SSL set to SSL/TLS, and the outgoing protocol to SMTP with server "smtp.live.com", port 25, SSL set to STARTTLS. Now, what I want is this: there are subfolders in my web-based Live email account which are not appearing when I create the account in Thunderbird. Please tell me how to get this done.

    Read the article

  • Web Site Design Best Practices

    There are several ways to develop web pages, but the most important thing is that a page should be developed in such a way that it appeals to and attracts visitors. In the field of web page design there are a lot of practices. Here in this article we target a few of the best practices for making and developing good web pages.

    Read the article

  • Tagged: 5 things SQL Server should drop

    - by AaronBertrand
    I was tagged by Paul Randal (blog | twitter) last night in his latest blog post, entitled "What 5 things should SQL Server get rid of?" His top 5 pretty much coincide with my top 5, so I'll have to dig a little deeper. In no particular order:

    Syntax inconsistencies

    This isn't really a specific thing that Microsoft should get rid of, but rather an attitude and overall approach to SQL Server's long-term development. Every time they add a feature or option to SQL Server, it seems to be implemented...(read more)

    Read the article

  • The Art of SEO Writing

    Website owners around the world have one thing they all want when it comes to their respective websites: web traffic. And how do these website owners improve this? By using SEO techniques. And to do well in SEO, one must understand how to do quality SEO writing.

    Read the article

  • How to write a Bash script to open two different terminals

    - by Ahmed Zain El Dein
    How do I write a Bash script that opens two different tabbed terminals and writes commands into both of them, to be executed separately and independently of each other? For instance: terminal number one opens Skype, terminal number two opens another program. In the end, I want one more thing: can I write my Skype username and password in the Bash script, so that they are entered into Skype when it opens in terminal one and it logs in automatically? Thanks
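    For what it's worth, a minimal sketch of the first half, written in Python; it assumes gnome-terminal and its old "-e" syntax, and the second command is a placeholder. Automating the Skype login itself depends on options Skype may or may not offer, so it is left out.

        # Sketch: open two terminal windows, each running its own command.
        # Assumes gnome-terminal with the old "-e" syntax of that era.
        import subprocess

        subprocess.Popen(["gnome-terminal", "-e", "skype"])           # terminal one
        subprocess.Popen(["gnome-terminal", "-e", "top"])             # terminal two (placeholder command)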

    Read the article

  • Article Sharing – Windows Azure Memcached Plugin

    - by Shaun
    I just found that David Aiken, a Windows Azure developer and evangelist, wrote a cool article about how to use Memcached in Windows Azure through the new Azure Plugin feature: http://www.davidaiken.com/2011/01/11/windows-azure-memcached-plugin/ I think the best solution for distributed caching in Azure would be Windows Azure AppFabric Caching, but since it’s only in CTP and available only in the US data center, David’s solution would be the best. The only thing I’m concerned about is the stability of the Windows version of Memcached.

    Read the article

  • Speaking - Automate Your ETL Infrastructure with SSIS and PowerShell

    - by AllenMWhite
    Today at 4:45PM EDT I'm presenting a new session using PowerShell to auto-generate SSIS packages via the BIML language. The really cool thing is that this session will be live broadcast on PASS TV! You can view the session by clicking on this link . If you have questions for me during the session, you can send them to me via Twitter using this hashtag: #posh2biml Brian Davis, my good friend from the Ohio North SQL Server Users Group, will be monitoring that hashtag and feeding me the questions that...(read more)

    Read the article

  • auto-update and email

    - by Colin Pickard
    I've got several Ubuntu 10.10 servers which should all be set to do automatic security updates. Is there any way I could get them to send me an email when they apply updates (particularly if they fail)? I'm using r-u-on to monitor availability, disk space, etc., but the security updates are very important and I don't have a good way to monitor them. I could possibly script something myself, but I figured it's the kind of thing that's probably been solved many times already.
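    For what it's worth, if the automatic updates are applied by the unattended-upgrades package, it has a built-in mail report. A sketch of the relevant lines in /etc/apt/apt.conf.d/50unattended-upgrades follows; the address is a placeholder, and whether both options exist in the version shipped with 10.10 is worth verifying:

        // Mail a report to this address after each unattended-upgrades run...
        Unattended-Upgrade::Mail "admin@example.com";
        // ...but only when something went wrong.
        Unattended-Upgrade::MailOnlyOnError "true";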

    Read the article
