Search Results

Search found 3257 results on 131 pages for 'dave greg'.

Page 13/131 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • SQL SERVER – Checklist for Analyzing Slow-Running Queries

    - by pinaldave
    I have recently been working on upgrading my class Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning with additional details and more interesting examples. While working on the slide deck I realized that I need one solid slide that presents a checklist for analyzing slow-running queries. A quick search on my saved [...]

    Read the article

  • SQL SERVER – Introduction to Rollup Clause

    - by pinaldave
    In this article we will cover the basics of the ROLLUP clause in SQL Server. The ROLLUP clause is used to perform aggregate operations at multiple levels of a hierarchy. Let us understand how it works by using an example. Consider a table with the following structure and data: CREATE TABLE tblPopulation ( Country VARCHAR(100), [State] VARCHAR(100), City VARCHAR(100), [Population (in Millions)] INT ) GO INSERT INTO tblPopulation VALUES('India', 'Delhi','East Delhi',9 [...]

    Read the article
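
    Based on the table in the snippet above, here is a minimal T-SQL sketch (not taken from the truncated article itself) of how the ROLLUP clause produces subtotals at every level of the Country/State/City hierarchy:

        SELECT Country, [State], City,
               SUM([Population (in Millions)]) AS [Population (in Millions)]
        FROM tblPopulation
        GROUP BY Country, [State], City WITH ROLLUP;

    Every row in which a grouping column comes back NULL is a subtotal for the levels above it, and the final all-NULL row is the grand total.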

  • SQLAuthority Book Review – Professional SQL Server 2008 Internals and Troubleshooting

    - by pinaldave
    Professional SQL Server 2008 Internals and Troubleshooting by Christian Bolton, Justin Langford, Brent Ozar, James Rowland-Jones, Steven Wort Link to Amazon (Worldwide) Link to Flipkart (India) Brief Review: Having a book on internals and associating that with real life is “almost” an impossible task. The reason for using the word “almost” is that this book has accomplished this [...]

    Read the article

  • SQL SERVER – Recompile Stored Procedure at Run Time

    - by pinaldave
    I recently received an email from a reader who, after reading my previous article SQL SERVER – Plan Recompilation and Reduce Recompilation – Performance Tuning, asked how to recompile any stored procedure at run time. There are multiple ways to do this. If you want your stored procedure to always recompile at run time, you can add [...]

    Read the article
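
    The snippet is cut off, but the usual options it is leading up to look roughly like this hedged T-SQL sketch (the procedure and table names are made up for illustration):

        -- Recompile on every execution: declare the procedure WITH RECOMPILE
        CREATE PROCEDURE dbo.GetOrders @CustomerID INT
        WITH RECOMPILE
        AS
            SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID;
        GO

        -- Recompile a single call only
        EXEC dbo.GetOrders @CustomerID = 42 WITH RECOMPILE;

        -- Flag an existing procedure so its plan is recompiled on next use
        EXEC sp_recompile 'dbo.GetOrders';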

  • SQLAuthority News – Keeping Your Ducks in a Row

    - by pinaldave
    Last year during my visit to SQLAuthority News – SQL PASS Summit, Seattle 2009 – Day 2 I received ducks from the event. During the same event I learned from Jonathan Kehayias the saying ‘Keeping Your Ducks in a Row’. The most popular theory suggests that “ducks in a row” came [...]

    Read the article

  • SQL SERVER – Rollback TRUNCATE Command in Transaction

    - by pinaldave
    It is a very common misconception that TRUNCATE cannot be rolled back. I often hear developers debating whether TRUNCATE can be rolled back or not. If you use TRANSACTIONS in your code, TRUNCATE can be rolled back. If no transaction is used and the TRUNCATE operation is committed, it cannot be retrieved from [...]

    Read the article
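
    A minimal T-SQL sketch of the point being made (dbo.TestTable is a placeholder name): wrapped in an explicit transaction, TRUNCATE rolls back and the data reappears.

        BEGIN TRANSACTION;
            TRUNCATE TABLE dbo.TestTable;
            SELECT COUNT(*) FROM dbo.TestTable;  -- 0 inside the transaction
        ROLLBACK TRANSACTION;
        SELECT COUNT(*) FROM dbo.TestTable;      -- original row count is back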

  • Postfix not delivering email using Maildir

    - by Greg K
    I've followed this guide to get postfix set up. I've not completed it yet, as from sending test emails, email is no longer being delivered since switching to Maildir from mbox. I have created a Maildir directory with cur, new and tmp sub directories. ~$ ll drwxrwxr-x 5 greg greg 4096 2012-07-07 16:40 Maildir/ ~$ ll Maildir/ drwxrwxr-x 2 greg greg 4096 2012-07-07 16:40 cur drwxrwxr-x 2 greg greg 4096 2012-07-07 16:40 new drwxrwxr-x 2 greg greg 4096 2012-07-07 16:40 tmp Send a test email. ~$ netcat mail.example.com 25 220 ubuntu ESMTP Postfix (Ubuntu) ehlo example.com 250-ubuntu 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN mail from: [email protected] 250 2.1.0 Ok rcpt to: [email protected] 250 2.1.5 Ok data 354 End data with <CR><LF>.<CR><LF> Subject: test email Hi, Just testing. . 250 2.0.0 Ok: queued as 56B541EA53 quit 221 2.0.0 Bye Check the mail queue. ~$ mailq Mail queue is empty Check if mail has been delivered. ~$ ls -l Maildir/new total 0 Some postfix settings: ~$ sudo postconf home_mailbox home_mailbox = Maildir/ ~$ sudo postconf mailbox_command mailbox_command = /var/log/mail.log Jul 7 16:57:33 li305-246 postfix/smtpd[21039]: connect from example.com[178.79.168.xxx] Jul 7 16:58:14 li305-246 postfix/smtpd[21039]: 56B541EA53: client=example.com[178.79.168.xxx] Jul 7 16:58:33 li305-246 postfix/cleanup[21042]: 56B541EA53: message-id=<20120707155814.56B541EA53@ubuntu> Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 56B541EA53: from=<[email protected]>, size=321, nrcpt=1 (queue active) Jul 7 16:58:33 li305-246 postfix/smtp[21043]: 56B541EA53: to=<[email protected]>, relay=none, delay=30, delays=30/0.01/0/0, dsn=5.4.6, status=bounced (mail for example.com loops back to myself) Jul 7 16:58:33 li305-246 postfix/cleanup[21042]: 1F68B1EA55: message-id=<20120707155833.1F68B1EA55@ubuntu> Jul 7 16:58:33 li305-246 postfix/bounce[21044]: 56B541EA53: sender non-delivery notification: 1F68B1EA55 Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 1F68B1EA55: from=<>, size=1999, nrcpt=1 (queue active) Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 56B541EA53: removed Jul 7 16:58:33 li305-246 postfix/smtp[21043]: 1F68B1EA55: to=<[email protected]>, relay=none, delay=0, delays=0/0/0/0, dsn=5.4.6, status=bounced (mail for example.com loops back to myself) Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 1F68B1EA55: removed Jul 7 16:58:36 li305-246 postfix/smtpd[21039]: disconnect from domain.me[178.79.168.xxx] Jul 7 17:10:38 li305-246 postfix/master[20878]: terminating on signal 15 Jul 7 17:10:39 li305-246 postfix/master[21254]: daemon started -- version 2.8.5, configuration /etc/postfix Any ideas?

    Read the article
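
    One plausible reading of the log in the question above: the bounce reason "mail for example.com loops back to myself" means Postfix does not consider example.com one of its local domains, so it tries to relay the message back to itself and never reaches local Maildir delivery at all. A hedged sketch of how that theory could be checked and fixed (the exact mydestination value below is an assumption about this host):

        # Which domains does this box accept for local delivery?
        postconf mydestination

        # If example.com is missing, add it and reload Postfix
        sudo postconf -e "mydestination = example.com, ubuntu, localhost.localdomain, localhost"
        sudo postfix reload

        # With home_mailbox = Maildir/ and mailbox_command left empty,
        # the local(8) agent should then drop new mail into ~/Maildir/new/
        sudo postconf -e "mailbox_command ="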

  • SQL SERVER – Transcript of Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
    Recently I wrote a blog post about Learning SQL Server Performance: Indexing Basics and I received lots of requests to share some insight into the course. Here is a 200-second interview of Vinod Kumar I took right after completing the course. We have a few free codes to watch the course; please leave your comment at http://facebook.com/SQLAuth and we will send the code to a few of the first ones. There are many people who said they would like to read the transcript of the video. Here I have generated the same.

    Pinal: Vinod, we recently released this course, SQL Server Indexing. It is about performance tuning. So tell me – how do indexes help performance?
    Vinod: I think what happens in the industry when it comes to performance is that developers and DBAs look at indexes first. So that’s the first step for any performance tuning exercise; indexing is one of the most critical aspects and it is important to learn it the right way.
    Pinal: Correct. So what you mean to say is that if you know indexing you can pretty much tune any server and query.
    Vinod: So I might contradict my last statement now. Indexing is usually a stepping stone but it does not lead you to the end. But it’s good to start with indexing and there are lots of nuances to indexing that you need to understand, like how SQL uses indexing and how performance can improve because of the strategies that you have made.
    Pinal: But now I’m confused. First you said indexes are good, and then you said that indexes can degrade your performance. So what is this course about? I mean how does this course really make an impact?
    Vinod: Ok – so from the course perspective, what we are trying to do is give you a capsule which gives you a good start. Every journey needs a beginning, you need that first step. This course is that first step in understanding. This is the most basic, fundamental course that we have tried to attack. This is the fundamentals of indexing, some of the key things that you must know about indexing. Some of the basics of indexing are lesser known and so I think this course is geared towards each and every one of you out there who wants to understand a little bit more about indexing.
    Pinal: So what I understand is that if I enroll in this course I will have a minimum understanding about indexing when dealing with performance tuning. Right?
    Vinod: Exactly. In this course we have tried to give you a nice summary. We are talking about clustered indexing, non-clustered indexing, too many indexes, too few indexes, over-indexing, under-indexing, duplicate indexing, and columnstore indexing with SQL Server 2012. There’s lots to learn.
    Pinal: You can see the URL [http://bit.ly/sql-index] of the course on the screen. Go ahead, attend, and let us know what you think about it. Thank you.
    Vinod: Thank you.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • How to resolve symbolic links in a shell script

    - by Greg Hewgill
    Given an absolute or relative path (in a Unix-like system), I would like to determine the full path of the target after resolving any intermediate symlinks. Bonus points for also resolving ~username notation at the same time. If the target is a directory, it might be possible to chdir() into the directory and then call getcwd(), but I really want to do this from a shell script rather than writing a C helper. Unfortunately, shells have a tendency to try to hide the existence of symlinks from the user (this is bash on OS X): $ ls -ld foo bar drwxr-xr-x 2 greg greg 68 Aug 11 22:36 bar lrwxr-xr-x 1 greg greg 3 Aug 11 22:36 foo -> bar $ cd foo $ pwd /Users/greg/tmp/foo $ What I want is a function resolve() such that when executed from the tmp directory in the above example, resolve("foo") == "/Users/greg/tmp/bar".

    Read the article
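
    A hedged sketch of one way to write such a resolve function in plain shell, using only cd, readlink (without the GNU-only -f flag, so it also works on OS X) and pwd -P; the ~username bonus is not handled:

        resolve() {
            # run in a subshell so the caller's working directory is untouched
            (
                target=$1
                cd "$(dirname "$target")" || exit 1
                target=$(basename "$target")
                # follow chained symlinks until we reach a real file or directory
                while [ -L "$target" ]; do
                    target=$(readlink "$target")
                    cd "$(dirname "$target")" || exit 1
                    target=$(basename "$target")
                done
                # pwd -P prints the physical directory with symlinks resolved
                echo "$(pwd -P)/$target"
            )
        }

        # For the example above: resolve foo  ->  /Users/greg/tmp/bar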

  • nodejs daemon wrong architecture

    - by Greg Pagendam-Turner
    I'm trying to run 'dali', a highcharts exporter, from nodejs on my Mac under OSX Mountain Lion. I'm getting the following error: module.js:485 process.dlopen(filename, module.exports); ^ Error: dlopen(/Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node, 1): no suitable image found. Did find: /Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node: mach-o, but wrong architecture at Object.Module._extensions..node (module.js:485:11) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.require (module.js:362:17) at require (module.js:378:17) at Object.<anonymous> (/Users/greg/node_modules/daemon/lib/daemon.js:12:11) at Module._compile (module.js:449:26) at Object.Module._extensions..js (module.js:467:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) The key part is: "wrong architecture" If I run: lipo -info /Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node It returns: Non-fat file: /Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node is architecture: i386 I'm guessing an x64 version is required. How do I get and install the 64-bit version of this lib?

    Read the article
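
    A hedged sketch of the usual way out: confirm which architecture the node binary itself runs as, then rebuild the native addon so the compiled .node file matches (the module name and paths come from the error text above; npm and a compiler toolchain are assumed to be installed):

        # What architecture is node running as? Expect x64 on a 64-bit install.
        node -e 'console.log(process.arch)'
        file "$(which node)"

        # Rebuild the daemon addon from source against the current node
        cd ~ && npm rebuild daemon

        # Verify the rebuilt binary now contains x86_64
        lipo -info ~/node_modules/daemon/lib/daemon.v0.8.8.node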

  • #Error showing up in multiple LEFT JOIN statement Access query when value should be NULL

    - by lar
    I'm trying to return an ID's last 4 years of data, if existing. The table (call it A_TABLE) looks like this: ID, Year, Val. The idea behind the query is this: for each ID/Year in the table, LEFT JOIN with Year-1, Year-2, and Year-3 (to get 4 years of data) and then return Val for each year. Here's the SQL:

        SELECT a.ID, a.year AS [Year], a.Val AS VAL,
               a1.year AS [Year-1], a1.Val AS [VAL-1],
               a2.year AS [Year-2], a2.Val AS [VAL-2],
               a3.year AS [Year-3], a3.Val AS [VAL-3]
        FROM ( ([A_TABLE] AS a
          LEFT JOIN [A_TABLE] AS a1 ON (a.ID = a1.ID) AND (a.year = a1.year+1))
          LEFT JOIN [A_TABLE] AS a2 ON (a.ID = a2.ID) AND (a.year = a2.year+2))
          LEFT JOIN [A_TABLE] AS a3 ON (a.ID = a3.ID) AND (a.year = a3.year+3)

    The problem is that, for past years where there is no data (e.g., Year-1), I see "#Error" in the appropriate VAL column (e.g., [VAL-1]). The weird thing is, I see the expected "null" in the Year column (e.g., [YEAR-1]). Some sample data:

        ID    YEAR  VAL
        Dave  2004  1
        Dave  2006  2
        Dave  2007  3
        Dave  2008  5
        Dave  2009  0

    outputs like this:

        ID    YEAR  VAL  YEAR-1  VAL-1   YEAR-2  VAL-2   YEAR-3  VAL-3
        Dave  2004  1            #Error          #Error          #Error
        Dave  2006  2            #Error  2004    1               #Error
        Dave  2007  3    2006    2               #Error  2004    1
        Dave  2008  5    2007    3       2006    2               #Error
        Dave  2009  0    2008    5       2007    3       2006    2

    Does that make sense? Why am I getting the appropriate NULL val for the non-existent YEARs, but an #Error for the non-existent VALs? (This is Access 2000. Conditional statements like "IIf(a1.val is null, -999, a1.val)" do not seem to do anything.) EDIT: It turns out that the errors are somehow caused by the fact that A_TABLE is actually a query. When I put all the data into an actual table and run the same query, everything shows up as it should. Thanks for the help, everyone.

    Read the article
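
    Given the edit at the end of the question, a hedged sketch of the workaround described there: materialize the source query into a real table with a make-table query (the names below are illustrative), then point the multi-LEFT-JOIN query above at that table instead.

        SELECT ID, [Year], Val
        INTO A_TABLE_MAT
        FROM A_SOURCE_QUERY;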

  • Javascript: Pass array as optional method args

    - by Dave Paroulek
    console.log takes a string and replaces tokens with values, for example: console.log("My name is %s, and I like %s", 'Dave', 'Javascript') would print: My name is Dave, and I like Javascript I'd like to wrap this inside a method like so: function log(msg, values) { if(config.log == true){ console.log(msg, values); } } The 'values' arg might be a single value or several optional args. How can I accomplish this? If I call it like so: log("My name is %s, and I like %s", "Dave", "Javascript"); I get this (it doesn't recognize "Javascript" as a 3rd argument): My name is Dave, and I like %s If I call this: log("My name is %s, and I like %s", ["Dave", "Javascript"]); then it treats the second arg as an array (it doesn't expand to multiple args). What trick am I missing to get it to expand the optional args?

    Read the article
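
    The behaviour being asked for is what Function.prototype.apply provides: forward whatever arguments the wrapper received straight through to console.log. A minimal sketch, assuming config is defined elsewhere as in the question:

        function log() {
            if (config.log === true) {
                // "arguments" holds every argument passed to log(), however many
                console.log.apply(console, arguments);
            }
        }

        log("My name is %s, and I like %s", "Dave", "Javascript");
        // -> My name is Dave, and I like Javascript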

  • This Isn’t Hard: Allow Spouses to Attend Conferences

    - by andyleonard
    There was a bit of a hubbub at Tech Ed 2013 North America. It began with generalized disorganization, escalated when site security escorted Greg Young’s (blog | @gregyoung) wife from the building, and ended with him cancelling his presentations at both the North American and European conferences. Greg’s post has generated some responses, but – according to him – nothing from Microsoft. That’s disappointing. Greg and his wife deserve an apology. Why Not? The best conferences I’ve attended (I’m...(read more)

    Read the article

  • CQRS &ndash; Questions and Concerns

    - by Dylan Smith
    I’ve been doing a lot of learning on CQRS and Event Sourcing over the last little while and I have a number of questions that I haven’t been able to answer. 1. What is the benefit of CQRS when compared to a typical DDD architecture that uses Event Sourcing and properly captures intent and behavior via verb-based commands? (other than Scalability) 2. When using CQRS what do you do with complex query-based logic? I’m going to elaborate on #1 in this blog post and I’ll do a follow-up post on #2. I watched through Greg Young’s video on the business benefits of CQRS + Event Sourcing and first let me say that I thought it was an excellent presentation that really drives home a lot of the benefits to this approach to architecture (I watched it twice in a row I enjoyed it so much!). But it didn’t answer some of my questions fully (I wish I had been there to ask these of Greg in person!). So let me pick apart some of the points he makes and how they relate to my first question above. I’m completely sold on the idea of event sourcing and have a clear understanding of the benefits that it brings to the table, so I’m not going to question that. But you can use event sourcing without going to a CQRS architecture, so my main question is around the benefits of CQRS + Event Sourcing vs Event Sourcing + Typical DDD architecture Architecture with Event Sourcing + Commands on Left, CQRS on Right Greg talks about how the stereotypical architecture doesn’t support DDD, but is that only because his diagram shows DTO’s coming up from the client. If we use the same diagram but allow the client to send commands doesn’t that remove a lot of the arguments that Greg makes against the stereotypical architecture? We can now introduce verbs into the system. We can capture intent now (storing it still requires event sourcing, but you can implement event sourcing without doing CQRS) We can create a rich domain model (as opposed to an anemic domain model) Scalability is obviously a benefit that CQRS brings to the table, but like Greg says, very few of the systems we create truly need significant scalability Greg talks about the ability to scale your development efforts. He says CQRS allows you to split the system into 3 parts (Client, Domain/Commands, Reads) and assign 3 teams of developers to work on them in parallel; letting you scale your development efforts by 3x with nearly linear gains. But in the stereotypical architecture don’t you already have 2 separate modules that you can split your dev efforts between: The client that sends commands/queries and receives DTO’s, and the Domain which accepts commands/queries, and generates events/DTO’s. If this is true it’s not really a 3x scaling you achieve with CQRS but merely a 1.5x scaling which while great doesn’t sound nearly as dramatic (“I can do it with 10 devs in 12 months – let me hire 5 more and we can have it done in 8 months”). Making the Query side “stupid simple” such that you can assign junior developers (or even outsource it) sounds like a valid benefit, but I have some concerns over what you do with complex query-based logic/behavior. I’m going to go into more detail on this in a follow-up blog post shortly. He also seemed to focus on how “stupid-simple” it is doing queries against the de-normalized data store, but I imagine there is still significant complexity in the event handlers that interpret the events and apply them to the de-normalized tables. 
It sounds like Greg suggests that because we’re doing CQRS that allows us to apply Event Sourcing when we otherwise wouldn’t be able to (~33:30 in the video). I don’t believe this is true. I don’t see why you wouldn’t be able to apply Event Sourcing without separating out the Commands and Queries. The queries would just operate against the domain model instead of the database. But you’d still get the benefits of Event Sourcing. Without CQRS the queries would only be able to operate against the current state rather than the event history, but even in CQRS the domain behaviors can only operate against the current state and I don’t see that being a big limiting factor. If some query needs to operate against something that is not captured by the current state you would just have to update the domain model to capture that information (no different than if that statement were made about a Command under CQRS). Some of the benefits I do see being applicable are that your domain model might end up being simpler/smaller since it only needs to represent the state needed to process commands and not worry about the reads (like the Deactivate Inventory Item and associated comment example that Greg provides). And also commands that can be handled in a Transaction Script style manner by the command handler simply generating events and not touching the domain model. It also makes it easier for your senior developers to focus on the command behavior and ignore the queries, which is usually going to be a better use of their time. And of course scalability. If anybody out there has any thoughts on this and can help educate me further, please either leave a comment or feel free to get in touch with me via email:

    Read the article

  • Cannot right click with synaptics touchpad

    - by fluteflute
    I have a Sony VAIO E14. The touchpad detects all clicks as Left clicks. In Windows 7, pressing on the right side of the touchpad is recognised as a right click. How can I enable right clicking? greg@greg-SVE14A1C5E:~$ xinput ⎡ Virtual core pointer id=2 [master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)] ⎜ ↳ SynPS/2 Synaptics TouchPad id=11 [slave pointer (2)] ... greg@greg-SVE14A1C5E:~$ grep "TouchPad: buttons:" /var/log/Xorg.0.log [ 23.112] (--) synaptics: SynPS/2 Synaptics TouchPad: buttons: left double triple

    Read the article
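
    The xinput and Xorg.log output above shows the pad reporting only left/double/triple buttons, i.e. a ClickPad with no separate physical right button. A hedged sketch of two common workarounds with synclient (the soft-button coordinates are placeholders; the pad's real coordinate range can be read from synclient -l, and settings made this way do not persist across sessions):

        # Treat a two-finger tap as a right click (button 3)
        synclient TapButton2=3

        # Or carve out a soft right-button zone on the pad
        # (coordinate values are illustrative, not measured for this device)
        synclient ClickPad=1 RightButtonAreaLeft=3000 RightButtonAreaTop=4000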

  • Exchange Server 2007 Forwarding Circles

    - by LorenVS
    Hello, I asked a question quite a while ago about two members of an organization who wanted to receive all of each other's emails, and yet maintain separate mailboxes. (so all emails to mike@company get sent to mike and dave and all emails to dave@company get sent to mike and dave). At the time, I actually only needed to implement one side of this (only mike's emails got sent to both recipients) and (with the help of ServerFault) I set up forwarding on dave's inbox so that all of his emails would also be sent to mike. I'm now in a situation where I have to implement the other side of this relation (such that mike's emails will also forward to dave). I still remember how to set up the forwarding rule, but I'm worried that I might be creating a circular forwarding rule such that mike@company forwards to dave@company which forwards to mike@company and so on. Can anyone clear up my confusion (just want to make sure I don't make a stupid mistake). Thanks a ton

    Read the article

  • How to host multiple mail domains with courier?

    - by Dave Vogt
    I had my server set up with generic aliases in /etc/courier/aliases/users, which worked fine. But today I wanted to host some new domains, where addresses overlap. What I need is that [email protected] goes to account "dv", but [email protected] goes to account "d2". So I set up a new file containing fully qualified addresses ([email protected]: dv) instead of the previous dave: dv. But somehow, courier-smtpd doesn't accept mail to these addresses anymore. makealiases -dump prints all the aliases the way they should be, so I'm a bit stuck.

    Read the article

  • Figure out what non-symlink path would be?

    - by David Mackintosh
    On Linux, if I've cd'd around and am now in a directory, is there a way to figure out what the real path to that directory is if I had not used a symbolic link to get there? Consider: $ pwd /home/dave/tmp $ mkdir -p 1/2/3/4/5 $ ln -s 1/2/3/4/5 5 $ cd 5 $ pwd /home/dave/tmp/5 Or: $ pwd /home/dave/tmp $ mkdir -p 1/2/3/4/5 $ ln -s 1/2/3/4 4 $ cd 4/5 $ pwd /home/dave/tmp/4/5 Is there any way to figure out that /home/dave/tmp/5 is really /home/dave/tmp/1/2/3/4/5?

    Read the article
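
    Two standard ways to get the physical, symlink-free path; a short sketch for the first example above (readlink -f is GNU coreutils and is normally present on Linux):

        $ pwd -P
        /home/dave/tmp/1/2/3/4/5
        $ readlink -f .
        /home/dave/tmp/1/2/3/4/5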

  • Troubles with PyDev and external libraries in OS X

    - by Davide Gualano
    I've successfully installed the latest version of PyDev in my Eclipse (3.5.1) under OS X 10.6.3, with python 2.6.1. I have trouble making the libraries I have installed work. For example, I'm trying to use the cx_Oracle library, which works perfectly if called from the python interpreter or from simple scripts made with some text editor. But I can't make it work inside Eclipse: I have this small piece of code: import cx_Oracle conn = cx_Oracle.connect(CONN_STRING) sql = "select field from mytable" cursor = conn.cursor() cursor.execute(sql) for row in cursor: field = row[0] print field If I execute it from Eclipse, I get the following error: import cx_Oracle File "build/bdist.macosx-10.6-universal/egg/cx_Oracle.py", line 7, in <module> File "build/bdist.macosx-10.6-universal/egg/cx_Oracle.py", line 6, in __bootstrap__ ImportError: dlopen(/Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so, 2): Library not loaded: /b/227/rdbms/lib/libclntsh.dylib.10.1 Referenced from: /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so Reason: no suitable image found. Did find: /Users/dave/lib/libclntsh.dylib.10.1: mach-o, but wrong architecture Same snippet works perfectly from the python shell. I have configured the interpreter in Eclipse in preferences - PyDev -- Interpreter - Python, using the Auto Config option and selecting all the libs found. What am I doing wrong here? Edit: running file /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so from the command line gives this: /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so: Mach-O universal binary with 3 architectures /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so (for architecture i386): Mach-O bundle i386 /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so (for architecture ppc7400): Mach-O bundle ppc /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so (for architecture x86_64): Mach-O 64-bit bundle x86_64

    Read the article
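
    A hedged guess at the cause: Eclipse launched from the Finder does not inherit the shell's DYLD_LIBRARY_PATH, so the dynamic linker falls back to ~/lib and finds only a 32-bit libclntsh, which cannot be loaded into the 64-bit python process. One way to test that theory (the instant client path below is a placeholder for wherever a 64-bit or universal Oracle client is installed):

        # Check which architectures the Oracle client library contains
        file /Users/dave/lib/libclntsh.dylib.10.1

        # In the PyDev run configuration (Run > Run Configurations > Environment tab),
        # add the equivalent of this shell setting, pointing at a client build
        # that includes x86_64:
        export DYLD_LIBRARY_PATH=/path/to/instantclient_10_2   # placeholder path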

  • Agilist, Heal Thyself!

    - by Dylan Smith
    I’ve been meaning to blog about a great experience I had earlier in the year at Prairie Dev Con Calgary. Steve Rogalsky and I did a session that we called “Agilist, Heal Thyself!”. We used a format that was new to me, but that Steve had seen used at another conference. What we did was start by asking the audience to give us a list of challenges they had had when adopting agile. We wrote them all down, then had everybody vote on the most interesting ones. Then we split into two groups, and each group was assigned one of the agile challenges. We had 20 minutes to discuss the challenge and suggest solutions or approaches to improve things. At the end of the 20 minutes, each of the groups gave a brief summary of their discussion and learnings, then we mixed up the groups and repeated with another 2 challenges. The 2 groups I was part of had some really interesting discussions, and suggestions:

    Unfinished Stories at the end of Sprints

    The first agile challenge we tackled was something that every single Scrum team I have worked with has struggled with: what happens when you get to the end of a Sprint and there are some stories that are only partially completed? The team in question was getting very demoralized as they felt that every Sprint was a failure as they never had a set of fully completed stories. How do you avoid this? And/or what do you do when it happens? There were 2 pieces of advice that were well received:

    1. Try to bring stories to completion before starting new ones. This is advice I give all my Scrum teams. If you have a 3-week sprint, what happens all too often is you get to the end of week 2, and a lot of stories are almost done; but almost none are completely done. This is a Bad Thing. I encourage the teams I work with to only start a new story as a very last resort. If you finish your task, look at the stories in progress and see if there’s anything you can do to help before moving onto a new story. In the daily standup, put a focus on seeing what stories got completed yesterday; if a few days go by with none getting completed, be sure this fact is visible to the team and do something about it. Something I’ve been doing recently is introducing WIP (Work In Progress) limits while using Scrum. My current team has 2-week sprints, and we usually have about a dozen or so stories in a sprint. We instituted a WIP limit of 4 stories. If 4 stories have been started but not finished then nobody is allowed to start new stories. This made it obvious very quickly that our QA tasks were our bottleneck (we have 4 devs, but only 1.5 testers). The WIP limit forced the developers to start to pick up QA tasks before moving onto the next dev tasks, and we ended our sprints with many more stories completely finished than we did before introducing WIP limits.

    2. Rather than using time-boxed sprints, why not just do away with them altogether and go to a continuous flow type approach like Kanban? Limit WIP to keep things under control, but don’t have a fixed time box at the end of which all tasks are supposed to be done. This eliminates the problem almost entirely. At some points in the project (releases) you need to be able to burn down all the half-finished stories to get a stable release build, but this probably occurs less often than every sprint, and there are alternative approaches to achieve it using branching strategies rather than forcing your team to try to get to Zero WIP every 2 weeks (e.g. when you are ready for a release, create a new branch for any new stories, but finish all existing stories in the current branch and release it).

    Trying to Introduce Agile into a team with previous Bad Agile Experiences

    One of the agile adoption challenges somebody described was that he was in a leadership role on a team he had recently joined – let’s call him Dave. This team was currently very waterfall in their ALM process, but they were about to start on a new green-field project. Dave wanted to use this new project as an opportunity to do things the “right way”, using an Agile methodology like Scrum, adopting TDD, automated builds, proper branching strategies, etc. The problem he was facing is everybody else on the team had previously gone through an “Agile Adoption” that was a horrible failure. Dave blamed this failure on the consultant brought in previously to lead this agile transition, but regardless of the reason, the team had very negative feelings towards agile, and was very resistant to trying it out again. Dave possibly had the authority to try to force the team to adopt Agile practices, but we all know that doesn’t work very well. What was Dave to do?

    Ultimately, the best advice was to question *why* Dave wanted to adopt all these various practices. Rather than trying to convince his team that these were the “right way” to run a dev project, and trying to do a Big Bang approach to introducing change, he would be better served by identifying problems the team currently faces, having a discussion with the team to get everybody to agree that specific problems exist, and then having an open discussion about ways to address those problems. This way Dave could incrementally introduce agile practices, and he doesn’t even need to identify them as “agile” practices if he doesn’t want to. For example, when we discussed with Dave, he said probably the team’s biggest problem was long periods without feedback from users, then finding out too late that the software is not going to meet their needs. Rather than Dave jumping right to introducing Scrum and all it entails, it would be easier to get buy-in from the team if he framed it as a discussion of existing problems, and brainstorming possible solutions. And possibly most importantly, don’t try to do massive changes all at once with a team that has not bought into those changes. Taking an incremental approach has a greater chance of success.

    I see something similar in my day job all the time too. Clients who for one reason or another claim to not be fans of agile (or not ready for agile yet). But then they go on to ask me to help them get shorter feedback cycles, quicker delivery cycles, iterative development processes, etc. It’s kind of funny at times; sometimes you just need to phrase the suggestions in terms they are using and avoid the word “agile”.

    PS – I haven’t blogged all that much over the past couple of years, but in an attempt to motivate myself, a few of us have accepted a blogger challenge. There’s 6 of us who have all put some money into a pool, and the agreement is that we each need to blog at least once every 2 weeks. The first 2-week period that we miss we’re eliminated. Last person standing gets the money. So expect at least one blog post every couple of weeks for the near future (I hope!).
And check out the blogs of the other 5 people in this blogger challenge: Steve Rogalsky: http://winnipegagilist.blogspot.ca Aaron Kowall: http://www.geekswithblogs.net/caffeinatedgeek Tyler Doerkson: http://blog.tylerdoerksen.com David Alpert: http://www.spinthemoose.com Dave White: http://www.agileramblings.com (note: site not available yet.  should be shortly or he owes me some money!)

    Read the article
