Monday, March 31, 2014

Does Agility Guarantee Faster Delivery?


“Let us propose an agile-based approach for this project. That will help us compress the schedule and include multiple releases. I don’t think a waterfall approach is going to help us make a competitive proposal. We must deliver this project faster, with high quality and at a competitive cost. In order to win, our approach has to be agile.” Sounds familiar?

We hear suggestions like this more frequently than ever these days in discussions on creating competitive proposals.  These are business discussions with your senior leaders and pre-sales teams.

Let me ask: does agility guarantee faster delivery? Undoubtedly, ‘yes’ is the answer that makes business sense and earns the goodwill of project sponsors, but there are other answers too. I am writing this article to present my thoughts and find an answer to this question.

Certainly, adopting an agile approach is a way to enhance visibility into and predictability of project progress. However, can you enforce an iterative schedule and scope on your project team so that you gain the advantage of compressing the overall schedule and cost of your project? When you enforce a pre-planned schedule and scope, however well conceived it may be, you are not empowering your team to inspect, adapt and learn. This approach leads to a ‘command and control’ culture. Is there an alternative? Yes, there is.

First, your customer and other stakeholders need to understand the essence of agility and agile methods. They need to know that an agile approach can enable your team to move forward, learn and deliver faster, provided the team is given the right set of tools, infrastructure and governance support to evolve and improve from iteration to iteration. There has to be a good match of mindset and culture.

Second, the product owner, or whoever is responsible for providing requirements, needs to believe in prioritizing user stories or features at regular intervals – and practice it. Moreover, she must be willing to let go of some low-priority features if the release timelines are critical. You can’t do without an effective product owner.

Third, there has to be continuous collaboration in terms of participation in meetings, issue resolution and product demonstrations. Genuine feedback is essential – without it you can’t avoid last-minute surprises. For this, your product demonstrations and team retrospectives need to be effective.

Fourth, the team members need to be skilled, self-enabled and aligned to perform. Have you ever observed how a rowing crew of ten or more members works together to win a trophy? Skills and competencies are necessary, but alignment is paramount too. Without alignment, the boat cannot travel in the right direction. This is where a ‘shared vision’ comes into play. Establishing a shared vision and aligning your team cannot be ignored at any cost.

Fifth, your governance team, and the customer representatives who are part of it, must be able to understand and appreciate the technical issues and challenges in the project instead of focusing only on getting as many features or user stories as possible done in every iteration.

When you have all of these in place – as well as many other things that I have not explicitly mentioned – you and your team will be able to maintain a sustainable pace, introspect, learn and continuously improve. That is agility.

Does agility guarantee faster delivery? It does not guarantee faster delivery because software projects come with several variable factors and the dynamics can be complex. However, agility can lead to faster delivery.

More than what you estimate and propose, focus on how you execute, enable, learn and improve.  When you do that, agility can lead to faster delivery!

Friday, March 21, 2014

Is This Bug Worth Reporting?


One of the primary pursuits of software testers is to verify whether the application under test conforms to system specifications. In this pursuit, testers create bug reports whenever test cases fail. This is how the professional journey of a typical software tester starts. At some point in this journey a realization strikes. It is a voice from within, and it says, “The quality of bugs matters more than the quantity.” This is when a software tester understands the true purpose of testing. This post is a real-life story that illustrates this phenomenon.
Anita, a software test engineer, joined our project team several years ago. That was her first project on her first job. She held an undergraduate degree from a well-known college. She was very fast at finding failures: she would find failures against test cases and raise bug reports, and she topped the team in doing this. She was happy with this progress. However, her rapport with developers became unhealthy. Every defect she found involved not only additional research and debugging but also a great deal of conversation and negotiation. Sometimes more than five out of ten defects reported by her would end up in categories such as ‘Not a Bug’ or ‘Duplicate’. Eventually, developers started seeing no value in her bug reports. She was demotivated. When we analyzed this situation, we understood that she was not collaborative enough and did not think things through before creating bug reports. She was passionate about maximizing the number of defects she reported but paid little attention to their quality. This episode turned out to be an unpleasant experience for her as well as for the developers. One of the senior members of our team coached her, and she was willing to learn and improve. It took her several weeks, but she kept getting better.
One day she was extremely happy because a bug she reported was fixed on the same day. The developer who fixed it was impressed too. I appreciated her and asked, “This is awesome. How did this happen?” She replied, “Thank you! I know, I did my homework before reporting this bug. I discussed it with our developer, understood the context, did additional testing by considering related scenarios and created a better report. This approach helped us reduce debugging time.” I smiled at her with satisfaction.
Gradually she became a successful professional and started contributing to large projects.
Several years have passed. Nowadays I no longer play the role of a full-time developer, tester or project manager. I am a specialist. I work with organizations and teams. I write, speak, coach, consult and do several other things.
A couple of months ago I started using an online application that facilitates conference management, with features such as attendee registration, speaker submission and so on. It is a new system developed by an enthusiast who is a hardcore techie. Sometimes he works with one or two developers supporting him; otherwise he does it alone, adding features, resolving issues and making it better. As a member of the program committee of an upcoming conference, I use this system daily. All of us on the program committee are aware of the known issues, and one of them is a pagination issue – we had to refresh pages a couple of times to get the right number of records.
The other day, I observed an incorrect number of submissions under a specific category in spite of multiple attempts to refresh the page. I was reluctant to attribute it to the pagination issue. I thought it could be due to some issue with the browser cache, and I went on to complete my tasks of the day before investigating it further. The issue did not leave my mind, but I did not react either. I remembered Anita’s happiness when she did it right and got that bug fixed efficiently. I asked myself, “Is this a bug or issue worth reporting?” The answer was not yet affirmative. I wanted to get back to the system and explore it further so that I could help the developer with my bug report.
Late in the evening, I went back to the system. Even though I was able to reproduce the defect, I kept examining the corresponding behavior under various scenarios and compared it with similar features. To me, the ability to reproduce a failure is a necessary but not a sufficient condition for reporting a bug, and this bug was not worth reporting yet. I wanted to narrow it down and isolate it so that I could write an informative bug report.
After multiple attempts, I started getting some clarity. I isolated it to a specific scenario across screens. It appeared to be a programming error or configuration error. I reported it with all my findings.  And the bug was fixed within an hour!
Looking back, I find that small incidents like these are worth sharing because they leave behind takeaways. Test case failures are usually the leading indicator of defects. However, a test case failure need not be a defect in itself. It is the responsibility of a software tester to do additional probing and research to isolate the failure in order to write a meaningful bug report. The next time you come across a failure, ask yourself, “Is it a bug worth reporting?” Then analyze it further and write a great bug report. When you do this you will enjoy your work and become a valuable contributor to your team.
 

Thursday, March 6, 2014

Team Velocity: Do You See Trends and Compare?


Some time ago I wrote about my discussions with Joe on fluctuating velocity, readiness at the starting point, building on legacy code and dependencies – see 'Related Posts' at the end. After a gap of several weeks, Joe and I met again, sometime last week. I am writing this post to share an interesting part of our discussion. Read on.
When I met Joe last week, I was eager to hear about his challenges because he has been running two or three projects – all of them using agile methods. He had plenty of information to share, and some questions too.

He remembered our first meeting and said, “Let me tell you something, the problem of velocity dip and customer concern continues in my project. Velocity is throughput or units of work accepted in every iteration. Right?”
I nodded and said, “Wait. What do we mean by units of work? The number of stories delivered? If you have a good distribution or mix of stories in terms of various complexities, that will work, because over several iterations you will see some trends – and those should help you in sense-making.”

“Well, I am talking about Story Points! Now I am managing three projects. We have the velocity of all three projects posted on our wall. In our organization, velocity is the number of Story Points accepted per iteration. One day our Vice President walked along the wall and appreciated the team that topped all the others in terms of velocity! I did not expect that to happen! Another day, he asked me about the velocity dip in one of the projects in front of our customer. I had reasons, but he did not buy what I said! That wasn’t good for me.”

Joe took a pause. I took my pen to do a quick calculation on my writing pad and asked him, “Joe, do you think 59 is equal to 59? I mean, do you think delivering 59 Story Points in one iteration and 59 in another means the same thing? Are they equal?”

Joe was not sure. He wanted more information. And I continued.
“Let me elaborate. In one iteration, let us assume that 11 user stories were accepted. These 11 include 4 stories of 2 SPs each, 5 stories of 5 SPs each, and 2 stories of 13 SPs each. In the next iteration, say 8 stories were accepted – these include 4 stories of 3 SPs each, 1 story of 8 SPs, and 3 stories of 13 SPs each. That means 59 Story Points delivered in the first iteration and the same 59 Story Points delivered in the next iteration.”

Hearing my example, Joe started writing the following on a piece of paper.
(4 x 2) + (5 x 5) + (2 x 13) = (4 x 3) + (1 x 8) + (3 x 13)
8 + 25 + 26 = 12 + 8 + 39
59 = 59
He said quickly, “Good example. But this is not happening on the ground!”
“Well.  Why do you expect that to happen? 2, 5, 8 and 13 are sizes.  Why do we attempt to add them together?”
“What else can we do? Is there a way around it?”
“Let me tell you, and you know some of this already. Story Point estimation is based on relative sizing and complexity. It is not scientific, and it is not linear. We are not talking about buying potatoes in bags of different weights – 2 pounds, 3 pounds, 5 pounds, 8 pounds, 13 pounds, and so on. When you buy potatoes like that, you can multiply and add because the units make sense. Can we apply the same treatment to measuring velocity by adding up Story Points? Does it make sense? No, it doesn’t. Many organizations have realized this and have stopped the practice.”
“I understand. Do we have another way?”
“Consider the number of user stories accepted per iteration along with several other parameters such as team availability or capacity, technical debt, and so on. Put all of those together to understand team progress. And now, do you realize how harmful it is to compare two teams by looking at the number of Story Points accepted per iteration? That is the second mistake – something we must avoid!”

“Yes. That makes sense. I liked that analogy of potatoes. Now I know why 59 is not equal to 59.”
“I am sure you do.   Too much focus on velocity and not measuring it right is a big ‘No-No’.  Unless you spread this awareness, your senior management will not understand.  Talk to your vice president.”
“Sure. I will.”
I was able to convince Joe, and he agreed.
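For anyone who wants to see the ‘59 versus 59’ point in code, here is a minimal sketch in Python. The iteration data is the same hypothetical example from our conversation, and the script is only an illustration, not a prescription for any particular tool or metric.

```python
# A minimal, illustrative sketch: the same hypothetical '59 versus 59'
# example from the conversation above, expressed as data.

from collections import Counter

# Each iteration is a list of accepted story sizes (in Story Points).
iteration_1 = [2] * 4 + [5] * 5 + [13] * 2   # 11 stories accepted
iteration_2 = [3] * 4 + [8] * 1 + [13] * 3   # 8 stories accepted

for name, stories in [("Iteration 1", iteration_1), ("Iteration 2", iteration_2)]:
    total_points = sum(stories)          # the usual 'velocity' number
    story_count = len(stories)           # throughput in stories
    mix = dict(sorted(Counter(stories).items()))
    print(f"{name}: {total_points} SPs from {story_count} stories, mix = {mix}")

# Output:
# Iteration 1: 59 SPs from 11 stories, mix = {2: 4, 5: 5, 13: 2}
# Iteration 2: 59 SPs from 8 stories, mix = {3: 4, 8: 1, 13: 3}
#
# Both iterations 'add up' to 59 Story Points, yet the number of stories and
# the mix of sizes differ. A single Story Point total hides this, which is why
# trends read better alongside story counts, capacity, technical debt and so on.
```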
What do you think? What is your definition of velocity? How do you make sense of velocity trends?
Related Posts