Wednesday, October 15, 2008

An Example of How Testers Contribute

Here's an example to support my previous post about the "hidden resources" that may be lurking in testers but go unused for whatever reason (lack of support from the team or managers, lack of time, lack of resources, lack of imagination).

Our web-based app manages all aspects of 401(k) plans, including doing trades. Lots of money moves around, but a lot of it moves outside the app itself, in QuickBooks, via emails to a custodian, and so on. We, the development team, are quite expert in the 401(k) domain, and fairly lacking in the accounting domain.

We had a story three iterations ago to "tweak" a report so the accountant could use it to help track down the sources of problems when the cash accounts didn't reconcile. It was supposed to be a small story, but because we never took time to understand the big picture of how cash flows through the system (both inside and outside the app itself), or what the accountant really needed to do, it grew into a giant one. Plenty of issues struck me as smells: it was difficult to test because we didn't have production-like data and couldn't produce any; the results of running the report were erratic and not always explainable; and it turned out that if certain batch processes ran at the same time, the data for the report would be corrupted and become useless.

Long story short, I raised all kinds of flags, but there was pressure from the business to just get this "little" story done. Although both the programmer and I spent time with the accountant trying to understand what was needed, we didn't even learn enough to know that we didn't know enough. We went through two iterations of fixing and re-releasing, to no avail. We wasted time in meetings discussing how to fix the report, when in truth the report was the wrong approach for what the accountant needed from the get-go.

I finally convinced everyone that we needed to take a step back and have the accountant explain the larger picture: how she has to reconcile accounts, all the different ways money can move around, the possible sources of imbalance, and what information she really needed to have. In the end, we decided that a completely different report was needed. The time spent on this story was wasted, except for the lessons we learned about how little we understood the accounting domain.

Was it my job, as the tester, to put the brakes on this story and insist that we start from scratch to understand the requirements? No; the programmer could have raised that issue, or the ScrumMaster, or the product owner (the poor accountant tends to get ignored, for no reason other than that accounting doesn't sell 401(k) plans, and she was already doing all she could to make herself understood). However, my testing of this report showed that there were some basic issues in the system itself that would have to be fixed before the report could work, so we either had to address those with new stories or take a different approach to get the necessary data.

I feel that as the tester, I had a better opportunity to see the big picture of what was going on. I took some time to sit with the accountant and understand more about her domain, and what she needed for her reconciliation process. We testers can detect smells and provide information about them to the rest of the team. We can, as I did in this case, bring together customers and developers to re-think a problematic story. I didn't throw myself down on the floor in a fit and say "we can't release this report because I don't think it works". I explained the various issues and asked if key players could meet to understand the problem and find a new approach.

I think many testers would do just what I did, but some may feel it's "not my job", or not feel confident enough to call for a dramatic change in approach. How can we empower all testers to dig into problems and bring together all the people needed to solve them?

1 comment:

SandeepMaher said...

Lisa, while I completely agree that the Test Analyst (aka tester) always has the big picture and should raise a flag, which was not done in this case, I also think that the Business Analyst, IF she was involved in this case, should take responsibility for not having understood the accountant's needs and not translating them well enough to the developer, thereby causing the botched implementation / rework, or should I say re-development.