Monday, June 30, 2008

CITCON Takeaways

Sounds like CITCON Melbourne was a huge success. Reading about it made me think back to CITCON Denver in April, and what my team is doing differently as a result of what I took away from that. The biggest concrete change is that our sys admin is working on moving us from CruiseControl to Hudson; he's setting up a new build machine with it.

Another takeaway was how intrigued I am by behavior-driven development and tools such as easyb, although I'm not sure yet how, or whether, to apply it here. It did affect my thought pattern when writing test cases, although I know BDD is supposed to be about coding and design, not customer-facing tests.
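Here's a rough illustration of what I mean by "thought pattern." This is a made-up sketch in plain JUnit, not easyb and not our real code; the Account class is just a stand-in so the example compiles. BDD nudges me toward naming the test after a behavior and shaping the body as given/when/then:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // A made-up sketch (plain JUnit, not easyb): the test name describes a
    // behavior, and the body follows a given/when/then shape.
    public class AccountWithdrawalTest {

        @Test
        public void balanceIsReducedWhenCashIsWithdrawn() {
            // given an account with a balance of 100
            Account account = new Account(100);

            // when 20 is withdrawn
            account.withdraw(20);

            // then the balance should be 80
            assertEquals(80, account.getBalance());
        }

        // Minimal stand-in class so the sketch compiles; not production code.
        private static class Account {
            private int balance;
            Account(int balance) { this.balance = balance; }
            void withdraw(int amount) { balance -= amount; }
            int getBalance() { return balance; }
        }
    }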

SO MUCH FUN!

The donkey boys (our miniature donkeys) did real work this weekend, hauling brush and logs from a friend's creek bottom to their woodpile. Driving is a much bigger challenge when you have to go up and down steep banks, and navigate around cactus and willows. My skills improved, but I need to get a lot better! Will post photos when I find time...

I suppose I should think of a metaphor to connect this to agile testing in some way, but it was just plain fun, and a great break from so many months of hard work on the book and on presentations. Now, back to work!

Friday, June 27, 2008

Improving

Last sprint was tough: we didn't get finished, and we couldn't release. Fortunately this wasn't a big deal; there is no big deadline to meet. However, we finished only one story - that's not good - and it's only about the third time in four years that we haven't finished all our stories.

We had a long discussion about what went wrong and how to do better. One thing was that although we identified 6 "steel threads" (aka thin slices, tracer bullets) in our big UI story, we didn't finish the third thread until the 7th day of the sprint, at which point most of the four subsequent threads were also done. It was incredibly confusing to try to test: we couldn't tell what should work and what shouldn't, and we missed a couple of gaps in the features. Plus, unit tests were being written after the fact.

Here's our stop/start/continue list that resulted from our retrospective:

Start:
  • Transfer whiteboard drawings from conference room to Scrum arena, and check off threads as they are done
  • First thread of first story has to be done by first Tuesday of Sprint
  • Log remaining hours on task card (in Mingle) prior to Scrum
  • Test first
Stop:
  • Working more than one thread ahead of unfinished thread
We know the steel thread concept works for us - we just have to do it properly. UI stories are harder to do test-first, but we suffer when we don't. There were lots of reasons we got off track last sprint, but we think this will get us back on track.



Wednesday, June 25, 2008

Even Agile Testing Can Be Frustrating

We're having a hard sprint. We thought the stories were pretty straightforward, but they turned out to be hard, and there are also a lot of interdependencies. We did our steel thread exercise, but for whatever reason, the first thread took many days to finish and that put everything way behind. It will be an interesting retrospective.

I've been multitasking from one story to another and running into roadblocks, both in my manual testing and my automated testing. FitNesse fixtures don't work the way I expect, and they're hard to fix. We're replacing old code with new code, but we don't want to rip out the old code yet, just in case we can't release, so that's confusing. I feel like I just can't get anything really done.
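For anyone who hasn't worked with FitNesse fixtures, here's the general shape of the glue code I'm fighting with. It's a generic Fit ColumnFixture sketch with invented names, not our actual fixture: the public field is an input column from the wiki table, and the public method is an expected-output column that FitNesse checks.

    import fit.ColumnFixture;

    // Invented example, not our real code: FitNesse maps wiki table columns
    // onto the public field (input) and public method (expected output) below.
    public class DiscountFixture extends ColumnFixture {
        public double orderTotal;    // input column: "order total"

        // Output column: "discount()". In real life this would delegate to
        // production code; a placeholder rule stands in for it here.
        public double discount() {
            return orderTotal >= 100.0 ? orderTotal * 0.1 : 0.0;
        }
    }

The corresponding wiki table would have columns named "order total" and "discount()".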

There's a lot of testing left to do, and only one day remaining in the sprint. We rarely have sprints like this, and when we do, I have a hard time feeling productive.

Plan of action: Try to quit multitasking, and focus on getting one thing done. Hope for lots of good luck!

Monday, June 23, 2008

Pass-It-On Grants for Women in Technology

If you or anyone you know might be interested in one of these grants, please check it out. If you're a woman in a technology field and not already on the Systers mailing list, check that out too.

The Anita Borg Systers Pass-It-On Grants honor Anita Borg’s desire to create a network of technical women helping one another. The grants, funded by donations from the Systers Online Community, are intended as means for women established in technological fields to support women seeking their place in the fields of technology. The grants are called “Pass-It-On” grants because they come with the moral obligation to “pass on” the benefits gained from the grant.

Pass-it-on Grants are open to any woman over 18 years old in or aspiring to be in the fields of computing. Grants are open to women in all countries and range from $500.00 to $1000.00 USD. Applications covering a wide variety of needs and projects are encouraged, such as:

  • Small grant to help with studies, job transfers or other transitions in life.
  • A broader project that benefits girls and women.
  • Projects that seek to inspire more girls and women to go into the computing field.
  • Assistance with educational fees and materials.
  • Partial funding source for larger scholarship.
  • Mentoring and other supportive groups for women in technology or computing.
For more information, go to:

Pass It On Grants

Thursday, June 19, 2008

Interdisciplinary Awareness

One of the most interesting sessions I attended at Better Software was "You Just Don't Understand Me: Interdisciplinary Awareness to the Rescue" by Mike Tholfsen, principal test manager of the Microsoft OneNote team. (See his blog at http://blogs.msdn.com/onenote_and_education/) He presented a "Team Pyramid" (don't we love all these pyramids?) showing that for a successful team, you need trust as your base. Results are the little top of the pyramid; they come from trust, healthy conflict, commitment, and accountability.

Mike feels that one way to achieve this is to help people understand their peers' viewpoints better. He introduced an exercise to help people in different roles on the team understand the important traits of each discipline, and trade ideas on what teammates in other roles like or dislike about each discipline.

An interesting point of the presentation was that there was a development team where all the team members scored the same on a Myers-Briggs style test. The manager had hired himself four times. Lack of diversity is not a good thing.

This made me think about the times we've hired a tester onto my agile team. I brought in testers who were great at communication, collaborating with customers, and exploratory testing, but who didn't have a lot of skills on the automation/technical side. I felt they could contribute value, but my developer coworkers gave them a thumbs down. In each case, we ended up hiring a very techy tester (fortunately, one also good at all the other things).

Mike's presentation made me realize that the developers were most comfortable hiring someone like themselves. This is understandable, but not always in the best interests of the team. Realizing this will, I think, help us in future hiring efforts. Do we really not like something about a candidate, or is it just that they're different from us? Could the differences make us better?

There appears to still be a lot of controversy in the agile community over testers and their role on the team. Interdisciplinary awareness might help agile teams realize that people with a different skill set might add tremendous value to their team.

Friday, June 13, 2008

Better Software

I attended Better Software this week. Each time I go to a conference such as this, I am pleasantly surprised that more and more people are on agile teams and have a good understanding of agile principles, values and practices. There are also plenty of people wanting to learn about agile.

In my class on "Ten Principles of an Agile Tester," a QA manager described an interesting problem (to which I had no ready answer!). The programmers in her company have decided to implement agile, but they think they can do it all by themselves, in isolation. They feel they don't need the testers, analysts, and all the other groups to go along. From what I understood, the programmers felt they could still just hand off the code to the QA group when they were finished, and they apparently believed they could communicate with the customers directly to get the requirements.

The QA manager is understandably frustrated with this situation. She has tried repeatedly to sit down with the programmers and figure out how the testers can work with them, and show them how the testers can add value. They aren't cooperating.

What to do? I was once in a situation where I joined a team that had never had testers, only developers, although they did have BAs. They wouldn't let me join the daily standup, they wouldn't include test estimates in story estimates or in iteration planning, and they declared stories "done" before they were tested. I argued that I should be a part of the team, to no avail. After we missed the release date because the testing wasn't finished, I asked the coach if we could try things my way, and work together all as one team, everyone taking responsibility for testing, not considering stories done until they were tested. He agreed to try it for the next release cycle. We released on time and the customers were happy. So there was no more debate.

In my experience, sometimes people have to feel the pain before they'll change. I know for sure that evangelizing doesn't work. (Blogging feels a bit like evangelizing, which is one reason I never did it before.)

I'm interested in your comments: if you've ever had a situation like this, what did you do?

BTW, a couple of posts back I mentioned that we were tracking our refactoring tasks in our DTS, in our online story board tool, and in our Wiki. We have decided to try tracking them only in the online story board tool, as task cards, and add some fields so we can find them easily and include all the information needed before and after doing the task. We'll see how it works! Experimenting is good.

Friday, June 6, 2008

The Future is Now

I heard Ray Kurzweil on Talk of the Nation Science Friday today. As the NY Times put it last Tuesday in the Science section ("The Future Is Now"), he's a futurist with a track record. He observes that certain aspects of technology follow predictable trajectories. Computing power first doubled every three years, then every two, now every year. IT is revolutionizing biology, medicine, energy and other fields. Nanotechnology, gene sequencing and brain scan resolution all progress exponentially.

I'm no futurist, but I think we are seeing this in testing as a result of agile development. Since the late '90s, tools such as JUnit have led many teams to adopt practices such as TDD and CI that vastly improve software quality. Getting programmers interested in testing and test automation has led to an explosion of useful open-source testing-related tools.

We testers (and when I say that, I mean all you programmers and everyone else who tests) have been left out in terms of IDEs and other tools that would make writing and automating tests way faster and easier. That's changing now and I bet it will change really fast. Maybe in a year or so we will have fabulous tools for writing and refactoring tests, tools to help us focus better on exploratory testing, maybe tools for types of testing we haven't thought of yet. The Agile Alliance functional test tools group headed by Jennitta Andrea is pushing an effort to get to a new generation of functional test tools.

I think the future of software testing is just about here.

Speaking of the future being now, Janet Gregory and I turned in our draft manuscript for our Agile Testing book last Sunday night. Whoohoo! Four lovely people will be doing technical reviews of it, and then we get to revise it some more, and it's supposed to go into production in August. I hope that means it will be out in the fall. I wish I had a way to express our gratitude to the many people who have helped us, and in turn agile testers and teams everywhere, with this book.

Thursday, June 5, 2008

What to Track Where

We are in the midst of a "refactoring" sprint. This is an opportunity to do tasks that reduce technical debt without having to deliver any stories for the business. We're upgrading Spring, log4j, and Canoo WebTest; refactoring user searches to use a custom DTO; removing an unnecessary and confusing column from a table (which requires a ton of code changes); and stuff like that.
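In case "a custom DTO for searching" is unclear, here's a rough sketch of the idea (the names are invented, not our actual code): the user search criteria travel together in one object instead of as an ever-growing list of method parameters.

    // Invented names, just to illustrate the idea of a search DTO.
    public class UserSearchCriteria {
        private String lastName;
        private String city;
        private boolean activeOnly;

        public String getLastName() { return lastName; }
        public void setLastName(String lastName) { this.lastName = lastName; }

        public String getCity() { return city; }
        public void setCity(String city) { this.city = city; }

        public boolean isActiveOnly() { return activeOnly; }
        public void setActiveOnly(boolean activeOnly) { this.activeOnly = activeOnly; }
    }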

How do we keep a record of what we changed, so that when someone later asks, "When was it that we removed that tax ID column and changed all that code?", we can find it quickly? We have a wiki page for each sprint, and up to now, we wrote all the refactoring changes on the sprint wiki page. However, now that we can't use physical cards (because one team member is remote), we have more tools to track things. We put all our refactoring "cards" in a "refactoring" database in our DTS. When we started this sprint, we also wrote cards in our online story board (Mingle) for all the refactoring tasks we might do (which seems like extra work, but Mingle has some visibility advantages over the DTS).

As we started to log on the wiki all the refactoring changes we actually made, it began to seem like double work. In the spirit of DRYness, we want to pick one place to track the refactoring. The DTS and Mingle seem like the most likely options, but we should pick one or the other.

I'm trying to get a consensus; watch this space for what we decide. If you have solved a similar problem, please post a comment!

I haven't figured out the answer to yesterday's dilemma either, about some automated way to verify our test schema whenever we refresh it from production. So far this blog seems to be about questions, rather than answers.

Wednesday, June 4, 2008

Finally!

I've been meaning to start a blog for ages. Now that Janet and I have finished our draft manuscript, it feels like there might be time for a post here or there. Thanks to Dawn Cannan for inspiring me with her great new blog.

So how about something interesting to start out with. Today there was a thread on the agile-testing Yahoo group about defect tracking systems (DTS), and someone asked for an example of a time when information from a DTS was useful. Today our test system got an error that seemed mighty familiar, but I couldn't remember the cause from when it happened before. I was able to find it easily in the DTS: it was a missing trigger. Periodically we replace one of our test schemas with a fresh copy of production, and in the process, triggers get disabled or lost (not sure why). If I hadn't been able to find this in the DTS, it would have taken longer to track down the problem.

Someone replied to this post, wondering why I didn't have a test to detect the problem. The regression suite that runs in our build uses a different schema that has canonical data. However, I could easily have run a regression suite against the new copy of production and found the problem. I'll do that from now on, but how will I remember to do it? Manual steps are easily forgotten...
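One idea I'm toying with: a small JUnit check that we run right after each refresh, so the manual step becomes automated. This is only a sketch, assuming an Oracle schema and plain JDBC; the connection details are placeholders, not our real configuration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Sketch of a post-refresh sanity check: fail if any trigger in the
    // refreshed schema is not enabled. Assumes Oracle; the URL, user and
    // password are placeholders.
    public class SchemaTriggerCheckTest {

        private static final String URL = "jdbc:oracle:thin:@testdb:1521:TEST";

        @Test
        public void allTriggersAreEnabled() throws Exception {
            Class.forName("oracle.jdbc.OracleDriver"); // may be needed with older JDBC drivers
            Connection conn = DriverManager.getConnection(URL, "testuser", "password");
            try {
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery(
                    "SELECT trigger_name FROM user_triggers WHERE status <> 'ENABLED'");
                StringBuilder disabled = new StringBuilder();
                while (rs.next()) {
                    disabled.append(rs.getString("trigger_name")).append(' ');
                }
                assertEquals("Disabled triggers: " + disabled, 0, disabled.length());
            } finally {
                conn.close();
            }
        }
    }

A check like this would catch triggers that got disabled; triggers that disappeared entirely would need a comparison against a known list, so it's not a complete answer.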