XP Day 06

Here are some notes from the recent (excellent) XP Day conference. This is based on my internal BT write-up, with a few company-specific bits removed.

Keynote: Selling Agile

Joshua Kerievsky

The main thing I took away from this was something that wasn’t said explicitly, which is the importance of two-way communication when trying to introduce agile to an organisation. I think this is where some of the agile evangelism (as opposed to the coaching) in BT fails, because people hear us banging on about stuff they don’t fully understand, and just assume that it’s either rubbish, or not applicable to them. When you have interaction, people can question things and receive clarification, which might help reduce cynicism and encourage engagement.

Tony also reminded me about another of Joshua’s messages, which was to ‘sell’ agile by concentrating on the shortfalls and pain of waterfall.

Awesome Acceptance Testing

Dan North and Joe Walnes

Automated acceptance testing has always been the area of our project where we have the least certainty that we’re going the right way. This talk helped make some of the abstract concepts a lot more concrete, specifically the ‘five aspects’ of acceptance testing:

  • Automation
    • This is stuff like HttpUnit and Selenium, which is the layer that automates interaction with the application’s interface (whatever that might be).
  • Vocabulary
    • Has similarities to business-natural or domain-specific languages.
    • Will tend to grow organically, as common repetitive tasks (eg ‘login’) are factored out of test scripts.
      • Three levels (sketched below):
      • Solution (eg ‘select option’, ‘click button’).
      • Domain (eg ‘transfer funds’, ‘provide service’) – may also include non-functionals like performance.
      • Test Framework (eg ‘assert’, ‘start simulator’).
  • Syntax
    • Not necessarily text – may be diagrams, mathematical formulae or whatever.
  • Intent
    • This is the difficult one :-)
    • The tests need to be clear enough that a customer understands the behaviour that’s being tested, and a developer understands the interaction with the system.
  • Harness
    • The wrapper that runs the test suite and reports the results.

These aspects can be separated to some extent; for example you might have vocabulary, syntax and intent even if your tests aren’t automated.
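
To make the vocabulary levels a bit more concrete, here’s a rough sketch of a domain-level verb built out of solution-level steps. All the names here (including the Browser stand-in) are mine for illustration, not anything shown in the talk:

// Illustrative only: a domain-level verb ('transfer funds') expressed in
// terms of solution-level steps ('select option', 'click button').
// 'Browser' is a stand-in for whatever automation layer (HttpUnit,
// Selenium etc) drives the real interface.
interface Browser {
  void selectOption(String field, String value);
  void enterText(String field, String value);
  void clickButton(String label);
}

public class BankingVocabulary {

  private final Browser browser;

  public BankingVocabulary(Browser browser) {
    this.browser = browser;
  }

  // Domain level
  public void transferFunds(String fromAccount, String toAccount, String amount) {
    // Solution level
    browser.selectOption("from account", fromAccount);
    browser.selectOption("to account", toAccount);
    browser.enterText("amount", amount);
    browser.clickButton("Transfer");
  }
}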

Concentrating on intent rather than specifics allows you to write meaningful tests up-front, before you know exactly how the implementation will look. For example, you might start with a test like this trivial example:

import junit.framework.TestCase;

public class CoolNewFeatureTest extends TestCase {

  public void testMyCoolNewFeature() {
    doSomeStuff();
    enableCoolNewFeature();
    checkThatCustomerCanFlirbleTheWidget();
  }

  private void doSomeStuff() {
    // whatever context the feature needs; deliberately vague at this stage
  }

  private void enableCoolNewFeature() {
    fail("Cool new feature does not exist yet");
  }

  private void checkThatCustomerCanFlirbleTheWidget() {
    fail("Widget has not yet been implemented");
  }
}

The main test method describes what’s being tested, but not how. As the feature is designed and implemented, the internal methods can be fleshed out to interact with whatever UI elements end up being present.

Dan explained some of the BDD thinking behind jBehave, which is to split each test into three parts: given some preconditions, when an event (in the business domain) occurs then we expect something to happen. You can read more on Dan’s site.
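
To show the shape of that split, here’s my own sketch in plain JUnit; the helper names are made up and this isn’t jBehave’s actual API:

import junit.framework.TestCase;

// The shape of a given/when/then test; the helper names are invented for
// illustration and this isn't jBehave's API.
public class FundsTransferBehaviourTest extends TestCase {

  public void testTransferMovesMoneyBetweenAccounts() {
    givenAnAccountWithBalance("savings", 10000);
    givenAnAccountWithBalance("current", 0);

    whenFundsAreTransferred("savings", "current", 2500);

    thenAccountBalanceShouldBe("savings", 7500);
    thenAccountBalanceShouldBe("current", 2500);
  }

  private void givenAnAccountWithBalance(String account, int pence) {
    // set up the context directly, eg by creating and persisting domain
    // objects, rather than driving the GUI
  }

  private void whenFundsAreTransferred(String from, String to, int pence) {
    // fire the event through the application's interface, with a sanity
    // check that we really are on the page we think we are
  }

  private void thenAccountBalanceShouldBe(String account, int pence) {
    fail("not implemented yet");
  }
}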

Word for the day

Monkification (abst. n): The act of converting a specification of behaviour into a form in which it can be tested by monkeys (or alternatively by automated test frameworks).

Various tips and observations that I noted down:
  • Create the context (‘given’) outside the GUI if possible (eg by creating and persisting domain objects directly), as you’re not testing that bit. It will probably be quicker, and you’ll insulate tests from unrelated GUI changes.
  • It’s worth adding sanity checks when firing the event (‘when’), to make it obvious if the test fails because (say) the login failed or a menu structure changed and you aren’t on the page you thought you were. This saves head-scratching when trying to work out why assertions in the ‘then’ phase have failed.
  • A big advantage of basing your acceptance tests on something like JUnit (as opposed to script-based or graphical tools like Exactor or Selenium Core) is that you automatically get IDE support for command completion, refactoring and so on.
  • Dan uses a system like ours to separate tests for work in progress from those for delivered functionality, ie just having separate directory trees. Joe’s approach is interesting: he started off annotating new tests with may_fail and using a custom test suite to stop them breaking the build, but found that the annotations didn’t get removed and regression faults were missed. He now marks them as must_fail, and if the annotation isn’t removed and the tests pass, the overall build fails (see the sketch after this list).
  • When extending an existing feature, or writing a test for a special case, it’s often a good idea to rename the original test to make the original assumptions explicit. For example you might add testLoginFailsIfPasswordNotChangedForAMonth, and rename testLoginSucceeds to testLoginSucceedsWhenPasswordChangedWithinAMonth (not a great example).
  • If you find there are more than half a dozen separate acceptance test scenarios for a user story, it’s probably telling you that you ought to split the story up.
  • Not all acceptance tests need to go through the ‘real’ UI. Sometimes it makes sense to have a separate thin interface specifically for testing business logic, with both interfaces connected to the service layer by an event model.
  • The law of diminishing returns applies. There will always be cases where the effort of creating and maintaining automated tests doesn’t cost in, and regular scheduled manual tests are a better option. You should, however, still specify the acceptance criteria, ideally in the same given-when-then form.
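
For what it’s worth, here’s roughly how I imagine the must_fail scheme could be wired up. This is my own sketch of the technique, not Joe’s actual code; the MustFail annotation is invented for the example and it assumes plain JUnit 3 plus Java 5 annotations:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import junit.framework.AssertionFailedError;
import junit.framework.TestCase;

// Sketch only: tests for unfinished features are annotated @MustFail, and
// the build breaks if they start passing while still carrying the annotation.
public abstract class MustFailAwareTestCase extends TestCase {

  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  public @interface MustFail {}

  public void runBare() throws Throwable {
    boolean mustFail = getClass().getMethod(getName()).isAnnotationPresent(MustFail.class);
    if (!mustFail) {
      super.runBare();
      return;
    }
    try {
      super.runBare();
    } catch (Throwable expected) {
      return; // still failing, which is what we expect for work in progress
    }
    throw new AssertionFailedError(getName()
        + " now passes but is still marked @MustFail - remove the annotation");
  }
}

A work-in-progress acceptance test would then extend this base class and carry @MustFail until the feature is actually delivered.
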
Choosing a syntax
  • Expressing tests in code
    • Better tool support, and they will be familiar to developers.
    • Choose the language the team is most comfortable in (probably the same one used for the application itself), rather than trying to use the ‘best’ one for expressing tests, but which people don’t know well.
    • Textual descriptions of the tests and their steps can be generated from the code, or from loggers while the tests run, to explain to non-technical people what’s going on (see the sketch after this list).
    • Although customers are unlikely to be able to write tests in code, they can pair with a developer to do so.
  • FIT etc
    • Uses HTML, so editable by anyone, even with MS Word.
    • Descriptive text, diagrams etc can be interspersed with the executable tests.
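
On the ‘generated descriptions’ point above, the cheapest version I can think of (a sketch of the general idea only, nothing that was shown in the session) is to route each step through a helper that narrates what it’s doing as it runs:

// Sketch only: narrate each step as it runs, so the console output of a
// test written in code reads as plain English for non-technical people.
public class Narrator {

  public void step(String description, Runnable action) {
    System.out.println("STEP: " + description);
    action.run();
  }

  // Usage from a test:
  //   narrator.step("log in as the administrator", new Runnable() {
  //     public void run() { /* drive the UI here */ }
  //   });
}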

Are We Nearly There Yet?

Ivan Moore

For tracking, planning and estimation to work, you need ‘done’ to really mean ‘done’ (this was a theme of the previous session too).

Ivan’s suggestions and observations (the ones I noted down, anyway):

  • Separate release-level story estimates from the finer-grained estimates used in iterations. If you try and merge the iteration estimates back into the release plan, your completion date/scope estimates will be unreliable, unless you re-estimate everything, and even that might not help.
  • Velocity is a measure of the pessimism of your estimation.
  • A high ‘load factor’ (actual/estimated ratio) leads to poor morale, because you feel like you’re taking too long to do everything.
  • You need a reasonable number of stories in an iteration to keep velocity reasonably constant.
  • Directly tracking actual effort per story is often misleading and rarely useful.
  • If you put coloured stickers on your cards to indicate the status, don’t remove the old ones. This flags up systemic problems such as stories bouncing between finished and in progress because faults are found by QA instead of developers, and is something we’ll be trying.
  • It’s sometimes worth noting problems (eg build server down), and possibly which pairs are doing what, on the burndown/up chart.
  • We’re thinking of abandoning our current ‘detailed burndown’ graph, where we track estimated remaining time on in-progress stories, but a good point was made that if nothing else it helps developers improve their estimation skills.
  • If you have issue lists etc in a tool, consider setting your CI server up to update them automatically based on keywords in commit comments (a sketch of the idea follows this list).
  • Everyone agreed that simple tools (index cards, whiteboards), backed up by simple spreadsheets or wiki pages if necessitated by geography, are by far the best. Other teams had found the same thing as us when using XPlanner, namely that it ends up imposing its own way of working on you. If you need specific tracking tools, it’s better to write them yourself, so that they match your needs and processes exactly.
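
As a trivial illustration of the commit-comment idea, the scanning side only needs something like the fragment below. This is purely my own sketch; the ‘closes #123’ convention and the class name are assumptions, not anything from the session:

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch only: pull "closes #123"-style keywords out of a commit comment so
// a CI job can update or close the matching issues automatically.
public class CommitCommentScanner {

  private static final Pattern CLOSES =
      Pattern.compile("closes #(\\d+)", Pattern.CASE_INSENSITIVE);

  public List<String> findClosedIssues(String commitComment) {
    List<String> issueIds = new ArrayList<String>();
    Matcher matcher = CLOSES.matcher(commitComment);
    while (matcher.find()) {
      issueIds.add(matcher.group(1));
    }
    return issueIds;
  }
}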

Something that became obvious from the discussion in the room was that all projects are different, and you don’t want to try to impose some kind of one-size-fits-all system for tracking all agile projects in an organisation.

Why Is Simple So Difficult?

Facilitated by Nat Pryce and Jonathan Clarke

A very interesting discussion, with no particular conclusions other than yes, simple is complicated! No-one could even really define exactly what we mean by ‘simple’ as a virtue (as opposed to ‘easy’, ‘simplistic’ etc). Some suggested that it would be easier to describe the things that aren’t simple (duplication of code, pointless use of design patterns, heavyweight processes etc).

Aside from the discussion itself, one thing that stood out was how good the goldfish bowl format is – the best features of a panel discussion, a Q & A session and a pub chat.

See also Simon Baker’s notes.

Keynote: Love in the Age of Software

James Noble and Robert Biddle

Very, very strange. Simon Baker has posted a good summary on Agile in Action (sounds like he might still have been recovering from the beer that Google generously paid for on the Monday too!).

Managing Uncertainty & Risk Using Real Options

Chris Matts

  • Treat decisions a bit like share options. You can exercise them at maturity if it’s beneficial, or you can decline.
  • Just as you wouldn’t commit to exercising an option in advance, don’t make decisions until you need to. Deciding early reduces the solution space, not the problem space.
  • Sometimes you can do something now to give yourself more options later.
  • Identifying the correct point in the future when a decision should be made helps to reassure people that you aren’t just putting the choice off because you can’t decide.

Literate Testing

Robert Chatley and Tamara Petroff

Basically a brief introduction to the LiFT test framework (which I hadn’t seen before). For those to whom the concept of literate testing is new, the Google Talk video Beyond Test-Driven Development by Dave Astels is also well worth watching (although it covers rspec, the concepts are similar).

Are Your Tests Really Driving Your Development?

Nat Pryce and Steve Freeman

A very good exercise, which made people realise that their tests don’t necessarily describe the required behaviour as well as they think. I won’t elaborate here, because it would lose some of its impact if you knew how it worked in advance.

Some things participants realised and some that Nat and Steve suggested:

  • Use constants in test cases, not magic numbers. I’d previously decided that magic numbers are OK in test cases, but they often don’t do a great job of expressing intent when read out of context.
  • Calculate expected values from your test inputs as part of the test where possible, eg assertEquals(2+2, Calc.add(2, 2)); rather than assertEquals(4, Calc.add(2, 2)); – in a less trivial example, how can you be sure that the behaviour specified in the second version isn’t a * b?
  • Consider wrapping primitives or built-in classes to clarify what they represent, eg use a TelephoneNumber instead of a String (see the sketch after this list).
  • I thought my test method names were reasonably good (much better than just having a doSomething() method and a testDoSomething() test case, anyway!), but they were still describing a specific test case, rather than a piece of behaviour.
  • Descriptive method and variable names are great, but a few well-chosen comments add a lot too!
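
As a sketch of the wrapping suggestion (my code, not anything from the exercise), a minimal TelephoneNumber type might look like this; as well as expressing intent, it gets a natural place to validate the value:

// Sketch of the "wrap primitives" idea: a TelephoneNumber type makes intent
// clearer than passing bare Strings around, and can validate itself.
public final class TelephoneNumber {

  private final String digits;

  public TelephoneNumber(String digits) {
    if (!digits.matches("\\+?[0-9 ]+")) {
      throw new IllegalArgumentException("Not a telephone number: " + digits);
    }
    this.digits = digits;
  }

  public boolean equals(Object other) {
    return other instanceof TelephoneNumber
        && digits.equals(((TelephoneNumber) other).digits);
  }

  public int hashCode() {
    return digits.hashCode();
  }

  public String toString() {
    return digits;
  }
}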

General

It struck me how important the shared vocabulary of agile is at a conference like this. Although everyone there was doing different things with different tools and following different processes, we all knew what each other meant by things like iteration, TDD, retrospective, refactor, backlog and so on. Coincidentally when I got back I found this article in my RSS list, making a similar point. Of course the caveat here is that in BT we make a specialism out of using words differently to the real world – if we’re going to keep learning from others, we need to make sure that we use the terms properly.

[tags]xpday, agile[/tags]
