News

15 Dec 2015

The Learning Machine: Our Custom Version of Agile Explained

“Agile” is one of those horribly overused words in the IT industry that seems to mean different things to different people.  Not all of those things make a lot of sense to us.  (Our CTO once attended a tech seminar where a speaker asserted that “agile waterfalls are a red herring when you’re creating internal start-ups”. He speculates that this might be some sort of kōan, but the rest of us think it’s just a winning line in buzzword bingo.)

 At worst, views on “agile” seem to range from “Great! Agile means we don’t have to plan or document anything, and all deadlines have been abolished!” to “After many years of study, I am a Level 7 Methodology Alliance Certified Auditable AGILE Gangmaster Practitioner.”

 To us, “agile” means having enough process (and no more) to enable us to release meaningful, incremental product changes – then measure what difference those changes make and feed that back into deciding what to do next.  We call this approach the “Learning Machine”.

Statistically speaking, most new product ideas turn out to be unsuccessful. For every Coca-Cola Cherry, a New Coke; for every N64, a Power Glove. Moreover, it is notoriously hard – indeed, many would say impossible – to predict in advance which of a range of plausible-sounding product ideas is actually going to take off. This is why, for every Hollywood blockbuster, there are several Hollywood flops: a studio simply doesn’t know in advance which of its films is going to be a hit.

Fortunately, movies aside, the good news is that most successful products are actually the sum of a large number of good design decisions. (Think, for instance, about all the many different aspects of a MacBook Air that add up to making it more pleasing to use than a traditional laptop.) If you can find a way to generate a lot of individual improvements, then you have a chance to make a big difference to your product. And contrary to what’s often assumed, generating ideas for improvements is not the hard part. A good brainstorming session – and an openness to input from across the organisation – will generate a long list of candidates for “how to make the product better”. The trick is to do enough work on each idea to discover, or “learn”, which ones work in practice with real users, and to do that without burning any more time, budget, or energy than you need to on the ideas that turn out not to work.

The Learning Machine is a way for us to be systematic about testing our ideas, and disciplined about killing off bad ideas when they turn out not to work.

So how does it work?

Anyone and everyone in the company is allowed to – and indeed encouraged to – come up with ideas to improve our products.  If an idea is plausible, and there’s a quick and measurable way to test how well it works, then there’s a pretty good chance we’ll do the work to try it out.  It’s this combination – a plausible idea with a quick and inexpensive plan for how to test it out and a way that we’re going to measure success – that we call a hypothesis.
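For the code-minded, here’s what that three-part definition might look like if you wrote it down as a data structure. This is a rough sketch in Python purely for illustration – the field names and the example are hypothetical, not our actual tooling or backlog.

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        """A plausible idea, a quick plan to test it, and a way to measure success."""
        idea: str                 # the plausible product change
        test_plan: str            # the quick, inexpensive way to try it out
        metric: str               # what we'll measure to judge it
        success_threshold: float  # the lift in the metric we'd call a win

    # Hypothetical example - not a real entry from our backlog
    example = Hypothesis(
        idea="Give players a small bonus for their first dig of the day",
        test_plan="Ship a bare-bones version to live players for one week",
        metric="digs per player per day",
        success_threshold=0.10,   # we'd call a 10 percent lift a success
    )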

We expect most hypotheses to prove unsuccessful.  That doesn’t mean they weren’t worth testing.  It’s simply an acknowledgement that, from amongst a pool of sensible-sounding ideas, only a small proportion will prove to make a significant difference in practice, and it’s impossible to know in advance which ones they’ll be.  What matters is that we can test a lot of ideas in a short space of time, and minimise the amount of time and effort it takes to discover whether a hypothesis is successful.

Learning down the house

So what does it actually look like in action?

The Learning Machine runs in week-long cycles, and does not discriminate when it comes to hypotheses: any employee can submit a potential improvement.  In theory, this could be a problem: the security guard wants to pivot the game’s narrative to make it about a heroic, lantern-jawed security guard who always gets the girl, the sales director wants to shoot tablets into space in order to corner the intergalactic market, etc. etc. So we have a meeting at the beginning of every week where we talk to each product team, sift through all the ideas, and decide which experiments to run. We approve the top ones for a week of testing.

Recently, our Head of Security and Technical Compliance gave us the idea for the “Chain Reaction” part of our game, where digs are linked together across the map. We hypothesised that, if players liked this feature as much as we did, we might expect dig rates to increase by 10 percent or more. When we put it live, we found it actually doubled dig rates – a pattern that held up over a couple of weeks.
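If you like to see the sums, the check at the end of an experiment like this is nothing fancy. The sketch below – with made-up numbers, not our real dig rates – is roughly what “did we clear the 10 percent threshold?” boils down to; in practice we also keep an eye on sample sizes and, as above, on whether the effect holds up over more than one week.

    def uplift(baseline: float, observed: float) -> float:
        """Relative change in a metric, e.g. digs per player per day."""
        return (observed - baseline) / baseline

    # Made-up numbers, not our real dig rates
    baseline_rate = 1.2   # digs per player per day before the feature
    observed_rate = 2.4   # digs per player per day with the feature live

    lift = uplift(baseline_rate, observed_rate)
    print(f"uplift: {lift:.0%}")  # 100%, i.e. dig rates doubled
    print("win" if lift >= 0.10 else "back to the drawing board")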

Anyway, when we’ve decided what we’re going to do, we establish our priorities thus:

  • Is this idea going to pan out – and will it do so in a reasonable time-frame?
  • In the best-case scenario, will the idea make a measurable, meaningful difference?
  • Is this going to be a pain in the neck to test? Will it cost us all of the money and time, or only some of it?

Once we’ve established all of this, we have another brainstorm focused on testing. How big a sample are we going to need? Do we need to A/B test landing pages? How can we get this in front of as many actual flesh-and-blood players as possible without doing premature “scaling up” work?
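When an experiment does boil down to an A/B test – landing pages being the classic case – the question at the end is usually “is the difference between the two variants bigger than noise?”. Here’s a minimal sketch of one common way to answer it, a two-proportion z-test, with made-up visitor numbers; it’s an illustration of the idea rather than a description of the exact analysis we run.

    from math import sqrt, erfc

    def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """Two-sided p-value from a two-proportion z-test on conversion counts."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        return erfc(abs(z) / sqrt(2))  # two-sided p-value

    # Made-up numbers: 500 visitors sent to each landing-page variant
    p = two_proportion_p_value(conv_a=60, n_a=500, conv_b=85, n_b=500)
    print(f"p = {p:.3f}")  # a small p suggests the difference isn't just noise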

Once the experiment has run, we analyse the results and see how our predictions measured up to reality. If the idea’s godawful in the extreme, we banish it to the lake of fire, where it lingers for time eternal alongside flared trousers and Homeland Series 2. If it has potential, or might work in different circumstances, we tinker with it until it’s as close to perfect as we can get it. Of course, when an idea doesn’t work out, it’s important to understand why – if only so we don’t waste time repeating our mistakes. And when we’ve worked out what we’re doing, we systemise it. This can sometimes take a little longer than seven days with more complex improvements, but we’ve got the in-house expertise and work ethic to get it done in time in most cases.

With the Learning Machine, we’ve attempted to merge the best of Lean UX and agile to foster a culture of experimental iteration and continuous improvement. It’s not perfect, and we won’t pretend it works for everyone: I’m reasonably confident that at least some of you will read about the multiple 60-90 minute meetings and run away screaming.

Still, we’ve found that it’s allowed us to direct resources and energy to the work that matters most to the business. For example, there was a time when we thought leaderboards might make a significant difference to our game. The Geonomics of old would have spent weeks on the required concept, design, and implementation work. With the Learning Machine, it took us only a few days to discover that nobody at all cared about them – saving us a considerable amount of time and effort.

Learn, baby, learn

What are the results? Well, in 16 weeks we’ve run 68 different experiments across all relevant teams: 19 wins and 27 losses, with 22 still languishing in experimental limbo.

You can plainly see that the Learning Machine does not have an unimpeachable record of flawless victories – but then, flawless victories aren’t really the point. Nineteen wins from 68 experiments is a hit rate of roughly 28%; I’d have taken 10% if you’d asked me at the start, so I’m not complaining – and it’s important to remember that every week is an opportunity to tinker with the process as well as the software.

The point: agile’s great, and it works. But only if you adapt it to your company’s specific needs and business goals.

 
