# News

*10 Feb 2016*

## Markov Chains and our Games

So you are someone like me who likes maths, right? And you like games too? Surely there is a job out there where you can combine the two. Well, the search proved surprisingly difficult in my experience until I found a company called Geonomics. Here we solve many mathematical problems to make sure the players in our games win a prize just the right number of times.

One such prize in our Treasure Hunt game is a collection set. To win this prize you must find each of several different pieces to complete the set. In our game this could be parts of a jigsaw, a robot or even a singing frog. (How you could find a part of a singing frog I really don’t know. You’d have to delve into the mind of one of our designers – a risky venture.)

Let’s take a particular collection set, say the robot. Here you have to find the head, body, arms and legs in order to complete the set and win the prize. (What use is a robot without any legs anyway?) We’ve hidden the pieces to the robot somewhere in our map, which we’ve divided up into a grid. You are able to “dig” (i.e. search) in each of the resulting squares. Now a part of the robot is equally likely to be in any of the squares and you are equally likely to find any particular piece. (In other words, there are just as many heads as bodies, on average, out there, otherwise what sort of crazy world would we be living in?) The question that is important to us is: how many searches will it take, on average, for you to complete the set?

To answer this question we have to take a trip into the world of Markov Chains. So naturally the first question is: what is a Markov Chain? (And for all the pedants out there, we’re talking about a discrete-time, first-order Markov Chain – in my experience it’s always worth keeping the pedants happy! 🙂)

In simple terms, a Markov Chain is a random process in a state space where the transitions between different states are based purely on the current state, ignoring any previous transitions. This is perhaps best explained with an example.

Imagine a village idiot (it’s a Maths problem about probability so there has to be a village idiot!) who spends his life getting on planes. (He’s a surprisingly rich village idiot!). He arrives at an airport on a plane and simply chooses another to leave on at random. Now, in our already highly implausible scenario, we only have three airports in the world, one in the UK, one in France and one in Spain. And the situation gets even more implausible when we look at the planes available from each airport: from the UK there is only one plane available (no matter when, or from where, he arrives) and it always heads to France. From France there are always two planes available and they both head to Spain. Lastly from Spain there are always two planes available, one going to France and one going to the UK.

We can model this situation as a Markov chain as we only care about what airport the village idiot is currently at when working out the probabilities for his next airport. So if we look at when he is in the UK, there is only one plane so we know with certainty that his next destination will be France. For France, even though there are two planes, they both go to Spain so we know with certainty that his next destination will be Spain. Lastly if we look at Spain then there are two planes with two different destinations, but since he chooses a plane randomly we can see that there is half a chance his destination is the UK and half a chance it is France. We have now constructed a Markov Chain model for the village idiot’s travels.

We can now answer questions such as: if the village idiot starts in the UK, where is he likely to be in three trips’ time? To answer this we look at our model and see that after one journey he will certainly be in France, since this is the only option from the UK; after two journeys he will certainly be in Spain, since that is the only option from France; and after three journeys there is half a chance he will be in the UK and half a chance he will be in France.
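That three-trip calculation is easy to check mechanically. Here is a minimal sketch in Python (the state names and the dictionary layout are my own choices for illustration):

```python
# Transition probabilities for the village idiot's travels.
# P[current][next] = probability of flying from `current` to `next`.
P = {
    "UK":     {"UK": 0.0, "France": 1.0, "Spain": 0.0},
    "France": {"UK": 0.0, "France": 0.0, "Spain": 1.0},
    "Spain":  {"UK": 0.5, "France": 0.5, "Spain": 0.0},
}

def step(dist):
    """Advance a probability distribution over airports by one flight."""
    out = {s: 0.0 for s in P}
    for s, p in dist.items():
        for t, q in P[s].items():
            out[t] += p * q
    return out

# Start with certainty in the UK, then take three flights.
dist = {"UK": 1.0, "France": 0.0, "Spain": 0.0}
for _ in range(3):
    dist = step(dist)

print(dist)  # {'UK': 0.5, 'France': 0.5, 'Spain': 0.0}
```

Note that the next distribution depends only on the current one – the Markov property in action.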

We can also use it to answer a different class of questions, such as: what is the average proportion of time he spends in each country? I won’t go through the maths for that here, but it is the basis for Google’s PageRank algorithm: there you model each page as a state (the airports in our previous example) and the links between pages as your transitions (the planes).
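For the curious, those long-run proportions can be approximated by simply iterating the chain for many steps – a sketch of the power-iteration idea behind PageRank, not Google’s actual algorithm, using the airport chain from above:

```python
# Same chain as the village idiot example, as a matrix over (UK, France, Spain).
P = [
    [0.0, 1.0, 0.0],  # from UK: always to France
    [0.0, 0.0, 1.0],  # from France: always to Spain
    [0.5, 0.5, 0.0],  # from Spain: coin flip between UK and France
]

def step(dist, P):
    """One step of the chain: new_j = sum over i of dist_i * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]   # start in the UK
for _ in range(1000):    # iterate until the distribution settles
    dist = step(dist, P)

print([round(p, 3) for p in dist])  # [0.2, 0.4, 0.4]
```

So the village idiot spends, in the long run, 20% of his trips in the UK and 40% each in France and Spain.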

So how can we apply this to our collection set problem? Just to recap we wanted to work out how many squares a player would have to dig, on average, to find all the pieces. In our particular example there were four pieces. Let’s suppose that in each square there was a 1/10 chance of a particular piece being found and therefore a 6/10 chance that the square is empty.

To solve the problem we can create a Markov Chain where the state is the number of different pieces that you have found. The problem then becomes how long it takes a player to get from state 0 (they haven’t found anything) to state 4 (they’ve completed the set). The key point here is that we do not mind which pieces they have found, since they all have the same probability of being found, just the number of distinct pieces.

Now to look at the transition probabilities. When the player next searches, no matter the number of distinct pieces that they have already found, they can only either find a new piece or not, so in terms of our states you either remain the same or progress to one state higher. For instance, when the player doesn’t have any pieces they have a 4/10 chance of finding a new piece (the sum of the individual probabilities of finding each individual piece) and a 6/10 chance of staying at 0 pieces found. Once they have found their first piece (i.e. they are at state 1) they only have a 3/10 chance of finding a new piece and a 7/10 chance of staying where they are, since finding the piece they already have does not increase the number of distinct pieces they hold. Following this through for the other states gives the full Markov Chain: from k distinct pieces there is a (4 − k)/10 chance of moving to k + 1 pieces and a (6 + k)/10 chance of staying put.
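That pattern is simple enough to write down in a few lines. A small sketch (the 4 pieces and the 1/10 odds are just this example’s numbers, not anything fixed in the game):

```python
PIECES = 4        # head, body, arms, legs
P_PIECE = 1 / 10  # chance a dig turns up any one particular piece

def transition(k):
    """Return (P(stay at k), P(move to k+1)) with k distinct pieces found."""
    p_new = (PIECES - k) * P_PIECE  # any still-missing piece will do
    return (1 - p_new, p_new)

for k in range(PIECES):
    stay, advance = transition(k)
    print(f"state {k}: stay {stay:.1f}, find new piece {advance:.1f}")
```

Running this prints the 6/10–4/10 split for state 0, the 7/10–3/10 split for state 1, and so on up to state 3.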

Now we are in a position to calculate the number of searches it will take, on average, for a user to complete the set. The Markov Chain we have constructed allows us to break this down into a number of smaller calculations. Can you take a guess what they’re going to be?

First we’re going to look at the number of searches it will take, on average, for a player to complete the set once they have three distinct pieces. Looking at the transition probabilities above, we see that they have a 1/10 chance of completing the set as a result of this turn, otherwise they remain in the same state. If we let t₃ be the expected number of searches to complete the set from state 3, then we get the following formula (apologies for the algebra, we’d been doing so well in avoiding it up until this point!):

t₃ = 1 + 9/10 * t₃ + 1/10 * 0

Here the first “1” on the right is the search we make, and the rest of the formula deals with where we end up after that search. We have a 9/10 chance of remaining at state 3 and a 1/10 chance of being done and needing no further searches.

Rearranging gives:

1/10 * t₃ = 1

t₃ = 10

We can then use this result, working backwards, to solve the problem of how many searches it would take on average once you have two pieces. Here we get:

t₂ = 1 + 8/10 * t₂ + 2/10 * t₃

1/5 * t₂ = 1 + 2/10 * 10

t₂ = 15

And repeating this again for states 1 and 0 gives us:

t₁ = 18.333 and t₀ = 20.833

So now we’ve worked out how many searches it will take on average to complete our set: 20.833. By putting this into an Excel spreadsheet we are able to see how changing the odds and the number of pieces would change this value and, therefore, the player’s experience.

Now my problem to leave for you (surely you didn’t think I was going to do all the work, did you?) is to work out how many turns it would take on average to complete a set if we were in that crazy world where there were twice as many robot heads out there as bodies. More generally, what if all the pieces had different probabilities of being found – how would you approach that problem?

If you’ve enjoyed working through this, and would like a job where you get to solve these types of problems, then why not have a look at our jobs page.
