Borel and Borel: Dice Edition are educational games about probability. I picked up a copy of each because I thought they would be useful in introducing some ideas of probability and gambling without the cultural baggage of better known games.

I'm biased because it's my field, but Borel has a lot more play value than most games of its kind. The dice edition, which is much easier to find and easier to get into and play, has a set of 7 dice (four 6-sided, and one each of a 10-sided, 20-sided, and 30-sided die) and a deck of 100 "experiments", like Experiment 001:

*This post does not contain a paid affiliate link, nor was I paid to write it. However, I did receive both Borel and Borel: Dice Edition from the games' creators for only the price of shipping.*


**Experiment 001: Roll d6, d6, d6, d6, d10, and d30. Will you roll 5 or more unique numbers?**

Each player bets 1 point on whether the stated outcome will happen or not, so the winner of a game (of 10-20 experiments) is whoever guesses the outcome correctly most often.

The experiments are designed so that outcomes are close to 50-50, but not quite. If you have a sense of probability, you can sometimes work out the more likely outcome by intuition. However, the experiments are not so obvious that you can scratch out the probability by hand in a minute.

For example, in Experiment 001, finding the distribution of the number of uniques for identical dice isn't hard.

(e.g. for four d6, Pr(all four results unique) = 6/6 * 5/6 * 4/6 * 3/6 = 5/18)
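To sanity-check that identical-dice arithmetic, we can enumerate all 6^4 = 1,296 equally likely outcomes of four d6 in a few lines of base R (a quick sketch using expand.grid(), separate from the functions built later in this post):

```r
# All 1,296 equally likely outcomes of rolling four 6-sided dice
rolls = expand.grid(1:6, 1:6, 1:6, 1:6)

# Count the unique values in each outcome
uniques = apply(rolls, 1, function(r) length(unique(r)))

# Full distribution of the number of uniques
table(uniques) / nrow(rolls)

# Pr(all four results distinct) = 6/6 * 5/6 * 4/6 * 3/6 = 5/18
mean(uniques == 4)
```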

But once you introduce two additional dice with different numbers of sides, the computation gets too complicated for a casual tabletop game. That's the point, though: if you enjoy math puzzles, or reading Chance magazine, or anything by Martin Gardner, then the experiments are deep enough to have nerd gravity. If you just want to play a short casual game with dice and more variety than Yahtzee, then you still have a shot at winning without thinking too hard.

You're not supposed to prepare; it sort of ruins the game. But you can't have "Ruin" without "R".

First, let's make a function to roll dice. Not a perfectly efficient one, but a readable, informative one.

random.dice = function(dice_used = c(6,6,6,6,10,20,30), Nrolls = 100000, seed = NA)
{
  if(!is.na(seed)){set.seed(seed)}
  Ndice = length(dice_used)
  rollmat = matrix(NA, nrow = Nrolls, ncol = Ndice)
  for(k in 1:Ndice)
  {
    rollmat[,k] = sample(1:dice_used[k], size = Nrolls, replace = TRUE)
  }
  return(rollmat)
}

# Test: Outputs a 10 x 7 matrix with 10 random rolls of 4d6, d10, d20, d30
# random.dice(Nrolls = 10)

And because we can, let's also make a function to generate one of every possible roll.

exhaustive.dice = function(dice_used = c(6,6,6,6,10,20,30))
{
  Ndice = length(dice_used)
  Nrolls = prod(dice_used)
  rollmat = matrix(NA, nrow = Nrolls, ncol = Ndice)
  for(k in 1:Ndice)
  {
    # How many times to cycle through this die's faces, and how long to hold each face
    Ntimes = prod(dice_used[1:k]) / dice_used[k]
    Neach = prod(dice_used[k:Ndice]) / dice_used[k]
    rollmat[,k] = rep(1:dice_used[k], times = Ntimes, each = Neach)
  }
  return(rollmat)
}

# Test: Outputs a 24 x 3 matrix with one of each possible combination of d2, d3, d4
# exhaustive.dice(dice_used = c(2,3,4))

The exhaustive.dice() function can be used in cases that are simple enough to enumerate every possibility, but the memory required grows with the product of the dice's side counts, so it won't be available when an experiment calls for many rolls. [Foot 1] We can use random.dice() to get something close to the true probability, and can sometimes use exhaustive.dice() to check our work. [Foot 2]

Finally, the seed argument in random.dice() is optional. If you don't supply one, you'll get a new random set of numbers each time. If you do choose a seed, you can reproduce the same numbers every time. To make it easier to check your work, these notes will use an easy-to-remember seed: my luggage password.
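As a quick illustration of how seeding works (shown here with sample() directly, which is all random.dice() uses under the hood):

```r
# Same seed, same rolls
set.seed(12345)
a = sample(1:6, size = 5, replace = TRUE)

set.seed(12345)
b = sample(1:6, size = 5, replace = TRUE)

identical(a, b)  # TRUE
```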

A lot of these experiments can be solved the same way: generate some rolls, and apply a function row-by-row to check if each row would be a 'success' or not. What changes each time are the generator inputs and the function being applied.
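That recipe could even be wrapped into a single helper. The sketch below is my own invention, not part of the post's toolkit — the name simulate.experiment() and the success.fn argument are assumptions — but it shows the generate-then-apply pattern in one place:

```r
# Hypothetical wrapper: estimate Pr(success) for any dice experiment.
# success.fn takes one row of rolls and returns TRUE or FALSE.
simulate.experiment = function(dice_used, success.fn, Nrolls = 100000, seed = NA)
{
  if(!is.na(seed)){set.seed(seed)}
  # One column of Nrolls rolls per die
  rollmat = sapply(dice_used, function(sides) sample(1:sides, size = Nrolls, replace = TRUE))
  # Proportion of rows where the success condition held
  mean(apply(rollmat, 1, success.fn))
}

# Experiment 001 restated: 5 or more uniques from 4d6, d10, d30
p = simulate.experiment(c(6,6,6,6,10,30),
                        function(roll) length(unique(roll)) >= 5,
                        seed = 12345)
```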

In this first experiment, we want to count the unique numbers in each simulated roll of the experiment.

count.uniques = function(roll)
{
  return(length(unique(roll)))
}

Nrolls = 100000
rollmat = random.dice(dice_used = c(6,6,6,6,10,30), Nrolls = Nrolls, seed = 12345)
Nuniques = apply(rollmat, 1, count.uniques)

## The distribution
table(Nuniques) / nrow(rollmat)
# 1       2       3       4       5       6
# 0.00001 0.00285 0.06233 0.33397 0.46277 0.13807

## Pr(more than 4 uniques)
length(which(Nuniques > 4)) / nrow(rollmat)
# 0.60084

Here's the histogram of the simulated number of uniques:

hist(Nuniques, col="#4499BB", n=7, main="Experiment 001 - 100k simulations")
abline(v=4, col="Red", lwd=5)

For Experiment 001, there's about a 60.0% chance of success. How close are we to the true probability with 100,000 rolls of the experiment's 6 dice?

rollmat = exhaustive.dice(dice_used = c(6,6,6,6,10,30)) # 388,800 possibilities
Nuniques = apply(rollmat, 1, count.uniques)

table(Nuniques) / nrow(rollmat)
# 1        2        3        4        5        6
# 0.000015 0.002886 0.063765 0.333333 0.461111 0.138889

length(which(Nuniques > 4)) / nrow(rollmat)
# 0.6

We were within 0.1% of the true value. Not bad! Let's tackle experiments 002 through 007 with these tools.

**Experiment 002: Roll d6, d6, d6, d6, d10, d20, and d30. Will the sum of the results be 45 or more?**

Intuitively, what should we expect?

The expected score of an N-sided die roll is (1 + 2 + ... + N) / N = (N + 1) / 2

We have 4 6-sided dice (Expected score = 3.5 * 4 = 14)

We have 1 10-sided die (Expected score = 5.5)

We have 1 20-sided die (Expected score = 10.5)

We have 1 30-sided die (Expected score = 15.5)

Total E[score] = 45.5

So the average score is 45.5, which is more than 45. Each die roll has a symmetric distribution, so the sum is symmetric too [Foot 3], centered at 45.5. Since the sum is always a whole number, Pr(sum ≥ 46) = Pr(sum ≤ 45) = 50% exactly, so logically more than 50% of the probability is at 45 or more.
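The arithmetic above is easy to confirm in R, since mean(1:N) is exactly (N + 1)/2:

```r
# Expected sum of 4d6 + d10 + d20 + d30
expected_sum = mean(1:6) * 4 + mean(1:10) + mean(1:20) + mean(1:30)
expected_sum  # 45.5
```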

Is our intuition right?

Nrolls = 100000
rollmat = random.dice(dice_used = c(6,6,6,6,10,20,30), Nrolls = Nrolls, seed = 12345)
sums = apply(rollmat, 1, sum)

# Plot out the distribution of the sum
hist(sums, col="#4499BB", n=50, main="Experiment 002 - 100k simulations")
abline(v=45, col="Red", lwd=5)

If that histogram shape looks familiar, that's because it's close to a normal curve, which we would expect from a sum of independent values. These aren't *identically distributed* independent values, so it's not exactly normal, but at a casual glance it's pretty close.

length(which(sums >= 45)) / nrow(rollmat)

# 0.52988

So the random rolls showed a success 52.99% of the time. The exact answer will take a minute or two to compute, but it's doable.

rollmat = exhaustive.dice(dice_used = c(6,6,6,6,10,20,30)) # 7,776,000 possibilities
sums = apply(rollmat, 1, sum)

length(which(sums >= 45)) / nrow(rollmat)
# 0.5323611

**Experiment 003: Roll d30, d30, and d30. Will the highest result be greater than the lowest result by 16 or more?**

Before we jump into the code, what are we expecting here? Consider the standard uniform distribution for a bit. That's the continuous distribution where every value from 0 to 1 is equally likely.

The expected value from a uniform is 1/2. [Foot 4]

If we take two values from a uniform, the expected value of the higher value is 2/3, and the lower value is 1/3. [Foot 5]

If we take three values from a uniform, the expected value of the highest one is 3/4, the expected value of the lowest one is 1/4, and the expected value of the middle one is 1/2.
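Those 1/4, 1/2, 3/4 values are easy to spot-check by simulation (a quick side sketch, not part of the experiment's analysis):

```r
# Draw 100,000 triples from the standard uniform
set.seed(12345)
u = matrix(runif(3 * 100000), ncol = 3)

# Average of the highest and lowest value in each triple
E_max_sim = mean(apply(u, 1, max))  # close to 3/4
E_min_sim = mean(apply(u, 1, min))  # close to 1/4
```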

Applying this principle to the values of a single die roll, which follow a discrete uniform distribution, we might assume that after rolling three 30-sided dice the lowest value would be near 7.5, and the highest value would be near 22.5, making a difference near 15.

There are a few issues, though. First, the expected values will be a little higher because the range is not 0-30; it's 1-30. Next, the distribution of a roll isn't continuous, it's discrete, so all of the probability is on a finite number of values, including the two extreme values of 1 and 30.

We might guess that the success probability is below 50%, but with these complications it's not immediately clear which side of 50% the probability actually lands on.
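We can at least pin down the expected max and min exactly for three d30, using Pr(max = m) = (m^3 - (m-1)^3)/30^3 and the mirror-image formula for the min (a side calculation of my own, not from the game's materials):

```r
m = 1:30

# Pr(max of three d30 equals m), and the mirror image for the min
p_max = (m^3 - (m - 1)^3) / 30^3
p_min = ((31 - m)^3 - (30 - m)^3) / 30^3

E_max = sum(m * p_max)  # about 22.99
E_min = sum(m * p_min)  # about 8.01
E_max - E_min           # about 14.98, just under the continuous guess of 15
```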

Let's find out. First by simulation, then exhaustively.

rollmat = random.dice(dice_used = c(30,30,30), Nrolls = Nrolls, seed = 12345)
maxs = apply(rollmat, 1, max)
mins = apply(rollmat, 1, min)

length(which((maxs - mins) >= 16)) / nrow(rollmat)
# 0.47381

# Here's the histogram
hist((maxs - mins), col="#4499BB", n=30, main="Experiment 003 - 100k simulations")
abline(v=16, col="Red", lwd=5)

rollmat = exhaustive.dice(dice_used = c(30,30,30))
maxs = apply(rollmat, 1, max)
mins = apply(rollmat, 1, min)

length(which((maxs - mins) >= 16)) / nrow(rollmat)
# 0.4744444

47.44% is farther below 50% than I would have thought.

But 'tis the season to take a break from experiments.

Don't eat that Santa all in one go.

Borel: Dice Edition has 100 cards. The original Borel has more randomization elements, like coins and marbles in bags, as well as more complex betting elements like redos and varied betting amounts, but the Dice Edition is much more approachable. The experiment cards can easily be modified or converted into assignment questions for a statistics course, and you'll find more experiment breakdowns like this in Statistics and Gambling when it's done.

[Foot 1] Just using the seven base dice once each takes a fair bit of memory: (7.78 million rows x 7 columns x ~4 bytes) = ~200 MB.

[Foot 2] The binomial approximation is also an option to see how far off the mark random.dice() is likely to be, and we can always run random.dice() repeatedly to get higher accuracy.
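To put a number on that binomial approximation: the standard error of a simulated proportion is sqrt(p(1-p)/N), so with p near 0.6 and 100,000 rolls:

```r
p = 0.6
Nrolls = 100000
se = sqrt(p * (1 - p) / Nrolls)
se  # about 0.0015, in line with the 0.60084 vs 0.6 gap seen in Experiment 001
```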

[Foot 3] This (sum of symmetric = symmetric) property applies to sums and means, but it usually doesn't work for other functions like products.

[Foot 4] Actually, the distribution from a single die roll is the discrete version of the uniform. If it helps, you can think of the continuous uniform as coming from a million-sided die. Using what we learned in Experiment 002, the expected roll from a d1,000,000 would be 500000.5, or almost exactly half a million. So seeing an expected value of 1/2 for a uniform shouldn't be too surprising.

[Foot 5] This isn't saying that each of the two values we took from a uniform is different somehow; just that after we sort them, one value is going to be larger than the other. Knowing that a value is the larger of the two gives us enough information to move its expected value up from 1/2 to 2/3.
