Simulating data is a useful tool for better understanding statistical analyses and planning experimental designs. These notes illustrate how to simulate data using a variety of different functions in the R programming language, then discuss how data simulation can be used in research. These notes borrow heavily from a Stirling Coding Club session on randomisation, and to a lesser extent from a session on linear models. After working through these notes, the reader should be able to simulate their own data sets and use them to explore data visualisation and statistical analysis. These notes are also available as a PDF.



Introduction: Simulating data

The ability to generate simulated data is very useful in a lot of research contexts. Simulated data can be used to better understand statistical methods, or in some cases to actually run statistical analyses (e.g., simulating a null distribution against which to compare a sample). Here I want to demonstrate how to simulate data in R. This can be accomplished with base R functions including rnorm, runif, rbinom, rpois, or rgamma; all of these functions sample univariate data (i.e., one variable) from a specified distribution. The function sample can be used to sample elements from an R object with or without replacement. Using the MASS library, the mvrnorm function will sample multiple variables with a known correlation structure (i.e., we can tell R how variables should be correlated with one another) and normally distributed errors.

Below, I will first demonstrate how to use some common functions in R for simulating data. Then, I will illustrate how these simulated data might be used to better understand common statistical analyses and data visualisation.

Univariate random numbers

Below, I introduce some base R functions that simulate (pseudo)random numbers from a given distribution. Note that most of what follows in this section is a recreation of a similar section in the notes for randomisation analysis in R.

Sampling from a uniform distribution

The runif function returns some number (n) of random numbers from a uniform distribution with a range from \(a\) (min) to \(b\) (max) such that \(X\sim\mathcal U(a,b)\) (verbally, \(X\) is sampled from a uniform distribution with the parameters \(a\) and \(b\)), where \(-\infty < a < b < \infty\) (verbally, \(a\) is greater than negative infinity but less than \(b\), and \(b\) is finite). The default is to draw from a standard uniform distribution (i.e., \(a = 0\) and \(b = 1\)) as done below.

rand_unifs_10 <- runif(n = 10, min = 0, max = 1);

The above code stores a vector of ten numbers in rand_unifs_10, shown below. Note that the numbers will be different each time we re-run the runif function above.

##  [1] 0.1133932 0.5695949 0.4561517 0.9651846 0.4140484 0.7041649 0.8053480
##  [8] 0.6791450 0.6511194 0.8562102

We can visualise the standard uniform distribution that is generated by plotting a histogram of a very large number of values created using runif.

rand_unifs_10000 <- runif(n = 10000, min = 0, max = 1);
hist(rand_unifs_10000, xlab = "Random value (X)", col = "grey",
     main = "", cex.lab = 1.5, cex.axis = 1.5);

The random uniform distribution is special in that it underlies random number generation more broadly: the algorithm for generating random uniform numbers is the starting point for generating random numbers from other distributions, using methods such as rejection sampling, inverse transform sampling, or the Box–Muller method (Box and Muller 1958).
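As a small illustration of the inverse transform idea (this example is an addition to these notes, using an exponential target distribution with an arbitrary rate of 1.5), standard uniform numbers can be converted into exponential random numbers by applying the inverse of the exponential cumulative distribution function.

u_vals <- runif(n = 10000, min = 0, max = 1);
x_exp  <- -log(u_vals) / 1.5; # Equivalent to the exponential inverse CDF applied to uniform draws
hist(x_exp, xlab = "Random value (X)", col = "grey",
     main = "", cex.lab = 1.5, cex.axis = 1.5);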

Sampling from a normal distribution

The rnorm function returns some number (n) of randomly generated values given a set mean (\(\mu\); mean) and standard deviation (\(\sigma\); sd), such that \(X\sim\mathcal N(\mu,\sigma^2)\). The default is to draw from a standard normal (a.k.a., “Gaussian”) distribution (i.e., \(\mu = 0\) and \(\sigma = 1\)).

rand_norms_10 <- rnorm(n = 10, mean = 0, sd = 1);

The above code stores a vector of 10 numbers, shown below.

##  [1] -0.5305369 -0.1051010 -0.8951676  1.3608435  0.5930973  0.3344486
##  [7] -1.0341829 -1.0413559  1.5183984  0.1976365

We can verify that a standard normal distribution is generated by plotting a histogram of a very large number of values created using rnorm.

rand_norms_10000 <- rnorm(n = 10000, mean = 0, sd = 1);
hist(rand_norms_10000, xlab = "Random value (X)", col = "grey",
     main = "", cex.lab = 1.5, cex.axis = 1.5);

Generating a histogram using data from a simulated distribution like this is often a useful way to visualise distributions, or to see how samples from the same distribution might vary. For example, if we wanted to compare the above distribution with a normal distribution that had a standard deviation of 2 instead of 1, then we could simply sample 10000 new values in rnorm with sd = 2 instead of sd = 1 and create a new histogram with hist. If we wanted to see what the distribution of sampled data might look like given a low sample size (e.g., 10), then we could repeat the process of sampling from rnorm(n = 10, mean = 0, sd = 1) multiple times and looking at the shape of the resulting histogram.
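For example, a quick sketch of both comparisons just described might look like the following (the standard deviation of 2 and the sample size of 10 are simply the values suggested above).

rand_norms_sd2 <- rnorm(n = 10000, mean = 0, sd = 2); # Same mean, wider spread
hist(rand_norms_sd2, xlab = "Random value (X)", col = "grey",
     main = "", cex.lab = 1.5, cex.axis = 1.5);
rand_norms_n10 <- rnorm(n = 10, mean = 0, sd = 1);    # A single small sample
hist(rand_norms_n10, xlab = "Random value (X)", col = "grey",
     main = "", cex.lab = 1.5, cex.axis = 1.5);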

Sampling from a Poisson distribution

Many processes in biology can be described by a Poisson distribution. A Poisson process describes events happening with some given probability over an interval of time or space, such that \(X\sim Poisson(\lambda)\), where the rate parameter \(\lambda\) is both the mean and variance of the Poisson distribution (note that, by definition, \(\lambda > 0\), and although \(\lambda\) can be any positive real number, the sampled values are always non-negative integers, as with count data). Sampling from a Poisson distribution can be done in R with rpois, which takes only two arguments specifying the number of values to be returned (n) and the rate parameter (lambda).

rand_poissons <- rpois(n = 10, lambda = 1.5);
print(rand_poissons);
##  [1] 1 0 2 1 2 0 3 1 1 1

There are no default values for rpois. We can plot a histogram of a large number of values to see the distribution when \(\lambda = 4.5\) below.

rand_poissons_10000 <- rpois(n = 10000, lambda = 4.5);
hist(rand_poissons_10000, xlab = "Random value (X)", col = "grey",
     main = "", cex.lab = 1.5, cex.axis = 1.5);

Sampling from a binomial distribution

Sampling from a binomial distribution in R with rbinom is a bit more complex than using runif, rnorm, or rpois. Like those previous functions, the rbinom function returns some number (n) of random numbers, but the arguments and output can be slightly confusing at first. Recall that a binomial distribution describes the number of ‘successes’ for some number of independent trials (\(\Pr(success) = p\)). The rbinom function returns the number of successes after size trials, in which the probability of success in each trial is prob. For a concrete example, suppose we want to simulate the flipping of a fair coin 1000 times, and we want to know how many times that coin comes up heads (‘success’). We can do this with the following code.

coin_flips <- rbinom(n = 1, size = 1000, prob = 0.5);
print(coin_flips);
## [1] 527

The above result shows that the coin came up heads 527 times. Note, however, the (required) argument n above. This allows the user to set the number of sequences to run. In other words, if we set n = 2, then this could simulate the flipping of a fair coin 1000 times once to see how many times heads comes up, then repeating the whole process a second time to see how many times heads comes up again (or, if it is more intuitive, the flipping of two separate fair coins 1000 times).

coin_flips_2 <- rbinom(n = 2, size = 1000, prob = 0.5);
print(coin_flips_2);
## [1] 490 492

In the above, a fair coin was flipped 1000 times and returned 490 heads, and then another fair coin was flipped 1000 times and returned 492 heads. As with the rnorm and runif functions, we can check to see what the distribution of the binomial function looks like if we repeat this process. Suppose, in other words, that we want to see the distribution of the number of times heads comes up after 1000 flips. We can, for example, simulate the process of flipping 1000 times in a row with 10000 different coins using the code below.

coin_flips_10000 <- rbinom(n = 10000, size = 1000, prob = 0.5);

I have not printed the above coin_flips_10000 for obvious reasons, but we can use a histogram to look at the results.

hist(coin_flips_10000, xlab = "Random value (X)", col = "grey",
     main = "", cex.lab = 1.5, cex.axis = 1.5);

As would be expected, most of the time ‘heads’ occurs around 500 times out of 1000, but usually the actual number will be a bit lower or higher due to chance. Note that if we want to simulate the results of individual flips in a single trial, we can do so as follows.

flips_10 <- rbinom(n = 10, size = 1, prob = 0.5);
print(flips_10);
##  [1] 1 1 1 0 0 1 1 1 1 0

In the above, there are n = 10 trials, but each trial consists of only a single coin flip (size = 1). But we can equally well interpret the results as a series of n coin flips that come up either heads (1) or tails (0). This latter interpretation can be especially useful to write code that randomly decides whether some event will happen (1) or not (0) with some probability prob.
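For example, the short sketch below (an addition to these notes, with an arbitrary probability of 0.2) uses this interpretation to decide whether or not a single event occurs.

event_occurs <- rbinom(n = 1, size = 1, prob = 0.2); # 1 with probability 0.2, else 0
if(event_occurs == 1){
    print("The event has occurred");
}else{
    print("The event has not occurred");
}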

Random sampling using sample

Sometimes it is useful to sample a set of values from a vector or list. The R function sample is very flexible for sampling a subset of numbers or elements from some structure (x) in R according to some set probabilities (prob). Elements can be sampled from x some number of times (size) with or without replacement (replace), though an error will be returned if the requested sample size is larger than the number of elements in x while replace = FALSE (the default).

Sampling random numbers from a list

To start out simple, suppose we want to ask R to pick a random number from one to ten with equal probability.

rand_number_1 <- sample(x = 1:10, size = 1);
print(rand_number_1);
## [1] 7

The above code will set rand_number_1 to a randomly selected value, in this case 7. Because we have not specified a probability vector prob, the function assumes that every element in 1:10 is sampled with equal probability. We can increase the size of the sample to 10 below.

rand_number_10 <- sample(x = 1:10, size = 10);
print(rand_number_10);
##  [1] 10  7  4  5  2  9  3  1  8  6

Note that all numbers from 1 to 10 have been sampled, but in a random order. This is because the default is to sample without replacement, meaning that once a number has been sampled for the first element in rand_number_10, it is no longer available to be sampled again. To allow sampling with replacement, we can change the default.

rand_number_10_r <- sample(x = 1:10, size = 10, replace = TRUE);
print(rand_number_10_r);
##  [1] 9 7 5 8 6 2 1 9 3 8

Note that the numbers {8, 9} are now repeated in the set of randomly sampled values above. We can also specify the probability with which each element is sampled; these probability weights must be non-negative, and R rescales them so that they sum to 1. Below shows an example in which the numbers 1-5 are sampled with a probability of 0.05, while the numbers 6-10 are sampled with a probability of 0.15, thereby biasing sampling toward larger numbers.

prob_vec      <- c( rep(x = 0.05, times = 5), rep(x = 0.15, times = 5) );
rand_num_bias <- sample(x = 1:10, size = 10, replace = TRUE, prob = prob_vec);
print(rand_num_bias);
##  [1]  7  1  7  6  6  1 10  9  2  7

Note that rand_num_bias above contains more numbers from 6-10 than from 1-5.
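As a quick check of this bias (an addition to these notes), we can draw a much larger sample and look at the proportion of values greater than 5, which should be close to \(5 \times 0.15 = 0.75\).

rand_num_bias_10000 <- sample(x = 1:10, size = 10000, replace = TRUE, prob = prob_vec);
mean(rand_num_bias_10000 > 5); # Proportion of sampled values in 6-10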

Sampling random characters from a list

Sampling characters from a list of elements is no different than sampling numbers, but I am illustrating it separately because I find that I often sample characters for conceptually different reasons. For example, if I want to create a simulated data set that includes three different species, I might create a vector of species identities from which to sample.

species <- c("species_A", "species_B", "species_C");

This gives three possible categories, which I can now use sample to draw from. Assume that I want to simulate the sampling of these three species, perhaps with species_A being twice as common as species_B and species_C. I might use the following code to sample 24 times.

sp_sample <- sample(x = species, size = 24, replace = TRUE, 
                    prob = c(0.5, 0.25, 0.25) 
                    );

Below are the values that get returned.

##  [1] "species_A" "species_C" "species_B" "species_A" "species_B" "species_A"
##  [7] "species_C" "species_B" "species_A" "species_B" "species_A" "species_A"
## [13] "species_B" "species_A" "species_A" "species_B" "species_C" "species_B"
## [19] "species_A" "species_C" "species_A" "species_B" "species_A" "species_C"

Simulating data with known correlations

We can generate variables \(X_{1}\) and \(X_{2}\) that have a known correlation \(\rho\) with one another. The code below does this for two standard normal random variables with a sample size of 10000, such that the correlation between them is 0.3.

N   <- 10000;
rho <- 0.3;
x1  <- rnorm(n = N, mean = 0, sd = 1);
x2  <- (rho * x1) + sqrt(1 - rho*rho) * rnorm(n = N, mean = 0, sd = 1);

Mathematically, these variables are generated by first simulating the sample \(x_{1}\) (x1 above) from a standard normal distribution. Then, \(x_{2}\) (x2 above) is calculated as below,

\(x_{2} = \rho x_{1} + \sqrt{1 - \rho^{2}}x_{rand},\)

where \(x_{rand}\) is a sample from a standard normal distribution (i.e., with the same variance as \(x_{1}\)). A simple call to the R function cor will confirm that the correlation is indeed close to rho (differing only by sampling error).

cor(x1, x2);
## [1] 0.2982606

This is useful if we are only interested in two variables, but there is a much more efficient way to generate any number of variables with different variances and correlations to one another. To do this, we need to use the MASS library, which can be installed and loaded as below.

install.packages("MASS");
library("MASS");

In the MASS library, the function mvrnorm can be used to generate any number of variables for a pre-specified covariance structure.

Suppose we want to simulate a data set of three measurements from a species of organisms. Measurement 1 (\(M_1\)) has a mean of \(\mu_{M_{1}} =\) 159.54 and variance of \(Var(M_1) =\) 12.68, measurement 2 (\(M_2\)) has a mean of \(\mu_{M_{2}} =\) 245.26 and variance of \(Var(M_2) =\) 30.39, and measurement 3 (\(M_3\)) has a mean of \(\mu_{M_{3}} =\) 25.52 and variance of \(Var(M_3) =\) 2.18. Below is a table summarising these values.

measurement mean variance
M1 159.54 12.68
M2 245.26 30.39
M3 25.52 2.18

Further, we want the covariance between \(M_1\) and \(M_2\) to equal \(Cov(M_{1}, M_{2}) =\) 13.95, the covariance between \(M_1\) and \(M_3\) to equal \(Cov(M_{1}, M_{3}) =\) 3.07, and the covariance between \(M_2\) and \(M_3\) to equal \(Cov(M_{2}, M_{3}) =\) 4.7. We can put all of this information into a covariance matrix \(\textbf{V}\) with three rows and three columns. The diagonal of the matrix holds the variances of each variable, with the off-diagonals holding the covariances (note also that the variance of a variable \(M\) is just the variable’s covariance with itself; e.g., \(Var(M_{1}) = Cov(M_{1}, M_{1})\)).

\[ V = \begin{pmatrix} Var(M_{1}) & Cov(M_{1}, M_{2}) & Cov(M_{1}, M_{3}) \\ Cov(M_{2}, M_{1}) & Var(M_{2}) & Cov(M_{2}, M_{3}) \\ Cov(M_{3}, M_{1}) & Cov(M_{3}, M_{2}) & Var(M_{3}) \\ \end{pmatrix}. \]

In R, we can create this matrix as follows.

matrix_data <- c(12.68, 13.95, 3.07, 13.95, 30.39, 4.70, 3.07, 4.70, 2.18);
cv_mat      <- matrix(data = matrix_data, nrow = 3, ncol = 3, byrow = TRUE);
rownames(cv_mat) <- c("M1", "M2", "M3");
colnames(cv_mat) <- c("M1", "M2", "M3");

Here is what cv_mat looks like (note that it is symmetrical along the diagonal).

##       M1    M2   M3
## M1 12.68 13.95 3.07
## M2 13.95 30.39 4.70
## M3  3.07  4.70 2.18

Now we can add the means to a vector in R.

mns <- c(159.54, 245.26, 25.52);

We are now ready to use the mvrnorm function in R to simulate some number n of sampled organisms with these three measurements. We use the mvrnorm arguments mu and Sigma to specify the vector of means and covariance matrix, respectively.

sim_data <- mvrnorm(n = 40, mu = mns, Sigma = cv_mat);

Here are the example data below.

M1 M2 M3
156.8401 239.2532 21.50215
159.5836 243.8455 23.58573
159.5599 243.0089 26.31123
157.4038 244.4229 25.12821
161.2432 247.7032 25.19604
162.7752 250.7659 27.67103
162.4503 250.6065 26.18582
161.7070 246.9288 27.21716
167.2550 252.9275 28.13273
156.3528 247.3137 25.06813
160.1374 254.1557 28.87317
157.0188 243.1276 26.16982
157.1850 242.3487 26.37687
164.6624 254.0036 27.43464
155.5627 245.4346 26.71897
160.4658 243.3243 30.04020
156.3634 236.9572 22.10719
156.3370 239.4776 23.75435
160.9510 246.7674 27.81518
163.4051 251.4403 27.47592
159.2332 242.6554 24.77689
157.7084 240.3702 25.22394
157.3519 241.2594 20.76773
157.6528 242.7551 23.06892
163.2168 251.2960 26.00757
157.8485 244.6028 24.94918
156.7668 234.9571 24.77648
156.7720 244.0703 22.67401
160.1196 243.9363 24.99330
157.0385 242.5131 25.58454
164.4998 251.6977 28.17435
163.6503 251.6433 23.47595
156.1016 243.8478 25.40560
163.4849 247.2491 24.73946
156.2721 241.0744 22.65221
161.4463 247.8021 25.72697
152.7829 242.0364 23.97818
169.0207 259.8599 28.67627
163.5667 248.3735 27.18757
166.9606 244.0992 26.87829

We can check to confirm that the mean values of each column are correct using apply.

apply(X = sim_data, MARGIN = 2, FUN = mean);
##        M1        M2        M3 
## 159.96885 245.74781  25.56205

And we can check to confirm that the covariance structure of the data is correct using cov.

cov(sim_data);
##           M1       M2       M3
## M1 13.674363 14.66860 4.522711
## M2 14.668598 26.50344 6.734690
## M3  4.522711  6.73469 4.443100

Note that the values are not exact, but they should become closer to the specified values as we increase the sample size n. We can visualise the data too; for example, we might look at the close correlation between \(M_{1}\) and \(M_{2}\) using a scatterplot, just as we would for data sampled from the field.

par(mar = c(5, 5, 1, 1));
plot(x = sim_data[,1], y = sim_data[,2], pch = 20, cex = 1.25, cex.lab = 1.25,
     cex.axis = 1.25, xlab = expression(paste("Value of ", M[1])),
     ylab = expression(paste("Value of ", M[2])));

We could even run an ordination on these simulated data. For example, we could extract the principal components with prcomp, then plot the first two PCs to visualise these data (a short sketch of this is given below). We might, for example, want to compare different methods of ordination using a data set with different, pre-specified properties (e.g., Minchin 1987). We might also want to use simulated data sets to investigate how different statistical tools perform. I show this in the next section, where I put a full data set together and run linear models on it.
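Here is a minimal sketch of that ordination idea (an addition to these notes; centring and scaling the variables before the PCA is an arbitrary choice here).

sim_pca <- prcomp(sim_data, scale. = TRUE); # Principal components of the simulated data
plot(x = sim_pca$x[, 1], y = sim_pca$x[, 2], pch = 20, cex = 1.25,
     cex.lab = 1.25, cex.axis = 1.25, xlab = "PC1", ylab = "PC2");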

Simulating a full data set

Putting everything together, here I will create a data set of three different species from which three different measurements are taken. We can just call these measurements ‘length’, ‘width’, and ‘mass’. For simplicity, let us assume that these measurements always covary in the same way that we saw with \(\textbf{V}\) (i.e., cv_mat) above. But let’s also assume that we have three species with slightly different mean values. Below is the code that will build a new data set of \(N = 20\) samples with four columns: species, length, width, and mass.

N           <- 20;
matrix_data <- c(12.68, 13.95, 3.07, 13.95, 30.39, 4.70, 3.07, 4.70, 2.18);
cv_mat      <- matrix(data = matrix_data, nrow = 3, ncol = 3, byrow = TRUE);
mns_1       <- c(159.54, 245.26, 25.52);
sim_data_1  <- mvrnorm(n = N, mu = mns_1, Sigma = cv_mat);
colnames(sim_data_1) <- c("Length", "Width", "Mass");
# Below, I bind a column for indicating 'species_1' identity
species     <- rep(x = "species_1", times = 20); # Repeats 20 times
sp_1        <- data.frame(species, sim_data_1);

Let us add one more data column. Suppose that we can also sample the number of offspring each organism has, and that the mean number of offspring that an organism has equals one tenth of the organism’s mass. To do this, we can use rpois, and take advantage of the fact that the argument lambda can be a vector rather than a single value. So to get the number of offspring for each organism based on its body mass, we can just insert the mass vector sp_1$Mass times 0.1 for lambda.

offspring   <- rpois(n = N, lambda = sp_1$Mass * 0.1);
sp_1        <- cbind(sp_1, offspring);

I have also bound the offspring number to the data set sp_1. Here is what it looks like below.

species Length Width Mass offspring
species_1 157.8997 247.2485 25.07636 2
species_1 166.8389 254.5960 28.16908 5
species_1 169.8691 252.4852 25.95076 0
species_1 158.3659 244.8600 23.26223 3
species_1 160.3753 243.9834 25.83909 2
species_1 162.0691 244.0099 25.63732 2
species_1 163.9570 249.4643 26.65444 2
species_1 154.7245 234.9060 26.73195 2
species_1 160.7001 248.3932 26.65420 4
species_1 159.8588 241.1411 24.61550 2
species_1 163.4095 246.7568 26.13083 3
species_1 160.0924 249.5546 26.06573 1
species_1 159.1983 246.6818 25.71434 4
species_1 163.6008 252.5308 26.10885 0
species_1 155.4829 247.2605 22.65361 2
species_1 159.9242 242.0529 24.11410 3
species_1 161.9690 246.9981 27.28192 5
species_1 156.8021 241.0325 22.66747 2
species_1 159.0043 243.9699 25.00752 1
species_1 164.6949 246.7542 26.96176 0

To add two more species, let us repeat the process two more times, but change the expected mass just slightly each time. The code below does this, and puts everything together in a single data set.

# First making species 2
mns_2       <- c(159.54, 245.26, 25.52 + 3); # Add a bit
sim_data_2  <- mvrnorm(n = N, mu = mns_2, Sigma = cv_mat);
colnames(sim_data_2) <- c("Length", "Width", "Mass");
species     <- rep(x = "species_2", times = 20); # Repeats 20 times
offspring   <- rpois(n = N, lambda = sim_data_2[,3] * 0.1);
sp_2        <- data.frame(species, sim_data_2, offspring);
# Now make species 3
mns_3       <- c(159.54, 245.26, 25.52 + 4.5); # Add a bit more
sim_data_3  <- mvrnorm(n = N, mu = mns_3, Sigma = cv_mat);
colnames(sim_data_3) <- c("Length", "Width", "Mass");
species     <- rep(x = "species_3", times = 20); # Repeats 20 times
offspring   <- rpois(n = N, lambda = sim_data_3[,3] * 0.1);
sp_3        <- data.frame(species, sim_data_3, offspring);
# Bring it all together in one data set
dat <- rbind(sp_1, sp_2, sp_3);

Our full data set now looks like the below.

species Length Width Mass offspring
species_1 157.8997 247.2485 25.07636 2
species_1 166.8389 254.5960 28.16908 5
species_1 169.8691 252.4852 25.95076 0
species_1 158.3659 244.8600 23.26223 3
species_1 160.3753 243.9834 25.83909 2
species_1 162.0691 244.0099 25.63732 2
species_1 163.9570 249.4643 26.65444 2
species_1 154.7245 234.9060 26.73195 2
species_1 160.7001 248.3932 26.65420 4
species_1 159.8588 241.1411 24.61550 2
species_1 163.4095 246.7568 26.13083 3
species_1 160.0924 249.5546 26.06573 1
species_1 159.1983 246.6818 25.71434 4
species_1 163.6008 252.5308 26.10885 0
species_1 155.4829 247.2605 22.65361 2
species_1 159.9242 242.0529 24.11410 3
species_1 161.9690 246.9981 27.28192 5
species_1 156.8021 241.0325 22.66747 2
species_1 159.0043 243.9699 25.00752 1
species_1 164.6949 246.7542 26.96176 0
species_2 160.3955 246.1206 25.10094 0
species_2 160.4043 249.5417 27.24088 2
species_2 155.4147 237.0170 26.08505 4
species_2 160.2475 250.7335 26.58956 4
species_2 162.8253 247.1811 27.62468 4
species_2 164.2270 249.0396 27.02769 6
species_2 164.6313 245.4949 24.96247 3
species_2 165.2472 251.9183 27.98904 2
species_2 158.8623 243.6660 25.83268 2
species_2 159.1720 251.1752 25.68772 1
species_2 157.3919 247.4839 23.70365 0
species_2 159.8873 238.8698 25.17297 2
species_2 157.8144 243.4603 25.80216 2
species_2 165.4962 252.4239 26.85917 1
species_2 158.1037 244.8606 25.02206 1
species_2 155.8809 244.8382 23.87935 4
species_2 153.0293 243.0913 24.66058 2
species_2 159.8996 240.1169 25.89279 1
species_2 162.9840 253.0809 29.13562 3
species_2 160.1653 241.2856 24.67164 2
species_3 164.1158 250.2614 26.39159 4
species_3 163.1071 244.4463 26.90404 6
species_3 160.7772 248.8884 25.77013 1
species_3 153.9652 240.6978 26.49205 5
species_3 166.8952 245.7310 26.63060 1
species_3 160.9657 251.3784 25.67313 2
species_3 155.0207 241.7796 25.85391 3
species_3 157.9167 239.1128 25.13193 2
species_3 156.6035 239.1086 25.62935 3
species_3 155.2653 239.5433 25.69861 1
species_3 158.6467 242.4603 24.35432 5
species_3 155.4908 235.2602 22.48504 3
species_3 161.2673 247.8753 27.63845 3
species_3 158.3426 243.0807 28.17120 3
species_3 166.0403 247.8340 25.75077 2
species_3 163.3443 246.3803 26.35241 1
species_3 157.6857 239.8627 26.53239 7
species_3 166.3913 257.1384 28.55016 2
species_3 160.5942 244.3150 25.83948 2
species_3 159.3657 247.5116 25.92595 0

To summarise, we now have a simulated data set of measurements from three different species, all of which have known variances and covariances of length, width, and mass. Each species has a slightly different mean mass, and for all species, each unit of mass increases the expected number of offspring by 0.1. Because we know these properties of the data for certain, we can start asking questions that might be useful to know about our data analysis. For example, given this covariance structure and these small differences in mass, is a sample size of 20 really enough to even get a significant difference among species masses using an ANOVA? What if we tried to test for differences among masses using some sort of randomisation approach instead? Would this give us more or less power? Let us run an ANOVA to see if the difference between group means (which we know exists) is recovered.

aov_result <- aov(Mass ~ species, data = dat);
summary(aov_result);
##             Df Sum Sq Mean Sq F value Pr(>F)
## species      2   2.94   1.469   0.751  0.476
## Residuals   57 111.44   1.955

It appears not! What about the relationship between body mass and offspring production that we know exists? Below is a scatterplot of the data for the three different species.
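A plot along these lines can be produced with code such as the sketch below (the plotting symbol chosen for each species is arbitrary).

par(mar = c(5, 5, 1, 1));
plot(x = dat$Mass, y = dat$offspring, pch = c(15, 16, 17)[as.factor(dat$species)],
     cex = 1.25, cex.lab = 1.25, cex.axis = 1.25, xlab = "Body mass",
     ylab = "Offspring number");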

This looks like there might be a positive relationship, but it is very difficult to determine just from the scatterplot. We can use a generalised linear mixed model to test it with species as a random effect, as we might do if these were data sampled from the field (do not worry about the details of the model here; the key point is that we can use the simulated data with known properties to assess the performance of a statistical test).

library(lme4);
## Loading required package: Matrix
mod <- glmer(offspring ~ Mass + (1 | species), data = dat, family = "poisson");
## boundary (singular) fit: see help('isSingular')
summary(mod);
## Generalized linear mixed model fit by maximum likelihood (Laplace
##   Approximation) [glmerMod]
##  Family: poisson  ( log )
## Formula: offspring ~ Mass + (1 | species)
##    Data: dat
## 
##      AIC      BIC   logLik deviance df.resid 
##    223.9    230.2   -108.9    217.9       57 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -1.6404 -0.8304 -0.1751  0.7023  2.7455 
## 
## Random effects:
##  Groups  Name        Variance Std.Dev.
##  species (Intercept) 0        0       
## Number of obs: 60, groups:  species, 3
## 
## Fixed effects:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.52146    1.58536  -0.960    0.337
## Mass         0.09315    0.06080   1.532    0.125
## 
## Correlation of Fixed Effects:
##      (Intr)
## Mass -0.999
## optimizer (Nelder_Mead) convergence code: 0 (OK)
## boundary (singular) fit: see help('isSingular')

There does not appear to be any effect here either! To get one, it appears that we will need to simulate a larger data set (or a bigger effect size – or just get lucky when re-simulating a new data set).
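One way to explore this directly (this sketch is an addition to these notes; for simplicity it drops the species random effect and simulates a single species with a plain Poisson GLM) is to re-simulate the mass and offspring data many times at a larger sample size, then record how often the effect of mass on offspring number is detected at p < 0.05.

n_rep    <- 100; # Number of simulated data sets (arbitrary)
N_big    <- 200; # Larger sample size for each data set (arbitrary)
p_values <- rep(x = NA, times = n_rep);
for(i in 1:n_rep){
    sim_i       <- mvrnorm(n = N_big, mu = mns, Sigma = cv_mat);
    mass_i      <- sim_i[, 3];
    offspring_i <- rpois(n = N_big, lambda = mass_i * 0.1);
    mod_i       <- glm(offspring_i ~ mass_i, family = "poisson");
    p_values[i] <- summary(mod_i)$coefficients["mass_i", "Pr(>|z|)"];
}
mean(p_values < 0.05); # Proportion of simulations in which the mass effect is detected

The proportion returned at the end gives a rough sense of statistical power under these assumptions.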

Note that I have run a linear model that might be reasonable given the structure of our data. But the advantage of working with simulated data and knowing for certain what the relationship is between the underlying variables is that we can explore different statistical techniques. For example, we know that our response variable offspring is count data, so we are supposed to specify a Poisson error structure using the family = "poisson" argument above, right? But what would happen if we just used a normal error structure anyway? Would this really be so bad? Now is the opportunity to test because we know what the correct answer is supposed to be! Trying statistical methods that are normally ill-advised can actually be useful here, as it can help us see for ourselves when a technique is bad – or perhaps when it really is not (e.g., Ives 2015).
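As a sketch of that comparison (an addition to these notes), we could fit the same model with a normal error structure using lmer from the already-loaded lme4 package, then compare its conclusions with those of the Poisson model above. Because we know the true data-generating process, we can judge for ourselves how misleading (or not) the misspecified model is.

mod_norm <- lmer(offspring ~ Mass + (1 | species), data = dat); # Normal errors instead of Poisson
summary(mod_norm);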

Conclusions

Simulating data can be a powerful tool for learning and investigating different statistical analyses. The main benefits of using simulated data are flexibility and certainty. Simulation gives us the flexibility to explore any number of hypotheticals, including different sample sizes, effect sizes, relationships between variables, and error distributions. It also works from a point of certainty; we know what the real relationship is between variables, and what the actual effect sizes are, because we define them when generating random samples. So if we want to better understand what would happen if we were unable to sample an important variable in our system, or if we were to use a biased estimator, or if we were to violate key model assumptions, simulated data are a very useful tool.

Literature cited

Box, G. E. P., and Mervin E. Muller. 1958. “A note on the generation of random normal deviates.” The Annals of Mathematical Statistics 29 (2): 610–11. https://doi.org/10.1214/aoms/1177706645.
Ives, Anthony R. 2015. “For testing the significance of regression coefficients, go ahead and log-transform count data.” Methods in Ecology and Evolution 6: 828–35. https://doi.org/10.1111/2041-210X.12386.
Minchin, Peter R. 1987. “An evaluation of the relative robustness of techniques for ecological ordination.” Vegetatio 69 (1-3): 89–107. https://doi.org/10.1007/BF00038690.