
The Basics of Standard Deviation: A Simple Guide

Standard deviation is a statistical measure of the dispersion or variation of a set of data points around the mean, or average, value. It is often used to measure the risk or volatility of an investment or portfolio, as it indicates how much the return on the investment is likely to fluctuate over time.



It is calculated as the square root of the variance, which is the average of the squared differences between the observations in the dataset and the mean value.
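As a quick check of this relationship, the two quantities can be computed directly with NumPy (a minimal sketch on a hypothetical set of data points, not data from this article):

import numpy as np

data = np.array([1.0, 2.0, 4.0, 7.0])       # hypothetical data points

variance = np.var(data)                      # average of the squared deviations from the mean
std_dev = np.sqrt(variance)                  # standard deviation = square root of the variance

print(np.isclose(std_dev, np.std(data)))     # True: np.std computes the same quantity directly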

To calculate the standard deviation of a population dataset, the following steps are typically followed:

  • Calculate the mean of the data points:

Mean Return = SUM(Returns) / COUNT(Returns)


Where Returns is the array of daily returns and COUNT(Returns) is the number of data points in the dataset.

  • Subtract the mean from each data point to obtain the deviation of each point:

Deviation = Returns - Mean Return

  • Square each deviation to eliminate negative values:

Squared Deviation = (Returns - Mean Return)^2

  • Calculate the average squared deviation, called the variance. The variance measures the dispersion of the values in the dataset around the mean, but it is expressed in squared units (here, %²).

Variance = SUM((Returns - Mean Return)^2) / COUNT(Returns)

  • Take the square root of the variance to arrive at the standard deviation:

Standard Deviation = SQRT(SUM((Returns - Mean Return)^2) / COUNT(Returns))




This formula first calculates the mean return of the dataset by summing all of the returns and dividing by the number of return data points. Next, the formula calculates the squared difference between each return in the dataset and the mean return. These squared differences are then summed and divided by the number of return data points to give the variance. Finally, the formula takes the square root of the variance to arrive at the standard deviation.
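Putting these steps together, the calculation can be sketched in a few lines of Python. This is a minimal illustration of the population formula above; the function and variable names are ours, purely for illustration, not part of any standard library.

import math

def population_std_dev(returns):
    # Population standard deviation of a list of returns (in %)
    n = len(returns)
    mean_return = sum(returns) / n                          # Mean Return = SUM(Returns) / COUNT(Returns)
    squared_deviations = [(r - mean_return) ** 2 for r in returns]
    variance = sum(squared_deviations) / n                  # Variance = SUM(squared deviations) / COUNT(Returns)
    return math.sqrt(variance)                              # Standard Deviation = SQRT(Variance)

Because this is the population formula, it divides by the number of observations; for a sample drawn from a larger population, the divisor is typically the number of observations minus one.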

Example: Suppose we have the following set of daily returns on a stock over a period of 10 days:

Day 1: 2%, Day 2: -3%, Day 3: 1%, Day 4: 5%, Day 5: -1%, Day 6: 3%, Day 7: -2%, Day 8: 0%, Day 9: 4%, Day 10: -4%


To calculate the standard deviation of this dataset, you would first need to calculate the mean return. The mean return of the dataset is calculated by summing all of the returns and dividing by the number of return data points.

Mean Return = SUM(Returns) / COUNT(Returns)

= SUM(2, -3, 1, 5, -1, 3, -2, 0, 4, -4) / 10

= 5 / 10

= 0.5

Next, you would need to calculate the variance by finding the squared difference between each observation and the mean, summing those squared differences, and dividing by the number of observations in the dataset.

Variance = SUM((Returns - Mean Return)^2) / COUNT(Returns)

= ((2 - 0.5)^2 + (-3 - 0.5)^2 + (1 - 0.5)^2 + (5 - 0.5)^2 + (-1 - 0.5)^2 + (3 - 0.5)^2 + (-2 - 0.5)^2 + (0 - 0.5)^2 + (4 - 0.5)^2 + (-4 - 0.5)^2) / 10

= 82.5 / 10

= 8.25

Finally, you would take the square root of the variance to calculate the standard deviation.

Standard Deviation = SQRT(Variance)

= SQRT(8.25)

= 2.87

The standard deviation of this dataset is approximately 2.87. This means that the daily returns are dispersed around the mean return by roughly 2.87 percentage points.

The calculated standard deviation is expressed as a percentage (i.e. about 2.87%) and measures the degree to which the returns are spread out around the mean return. A higher standard deviation indicates a greater degree of dispersion or volatility, while a lower standard deviation indicates a smaller degree of dispersion and greater stability.
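To double-check the arithmetic in this example, the same steps can be reproduced in a short Python snippet (again a sketch, using the population formula from this article):

import math

returns = [2, -3, 1, 5, -1, 3, -2, 0, 4, -4]    # daily returns in %

mean_return = sum(returns) / len(returns)                                  # 0.5
variance = sum((r - mean_return) ** 2 for r in returns) / len(returns)     # 8.25
std_dev = math.sqrt(variance)                                              # about 2.87

print(mean_return, variance, round(std_dev, 2))    # prints: 0.5 8.25 2.87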


Standard deviation is an important measure in finance, as it is often used to measure the risk of an investment or portfolio.


