Historical Simulation Method: Does It Really Help?
- Pankaj Maheshwari
- Oct 2, 2022
- 6 min read
Updated: Apr 1
"simulation is an incredibly powerful technique to understand the impact on a variable due to changes in a number of factors."
Many approaches to simulation suggest either reusing outcomes observed in the past (the historical simulation method) or generating a wider range of scenarios from a probability distribution (the parametric Monte Carlo simulation method) to predict and understand the impact on a variable.
Our research suggests that the latter, the parametric Monte Carlo simulation method, has increasingly become a key tool for quantitative analysis. However, one should start with the historical simulation method (being simple and easy to grasp) to build intuition for Monte Carlo simulation.

Historical Simulation Method
The historical simulation method simply assumes that history will repeat itself: one of the past outcomes will recur in the future.
For example, to estimate a stock price, the historical simulation method computes shocks from the time-series data, i.e. the outcomes the stock has generated in the past. The simulator then randomly picks these shocks and applies them to the current stock price to estimate the values the price can take in the future (the simulated prices) under different scenarios. The scenario results are then averaged to produce a single-figure estimate.
Our algorithm extracted the time-series data of a stock trading at $1453.00 as of 2022-10-04, over a lookback period of 250 trading days.

The time series was then passed to a scenario generator tool to compute proportional shocks.

It is important to note that, statistically, the scenario generator computes the proportional shocks as continuously compounded (log) returns, even though the observations themselves are discrete end-of-day closes.
As per the historical simulation method, the same past outcomes will repeat in the future, but possibly in a different order (the sequence of occurrence may change). Our simulator therefore randomly picked one shock (in-sample) from the series of shocks and applied it to the current spot price of $1453.00 to estimate the stock price at time t+1.
For better understanding, the following syntax can be used in Excel to achieve the same:
=spotPrice*EXP(SMALL(spotShocks,RANDBETWEEN(1,COUNT(spotShocks))))

The same process of applying a randomly picked shock to the current spot price is repeated many times to cover the full range of values that the stock price can take over one trading day.


[grid: 1 million simulated stock prices]
These simulated prices were then averaged to get a single-figure estimate of the stock price at t+1.
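The one-day resampling-and-averaging loop described above can be sketched in Python. This is a minimal illustration, not the article's actual model: the price series below is a made-up stand-in for the 250-day lookback, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative price history; in practice this is the 250-day lookback series.
prices = np.array([1400.0, 1412.5, 1398.0, 1421.0, 1435.5, 1453.0])

# Proportional (log) shocks between consecutive end-of-day closes.
shocks = np.diff(np.log(prices))

spot = prices[-1]  # current spot price, $1453.00 in the article

# Randomly resample in-sample shocks and apply each to the current spot,
# mirroring =spotPrice*EXP(SMALL(spotShocks,RANDBETWEEN(1,COUNT(spotShocks)))).
n_sims = 100_000
picked = rng.choice(shocks, size=n_sims, replace=True)
simulated = spot * np.exp(picked)

# Average the scenario results into a single-figure estimate at t+1.
estimate_t1 = simulated.mean()
```

Note that every simulated price is bounded by the best and worst shock observed in the lookback window, which is exactly the method's defining assumption: only past outcomes can recur.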

Let's run the simulator again, extending the timeframe to 250 trading days to estimate the stock price at t+250.

At each node t, the simulator randomly picked one shock (in-sample) from the series of shocks computed by the scenario generator and applied it to the previous day's stock price (node t-1) to estimate the stock price at node t.
[columns: 1 simulation, rows: 250 t-nodes, grid: simulated stock prices]

[y-axis: values, x-axis: 250 t-nodes, grid: 1 random path]
Our algorithm triggered the simulator again and repeated the same process to generate multiple paths that the stock price may follow until t+250.


As before, at each node t the simulator randomly picked one in-sample shock and applied it to the previous node's price (node t-1) to estimate the stock price at node t.
[columns: 10,000 simulations, rows: 250 t-nodes, grid: simulated stock prices]

[y-axis: values, x-axis: 250 t-nodes, grid: 50 random paths]
These simulated prices were then averaged to get a single-figure estimate of the stock price at t+250.
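The multi-path procedure above can be sketched by chaining resampled shocks node by node: each path applies a randomly picked in-sample shock to the previous node's price. As the article's actual 250 observed shocks are not published, the `shocks` array below is a clearly labeled stand-in, and the dimensions follow the figure (10,000 simulations over 250 nodes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 250 observed daily log shocks from the lookback window;
# in the real model these come from the scenario generator, not a distribution.
shocks = rng.normal(0.0, 0.015, size=250)

spot = 1453.00
horizon = 250     # estimate at t+250
n_paths = 10_000  # 10,000 simulated paths, as in the figure

# At each node: S_t = S_{t-1} * exp(shock). Cumulative sums of the log
# shocks along each row build all 250-node paths in one vectorized step.
draws = rng.choice(shocks, size=(n_paths, horizon), replace=True)
paths = spot * np.exp(np.cumsum(draws, axis=1))

# Average the terminal prices across paths into a single-figure estimate.
estimate_t250 = paths[:, -1].mean()
```

The vectorized `cumsum` trick is equivalent to the node-by-node loop described in the text, but avoids a Python-level loop over 2.5 million cells.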

For better understanding, some snippets from our Excel model are attached.

At the outset, this simulation approach seems very intuitive, since the algorithm tries to capture the thousands (or millions) of possible values that the variable can take in the future; it is also simple to understand and easy to implement.
However, our algorithm, together with other time-series models, actually runs the parametric Monte Carlo simulation to predict stochastic variables over longer timeframes, which in our experience yields a higher probability of the estimated value being achieved than replaying historically observed data points.
Observation 1: one might have noticed that, when estimating the stock price over the longer timeframe, we had to reduce the number of simulations (generating 100k simulated paths instead of 1 million).

Runtime recorded by our simulator via our time recorder tool.
These simulators are computationally heavy and time-consuming to run on a small machine; simulating a number of stochastic variables across multiple financial instruments requires high computational power and infrastructure.
Observation 2: the historical simulation method assumes that history will repeat itself, using only past outcomes to predict future outcomes, which is not true in real life. It therefore fails to capture any new catastrophic event.
Assumptions of the Historical Simulation Method
The Historical Simulation method is one of the most intuitive approaches to estimating asset prices or returns. It uses actual past returns to simulate future potential losses. However, its simplicity relies on several strong assumptions about how markets behave, and understanding these assumptions is essential to assess both its strengths and limitations in real-world applications.
Assumption of Stationarity: This method assumes that the statistical properties of asset returns, such as mean, volatility, and autocorrelation, remain constant over time. This stationarity assumption allows the model to treat past returns as representative of future possibilities: the distribution of returns is taken to be stable, so historical return shocks can be replayed as scenarios for today's portfolio.
Finite Historical Lookback Period: This method uses a fixed lookback window (often 250 or 260 trading days) to estimate potential losses, assuming that this timeframe contains sufficient and relevant information about return variability and extremes. The period (usually ~1 year) is taken to be long enough to capture different market behaviors, yet short enough to remain current.
Accuracy and Completeness of Historical Data: This method directly uses observed historical returns without statistical modeling or smoothing. As such, it assumes that the raw historical data is clean, accurate, and reliable. Preprocessing steps may therefore include adjusting prices for corporate actions (splits, dividends), ensuring there are no missing observations or data anomalies, and time-synchronizing every asset in multi-asset portfolios.
Equal Probability Weighting of Historical Outcomes: In this method, each return in the lookback window is given equal weight, regardless of when it occurred or how relevant it is. This makes the method simple to implement and transparent, requiring no weighting scheme or decay structure.
Limitations of the Historical Simulation Method
Assumption of Stationarity: The method assumes that the market conditions that produced past return distributions will persist in the future. In reality, financial markets are non-stationary due to economic regime shifts (boom vs. recession), policy changes (interest rate cuts/hikes), innovations (algorithmic trading), or behavioral changes in market participants.
Historical simulation may therefore misrepresent risk if future market behavior diverges from the past. For example, a portfolio simulated over a calm (low-volatility) period may severely underestimate risk in a crisis.
Limitations of the Lookback Period: The simulation relies on a defined historical window (e.g., 250 trading days). A short window may miss rare events (financial crises or recessions) or lack robustness for fat-tailed distributions. In emerging markets or for new instruments, historical data may be sparse or nonexistent, which limits statistical validity and increases estimation error. For example, a company may wish to assess a 10-year risk but have only 2 years of price data for a recently listed stock; VaR estimates here will be highly uncertain.
Dependence on Data Quality and Completeness: The accuracy of historical simulations heavily depends on the quality and completeness of the historical data used. Inadequate or incomplete data can lead to biased estimates and unreliable predictions. Data gaps (missing values), inaccuracies, or errors in the historical records can distort the results of the simulation, making them less reliable for future decision-making. Pre-processing, cleansing, and validating historical datasets are crucial before simulation.
Ignoring Fundamental Factors: Historical simulation is purely technical: it does not integrate macroeconomic indicators or firm-specific fundamentals such as GDP growth, inflation, company earnings reports, or investor sentiment and news. It may therefore miss structural changes in the market. For instance, rising inflation may signal higher volatility ahead, but historical simulation will not reflect that unless similar events occurred in the lookback period.
Equal Weighting of Historical Outcomes: All historical return observations are treated with equal importance, regardless of recency or context. It does not recognize that:
Some events may be outdated or irrelevant,
More recent events might be more indicative of near-term risk.
Alternative: The Exponentially Weighted Historical Simulation (EWHS) approach assigns more weight to recent data, addressing this limitation.
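The EWHS idea can be sketched by resampling shocks with exponentially decaying probabilities instead of uniform ones. This is an illustrative sketch only: the `shocks` array is a stand-in for observed daily shocks, and the decay factor 0.94 is an assumption following the common RiskMetrics convention, not a value from the article.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for 250 observed daily log shocks, ordered oldest to newest.
shocks = rng.normal(0.0, 0.012, size=250)

lam = 0.94                           # assumed decay factor (RiskMetrics convention)
ages = np.arange(len(shocks))[::-1]  # age 0 for the most recent shock
weights = (1 - lam) * lam ** ages
weights /= weights.sum()             # normalize into a probability distribution

# Recent shocks are now far more likely to be resampled than old ones,
# unlike plain historical simulation where every shock has weight 1/250.
spot = 1453.00
simulated = spot * np.exp(rng.choice(shocks, size=100_000, p=weights))
```

With λ = 0.94, the most recent shock carries roughly 6% of the total probability mass, while shocks near the start of the window are effectively ignored.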
Backward-Looking Nature of Historical Simulation: This is inherently a backward-looking method. It relies entirely on past data and does not incorporate any forward-looking information about future market expectations. Forward-looking information, such as anticipated economic policies, technological advancements, or geopolitical events, is crucial for making accurate predictions about future market conditions.
Following are my observations on the HUL stock price over 3 years (746 data points):
1. The expected return distribution is arrived at by applying the formula =spotPrice*EXP(SMALL(spotShocks,RANDBETWEEN(1,COUNT(spotShocks))))
2. Continue applying this across the full equity price data, build a pivot table, and group the results to arrive at the cumulative distribution.
3. Finally, a bar chart is inserted based on the pivot data.
I observe the expected return distribution to be symmetric, which suggests its probability density is approximately normal. The historical simulation method, however, has a limitation: if sudden news hits the market, the stock price will not follow this method.
When I estimate the point and path estimations and draw a line chart, it appears as follows.
Averaging the point estimation (t) and the path estimation (t+252d) finally gives the expected return for 1 year.
The expected price distribution over 1000 iterations, even after reducing the number of iterations, still appears normally distributed. Since the method assumes history will repeat in the future, this also suggests that the historical time-series return distribution is approximately normal.
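The pivot-and-group workflow described in steps 1-3 above can be sketched with a histogram: simulate prices by resampling shocks, bin them (the Excel "group by" step), and count observations per bin. The shocks below are a stand-in for the 746 observed HUL data points, and the bin count is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

spot = 1453.00
# Stand-in for ~3 years (746 data points) of observed daily log shocks.
shocks = rng.normal(0.0, 0.012, size=746)

# Step 1: apply randomly picked in-sample shocks to the spot (1000 iterations).
simulated = spot * np.exp(rng.choice(shocks, size=1_000, replace=True))

# Step 2: group simulated prices into bins and count per bin, mirroring
# the pivot table's "group by" to approximate the cumulative distribution.
counts, edges = np.histogram(simulated, bins=20)

# Step 3: the bar chart would plot `counts` against the bin edges; a roughly
# symmetric count profile around the spot gives the bell shape observed.
```

A quick symmetry check on `counts` (comparing mass below and above the spot) is the numerical analogue of eyeballing the bar chart for a bell shape.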