Assumptions of an Event Study
Event study assumptions are the conditions required for valid inference about event impact. The six key assumptions are: identifiable event timing, market efficiency, significant impact magnitude, normally distributed residuals, no autocorrelation in residuals, and absence of confounding events in the event window.
Part of the Methodology Guide
This page is part of the Event Study Methodology Guide.
Event studies rely on several assumptions. When these hold, abnormal returns can be attributed to the event. When they don't, results may be misleading. The table below summarizes each assumption, how to test it, and what to do if it fails.
Assumption violations are common in practice. As demonstrated by Brown and Warner (1985) through Monte Carlo simulations of 250 samples, the standard cross-sectional t-test maintains correct size under most conditions, but event-induced variance increases can inflate rejection rates to 10-15% at a nominal 5% significance level. Researchers should test each assumption and apply appropriate remedies when violations are detected.
What Are the Key Assumptions?
Event study assumptions are the statistical and economic conditions that must hold for abnormal return estimates to be unbiased and for test statistics to have correct size. These six assumptions underpin every standard event study: without them, the researcher cannot reliably attribute observed stock price changes to the event of interest. Violations are common in applied work, so each assumption should be tested rather than assumed to hold.
- **Market Efficiency**: The hypothesis that stock prices fully and rapidly reflect all publicly available information. Under semi-strong efficiency, abnormal returns should appear immediately at the event and not drift afterward.
- **Confounding Event**: An unrelated event (e.g., an earnings announcement, dividend change, or analyst upgrade) occurring within the same event window, making it impossible to attribute the observed abnormal return solely to the event of interest.
- **Estimation Window**: The pre-event period, typically 120 to 250 trading days, used to estimate model parameters. It must not overlap with the event window to avoid contamination.
| Assumption | How to Test | If Violated |
|---|---|---|
| Identifiable event | Clear event date, narrow window | Use intraday data for precise timing |
| Market efficiency | Pre-trend test, CAR drift analysis | Use longer event window, non-parametric tests |
| Significant impact | Test statistics (Patell Z, Rank Test) | Increase sample size, use cross-sectional analysis |
| Normal residuals | Shapiro-Wilk test | Switch to non-parametric tests (Sign, Rank) |
| No autocorrelation | Durbin-Watson, Ljung-Box | Use BMP test or Newey-West standard errors |
| No confounding events | Pre-trend test, sample screening | Shorten event window, exclude contaminated events |
Why Must the Event Be Identifiable?
The event must have a clear date or narrow time frame. If the timing is ambiguous (e.g., a rumor that builds over weeks), abnormal returns bleed across the window boundary and the study loses power.
Precise timing
For events with known timestamps (earnings at 4:05 PM, Fed decisions at 2:00 PM), use Intraday Event Studies with minute-level data instead of daily returns.
Practical checks:
- Verify the event date against multiple sources (press releases, SEC filings, news wires)
- If the exact date is uncertain, widen the event window — but this reduces statistical power
- For staggered events affecting many firms, use Panel DiD instead
Why Does Market Efficiency Matter?
Event studies assume that prices reflect new information quickly. Under the semi-strong form of market efficiency (Fama 1970), prices incorporate publicly available information as soon as it is released. If markets are slow to react, abnormal returns drift beyond the event window and the measured effect is understated. For liquid U.S. equities, most of the price adjustment typically occurs within the first trading day.
How to test: Check whether abnormal returns exist before the event. Significant pre-event returns suggest information leakage or a misspecified event date.
```r
# Pre-trend test: are pre-event abnormal returns jointly zero?
pretrend <- pretrend_test(task)
print(pretrend)
```

A significant result (low p-value) warns that something is happening before the event: either the market anticipated it, or the event date is wrong.
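To make the logic concrete, here is a minimal base-R sketch of one common pre-trend check, independent of the package's `pretrend_test()` implementation: a one-sample t-test that each firm's cumulative pre-event abnormal return is zero on average. The matrix `pre_ar` and its dimensions are hypothetical simulated inputs, not data from the package.

```r
# Hypothetical input: 50 firms x 10 pre-event days of abnormal returns
set.seed(1)
pre_ar <- matrix(rnorm(50 * 10, mean = 0, sd = 0.02), nrow = 50)

caar_pre <- rowSums(pre_ar)  # each firm's pre-event CAR
t_stat   <- mean(caar_pre) / (sd(caar_pre) / sqrt(length(caar_pre)))
p_value  <- 2 * pt(-abs(t_stat), df = length(caar_pre) - 1)
p_value  # a small value would signal leakage or a misdated event
```

This is equivalent to `t.test(caar_pre)`; a real pre-trend test may instead test each pre-event day's AAR jointly.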
Post-event drift?
If CAR continues to rise or fall after the event window, the market may be incorporating information slowly. Consider extending the event window or using BHAR for long-horizon analysis.
How Large Must the Impact Be?
The event must produce abnormal returns large enough to detect above normal return variation. Small effects require larger samples. Simulation studies by Brown and Warner (1985) show that with an estimation window of 100 days and daily data, a 1% abnormal return can be detected with approximately 80% power using a sample of 50 events at the 5% significance level.
How to test: Run test statistics and check significance.
```r
# AAR with test statistics — check p-values
task$get_aar()
```

If individual events are insignificant but you expect a real effect, increase the sample size or use cross-sectional regression to identify which firm characteristics drive the response.
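A quick back-of-the-envelope power check helps when planning the sample size. The sketch below uses base R's `power.t.test()`; the 2.5% cross-sectional standard deviation of CARs is an illustrative assumption, not a figure from the guide.

```r
# Power of a one-sample t-test to detect a 1% mean abnormal return
# across 50 events, assuming (hypothetically) sd(CAR) = 2.5%
pw <- power.t.test(n = 50, delta = 0.01, sd = 0.025,
                   sig.level = 0.05, type = "one.sample")$power
pw  # roughly 0.8 under these assumptions
```

If the computed power is low, the remedy is the same as above: more events, or a cross-sectional design that pools information across firms.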
What If Residuals Are Not Normally Distributed?
Parametric tests (T-test, Patell Z) assume that estimation-window residuals are normally distributed. If residuals are skewed or heavy-tailed, these tests may over-reject or under-reject.
How to test:
```r
# Model diagnostics include Shapiro-Wilk normality test
diagnostics <- model_diagnostics(task)
print(diagnostics)
```

If the Shapiro-Wilk test rejects normality (p < 0.05), switch to non-parametric test statistics:
```r
# Use non-parametric tests instead
ps <- ParameterSet$new(
  multi_event_statistics = MultiEventStatisticsSet$new(
    statistics = list(
      SignTest$new(),
      RankTest$new(),
      GeneralizedSignTest$new()
    )
  )
)
```

See Test Statistics for a full comparison of parametric vs. non-parametric tests.
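The underlying statistics can be illustrated with base R alone. This sketch draws hypothetical heavy-tailed residuals, confirms the normality check with `shapiro.test()`, and runs a simple sign test via `binom.test()` on the share of positive event-day abnormal returns; all data here are simulated for illustration.

```r
set.seed(42)
est_resid <- rt(120, df = 3)                     # hypothetical heavy-tailed estimation-window residuals
event_ar  <- rnorm(50, mean = 0.005, sd = 0.02)  # hypothetical event-day ARs for 50 firms

sw_p <- shapiro.test(est_resid)$p.value  # if small, normality is rejected
sign_p <- binom.test(sum(event_ar > 0), length(event_ar), p = 0.5)$p.value
c(shapiro = sw_p, sign_test = sign_p)
```

The sign test needs no distributional assumption beyond a 50/50 chance of a positive abnormal return under the null, which is why it remains valid when residuals are skewed or heavy-tailed.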
What If Returns Are Autocorrelated?
Standard errors assume residuals are independent across time. Autocorrelated residuals inflate t-statistics, leading to false positives.
How to test: The Durbin-Watson test checks for first-order autocorrelation (values near 2.0 indicate no autocorrelation). The Ljung-Box test checks for higher-order serial correlation.
```r
# Visual diagnostics include ACF plot for autocorrelation
plot_diagnostics(task)
```

Autocorrelation detected?
Use the BMP test (Boehmer, Musumeci & Poulsen), which standardizes abnormal returns and is robust to event-induced variance increases. For panel studies, cluster-robust standard errors handle within-unit correlation automatically.
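Both diagnostics named above can be reproduced with base R, independent of the package helpers. The residual series below is simulated with deliberate AR(1) dependence so the tests have something to find; `Box.test()` computes the Ljung-Box statistic, and the Durbin-Watson statistic follows directly from its definition.

```r
set.seed(7)
res <- as.numeric(arima.sim(list(ar = 0.5), n = 200, sd = 0.02))  # hypothetical autocorrelated residuals

lb_p <- Box.test(res, lag = 10, type = "Ljung-Box")$p.value  # small => serial correlation
dw   <- sum(diff(res)^2) / sum(res^2)                        # near 2 => no first-order autocorrelation
c(ljung_box_p = lb_p, durbin_watson = dw)
```

With positive first-order autocorrelation the Durbin-Watson statistic falls below 2 (roughly 2(1 - rho) for an AR(1) process), which is what the diagnostic flags.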
How Do I Handle Confounding Events?
The event window should not overlap with other significant events affecting the same firm (e.g., an earnings announcement on the same day as a merger). Confounding events make it impossible to attribute abnormal returns to the event of interest.
Practical strategies:
- Screen your sample: Remove firms with overlapping events in the event window
- Shorten the window: A [-1, +1] window is less likely to be contaminated than [-10, +10]
- Check pre-trends: Significant pre-event abnormal returns may signal a confounding event
```r
# Visual check: do abnormal returns look clean around the event?
plot_event_study(task, type = "aar")
```

A clean event study shows flat AAR before the event, a spike at the event, and a return to zero after. Patterns before the event or prolonged drift after suggest confounding.
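Sample screening is straightforward to script. The base-R sketch below drops any firm whose [-1, +1] event window overlaps another firm-level announcement; the firm names and dates are entirely hypothetical, and the comparison uses calendar days as an approximation of trading days.

```r
# Hypothetical sample: events of interest and other firm announcements
events <- data.frame(firm = c("A", "B", "C"),
                     event_date = as.Date(c("2024-03-05", "2024-03-12", "2024-03-19")))
other  <- data.frame(firm = c("B", "C"),
                     ann_date = as.Date(c("2024-03-11", "2024-04-01")))  # e.g., earnings dates

contaminated <- sapply(seq_len(nrow(events)), function(i) {
  firm_ann <- other$ann_date[other$firm == events$firm[i]]
  any(abs(as.numeric(firm_ann - events$event_date[i])) <= 1)  # within [-1, +1] days
})
clean_sample <- events[!contaminated, ]
clean_sample$firm  # firms kept after screening
```

Here firm B is dropped because its earnings date falls one day before its event, while A and C survive the screen.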
Try it in the Event Study App
Apply these concepts to your own data with our free browser-based tool — no installation required.
What Should I Read Next?
- Getting Started — run your first event study in R
- Diagnostics & Export — full diagnostic toolkit
- Test Statistics — choosing the right significance test
- Applications — which approach fits your research question