AR & CAR Test Statistics

Single-event test statistics for abnormal returns — AR t-test, CAR t-test, BHAR t-test, and permutation tests with formulas and R code.

Single-event test statistics assess whether the abnormal return for one specific firm is significantly different from zero. These tests use the firm’s own estimation-window residuals to form the standard error.

Overview

| Test | R Class | Null Hypothesis | Distribution | Best For |
|------|---------|-----------------|--------------|----------|
| AR t-test | ARTTest | \(H_0: AR_{i,t} = 0\) | \(t_{M-K}\) | Daily impact at each event time |
| CAR t-test | CARTTest | \(H_0: CAR_i = 0\) | \(t_{M-K}\) | Total impact over event window |
| BHAR t-test | BHARTTest | \(H_0: BHAR_i = 0\) | \(t_{M-K}\) | Long-horizon compounded returns |

All tests share the same estimation-window standard deviation:

\[ \hat{S}_i = \sqrt{\frac{1}{M - K}\sum_{t=t_0}^{t_1} AR_{i,t}^2} \]

where \(M = t_1 - t_0 + 1\) is the number of estimation-window observations and \(K\) is the number of model parameters (e.g., \(K = 2\) for the Market Model's intercept and slope).
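The standard-deviation formula can be sketched in base R. The values of `ar_est`, `M`, and `K` below are hypothetical, for illustration only:

```r
# Sketch: estimation-window standard deviation S_i for one firm.
# `ar_est` stands in for the firm's estimation-window abnormal returns
# (hypothetical values); K = 2 corresponds to the Market Model.
ar_est <- c(0.012, -0.008, 0.003, -0.015, 0.007, 0.001, -0.004, 0.010)
M <- length(ar_est)                   # estimation window length
K <- 2                                # model parameters (Market Model)
S_hat <- sqrt(sum(ar_est^2) / (M - K))
```

Dividing by \(M - K\) rather than \(M\) corrects for the degrees of freedom consumed by estimating the model's parameters.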

AR t-test

Tests whether the abnormal return on a single day is significantly different from zero.

\[ t_{AR_{i,t}} = \frac{AR_{i,t}}{\hat{S}_i} \sim t_{M-K} \]

A significant t-statistic at event time \(t=0\) indicates the event had an immediate price impact on firm \(i\).
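The arithmetic behind the test is a one-liner; the sketch below uses hypothetical values for the abnormal return and \(\hat{S}_i\), not package output:

```r
# Sketch of the AR t-test arithmetic (illustrative numbers, not package API).
AR_it <- 0.0172        # abnormal return on one event day
S_hat <- 0.0095        # estimation-window standard deviation
M <- 120; K <- 2       # estimation window length, Market Model parameters
t_stat <- AR_it / S_hat
p_val  <- 2 * pt(-abs(t_stat), df = M - K)   # two-sided p-value from t_{M-K}
```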

# AR with t-statistics for each event
task$get_ar(event_id = 1)
# A tibble: 11 × 2
   relative_index abnormal_returns
            <int>            <dbl>
 1             -5         0.0172  
 2             -4        -0.00182 
 3             -3        -0.0290  
 4             -2         0.0133  
 5             -1         0.0114  
 6              0         0.000899
 7              1         0.00415 
 8              2        -0.0114  
 9              3         0.00929 
10              4         0.00780 
11              5        -0.00530 
| Pros | Cons |
|------|------|
| Tests each day individually | Ignores cumulative effects |
| Identifies exact timing of impact | Multiple testing problem across days |
| Simple interpretation | Sensitive to model choice |

CAR t-test

Tests whether the cumulative abnormal return over the entire event window is significantly different from zero.

\[ CAR_i(t_2, t_3) = \sum_{t=t_2}^{t_3} AR_{i,t} \]

\[ t_{CAR_i} = \frac{CAR_i(t_2, t_3)}{\sqrt{t_3 - t_2 + 1} \cdot \hat{S}_i} \sim t_{M-K} \]

The denominator scales by \(\sqrt{L}\) (event window length) because the variance of the sum grows linearly with the number of independent observations.
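The \(\sqrt{L}\) scaling can be verified in a few lines of base R, here with hypothetical event-window abnormal returns and a hypothetical \(\hat{S}_i\):

```r
# Sketch of the CAR t-test (illustrative values, not package output).
ar_event <- c(0.0172, -0.0018, -0.0290, 0.0133, 0.0114,
              0.0009, 0.0042, -0.0114, 0.0093, 0.0078, -0.0053)
S_hat <- 0.0095                  # estimation-window standard deviation
L <- length(ar_event)            # event window length: t3 - t2 + 1
CAR <- sum(ar_event)             # cumulative abnormal return
t_car <- CAR / (sqrt(L) * S_hat) # compare against t_{M-K}
```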

# CAR with t-statistics
task$get_car(event_id = 1)
# A tibble: 11 × 3
   relative_index abnormal_returns       car
            <int>            <dbl>     <dbl>
 1             -5         0.0172    0.0172  
 2             -4        -0.00182   0.0154  
 3             -3        -0.0290   -0.0137  
 4             -2         0.0133   -0.000319
 5             -1         0.0114    0.0111  
 6              0         0.000899  0.0120  
 7              1         0.00415   0.0161  
 8              2        -0.0114    0.00469 
 9              3         0.00929   0.0140  
10              4         0.00780   0.0218  
11              5        -0.00530   0.0165  
plot_event_study(task, type = "car")

| Pros | Cons |
|------|------|
| Captures total event impact | Sensitive to window length choice |
| More powerful than single-day AR test | Assumes AR independence over time |
| Standard in the literature | May include confounding events in wide windows |

BHAR t-test

Designed for long-horizon studies (months to years), the BHAR t-test uses compounded (buy-and-hold) returns instead of summed abnormal returns.

\[ BHAR_i = \prod_{t=1}^{T}(1 + R_{i,t}) - \prod_{t=1}^{T}(1 + R_{m,t}) \]

\[ t_{BHAR_i} = \frac{BHAR_i}{\hat{S}_i \cdot \sqrt{T}} \sim t_{M-K} \]

The key difference from CAR: compounding captures the actual investor experience over long horizons, while summing AR underestimates cumulative returns due to the missing cross-product terms.
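The cross-product effect is easy to see with toy numbers (hypothetical returns, not package output): compounding a constant 8% monthly outperformance grows faster than summing it.

```r
# Sketch: compounded (BHAR) vs summed abnormal return over the same horizon,
# using hypothetical firm and market returns.
R_firm   <- c(0.10, 0.10, 0.10)   # firm returns over 3 periods
R_market <- c(0.02, 0.02, 0.02)   # market returns over 3 periods
bhar <- prod(1 + R_firm) - prod(1 + R_market)   # buy-and-hold abnormal return
car  <- sum(R_firm - R_market)                  # summed abnormal return
# bhar exceeds car because compounding adds the cross-product terms
```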

# Use BHAR t-test for long-horizon studies
ps <- ParameterSet$new(
  single_event_statistics = SingleEventStatisticsSet$new(
    tests = list(BHARTTest$new())
  )
)
| Pros | Cons |
|------|------|
| Captures compounding over long horizons | Not appropriate for short windows |
| Reflects actual investor experience | Standard errors grow with horizon |
| Better for IPO / M&A long-run studies | Sensitive to rebalancing assumptions |

Permutation Test

When residuals violate normality, permutation tests provide exact p-values without distributional assumptions. The idea: if the event has no effect, then the event-window AR should look no different from estimation-window AR.

Algorithm:

  1. Pool all abnormal returns from the estimation window (\(m\) observations) and event window (\(n\) observations): \(\mathbf{X} = \{X_1, \ldots, X_{m+n}\}\)
  2. Compute the test statistic \(T\) (e.g., AR t-test or CAR t-test) on the original data
  3. Randomly permute \(\mathbf{X}\) (shuffle without replacement), relabel the first \(m\) values as estimation-window and the remaining \(n\) as event-window observations, and recompute \(T^*\) on this permuted split
  4. Repeat \(B\) times (e.g., \(B = 10{,}000\))
  5. The p-value is the fraction of permutations where \(T^*\) is at least as extreme as \(T\):

\[ p_{\text{perm}} = \frac{1}{B}\sum_{b=1}^{B} \mathbf{1}[T^*_b \geq T] \]

When to use permutation tests. Use when the Shapiro-Wilk test rejects normality (see Diagnostics), or when the estimation window is short (< 60 days) and asymptotic approximations are unreliable.
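The algorithm above can be sketched in base R for a single-firm CAR statistic. Everything here is hypothetical (simulated data, a two-sided variant of the p-value); it is not the package's permutation implementation:

```r
# Sketch of the permutation test for one firm's CAR (base-R illustration).
set.seed(42)
ar_est   <- rnorm(100, mean = 0,     sd = 0.01)  # m estimation-window ARs
ar_event <- rnorm(11,  mean = 0.002, sd = 0.01)  # n event-window ARs
pool <- c(ar_est, ar_event)
m <- length(ar_est); n <- length(ar_event)

# Test statistic: CAR over the slots labeled "event window"
car_stat <- function(x) sum(x[(m + 1):(m + n)])
T_obs <- car_stat(pool)

# Permute the pooled labels B times and recompute the statistic
B <- 10000
T_perm <- replicate(B, car_stat(sample(pool)))   # sample() permutes w/o replacement
p_perm <- mean(abs(T_perm) >= abs(T_obs))        # two-sided permutation p-value
```

Because the p-value is a count over permutations, it is exact up to Monte Carlo error of order \(1/\sqrt{B}\), with no normality assumption on the residuals.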


Configuring Single-Event Tests

# Default: AR t-test + CAR t-test
ps <- ParameterSet$new()

# Add BHAR t-test
ps <- ParameterSet$new(
  single_event_statistics = SingleEventStatisticsSet$new(
    tests = list(ARTTest$new(), CARTTest$new(), BHARTTest$new())
  )
)

# One-sided test (positive abnormal returns only)
ps <- ParameterSet$new(
  single_event_statistics = SingleEventStatisticsSet$new(
    tests = list(
      ARTTest$new(confidence_type = "one-sided"),
      CARTTest$new(confidence_type = "one-sided")
    )
  )
)

# Custom confidence level
ps <- ParameterSet$new(
  single_event_statistics = SingleEventStatisticsSet$new(
    tests = list(ARTTest$new(confidence_level = 0.99))
  )
)

Literature

Next Steps