Causal Inference for Time Series

Causal inference involves estimating the effect of intervening on one set of variables on another variable. For instance, consider the causal chain A->B->C. All three variables may be correlated, but intervening on C does not affect the values of B, since C is not a causal ancestor of B. On the other hand, intervening on A or B does affect the values of C.

While there are many different kinds of causal inference questions one may be interested in, we currently support three kinds: Average Treatment Effect (ATE), conditional ATE (CATE), and counterfactuals. In ATE, we intervene on one set of variables with a treatment value and a control value, and estimate the expected change in the value of some specified target variable. Mathematically,

\[\texttt{ATE} = \mathbb{E}[Y | \texttt{do}(X=x_t)] - \mathbb{E}[Y | \texttt{do}(X=x_c)]\]

where \(\texttt{do}\) denotes the intervention operation. In words, ATE measures the expected difference in the value of \(Y\) when we intervene to set \(X=x_t\) compared to when we intervene to set \(X=x_c\). Here \(x_t\) and \(x_c\) are the treatment value and the control value, respectively.
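
As a rough illustration of what ATE measures, here is a minimal simulation-based sketch in plain numpy, using a hypothetical linear SCM \(Y = 2X + noise\) (the names and numbers below are illustrative, not part of the library):

import numpy as np

rng = np.random.default_rng(0)
N = 100000

def simulate_y(x):
    # hypothetical SCM: Y = 2*X + noise, so the true ATE is 2*(x_t - x_c)
    return 2 * x + rng.standard_normal(N)

x_t, x_c = 1.0, 0.0  # treatment and control values
ate = simulate_y(x_t).mean() - simulate_y(x_c).mean()
print(f'ATE estimate: {ate:.2f}')  # close to 2.0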

CATE makes a similar estimate, but under some condition specified for a set of variables. Mathematically,

\[\texttt{CATE} = \mathbb{E}[Y | \texttt{do}(X=x_t), C=c] - \mathbb{E}[Y | \texttt{do}(X=x_c), C=c]\]

where we condition on some set of variables \(C\) taking value \(c\). Notice here that \(X\) is intervened but \(C\) is not.
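
Continuing the toy sketch from above, a hypothetical SCM \(Y = C \cdot X + noise\) makes the conditional nature of CATE visible: the effect of intervening on \(X\) depends on the value of \(C\) we condition on (again illustrative, not library code):

import numpy as np

rng = np.random.default_rng(0)
N = 100000
C = rng.standard_normal(N)

def simulate_y(x, C):
    # hypothetical SCM: Y = C*X + noise, so CATE at C=c equals c*(x_t - x_c)
    return C * x + rng.standard_normal(N)

x_t, x_c = 1.0, 0.0
c = 2.0
mask = np.abs(C - c) < 0.1  # samples where the condition C=c approximately holds
cate = (simulate_y(x_t, C) - simulate_y(x_c, C))[mask].mean()
print(f'CATE estimate at C={c}: {cate:.2f}')  # close to 2.0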

While ATE and CATE estimate expectations over the population, counterfactuals aim to estimate the effect of an intervention on a specific instance or sample. Suppose we have a specific instance of a system of random variables \((X_1, X_2,...,X_N)\) given by \((X_1=x_1, X_2=x_2,...,X_N=x_N)\). In a counterfactual, we want to know the effect an intervention (say) \(X_1=k\) would have had on some other variable(s) (say \(X_2\)), holding all the remaining variables fixed. Mathematically, this can be expressed as,

\[\texttt{Counterfactual} = X_2 | \texttt{do}(X_1=k), X_3=x_3, X_4=x_4,\cdots,X_N=x_N\]
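
For additive-noise models, counterfactuals can be computed by the classic abduction-action-prediction recipe: recover the sample-specific noise from the observed instance, apply the intervention, and recompute the affected variables. A minimal sketch with a hypothetical two-variable SCM:

# hypothetical additive-noise SCM: X1 = n1, X2 = 3*X1 + n2
x1_obs, x2_obs = 0.5, 2.0        # the observed instance

n2 = x2_obs - 3 * x1_obs         # abduction: recover the noise, n2 = 0.5
k = 1.0                          # action: intervene do(X1 = k)
x2_counterfactual = 3 * k + n2   # prediction: recompute X2 with the same noise
print(x2_counterfactual)         # 3.5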

To understand how causal inference works in the case of time series, let’s consider the following graph as an example:

[1]:
from causalai.misc.misc import plot_graph
from causalai.data.data_generator import DataGenerator


fn = lambda x:x
coef = 1.
sem = {
        'a': [(('a', -1), coef, fn),],
        'b': [(('a', -1), coef, fn), (('b', -1), coef, fn),],
        'c': [(('c', -1), coef, fn), (('b', -1), coef, fn),],
        'd': [(('c', -1), coef, fn), (('d', -1), coef, fn),]
        }
T = 2000
data,var_names,graph_gt = DataGenerator(sem, T=T, seed=0)
plot_graph(graph_gt)

Given this graph with 4 variables (a, b, c, and d), and some observational data in the form of a \(T \times 4\) matrix, suppose we want to estimate the causal effect of an intervention on the variable b on the variable d. The SCM for this graph takes the form:

\[a[t] = f_a(a[t-1]) + n_a\]
\[b[t] = f_b(a[t-1], b[t-1]) + n_b\]
\[c[t] = f_c(b[t-1], c[t-1]) + n_c\]
\[d[t] = f_d(c[t-1], d[t-1]) + n_d\]

Here the \(n_x\) are noise terms. Intervening on the values of the variable b at each time step, i.e., \(do(b[t])\) for every \(t\), causally affects the values of \(d\). This is because \(d\) directly depends on \(c\), and \(c\) depends on \(b\); thus there is an indirect causal effect.

Notice that if we were to intervene on both \(a\) and \(b\), the intervention on \(a\) would not have any impact on \(d\), because it is blocked by \(b\), which is also intervened. On the other hand, if we were to intervene on \(c\) in addition to \(b\), then the intervention on \(b\) would not have any impact on \(d\), because it would be blocked by \(c\).
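
This blocking argument can be sanity-checked empirically with the DataGenerator call and sem defined above. Since the seed is fixed, the noise terms match across runs, so once b is intervened, additionally intervening a should leave d unchanged (a rough check, not a formal test):

import numpy as np

b_int = np.ones(T)
data_b, _, _ = DataGenerator(sem, T=T, seed=0, intervention={'b': b_int})
data_ab, _, _ = DataGenerator(sem, T=T, seed=0,
                              intervention={'a': 5*np.ones(T), 'b': b_int})

d_idx = var_names.index('d')
print(np.allclose(data_b[:, d_idx], data_ab[:, d_idx]))  # True: a's effect is blocked by the intervened b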

Coming back to the example shown in the above graph, we have established that an intervention on the values of \(b\) impacts the values of \(d\). Now suppose we want to calculate the treatment effect (say ATE) of this intervention on \(d\). For the purpose of this exposition, let's consider just one of the terms in the ATE formula above, since both terms have the same form. Specifically, we want to calculate,

\[\mathbb{E}_t[d[t] | \texttt{do}(b)]\]

Conceptually, this is achieved by setting the value of \(b[t]=v\) (\(v\) is any desired value) in the observational data at every time step \(t \in \{0,1,...,T-1\}\), and then, starting from \(t=0\), iterating through the above equations in the order \(b[t]\), \(c[t]\), \(d[t]\) (the causal order) for each time step \(t\). Notice that we do not need to evaluate the equation for \(a[t]\): the intervention does not affect its value at any time step, so it remains the same as in the given observational data. This saves computation. We would similarly ignore any other variable during this computation if it was either not affected by the intervention, or if there was no causal path from it to the target variable \(d\). Finally, to compute \(\mathbb{E}_t[d[t] | \texttt{do}(b)]\), we simply average the values of \(d\) computed using this procedure over all time steps.
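
To make this concrete, here is a minimal numpy sketch of this replay procedure for the linear SCM above (identity \(f\), unit coefficients). Because the data here is synthetic and the true functions are known, we can recover the noise terms as residuals; in practice this is exactly what is unavailable, which motivates the model-based method described below:

import numpy as np

b = data[:, var_names.index('b')]
c = data[:, var_names.index('c')]
d = data[:, var_names.index('d')]

# noise residuals implied by the observational data (possible only because
# the true functions are known in this synthetic setting)
n_c = c[1:] - (b[:-1] + c[:-1])
n_d = d[1:] - (c[:-1] + d[:-1])

v = 1.0                       # intervention value, do(b[t] = v) for all t
b_new = np.full_like(b, v)
c_new, d_new = c.copy(), d.copy()
for t in range(1, len(b)):    # replay the equations in causal order: b, c, d
    c_new[t] = b_new[t-1] + c_new[t-1] + n_c[t-1]
    d_new[t] = c_new[t-1] + d_new[t-1] + n_d[t-1]

print(d_new.mean())           # estimate of E_t[ d[t] | do(b=v) ]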

Now that we have a conceptual understanding, we point out that the functions \(f_x\) for \(x \in \{b,c,d \}\) are unknown in practice. In fact, given only observational data, we do not even know the causal graph, such as the one shown in the example above. Therefore, causal inference is treated as a two-step process: first, we estimate the causal graph from the observational data; then, we use one of various techniques to perform causal inference given both the observational data and the causal graph.

Causal Inference methods supported by CausalAI

In our library, for time series data, we support our in-house causal_path method that simulates the conceptual process described above for causal inference.

causal_path method (default)

Conceptually, this method works in two steps. For illustration, let's use the causal graph shown above as our example.

1. We train two models \(P_{\theta_1}(c[t]|c[t-1], b[t-1])\) and \(P_{\theta_2}(d[t]|d[t-1], c[t-1])\) to predict \(c[t]\) from \((c[t-1], b[t-1])\) and \(d[t]\) from \((d[t-1], c[t-1])\), using the observational data. The intervention information is not used in this step.
2. We set the value of \(b[t]=v\) (\(v\) is the desired intervention value) for all time steps in the observational data, then traverse the causal graph in the order \(b\), \(c\), \(d\) (the causal order) for each observation. For each of the nodes \(c\) and \(d\), we use the corresponding trained models \(P_{\theta_1}\) and \(P_{\theta_2}\) as proxies for the unknown functions \(f_c\) and \(f_d\), and follow the steps described above to estimate the causal effect.
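
Below is a simplified sketch of these two steps, using sklearn's LinearRegression as the proxy models on the data generated in the first code cell above. This illustrates the idea only; it is not the library's internal implementation:

import numpy as np
from sklearn.linear_model import LinearRegression

b_i, c_i, d_i = (var_names.index(x) for x in 'bcd')

# Step 1: fit proxies for P(c[t] | c[t-1], b[t-1]) and P(d[t] | d[t-1], c[t-1])
model_c = LinearRegression().fit(data[:-1][:, [c_i, b_i]], data[1:, c_i])
model_d = LinearRegression().fit(data[:-1][:, [d_i, c_i]], data[1:, d_i])

# Step 2: intervene b[t]=v everywhere and roll the graph forward in causal order
v = 1.0
b_new = np.full(len(data), v)
c_new = data[:, c_i].copy()
d_new = data[:, d_i].copy()
for t in range(1, len(data)):
    c_new[t] = model_c.predict(np.array([[c_new[t-1], b_new[t-1]]]))[0]
    d_new[t] = model_d.predict(np.array([[d_new[t-1], c_new[t-1]]]))[0]

print(d_new.mean())  # estimate of E_t[ d[t] | do(b=v) ]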

[1]:
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
import pickle as pkl
import time
from functools import partial

from causalai.misc.misc import plot_graph
from causalai.data.data_generator import DataGenerator, ConditionalDataGenerator
from causalai.models.time_series.causal_inference import CausalInference
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def define_treatments(name, t,c):
    treatment = dict(var_name=name,
                    treatment_value=t,
                    control_value=c)
    return treatment

Continuous Data

For this example, we will use synthetic data that has linear dependence among data variables.

[2]:
fn = lambda x:x
coef = 0.1
sem = {
        'a': [],
        'b': [(('a', -1), coef, fn), (('f', -1), coef, fn)],
        'c': [(('b', -2), coef, fn), (('f', -2), coef, fn)],
        'd': [(('b', -4), coef, fn), (('g', -1), coef, fn)],
        'e': [(('f', -1), coef, fn)],
        'f': [],
        'g': [],
        }
T = 5000
data,var_names,graph_gt = DataGenerator(sem, T=T, seed=0)
graph_gt
[2]:
{'a': [],
 'b': [('a', -1), ('f', -1)],
 'c': [('b', -2), ('f', -2)],
 'd': [('b', -4), ('g', -1)],
 'e': [('f', -1)],
 'f': [],
 'g': []}
[3]:
# Notice c does not depend on a if we intervene on b. Hence intervening a has no effect in this case.
# This can be verified by changing the intervention values of variable a, which should have no impact on the ATE.
# (see graph_gt above)

t1='a'
t2='b'
target = 'c'
target_var = var_names.index(target)

intervention11 = 1*np.ones(T)
intervention21 = 10*np.ones(T)
intervention_data1,_,_ = DataGenerator(sem, T=T, seed=0,
                        intervention={t1:intervention11, t2:intervention21})

intervention12 = -0.*np.ones(T)
intervention22 = -2.*np.ones(T)
intervention_data2,_,_ = DataGenerator(sem, T=T, seed=0,
                        intervention={t1:intervention12, t2:intervention22})



true_effect = (intervention_data1[:,target_var] - intervention_data2[:,target_var]).mean()
print("True ATE = %.2f" %true_effect)
True ATE = 1.20
[4]:

tic = time.time()

treatments = [define_treatments(t1, intervention11,intervention12),
              define_treatments(t2, intervention21,intervention22)]
# CausalInference_ = CausalInference(data, var_names, graph_gt,
#                         partial(MLPRegressor, hidden_layer_sizes=(100,100)), False)
CausalInference_ = CausalInference(data, var_names, graph_gt, LinearRegression, discrete=False)

ate, y_treat, y_control = CausalInference_.ate(target, treatments)
print(f'Estimated ATE: {ate:.2f}')

toc = time.time()
print(f'{toc-tic:.2f}s')

Estimated ATE: 1.19
0.98s

For the CATE example below, the data is generated using the following structural equation model:

\[C = noise\]
\[W = C + noise\]
\[X = C*W + noise\]
\[Y = C*X + noise\]

We will treat C as the condition variable, X as the intervention variable, and Y as the target variable in our example below. The noise used in our example is sampled from the standard Gaussian distribution.

[5]:
T=5000
data, var_names, graph_gt = ConditionalDataGenerator(T=T, data_type='time_series', seed=0, discrete=False)
# var_names = ['C', 'W', 'X', 'Y']
treatment_var='X'
target = 'Y'
target_idx = var_names.index(target)

intervention1 = 0.1*np.ones(T, dtype=float)
intervention_data1,_,_ = ConditionalDataGenerator(T=T, data_type='time_series',\
                                    seed=0, intervention={treatment_var:intervention1}, discrete=False)

intervention2 = 0.9*np.ones(T, dtype=float)
intervention_data2,_,_ = ConditionalDataGenerator(T=T, data_type='time_series',\
                                    seed=0, intervention={treatment_var:intervention2}, discrete=False)
[6]:
condition_state=2.1
diff = np.abs(data[:,0] - condition_state)
idx = np.argmin(diff)
# assert diff[idx]<0.1, f'No observational data exists for the conditional variable close to {condition_state}'


cate_gt = (intervention_data1[idx,target_idx] - intervention_data2[idx,target_idx])
print(f'Approx True CATE: {cate_gt:.2f}')

####
treatments = define_treatments(treatment_var, intervention1,intervention2)
conditions = {'var_name': 'C', 'condition_value': condition_state}

tic = time.time()
model = partial(MLPRegressor, hidden_layer_sizes=(100,100), max_iter=200)
CausalInference_ = CausalInference(data, var_names, graph_gt, model, discrete=False)#

cate = CausalInference_.cate(target, treatments, conditions, model)
toc = time.time()
print(f'Estimated CATE: {cate:.2f}')
print(f'Time taken: {toc-tic:.2f}s')
Approx True CATE: -1.69
Estimated CATE: -1.81
Time taken: 5.17s
Finally, for the counterfactual example, we generate a short time series below, take its last time step as the specific sample of interest, and estimate the effect of an intervention on the variables b and e on the target variable c.

[7]:
fn = lambda x:x
coef = 0.1
sem = {
        'a': [],
        'b': [(('a', -1), coef, fn), (('f', -1), coef, fn)],
        'c': [(('b', 0), coef, fn), (('f', -2), coef, fn)],
        'd': [(('b', -4), coef, fn), (('g', -1), coef, fn)],
        'e': [(('f', -1), coef, fn)],
        'f': [],
        'g': [],
        }
T = 5000
data,var_names,graph_gt = DataGenerator(sem, T=T, seed=0)
# plot_graph(graph_gt, node_size=500)

intervention={'b':np.array([10.]*10), 'e':np.array([-100.]*10)}
target_var = 'c'

sample, _, _= DataGenerator(sem, T=10, noise_fn=None,\
                                    intervention=None, discrete=False, nstates=10, seed=0)
sample_intervened, _, _= DataGenerator(sem, T=10, noise_fn=None,\
                                    intervention=intervention, discrete=False, nstates=10, seed=0)

sample=sample[-1] # use the last time step as our sample
sample_intervened=sample_intervened[-1] # use the last time step as our sample and compute ground truth intervention
var_orig = sample[var_names.index(target_var)]
var_counterfactual_gt = sample_intervened[var_names.index(target_var)] # ground truth counterfactual
# print(f'Original value of var {target_var}: {var_orig:.2f}')
[8]:
interventions = {name:float(val[0]) for name, val in intervention.items()}
print(f'True counterfactual {var_counterfactual_gt:.2f}')

# model = partial(MLPRegressor, hidden_layer_sizes=(100,100), max_iter=200)
model = LinearRegression
# model=None
CausalInference_ = CausalInference(data, var_names, graph_gt, model, discrete=False)
# model = None
counterfactual_et = CausalInference_.counterfactual(sample, target_var, interventions, model)
print(f'Estimated counterfactual {counterfactual_et:.2f}')
True counterfactual 1.16
Estimated counterfactual 1.26

Discrete Data

The synthetic data generation procedure for the ATE and CATE examples below is identical to the procedure followed above for the continuous case, except that the generated data is discrete in the cases below.

Importantly, "discrete" here means only that the intervention variables are treated as discrete; the target variable and the other variables are treated as continuous. Specifically, it does not make sense for the target variable to be discrete when we compute ATE or CATE, because these quantities involve estimating the difference between states of the target variable, and for discrete variables the difference between two states is not a meaningful quantity (discrete states are symbolic in nature).

For this example, we will use synthetic data that has linear dependence among data variables.

[9]:
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
import pickle as pkl
import time
from functools import partial

from causalai.data.data_generator import DataGenerator, ConditionalDataGenerator
from causalai.models.time_series.causal_inference import CausalInference
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def define_treatments(name, t,c):
    treatment = dict(var_name=name,
                    treatment_value=t,
                    control_value=c)
    return treatment
[10]:
fn = lambda x:x
coef = 0.5
sem = {
        'a': [],
        'b': [(('a', -1), coef, fn), (('f', -1), coef, fn)],
        'c': [(('b', -2), coef, fn), (('f', -2), coef, fn)],
        'd': [(('b', -4), coef, fn), (('b', -1), coef, fn), (('g', -1), coef, fn)],
        'e': [(('f', -1), coef, fn)],
        'f': [],
        'g': [],
        }
T = 5000

t1='a'
t2='b'
target = 'c'
discrete = {name:True if name in [t1,t2] else False for name in sem.keys()}

data,var_names,graph_gt = DataGenerator(sem, T=T, seed=0, discrete=discrete, nstates=10)
graph_gt
[10]:
{'a': [],
 'b': [('a', -1), ('f', -1)],
 'c': [('b', -2), ('f', -2)],
 'd': [('b', -4), ('b', -1), ('g', -1)],
 'e': [('f', -1)],
 'f': [],
 'g': []}

Notice how we specify the dictionary discrete above: the intervention variables are marked as discrete while the others are continuous, as per our explanation above.

[11]:


target_var = var_names.index(target)

# note that states can be [0,1,...,9], so the multiples below must be in this range
intervention11 = 0*np.ones(T, dtype=int)
intervention21 = 7*np.ones(T, dtype=int)
intervention_data1,_,_ = DataGenerator(sem, T=T, seed=0,
                        intervention={t1: intervention11, t2: intervention21},
                        discrete=discrete, nstates=10)

intervention12 = 9*np.ones(T, dtype=int)
intervention22 = 2*np.ones(T, dtype=int)
intervention_data2,_,_ = DataGenerator(sem, T=T, seed=0,
                        intervention={t1: intervention12, t2: intervention22},
                        discrete=discrete, nstates=10)

true_effect = (intervention_data1[:,target_var] - intervention_data2[:,target_var]).mean()
print("Ground truth ATE = %.2f" %true_effect)
Ground truth ATE = 0.82
[12]:

tic = time.time()

treatments = [define_treatments(t1, intervention11,intervention12),
              define_treatments(t2, intervention21,intervention22)]
model = partial(MLPRegressor, hidden_layer_sizes=(100,100), max_iter=200) # LinearRegression
CausalInference_ = CausalInference(data, var_names, graph_gt, model, discrete=True)

o, y_treat, y_control = CausalInference_.ate(target, treatments)
print(f'Estimated ATE: {o:.2f}')

toc = time.time()
print(f'Time taken: {toc-tic:.2f}s')

Estimated ATE: 0.78
Time taken: 2.08s

For this CATE example, we will use synthetic data that has non-linear dependence among the data variables.

[13]:
T=5000
treatment_var='X'
target = 'Y'
target_idx = ['C', 'W', 'X', 'Y'].index(target)

discrete = {name:True if name==treatment_var else False for name in ['C', 'W', 'X', 'Y']}
data, var_names, graph_gt = ConditionalDataGenerator(T=T, data_type='time_series', seed=0, discrete=discrete, nstates=10)
# var_names = ['C', 'W', 'X', 'Y']



# note that states can be [0,1,...,9], so the multiples below must be in this range
intervention1 = 9*np.ones(T, dtype=int)
intervention_data1,_,_ = ConditionalDataGenerator(T=T, data_type='time_series',\
                                    seed=0, intervention={treatment_var:intervention1}, discrete=discrete, nstates=10)

intervention2 = 1*np.ones(T, dtype=int)
intervention_data2,_,_ = ConditionalDataGenerator(T=T, data_type='time_series',\
                                    seed=0, intervention={treatment_var:intervention2}, discrete=discrete, nstates=10)
graph_gt
[13]:
{'C': [],
 'W': [('C', 0)],
 'X': [('C', 0), ('W', 0)],
 'Y': [('C', 0), ('X', 0)]}
[14]:
condition_var = 'C'
condition_var_idx = var_names.index(condition_var)
condition_state=0.5
idx = np.argmin(np.abs(data[:,condition_var_idx]-condition_state))
cate_gt = (intervention_data1[idx,target_idx] - intervention_data2[idx,target_idx]).mean()
print(f'Approx True CATE: {cate_gt:.2f}')

####
treatments = define_treatments(treatment_var, intervention1,intervention2)
conditions = {'var_name': condition_var, 'condition_value': condition_state}

tic = time.time()
model = partial(MLPRegressor, hidden_layer_sizes=(100,100), max_iter=200)
CausalInference_ = CausalInference(data, var_names, graph_gt, model, discrete=True)#

cate = CausalInference_.cate(target, treatments, conditions, model)
toc = time.time()
print(f'Estimated CATE: {cate:.2f}')
print(f'Time taken: {toc-tic:.2f}s')
Approx True CATE: 4.61
Estimated CATE: 1.55
Time taken: 7.01s
[15]:
fn = lambda x:x
coef = 0.1
sem = {
        'a': [],
        'b': [(('a', -1), coef, fn), (('f', -1), coef, fn)],
        'c': [(('b', 0), coef, fn), (('f', -2), coef, fn)],
        'd': [(('b', -4), coef, fn), (('g', -1), coef, fn)],
        'e': [(('f', -1), coef, fn)],
        'f': [],
        'g': [],
        }
T = 5000

intervention={'b':np.array([9]*10), 'e':np.array([0]*10)}
target_var = 'c'
discrete = {name:True if name in intervention.keys() else False for name in sem.keys()}

data,var_names,graph_gt = DataGenerator(sem, T=T, seed=0, discrete=discrete)
# plot_graph(graph_gt, node_size=500)

sample, _, _= DataGenerator(sem, T=10, noise_fn=None,\
                                    intervention=None, discrete=discrete, nstates=10, seed=0)
sample_intervened, _, _= DataGenerator(sem, T=10, noise_fn=None,\
                                    intervention=intervention, discrete=discrete, nstates=10, seed=0)

sample=sample[-1] # use the last time step as our sample
sample_intervened=sample_intervened[-1] # use the last time step as our sample and compute ground truth intervention
var_orig = sample[var_names.index(target_var)]
var_counterfactual_gt = sample_intervened[var_names.index(target_var)] # ground truth counterfactual
# print(f'Original value of var {target_var}: {var_orig:.2f}')

[16]:
interventions = {name:val[0] for name, val in intervention.items()}
print(f'True counterfactual {var_counterfactual_gt:.2f}')
# model = partial(MLPRegressor, hidden_layer_sizes=(100,100), max_iter=200)
model = LinearRegression
# model=None
CausalInference_ = CausalInference(data, var_names, graph_gt, model, discrete=True)
# model = None
counterfactual_et = CausalInference_.counterfactual(sample, target_var, interventions, model)
print(f'Estimated counterfactual {counterfactual_et:.2f}')
True counterfactual 0.30
Estimated counterfactual 0.22