Stochastic recurrence relation

Set

Thoughts on a formalization of the notion of sampling and the repeated generation of sample paths
(as opposed to the notion of random variables, which are also governed by a probability distribution)

Fix a collection of “initial data” $x_a, x_b, \dots, x_j$. A recurrence relation is a mathematical expression that allows one to compute further values $x_k$ with $k>j$ from those already given. It can thus be used to produce a (fixed) sequence, i.e. a function with ordinals as domain.
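
For instance, a minimal sketch in Python (the name fibonacci is just an illustration): the recurrence $x_k := x_{k-1} + x_{k-2}$ with initial data $x_0 = 0$, $x_1 = 1$ produces the same sequence every time it is evaluated.

def fibonacci(n):
    x = [0, 1]                      # the fixed initial data
    for k in range(2, n):
        x.append(x[k-1] + x[k-2])   # each further value computed from earlier ones
    return x

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34] on every run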

A stochastic recurrence relation is also a mathematical expression that can be used to produce a sequence. However,

  1. one must be able to sample data from a probability distribution (either instead of, or in addition to, the initial data).
  2. the resulting sequence can (and usually will, and sometimes must) differ each time the stochastic recurrence relation is used (see the sketch after this list).
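
A minimal sketch of such a stochastic recurrence is a random walk, $X_k := X_{k-1} + \xi_k$ with each $\xi_k$ sampled from a standard normal distribution (the name random_walk below is just an illustration):

import numpy as np

def random_walk(n):
    x = np.zeros(n)                          # initial datum x_0 = 0
    for k in range(1, n):
        x[k] = x[k-1] + np.random.normal()   # a fresh sample at every step
    return x

print(random_walk(5))  # a different sequence on every run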

Euler–Maruyama method for an Itô process

(one-dimensional case; see also the Itô integral)

A popular class of such relations is of the form

$X_k := X_{k-1} + \mu(X_{k-1})\,{\mathrm d}t + \sigma(X_{k-1})\,{\mathrm d}{\mathcal W}_{{\mathrm d}t}$

where $\mu, \sigma$ are deterministic functions (i.e. they contain no stochastic expressions themselves), ${\mathrm d}t$ is some real number (the step size) and ${\mathrm d}{\mathcal W}_{{\mathrm d}t}$ is a sample value from a normal distribution with variance ${\mathrm d}t$, i.e. a finite discretization of a Wiener process (the symmetric distribution for which one has a calculus, namely the Itô lemma).

The above is the discretization of an Itô process

${\mathrm d}X_t := \mu(X_t)\,{\mathrm d}t + \sigma(X_t)\,{\mathrm d}{\mathcal W}_t$

Remark

For $\sigma(x):=0$, this reduces to the Euler scheme for the ODE

$\tfrac{{\mathrm d}}{{\mathrm d}t}x(t)=\mu(x(t))$.
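
Explicitly, setting $\sigma := 0$ in the recurrence above leaves the update rule

$X_k := X_{k-1} + \mu(X_{k-1})\,{\mathrm d}t$,

which is the forward Euler step for this equation.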

Code
import numpy as np
import matplotlib.pyplot as plt
 
num_sims = 5      # number of sample paths to draw
N        = 1000   # number of time steps per path
 
y_init = 0       # initial datum y(t_init)
t_init = 3
t_end  = 7
y_max  = 1.5     # mean-reversion level of the drift

c_theta = 0.7    # mean-reversion rate (drift parameter)
c_sigma = 0.9    # constant diffusion coefficient
 
# I permit mu and sigma also to depend on time explicitly
 
def mu(y, t):
    # drift: mean reversion towards y_max
    return c_theta * (y_max - y)

def sigma(y, t):
    # diffusion: constant volatility
    return c_sigma
 
dt = float(t_end - t_init) / N
# Wiener increment: a sample from a normal distribution with mean 0 and variance dt
dW = lambda: np.random.normal(loc=0.0, scale=np.sqrt(dt))
 
t    = np.arange(t_init, t_end, dt)
y    = np.zeros(t.size)
y[0] = y_init
 
for i_sim in range(num_sims):
    for i in range(1, t.size):
        a =    mu(y[i-1], t[i-1])
        b = sigma(y[i-1], t[i-1])
        y[i] = y[i-1] + a * dt + b * dW()
 
    plt.plot(t, y)
 
plt.show()
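
Since the Wiener increments are sampled afresh in every iteration, each of the num_sims paths, and each run of the script, yields a different sequence; seeding the random number generator (e.g. with np.random.seed) makes the output reproducible.
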
Feynman–Kac formula

The solution of a large class of parabolic partial differential equations can be written as the expectation value of a certain random variable that is a function of the above $X_t$.
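
As a sketch (the function feynman_kac_estimate and its parameters are made up for this illustration, not taken from any library): for the heat equation $\partial_t u = \tfrac{1}{2}\partial_x^2 u$ with initial datum $u(0,\cdot) = f$, the Feynman–Kac representation reads $u(t,x) = \mathbb{E}\big[f(x + {\mathcal W}_t)\big]$, and this expectation can be estimated by averaging $f$ over simulated endpoints of the recurrence above with $\mu := 0$ and $\sigma := 1$.

import numpy as np

def feynman_kac_estimate(f, x, t, num_paths=10000, N=200):
    # Monte Carlo estimate of u(t, x) = E[ f(X_t) ] for dX = dW, X_0 = x,
    # i.e. of the solution of u_t = (1/2) u_xx with u(0, .) = f
    dt = float(t) / N
    X = np.full(num_paths, float(x))
    for _ in range(N):
        X += np.random.normal(loc=0.0, scale=np.sqrt(dt), size=num_paths)
    return f(X).mean()

# example: f(x) = x^2, for which u(t, x) = x^2 + t exactly
print(feynman_kac_estimate(lambda x: x**2, x=1.0, t=0.5))  # roughly 1.5

For a general drift $\mu$ and diffusion $\sigma$, one would simulate the paths with the Euler–Maruyama recurrence from the Code section instead of plain Wiener increments.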

Reference

https://en.wikipedia.org/wiki/Feynman%E2%80%93Kac_formula


Context

Sequence