Picard Iteration for SDEs¶
Picard iteration (successive approximations) is the constructive proof of existence and uniqueness for SDEs under Lipschitz and linear growth conditions. It is the stochastic analogue of the classical Picard–Lindelöf theorem for ODEs. The conditions that make it work are stated in Lipschitz Conditions and Linear Growth.
The Iteration Scheme¶
Given the SDE in integral form:

$$
X_t = x_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s,
$$
define the sequence \(\{X_t^{(n)}\}_{n \geq 0}\) by:

$$
X_t^{(0)} = x_0, \qquad X_t^{(n+1)} = x_0 + \int_0^t b(s, X_s^{(n)})\,ds + \int_0^t \sigma(s, X_s^{(n)})\,dW_s.
$$
Under Lipschitz and linear growth conditions, \(X^{(n)} \to X\) where \(X\) is the unique strong solution.
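The scheme is directly computable on a time grid. Below is a minimal numerical sketch (not from the text; the coefficients, grid, and seed are illustrative choices) for the linear SDE \(dX_t = aX_t\,dt + bX_t\,dW_t\): both integrals are approximated by left-point sums along one fixed sampled Brownian path, and the Picard update is applied repeatedly.

```python
import random

# Minimal sketch of the Picard scheme for dX = a*X dt + b*X dW on a grid,
# with one fixed Brownian path and left-point Riemann/Ito sums.
# All numerical values here are illustrative, not from the text.
random.seed(0)
a, b, x0, T, N = 0.5, 0.3, 1.0, 1.0, 1000
dt = T / N
dW = [random.gauss(0.0, dt ** 0.5) for _ in range(N)]

def picard_step(X):
    """One update: X_new(t) = x0 + int_0^t a*X ds + int_0^t b*X dW."""
    out, drift, stoch = [x0], 0.0, 0.0
    for k in range(N):
        drift += a * X[k] * dt
        stoch += b * X[k] * dW[k]
        out.append(x0 + drift + stoch)
    return out

X = [x0] * (N + 1)  # iterate 0: the constant path
sup_gaps = []
for _ in range(6):
    X_next = picard_step(X)
    sup_gaps.append(max(abs(u - v) for u, v in zip(X, X_next)))
    X = X_next
print(sup_gaps)  # successive sup-distances between iterates shrink rapidly
```

On this single path the sup-distance between consecutive iterates collapses quickly, which is the pathwise face of the factorial decay proved below.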
Convergence Analysis¶
Error sequence¶
Define:

$$
e_n(t) = \mathbb{E}\Big[\sup_{s \leq t}\big|X_s^{(n+1)} - X_s^{(n)}\big|^2\Big].
$$
Step 1 — Bounding the first increment¶
We need \(e_0(T) < \infty\). Starting from \(X_t^{(0)} = x_0\):

$$
X_t^{(1)} - X_t^{(0)} = \int_0^t b(s, x_0)\,ds + \int_0^t \sigma(s, x_0)\,dW_s.
$$
Since \(x_0\) is deterministic, no expectation is needed before applying linear growth. For the drift term, Cauchy–Schwarz gives:

$$
\sup_{s \leq t}\Big|\int_0^s b(u, x_0)\,du\Big|^2 \leq t \int_0^t |b(u, x_0)|^2\,du \leq K^2(1 + |x_0|^2)\,t^2.
$$
For the stochastic integral term, Doob's maximal inequality followed by the Itô isometry gives:

$$
\mathbb{E}\Big[\sup_{s \leq t}\Big|\int_0^s \sigma(u, x_0)\,dW_u\Big|^2\Big] \leq 4\,\mathbb{E}\Big[\Big|\int_0^t \sigma(s, x_0)\,dW_s\Big|^2\Big] = 4\int_0^t |\sigma(s, x_0)|^2\,ds \leq 4K^2(1 + |x_0|^2)\,t,
$$
where the linear growth condition (\(|b(s,x_0)|^2 \leq K^2(1+|x_0|^2)\) and likewise for \(\sigma\)) is used in both. Combining via \((a+b)^2 \leq 2a^2 + 2b^2\) (and taking the expectation of the deterministic drift bound):

$$
e_0(t) \leq 2K^2(1+|x_0|^2)\,t^2 + 8K^2(1+|x_0|^2)\,t \leq C(K)(1+|x_0|^2)\,t \quad \text{for } t \leq T,
$$

where \(C(K) = 2K^2(T+4)\) is a finite constant depending only on \(K\) and the fixed horizon \(T\) (using \(t^2 \leq Tt\)).
Step 2 — Recursive estimate¶
For \(n \geq 1\), write:

$$
X_t^{(n+1)} - X_t^{(n)} = \int_0^t \big[b(s, X_s^{(n)}) - b(s, X_s^{(n-1)})\big]\,ds + \int_0^t \big[\sigma(s, X_s^{(n)}) - \sigma(s, X_s^{(n-1)})\big]\,dW_s.
$$
The same three tools as Step 1, now applied to the differences, give:
| Tool | Applied to | Outcome |
|---|---|---|
| Cauchy–Schwarz | drift difference integral | \(\leq t\int_0^t K^2\,\mathbb{E}\lvert X_u^{(n)} - X_u^{(n-1)}\rvert^2\,du\) |
| Doob's maximal inequality | stochastic integral sup | \(\leq 4\,\mathbb{E}\lvert M_t\rvert^2\) |
| Itô isometry | \(\mathbb{E}\lvert M_t\rvert^2\) | \(\leq \int_0^t K^2\,\mathbb{E}\lvert X_u^{(n)} - X_u^{(n-1)}\rvert^2\,du\) |

Here \(M_t = \int_0^t [\sigma(s, X_s^{(n)}) - \sigma(s, X_s^{(n-1)})]\,dW_s\) denotes the stochastic integral term.
Applying the Lipschitz bound \(|b(t,x)-b(t,y)| + |\sigma(t,x)-\sigma(t,y)| \leq K|x-y|\) and combining:

$$
e_n(t) \leq C \int_0^t e_{n-1}(s)\,ds,
$$

where \(C = C(K, T)\) is a finite constant.
Step 3 — Factorial decay by induction¶
Claim:

$$
e_n(t) \leq \frac{(Ct)^n}{n!}\,e_0(T) \quad \text{for all } t \leq T, \; n \geq 0.
$$
Proof. Base case \(n = 0\): this is Step 1. Inductive step: assume the bound holds at \(n-1\). Then by Step 2:

$$
e_n(t) \leq C\int_0^t e_{n-1}(s)\,ds \leq C\int_0^t \frac{(Cs)^{n-1}}{(n-1)!}\,e_0(T)\,ds = \frac{(Ct)^n}{n!}\,e_0(T). \quad \square
$$
Step 4 — Convergence in \(L^2\)¶
Since \(n! \sim (n/e)^n\sqrt{2\pi n}\) grows faster than any exponential,

$$
\sum_{n=0}^\infty \sqrt{e_n(T)} \leq \sqrt{e_0(T)}\,\sum_{n=0}^\infty \frac{(CT)^{n/2}}{\sqrt{n!}} < \infty.
$$
Hence \(\{X^{(n)}\}\) is Cauchy in \(L^2(\Omega; C([0,T]))\) — the Banach space of square-integrable, a.s.-continuous \(\mathbb{R}^d\)-valued processes with norm \(\|X\| = \mathbb{E}[\sup_{t \leq T}|X_t|^2]^{1/2}\). By completeness it converges to a limit \(X\). Passing to the limit in the integral equation — using dominated convergence for the drift (the dominating function \(\sup_n |b(s, X_s^{(n)})|\) is integrable by linear growth and the uniform \(L^2\) moment bound from Step 1) and \(L^2\) continuity of the Itô integral for the diffusion — shows \(X\) satisfies the SDE.
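The \(L^2\) decay can also be observed empirically. The Monte Carlo sketch below (illustrative coefficients and sample sizes, not from the text) estimates \(e_n(T) = \mathbb{E}[\sup_{t \leq T}|X_t^{(n+1)} - X_t^{(n)}|^2]\) for a linear SDE by averaging discretized Picard iterates over many Brownian paths.

```python
import random

# Monte Carlo estimate of e_n(T) = E[sup_{t<=T} |X^(n+1)_t - X^(n)_t|^2]
# for dX = a*X dt + b*X dW, using left-point sums on a time grid.
# Coefficients, grid, path count, and seed are illustrative choices.
random.seed(1)
a, b, x0, T, N, paths = 0.5, 0.3, 1.0, 1.0, 200, 300
dt = T / N

def picard_step(X, dW):
    out, drift, stoch = [x0], 0.0, 0.0
    for k in range(N):
        drift += a * X[k] * dt
        stoch += b * X[k] * dW[k]
        out.append(x0 + drift + stoch)
    return out

n_levels = 5
sums = [0.0] * n_levels
for _ in range(paths):
    dW = [random.gauss(0.0, dt ** 0.5) for _ in range(N)]
    X = [x0] * (N + 1)
    for n in range(n_levels):
        X_next = picard_step(X, dW)
        sums[n] += max((u - v) ** 2 for u, v in zip(X, X_next))
        X = X_next
e = [s / paths for s in sums]
print(e)  # estimates of e_0(T), ..., e_4(T): rapid decay across levels
```

The estimated \(e_n(T)\) fall off sharply with \(n\), consistent with the \((CT)^n/n!\) bound.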
Uniqueness¶
Theorem. Under global Lipschitz and linear growth conditions, the solution is unique.
Proof. Let \(X_t\) and \(Y_t\) be two solutions with \(X_0 = Y_0 = x_0\). Set \(Z_t = X_t - Y_t\). Then:

$$
Z_t = \int_0^t \big[b(s, X_s) - b(s, Y_s)\big]\,ds + \int_0^t \big[\sigma(s, X_s) - \sigma(s, Y_s)\big]\,dW_s.
$$
By the same estimates as Step 2 (Cauchy–Schwarz + Doob + Itô isometry + Lipschitz):

$$
\mathbb{E}\Big[\sup_{s \leq t}|Z_s|^2\Big] \leq C \int_0^t \mathbb{E}\Big[\sup_{u \leq s}|Z_u|^2\Big]\,ds.
$$
Since \(Z_0 = 0\), Gronwall's inequality gives \(\mathbb{E}[\sup_{s \leq t}|Z_s|^2] = 0\) for all \(t\), hence \(X_t = Y_t\) a.s. \(\square\)
Example: Linear SDE (Full Iteration)¶
Consider \(dX_t = \alpha X_t\,dt + \beta X_t\,dW_t\) with \(X_0 = x_0\). This is globally Lipschitz with \(K = |\alpha| + |\beta|\).
Iterates:

$$
X_t^{(0)} = x_0, \qquad X_t^{(1)} = x_0 + \int_0^t \alpha x_0\,ds + \int_0^t \beta x_0\,dW_s = x_0(1 + \alpha t + \beta W_t).
$$
Substituting \(X_s^{(1)} = x_0(1 + \alpha s + \beta W_s)\) into the iteration:

$$
X_t^{(2)} = x_0 + \alpha\int_0^t x_0(1 + \alpha s + \beta W_s)\,ds + \beta\int_0^t x_0(1 + \alpha s + \beta W_s)\,dW_s
= x_0\Big(1 + \alpha t + \frac{\alpha^2 t^2}{2} + \alpha\beta\int_0^t W_s\,ds + \beta W_t + \alpha\beta\int_0^t s\,dW_s + \beta^2\int_0^t W_s\,dW_s\Big).
$$
Both remaining integrals evaluate explicitly. By Itô's formula applied to \(sW_s\):

$$
\int_0^t s\,dW_s = tW_t - \int_0^t W_s\,ds.
$$
By Itô's formula applied to \(f(x) = x^2/2\):

$$
\int_0^t W_s\,dW_s = \frac{W_t^2 - t}{2}.
$$
Substituting both:

$$
X_t^{(2)} = x_0\Big(1 + \alpha t + \frac{\alpha^2 t^2}{2} + \beta W_t + \alpha\beta\,t W_t + \frac{\beta^2}{2}\big(W_t^2 - t\big)\Big).
$$

This is fully explicit in \(t\) and \(W_t\): the two Itô-formula identities combine \(\int_0^t W_s\,ds\) and \(\int_0^t s\,dW_s\) into \(tW_t\). As \(n \to \infty\), the iterated stochastic integrals form the stochastic exponential series, and the sequence converges to:

$$
X_t = x_0 \exp\Big(\Big(\alpha - \frac{\beta^2}{2}\Big)t + \beta W_t\Big).
$$
This is the unique strong solution (geometric Brownian motion when \(\alpha = \mu\), \(\beta = \sigma\)), verifiable directly by Itô's formula.
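The closed form for the second iterate can be checked against a direct discretization. The sketch below (illustrative coefficients, grid, and seed; left-point sums for both integrals) computes \(X_t^{(2)}\) numerically on one Brownian path and compares it at \(t = T\) with \(x_0\big(1 + \alpha t + \tfrac{\alpha^2 t^2}{2} + \beta W_t + \alpha\beta\,tW_t + \tfrac{\beta^2}{2}(W_t^2 - t)\big)\).

```python
import random

# Numerical check (illustrative values) of the closed form for the second
# Picard iterate of dX = a*X dt + b*X dW: compare a left-point-sum
# computation of X^(2) at t = T with
#   x0*(1 + a*T + a^2 T^2/2 + b*W_T + a*b*T*W_T + b^2*(W_T^2 - T)/2).
random.seed(2)
a, b, x0, T, N = 0.5, 0.3, 1.0, 1.0, 4000
dt = T / N
dW = [random.gauss(0.0, dt ** 0.5) for _ in range(N)]
W = [0.0]
for dw in dW:
    W.append(W[-1] + dw)

def picard_step(X):
    out, drift, stoch = [x0], 0.0, 0.0
    for k in range(N):
        drift += a * X[k] * dt
        stoch += b * X[k] * dW[k]
        out.append(x0 + drift + stoch)
    return out

X2 = picard_step(picard_step([x0] * (N + 1)))  # two Picard updates
WT = W[-1]
closed = x0 * (1 + a * T + a**2 * T**2 / 2 + b * WT
               + a * b * T * WT + b**2 * (WT**2 - T) / 2)
print(abs(X2[-1] - closed))  # small: only discretization error remains
```

The residual is pure discretization error (the left-point sums only approximate the Itô identities), and it shrinks as the grid is refined.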
Comparison with ODE Picard Iteration¶
| Aspect | ODE | SDE |
|---|---|---|
| Iteration | \(x^{(n+1)} = x_0 + \int_0^t f(s,x^{(n)})\,ds\) | \(X^{(n+1)} = x_0 + \int_0^t b(s,X^{(n)})\,ds + \int_0^t \sigma(s,X^{(n)})\,dW_s\) |
| Convergence metric | \(\sup\) norm | \(L^2\)-\(\sup\) norm: \(\mathbb{E}[\sup_{t \leq T}\lvert X_t\rvert^2]^{1/2}\) |
| Extra tools vs ODE | — | Doob's maximal inequality + Itô isometry |
| Decay rate | \((Kt)^n/n!\) | \((Ct)^n/n!\) |
| Uniqueness | Gronwall on \(\sup_{s \leq t}\lvert x_s - y_s\rvert\) | Gronwall on \(\mathbb{E}[\sup_{s \leq t}\lvert Z_s\rvert^2]\) |
Summary¶
The argument has four logical layers:
- Linear growth + Cauchy–Schwarz + Doob + Itô isometry bound the first increment: \(e_0(T) < \infty\).
- Lipschitz + the same three tools give the recursive bound \(e_n(t) \leq C\int_0^t e_{n-1}(s)\,ds\).
- Induction produces the factorial decay \(e_n(t) \leq (Ct)^n e_0(T)/n!\).
- Gronwall with zero initial data closes the uniqueness argument.
Exercises¶
Exercise 1. Consider the SDE \(dX_t = \alpha\,dt + \beta\,dW_t\) with constant coefficients and \(X_0 = x_0\). Compute the Picard iterates \(X_t^{(0)}\), \(X_t^{(1)}\), and \(X_t^{(2)}\) explicitly, and verify that all iterates coincide with the exact solution \(X_t = x_0 + \alpha t + \beta W_t\).
Solution to Exercise 1
The coefficients are \(b(t,x) = \alpha\) and \(\sigma(t,x) = \beta\), both constant (independent of \(x\)).
Iterate 0:

$$
X_t^{(0)} = x_0.
$$
Iterate 1:

$$
X_t^{(1)} = x_0 + \int_0^t \alpha\,ds + \int_0^t \beta\,dW_s = x_0 + \alpha t + \beta W_t.
$$
Iterate 2: Since \(X_s^{(1)} = x_0 + \alpha s + \beta W_s\), we substitute into the iteration formula. Because \(b\) and \(\sigma\) are constant (they do not depend on \(X_s^{(1)}\)):

$$
X_t^{(2)} = x_0 + \int_0^t \alpha\,ds + \int_0^t \beta\,dW_s = x_0 + \alpha t + \beta W_t.
$$
Since the coefficients are independent of \(x\), every iterate gives the same result:

$$
X_t^{(n)} = x_0 + \alpha t + \beta W_t \quad \text{for all } n \geq 1.
$$
This coincides with the exact solution \(X_t = x_0 + \alpha t + \beta W_t\), which can be verified directly by taking differentials: \(dX_t = \alpha\,dt + \beta\,dW_t\).
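That all iterates coincide is also visible numerically: with constant coefficients the Picard update never reads the previous iterate. A minimal sketch (illustrative values, left-point sums on a grid):

```python
import random

# Additive-noise case dX = alpha dt + beta dW (constant coefficients,
# illustrative values): the Picard update ignores its input, so every
# iterate after the first is literally the same path.
random.seed(3)
alpha, beta, x0, T, N = 0.2, 0.7, 1.5, 1.0, 500
dt = T / N
dW = [random.gauss(0.0, dt ** 0.5) for _ in range(N)]

def picard_step(X):
    out, drift, stoch = [x0], 0.0, 0.0
    for k in range(N):
        drift += alpha * dt      # b(t, x) = alpha, independent of X[k]
        stoch += beta * dW[k]    # sigma(t, x) = beta, independent of X[k]
        out.append(x0 + drift + stoch)
    return out

X1 = picard_step([x0] * (N + 1))
X2 = picard_step(X1)
print(X1 == X2)  # True: the update never depends on the previous iterate
```

The two iterates are bitwise identical, since the same sums are accumulated in the same order regardless of the input path.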
Exercise 2. For the linear SDE \(dX_t = \alpha X_t\,dt + \beta X_t\,dW_t\), the first Picard iterate is

$$
X_t^{(1)} = x_0(1 + \alpha t + \beta W_t).
$$
Compute \(\mathbb{E}[|X_t^{(1)} - X_t^{(0)}|^2]\) and verify that it matches the bound \(e_0(t) \leq C(K)(1 + |x_0|^2)\,t\) from Step 1 of the convergence analysis.
Solution to Exercise 2
We have \(X_t^{(0)} = x_0\) and \(X_t^{(1)} = x_0(1 + \alpha t + \beta W_t)\), so:

$$
X_t^{(1)} - X_t^{(0)} = x_0(\alpha t + \beta W_t).
$$
Computing the second moment:

$$
\mathbb{E}\big[|X_t^{(1)} - X_t^{(0)}|^2\big] = x_0^2\,\mathbb{E}\big[(\alpha t + \beta W_t)^2\big].
$$
Expanding the square and using \(\mathbb{E}[W_t] = 0\), \(\mathbb{E}[W_t^2] = t\):

$$
\mathbb{E}\big[(\alpha t + \beta W_t)^2\big] = \alpha^2 t^2 + 2\alpha\beta t\,\mathbb{E}[W_t] + \beta^2\,\mathbb{E}[W_t^2] = \alpha^2 t^2 + \beta^2 t.
$$
Therefore:

$$
\mathbb{E}\big[|X_t^{(1)} - X_t^{(0)}|^2\big] = x_0^2\big(\alpha^2 t^2 + \beta^2 t\big).
$$
For the bound \(e_0(t) \leq C(K)(1 + |x_0|^2)\,t\), note that the Lipschitz constant is \(K = |\alpha| + |\beta|\). We have:

$$
x_0^2\big(\alpha^2 t^2 + \beta^2 t\big) \leq x_0^2\,t\,\big(\alpha^2 T + \beta^2\big) \leq (1 + |x_0|^2)\,K^2(T+1)\,t \quad \text{for } t \leq T,
$$
where the last inequality uses \(\alpha^2 T + \beta^2 \leq K^2(T+1)\) and absorbing constants. This confirms the Step 1 bound.
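The exact second moment \(\mathbb{E}[|X_t^{(1)} - X_t^{(0)}|^2] = x_0^2(\alpha^2 t^2 + \beta^2 t)\) can be checked by sampling \(W_t\) directly. A Monte Carlo sketch (illustrative parameters and sample size):

```python
import random, math

# Monte Carlo check of E|X^(1)_t - X^(0)_t|^2 = x0^2*(alpha^2 t^2 + beta^2 t)
# by sampling W_t ~ N(0, sqrt(t)). Parameters and seed are illustrative.
random.seed(4)
alpha, beta, x0, t, M = 0.5, 0.3, 1.0, 1.0, 200_000
acc = 0.0
for _ in range(M):
    Wt = random.gauss(0.0, math.sqrt(t))
    acc += (x0 * (alpha * t + beta * Wt)) ** 2
mc = acc / M
exact = x0**2 * (alpha**2 * t**2 + beta**2 * t)
print(mc, exact)  # the two agree to Monte Carlo accuracy
```

With these parameters the exact value is \(0.34\); the Monte Carlo estimate matches it to sampling accuracy.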
Exercise 3. Prove the recursive estimate in detail. Starting from

$$
X_t^{(n+1)} - X_t^{(n)} = \int_0^t \big[b(s, X_s^{(n)}) - b(s, X_s^{(n-1)})\big]\,ds + \int_0^t \big[\sigma(s, X_s^{(n)}) - \sigma(s, X_s^{(n-1)})\big]\,dW_s,
$$
apply (a) the inequality \((a+b)^2 \leq 2a^2 + 2b^2\), (b) Cauchy–Schwarz to the drift integral, (c) Doob's maximal inequality followed by the Itô isometry to the stochastic integral, and (d) the Lipschitz condition, to arrive at

$$
e_n(t) \leq C \int_0^t e_{n-1}(s)\,ds.
$$
Identify the constant \(C\) in terms of the Lipschitz constant \(K\) and the time horizon \(T\).
Solution to Exercise 3
Starting from the difference:

$$
X_t^{(n+1)} - X_t^{(n)} = I_t + M_t, \qquad I_t = \int_0^t \big[b(u, X_u^{(n)}) - b(u, X_u^{(n-1)})\big]\,du, \quad M_t = \int_0^t \big[\sigma(u, X_u^{(n)}) - \sigma(u, X_u^{(n-1)})\big]\,dW_u.
$$
(a) Apply \((a+b)^2 \leq 2a^2 + 2b^2\):

$$
\sup_{s \leq t}\big|X_s^{(n+1)} - X_s^{(n)}\big|^2 \leq 2\sup_{s \leq t}|I_s|^2 + 2\sup_{s \leq t}|M_s|^2.
$$
(b) For the drift term, Cauchy–Schwarz gives:

$$
|I_s|^2 \leq s \int_0^s \big|b(u, X_u^{(n)}) - b(u, X_u^{(n-1)})\big|^2\,du,
$$
so \(\sup_{s \leq t}|I_s|^2 \leq t \int_0^t |b(u, X_u^{(n)}) - b(u, X_u^{(n-1)})|^2\,du\).
(c) For the stochastic integral, Doob's maximal inequality gives:

$$
\mathbb{E}\Big[\sup_{s \leq t}|M_s|^2\Big] \leq 4\,\mathbb{E}\big[|M_t|^2\big],
$$
and the Itô isometry gives:

$$
\mathbb{E}\big[|M_t|^2\big] = \int_0^t \mathbb{E}\big|\sigma(u, X_u^{(n)}) - \sigma(u, X_u^{(n-1)})\big|^2\,du.
$$
(d) The Lipschitz condition \(|b(s,x) - b(s,y)| + |\sigma(s,x) - \sigma(s,y)| \leq K|x-y|\) bounds each coefficient difference separately, so:

$$
\big|b(u, X_u^{(n)}) - b(u, X_u^{(n-1)})\big|^2 \leq K^2\big|X_u^{(n)} - X_u^{(n-1)}\big|^2, \qquad \big|\sigma(u, X_u^{(n)}) - \sigma(u, X_u^{(n-1)})\big|^2 \leq K^2\big|X_u^{(n)} - X_u^{(n-1)}\big|^2.
$$

Combining all estimates and taking expectations:

$$
e_n(t) \leq 2tK^2\int_0^t \mathbb{E}\big|X_u^{(n)} - X_u^{(n-1)}\big|^2\,du + 8K^2\int_0^t \mathbb{E}\big|X_u^{(n)} - X_u^{(n-1)}\big|^2\,du = 2K^2(t+4)\int_0^t \mathbb{E}\big|X_u^{(n)} - X_u^{(n-1)}\big|^2\,du.
$$

Since \(\mathbb{E}|X_s^{(n)} - X_s^{(n-1)}|^2 \leq \mathbb{E}[\sup_{u \leq s}|X_u^{(n)} - X_u^{(n-1)}|^2] = e_{n-1}(s)\):

$$
e_n(t) \leq 2K^2(t+4)\int_0^t e_{n-1}(s)\,ds \leq C\int_0^t e_{n-1}(s)\,ds,
$$

where \(C = 2K^2(T + 4)\).
Exercise 4. The factorial decay \(e_n(t) \leq (Ct)^n e_0(T)/n!\) is proved by induction. Use this bound to show that

$$
\sum_{n=0}^\infty \sqrt{e_n(T)} < \infty.
$$
Explain why this summability implies that \(\{X^{(n)}\}\) is a Cauchy sequence in the space \(L^2(\Omega; C([0,T], \mathbb{R}^d))\).
Solution to Exercise 4
Using the factorial decay bound \(e_n(t) \leq (Ct)^n e_0(T)/n!\):

$$
\sqrt{e_n(T)} \leq \frac{(CT)^{n/2}}{\sqrt{n!}}\,\sqrt{e_0(T)}.
$$
Therefore:

$$
\sum_{n=0}^\infty \sqrt{e_n(T)} \leq \sqrt{e_0(T)}\,\sum_{n=0}^\infty \frac{(CT)^{n/2}}{\sqrt{n!}}.
$$
The series \(\sum_{n=0}^\infty (CT)^{n/2}/\sqrt{n!}\) converges by comparison with the exponential series. Specifically, by Stirling's approximation \(n! \geq (n/e)^n\), so \(\sqrt{n!} \geq (n/e)^{n/2}\), and the ratio test gives:

$$
\frac{(CT)^{(n+1)/2}/\sqrt{(n+1)!}}{(CT)^{n/2}/\sqrt{n!}} = \sqrt{\frac{CT}{n+1}} \longrightarrow 0,
$$
so the series converges.
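The summability is easy to confirm numerically for a concrete (illustrative) choice of \(C\) and \(T\): the terms \((CT)^{n/2}/\sqrt{n!}\) decay super-exponentially once \(n\) exceeds \(CT\), and the partial sums stabilize.

```python
import math

# Deterministic check that sum_n (C*T)^(n/2)/sqrt(n!) converges.
# C and T are illustrative values, not taken from the text.
C, T = 4.0, 1.0
terms = [(C * T) ** (n / 2) / math.sqrt(math.factorial(n)) for n in range(60)]
ratios = [terms[n + 1] / terms[n] for n in range(59)]  # = sqrt(C*T/(n+1))
partial, s = [], 0.0
for term in terms:
    s += term
    partial.append(s)
print(partial[-1], terms[-1], ratios[-1])
```

The ratio test terms \(\sqrt{CT/(n+1)}\) drop below 1 quickly, the individual terms become negligible, and the partial sums change by essentially nothing over the final levels.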
Why this implies Cauchy: Define \(S_N(t) = X_t^{(0)} + \sum_{n=0}^{N-1}(X_t^{(n+1)} - X_t^{(n)})\), so \(S_N = X^{(N)}\). The norm in \(L^2(\Omega; C([0,T]))\) is \(\|X\| = \mathbb{E}[\sup_{t \leq T}|X_t|^2]^{1/2}\). For \(M > N\):

$$
\big\|X^{(M)} - X^{(N)}\big\| \leq \sum_{n=N}^{M-1} \big\|X^{(n+1)} - X^{(n)}\big\| = \sum_{n=N}^{M-1} \sqrt{e_n(T)} \longrightarrow 0 \quad \text{as } N \to \infty.
$$
Since \(\sum_n \sqrt{e_n(T)} < \infty\), the partial sums form a Cauchy sequence. Completeness of \(L^2(\Omega; C([0,T], \mathbb{R}^d))\) then guarantees convergence to a limit \(X\).
Exercise 5. Consider the Ornstein–Uhlenbeck SDE \(dX_t = -\kappa X_t\,dt + \nu\,dW_t\) with \(X_0 = x_0\). Compute the first three Picard iterates \(X_t^{(0)}, X_t^{(1)}, X_t^{(2)}\) and verify that they match the Taylor expansion (in powers of \(\kappa\)) of the known exact solution

$$
X_t = x_0 e^{-\kappa t} + \nu \int_0^t e^{-\kappa(t-s)}\,dW_s.
$$
Solution to Exercise 5
The OU SDE is \(dX_t = -\kappa X_t\,dt + \nu\,dW_t\) with \(b(t,x) = -\kappa x\) and \(\sigma(t,x) = \nu\).
Iterate 0:

$$
X_t^{(0)} = x_0.
$$
Iterate 1:

$$
X_t^{(1)} = x_0 + \int_0^t (-\kappa x_0)\,ds + \int_0^t \nu\,dW_s = x_0(1 - \kappa t) + \nu W_t.
$$
Iterate 2: Substituting \(X_s^{(1)} = x_0(1 - \kappa s) + \nu W_s\):

$$
X_t^{(2)} = x_0 - \kappa\int_0^t \big[x_0(1 - \kappa s) + \nu W_s\big]\,ds + \nu W_t = x_0\Big(1 - \kappa t + \frac{\kappa^2 t^2}{2}\Big) - \kappa\nu\int_0^t W_s\,ds + \nu W_t.
$$
Comparison with the exact solution: The known solution is:

$$
X_t = x_0 e^{-\kappa t} + \nu\int_0^t e^{-\kappa(t-s)}\,dW_s.
$$
The Taylor expansion of the deterministic part is \(x_0 e^{-\kappa t} = x_0(1 - \kappa t + \kappa^2 t^2/2 - \cdots)\). For the stochastic integral, expand \(e^{-\kappa(t-s)} = 1 - \kappa(t-s) + \cdots\):

$$
\nu\int_0^t e^{-\kappa(t-s)}\,dW_s = \nu W_t - \kappa\nu\int_0^t (t-s)\,dW_s + O(\kappa^2).
$$
Using integration by parts, \(\int_0^t (t-s)\,dW_s = tW_t - \int_0^t s\,dW_s\) and \(\int_0^t s\,dW_s = tW_t - \int_0^t W_s\,ds\), so \(\int_0^t(t-s)\,dW_s = \int_0^t W_s\,ds\). Therefore the first-order stochastic correction is \(-\kappa\nu\int_0^t W_s\,ds\), matching \(X_t^{(2)}\).
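The identity \(\int_0^t (t-s)\,dW_s = \int_0^t W_s\,ds\) can be checked on a sampled path: with left-point sums for the \(dW\) integral and right-point sums for the \(ds\) integral, the discrete summation-by-parts identity holds exactly, up to floating-point rounding. A sketch (illustrative grid and seed):

```python
import random

# Discrete check of int_0^T (T-s) dW_s = int_0^T W_s ds on one path.
# Left-point sums for dW, right-point sums for ds make the discrete
# summation-by-parts identity exact. Grid and seed are illustrative.
random.seed(5)
T, N = 1.0, 1000
dt = T / N
dW = [random.gauss(0.0, dt ** 0.5) for _ in range(N)]
W = [0.0]
for dw in dW:
    W.append(W[-1] + dw)

lhs = sum((T - k * dt) * dW[k] for k in range(N))  # int (T - s) dW, left-point
rhs = sum(W[k + 1] * dt for k in range(N))         # int W ds, right-point
print(abs(lhs - rhs))  # zero up to floating-point rounding
```

This mirrors the continuous argument: Abel summation of \(\sum s_k\,\Delta W_k\) is the discrete form of the integration by parts used above.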
Exercise 6. In the ODE setting, Picard iteration for \(\dot{x} = f(t,x)\) converges in the supremum norm. In the SDE setting, convergence is measured in the \(L^2\)-supremum norm \(\|X\| = \mathbb{E}[\sup_{t \leq T}|X_t|^2]^{1/2}\). Explain why the supremum norm alone is insufficient for SDEs. Specifically, what goes wrong if one tries to bound \(\sup_{s \leq t}|X_s^{(n+1)} - X_s^{(n)}|\) pathwise without taking expectations?
Solution to Exercise 6
In the ODE setting, \(\sup_{s \leq t}|x_s^{(n+1)} - x_s^{(n)}|\) is a deterministic quantity that can be bounded directly using the Lipschitz condition and Cauchy–Schwarz for ordinary integrals.
In the SDE setting, the difference \(X_t^{(n+1)} - X_t^{(n)}\) contains the stochastic integral:

$$
M_t = \int_0^t \big[\sigma(s, X_s^{(n)}) - \sigma(s, X_s^{(n-1)})\big]\,dW_s.
$$
The supremum of a stochastic integral cannot be bounded pathwise in a useful way. For any continuous local martingale \(M_t\), the sample paths of \(\sup_{s \leq t}|M_s|\) are not controlled by \(\langle M \rangle_t\) on a path-by-path basis — the supremum can be arbitrarily large on a set of small but positive probability.
To obtain a useful bound, one must take expectations and apply Doob's maximal inequality:

$$
\mathbb{E}\Big[\sup_{s \leq t}|M_s|^2\Big] \leq 4\,\mathbb{E}\big[|M_t|^2\big],
$$

followed by the Itô isometry to convert \(\mathbb{E}[|M_t|^2]\) into an ordinary integral. Both tools operate at the level of \(L^2\) expectations, not individual paths. This is why the convergence metric must be \(\|X\| = \mathbb{E}[\sup_{t \leq T}|X_t|^2]^{1/2}\) rather than the pathwise supremum norm.
Exercise 7. Let \(X_t\) and \(Y_t\) be two solutions of the same SDE with \(X_0 = Y_0 = x_0\) under global Lipschitz conditions. Define \(\varphi(t) = \mathbb{E}[\sup_{s \leq t}|X_s - Y_s|^2]\). Show that \(\varphi\) satisfies \(\varphi(t) \leq C \int_0^t \varphi(s)\,ds\) and \(\varphi(0) = 0\). State Gronwall's inequality and use it to conclude \(\varphi(t) = 0\) for all \(t \in [0,T]\). Why is this stronger than the conclusion one would get from the Picard convergence argument alone?
Solution to Exercise 7
Deriving the integral inequality: Define \(Z_t = X_t - Y_t\). Since both solve the same SDE with \(X_0 = Y_0 = x_0\):

$$
Z_t = \int_0^t \big[b(s, X_s) - b(s, Y_s)\big]\,ds + \int_0^t \big[\sigma(s, X_s) - \sigma(s, Y_s)\big]\,dW_s.
$$
Using \((a+b)^2 \leq 2a^2 + 2b^2\), Cauchy–Schwarz on the drift integral, Doob's maximal inequality and the Itô isometry on the stochastic integral, and finally the Lipschitz condition:

$$
\varphi(t) = \mathbb{E}\Big[\sup_{s \leq t}|Z_s|^2\Big] \leq C\int_0^t \varphi(s)\,ds.
$$
Since \(Z_0 = 0\), we have \(\varphi(0) = 0\).
Gronwall's inequality (integral form): If \(\varphi : [0,T] \to [0,\infty)\) is continuous, \(\varphi(t) \leq \alpha + \beta\int_0^t \varphi(s)\,ds\) with \(\alpha \geq 0\) and \(\beta > 0\), then \(\varphi(t) \leq \alpha e^{\beta t}\).
Applying with \(\alpha = 0\) and \(\beta = C\): \(\varphi(t) \leq 0\) for all \(t\). Since \(\varphi(t) \geq 0\), we conclude \(\varphi(t) = 0\) for all \(t \in [0,T]\).
Why this is stronger than Picard convergence: The Picard convergence argument shows that the iterates \(X^{(n)}\) converge to a unique limit in \(L^2(\Omega; C([0,T]))\). This proves that the solution constructed by Picard iteration is unique among all possible limits of that iteration scheme. However, it does not immediately rule out the existence of a solution that is not the limit of the Picard iterates. The Gronwall argument is stronger because it takes any two solutions \(X\) and \(Y\) (not necessarily constructed by Picard iteration) and proves \(X = Y\) a.s. This establishes uniqueness among all adapted processes satisfying the SDE, not just among limits of a particular approximation scheme.
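The Gronwall mechanism can be illustrated numerically. The sketch below (illustrative \(\alpha\), \(\beta\); not from the text) discretizes the extremal case \(\varphi(t) = \alpha + \beta\int_0^t \varphi(s)\,ds\) by left-point quadrature and checks \(\varphi(T) \leq \alpha e^{\beta T}\); with \(\alpha = 0\) the recursion returns zero at every step, which is exactly the uniqueness conclusion.

```python
import math

# Discrete sketch of the integral-form Gronwall bound (alpha, beta are
# illustrative): build the extremal phi with phi(t) = alpha + beta*int phi
# via left-point quadrature and compare with alpha*exp(beta*t).
alpha, beta, T, N = 0.5, 2.0, 1.0, 100_000
dt = T / N

def gronwall_path(a0):
    """Left-point discretization of phi(t) = a0 + beta * int_0^t phi(s) ds."""
    phi, integral = [a0], 0.0
    for _ in range(N):
        integral += phi[-1] * dt
        phi.append(a0 + beta * integral)
    return phi

phi = gronwall_path(alpha)
bound = alpha * math.exp(beta * T)
zero = gronwall_path(0.0)          # alpha = 0: phi is identically zero
print(phi[-1], bound, max(zero))
```

The discretized extremal \(\varphi\) approaches \(\alpha e^{\beta T}\) from below, and the \(\alpha = 0\) run stays at exactly zero, mirroring \(\varphi \equiv 0\) in the uniqueness proof.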