Filtration¶
The Philosophy of Information Flow¶
Before diving into formal definitions, it is worth pausing to appreciate the conceptual leap that filtrations represent. In deterministic mathematics, we work with fixed quantities. In probability theory, we work with random variables—quantities whose values are determined by an underlying experiment. But in stochastic processes, we face a subtler challenge: how do we model the gradual revelation of information over time?
Consider a stock price observed throughout a trading day. At 9:00 AM, we know nothing beyond the opening price. By noon, we have observed the morning's trajectory. By market close, the entire day's path is revealed. The filtration \((\mathcal{F}_t)\) formalizes this progression: \(\mathcal{F}_t\) encodes precisely what can be known at time \(t\)—no more, no less.
This framework is not merely technical bookkeeping. It fundamentally shapes what questions we can ask and what strategies we can implement. A trading strategy that uses tomorrow's closing price to decide today's position is not just impractical—it is mathematically incoherent within this framework.
Prerequisite: σ-Algebras¶
Recall that a σ-algebra (or σ-field) \(\mathcal{F}\) on a set \(\Omega\) is a collection of subsets satisfying:
- \(\Omega \in \mathcal{F}\)
- If \(A \in \mathcal{F}\), then \(A^c \in \mathcal{F}\) (closure under complements)
- If \(A_1, A_2, \ldots \in \mathcal{F}\), then \(\bigcup_{n=1}^\infty A_n \in \mathcal{F}\) (closure under countable unions)
Derived properties: Closure under countable intersections follows from De Morgan's laws. Closure under finite unions and intersections follows as special cases.
Simplest non-trivial example: For any subset \(A \subseteq \Omega\), the collection \(\{\emptyset, A, A^c, \Omega\}\) is a σ-algebra with four elements.
Generated σ-algebra: Given any collection \(\mathcal{C}\) of subsets of \(\Omega\), there is a smallest σ-algebra containing \(\mathcal{C}\), denoted \(\sigma(\mathcal{C})\). For a random variable \(X: \Omega \to \mathbb{R}\), we write
\[
\sigma(X) := \{X^{-1}(B) : B \in \mathcal{B}(\mathbb{R})\}
\]
for the σ-algebra generated by \(X\)—the collection of events whose occurrence can be determined by observing \(X\).
Counting sets in a finite σ-algebra: If a σ-algebra is generated by \(k\) disjoint atoms (minimal non-empty sets), it contains exactly \(2^k\) sets (all possible unions of atoms, including \(\emptyset\)).
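The counting rule above is easy to check by brute force on a finite \(\Omega\). The sketch below (illustrative only, not part of the formal development) builds the σ-algebra generated by a partition as the set of all unions of atoms:

```python
# Sketch: on a finite Ω, the σ-algebra generated by a partition into k
# disjoint atoms consists of all unions of atoms — exactly 2**k sets.
from itertools import combinations

def sigma_algebra_from_atoms(atoms):
    """Return all unions of the given disjoint atoms (as frozensets),
    including the empty union (∅) and the union of all atoms (Ω)."""
    sets = []
    for r in range(len(atoms) + 1):
        for combo in combinations(atoms, r):
            union = frozenset().union(*combo) if combo else frozenset()
            sets.append(union)
    return set(sets)

# Example: Ω = {1,...,6} (a die roll), atoms "low" = {1,2,3}, "high" = {4,5,6}
atoms = [frozenset({1, 2, 3}), frozenset({4, 5, 6})]
F = sigma_algebra_from_atoms(atoms)
print(len(F))  # 2**2 = 4 sets: ∅, {1,2,3}, {4,5,6}, Ω
```

Closure under complements and (finite) unions can be checked directly on the output, since the complement of a union of atoms is the union of the remaining atoms.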
A σ-algebra represents the collection of events we can assign probabilities to. In the context of filtrations, each \(\mathcal{F}_t\) is a σ-algebra representing the events that are decidable at time \(t\).
Filtered Probability Spaces¶
A filtered probability space is a quadruple
\[
(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})
\]
where:
- \((\Omega, \mathcal{F}, \mathbb{P})\) is a probability space,
- \((\mathcal{F}_t)_{t \ge 0}\) is a filtration: an increasing family of sub-σ-algebras satisfying
\[
\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F} \quad \text{for all } 0 \le s \le t.
\]
Interpretation: \(\mathcal{F}_t\) represents the information available up to time \(t\). The monotonicity condition captures the irreversibility of information: what is known cannot be unknown.
Remark on notation: We write \((\mathcal{F}_t)\) for continuous-time filtrations (indexed by \(t \in [0, \infty)\)) and \((\mathcal{F}_n)\) for discrete-time filtrations (indexed by \(n \in \mathbb{N}\)). The discrete case offers cleaner intuition, while the continuous case introduces analytical subtleties.
Non-example: A family of σ-algebras \((\mathcal{G}_t)\) with \(\mathcal{G}_1 \supsetneq \mathcal{G}_2\) is not a filtration. Information cannot "disappear" over time.
Boundary Cases¶
Two extreme filtrations illustrate the range of possibilities:
-
Trivial filtration: \(\mathcal{F}_t = \{\emptyset, \Omega\}\) for all \(t\). No information is ever revealed; we can only identify events that either always occur or never occur.
-
Maximal filtration: \(\mathcal{F}_t = \mathcal{F}\) for all \(t\). All information is available from the start; there is no uncertainty resolution over time.
Most interesting filtrations lie between these extremes, with information genuinely accumulating as time progresses.
The Terminal σ-Algebra¶
For a filtration \((\mathcal{F}_t)_{t \ge 0}\), the terminal σ-algebra is defined as:
\[
\mathcal{F}_\infty := \sigma\Bigl(\bigcup_{t \ge 0} \mathcal{F}_t\Bigr).
\]
This is the smallest σ-algebra containing all the \(\mathcal{F}_t\). It represents the totality of information that eventually becomes available. Note that \(\bigcup_{t \ge 0} \mathcal{F}_t\) is typically only an algebra, not a σ-algebra (see Exercise 2), so we must take the generated σ-algebra.
``` Filtration growth (schematic):
F₀ ⊆ F₁ ⊆ F₂ ⊆ ⋯ ⊆ F_∞ ⊆ F
Time: 0 1 2 ∞
Information increases monotonically →
```
A Concrete Example: The Random Walk Filtration¶
Consider a simple random walk where we flip a fair coin at times \(n = 1, 2, 3, \ldots\). Let \(X_n = +1\) if the \(n\)-th flip is heads (H), and \(X_n = -1\) if tails (T). Define the walk \(S_n = X_1 + \cdots + X_n\) with \(S_0 = 0\).
Notation convention: We identify \(H \leftrightarrow +1\) and \(T \leftrightarrow -1\) throughout.
The sample space is \(\Omega = \{H, T\}^{\mathbb{N}}\), the set of all infinite sequences of coin flips.
The natural filtration \((\mathcal{F}_n)\) is defined by:
- \(\mathcal{F}_0 = \{\emptyset, \Omega\}\) — before any flips, we know nothing
- \(\mathcal{F}_1 = \sigma(X_1)\) — we know the first flip; this has 4 elements: \(\{\emptyset, \{H\cdots\}, \{T\cdots\}, \Omega\}\)
- \(\mathcal{F}_2 = \sigma(X_1, X_2)\) — we know the first two flips; 4 atoms partition \(\Omega\):
\[
\{HH\cdots\}, \quad \{HT\cdots\}, \quad \{TH\cdots\}, \quad \{TT\cdots\}
\]
The σ-algebra \(\mathcal{F}_2\) contains all \(2^4 = 16\) unions of these 4 atoms (including \(\emptyset\)).
- \(\mathcal{F}_n = \sigma(X_1, \ldots, X_n)\) — we know the first \(n\) flips (\(2^n\) atoms, hence \(2^{2^n}\) total sets by the counting formula with \(k = 2^n\) atoms)
What can we "ask" at time \(n\)? Any question whose answer depends only on the first \(n\) flips:
| Question | Measurable w.r.t. | Why? |
|---|---|---|
| "Is \(S_2 = 0\)?" | \(\mathcal{F}_2\) | Equivalent to \(\{HT\cdots\} \cup \{TH\cdots\}\) |
| "Is \(S_1 > 0\)?" | \(\mathcal{F}_1\) | Equivalent to \(\{H\cdots\}\) |
| "Will \(S_{10} > 5\)?" | Not \(\mathcal{F}_2\) | Depends on flips 3–10 |
| "Is \(\max_{k \le 2} S_k \ge 1\)?" | \(\mathcal{F}_2\) | Equivalent to \(\{HH\cdots\} \cup \{HT\cdots\} = \{X_1 = +1\}\) |
This discrete example captures the essential structure: \(\mathcal{F}_n\) grows as we observe more coin flips, and the monotonicity \(\mathcal{F}_m \subseteq \mathcal{F}_n\) for \(m \le n\) reflects that we never forget past observations.
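The measurability checks in the table can be made mechanical. The sketch below is an illustration only: it truncates the infinite flip space to the first \(N = 3\) flips (enough for events depending on at most three flips) and tests whether an event is decided by the first \(n\) flips:

```python
# A finite model of the coin-flip space: an event A ⊆ Ω is F_n-measurable
# iff membership in A is decided by the first n flips alone, i.e. any two
# outcomes agreeing on the first n flips are both in A or both outside A.
from itertools import product

N = 3
Omega = list(product([+1, -1], repeat=N))   # all length-N flip sequences

def S(omega, n):
    """Partial sum S_n = X_1 + ... + X_n."""
    return sum(omega[:n])

def is_Fn_measurable(event, n):
    for u in Omega:
        for v in Omega:
            if u[:n] == v[:n] and ((u in event) != (v in event)):
                return False
    return True

A = {w for w in Omega if S(w, 2) == 0}                       # "S_2 = 0"
C = {w for w in Omega if S(w, 3) > 0}                        # "S_3 > 0"
D = {w for w in Omega if max(S(w, k) for k in (1, 2)) >= 1}  # "max_{k<=2} S_k >= 1"

print(is_Fn_measurable(A, 2), is_Fn_measurable(C, 2), is_Fn_measurable(D, 2))
# True False True
```

The event \(C\) fails the test at \(n = 2\) precisely because two outcomes agreeing on the first two flips can disagree on the third, mirroring the table's third row.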
Natural Filtrations¶
Given a stochastic process \(X = (X_t)_{t \ge 0}\), its natural filtration (or canonical filtration) is
\[
\mathcal{F}_t^X := \sigma(X_s : 0 \le s \le t).
\]
This is the smallest filtration to which \(X\) is adapted—it contains exactly the information generated by observing \(X\) up to time \(t\).
Terminology note: A process \(X\) is adapted to a filtration \((\mathcal{F}_t)\) if \(X_t\) is \(\mathcal{F}_t\)-measurable for each \(t\). See the companion document Adapted Processes for a full treatment.
Brownian motion: For standard Brownian motion \((W_t)_{t \ge 0}\) with \(W_0 = 0\), the natural filtration is
\[
\mathcal{F}_t^W := \sigma(W_s : 0 \le s \le t).
\]
This filtration does not automatically satisfy the usual conditions (defined in the next section):
- Right-continuity fails: There exist events in \(\mathcal{F}_{t+}^W := \bigcap_{u > t} \mathcal{F}_u^W\) that are not in \(\mathcal{F}_t^W\).
Concrete example: Let \(A\) be the event that Brownian motion returns to zero immediately after time \(t\): \(A = \bigcap_{n=1}^\infty \{W_s = 0 \text{ for some } s \in (t, t + 1/n)\}\). Since each set in the intersection lies in \(\mathcal{F}_{t+1/n}^W\), we have \(A \in \mathcal{F}_{t+}^W\). However, \(A\) genuinely depends on the path immediately after time \(t\) and cannot be expressed in terms of \((W_s)_{s \le t}\) alone, so \(A \notin \mathcal{F}_t^W\). (Up to a null set, \(A\) coincides with \(\{W_t = 0\} \in \mathcal{F}_t^W\): started at zero, Brownian motion returns to zero immediately almost surely. This is precisely why adding null sets during augmentation restores right-continuity.)
- Completeness fails: Since \(W_0 = 0\) is deterministic, \(\mathcal{F}_0^W = \sigma(W_0) = \{\emptyset, \Omega\}\), which contains no non-trivial null sets.
The standard remedy is augmentation (see Augmentation of Filtrations below).
The Usual Conditions¶
In continuous-time stochastic analysis, technical pathologies can arise without additional regularity assumptions. A filtration \((\mathcal{F}_t)\) satisfies the usual conditions (or usual hypotheses) if:
-
Right-continuity: \(\mathcal{F}_t = \mathcal{F}_{t+} := \bigcap_{u > t} \mathcal{F}_u\) for all \(t \ge 0\).
-
Completeness: \(\mathcal{F}_0\) contains all \(\mathbb{P}\)-null sets in \(\mathcal{F}\).
Why right-continuity? Consider the stopping time "the first time Brownian motion exceeds level \(a\)":
\[
\tau_a := \inf\{t \ge 0 : W_t > a\}.
\]
Without right-continuity, the event \(\{\tau_a \le t\}\) might fail to be \(\mathcal{F}_t\)-measurable, because determining whether the path has crossed level \(a\) by time \(t\) might require "infinitesimally future" information. Right-continuity ensures that "stopping just after time \(t\)" is equivalent to "stopping at time \(t\)" from a measurability standpoint.
Why completeness? Probability theory operates "up to null sets." If we cannot distinguish an event \(A\) from a null event \(N\), we should treat \(A \cup N\) and \(A \setminus N\) as equivalent. Completeness ensures that all negligible events are properly accounted for from time zero. Since the filtration is increasing (\(\mathcal{F}_0 \subseteq \mathcal{F}_t\)), completeness of \(\mathcal{F}_0\) implies completeness of all \(\mathcal{F}_t\).
Convention: Throughout stochastic calculus, we tacitly assume the usual conditions unless stated otherwise.
Augmentation of Filtrations¶
To construct a filtration satisfying the usual conditions from a natural filtration, we apply a two-step procedure:
Step 1 (Completion): Let \(\mathcal{N} := \{N \subseteq \Omega : \exists A \in \mathcal{F} \text{ with } N \subseteq A \text{ and } \mathbb{P}(A) = 0\}\) be the collection of null sets. Define the completed filtration:
\[
\widetilde{\mathcal{F}}_t := \sigma(\mathcal{F}_t \cup \mathcal{N}), \quad t \ge 0.
\]
This adds all null sets to every σ-algebra in the filtration.
Step 2 (Right-continuification): Define the augmented filtration:
\[
\mathcal{F}_t := \bigcap_{u > t} \widetilde{\mathcal{F}}_u.
\]
This enforces right-continuity by intersecting over all future σ-algebras.
Combined formula: The augmented Brownian filtration is often written as:
\[
\mathcal{F}_t = \bigcap_{u > t} \sigma\bigl(\mathcal{F}_u^W \cup \mathcal{N}\bigr).
\]
The augmented filtration \((\mathcal{F}_t)\) satisfies the usual conditions and is the standard filtration used in stochastic calculus with Brownian motion.
Enlargement of Filtrations¶
Sometimes we need to work with filtrations larger than the natural filtration. Common scenarios include:
Initial Enlargement¶
Add information about a random variable \(G\) to the entire filtration:
\[
\mathcal{G}_t := \mathcal{F}_t \vee \sigma(G),
\]
where \(\mathcal{F}_t \vee \sigma(G) := \sigma(\mathcal{F}_t \cup \sigma(G))\) is the smallest σ-algebra containing both.
Example (Insider Trading): Suppose a stock's log-price is driven by a Brownian motion \((W_t)_{0 \le t \le T}\), and let \(G = W_T\) be the terminal log-price. An insider who knows \(G\) from time zero operates with the filtration:
\[
\mathcal{G}_t = \mathcal{F}_t^W \vee \sigma(W_T).
\]
Under \((\mathcal{G}_t)\), the insider has strictly more information than the market filtration \((\mathcal{F}_t^W)\) at every time \(t < T\). In particular, \(W_T\) is \(\mathcal{G}_0\)-measurable.
Progressive Enlargement¶
Add information about a random time \(\tau\) as it becomes known:
\[
\mathcal{G}_t := \mathcal{F}_t \vee \sigma\bigl(\mathbf{1}_{\{\tau \le t\}},\; \tau \cdot \mathbf{1}_{\{\tau \le t\}}\bigr).
\]
This encodes exactly: whether \(\tau\) has occurred by time \(t\), and if so, the value of \(\tau\).
Example (Credit Default): Let \(\tau\) be a firm's default time. Before default (\(t < \tau\)), we only know that \(\tau > t\). At the moment of default, we learn the exact value of \(\tau\). This models the information structure in credit risk, where default is observable when it occurs but not predictable beforehand.
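This information pattern can be sketched in a few lines. The toy model below is an assumption for illustration only (a geometric default time, not part of the text's setup); it shows that at each time the observer sees nothing about \(\tau\) before default and its exact value from default onward:

```python
# Progressive-enlargement information sketch: at each t we observe the pair
# (1_{tau<=t}, tau*1_{tau<=t}) — no information about tau before default,
# its exact value afterwards. The default time here is a toy geometric time.
import random

random.seed(0)

p = 0.3          # per-period default probability (illustrative assumption)
tau = 1
while random.random() > p:
    tau += 1     # first success of independent p-coins

for t in range(1, 8):
    indicator = 1 if tau <= t else 0
    revealed = tau * indicator   # 0 before default, the exact value of tau after
    print(f"t={t}: 1_{{tau<=t}}={indicator}, tau*1_{{tau<=t}}={revealed}")
```

Note that the indicator process is nondecreasing in \(t\), matching the irreversibility of information in a filtration.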
Warning: Enlargement can destroy the martingale property. A process that is a martingale under \((\mathcal{F}_t)\) may fail to be a martingale—or even a semimartingale—under \((\mathcal{G}_t)\). Intuitively, if you know future information, your "best forecast" of a quantity changes. Characterizing when martingale properties survive enlargement is a deep topic in stochastic analysis (see: Jacod's criterion, Jeulin–Yor theory).
Looking Ahead: Why Filtrations Matter¶
The framework of filtered probability spaces is foundational for the theory developed in subsequent chapters:
-
Martingale theory: A process is a martingale only relative to a specific filtration. The same process may be a martingale under one filtration but not another.
-
Stochastic integration: The Itô integral \(\int_0^t H_s \, dW_s\) requires the integrand \(H_s\) to be predictable (or at least progressively measurable) with respect to the underlying filtration.
-
Stopping times: The definition of a stopping time explicitly references the filtration. This determines which random times are "observable."
-
Markov processes: The Markov property states that the future is independent of the past given the present. The filtration specifies what "the past" means.
-
Mathematical finance: No-arbitrage theory relies crucially on the filtration. A trading strategy must be predictable—it cannot use future information.
Historical Note¶
The systematic use of filtrations in probability theory emerged in the mid-20th century, particularly through the work of Joseph Doob and Paul-André Meyer. Doob's monumental treatise Stochastic Processes (1953) established martingale theory, while Meyer's school in Strasbourg refined the general theory of processes throughout the 1960s–70s.
The phrase "usual conditions" (or "conditions habituelles" in French) became standard terminology in the Strasbourg school's work. These conditions, while seemingly technical, resolved numerous pathological examples and enabled a clean, unified theory. Today, assuming the usual conditions is so standard that authors often do so without explicit mention.
Summary¶
| Concept | Definition | Interpretation |
|---|---|---|
| σ-algebra | Collection closed under complements and countable unions | Events we can assign probabilities to |
| Filtration \((\mathcal{F}_t)\) | Increasing family of σ-algebras | Information available over time |
| Natural filtration | \(\mathcal{F}_t^X = \sigma(X_s : s \le t)\) | Minimal information from observing \(X\) |
| Terminal σ-algebra | \(\mathcal{F}_\infty = \sigma(\bigcup_t \mathcal{F}_t)\) | All information eventually available |
| Usual conditions | Right-continuous + complete | Technical regularity for clean theory |
| Augmented filtration | Completed and right-continuified | Natural filtration satisfying usual conditions |
| Initial enlargement | \(\mathcal{G}_t = \mathcal{F}_t \vee \sigma(G)\) | Adding knowledge of a random variable |
| Progressive enlargement | \(\mathcal{G}_t = \mathcal{F}_t \vee \sigma(\mathbf{1}_{\{\tau \le t\}}, \tau \cdot \mathbf{1}_{\{\tau \le t\}})\) | Adding knowledge of a random time |
Exercises¶
Exercise 1: Discrete Filtration Computation¶
Let \((X_n)_{n \ge 1}\) be i.i.d. fair coin flips with \(X_n \in \{+1, -1\}\) (where \(+1\) corresponds to heads), and let \(S_n = X_1 + \cdots + X_n\) with \(S_0 = 0\).
(a) The σ-algebra \(\mathcal{F}_2 = \sigma(X_1, X_2)\) is generated by 4 atoms. List these atoms explicitly, then count the total number of sets in \(\mathcal{F}_2\).
(b) Determine whether each of the following is \(\mathcal{F}_2\)-measurable by expressing it as a union of atoms (or explaining why this is impossible):
- \(A = \{S_2 = 0\}\)
- \(B = \{S_1 > 0\}\)
- \(C = \{S_3 > 0\}\)
- \(D = \{\max_{k \le 2} S_k \ge 1\}\)
(c) Compute \(\mathbb{E}[S_3 \mid \mathcal{F}_2]\) using linearity of conditional expectation and the independence of \(X_3\) from \(\mathcal{F}_2\).
Solution to Exercise 1
(a) The four atoms of \(\mathcal{F}_2 = \sigma(X_1, X_2)\) are determined by the possible outcomes of the first two flips:
\[
A_1 = \{X_1 = +1,\, X_2 = +1\}, \quad A_2 = \{X_1 = +1,\, X_2 = -1\}, \quad A_3 = \{X_1 = -1,\, X_2 = +1\}, \quad A_4 = \{X_1 = -1,\, X_2 = -1\}.
\]
Since there are \(k = 4\) atoms, \(\mathcal{F}_2\) contains \(2^4 = 16\) sets (all possible unions of atoms, including \(\emptyset\)).
(b)
-
\(A = \{S_2 = 0\} = \{X_1 + X_2 = 0\} = A_2 \cup A_3\). This is a union of atoms, so \(A \in \mathcal{F}_2\). \(\mathcal{F}_2\)-measurable.
-
\(B = \{S_1 > 0\} = \{X_1 = +1\} = A_1 \cup A_2\). This is a union of atoms, so \(B \in \mathcal{F}_2\). \(\mathcal{F}_2\)-measurable.
-
\(C = \{S_3 > 0\}\) depends on \(X_3\), which is not determined by \((X_1, X_2)\). For instance, the outcome \((X_1, X_2) = (+1, -1)\) gives \(S_2 = 0\), and whether \(S_3 > 0\) depends on \(X_3\). Thus \(C\) cannot be expressed as a union of atoms of \(\mathcal{F}_2\). Not \(\mathcal{F}_2\)-measurable.
-
\(D = \{\max_{k \le 2} S_k \ge 1\}\). We check each atom: on \(A_1\), \(S_1 = 1 \ge 1\); on \(A_2\), \(S_1 = 1 \ge 1\); on \(A_3\), \(S_1 = -1, S_2 = 0\), so the max is \(0 < 1\); on \(A_4\), \(S_1 = -1, S_2 = -2\), so the max is \(-1 < 1\). Therefore \(D = A_1 \cup A_2\). \(\mathcal{F}_2\)-measurable.
(c) Write \(S_3 = S_2 + X_3\). Then:
\[
\mathbb{E}[S_3 \mid \mathcal{F}_2] = \mathbb{E}[S_2 \mid \mathcal{F}_2] + \mathbb{E}[X_3 \mid \mathcal{F}_2] = S_2 + \mathbb{E}[X_3 \mid \mathcal{F}_2],
\]
using linearity and the fact that \(S_2\) is \(\mathcal{F}_2\)-measurable. Since \(X_3\) is independent of \(\mathcal{F}_2 = \sigma(X_1, X_2)\) and \(\mathbb{E}[X_3] = \frac{1}{2}(+1) + \frac{1}{2}(-1) = 0\):
\[
\mathbb{E}[S_3 \mid \mathcal{F}_2] = S_2 + \mathbb{E}[X_3] = S_2.
\]
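The identity \(\mathbb{E}[S_3 \mid \mathcal{F}_2] = S_2\) can also be verified by brute force (a verification sketch): enumerate the eight equally likely outcomes and average \(S_3\) over each atom of \(\mathcal{F}_2\).

```python
# Exact check of E[S_3 | F_2] = S_2: average S_3 over each atom of F_2,
# i.e. over each fixed value of (X_1, X_2), with all outcomes equally likely.
from itertools import product

outcomes = list(product([+1, -1], repeat=3))   # (X1, X2, X3), each prob 1/8

for x1, x2 in product([+1, -1], repeat=2):
    atom = [w for w in outcomes if w[:2] == (x1, x2)]     # atom of F_2
    cond_mean_S3 = sum(sum(w) for w in atom) / len(atom)  # E[S_3 | atom]
    S2 = x1 + x2
    print(f"atom (X1,X2)=({x1:+d},{x2:+d}): E[S3|F2]={cond_mean_S3:+.1f}, S2={S2:+d}")
```

On every atom the conditional average of \(S_3\) equals \(S_2\), since the third flip averages to zero.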
Exercise 2: Filtration Properties¶
(a) Prove that if \(\mathcal{F}_s \subseteq \mathcal{F}_t\) for all \(s \le t\), then \(\bigcup_{t \ge 0} \mathcal{F}_t\) is an algebra (closed under complements and finite unions).
(b) Give a counterexample showing \(\bigcup_{t \ge 0} \mathcal{F}_t\) need not be a σ-algebra.
Hint: Let \(\Omega = [0,1]\) and let \(\mathcal{F}_n\) be generated by the dyadic partition \(\{[k \cdot 2^{-n}, (k+1) \cdot 2^{-n}) : k = 0, 1, \ldots, 2^n - 1\}\). The point \(1/3 = 0.010101\ldots_2\) in binary is not isolated by any finite dyadic partition. Show that \(\{1/3\}\) can be written as a countable intersection of sets in \(\bigcup_n \mathcal{F}_n\), but \(\{1/3\} \notin \bigcup_n \mathcal{F}_n\).
(c) Show that the natural filtration \(\mathcal{F}_t^W = \sigma(W_s : 0 \le s \le t)\) satisfies \(\mathcal{F}_s^W \subseteq \mathcal{F}_t^W\) for \(s \le t\).
(d) For the random walk filtration \((\mathcal{F}_n)\), prove that \(X_{n+1}\) is not \(\mathcal{F}_n\)-measurable.
Solution to Exercise 2
(a) Let \(\mathcal{A} = \bigcup_{t \ge 0} \mathcal{F}_t\). We verify \(\mathcal{A}\) is an algebra.
-
\(\Omega \in \mathcal{F}_0 \subseteq \mathcal{A}\), so \(\Omega \in \mathcal{A}\).
-
If \(A \in \mathcal{A}\), then \(A \in \mathcal{F}_t\) for some \(t\). Since \(\mathcal{F}_t\) is a \(\sigma\)-algebra, \(A^c \in \mathcal{F}_t \subseteq \mathcal{A}\).
-
If \(A, B \in \mathcal{A}\), then \(A \in \mathcal{F}_s\) and \(B \in \mathcal{F}_t\) for some \(s, t\). Let \(u = \max(s, t)\). Then \(A, B \in \mathcal{F}_u\) (by monotonicity), so \(A \cup B \in \mathcal{F}_u \subseteq \mathcal{A}\).
Hence \(\mathcal{A}\) is closed under complements and finite unions, so it is an algebra. \(\square\)
(b) Let \(\Omega = [0, 1]\) and \(\mathcal{F}_n\) be the \(\sigma\)-algebra generated by the dyadic partition of order \(n\): \(\{[k \cdot 2^{-n}, (k+1) \cdot 2^{-n}) : k = 0, \ldots, 2^n - 1\}\).
The point \(1/3 = 0.010101\ldots_2\) is not a dyadic rational: its binary expansion does not terminate, so it is never an endpoint of a dyadic interval. For each \(n\), the atom of \(\mathcal{F}_n\) containing \(1/3\) is the interval \([k_n \cdot 2^{-n}, (k_n + 1) \cdot 2^{-n})\) where \(k_n = \lfloor 2^n / 3 \rfloor\). These atoms are nested and their intersection is \(\{1/3\}\):
\[
\bigcap_{n=1}^{\infty} \bigl[k_n \cdot 2^{-n}, (k_n + 1) \cdot 2^{-n}\bigr) = \{1/3\}.
\]
Each interval in the intersection belongs to \(\mathcal{F}_n \subseteq \bigcup_n \mathcal{F}_n\), so \(\{1/3\}\) is a countable intersection of sets in \(\mathcal{A} = \bigcup_n \mathcal{F}_n\).
However, \(\{1/3\} \notin \mathcal{F}_n\) for any \(n\), because every set in \(\mathcal{F}_n\) is a finite union of intervals of length \(2^{-n}\), and the singleton \(\{1/3\}\) is not such a union. Since \(\{1/3\} \notin \mathcal{F}_n\) for any \(n\), we have \(\{1/3\} \notin \bigcup_n \mathcal{F}_n\). Thus \(\mathcal{A}\) is not closed under countable intersections and hence is not a \(\sigma\)-algebra. \(\square\)
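The nested dyadic atoms in this construction can be inspected numerically (an illustrative sketch; floating-point \(1/3\) stands in for the exact rational):

```python
# The dyadic atom of F_n containing 1/3 is [k_n/2^n, (k_n+1)/2^n) with
# k_n = floor(2^n / 3). The atoms shrink toward {1/3}, but no single F_n
# contains the singleton.
x = 1.0 / 3.0
for n in range(1, 8):
    k = (2 ** n) // 3
    lo, hi = k / 2 ** n, (k + 1) / 2 ** n
    assert lo <= x < hi   # 1/3 lies in its atom at every level n
    print(f"n={n}: atom = [{lo:.6f}, {hi:.6f}), length {hi - lo:.6f}")
```

The interval lengths halve at each level, which is the numerical shadow of the intersection collapsing to \(\{1/3\}\).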
(c) For \(s \le t\), the generating family \(\{W_r : 0 \le r \le s\}\) is a sub-family of \(\{W_r : 0 \le r \le t\}\), so every generator of \(\mathcal{F}_s^W\) is measurable with respect to \(\mathcal{F}_t^W\). Since \(\mathcal{F}_s^W\) is the smallest σ-algebra making \((W_r)_{r \le s}\) measurable, it follows that
\[
\mathcal{F}_s^W = \sigma(W_r : 0 \le r \le s) \subseteq \sigma(W_r : 0 \le r \le t) = \mathcal{F}_t^W. \qquad \square
\]
(d) The \(\sigma\)-algebra \(\mathcal{F}_n = \sigma(X_1, \ldots, X_n)\) is generated by the first \(n\) coin flips. The random variable \(X_{n+1}\) depends only on the \((n+1)\)-th flip, which is independent of \((X_1, \ldots, X_n)\).
If \(X_{n+1}\) were \(\mathcal{F}_n\)-measurable, it would be both measurable with respect to and independent of \(\sigma(X_1, \ldots, X_n)\), hence independent of itself. Then \(\mathbb{P}(X_{n+1} = 1) = \mathbb{P}(X_{n+1} = 1,\, X_{n+1} = 1) = \mathbb{P}(X_{n+1} = 1)^2\), which forces \(\mathbb{P}(X_{n+1} = 1) \in \{0, 1\}\), contradicting \(\mathbb{P}(X_{n+1} = 1) = 1/2\). Hence \(X_{n+1}\) is not \(\mathcal{F}_n\)-measurable. \(\square\)
Exercise 3: Right-Continuity and Stopping Times¶
(a) Explain intuitively why right-continuity (\(\mathcal{F}_t = \bigcap_{s > t} \mathcal{F}_s\)) is natural. Why would left-continuity (\(\mathcal{F}_t = \bigvee_{s < t} \mathcal{F}_s\)) be problematic for stopping time theory?
(b) Construct an example of a filtration that is not right-continuous.
Hint: Let \(\Omega = \{\omega_1, \omega_2\}\), let \(A = \{\omega_1\}\), let \(\mathcal{F} = \{\emptyset, A, A^c, \Omega\}\), and define \(\mathcal{F}_t = \{\emptyset, \Omega\}\) for \(t \le 1\) and \(\mathcal{F}_t = \mathcal{F}\) for \(t > 1\). Check right-continuity at \(t = 1\). (Careful: the variant with \(\mathcal{F}_t = \mathcal{F}\) for \(t \ge 1\) instead is right-continuous everywhere.)
(c) Definition: A random time \(\tau: \Omega \to [0, \infty]\) is a stopping time with respect to \((\mathcal{F}_t)\) if \(\{\tau \le t\} \in \mathcal{F}_t\) for all \(t \ge 0\).
Prove: If \(\tau\) is a stopping time, then \(\{\tau < t\} \in \mathcal{F}_t\) for all \(t > 0\).
Hint: Write \(\{\tau < t\} = \bigcup_{n=1}^\infty \{\tau \le t - 1/n\}\). Each term is in \(\mathcal{F}_{t-1/n} \subseteq \mathcal{F}_t\) by the stopping time definition and monotonicity of the filtration.
(d) Filtrations can jump, and a stopping time may reveal information exactly at the jump: on the two-point space of part (b), equip \(\Omega = \{\omega_1, \omega_2\}\) with the filtration \(\mathcal{F}_t = \{\emptyset, \Omega\}\) for \(t < 1\) and \(\mathcal{F}_t = \mathcal{F}\) for \(t \ge 1\), and find a random time \(\tau\) that is a stopping time but for which \(\{\tau \le t\} \notin \mathcal{F}_{t-} := \bigvee_{s < t} \mathcal{F}_s\) for some \(t\).
Hint: Define \(\tau(\omega_1) = 1\) and \(\tau(\omega_2) = 2\). Verify \(\tau\) is a stopping time. Then compute \(\mathcal{F}_{1-} := \bigvee_{s < 1} \mathcal{F}_s\) and compare to \(\mathcal{F}_1\). What does this say about "predictability" at the jump time?
Solution to Exercise 3
(a) Right-continuity \(\mathcal{F}_t = \bigcap_{s > t} \mathcal{F}_s\) states that knowing "just after time \(t\)" is the same as knowing "at time \(t\)." This is natural because in continuous time, there is no "next instant" — the intersection over all strictly future \(\sigma\)-algebras captures the information available at time \(t\) without any gap.
Left-continuity \(\mathcal{F}_t = \bigvee_{s < t} \mathcal{F}_s\) would mean that information at time \(t\) equals the information accumulated strictly before \(t\). This is problematic for stopping times because it would preclude "learning something new at the exact moment \(t\)." For example, the first hitting time \(\tau = \inf\{t : W_t = a\}\) reveals the event \(\{\tau = t\}\) at time \(t\), which may not be determined by information strictly before \(t\). Left-continuity would exclude such naturally arising stopping times.
(b) Let \(\Omega = \{\omega_1, \omega_2\}\), \(A = \{\omega_1\}\), \(\mathcal{F} = \{\emptyset, A, A^c, \Omega\}\). Define \(\mathcal{G}_t = \{\emptyset, \Omega\}\) for \(t \le 1\) and \(\mathcal{G}_t = \mathcal{F}\) for \(t > 1\). This is a filtration, since \(\mathcal{G}_s \subseteq \mathcal{G}_t\) for \(s \le t\). At \(t = 1\):
\[
\mathcal{G}_{1+} = \bigcap_{s > 1} \mathcal{G}_s = \mathcal{F} \neq \{\emptyset, \Omega\} = \mathcal{G}_1,
\]
so \((\mathcal{G}_t)\) is not right-continuous at \(t = 1\): the event \(A\) becomes decidable "immediately after" time 1 but not at time 1.
A caution: the seemingly similar filtration with \(\mathcal{F}_t = \{\emptyset, \Omega\}\) for \(t < 1\) and \(\mathcal{F}_t = \mathcal{F}\) for \(t \ge 1\) is right-continuous everywhere. For \(t < 1\), the intersection \(\bigcap_{s > t} \mathcal{F}_s\) involves some \(s \in (t, 1)\) with \(\mathcal{F}_s = \{\emptyset, \Omega\}\), so \(\mathcal{F}_{t+} = \{\emptyset, \Omega\} = \mathcal{F}_t\); for \(t \ge 1\), \(\mathcal{F}_{t+} = \mathcal{F} = \mathcal{F}_t\). The failure occurs only when, as in \((\mathcal{G}_t)\), the new information is excluded from the σ-algebra at the jump time itself. \(\square\)
(c) We have \(\{\tau < t\} = \bigcup_{n=1}^\infty \{\tau \le t - 1/n\}\). For each \(n\) with \(t - 1/n \ge 0\), the set \(\{\tau \le t - 1/n\} \in \mathcal{F}_{t - 1/n}\) by the stopping time definition. Since \(t - 1/n < t\), the monotonicity of the filtration gives \(\mathcal{F}_{t - 1/n} \subseteq \mathcal{F}_t\). Therefore each set in the union belongs to \(\mathcal{F}_t\), and since \(\mathcal{F}_t\) is a \(\sigma\)-algebra (closed under countable unions), \(\{\tau < t\} \in \mathcal{F}_t\). \(\square\)
(d) Work on \(\Omega = \{\omega_1, \omega_2\}\) with the filtration \(\mathcal{F}_t = \{\emptyset, \Omega\}\) for \(t < 1\) and \(\mathcal{F}_t = \mathcal{F} = \{\emptyset, \{\omega_1\}, \{\omega_2\}, \Omega\}\) for \(t \ge 1\), and define \(\tau(\omega_1) = 1\), \(\tau(\omega_2) = 2\).
\(\tau\) is a stopping time: \(\{\tau \le t\} = \emptyset \in \mathcal{F}_t\) for \(t < 1\); \(\{\tau \le t\} = \{\omega_1\} \in \mathcal{F} = \mathcal{F}_t\) for \(1 \le t < 2\); and \(\{\tau \le t\} = \Omega \in \mathcal{F}_t\) for \(t \ge 2\).
However, \(\mathcal{F}_{1-} = \bigvee_{s < 1} \mathcal{F}_s = \{\emptyset, \Omega\}\) (since \(\mathcal{F}_s = \{\emptyset, \Omega\}\) for all \(s < 1\)), while \(\{\tau \le 1\} = \{\tau = 1\} = \{\omega_1\} \notin \mathcal{F}_{1-}\). The event that \(\tau\) occurs at time 1 is decidable at time 1 but not "just before" it: the filtration jumps at \(t = 1\), and the arrival of information at the jump cannot be anticipated.
Note also that under the non-right-continuous filtration \((\mathcal{G}_t)\) from part (b) (trivial for \(t \le 1\)), this \(\tau\) fails to be a stopping time at all, since \(\{\tau \le 1\} = \{\omega_1\} \notin \mathcal{G}_1\).
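The two-point construction lends itself to a direct executable check (a sketch with ad-hoc names, σ-algebras represented as sets of frozensets): \(\tau\) is a stopping time for the jump filtration, yet \(\{\tau = 1\}\) is missing from \(\mathcal{F}_{1-}\).

```python
# Two-point example: F_t = {∅, Ω} for t < 1 and F_t = 2^Ω for t >= 1,
# with tau(ω1) = 1, tau(ω2) = 2.
w1, w2 = "w1", "w2"
EMPTY, A, AC, OMEGA = frozenset(), frozenset({w1}), frozenset({w2}), frozenset({w1, w2})
TRIVIAL = {EMPTY, OMEGA}
FULL = {EMPTY, A, AC, OMEGA}

def F(t):
    """The jump filtration: trivial before time 1, full from time 1 on."""
    return FULL if t >= 1 else TRIVIAL

tau = {w1: 1, w2: 2}

def event_tau_le(t):
    """The event {tau <= t} as a subset of Ω."""
    return frozenset(w for w in (w1, w2) if tau[w] <= t)

# tau is a stopping time: {tau <= t} ∈ F_t on a grid of times
for t in [0.0, 0.5, 0.99, 1.0, 1.5, 2.0, 3.0]:
    assert event_tau_le(t) in F(t)

# but {tau = 1} = {ω1} is not in F_{1-} = ⋁_{s<1} F_s = {∅, Ω}
F_1_minus = TRIVIAL
print(A in F_1_minus)  # False: the jump at t = 1 is not predictable
```

The grid of test times is of course only a spot check, but on this two-element space the σ-algebras take just two values, so the grid covers every case.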
Exercise 4: Enlargement of Filtrations¶
Let \((W_t)_{t \ge 0}\) be standard Brownian motion with natural filtration \((\mathcal{F}_t^W)\).
(a) Consider the initial enlargement \(\mathcal{G}_t = \mathcal{F}_t^W \vee \sigma(W_1)\).
- Is \(W_{1/2}\) measurable with respect to \(\mathcal{G}_0\)?
- Is \(W_1\) measurable with respect to \(\mathcal{G}_0\)?
(b) Under the natural filtration \((\mathcal{F}_t^W)\), we have \(\mathbb{E}[W_1 \mid \mathcal{F}_t^W] = W_t\) for \(t \le 1\) (martingale property). Compute \(\mathbb{E}[W_1 \mid \mathcal{G}_t]\) for the enlarged filtration. Why does this show \(W_t\) is not a \((\mathcal{G}_t)\)-martingale?
(c) In the progressive enlargement with \(\tau = \inf\{t : W_t = 1\}\), describe qualitatively:
- What information is in \(\mathcal{G}_t\) for \(t < \tau\)?
- What additional information appears in \(\mathcal{G}_t\) for \(t \ge \tau\)?
Solution to Exercise 4
(a)
-
\(W_{1/2}\) is not \(\mathcal{G}_0\)-measurable. We have \(\mathcal{G}_0 = \mathcal{F}_0^W \vee \sigma(W_1) = \{\emptyset, \Omega\} \vee \sigma(W_1) = \sigma(W_1)\). Knowing \(W_1\) does not determine \(W_{1/2}\), since \(W_{1/2}\) and \(W_1 - W_{1/2}\) are independent Gaussians, and \(W_{1/2}\) cannot be recovered from \(W_1 = W_{1/2} + (W_1 - W_{1/2})\) alone.
-
\(W_1\) is \(\mathcal{G}_0\)-measurable, since \(\sigma(W_1) \subseteq \mathcal{G}_0\) by construction.
(b) Since \(W_1\) is \(\mathcal{G}_t\)-measurable for all \(t\) (as \(\sigma(W_1) \subseteq \mathcal{G}_t\)), by the "taking out what is known" property:
\[
\mathbb{E}[W_1 \mid \mathcal{G}_t] = W_1
\]
for all \(t \ge 0\). This is a constant (in \(t\)) random variable.
If \(W_t\) were a \((\mathcal{G}_t)\)-martingale, we would need \(\mathbb{E}[W_1 \mid \mathcal{G}_t] = W_t\) for \(t \le 1\). But \(\mathbb{E}[W_1 \mid \mathcal{G}_t] = W_1 \neq W_t\) for \(t < 1\) (since \(W_t \neq W_1\) a.s. for \(t < 1\)). This shows \(W_t\) is not a \((\mathcal{G}_t)\)-martingale.
(c) For \(t < \tau\): the filtration \(\mathcal{G}_t\) contains all the information in \(\mathcal{F}_t^W\) (the path of Brownian motion up to time \(t\)) plus the information that \(\tau\) has not yet occurred (i.e., \(\{\tau > t\}\), equivalently that \(W_s < 1\) for all \(s \le t\)). Since \(\tau\) has not occurred, we know \(\mathbf{1}_{\{\tau \le t\}} = 0\), but we do not know the exact value of \(\tau\).
For \(t \ge \tau\): the filtration \(\mathcal{G}_t\) additionally contains the exact value of \(\tau\) (since \(\mathbf{1}_{\{\tau \le t\}} = 1\) and \(\tau \cdot \mathbf{1}_{\{\tau \le t\}} = \tau\) reveals the precise hitting time). This means at time \(t \ge \tau\), we know not only the Brownian path up to \(t\) but also the exact moment when the path first reached level 1.
Exercise 5: Verifying the Usual Conditions¶
(a) Verify that the trivial filtration \(\mathcal{F}_t = \{\emptyset, \Omega\}\) satisfies the usual conditions (assuming all null sets are trivial, i.e., \(\mathcal{N} \subseteq \{\emptyset\}\)).
(b) Let \((\mathcal{F}_t)\) be any filtration. Define \(\mathcal{F}_t^+ = \bigcap_{s > t} \mathcal{F}_s\). Prove that \((\mathcal{F}_t^+)\) is right-continuous.
Hint: Show \(\mathcal{F}_t^+ = (\mathcal{F}_t^+)^+ := \bigcap_{s > t} \mathcal{F}_s^+\).
(c) Explain why completeness of \(\mathcal{F}_0\) implies completeness of all \(\mathcal{F}_t\) when the filtration is increasing.
(d) The two-step augmentation procedure (complete, then right-continuify) produces a filtration satisfying the usual conditions. Does the order matter? That is, if we right-continuify first and then complete, do we still get the usual conditions?
Solution to Exercise 5
(a) The trivial filtration \(\mathcal{F}_t = \{\emptyset, \Omega\}\) for all \(t\).
-
Right-continuity: \(\bigcap_{s > t} \mathcal{F}_s = \bigcap_{s > t} \{\emptyset, \Omega\} = \{\emptyset, \Omega\} = \mathcal{F}_t\). So right-continuity holds.
-
Completeness: If all null sets are trivial (\(\mathcal{N} \subseteq \{\emptyset\}\)), then \(\emptyset \in \mathcal{F}_0 = \{\emptyset, \Omega\}\), so \(\mathcal{F}_0\) contains all null sets. Completeness holds.
Therefore the trivial filtration satisfies the usual conditions. \(\square\)
(b) Define \(\mathcal{F}_t^+ = \bigcap_{s > t} \mathcal{F}_s\). We must show \((\mathcal{F}_t^+)^+ = \mathcal{F}_t^+\), i.e., \(\bigcap_{u > t} \mathcal{F}_u^+ = \mathcal{F}_t^+\).
First we show \(\mathcal{F}_u^+ \supseteq \mathcal{F}_t^+\) for \(u > t\): since \(\{v : v > u\} \subset \{v : v > t\}\), the intersection \(\mathcal{F}_u^+ = \bigcap_{v > u} \mathcal{F}_v\) runs over fewer σ-algebras than \(\mathcal{F}_t^+ = \bigcap_{v > t} \mathcal{F}_v\), and intersecting over fewer sets yields a larger intersection. Therefore \(\bigcap_{u > t} \mathcal{F}_u^+ \supseteq \mathcal{F}_t^+\).
Conversely, let \(A \in \bigcap_{u > t} \mathcal{F}_u^+\). Then for every \(u > t\), \(A \in \mathcal{F}_u^+ = \bigcap_{v > u} \mathcal{F}_v\). So for every \(u > t\) and every \(v > u\), \(A \in \mathcal{F}_v\). Given any \(w > t\), choose \(u\) with \(t < u < w\); then \(A \in \mathcal{F}_w\) (taking \(v = w > u\)). Since this holds for all \(w > t\), \(A \in \bigcap_{w > t} \mathcal{F}_w = \mathcal{F}_t^+\).
Therefore \(\bigcap_{u > t} \mathcal{F}_u^+ = \mathcal{F}_t^+\), proving \((\mathcal{F}_t^+)\) is right-continuous. \(\square\)
(c) If \(\mathcal{F}_0\) contains all \(\mathbb{P}\)-null sets and the filtration is increasing (\(\mathcal{F}_0 \subseteq \mathcal{F}_t\) for all \(t \ge 0\)), then every null set \(N \in \mathcal{F}_0 \subseteq \mathcal{F}_t\). So \(\mathcal{F}_t\) also contains all null sets for every \(t \ge 0\). \(\square\)
(d) Perhaps surprisingly, the order does not matter: both procedures yield the same filtration, because completion and right-continuification commute:
\[
\bigcap_{s > t} \sigma(\mathcal{F}_s \cup \mathcal{N}) \;=\; \sigma\Bigl(\bigcap_{s > t} \mathcal{F}_s \,\cup\, \mathcal{N}\Bigr).
\]
The inclusion \(\supseteq\) is immediate, since \(\bigcap_{s > t} \mathcal{F}_s \subseteq \mathcal{F}_u\) for every \(u > t\). For \(\subseteq\), recall that \(A \in \sigma(\mathcal{F}_s \cup \mathcal{N})\) iff there exists \(B \in \mathcal{F}_s\) with \(A \,\triangle\, B\) null. Pick \(s_n \downarrow t\) and \(B_n \in \mathcal{F}_{s_n}\) with \(A \,\triangle\, B_n\) null, and set \(B := \limsup_n B_n\). Then \(B \in \mathcal{F}_{s_m}\) for every \(m\), so \(B \in \mathcal{F}_{t+}\), and \(A \,\triangle\, B \subseteq \bigcup_n (A \,\triangle\, B_n)\) is null; hence \(A \in \sigma(\mathcal{F}_{t+} \cup \mathcal{N})\).
In particular, completing a right-continuous filtration preserves right-continuity, and right-continuifying a complete filtration preserves completeness (null sets in \(\mathcal{F}_0\) remain in \(\bigcap_{s > t} \widetilde{\mathcal{F}}_s\)). Either route produces the usual conditions; the convention of completing first is merely the traditional presentation. \(\square\)