To estimate causal effects and build causally-aware machine learning systems, a precise language for representing causal assumptions and deriving identification strategies is necessary. This chapter provides that foundation.
We begin by revisiting Structural Causal Models (SCMs), clarifying the notation and assumptions that underpin much of causal inference. You'll work with advanced graphical representations, including complex Directed Acyclic Graphs (DAGs) and alternative models, to depict intricate causal relationships.
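The structural-equation view of an SCM can be sketched with a toy simulation. Everything below — the DAG Z → X → Y with Z → Y, the variable names, and the coefficients — is an illustrative assumption, not a model from the chapter; the point is only that an intervention do(X = 1) replaces X's structural equation while leaving the rest of the model intact.

```python
import numpy as np

# Toy SCM for the DAG Z -> X -> Y with Z -> Y (names and coefficients are
# illustrative assumptions, not taken from the chapter).
rng = np.random.default_rng(0)
n = 100_000

u_z, u_x, u_y = (rng.normal(size=n) for _ in range(3))  # exogenous noise

# Structural equations: each endogenous variable is a function of its
# parents in the graph plus its own exogenous noise term.
z = u_z
x = 0.8 * z + u_x
y = 1.5 * x + 0.5 * z + u_y

# The intervention do(X = 1) "mutilates" the model: X's structural equation
# is replaced by the constant 1, and the other equations are re-evaluated.
y_do = 1.5 * np.ones(n) + 0.5 * z + u_y
print(round(y_do.mean(), 1))  # ≈ 1.5 = E[Y | do(X = 1)] in this toy model
```

Note that `y_do` is computed from the same noise draws as `y`; only the equation for X changed, which is exactly what distinguishes an intervention from conditioning.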
A core part of this chapter focuses on causal identification: determining whether a causal quantity, such as the post-intervention distribution P(Y∣do(X=x)), can be computed from observational data alone. You will learn to apply the formal rules of Pearl's do-calculus for this purpose and study identification strategies for situations where the standard backdoor and frontdoor criteria are insufficient. We also address complexities such as cycles and feedback loops within causal graphs.
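As a concrete instance of identification, the backdoor adjustment formula E[Y∣do(X=x)] = Σ_z E[Y∣X=x, Z=z] P(Z=z) can be checked on simulated data. The confounded DAG, coefficients, and the true effect of 2.0 below are illustrative assumptions chosen so the bias of the naive contrast is visible.

```python
import numpy as np

# Illustrative confounded DAG: Z -> X, Z -> Y, X -> Y; true effect of X is 2.0.
rng = np.random.default_rng(1)
n = 200_000
z = rng.binomial(1, 0.5, size=n)                 # observed binary confounder
x = rng.binomial(1, np.where(z == 1, 0.8, 0.2))  # treatment depends on Z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)       # outcome

# The naive contrast is biased by the open backdoor path X <- Z -> Y.
naive = y[x == 1].mean() - y[x == 0].mean()

# Backdoor adjustment: E[Y | do(X=x)] = sum_z E[Y | X=x, Z=z] * P(Z=z).
def e_y_do(xv):
    return sum(y[(x == xv) & (z == zv)].mean() * (z == zv).mean()
               for zv in (0, 1))

adjusted = e_y_do(1) - e_y_do(0)
print(f"naive: {naive:.2f}  adjusted: {adjusted:.2f}")  # adjusted ≈ 2.0
```

The naive contrast lands well above 2.0 because treated units are disproportionately drawn from the Z = 1 stratum; averaging the stratum-specific contrasts over the marginal distribution of Z recovers the true effect.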
Finally, you will learn about sensitivity analysis techniques to evaluate how violations of identification assumptions might affect your conclusions. The chapter concludes with practical exercises applying these identification principles to challenging causal problems.
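As a small taste of sensitivity analysis, a common exercise is to ask how a hypothetical unmeasured confounder of growing strength would distort a naive estimate. The linear model and the strength grid below are illustrative assumptions, not a method prescribed by the chapter.

```python
import numpy as np

# Sensitivity sketch: an unobserved confounder U affects both X and Y with
# strength s; we track how the naive regression slope drifts from the truth.
rng = np.random.default_rng(2)
n = 100_000
true_effect = 1.0

estimates = {}
for s in (0.0, 0.5, 1.0):
    u = rng.normal(size=n)                  # unmeasured: we cannot adjust for it
    x = s * u + rng.normal(size=n)
    y = true_effect * x + s * u + rng.normal(size=n)
    slope = np.polyfit(x, y, 1)[0]          # naive OLS slope of Y on X
    estimates[s] = slope
    print(f"confounder strength {s:.1f} -> naive estimate {slope:.2f}")
```

In this linear setup the bias is s² / (s² + 1), so the printed estimates climb from roughly 1.0 toward 1.5 as s grows; a sensitivity analysis reports how large s must be before the qualitative conclusion changes.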
1.1 Structural Causal Models Revisited
1.2 Advanced Graphical Representations: DAGs and Beyond
1.3 Do-calculus Rules and Applications
1.4 Identification Beyond Standard Criteria
1.5 Addressing Cycles and Feedback in Causal Graphs
1.6 Sensitivity Analysis for Identification Assumptions
1.7 Practice: Applying Identification Logic
© 2025 ApX Machine Learning