
Carl Shulman
Carl Shulman quantifies existential risk from misaligned AI as potentially exceeding that of nuclear war, advocating preemptive governance and technical alignment research. His work emphasizes that even modest probabilities (e.g., a 10% chance of catastrophe by 2070) justify drastic mitigation efforts given the civilization-scale stakes.