AI safety researcher and co-founder of MIRI (Machine Intelligence Research Institute). Known for his writings on rationality and AI risk. Eliezer Yudkowsky warns of near-certain catastrophe from unaligned AGI, advocating fail-safe shutdown mechanisms and precise goal specification to guard against goal misgeneralization.