
Global Catastrophic Risks Conference - Speakers Biographies

Eliezer Yudkowsky, Research Fellow, Singularity Institute for Artificial Intelligence

Eliezer Yudkowsky is a Research Fellow at the Singularity Institute for Artificial Intelligence, where he works on extending decision theory to describe self-modifying agents (agents that think about how to think and can change their own source code). Yudkowsky also has a secondary focus on the implications of recent advances in cognitive psychology and Bayesian mathematics for human rationality. He currently blogs at the econblog Overcoming Bias with the intent of turning the material into a book. Yudkowsky is concerned with transhumanist ethics and was one of the founding Directors of the World Transhumanist Association. He is one of the major modern advocates of I. J. Good's "intelligence explosion" hypothesis, which holds that smarter-than-human minds can create still smarter minds in a positive feedback loop.