Confronting AlphaGo
The value of human teachers in the age of machines
In March 2016, world champion Go player Lee Sedol was defeated by the computer program AlphaGo in a five-game match. As someone who doesn't play Go, follow professional Go, or study computer science, I shouldn't have cared much. But I did. Go is incredibly complex: if every atom in our universe contained a whole copy of our universe inside it, the total number of atoms in all those universes would still be fewer than the number of legal Go board positions (roughly 10^80 atoms each holding 10^80 more gives about 10^160, while the number of legal positions is on the order of 10^170). Computers can now beat the best humans in the world at Go, one of the most complex games ever created. If computers are better than us at that, what's to stop them from being better at everything else?
I spent many hours over the following months reading articles about the accelerating progress of artificial intelligence, the inevitability of automation, and the imminent uselessness of human work. I would describe what I was going through as a mid-PhD crisis, even though it happened during my fifth year at MIT (and I hope I was well past the midpoint by then). AlphaGo sparked a crisis for me because it challenged a core idea behind why I had come to pursue a PhD in the first place. I wanted to become an engineer to solve problems that would alleviate suffering in the world. Coming out of high school, I thought I already had a fully developed philosophy (utilitarianism), and my aspiration to alleviate suffering was a product of that worldview. I thought I was maximizing my utility by studying engineering, and after earning good grades and gaining some research experience as an undergrad, I thought I could further increase my impact as a professor, expanding human knowledge through academic research.
AlphaGo posed a problem for my philosophy. If AlphaGo could defeat a world champion in 2016, how long would it be before AlphaGradStudent was outperforming me, a decent-but-definitely-not-world-champion PhD candidate? If a computer is better at solving problems than I am, do I still have utility? Would all my research be effectively useless if it didn't contribute to the replacement of human workers by superhuman computers? And, more fundamentally: once computers can out-think humans, what does it mean to be human?
Fortunately, the PhD leaves time for thinking about the big questions. I read philosophy while my code ran, or during the automated portions of my experiments, or often just as pure procrastination. I spent long lunches debating with friends about questions like when we might be able to replace politicians with algorithms. Multiple Friday afternoons ended up dedicated to reading David Foster Wallace essays as I sought a better understanding of the human condition.
[Comic: "Piled Higher and Deeper" by Jorge Cham, www.phdcomics.com]
I didn't find satisfying answers to all my questions, but I found enough to get back on the path of deliberate (rather than dawdling) research progress. And now, on the other side of it, I'm glad I went through my mid-PhD crisis. The goal of the PhD is to become an independent researcher, but I now know there is opportunity for much more than that. Struggling through questions of philosophy has given me a certain measure of self-awareness. I used to envy the students who went through their studies like a day job, applying rote diligence to their work and graduating in a few short years. But now I see it from another perspective: if I approach my work like a robot, what's to stop a robot from replacing me in the not-too-distant future?
After all of the questioning, my career aspirations still lie in academia. The difference, I guess, is that where I used to see my opportunity for impact in research and expanding knowledge, I now see an opportunity to develop students as people. And to me that development of people, of whole human beings, is incredibly important. The main reason I would want to be a professor rather than a stay-at-home dad (#2 on my career aspirations list) is that I worry another professor in my place might want to develop their students only as researchers. To me, being a technical guide but not a philosophical or ethical one is a disservice. I would encourage my students to consider questions of meaning and purpose in addition to the scientific research questions they are tasked with answering.
It is true that someday a computer might be better than I am at helping a student develop as a person, but that doesn't really bother me. Once AlphaMentor is superior to the world's best teacher, a cell phone won't just be an incredibly powerful computer that fits in your pocket; it will be an enlightenment machine. And a world where everyone has access to a superhuman teacher is one I'd be happy to live in, even if it means losing my utility.