On Friday night, after work, I crossed the street to Boston’s Museum of Science, where my cousin, my boyfriend, and I settled into our seats in an auditorium to hear two experts’ takes on artificial intelligence and where it could take us in the coming decades.
The discussion, titled Life 3.0, was free and open to the public. It was funny, terrifying, enlightening, optimistic, and unabashedly political. The speakers, Max Tegmark (who looked like he'd driven to the event on a motorcycle) and Erik Brynjolfsson, are good friends.
Brynjolfsson serves as the director of the MIT Initiative on the Digital Economy and is a well-known scholar in information systems and economics. Tegmark teaches physics at MIT and cofounded the Future of Life Institute, which Elon Musk has helped fund. Each has recently published a book on where technology is taking us: Brynjolfsson co-wrote The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies with Andrew McAfee, and Tegmark wrote Life 3.0: Being Human in the Age of Artificial Intelligence.
After giving separate presentations, Tegmark and Brynjolfsson lounged at a small table onstage and, over the course of a lively discussion punctuated by dark humor, pushed each other to elucidate their predictions for AI's anticipated impact on humankind. Toward the end, the audience joined in for a Q&A.
I was intrigued by how much the discussion touched on economic and civic considerations. Our technology is growing increasingly powerful. Siri can understand us pretty well already, and we see driverless cars making our commuting lives safer and more efficient. AI could render plenty of current occupations obsolete, from sales to military service. And, as Tegmark and Brynjolfsson both asked, what does all that mean for our society?
Could all the wealth go to a few, while impoverishing the many? Or could AI completely change the infrastructure of our day-to-day lives, leveling the playing field and creating a utopia where people don’t need to work for access to shelter and food?
In that case, without the need for a career, how will we redefine what it means to have a purpose?
And what will advanced “intelligent” technology mean for diplomacy? We’re already seeing its potential consequences in North Korea’s disturbing threats. (Elon Musk has actually said AI carries “vastly more risk” than North Korea’s nuclear capabilities.) What kinds of agreements must we make, and what kinds of restrictions can and should we collectively put on our technology?
Tegmark and Brynjolfsson acknowledged the grave danger of unregulated AI and what it could mean for our society and economy, but attempted to frame it for us in a positive light: “It’s not a matter of us making predictions,” Brynjolfsson said, “it’s a matter of us making choices.”
And I get it. What I heard, above all, is that we need to act now. This is a matter of us electing the right people in 2018 and in 2020. It’s a matter of us finding ways to empathize with and protect one another. It depends on us learning to see the big picture, understanding the consequences of bad policy and thoughtless leadership, and agreeing to limits on technology where necessary.
I walked out into the night with a new appreciation for technology, for humans, and for their growing connection, determined more than ever to fight for better leadership for our country in the coming years.