“If we are talking primary school, AI is probably moving too fast currently for it to make sense to try to integrate specific hot new ideas into the curriculum. But providing a broader base of computer science education and some opportunity to try programming sounds like a good idea.” — Nick Bostrom
In nearly every industry, smart machines have edged their way into our lives. Whether their longer-term impact proves beneficial (curing diseases, reversing climate change, eradicating food shortages) or whether it shatters our lives (automating our jobs, threatening our personal security, increasing inequality), we can be fairly sure that we are dealing with an intelligence radically different from our own. If we are to flourish as a species alongside a superintelligence (an intellect that outperforms human brains in virtually all fields) still in development, we're going to need a good dose of humility and a great deal of preparation. What can our education systems do now?
Swedish philosopher Nick Bostrom is a Professor at Oxford University and the founding Director of the Future of Humanity Institute, an institute that studies the future of the human species. At the institute, Bostrom identifies threats to humanity and studies how to reduce their likelihood or prevent them entirely. He emphasizes that "superintelligent AI should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals."
Bostrom, who directs the Governance of Artificial Intelligence Program, is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014), a New York Times bestseller.
He joins us in The Global Search for Education today to talk about the impact of AI on the future of education.
“Things in 2050 are difficult to predict because we don’t know whether machine superintelligence will have happened by then. If it has, then the world could have been very profoundly transformed indeed.” — Nick Bostrom
What role do you think AI should play in future schools in terms of the learning process as well as curriculum offerings?
If we are talking primary school, AI is probably moving too fast currently for it to make sense to try to integrate specific hot new ideas into the curriculum. But providing a broader base of computer science education and some opportunity to try programming sounds like a good idea. From the point of view of teaching general problem solving skills to kids, basic concepts from programming and some good old-fashioned AI techniques offer more value than the latest neural network stuff.
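To make "good old-fashioned AI" concrete: breadth-first search is one of the classic techniques often used to teach systematic problem solving. A minimal Python sketch (the maze and function names below are illustrative, not something Bostrom prescribes):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search over a grid of 0 (open) and 1 (wall).

    Returns the cells on a shortest path, or None if the goal
    is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # cell -> predecessor on the path

    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk predecessors back from the goal to rebuild the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no route exists

maze = [
    [0, 0, 1],
    [1, 0, 1],
    [0, 0, 0],
]
print(bfs_path(maze, (0, 0), (2, 2)))
# [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
```

The frontier-and-visited pattern here underlies many classic AI search methods, which is part of what makes it a durable teaching example independent of the latest neural network fashions.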
AI could also contribute to the learning process by making it possible to fine-tune teaching materials and exercises to the attributes of the learner. However, I think there is much unpicked low-hanging fruit (in, say, online learning) that doesn’t require fancy machine learning to implement; so I don’t expect current-level AI capabilities to make much difference there.
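As a sketch of what such non-ML "low-hanging fruit" might look like in an online learning system (the three-answer rule and the names here are hypothetical, purely for illustration): exercise difficulty can be adapted to a learner with a few lines of plain logic, no machine learning required.

```python
def next_difficulty(current, recent_results, step=1, lower=1, upper=10):
    """Pick the next exercise difficulty from recent answers.

    A plain if/else heuristic, not machine learning: step up after
    a streak of correct answers, step down after a streak of
    mistakes, otherwise hold steady.
    """
    window = recent_results[-3:]  # look at the last three answers
    if len(window) == 3 and all(window):
        return min(current + step, upper)  # mastered: go harder
    if len(window) == 3 and not any(window):
        return max(current - step, lower)  # struggling: go easier
    return current                         # mixed results: stay put

# A learner who got the last three exercises right moves up a level.
print(next_difficulty(4, [True, False, True, True, True]))  # 5
```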
What do you think will be the biggest differences for students graduating from school in 2050 compared to those who graduated this year? How much will the world have changed from the one their older peers lived in, including the kinds of work opportunities they might be looking at?
Things in 2050 are difficult to predict because we don’t know whether machine superintelligence will have happened by then. If it has, then the world could have been very profoundly transformed indeed. If not, then maybe as a baseline we should expect about the same amount of change that has taken place between 1986 and 2018.
“As the world gets richer, it seems we should focus less on making money and more on using our historically unprecedented wealth in ways that create value.” — Nick Bostrom
Given all the changes that AI is bringing to our world, what other kinds of things do you believe parents and teachers should be focusing on to ensure kids can flourish in their new world?
Maybe having fun and being able to have meaningful, fulfilling leisure? As the world gets richer, it seems we should focus less on making money and more on using our historically unprecedented wealth in ways that create value. This could change if the technological frontier moves in ways that increase the marginal utility of money, for example if expensive but effective forms of life extension became available.
Your research has also focused on ethics and policy. What do you believe are the top ethical issues in AI that humanity needs to focus on?
The common good principle, that superintelligent AI should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals, is worth emphasizing. I think we also need to start moving the idea that digital minds can have varying degrees of moral status into the Overton window.
What tips would you give the next generation about being future-ready to co-exist with AI?
Practice cosmic humility.
Thank you, Nick.
C. M. Rubin and Nick Bostrom