There’s a lot we don’t know about the workplaces of the future. After all, some 60 per cent of jobs that exist today did not exist in the 1940s. Still, K12 education aims to prepare kids for the future, whether that be in college or career, and the odds are pretty good that artificial intelligence (AI) will play some sort of role in that future. Researchers working with Microsoft estimate that 19 per cent of workers may see at least half of their job tasks affected by AI tools. How do educators prepare students for a road that doesn’t yet exist and is sure to be paved with AI?
The spectrum of K12 AI opinions: “steroids vs a coach”
As with so many new technologies, fear clouds our opinions of AI more than anything. Educators who grew up with card catalogues and the Dewey Decimal System have watched data collection and sourcing change at breakneck speed (is it okay yet to cite Wikipedia, given its stringent editing policies?). It’s hard to undo years of training. These traditionalists are more likely to view AI as a threat, a usurper of students’ blossoming skills and knowledge. After all, why learn something if a word calculator in your pocket can give you the right answer on command?
These educators worry that students will turn to generative AI to do their thinking for them, neglecting to develop their own skills and master their own concepts; a scary prospect, given that AI itself tells us it cannot be trusted. To them, AI in the classroom is akin to a bodybuilder abusing performance-enhancing drugs: it isn’t real progress, and the risk is too great for experimentation.
Educators with a slightly rosier view of AI hope students will take a more judicious approach: rather than over-relying on AI, they will learn to delegate necessary but tedious tasks, such as summarising, translating, or analysing a text. Ideally, students will use the time they save, and the ideas they wouldn’t have intuited on their own, to reach a deeper understanding of both the course materials and the craft of prompt writing. Two birds!
Finally, AI’s biggest cheerleaders have even greater dreams for the role of large language models (LLMs) in students’ learning. They envision personalized tutoring tailored to a student’s specific abilities, available at a low cost—a great equalizer for student access to one-on-one coaching.
With such a wide spectrum of opinions, how do K12 administrators go about crafting AI policies and expectations? Two cultural shifts can help point the way.
Two big shifts impacting the future of work
Since the COVID-19 pandemic upended life in 2020, Americans have reached the collective realization that remote and hybrid work environments are possible (dubiously so in K12, but let’s keep our sights on the future workforce outside schools). Meanwhile, generative AI has become genuinely useful in the workplace rather than just a neat parlour trick or internet troll bridge.
These two enormous shifts have come to fruition recently and will change the workplace forever. Schools must shift along with them, but how?
First, the all-important mantra for working with AI: it’s faster, but less accurate, than people. In one study of consultants using AI tools, those who accepted the AI’s suggestions were about 19 percentage points less likely to produce correct solutions.
AI is a terrible sheriff, but it might be a pretty good deputy. AI can aid critical thinking by serving as a “provocateur” rather than a mere assistant. Users achieve this by changing both how they prompt the AI and how they think about it: assigned the role of a debate opponent, the AI no longer looks like a flawless sidekick, and students become more likely to question its responses deeply. Since those responses are, by design, opposed to the student’s beliefs or thesis, students also stand to learn more than when AI serves as an assistant alone.
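To make the idea concrete, here is a minimal sketch of a “provocateur” prompt. It assumes the OpenAI Python SDK; the model name and prompt wording are illustrative, and any chat-style model would work the same way.

```python
# A minimal sketch of "AI as provocateur": the system prompt assigns the
# model a debate-opponent role instead of an assistant role.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROVOCATEUR_PROMPT = (
    "You are a debate opponent, not an assistant. The student will state a "
    "thesis. Argue the strongest opposing case: challenge their evidence, "
    "surface counterexamples, and never simply agree."
)

def challenge(thesis: str) -> str:
    """Return the model's strongest counterargument to the student's thesis."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; swap in any chat model
        messages=[
            {"role": "system", "content": PROVOCATEUR_PROMPT},
            {"role": "user", "content": thesis},
        ],
    )
    return response.choices[0].message.content

print(challenge("Social media does more good than harm for teenagers."))
```

The only change from an ordinary assistant setup is the system prompt, which is the point: the student’s framing, not the technology, determines whether the AI behaves like a sidekick or a sparring partner.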
And a UX shift to help cue healthy AI scepticism
The primary worry of people who distrust AI centres on its hiccups in understanding: hallucinations, as the industry calls them, which produce errors that overly trusting users are likely to fall for. User experience designers are learning along with users. Recent data shows that simply highlighting the low-confidence portions of an AI’s response is enough to prompt users to double-check their sources. Picture the red underline that appears beneath a misspelt word: an AI interface can use the same UX principle to flag the pieces of information it is less confident about.
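Here is one way that pattern might look in code: a small sketch that takes tokens paired with confidence scores (which could be derived from a model’s token log-probabilities) and wraps the shaky ones in an HTML span the interface can underline. The threshold, class name, and scores are all illustrative assumptions, not from any real product.

```python
# A minimal sketch of the "red underline" pattern: given tokens paired with
# model confidence scores, wrap low-confidence spans in HTML so the UI can
# underline them for the reader to verify.
# The cutoff and scores below are illustrative, not from any real model.
import html

LOW_CONFIDENCE = 0.6  # illustrative cutoff

def flag_low_confidence(tokens: list[tuple[str, float]]) -> str:
    """Render tokens as HTML, marking any below the confidence cutoff."""
    parts = []
    for text, confidence in tokens:
        escaped = html.escape(text)
        if confidence < LOW_CONFIDENCE:
            parts.append(f'<span class="low-confidence">{escaped}</span>')
        else:
            parts.append(escaped)
    return "".join(parts)

# Hypothetical example: the date gets flagged, cueing the reader to check it.
tokens = [("The treaty was signed in ", 0.97), ("1887", 0.41), (".", 0.99)]
print(flag_low_confidence(tokens))
# -> The treaty was signed in <span class="low-confidence">1887</span>.
```

Just as a red squiggle doesn’t fix a typo but prompts the writer to look twice, the flag doesn’t correct the AI; it nudges the student into the verification habit this whole section argues for.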
We can’t predict the future, but we can use the tools we have now in strategic ways to prepare students for what we think is coming. As much as AI can seem daunting, scary, or just unsettling, students today must build skills in prompting, analysing, and second-guessing AI.
Their futures depend on it.