Rohit Prasad, an Amazon executive, had an urgent message for ninth and tenth graders at the Dearborn STEM Academy, a public school in Boston’s Roxbury neighborhood.
On a recent morning, he visited the school to watch an Amazon-sponsored artificial intelligence class that taught students how to program simple tasks for Alexa, Amazon’s voice-activated virtual assistant. He assured Dearborn students that there would soon be millions of new jobs in AI.
“We need to create the talent for the next generation,” Mr. Prasad, Alexa’s chief scientist, told the class. “So we’re educating about AI at the earliest grassroots level.”
A few miles away, Sally Kornbluth, the president of the Massachusetts Institute of Technology, was delivering a more sobering message about artificial intelligence to students from local schools gathered at the Kennedy Library complex in Boston for a workshop on AI risks and regulation.
“Because AI is such a powerful new technology, in order for it to function well in society, it does need some rules,” Dr. Kornbluth said. “We have to make sure it doesn’t cause harm.”
The events, held on the same day, one encouraging students to work in AI and the other warning against deploying the technology too hastily, mirrored the broader debate now taking place in the United States over the promise and potential dangers of artificial intelligence.
Both student workshops were organized by MIT’s Responsible AI initiative, whose donors include Amazon, Google and Microsoft. They highlighted a question plaguing school districts across the country this year: How should schools prepare students for a world where, according to some prominent AI developers, the dominance of AI tools seems inevitable?
Teaching artificial intelligence in schools is nothing new. Courses such as computer science and civics now regularly include exercises on the social impact of facial recognition and other automated systems.
But the push for AI education has taken on new urgency this year after news of ChatGPT, a novel chatbot that can generate humanlike homework essays and sometimes produce misinformation, began circulating in schools.
Now, “AI literacy” is a new educational buzzword. Schools are scrambling to find resources to help teach it. Some universities, tech companies and nonprofits are responding with ready-made courses.
Courses are proliferating even as schools are grappling with a fundamental question: Should they teach students to code and use AI tools, providing the technical skills training employers seek? Or should students learn to predict and mitigate the harms of AI?
“We want students to be informed, responsible users and informed, responsible designers of these technologies,” said Cynthia Breazeal, an MIT professor whose team organizes AI workshops for schools. “We want to make them responsible citizens by making them aware of these rapid developments in AI and the many ways they affect our personal and professional lives.”
(Disclosure: I was most recently a fellow at MIT’s Knight Science Journalism Project.)
Schools should also encourage students to consider the broader ecosystem in which AI systems operate, other education experts said. This might include students researching the business models behind new technologies or studying how AI tools leverage user data.
“If we’re engaging students in learning these new systems, we really have to think about the context around these new systems,” said Jennifer Higgs, an assistant professor of learning and mind sciences at the University of California, Davis. But “that part is still missing,” she noted.
The workshops in Boston were part of an AI Day event organized by Dr. Breazeal’s program, which attracted thousands of students around the world. The event offered a glimpse of the varied approaches schools are taking to AI education.
At Dearborn STEM, Hilah Barbot, a senior product manager at Amazon Future Engineer, the company’s computer science education program, led a voice AI class for students. The lessons were developed by MIT in conjunction with the Amazon program, which offers coding lessons and other resources to K-12 schools. The company provided more than $2 million in grants to MIT for the project.
First, Ms. Barbot explained some voice AI jargon. She taught students about “utterances,” the phrases a consumer might say to prompt Alexa to respond.
The students then programmed simple tasks for Alexa, such as telling jokes. Jada Reed, a ninth grader, programmed Alexa to answer questions about Japanese manga characters. “I think it’s really cool that you can train it to do different things,” she said.
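The pattern the students were practicing, mapping a spoken “utterance” to a programmed response, can be sketched in a few lines of plain Python. This is an illustrative stand-in, not Amazon’s actual Alexa developer tooling, and the sample utterances and replies are invented for the example:

```python
# Minimal sketch of the utterance-to-response idea taught in the class.
# NOTE: This is not the Alexa Skills Kit; it is a hypothetical illustration.

RESPONSES = {
    "tell me a joke": "Why did the robot go on vacation? It needed to recharge.",
    "who is the main character of one piece": "Monkey D. Luffy.",
}

def handle_utterance(utterance: str) -> str:
    """Return a programmed response for a recognized utterance."""
    # Normalize the spoken phrase: trim whitespace, lowercase,
    # and drop trailing punctuation before looking it up.
    key = utterance.strip().lower().rstrip("?!.")
    return RESPONSES.get(key, "Sorry, I don't know that one yet.")
```

A real Alexa skill involves an interaction model of intents and sample utterances plus a backend handler, but the core idea, recognizing a phrase and returning a scripted reply, is the same.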
Dr. Breazeal said it was important for students to have access to specialized software tools from leading technology companies. “We’re giving them the future-proof skills and perspective on how to work with AI to do the things they care about,” she said.
Some Dearborn students, who have built and programmed robots at the school, said they were excited to learn how to program a different technology: voice-activated helpers. Alexa uses a range of artificial intelligence techniques, including automatic speech recognition.
At least some students also said they had privacy and other concerns about AI-assisted tools.
After a person says a “wake word” like “Alexa,” Amazon records conversations through its Echo speakers. Unless users opt out, Amazon may use their interactions with Alexa to target them with ads or use their recordings to train its artificial intelligence models. Last week, Amazon agreed to pay $25 million to settle federal charges that it had kept children’s recordings indefinitely, violating federal children’s online privacy laws. The company said it disputed the allegations and denied that it broke the law. Customers can view and delete their Alexa recordings, the company noted.
But the one-hour workshop, led by Amazon, didn’t touch on the company’s data practices.
Dearborn STEM students regularly scrutinize technology, though. A few years ago, the school introduced a course in which students used artificial intelligence tools to create their own deepfake videos (that is, fabricated content) and examine the consequences. And that morning, students pondered the virtual assistant they were learning to program.
“You know there’s a conspiracy theory that Alexa listens to your conversations to show you ads?” asked Eboni Maxwell, a ninth grader.
“I’m not afraid of it listening,” replied Laniya Sanders, another ninth grader. Even so, Ms. Sanders said she avoids voice assistants because “I just want to do it myself.”
A few miles away, at the Edward M. Kennedy Institute for the United States Senate, an education center that houses a full-scale replica of the U.S. Senate chamber, dozens of students from the Warren Prescott School in Charlestown, Mass., explored a different topic: AI policy and safety regulations.
Middle school students took on the roles of senators from different states and participated in a mock hearing in which they debated the terms of a hypothetical AI safety bill.
Some students wanted to ban companies and police departments from using artificial intelligence to target people based on data such as race or ethnicity. Others wanted to require schools and hospitals to assess the fairness of AI systems before deploying them.
The exercise was familiar ground for the middle schoolers. Nancy Arsenault, an English and civics teacher at Warren Prescott, said she often asks her students to consider how digital tools affect them and the people they care about.
“As much as students love technology, they are acutely aware that unfettered AI is not what they want,” she said. “They want to see restrictions.”