February 29, 2024

Marisa Shuman’s computer science class at the Bronx School for Young Women in Leadership started as usual on a recent January morning.

Just after 11:30, energetic 11th and 12th graders flooded into the classroom, sat down at communal study tables and pulled out their laptops. They then turned to the front of the room and stared at a whiteboard on which Ms. Shuman had posted a question about wearable technology, the topic of the day's class.

For the first time in her decade-long teaching career, Ms. Shuman did not write the lesson plan herself. She generated the course materials using ChatGPT, a new chatbot that relies on artificial intelligence to provide written answers to questions in clear prose. Ms. Shuman is using the algorithmically generated lessons to examine chatbots' potential usefulness and the pitfalls her students may encounter.

“I don’t care if you know about wearable technology today,” Ms. Shuman told her students. “We are evaluating ChatGPT. Your goal is to determine whether the lesson is effective.”

Across the United States, universities and school districts are scrambling to respond to new chatbots that can generate humanlike text and images. But while many districts have banned ChatGPT in an attempt to prevent its use as a cheating aid, teachers like Ms. Shuman are using these innovations to encourage more critical thinking in the classroom. They push students to question the hype surrounding rapidly evolving AI tools and to consider the potential side effects of these technologies.

The goal, the educators say, is to train the next generation of technology creators and consumers in “critical computing” — an analytical approach in which understanding how to critique computer algorithms is as important as, if not more important than, understanding how to program a computer.

New York City Public Schools, the largest district in the United States, serving approximately 900,000 students, is training a cadre of computer science teachers to help their students identify AI bias and potential risks. The training includes discussions of flawed facial recognition algorithms that are more accurate at identifying white faces than darker-skinned faces.

In Illinois, Florida, New York, and Virginia, some middle school science and humanities teachers are using AI literacy lessons developed by researchers at the MIT Scheller Teacher Education Program. One lesson asks students to consider the ethics of powerful AI systems known as “generative adversarial networks,” which can be used to produce fake media content, like realistic videos of famous politicians saying things they never actually said.

As generative artificial intelligence technologies proliferate, educators and researchers say understanding such computer algorithms is a critical skill students need to navigate everyday life and engage in civic and social activism.

“It’s important for students to understand how AI works because their data is being collected and their user activity is being used to train these tools,” said Kate Moore, an MIT education researcher who helps create AI courses for schools. “Decisions are being made about young people’s use of AI, whether they know it or not.”

To observe how some educators encourage their students to take a closer look at AI technologies, I recently spent two days visiting the Bronx School for Young Women in Leadership, a girls’ public middle and high school that is at the forefront of this trend.

The hulking beige brick school specializes in math, science and technology. It serves nearly 550 students, most of whom are Latino or Black.

It is by no means a typical public school. Teachers are encouraged to help their students become, as the school’s website puts it, “innovative” young women with the skills to complete college and “influence public attitudes, policy and law to create a more socially just society.” The school also boasts an enviable 98 percent four-year high school graduation rate, significantly higher than the New York City average.

On a January morning, about 30 ninth and tenth graders, many in navy blue sweatshirts and gray pants, strode into a class called Software Engineering 1. This hands-on course introduces students to coding, computer problem solving, and the social impact of technological innovation.

It’s one of several computer science courses at the school that ask students to consider how popular computer algorithms — often developed by tech company teams composed mostly of white and Asian men — might have disparate effects on groups such as immigrants and low-income neighborhoods. The topic of the morning: facial-matching systems that can have trouble recognizing darker-skinned faces, like those of some of the students and their families in the room.

Standing in front of the class, Abby Hahn, the computer teacher, knew her students might be shocked by the subject: faulty facial-matching technology has led to the wrongful arrests of Black men.

So Ms. Hahn reminded her students that sensitive topics like racism and sexism would be discussed in class. Then she played a 2018 YouTube video by Joy Buolamwini, a computer scientist, showing how some popular facial analysis systems incorrectly identified iconic Black women as men.

Some students gasped as the class watched the video. In it, Amazon’s technology labeled Oprah Winfrey as “appearing to be male” with a 76.5 percent confidence level. Elsewhere in the video, Microsoft’s system mistook Michelle Obama for “a young man in a black shirt,” while IBM’s system identified Serena Williams as “male” with 89 percent confidence.

(Microsoft and Amazon later announced improvements to the accuracy of their systems, while IBM stopped selling such tools. Amazon said it was committed to continuously improving its facial analysis technology through customer feedback and collaboration with researchers, and Microsoft and IBM said they were committed to the responsible development of AI.)

“I’m appalled that women of color are seen as men, even though they don’t look like men at all,” said Nadia Zadine, a 14-year-old student. “Does Joe Biden know about this?”

Ms. Hahn said the purpose of the AI bias lesson was to show student programmers that computer algorithms can be flawed, just like the cars and other products that humans design, and to encourage them to challenge problematic technologies.

“You are the next generation,” Ms. Hahn said to the young women at the end of the lesson. “Will you let this happen when you are out in the world?”

“No!” A group of students responded in unison.

A few doors down the hall, in a colorful classroom filled with handmade paper snowflakes and origami cranes, Ms. Shuman was preparing to teach a more advanced programming course, Software Engineering 3, that focuses on creative computing such as game design and art. Earlier that week, her student coders had discussed how new AI systems like ChatGPT can analyze vast amounts of information and then generate humanlike text and images based on short prompts.

As part of the lesson, students in grades 11 and 12 read news articles about how ChatGPT can be both useful and error-prone. They also read social media posts about how chatbots were prompted to generate text promoting hate and violence.

But the students could not try ChatGPT in class on their own. The school district had blocked the chatbot, worried it might be used for cheating. So the students asked Ms. Shuman to use it to create a lesson for the class as an experiment.

Ms. Shuman spent hours at home prompting the system to generate lessons about wearable technology like smartwatches. In response to her specific prompts, ChatGPT produced a highly detailed 30-minute lesson plan — including a warm-up discussion, a reading on wearable technology, class exercises, and a wrap-up discussion.

At the start of class, Ms. Shuman asked the students to spend 20 minutes treating the scripted lesson as if it were a real class on wearable technology. Then they analyzed ChatGPT’s effectiveness as a simulated teacher.

In small groups, students read aloud robot-generated information about the convenience, health benefits, brand names and market value of smartwatches and fitness trackers. Students groaned as they read out ChatGPT’s bland sentences—”Examples of smart glasses include Google Glass Enterprise 2″—and said it sounded like marketing copy or a rave product review.

“It reminds me of fourth grade,” said Jayda Arias, 18. “Very bland.”

The class found the lesson tedious compared with those of Ms. Shuman, a charismatic teacher who crafts course material for her specific students, asks them provocative questions, and comes up with relevant, real-world examples on the spot.

“The only part of the lesson that works is that it’s easy,” Alexania Echevarria, 17, said of the ChatGPT material.

“ChatGPT seems to like wearable technology,” said another student, Alia Goddess Burke, 17. “It’s biased!”

Ms. Shuman’s lessons go beyond learning how to spot AI bias. She is using ChatGPT to send a message to her students that AI is not inevitable and that young women have the power to challenge it.

“Should your teachers be using ChatGPT?” Ms. Shuman asked toward the end of the session.

The students’ answer was a resounding “No!” At least for now.