In Depth Discussion

  • English Round Table 서울시 서초구 나루터로 10길 29 (용마일렉트로닉스)

Today is the first class of our new February class period. Class will begin at 10:00am with a casual conversation. Our reading today is about fashion; please try to read as much as possible, and underline any words or sentences that are unfamiliar. Our listening is about AI in schools; please listen and read along with the transcript. We will also complete our grammar sentences.

Click HERE for the reading

AYESHA RASCOE, HOST: A new study finds that using generative AI in education can, quote, "undermine children's foundational development." This comes from the Brookings Institution and its Center for Universal Education. The study also says the damages AI has already caused are, quote, "daunting." Brookings reviewed research and interviewed students, parents, educators and tech experts around the world. NPR's Cory Turner has been reading through the report and joins us now. Welcome.

CORY TURNER, BYLINE: Hey, Ayesha.

RASCOE: The report makes things sound pretty dire. What was your take after reading it?

TURNER: Yeah. I think dire is pretty fair, although I want to start with a glimmer of hope or at least some good news. You know, some kids with disabilities, for example, are benefiting from AI improvements to things like text-to-speech programs. Or imagine being in science class - right? - and because of AI, you're able to visually adventure inside a cell or zip around the solar system. The problem here is that these tools are really the exception right now because they're complex, and they can cost a lot of money that many schools just don't have. And what kids are far more likely to be using in school and at home are these free, easily accessible chatbots.

RASCOE: And that's the kind of AI that these researchers are worried about with kids?

TURNER: Yeah. Exactly. The report lays out really two big buckets of risk here. So first, young people who use this kind of AI aren't learning how to think for themselves. And that's because most of these common chatbots don't actually supplement kids' learning, right? Students just tell them to do something, and then the chatbot does it. Here's Rebecca Winthrop. She's one of the researchers on the study.

REBECCA WINTHROP: They're not learning to parse truth from fiction. They're not learning to understand what makes a good argument. They're not learning about different perspectives in the world because they're not actually engaging in the material.

TURNER: And, Ayesha, Winthrop told me if students rely on this kind of AI too much, it can actually stunt the kind of brain growth, the wiring, that comes from the trying and doing and failing and trying again.

RASCOE: And you said there are two buckets of risk. What's the other one?

TURNER: The other is social-emotional growth. So it's in childhood - right? - that we learn how to get along with others, hopefully, especially people who may look and think and feel differently from us. But these free chatbot tools are designed to be sycophantic. What that means is they tell the user essentially whatever the AI thinks the student wants to hear. For children and teens now, this can be really intoxicating because the user is always right. Again, here's Rebecca Winthrop.

WINTHROP: So if you are on a chatbot, complaining about your parents and saying, they want me to wash the dishes. This is so annoying. I hate my parents - the chatbot will likely say, you're right. You're misunderstood. I'm so sorry. I understand you. Versus a friend, who would say, dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal. That right there is the problem.

TURNER: And, Ayesha, the stakes are obviously a lot higher than kids refusing to do dishes. The stakes are children growing into adults who never learned empathy or how to relate because they spent more time engaging with chatbots than they did with other kids. And Winthrop told me 1 in 3 teens in the U.S. who use AI say they actually prefer talking about important or serious subjects with a chatbot rather than with other people.

RASCOE: So what can be done about this?

TURNER: So the report says AI designed for use by children and teens, for one thing, should be less sycophantic and more what they call antagonistic, so it pushes back on kids' preconceived notions. But one of the biggest recommendations they make is really for governments to do more to regulate the use of AI by children. And in the U.S., we're at a really weird impasse right now. The Trump administration has issued an executive order trying to prohibit states from regulating AI for themselves, but Congress hasn't created any federal regulations so far. So it's - really, for parents and schools, it's kind of the Wild West right now.
