Public Shame Is the Most Effective Tool for Battling Big Tech
Our federal government won’t regulate, but regular Americans aren’t helpless.
Jessica Grose
January 14, 2026
[Illustration by Eleanor Davis: people pushing away a giant teddy bear with a dollar sign in its eye.]


The toymaker Mattel announced in June a “strategic collaboration” with OpenAI to “bring the magic of A.I. to age-appropriate play experiences.” The backlash was swift. I don’t trust any A.I. companies to make “play experiences” that are positive for minors, because they seem to be prioritizing fast growth over user safety for customers of all ages.

This headline, from CNN in November, was predictable to anyone paying attention: “Sales of A.I.-Enabled Teddy Bear Suspended After It Gave Advice on B.D.S.M. Sex and Where to Find Knives.” That particular teddy bear — the Kumma bear from FoloToy — was pulled after researchers from the consumer advocacy group U.S. PIRG Education Fund were able, in longer conversations with the teddy, to get past its flimsy safety guardrails.

In addition to inappropriate conversations, A.I. toys pose data privacy risks to children, and chatbots may interfere with children’s normal social and emotional development. After months of parent and activist group complaints, Mattel confirmed to Axios in December that it was delaying the release of any A.I.-powered toys.

This may seem like a small victory. It’s a delay, not a cancellation. But I would argue that it’s a signal of something potentially bigger, if individuals and interest groups continue to draw attention to the fatal flaws in these products.

It might be harder to shame the tech companies themselves into making their products safer, but we can shame third-party companies like toymakers, app stores and advertisers into ending partnerships. And with enough public disapproval, legislators might be inspired to act.

In some of the very worst corners of the internet might lie some hope.

The proliferation of deepfake child sexual abuse material seems to be the one area of A.I. that our federal government actually cares about. In May, President Trump signed the Take It Down Act into law. It requires tech companies to remove, within two days of notification, nonconsensual sexually explicit images, including A.I.-generated deepfakes. The law was passed with overwhelming bipartisan support — the kind of across-the-aisle collaboration that basically doesn’t happen anymore.

This should make the problem of Grok, the A.I. chatbot created by Elon Musk’s company xAI, easier to solve. Grok has been creating “sexualized images of what appear to be minors,” according to Wired’s Matt Burgess. This has been going on for several months. Musk cannot be shamed, and somehow continues to brag about how Grok is “solid as a rock” and even inspired by the pursuit of truth and beauty.

But public opprobrium about these images has forced his company to respond, including gesturing at concessions, even if it is not yet fixing the problem at its root. Musk’s X platform says its policy is to “take action” against users making child sexual abuse deepfakes, and the company was pressured into restricting access to image generation and editing to paying subscribers only, though many have pointed out that this isn’t fixing the problem, just monetizing it.

Lawmakers outside the United States, perhaps because they see a lack of major changes from Grok, have condemned it and have tried to hold it legally accountable. Indonesia and Malaysia blocked access to the chatbot over the weekend. Regulators from Brazil to France have vowed to take action against the creation of these images. Britain's media regulator opened an investigation into X on Monday, and the technology minister, Liz Kendall, told Reuters that the government is considering legislation that would outlaw tools that allow users to create deepfakes.

Advertisers in Britain were fleeing X even before this latest publicity nightmare. According to The Guardian, ad revenues from British companies fell 60 percent in 2025. X admits this loss of revenue is “primarily driven by a reduction in spend from large brand advertisers due to concerns about brand safety, reputation and/or content moderation,” and claims it is taking measures.

Three Democrats in the U.S. Senate, Ron Wyden, Ed Markey and Ben Ray Luján, wrote a letter to Google and Apple on Friday urging them to remove the X and Grok apps from their stores. “Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices,” the senators wrote.

Without more public shaming, the seemingly implacable forward march of A.I. will continue unchecked. Our secretary of education thinks A.I. — or as she puts it, A1, like the steak sauce — should be integrated into every year of K-12 schooling. President Trump signed an executive order that would allow the federal government to override state laws regulating A.I. in order to “sustain and enhance the United States’ global A.I. dominance.” While Trump’s executive order notes that it will not pre-empt state laws that relate to child safety protection, I would not put money on the administration siding with children over tech company lobbyists.

Negative publicity has worked before with social media companies: as of October, 80 percent of parents surveyed by the Pew Research Center said that “the harms of their child using social media outweigh the benefits.” Because of that negative view of social media, Americans were already attuned to the downsides of chatbots by the time A.I. became widespread. For example, the more students use A.I. at school, the more concerned the kids themselves are about the various harms the technology may perpetuate.

As Jay Caspian Kang noted in The New Yorker recently, changing social norms around kids and tech use can be powerful, and reforms like smartphone bans in schools have happened fairly quickly, and mostly on the state and local level. A desire to keep our children from the deliberate chaos of the internet crosses basically all demographic groups. “The signs of this quiet revolution waged on behalf of internet-addicted children are already all around us,” Kang wrote.

I am mildly optimistic that he is right, and I hope that every appalled adult takes some small step to signal disgust, whether that’s writing an angry letter to Barbie’s boss, quitting a once-beloved social platform or joining a children’s tech advocacy group.

End Notes

  • In case you missed it, I had a rousing conversation with Tressie McMillan Cottom and Meher Ahmad on “The Opinions” about how there is a new transparency around plastic surgery, hair transplants and injections. Take a listen here or wherever you get your podcasts.
  • A post on the White House’s X account with a strangely darkened photo of Health Secretary Robert F. Kennedy Jr. declares: “We are ending the war on protein.” I was not aware we were in a battle with this macronutrient, which is so ubiquitous that it is in everything from popcorn to water, even when it has no business being in those things.
  • Feel free to drop me a line about anything here.
