Here's this week's free edition of Platformer: a look at Meta's decision to replace part of its content operation with Community Notes, a novel but limited approach to fighting misinformation that stops well short of Meta's previous efforts. Maybe kick in a few bucks to support our work? You can do that by upgrading your subscription today. We'll email you all our scoops first, like our recent one about Meta's new hate-speech guidelines. Plus you'll be able to discuss each day's edition with us in our chatty Discord server, and we’ll send you a link to read subscriber-only columns in the RSS reader of your choice.
Eager to curry favor with the incoming Trump administration, Meta kicked off 2025 by announcing a large-scale retreat from its content moderation and fact-checking efforts in the United States. The company had invested billions of dollars in building sophisticated automated systems that detected and removed vast quantities of misinformation, hate speech, and other things that most people would rather not see. But scared of retribution from Trump, and frustrated with high error rates in its systems, CEO Mark Zuckerberg said the company would stop scanning most new posts for violations and instead begin relying more on the user base to report instances of harm.

As part of these changes, Meta also said it would stop funding third-party fact-checking organizations, which it said had broadly failed to improve trust in its platforms. In their place, Zuckerberg said the company would borrow a feature from X: Community Notes, a way for users to add context to posts by providing additional information about what they are seeing. If a viral photo of a celebrity was actually generated by artificial intelligence, or an image presented as breaking news was actually taken years earlier, Community Notes can help audiences understand the truth of what they’re seeing.

In a sign of how abruptly Meta had decided to abandon years of investment in content moderation, work on its Community Notes feature had barely begun when Zuckerberg announced it was coming. But today the company announced that it is almost ready to launch: on Tuesday, the first notes will begin appearing across Facebook, Instagram, and Threads.

“We expect Community Notes to be less biased than the third party fact checking program it replaces because it allows more people with more perspectives to add context to posts,” the company said in an unsigned blog post.

Let’s start with the one true innovation that Community Notes brought to content moderation after Twitter began experimenting with them in 2021. Community Notes are the most prominent example to date of a product that uses what’s known as bridging-based ranking — algorithms that reward behavior that bridges people with different views.

When Twitter expanded the program then known as Birdwatch, it decided to display notes on tweets only if they had been marked as helpful by people who normally disagree. If a note was upvoted only by people with left-leaning or right-leaning views, it would not appear. But if a note could get left-leaning and right-leaning users alike to agree that it was helpful, Twitter would display it right on the tweet. This incentivized people to add notes that were based in fact rather than opinion. In its early tests, Twitter found that survey respondents were 20 to 40 percent less likely to agree with tweets rated as misleading after they read Birdwatch notes.

This is good and useful work, even if it stops short of what authors Aviv Ovadya and Luke Thorburn called for in their original proposal for bridging-based ranking. “Imagine if Facebook rewarded content that led to positive interactions across diverse audiences, including around divisive topics,” they wrote in 2022. “How might that change what people, posts, pages, and groups are successful?” I’d still love to find out.
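To make the bridging mechanism concrete, here is a minimal sketch in Python. It is illustrative only: the scorer X actually open-sourced infers each rater's viewpoint from their rating history using matrix factorization, whereas this toy assumes the viewpoint clusters are already known, and the `min_raters` and `min_helpful_share` thresholds are invented for the example.

```python
# A toy version of bridging-based note display: a note ships only if
# raters from different viewpoint clusters independently rate it helpful.
# Cluster labels and thresholds are assumptions for illustration; X's
# production scorer infers viewpoints from rating history instead.
from dataclasses import dataclass

@dataclass
class Rating:
    rater_cluster: str  # e.g. "left" or "right"; inferred in practice
    helpful: bool       # did this rater mark the note helpful?

def should_show(ratings: list[Rating],
                min_raters: int = 5,
                min_helpful_share: float = 0.6) -> bool:
    """Return True only when every viewpoint cluster, on its own,
    rates the note helpful -- one-sided agreement is not enough."""
    if len(ratings) < min_raters:
        return False  # not enough signal yet
    clusters = {r.rater_cluster for r in ratings}
    if len(clusters) < 2:
        return False  # no cross-viewpoint signal yet
    for cluster in clusters:
        votes = [r.helpful for r in ratings if r.rater_cluster == cluster]
        if sum(votes) / len(votes) < min_helpful_share:
            return False  # this side doesn't find the note helpful
    return True

# A note praised by only one side is held back...
partisan = [Rating("left", True)] * 6 + [Rating("right", False)] * 4
# ...while cross-cluster agreement gets it displayed on the post.
bridging = ([Rating("left", True)] * 5
            + [Rating("right", True)] * 4
            + [Rating("right", False)])
print(should_show(partisan))  # False
print(should_show(bridging))  # True
```

The design choice worth noticing is that raw vote totals never matter on their own; only agreement across the divide does, which is what nudges note writers toward verifiable facts rather than opinion.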
That core innovation aside, though, Community Notes have several important limitations, according to former Twitter executives I’ve interviewed.

One, despite being available globally, notes have appeared relatively rarely outside the United States. Some of those executives speculate that this is due to a lack of trust in the company, particularly in countries with more authoritarian leaders. If you correct a comment made by your fascist president, will X share that correction with the government? The fear alone has been enough to limit notes’ usefulness abroad.

Two, Community Notes take longer to appear on viral posts than Meta’s fact checks typically did. This is a downside of bridging-based ranking: it typically takes many hours for a note to accumulate enough votes from people with different perspectives for the system to deem it helpful. In situations where every minute matters — the aftermath of a tragedy, or the midst of an election — Community Notes may not arrive until it’s too late for them to do much good.

Three, Community Notes on X often rely on information from the independent fact-checking programs that Meta just pulled the funding for. Meta has long said it doesn’t want to be an “arbiter of truth,” but it has funded those arbiters for the past several years, and it’s not clear whether anyone will step up to replace it. If no one does, Community Notes will suffer both on X and on Meta’s platforms.

Most importantly, though, Meta said it will not reduce the distribution of posts that its users have determined to be false. That’s the opposite of the approach it took with fact checks. Until last year, when third-party fact checkers rated a post as false, Meta reduced its distribution significantly. While fact checks themselves seem to have done little to change people’s beliefs, showing people fewer false posts overall seems like an obvious benefit to the quality of Meta’s services. Now, no matter how false or misleading a post is, Meta has committed to showing it to as many people as its ranking systems predict will find it engaging.

When I asked about its rationale, the company told me that having a note attached doesn’t mean that a post is false — only that people thought it needed more context. And even in cases where someone is lying, Meta said it wants to move away from a more “punitive” system, instead showing people additional information and letting them make up their own minds.

“Notes … won’t have penalties associated with them the way fact checks did,” the company said in its blog post today. “Fact checked posts often had their distribution across our platforms reduced. That won’t be the case with posts that have notes applied to them. Notes will provide extra context, but they won’t impact who can see the content or how widely it can be shared.”

To be fair, Twitter took this approach as well — it never downranked content that had Community Notes attached. And notes can result in content being downranked in an indirect way: seeing them generally makes people less likely to interact with a post, which in turn leads platforms to show it to fewer people.

At the same time, Twitter’s Community Notes — like Meta’s fact-checking program — were never intended to be a load-bearing pillar of either company’s content moderation efforts. Rather, they were intended to complement other, more robust efforts, such as relying on internal teams and automated systems to label obvious falsehoods. Meta will continue to use some automated systems and human moderators, though it has said it will focus them on what it considers the highest-severity harms, such as terrorism. It will be up to individual users to do the rest. (The company says “around” 200,000 people in the United States have signed up to participate so far.)
How this will affect the average experience of using Facebook, Instagram, or Threads is still anyone’s guess. Still, it seems likely that people will now see more nonsense than they did before. If all goes well, it will come appended with a note informing them that they’re looking at nonsense.

Meta has expressed confidence that this approach will build trust in its systems. Even if it does, though, it will come at the cost of wasting huge amounts of its users’ time and attention. An earlier generation of leaders at Meta argued, among other things, that filling up feeds with obvious falsehoods made for a bad user experience. But those leaders are gone, and in the place of third-party fact checks you can now find Shrimp Jesus and other AI slop.

There was a moment when Meta seemed to take seriously the idea that its users might want the truth — might want journalism, even. But that belief has been thoroughly bullied out of the company, and there’s no telling what that might mean for the rest of us.

On the podcast this week: Kevin and I sort through what's going on with Apple Intelligence. Then, the Times' Adam Satariano joins us to explain how Starlink became the most powerful telecom in the world. And finally, a new study asks: is AI making us dumber?

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Governing

- The Senate approved Trump nominee Gail Slater to lead the Justice Department’s antitrust division. (David McCabe / New York Times)
- OpenAI asked the Trump administration to help shield AI companies from an increasing number of state regulations, in exchange for sharing their models with the federal government. (Jackie Davalos / Bloomberg)
- Google's proposal for Trump's AI action plan includes weakened copyright and export protections. (Kyle Wiggers / TechCrunch)
- Elon Musk visited the National Security Agency and said it needed an overhaul. Sure, why not. (Alexander Ward, Dustin Volz, and Brett Forrest / Wall Street Journal)
- A look inside what’s left at CISA – where employees are reportedly afraid to discuss the cybersecurity threats they are finding, and where new Trump assignments interfere with existing priorities. (Eric Geller / Wired)
- A look at Caroline Crenshaw, the only Democratic commissioner at the Securities and Exchange Commission, as she voices dissenting views about the agency. (Matthew Goldstein / New York Times)
- A group of companies, including Amazon, Google and Meta, are calling on governments to triple nuclear energy capacity by 2050. (Malcolm Moore, Jamie Smyth and Amanda Chu / Financial Times)
- The Federal Trade Commission said it does have the resources to pursue its Amazon Prime deceptive practices case on schedule, despite having previously requested a delay because of resource constraints. (Annie Palmer / CNBC)
- Trump's FTC will continue the broad antitrust probe of Microsoft that began under previous agency chief Lina Khan. I guess it didn't donate enough to the inauguration fund. (Leah Nylen, Josh Sisco and Dina Bass / Bloomberg)
- Meta won an injunction against former employee Sarah Wynn-Williams, who alleged misconduct at the company in a memoir. An arbitrator ordered her to stop making disparaging remarks, saying they likely violated her non-disclosure agreement with Meta. (Jay Peters / The Verge)
- Meta’s lobbying for laws that require app stores to let parents control their kids’ app downloads is a misguided effort to “offload” its responsibility for online child safety, Google said. The company presented an alternative proposal. (Lauren Feiner / The Verge)
- An investigation into how the AI monitoring tools schools use to keep children safe carry serious security and privacy risks. (Claire Bryan and Sharon Lurye / Associated Press)
- Parents worried about child safety on Roblox should not let their children use it, Roblox chief executive David Baszucki said. On one hand, sure. On the other hand, it reflects platforms' suddenly defeatist attitudes around child safety. (Zoe Kleinman and Georgina Hayes / BBC)
- The UK’s competition regulator said Apple and Google’s mobile browsers make up a duopoly and are “holding back innovation.” (Natasha Lomas / TechCrunch)
- Spain will soon impose massive fines on companies that use AI-generated content without properly labeling it. (Reuters)
- French publishers and authors are suing Meta, alleging that the company used their books to train its AI model without permission. The funniest possible outcome here is that Meta illegally trains its large language models on Sarah Wynn-Williams' book, whose publication it sought to block for violating NDAs. (Benoit Berthelot / Bloomberg)
- An investigation into how new networking apps in India are helping to spread disinformation and hate while operating outside of regulatory scrutiny. (Ayushi Kar, Aditya Anurag and Gayatri Sapru / The Reporters’ Collective)
Industry

- Google updated its affiliate ads policy to prohibit Chrome extensions from taking affiliate revenue from creators they paid for promotion, after creators accused the Honey browser extension of doing so. (Jay Peters / The Verge)
- Google announced Gemma 3, its latest open model for developers, which it describes as the “world’s best single-accelerator model.” (Abner Li / 9to5Google)
- Google DeepMind is launching two new AI models designed for robots, including a vision-language-action model named Gemini Robotics that it says is capable of understanding new situations it hasn’t been trained on. (Emma Roth / The Verge)
- Google Gemini’s Deep Research feature is now available to everyone in more than 45 languages. (Igor Bonifacic / Engadget)
- Gemini can also now automatically analyze a query to see if a user is referring to their Search history. (Emma Roth / The Verge)
- Apple is working on a feature to offer live translations through AirPods. (Mark Gurman / Bloomberg)
- Netflix’s first gaming lead, Mike Verdu, is no longer at the company. (Stephen Totilo / Game File)
- Snapchat is introducing its first video generative AI Lenses, powered by its in-house generative video model. But you have to pay for a super-premium tier of Snapchat to use them. (Aisha Malik / TechCrunch)
- Spotify says it paid more than $10 billion in royalties to the music industry last year. (Anna Nicolaou / Financial Times)
- Niantic Labs, developer of Pokémon Go, is selling its video game division to Saudi Arabia-owned developer Scopely for $3.5 billion. It's going to suck when they behead Pikachu for criticizing the regime. (Jess Weatherbed / The Verge)
- Fleet management company Motive Technologies is the latest to join the increasing number of Silicon Valley AI startups expanding into India. (Saritha Rai / Bloomberg)
- BeReal’s second-largest market is Japan, CEO Aymeric Roffe said, and the company is eager to expand further there. (Chihiro Ishikawa / Nikkei Asia)
- A look at how to get LLMs to help write code. (Simon Willison’s Weblog)
Those good posts

For more good posts every day, follow Casey’s Instagram stories. (Link) (Link) (Link)

Talk to us

Send us tips, comments, questions, and Community Notes: casey@platformer.news. Read our ethics policy here.