Here's this week's free edition of Platformer: a fun investigation into the mysterious case of what happened when a psychedelic rock band left Spotify in protest, only to find that something strange had taken their place on the platform. Want to read more fun tech investigations? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about how Grok's porn companion is rated for children 12+ in Apple's App Store. Plus you'll be able to discuss each day's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice. Or if not that, how about taking our 2-minute audience survey? Hundreds of you have already responded; it's helping us make big plans for the future. You can help here.
You can check out of Spotify any time you like. Whether you can leave, though, is another question.

In July, Australian psychedelic rock band King Gizzard and the Lizard Wizard pulled their music from the streaming platform. The group left in protest of Spotify CEO Daniel Ek leading a €600 million investment in Helsing, a German company that makes military drones and AI tools for weapons systems. "We just removed our music from the platform," the band wrote in an Instagram post. "Can we put pressure on these Dr. Evil tech bros to do better? Join us on another platform."

Over the next several days, most of the band's catalog disappeared from the site. Visit the band's artist page on Spotify today and you'll find a single song — a remix the band did for another artist.

But for some of the band's other tracks, a strange thing happened. Browsing playlists of King Gizzard's songs, some fans noticed that several tracks were still available — sort of. Cue up "Deadstick," a song off the band's 2025 record Phantom Island, and what you hear is a kind of ringtone version of the original.

"Spotify presented this as being the real thing: i.e. same artist name, same song name, same video artwork," Gizzard fan Scott Harvey told me. "And the music is similar. If I didn't know the song already, I may not have known this wasn't the original."

"Deadstick" was not the only track to have been swapped out for its Muzak equivalent. The record's title track, "Aerodynamic," and "Grow Wings and Fly" were also replaced by instrumentals. Until I asked about them, they remained playable on the Phantom Island album page on Spotify, and collectively had more than 10 million streams.

I shared what I found with the band's manager, and will update this piece with a comment from the band should they offer one.

Who is behind the doppelgänger tracks? Harvey noticed something strange when he tapped on "view album" while listening to "Deadstick." It took him to a new album page where the artist name had been replaced with the likely creator of the instrumental track: an entity known as "Jayilor."

Who or what is Jayilor? A web search turns up only playlists on which its music features. And what kind of music does it make? Instrumental, Muzak-style covers of popular songs. One of Jayilor's most popular tracks, with nearly 16 million streams, is an instrumental cover of the Slumdog Millionaire soundtrack hit "Jai Ho."

And if you search for Jayilor on Spotify, you'll find that it has covered all of the King Gizzard songs that remained available to stream there — and that, in fact, when you stream King Gizzard on Spotify today, you are unwittingly streaming Jayilor.

So what exactly is happening here? A Spotify spokesperson told me that because King Gizzard removed its catalog from the platform, the fake stuff "wasn't immediately caught through the usual checks." It's still not clear how tracks attributed to Jayilor managed to show up on the Phantom Island album page. Spotify removed them following Platformer's inquiry.

It's impossible to know what Jayilor was hoping for when it recorded ersatz copies of King Gizzard songs for Spotify. But I can tell you what they sound like: perfect fit content.

Perfect fit content, or PFC, is Spotify's term for what is essentially stock music. As described in Liz Pelly's recent book Mood Machine — which was excerpted at length in Harper's — PFC is Spotify's effort to fill up playlists that are typically listened to as background music with inexpensive alternatives to major-label releases.
The company can make deals with stock music companies in which it does not pay musicians royalties, letting it stream those tracks at a much lower cost than what it pays the major labels. It can be difficult, if not impossible, to spot PFC just by looking at the name of an artist or a playlist. But by 2023, Pelly writes, hundreds of popular playlists — including "Deep Focus," "Ambient Relaxation," and "Cocktail Jazz" — consisted almost entirely of these tracks. And I'd wager that Spotify's popular playlists of covers also rely on PFC. (There are many popular piano covers by best-selling artists, but I don't see any of them on Spotify's "Relaxing Piano Covers" list.)

This helps explain why, if you're Jayilor, covering King Gizzard is worth the effort. At a minimum, it might have expected to find itself on a Spotify playlist of "psychedelic instrumental covers" or something similar. Instead, though, it briefly hit the jackpot — getting swapped in for King Gizzard's work with almost no one the wiser.

To Pelly, the problem with Spotify's move to promote PFC is that it crowds out more important art in favor of bland audio wallpaper — while also making it harder for musicians to make a living. "Some face the possibility of losing out on crucial income by having their tracks passed over for playlist placement or replaced in favor of PFC," she writes. "Others, who record PFC music themselves, must often give up control of certain royalty rights that, if a track becomes popular, could be highly lucrative. But it also raises worrying questions for all of us who listen to music. It puts forth an image of a future in which — as streaming services push music further into the background, and normalize anonymous, low-cost playlist filler — the relationship between listener and artist might be severed completely."

And the King Gizzard case raises an additional cause for concern: even after artists leave a platform, they must remain vigilant about their presence there. Otherwise, some unscrupulous act might capitalize on their popularity by masquerading as them. And an overwhelmed platform might fail to catch it.

As it happens, Spotify is facing a deluge of impersonators and knock-off tracks, many of them created using artificial intelligence. I noted here last month that Spotify has removed 75 million spam tracks from the service over the past year. In September, the company introduced stronger rules against impersonation and a new music spam filter, along with a way for artists to voluntarily disclose their use of AI in making music.

Every platform of Spotify's size has a formidable spam challenge to deal with. And the more prominent the artist, the more they can be assured that the company's teams will intervene swiftly whenever impersonators are discovered. But King Gizzard's experience suggests just how easy it may remain for an impersonator to find fortune in the long tail of Spotify's catalog. At some point, that may undermine trust in the platform overall, especially when, according to a survey of 9,000 people released this month by the streaming service Deezer, 97 percent of listeners can't tell the difference between AI-generated music and the real thing.

The rise of AI slop this year is teaching everyone not to believe what they see with their eyes. Before too long, we may not trust what we hear with our ears, either.

On the podcast this week: Kevin and I talk about Google's Project Suncatcher and the surprisingly intense effort to build data centers in space.
Then, former Trump policy adviser Dean Ball stops by to talk about MAGA's view of AI. And finally, history professor Mark Humphries joins us to explain how he used an experimental Google model — it's very likely Gemini 3 — to do something pretty incredible.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Sponsored

Safer by Thorn is a purpose-built child sexual abuse material (CSAM) and exploitation (CSE) solution powered by trusted data and Thorn's issue expertise. Safer helps trust and safety teams proactively detect CSAM and CSE conversations. Safeguard your platform and users with:

- Proprietary hashing and matching for verified CSAM
- CSAM classification to detect possible novel CSAM
- CSE text classification to detect text that contains or could lead to online child sexual exploitation
- Self-hosted options to help you maintain a privacy-forward stance
Learn how Safer by Thorn can help you mitigate the risk of your platform hosting CSAM or being misused to sexually exploit children.

Following

ChatGPT will keep getting "warmer" until morale improves

What happened: OpenAI released GPT-5.1, a new version of its popular chatbot. In a blog post, the company said the model is "smarter," "warmer," and "more conversational," offering examples of the model taking on a friendlier tone while offering emotional support and answering questions about baseball.

It was an unusual chatbot rollout. Most language model announcements foreground improvements in specific impressive capabilities, like math, coding, or trivia, and post improved scores on commonly used benchmarks. This time, those numbers were relegated to the end of a "GPT-5.1 for developers" post, and showed only modest improvements over GPT-5. Instead, the company focused on qualitative changes and an expansion of its preset tone options, which now let you choose among "Default," "Friendly," "Efficient," "Professional," "Candid," and "Quirky" versions of the model.

In a post on her personal blog, Fidji Simo, OpenAI's head of product, emphasized that the team had been listening to user feedback, and said she wants ChatGPT to feel personal. "Instead of trying to build one perfect experience that fits everyone (which would be impossible), we want ChatGPT to feel like yours and work with you in the way that suits you best."

Why we're following: In a recent private earnings call, OpenAI CFO Sarah Friar told investors that time spent on ChatGPT has gone down "slightly" in response to "content restrictions" the company rolled out in August. It rolled out those restrictions in response to concerns about people becoming emotionally dependent, having delusions affirmed, and even becoming suicidal after interacting frequently with the chatbot. Users also noticed and complained about personality changes in OpenAI's new GPT-5 model, which they found colder and more distant. GPT-5.1 looks like an attempt to win back users with what OpenAI thinks they want: a warmer, friendlier AI that's easy to talk to.

What people are saying: The difference between OpenAI's announcement messaging on X, which was a bit tech-wonk-heavy, and the reaction on Reddit was striking. On X, OpenAI said "It's smarter, more reliable, and a lot more conversational." CEO Sam Altman posted, "I particularly like the improvements in instruction following, and the adaptive thinking."

On Reddit, rather than emotional warmth, many commenters requested fewer content restrictions and more reliable access to OpenAI's old 4o model, which had previously caused concern because it affirmed many users' delusions. A top comment read: "When will the guardrails be softened?" The commenter added that it "Often feels like I'm being gentle-parented by a machine," which they found odd "when the majority of other AI chat models do not have such narrow definitions of what is acceptable to talk about." (What models are they talking to?)

OpenAI research engineer Johannes Heidecke replied, "We are working on more precise safeguards that don't over-trigger. We understand the frustration and impact of overly-strict safeguards." But they are there for a reason, he said: "We want our models to respond safely and empathetically in sensitive situations."

One reply said the safeguards made "mental health conversations in general" more difficult.
"I say 'man I'm fucking done' once and it won't stop telling me to call a hotline for the next ten messages, even when I tell it I'm safe and not going to do anything to myself." The user found this disheartening: "Kind of just makes me feel worse honestly, like even the AI thinks I'm too much? Damn."

—Ella Markianos

Side Quests

Seven former FCC members representing both parties said current chair Brendan Carr is improperly using a policy against broadcasters critical of President Trump. He is!

How the AI Cold War between the US and China will redefine power.

Videos of horrific ICE raids generated by Sora are going viral on Facebook.

OpenAI is fighting a court order that required it to turn over 20 million ChatGPT chat logs.

ChatGPT violated copyright law in Germany by using a musician's lyrics to train its models, a court ruled.

Meta's chief AI scientist Yann LeCun is reportedly leaving to found a startup.

WhatsApp will face stricter content moderation rules in the EU.

Meta released a new open source multilingual ASR system.

Threads is rolling out features aimed at podcasters.

Facebook Marketplace is getting a slew of new features.

The UK will allow AI models to be proactively tested for the ability to create CSAM before they are released.

A new AI tool helping people find ways to object to planning applications for new homes could halt the UK planning system, experts warn.