AI models take on EQ, the Iowa Writers’ Workshop, and mountain climbing |
Hello and welcome to Eye on AI. In today’s edition…AI creeps further into human territory; Google’s $3 billion investment in Anthropic is revealed; Google and Cohere each release new models designed for efficiency; and French publishers and authors sue Meta for copyright infringement.
AI assistants designed to free us from the burden of mundane tasks have taken center stage, but they’re far from the only types of AI models companies are working on. There’s also been steady effort going into AI technologies designed to do things people have historically wanted to do and which had, until very recently, felt like uniquely human abilities.
This week has seen a few such models released or teased. Alibaba unveiled a model it says can more accurately read human emotions. Sam Altman said OpenAI trained a model that excels at creative writing and that it was the first time he has been “really struck by something written by AI.” An AI model to assist humans with walking—or more specifically, with summiting a mountain—also recently debuted in the Chinese province of Shandong: Tourism officials made AI-powered exoskeleton legs available to rent at the province’s famous Mount Tai, so that anyone can climb the usually grueling 4,000 feet to the top with ease.
These technologies are all in their early stages, but together, they provide a fascinating look at the different facets of society and the human experience companies are looking to augment with AI.
AI gets emotional
Various companies, including Affectiva, Beyond Verbal, and Hume AI, have been developing AI products designed to decipher human emotions. This week, Alibaba unveiled its new emotion-reading model, called R1-Omni. The model is open source, so anyone can download and use it for free.
Researchers from the company’s Tongyi Lab describe in a paper how the model’s superior reasoning abilities enable it to decipher human emotions with more nuance and accuracy than prior models. To demonstrate this, the researchers showed the model (as well as competing models) film clips and asked it to describe the emotions the people pictured were feeling.
In one example, the model described the emotions of Jonah Hill’s character in a scene from the movie This Is The End, evaluating both visual and audio cues. “His facial expression is complex, with wide eyes, slightly open mouth, raised eyebrows, and furrowed brows, revealing surprise and anger. Speech recognition technology suggests that his voice contains words like ‘you’, ‘lower your voice’, ‘freaking out’, indicating strong emotions and agitation. Overall, he displays an emotional state of confusion, anger, and excitement,” the model determined. Having viewed the clip, I would mostly agree. Hill’s character is “excited” in the sense of being emotionally elevated and agitated, but not in the positive sense the word often carries. Perhaps another descriptor would’ve been better there.
While the paper doesn’t get into potential use cases, it’s easy to imagine applications ranging from marketing to surveillance by law enforcement, employers in the workplace, or teachers in a classroom. Affectiva, for example, advertises use cases including ad targeting, behavioral research, analyzing audience reactions, and the ability to understand what’s going on inside a vehicle (the last being, presumably, the main reason Smart Eye acquired Affectiva in 2021).
Will AI write the next great novel?
So far, the general consensus has been that while AI models are fine at writing business emails, they’re not so great at creative writing. In an X post, OpenAI CEO Sam Altman confirmed this is something the company is working on, sharing that it trained a model that is “good” at creative writing and “got the vibe of metafiction so right.” He said he doesn’t know when OpenAI will release it, but he shared a metafictional short story about AI and grief written by the model.
I’d say the writing was OK and didn’t have a strong reaction to it either way, but I honestly don’t read a lot of short fiction and so don’t have a great basis for comparison. Still, whenever confronted with an AI designed to create art, a practice I believe exists for human expression, I do wonder why we need it. Commenters on Altman’s post echoed this sentiment, with some asking what the point is and suggesting he stop pushing AI into artistic spaces.
“As soon as someone knows what they’re reading is written by AI they lose all emotional interest. We consume human curated content because we want relationships with humans, or at least, the experience of human interaction,” read one reply.
It also strikes me as a bit ironic. AI leaders like Altman proselytize that AI will take over work that is uninteresting to humans so we can do things that are more creative and meaningful. Yet, they’re designing AI technologies to do those uniquely and meaningfully human things, too.
First AI came for our brains. Then our legs.
Exoskeletons that can assist humans with walking have obvious benefits for people with mobility issues. The AI-powered exoskeletons recently trialed with tourists at China’s Mount Tai, however, were geared purely toward making the trek easier. The robotic device cinches around the wearer’s waist and thighs and, using AI algorithms, senses their movements to provide “synchronized assistance.” Users said it felt as if the device were pulling their legs uphill.
As an avid hiker myself, I was fascinated thinking through the implications and reading other outdoor enthusiasts debate them online. On one hand, these exoskeletons could threaten what they love: removing the camaraderie that comes with having to work hard to get to the top, turning peaks into tourist traps reachable by anyone willing to pay for robotic assistance, and infusing technology into some of the few places left to escape it. On the other, they could grant people with disabilities greater access to the outdoors, and even help those same skeptical hikers keep doing what they love long after their bodies would otherwise allow.
It’s another use case that sparks interesting questions about how we want to integrate AI into our human experiences. Do we really want to outsource something as innate to our bodies as walking just because we can? At the same time, creating technologies to make life easier is exactly what humans have done all along.
“The technologies that we use to do things are constantly changing,” James Hughes, a researcher and futurist at the University of Massachusetts, Boston, told me. “We have always been technological beings for 100,000 years, and so for people to say, oh, it should stop with this technology, not that next one, it’s just ahistorical.”
And with that, here’s more AI news.
Sage Lazzaro sage.lazzaro@consultant.fortune.com sagelazzaro.com
AI: Speed matters more, scale matters less, innovation matters most As businesses embrace AI-driven models, they’ll need to rethink everything from workforce strategies to innovation processes. Critical shifts in strategy will emphasize speed more, scale less and innovation most of all. The time to embrace AI is now. Read more
Google owns a 14% stake in Anthropic, legal filings reveal. The tech giant is set to invest another $750 million in Anthropic in September through a convertible debt loan, bringing its total investment to $3 billion. The stake doesn’t give Google much control over the startup: Google holds no board seats, voting rights, or observer rights. (Google may have preferred this arrangement to avoid drawing antitrust scrutiny.) The investment, uncovered through legal filings obtained by the New York Times, does, however, show how Google is spreading its AI bets beyond its own research and products, as well as how entangled the leading AI startups are with big tech. Anthropic is also closely tied to Amazon, which has invested $8 billion in the startup and is its primary cloud provider.
Google and Cohere each release new models designed for efficiency. Gemma 3, the latest in Google’s series of fast, efficient models designed to run directly on edge devices such as laptops or perhaps even mobile phones, runs on a single GPU. The company is touting it as the “world’s best single-accelerator model.” Today, AI startup Cohere also released a small-footprint model called Command A. The company says it runs on two H100s with a 256k context length, whereas DeepSeek-V3 requires eight H100s and offers only a 128k context length. Since DeepSeek released its R1 model, the industry has begun questioning how many chips are truly needed to create and run high-performing models, and we’re likely to see a lot more models touting efficiency in the coming months. You can read more on Gemma 3 from Fortune’s David Meyer and on Cohere from BetaKit.
OpenAI argues it should be exempt from state-level AI regulation and copyright concerns because: China. David Meyer also has this story on OpenAI's proposals to the White House advisors tasked with drafting the Trump administration's "AI Action Plan." The AI company invokes the threat of the U.S. losing a technological arms race with China in calling for federal policies that would exempt it from having to comply with a patchwork of state-level AI regulation. It also uses the threat of China leap-frogging U.S. AI companies at the frontier of AI model development to ask the administration to alter U.S. copyright rules to explicitly allow American AI companies to train their models on copyrighted material.
French publishers and authors sue Meta for training AI models on their books without consent. Three trade groups are involved in the lawsuit and claim they have evidence of “massive” breaches of copyright. It’s yet another legal challenge to AI companies’ use of creative works in their training data. Many of the AI companies claim their use of copyrighted materials falls under the “fair use” doctrine. Meta has not commented on the lawsuit. You can read more from Bloomberg.
March 17-20: Nvidia GTC, San Jose
April 9-11: Google Cloud Next, Las Vegas
April 24-28: International Conference on Learning Representations (ICLR), Singapore
May 6-7: Fortune Brainstorm AI London. Apply to attend here
May 20-21: Google I/O, Mountain View, Calif.
July 13-19: International Conference on Machine Learning (ICML), Vancouver
$11.6 million
That’s how much Voyage AI’s recent sale to MongoDB equates to per employee. According to CB Insights data on 2025 first-quarter acquisitions, smaller tech companies are fetching big price tags. Tech companies acquired for $100 million or more so far this year had just 100 employees at the median, compared to averages between 140 and 310 employees throughout last year (and similar numbers in prior years).
Voyage AI had the highest valuation-to-employee ratio, selling for $220 million with only 19 employees. And it wasn’t the only tiny AI team to reap a big windfall: other such sales so far this year include AI chip company Kinara and real-time facial recognition company Oosto.
Thanks for reading. If you liked this email, pay it forward. Share it with someone you know.