Plus: What Nvidia's Move Into $4 Trillion Territory Means
For more than a year, several tech companies have been closing in on becoming the world’s first to hit a $4 trillion valuation, and considering the AI boom, it shouldn’t be a surprise which one crossed that line first. Earlier this month, Nvidia won the crown. The GPU powerhouse is now bigger than 97% of global economies, total defense spending around the world, and the entire cryptocurrency market, writes Forbes’ Derek Saul. Nvidia’s stock has taken off in the last few years; its valuation hit the $1 trillion mark only in 2023. The company owns a whopping 92% of the market for data center GPU chips, according to an IoT Analytics report published in January.

This week, the company’s stock has continued to climb after the company said the Trump Administration reversed its decision to ban exports of Nvidia’s H20 chip to China. Sales of the chip, which was designed specifically for the Chinese market and had been a top seller for Nvidia, were banned in April. Nvidia’s stock hit an all-time high on Thursday morning.

But this growth is about much more than Nvidia, writes Forbes contributor Steven Wolfe Pereira. After all, Nvidia is a hardware company. Investing in hardware is an important first step to developing better and more impactful AI systems, but everything in AI is built on top of it. Nvidia’s growth shows that right now is the best time for companies in all industries to dig into their AI development strategies. Companies should look at how their AI capabilities are progressing compared to competitors’, work toward AI solutions for their own processes, and determine how AI can be used to reshape what they do. After all, Wolfe Pereira writes, the change AI will bring to business is just beginning: IDC has predicted it will drive an economic transformation worth nearly $20 trillion by 2030.

But even as corporate use of AI increases, there’s a huge gap between executives’ excitement about its possibilities and their trust in, and ability to use, the systems. I talked to Cisco President and Chief Product Officer Jeetu Patel about this gap. An excerpt from our conversation is later in this newsletter.
If you like what you read here, you can easily share it online and on your social media pages. This newsletter, and all previous editions of Forbes CIO, can be found on our website here.
In today’s CIO newsletter:
The online privacy case involving executives of Meta was abruptly settled Thursday morning, on the second day of the trial. The suit, which sought $8 billion in damages, was filed in 2018 against leaders of what was then known as Facebook—including CEO Mark Zuckerberg and then-COO Sheryl Sandberg—accusing them of violating Federal Trade Commission rules by sharing user data with third parties without permission. That data sharing led to the Cambridge Analytica scandal. No details were provided about the nature of the settlement, which was reached before top officials and powerful investors—including billionaire venture capitalist Marc Andreessen and former board members Palantir cofounder Peter Thiel and Netflix cofounder Reed Hastings—were called to the stand to testify. The plaintiffs’ lawyer told Reuters the agreement came together quickly. While the settlement ostensibly compensates the shareholders, it means there will be no public airing of Facebook’s policies on personal data use as part of the trial. And while those policies may have changed over the past decade, the issue is far from settled—especially with disparate policies in different regions.
An AI-first web browser race is upon us. Earlier this month, Perplexity released its new Comet browser, which has AI built in. It lets users ask questions and do research in the browser interface, and it includes an AI agent that can help with tasks. It’s currently available only to paid subscribers in a limited test run, but will expand slowly. Forbes contributor Steven Wolfe Pereira writes that the Comet browser represents the fundamental shift toward AI-native processes and platforms for general use, and OpenAI reportedly has its own AI-powered browser in the works. The Comet Assistant—the AI agent embedded in the browser—is available from a sidebar at all times. A TechRadar test of the browser found that the assistant can help users navigate the web of disparate tasks they need to get done, though Wolfe Pereira found that it sometimes hallucinates answers. While it’s not yet widely available, Comet could successfully show the wider potential of AI to everyday users; the most common AI task today has to do with writing, Wolfe Pereira writes. But it could also exacerbate the shift away from the traditional search engine, writes Forbes contributor Tim Bajarin. Google’s Chrome browser helped move traditional search traffic to Google, and Comet could do the same for AI. While traditional search engines still see 34 times more traffic than AI chatbots, according to SEMrush data, chatbot traffic has increased 80.92% year-over-year. Those statistics are also somewhat deceptive; Google has been employing its Gemini AI bot in an increasing number of searches over the past year.
For engineers who are highly skilled with AI, today’s job market is full of high-ticket offers, major—and sometimes unexpected—acquisitions, and dramatic moves. In this week’s edition of The Prompt, Forbes’ Rashi Shrivastava runs down some of the latest moves. Cognition bought what was left of its rival Windsurf, gaining access to the “shell of the company,” including employees, the coding tool Windsurf Editor, and the company’s IP and training data. Meanwhile, Windsurf cofounders Varun Mohan and Douglas Chen and other staff members were poached by Google DeepMind in a $2.4 billion deal to improve Gemini’s coding abilities. But, Forbes’ Richard Nieva writes, for those who work in the tech industry and don’t have top-shelf skills, the job market is becoming increasingly depressing. Job statistics have shown only a slight decline in entry-level coding jobs so far, but many tech companies are being upfront about what the future may hold. Anthropic CEO Dario Amodei has said AI could wipe out half of entry-level white-collar jobs within the next five years. Amazon CEO Andy Jassy said AI will “reduce our total corporate workforce” over the next few years. Earlier this year, Shopify CEO Tobi Lutke told his team in a memo that funds for new hires would only be granted for jobs that can’t be automated by AI. “We’re going from mass hiring to precision hiring,” Ruyu Chen, a postdoctoral fellow at the Digital Economy Lab of Stanford’s Institute for Human-Centered AI, told Nieva. “The superstar workers are in a better position.”
Cisco President and Chief Product Officer Jeetu Patel. Cisco
Cisco’s Chief Product Officer Says AI Security Use Is A Matter Of Trust
AI is being implemented in many business functions, but it is not quite as prevalent in cybersecurity. I talked to Jeetu Patel, who was recently promoted to president and chief product officer at Cisco, about why that is, and he said it’s a question of trust. This conversation has been edited for length, clarity and continuity.

Out of all corporate uses of AI, you’ve said it seems to be lagging in cybersecurity. Why?

Patel: There are two reasons for it. One of them is that the efficacy with AI tends to be low and the costs tend to be prohibitively high. And then the skills shortage has been pretty profound so far. People are just trying to keep their heads above water.

On the efficacy side, what do you think needs to happen?

If you look today, most of the AI work has been done with generic models that have also been used for a multitude of other areas. I feel like there is an opportunity for a much more specialized, bespoke model or set of models that create this notion of what we call super intelligent security. We launched this at [springtime cybersecurity conference] RSA, where we said, ‘Let’s make sure that we have a security-specific model that eventually can do reasoning, that we should open source, that ideally is going to have a very different set of performance characteristics, from both an efficacy perspective and a cost perspective.’ It’s an 8 billion parameter model, which is small. Early benchmarks show it performing [as well as or] better than a 70 billion parameter model, and it can run on a single A100 GPU versus a cluster of 32 H100s, which is what a 70 billion parameter model would typically need.

Why is this important? A huge constraint is having the infrastructure to go out and use AI effectively. The cost is typically pretty high if you go to a company that’s not in the tech space and is a low-margin business. It’s very hard for them to go out and start putting millions of dollars toward GPU costs.

The second thing is we have to secure AI itself. The biggest risk we have is not AI taking our jobs away. The attack surface increases so dramatically that it could have harmful effects that we haven’t fully considered. That requires that you start from the foundation of saying, ‘I’m going to secure AI itself, rather than just use AI for my cybersecurity defenses.’

So what does it take to secure AI?

You need to make sure that you have full visibility into what data is flowing into what models. You can’t really protect something you can’t see. No. 2, you have to then ask, ‘Can we validate these models?’ Because these models, by definition, tend to be unpredictable. We are trying to build very predictable enterprise applications on top of a foundation that’s inherently nondeterministic. We have to validate that these models are working the way you want them to work, and if not, figure out what kind of guardrails you need to put in place so the model doesn’t start behaving in a way that could put you or the company in harm’s way.

How much are businesses thinking about both of these things? I would imagine that the whole business has more of a purview on the first one, but are most even considering the second?

I think the use of AI has been increasing, but not quite at the pace of the growth of AI itself. This is where I think there’s an opportunity. We do readiness indexes and CEO surveys to find out the sentiment of the market.
There was this one stat that’s really stuck with me: 97% of the CEOs that we talked to were really excited about AI. Almost giddy excited, like, ‘Wow, this is going to change our business.’ Only 1.7% of them felt like they were prepared. There’s this big disconnect between people getting excited about the possibility, but not really having a level of preparedness. Then you double-click and say, ‘Why is that the case?’ Three things come to mind. First, they feel like they don’t have the infrastructure know-how to be able to fully prosecute the opportunity of AI. Second, they feel like the safety and security dimension, which is what we call the trust dimension of AI, is still low, and people are too worried about AI to really lean all the way in. We’ve got to solve that. The third one is they just don’t have the right level of skills, especially in cybersecurity. That’s a pretty rampant skill shortage right now. Three to four million jobs go unfilled every year in cybersecurity, and that problem’s only getting worse. I feel like companies are thinking about it, but I think this is still far too complicated. Companies like Cisco are trying to work at simplifying the use of AI in the fabric of what we do.

What advice would you give to a CIO or a CISO who is grappling with this right now?

There are only going to be two classes of companies in the world: companies that get very dexterous with the use of AI, and companies that really struggle for relevance—and you don’t want to be the second kind of company. You have to lean in with AI, and lean in with making sure that people understand that the only way AI adoption scales is if you solve the trust problem. Unlike the past, where security used to be an impediment to adoption, today security might become one of the biggest accelerants to adoption. Because if you don’t trust an AI system, you’re not going to use it. But if you trust it, you’re going to use it. And in order to trust it, you need to make sure that there’s a common substrate of security that’s embedded across all of the models, all the agents, all the applications.

Imagine when you start using these systems where you just ask an AI agent to do the work for you: I’m going out with my spouse for my second anniversary and I need you to book me a movie as well as a restaurant. Make sure that the restaurant’s not too far from the movie, and here’s my credit card. Go ahead and make the reservations and buy the tickets and pick the right seats so the two of us can watch the movie properly. In order for the agent to do that, there’s a technology element, but then there’s a trust element. Do I actually trust the system enough to give it my credit card number and let it autonomously work with other agents, exchange data and come back to me with something?

One of the reasons why this is so important is that the majority of breaches and hacks happen because of identity. People are saying, ‘Why do I need to hack into a system if I can just steal an identity through social engineering and log in?’ That problem gets even more exacerbated as you have many more agents talking to each other. I have an orchestrator agent that’s talking to maybe five other agents. Agent one is booking a restaurant, agent two is booking a movie, and agent three is booking an Uber, and all three of them have to work well together and exchange data.
Every agent is going to need to have an identity, and every identity needs to have a set of permissions that governs what data it can exchange with other agents. So identity is no longer going to be just about a human. It’s also going to be about machine identities, service identities and agent identities.

In my mind, in the next year to two years, the progress of AI will largely be dependent on the progress of security and on people psychologically feeling a level of trust with AI. That trust has to be real, where you have the right underlying technology to keep humans safe and secure. It’s not just about keeping us secure. It’s about making sure that we can unlock human potential if we keep ourselves secure. Now, what [security professionals] are hearing is that they’re responsible not just for managing risk; the degree to which you can unlock human potential will be based on the progress that the security industry makes on managing risk. Adoption will literally accelerate as a function of security and safety being well-formed.
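Patel’s anniversary scenario makes the architecture concrete. As a thought experiment only, here is a minimal sketch in Python of the identity-and-permissions model he describes: each agent carries its own machine identity with explicit permission scopes, and an orchestrator refuses any agent-to-agent data exchange those scopes don’t cover. Every name in it (AgentIdentity, Orchestrator, the scope strings) is a hypothetical illustration, not Cisco’s implementation.

# Hypothetical sketch of per-agent identity with scoped permissions.
# None of these names come from Cisco; they only illustrate the idea.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A machine identity: who the agent is and what it is allowed to do."""
    name: str
    scopes: frozenset  # e.g. {"read:calendar", "charge:card"}

class Orchestrator:
    """Routes data between agents, enforcing each identity's scopes."""
    def __init__(self):
        self._agents = {}

    def register(self, agent: AgentIdentity) -> None:
        self._agents[agent.name] = agent

    def exchange(self, sender: str, receiver: str, scope: str, payload: dict) -> dict:
        # Both sides must hold the scope that covers this exchange.
        for name in (sender, receiver):
            agent = self._agents.get(name)
            if agent is None or scope not in agent.scopes:
                raise PermissionError(f"{name} lacks scope {scope!r}")
        # A real system would also sign, log and audit the exchange here.
        return {"from": sender, "to": receiver, "scope": scope, "data": payload}

orch = Orchestrator()
orch.register(AgentIdentity("restaurant_agent", frozenset({"read:calendar", "charge:card"})))
orch.register(AgentIdentity("movie_agent", frozenset({"read:calendar"})))

# Allowed: both agents may read the calendar to coordinate times.
orch.exchange("restaurant_agent", "movie_agent", "read:calendar", {"evening": "7pm"})

# Blocked: the movie agent was never granted access to the credit card.
try:
    orch.exchange("restaurant_agent", "movie_agent", "charge:card", {"card": "****"})
except PermissionError as err:
    print(err)  # movie_agent lacks scope 'charge:card'

The point of the sketch is that permissions attach to the machine identity itself, not to the human who launched the agents, which is the shift Patel says identity systems will have to make.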
Beauty and skincare company Estée Lauder appointed Aude Gandon as its first chief digital & marketing officer, effective August 1. Gandon joins from Nestlé, where she most recently served as global chief marketing officer.
HR software provider HireRight tapped Lars Ewe as chief technology officer, effective July 16. Ewe previously worked in the same role at DTN, and has also worked in leadership for Anaconda, Evariant, and Click Security.
Primary care provider network Aledade selected Lalith Vadlamannati, Ph.D., to be its chief technology officer, effective July 15. Vadlamannati most recently worked in the same role at Hinge Health, and as a vice president of engineering at Amazon prior to that.
Send us C-suite transition news at forbescsuite@forbes.com.
As you are developing new ideas and applications, how do you know when you’re ready to move them forward? Here are some tips to help you figure out when to test, iterate, pilot and launch new initiatives. In cybersecurity, there are a varie