Plus: Study Shows Workers Are Enthusiastic About AI, But Fear It Will Make Them Obsolete |
| A large majority of employees at U.S. companies know the power that AI can bring to the workplace, but a new study from Ernst & Young shows that their enthusiasm for AI is thoroughly tempered by fears that it could make them obsolete. In a survey of 1,148 corporate staff workers at large companies in a variety of industries, 84% said they are eager to embrace agentic AI in their roles, anticipating positive impacts on productivity and efficiency. But more than half also believe agentic AI will eliminate the need for their position. Fears are higher among rank-and-file employees: 65% worry about their job security, while only 48% of managers feel that way. The EY study is full of more seemingly contradictory numbers, showing that feelings about enterprise AI are extremely complex. While a huge majority—86%—feel that AI agents have already had a positive impact on their team’s productivity and nine in 10 say they’re confident in using AI agents, looking a bit deeper shows that many have personal issues with the technology. More than half of all employees—54%—felt they were falling behind their peers in using agentic AI at work. Six in 10 said they’re overwhelmed by information about what agentic AI can do for them, with nearly two-thirds finding it hard to deal with the amount of new agentic AI tools available to them. More than half of managers are concerned about new duties overseeing agentic AI, and 63% of rank-and-file employees say the challenges of AI agents make them unwilling to seek manager roles. The study shows that both companies and AI providers have made their case for the potential positive impacts AI can bring to the workforce, but the human side of the transformation seems to be lacking. EY recommends investing in better internal communication and training. Employees who receive clear information about their company’s agentic AI strategy are much more likely to embrace it and recognize its value. The study also found that 59% said a lack of AI training is an organizational barrier. “This isn't just a technology rollout; it’s a human transformation that requires intentional support to redefine the partnership between people and AI,” Kim Billeter, EY Global People Consulting leader said in a statement. While companies introduce AI agents, they need to ensure they follow established governance procedures. This can be difficult for both the tech employees building them and employees using them, especially in these early days of this type of tool, said Sanjeev Vohea, chief technology and innovation officer at agentic AI solutions provider Genpact. I spoke with Vohra about how to design and build this governance, and an excerpt from our conversation is later in this newsletter. We are currently accepting nominations for the Forbes CIO Next 2025 list. We’re looking for innovators who have had significant impact both at their own companies and for other tech leaders as a whole. (And yes, you can nominate yourself.) Nominations are accepted until 5 p.m. ET on October 30.
If you like what you read here, you can easily share it online and on your social media pages. This newsletter, and all previous editions of Forbes CIO, can be found on our website here. |
|
| In today’s CIO newsletter: |
|
First Up: This week’s massive AWS outage is resolved, but it exposed old vulnerabilities in the digital ecosystem
|
|
Artificial Intelligence: OpenAI gets into the browser business, but it doesn’t look like it will take market share from Google Chrome just yet
|
|
|
|
Matthias Balk/picture alliance via Getty Images
|
|
| Many websites, apps and services were down at the beginning of the week due to a massive outage at Amazon Web Services on Monday. As reports of issues poured in during the early morning hours on Monday, Amazon identified DNS issues in its DynamoDB service in the US-EAST-1 region, which is located in several data centers in Northern Virginia, writes Forbes senior contributor Kate O’Flaherty. The region powers many applications and services and is central to authentication services, meaning an outage there can trigger a global ripple effect. Forbes senior contributor David Phelan cited BBC reporting that the affected location is the original and still-largest AWS hub, and that its age, size and on-demand capacity make it outage-prone. More than 2,500 companies and services were impacted, according to Downdetector, including Venmo, Instacart, Coinbase, Snapchat, Canva, Duolingo, Roblox, Fortnite, Amazon, British bank Lloyds, Britain’s tax collection agency and British National Rail. By 4 p.m. EDT on Monday, AWS reported all services returned to normal operations. Rob Demain, CEO of cybersecurity mitigation company e2e-assure, told O’Flaherty that the massive outage appeared to be related to a configuration issue, network routing fault, or a DNS error—not a cybersecurity incident. That much is good news, but AWS and cybersecurity experts said to carefully monitor people trying to take advantage of the situation through scams. Forbes senior contributor Christer Holloman writes that the incident is a stark reminder of how relying on one big tech company can be problematic. AWS holds about 30% of the global cloud infrastructure market, meaning a single outage can kneecap about a third of online services. Holloman dives into the financial institutions harmed by the outage. Much of the world’s sensitive financial data is hosted on clouds, which is good for connectivity and convenience—but costly and potentially devastating in the wake of an outage. Many institutions are pursuing multi-cloud solutions to ensure their data is always available, either for their own resilience or based on legal requirements—that are going into effect across Europe. |
|
| OpenAI announced this week that it’s getting into the browser business. The company behind ChatGPT announced its own browser, Atlas, in a livestream on Tuesday. Atlas, which can now be downloaded for MacOS, basically puts ChatGPT at the center of web browsing. The chatbot can see what users are viewing and bring context by “remembering” what a user did online and in ChatGPT, and some ChatGPT subscribers have access to an agent that can do simple tasks, like booking restaurant reservations. Forbes senior contributor John Koetsier writes that it makes sense for OpenAI to launch its own browser. Atlas can improve ChatGPT’s UX by deepening its integration. It keeps users on OpenAI’s tools, and provides OpenAI with more user data—potentially for eventual advertising models. It also provides a captive audience for new agentic tools. But so far, Atlas isn’t exactly set to steal market share from Google’s leading Chrome browser—even though Google’s stock price dipped sharply when Atlas was announced. HSBC analysts said it’s not a threat to Chrome, according to Seeking Alpha; the Atlas browser itself is not unique, and Google Chrome is well positioned to introduce its own built-in AI features. And while some journalists have posted their experiences using the browser for shopping tasks—which so far aren’t quite as streamlined or easy as OpenAI makes it sound—Atlas also has some launch hiccups. Cybersecurity experts told Fortune the entire browser is vulnerable to prompt injection attacks, a claim OpenAI’s CISO conceded in an X post. Meanwhile, the Verge reports OpenAI’s Atlas leader David Fry posted a long list of upgrades they’re already making to the browser, including an ad blocker and an overflow menu for bookmarks. |
|
| As technology evolves, new answers emerge to an old question: What kind of data can the government see from this platform? A recently unsealed federal government warrant shows that the Department of Homeland Security was able to obtain some information about an individual based on their ChatGPT prompts. The warrant was to get information for an investigation into a dark web platform sharing child pornography, writes Forbes’ Thomas Brewer. It’s not clear what kind of information the government was able to get from OpenAI, but the situation sets a precedent for law enforcement’s AI data access. Meanwhile, as TikTok may be getting closer to being controlled in the U.S. by some of President Donald Trump’s allies, it’s quietly made a few changes to its law enforcement policy, making it easier for the social video site to share users’ personal information with the government and regulatory authorities, writes Forbes’ Emily Baker-White. The company did not respond to repeated questions about why it made the policy changes, all of which came at the end of April. It also didn’t respond to direct questions about whether it makes user information available to ICE. After the story was published, TikTok’s Head of Global Corporate Communications, Nathaniel Brown, sent a statement calling it “misleading” and saying it “deliberately distorts and sensationalizes how we handle legal requests.” It would make sense for TikTok to defer to Trump. Baker-White points out that without him, the app would not be online in the U.S. at all. Last year, Congress passed a bipartisan law forcing either the sale of the app to a U.S.-friendly entity or a ban. Trump has put holds on that law since taking office and issued an executive order last month for an ownership deal to go forward. After a very brief blackout around Trump’s inauguration, the app came back to life, featuring videos and notifications praising the president. |
|
 | | Genpact Chief Innovation and Technology Officer Sanjeev Vohra. Genpact |
|
| | | How To Design AI Agents With Built-In Governance |
|
|
|
| AI agents are a new technology that can make work more efficient, but they need to be designed with appropriate governance in mind. I talked to Sanjeev Vohra, chief technology and innovation officer at agentic AI solutions provider Genpact, about how to make this a part of your agents—and whose input is most important. This conversation has been edited for length, clarity and continuity. How should AI agents be designed with governance, either put into the agent itself or as part of policy for the enterprise? Vohra: Before you start putting any kind of a framework or structure, you need to have a policy as a company. Every company has values. It’s a written document in many companies that also is the input to the policy for responsible AI. When you build AI systems, they should follow the same values and principles as your human employees follow as a company. And that makes a very important element to start with. But that also is influenced by a lot of regulations across [AI]. You know how much regulations have evolved in the last five years, so you have to put that criteria to the policy definition stage. I also believe that the team that does that in the company should not be the technical team. [It should be] a multidisciplinary team with legal, HR, corporate functions together, so that they can think about this as a more holistic definition of policy. The risk for the companies who will give this to a technical team would be that they will not develop and think through how to train their AI agent and workforce. What do they want to do in conflicting situations, and how should they say, ‘We don’t have the answer?’ Because if you give it to our technical team, they will always say ‘yes.’ They may not understand the implication of right or wrong, if you may, making decisions. Once you have done that, then the question is putting a framework in place where these policies are baked into the design principles of building the AI system. I can tell you one thing with my personal experiences: If you don’t design a good system and you train the system in the wrong way, you [may] have to abandon the system. It’s very hard to correct a system later on because AI is a learning system, not software. I can't delete a piece of code. There’s no code in AI. You have to be very good in institutionalizing the design itself in the right, correct design. Because if you don’t do that, if you figure out that your agent or AI system is not behaving, the chances for you to correct that is very low. It’s very hard to correct the system. You have to make a new system. When you design an agentic workforce, you have to have to design with people who are going to work with the agents, not the technical workforce. That’s why I keep saying AI is a business challenge, not a technical challenge. The business users like you and I who are going to be using [the systems]— I call them consumers of AI rather than builders of AI—the consumer of AI is more involved in design cycles. There’s a much higher trust, and there's much higher adoption of using them and trusting them in the day-to-day life, and that’s the difference. What do companies need to do at the beginning to implement useful AI agents, and who should oversee them? Many companies have implemented more traditional AI systems. They’re still migrating to more advanced systems. Even if they have started working on responsibility-by-design or accountability-by-design principles, they have a framework around that. The first thing which I keep asking them is: ‘Do you have an inventory of validations? Do you know how agents are working in your company?’ Sometimes there's silence in the room. The most important thing is just having an inventory of all agents: Just like you have your employee record for the company, you have to have the same record for agents. Even an agent workforce as a concept should also have three stakeholders. One is the people who are building these agents. You can have multiple people building, but you need to still have an accountable person in the company. It could be somebody like a chief digital officer, a chief information officer or a chief AI officer: who has a mandate to build these systems or give guidance on how to build [them]. You will have a business who will be using these agents and drawing value from them—sponsors for them. They will know which agents are working or not, and are they drawing enough value from them. Then you also need to involve HR and the policy people. They should know how many agents are working at any point of time, how they’re interacting with humans and the mood of the company. It would really be helpful in the future to have full physical feedback on the agent performance. What advice would you give CIOs trying to increase both responsible AI use and AI agents in their companies? I think they should definitely lean in and spend more time of their own to learn and get the right teams in place. Maybe hire a few people, maybe partner with the right people and create simple models to move forward. Learning is extremely critical in the process. In fact, if you want, you can be busy just listening to people because there could be 100 people pitching you ideas. That’s how the world is evolving, and it can be very confusing and intimidating. If they can, look at going narrow: They can pick which cases they want, then truly build AI systems, which can create business value by using some sort of a responsible AI framework. Not just building one solution, but looking at building a small system, which would have multiple AI agents, [then] experiment[ing] to create automation of a process and seeing the business value. I think it’s all about first experience, and then going after the next because we learn from the first one. The reason why I think everybody has to do their own learning is because every company has been sitting on a large investment in technology, which is custom to themselves. Everybody has their own IT systems, infrastructure vendors, data platforms, application layers, and mobile application and front-end application to [the] customer. Every company stack is different. You can’t just learn from somebody else immediately. You have to spend time and energy to do it yourself. Evolve, as some of the things which look very difficult right now will get better over the next few years. |
|
|
Shipments and logistics provider Saia named Tarak Patel as its executive vice president and chief information officer, effective October 22. Patel joins Saia from Smurfit WestRoc, and he will succeed Rohit Lal, who is retiring and will remain as an advisor.
|
|
Tax services and software provider Ryan appointed Scott Wilson as its senior vice president, chief transformation and strategy officer. Wilson joins the firm after most recently working as chief underwriting and investment officer at The Beneficient Company Group.
|
|
Furniture firm The Lovesac Company tapped Jacob Pat as its chief technology and digital transformation officer, effective October 21. Pat most recently worked as vice president of product at Salesforce following its acquisition of PredictSpring.
|
|
|