Nvidia to focus on competition-beating AI advances at megaconference
The Hindu
Nvidia GTC, as the conference is known, has become CEO Huang’s preferred event to show off Nvidia’s AI advances
When Jensen Huang strides onto the stage of a packed hockey arena to kick off Nvidia’s annual developer conference on Monday, he is likely to reveal products and partnerships geared toward keeping the AI chipmaker atop a growing array of competitors.
Taking over the heart of Silicon Valley for most of a week, Nvidia GTC, as the conference is known, has become CEO Huang’s preferred event to show off Nvidia’s AI advances in chips, data centres, its chip programming software CUDA, digital assistants known as AI agents, and physical AI such as robots.
This year, the four-day event is even more crucial as investors will seek assurance that Nvidia’s strategy of plowing back its profits into the AI ecosystem is paying off. “I expect Nvidia to present a full-stack roadmap update from Rubin to Feynman while emphasizing inference, agentic AI, networking, and AI factory infrastructure,” said eMarketer analyst Jacob Bourne, using the names for Nvidia’s current and forthcoming generations of chips.
Nvidia’s chips sit at the center of hundreds of billions of dollars in investments in data centres by governments and companies around the globe, but the company is facing competition from other chipmakers and even from some of its customers who are developing their own chips.
Analysts told Reuters they expect the overall AI chip market to keep growing, but Nvidia’s slice to shrink somewhat as the market shifts rapidly toward inference, in which AI agents scurry back and forth among computer applications carrying out tasks on behalf of humans. That is a change from training, in which AI labs link many Nvidia chips together into one computer to chew through huge amounts of data and perfect their AI models.
Those agents are expected to become so numerous that the humans directing them will need a new tier of AI middle managers, what technologists call an “orchestration” layer, to sit between human users and their fleets of agents.